IL264714A - Video geolocation - Google Patents

Video geolocation

Info

Publication number
IL264714A
Authority
IL
Israel
Prior art keywords
image
motion
sensor
platform
scene
Prior art date
Application number
IL264714A
Other languages
Hebrew (he)
Other versions
IL264714B (en)
Original Assignee
Raytheon Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raytheon Co filed Critical Raytheon Co
Publication of IL264714A publication Critical patent/IL264714A/en
Publication of IL264714B publication Critical patent/IL264714B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Description

VIDEO GEOLOCATION
BACKGROUND
[0001] This application generally relates to image processing, and more particularly, to identifying and correcting errors in pointing solutions for persistent observation sensors.
[0002] There is a desire to collect persistent video (i.e., multiple image sequences) of a target from overhead platform-based (e.g., airborne or space-based) sensors that can easily be viewed, and/or interpreted, via displays. This may be especially important for military personnel and/or for other persons using portable devices that may have limited processing capabilities.
Conventional persistent video sensors generally stay fixed to (or focus on) a single point, for instance, on the ground, while the overhead platform is in motion.
[0003] The motion of the platform, however, causes changes in scale, perspective (e.g., parallax), rotation, and/or other changes in viewing geometry. These changes can complicate or prevent human and/or machine interpretation of targets, features, and threats. Conventional persistent video relies on human interpretation to ignore changes in the measured scene that result from platform motion and/or imperfect sensor staring.
[0004] Prior approaches that attempt to correct for errors in pointing solutions have included very computationally intensive and laborious techniques for iteratively determining platform location and sensor boresight pointing. U.S. patent no. 8,471,915, issued June 25, 2013, entitled "Self-Correcting Adaptive Long-Stare Electro-Optical System", discloses techniques for calculating transformations to prevent image frame distortion caused by a relative motion between the scene and the imaging platform, and preventing geometric differences from manifesting as smear within an integration time, thus preventing intra-frame distortion. However, this system relies upon controlling an optical element based on the transformation to prevent the image distortion, and may require more computations for intra-frame motion prevention.
[0004A] US 9,294,755 discloses an imaging platform that minimizes inter-frame image changes when there is relative motion of the imaging platform with respect to the scene being imaged, where the imaging platform may be particularly susceptible to image change, especially when it is configured with a wide field of view or high angular rate of movement. In one embodiment, a system is configured to capture images and comprises: a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels; and an image processor configured to digitally transform captured images with respect to a common field of view (FOV) such that the transformed images appear to be taken by a non-moving imaging platform, wherein the pixel size and orientation of pixels of each transformed image are the same. A method for measuring and displaying 3-D features is also disclosed.
[0004B] US 2012/0307901 discloses a system and method for processing images of a scene captured by an imaging platform that include a correction processor configured to determine a plurality of coefficients associated with transformations that substantially correct expected inter-frame changes in the images caused by relative motion between the scene and the imaging platform; a transformation processor configured to transform the captured images using the plurality of coefficients and transformations so as to substantially correct said expected inter-frame changes; and a module configured to store the plurality of coefficients in image metadata associated with the images.
[0004C] US 2013/0216144 discloses a system, a method, and a computer readable medium having instructions for processing images. For example, the method includes receiving, at an image processor, a set of images corresponding to a scene changing with time; decomposing, at the image processor, the set of images to detect static objects, leaner objects, and mover objects in the scene, the mover objects being objects that change spatial orientation in the scene with time; and compressing, using the image processor, the mover objects in the scene separately at a rate different from that of the static objects and the leaner objects for storage and/or transmission.
[0005] Thus, systems and methods for providing feedback as to whether an electro-optical/infrared sensor is staring perfectly are desired without the aforementioned drawbacks. For example, a system that can determine whether errors exist in the sensor pointing solution, that may facilitate identification of one or more root cause(s) of such errors (e.g., biases in gimbal angle, trajectory error (particularly height), etc.), and that can improve image quality by correcting such errors instantly and in future image acquisition in applications which are particularly susceptible to inter-frame changes (e.g., imaging platforms having a wide field of view and/or high angular rates of movement with respect to the ground) would be greatly appreciated.
SUMMARY
[0006] According to one or more embodiments, closed-loop systems and/or methods are provided that enable image frames to be captured by a moving platform-based sensor and to be displayed and/or processed as if the platform motion never occurred and as if no geolocation errors (e.g., sensor gimbal angle, trajectory error including altitude, etc.) were present. In addition, the system and method can provide feedback (e.g., for pointing solution calibration purposes) on whether an electro-optical/infrared sensor is staring perfectly at a point on the Earth, and help determine the root cause of errors in the imaging process. The identified errors may be fed back to the host system to enable perfect staring in future image acquisition and/or improved "freezing" of imagery for enhanced signal to noise ratio (SNR). This greatly facilitates and simplifies both human and machine target recognition when displayed.
[0007] In one embodiment, a system is provided for pointing error corrected imaging by a movable imaging platform including an imaging sensor (e.g., a focal plane array sensor configured to point at a constant point on Earth). The pointing error may include sensor angular pointing errors, errors in knowledge of scene mean altitude, or platform altitude knowledge errors. One or more imaging processors may be configured to receive frames of a scene captured by the imaging sensor, wherein each frame comprises a plurality of scene pixels. The captured frames may be digitally transformed with respect to a common field of view (FOV), applying one or more transformations that compensate for apparent motion in the captured frames induced by relative motion between the scene and the movable imaging platform, such that the pixel size and orientation of pixels of the digitally transformed frames are the same. The processor(s) may then calculate any motion residuals, comprising any apparent motion remaining in the digitally transformed frames, based on inter-frame scene gradients between the digitally transformed frames. If any motion residuals are determined to remain in the digitally transformed frames, the processor(s) may fit a set of image eigenfunctions to the calculated motion residuals, in order to compute residual transformation coefficients representing a pointing error of the imaging sensor. The processor(s) may then apply the set of image eigenfunctions scaled by the residual transformation coefficients to the digitally transformed frames to compensate for the pointing error, and output the compensated digitally transformed frames.
[0008] In another embodiment, the imaging processor(s) may compare the computed residual transformation coefficients to residual transformation coefficients previously computed and stored in a database of motion residuals, in order to determine one or more causes of the pointing error (as described below). In certain embodiments, the imaging processor(s) may previously populate the database with residual transformation coefficients based on known or expected relative motion of the platform to the scene and on a known or expected pointing angle.
[0009] In another embodiment, the imaging processor(s) may correct the transformations applied to future image acquisitions based on the computed residual transformation coefficients.
[0010] In certain embodiments, only linear transformations are needed as image eigenfunctions to successfully identify the pointing error(s). However, in alternate embodiments the calculated motion residuals are compared to a selected or defined threshold value, and if the motion residuals exceed the threshold, additional eigenfunctions may be utilized, including rotation, scale, anamorphic stretch, skew and/or jitter.
[0011] In yet another embodiment, the imaging processor(s) may identify in the captured frames information representing one or more moving targets, remove that truly moving target information from the captured frames prior to digitally transforming the captured frames, and later add the information back in to the compensated digitally transformed frames.
[0012] In other embodiments, the digital transformations comprise homography functions or eigenfunctions scaled by coefficients computed based on a known trajectory of the movable imaging platform and a known imaging sensor pointing angle relative to the scene being imaged.
[0013] In other implementations, methods may be provided for pointing error compensated imaging by performing some or all of the processing steps described above as performed by one or more image processors.
[0014] In yet another implementation, a non-transient computer readable medium may be provided having stored therein program instructions that, when executed by one or more processors, cause the processor(s) to provide for pointing error compensated imaging by performing some or all of the processing steps according to any of the methods described above.
[0015] These and other features and advantages of the system and method will be apparent from this disclosure. It is to be understood that the summary, drawings, and detailed description are not restrictive of the scope of the inventive concept described herein.
BRIEF DESCRIPTION OF THE FIGURES
[0016] The foregoing and other objects, features and advantages will be apparent from the following, more particular description of the embodiments, as illustrated in the accompanying figures, wherein like reference characters generally refer to identical or structurally and/or functionally similar parts throughout the different views. The figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments, wherein:
[0017] FIG. 1A shows an imaging platform and its initial field of view (FOV);
[0018] FIG. 1B shows changes between the initial FOV and a subsequent FOV;
[0019] FIG. 2A shows an imaging platform and its initial FOV about a staring point;
[0020] FIG. 2B shows a subsequent FOV due to the movement of the imaging platform between the initial and subsequent imaging time;
[0021] FIG. 3A shows an imaging platform and its initial FOV;
[0022] FIG. 3B shows a change in scale of a subsequent FOV of the imaging platform due to movement of the imaging platform toward the area being imaged;
[0023] FIG. 3C shows a perspective view of an exemplary use environment of an imaging platform relative to Earth;
[0024] FIG. 4A shows an imaging platform as both its altitude and angle from the zenith are reduced;
[0025] FIG. 4B shows a subsequent FOV scaled in both the X and Y-directions due to the reduction in altitude and zenith angle;
[0026] FIG. 5A shows an imaging platform as it approaches the reader in a direction perpendicular to the plane of the page;
[0027] FIG. 5B shows a subsequent FOV due to skew;
[0028] FIG. 6A shows a vector field of an exemplary transformation comprising a skew;
[0029] FIG. 6B shows a vector field of an exemplary transformation comprising linear X-motion;
[0030] FIG. 6C shows a vector field of an exemplary transformation comprising a linear Y-motion;
[0031] FIG. 6D shows a vector field of an exemplary transformation comprising a rotation;
[0032] FIG. 6E shows a vector field of an exemplary transformation comprising a change in scale;
[0033] FIG. 6F shows a vector field of an exemplary transformation comprising an anamorphic stretch;
[0034] FIG. 7 shows a schematic of an exemplary system for identifying and correcting errors in pointing solutions for persistent observation sensors;
[0035] FIG. 8 shows an exemplary schematic for processing performed by a system in accordance with an embodiment;
[0036] FIGS. 9A-9C show examples of intended optical flow, actual optical flow, and residual optical flow, respectively;
[0037] FIGS. 10A and 10B are plots illustrating experimental simulated testing results of a model of a satellite imaging platform over the Earth;
[0038] FIG. 11 is a plot of example simulation results showing differences in eigenfunction coefficient amplitudes obtained between an ideal case and an induced error case;
[0039] FIGS. 12A through 12C illustrate simulated frames of video data of an exemplary scene as if rendered by a sensor from an airborne imaging platform; and
[0040] FIGS. 13A through 13E are plots of experimental simulation test results.
DETAILED DESCRIPTION
[0041] In the description that follows, like components may be given the same reference characters, regardless of whether they are shown in different examples. To illustrate an example(s) of the present disclosure in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form. Features that are described and/or illustrated with respect to one example may be used in the same way or in a similar way in one or more other examples and/or in combination with or instead of the features of the other examples.
[0042] A system configured to capture images may include a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels, and one or more image processors for executing instructions for practicing the techniques described below. One technique involves the digital transformation of captured images with respect to a common field of view (FOV) so as to "freeze" the imagery. The pixel size and orientation of the pixels of each transformed image are the same in the common FOV.
[0043] The images may include, for example, video images and/or multiple intermittent still images, collected by a sensor. In one or more implementations, the sensor may be a camera. The frame rate for video may be, for example, 30 frames per second (fps) or 30 Hz. Frame rates may also be higher, such as, for example, 60 fps. Image frames may be digital data and include a plurality of pixels, whether supporting various colors (e.g., red-green-blue (RGB) or cyan-yellow-magenta-black (CYMK)) or monochrome, and that are of sufficient resolution to permit a viewer to appreciate what is depicted therein. For example, the resolution may be 480 pixels in both width and height, or greater, such as 640×480, 800×800, or 1024×768, for example. Other resolutions (e.g., smaller and larger) are also possible.
[0044] U.S. patent no. 9,294,755, issued March 22, 2016, entitled "Correcting Frame-to-Frame Image Changes Due to Motion for Three Dimensional Persistent Observations" (the '755 patent), showed that a priori information of platform trajectory and sensor pointing solution could be used to implement scene-wide transformations to enable "freezing" of imagery for enhanced signal to noise ratio (SNR) and motion detection. Satellites and aircraft have very precise knowledge of their location and are equipped with precision pointing systems, yet this knowledge may contain errors that detract from such techniques being able to ensure persistent staring at a point on the Earth. In situations where the system may not be making correct compensation measurements of inter-frame geometric changes, due to such errors, residual image eigenfunction transformation coefficients (also referred to herein interchangeably as "Eigen coefficients") may be calculated based on inter-frame scene gradients between the digitally transformed frames, and the changes in these coefficients trended in order to estimate sensor pointing errors.
[0045] FIGS. 1-6 illustrate image change problems due to a moving imaging platform-based sensor. As mentioned above, persistent image and video sensors generally stay fixed to (or stare at, or focus on) a single point being tracked, for instance, on the ground, while the overhead imaging platform is in motion. However, motion of the platform and pointing solution errors (e.g., sensing system gimbal errors, altitude errors, etc.) can cause changes in scale, perspective (e.g., parallax), rotation, and/or other changes in viewing geometry. These changes can complicate or prevent human and/or machine interpretation of targets, features, and threats.
[0046] FIG. 1A shows imaging platform 105 (in this case, a satellite), having initial field of view (FOV) 110, capturing images while gazing at staring point 115 with a pointing solution. An initial image is sensed at initial detector points (e.g., pixels) (shown as open circles). However, in a subsequent image, the FOV of imaging platform 105 may change due to relative movement between the scene and imaging platform 105.
[0047] FIG. 1B shows that due to the motion of imaging platform 105 a subsequent FOV 120 is no longer coextensive with initial FOV 110 in a later image capture. For instance, while it is possible to align (center) staring point 115, the detector points (shown as darkened circles) are shifted with respect to the initial detector points. As a result, an image, or a composite image formed by combining images, may be blurred.
[0048] FIGS. 2A-5C show examples of physical motions which may cause image change.
FIG. 2A, for example, shows initial FOV 110 as imaging platform 105 stares at point 115 while the platform moves at velocity V. FIG. 2B shows a change of subsequent FOV 220 due to the overall motion.
[0049] The changes in the size and orientation of the FOV are decomposed into a series of eigenmodes.
[0050] FIG. 3A shows initial FOV 110 as the altitude of imaging platform 105 is reduced.
FIG. 3B shows scale changes of subsequent FOV 320. In this example, the change in scale is equal in both the horizontal and vertical directions since imaging platform 105 moves directly toward FOV 110. However, in general, the change in scale may be different along each axis.
Changes in scale of the FOV also result in changes in the mapping of individual image pixels to the scene. FIG. 3C illustrates an additional perspective of the viewing geometry between imaging platform 105 and staring point 115 on the surface of the Earth 120. While global positioning systems (GPS) may provide very accurate information about the distance of imaging platform 105, measured from the center of the Earth 120, inaccuracies (e.g., due to non-uniform Earth surface elevation, etc.) in the relative distance between imaging platform 105 and the staring point 115 on the surface of the Earth may introduce residual errors in imagery not effectively compensated by previous techniques based on a priori platform motion information alone. In this example, which is the basis for simulation testing described below, imaging platform 105 is initially positioned along the x-axis (e.g., at an altitude of 400 km above a spherical Earth with radius 6371 km) and has an initial velocity V in the positive y-axis direction. FOV 110 is shown projected onto Earth pointed northward with a 5° nadir angle.
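By way of a non-limiting illustration (not part of the original disclosure), the viewing geometry of FIG. 3C can be sketched numerically as follows. The Python sketch below uses the altitude, Earth radius, and nadir angle quoted above; the coordinate conventions and function names are illustrative assumptions only.

    import numpy as np

    R_EARTH = 6371.0   # km, spherical Earth radius from the example above
    ALTITUDE = 400.0   # km, platform altitude from the example above

    def ground_intersection(nadir_angle_deg, azimuth_deg=0.0):
        """Return the point (km, Earth-centered axes) where the boresight meets the sphere."""
        sat = np.array([R_EARTH + ALTITUDE, 0.0, 0.0])   # platform on the x-axis, as in FIG. 3C
        nadir = -sat / np.linalg.norm(sat)               # unit vector toward the Earth's center
        north = np.array([0.0, 0.0, 1.0])                # local "north" for azimuth = 0
        east = np.cross(nadir, north)
        a, b = np.radians(nadir_angle_deg), np.radians(azimuth_deg)
        d = np.cos(a) * nadir + np.sin(a) * (np.cos(b) * north + np.sin(b) * east)
        # Solve |sat + s*d| = R_EARTH for the nearest positive slant range s.
        p = np.dot(sat, d)
        s = -p - np.sqrt(p * p - (np.dot(sat, sat) - R_EARTH ** 2))
        return sat + s * d

    staring_point = ground_intersection(5.0)   # 5 degree nadir angle, pointed "northward"

Perturbing the nadir angle or the assumed altitude in this sketch shifts the computed staring point; residual shifts of exactly this kind are what the residual-fitting steps described later are intended to detect.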
[0051] FIG. 4A shows imaging platform 105 approaching both the zenith and the area being imaged. FIG. 4B shows an anamorphic scale change of subsequent FOV 420. In particular, subsequent FOV 420 is scaled in both the X and Y directions due to the reduction in altitude of imaging platform 105. Further, subsequent FOV 420 is scaled in the Y-direction more than in the X-direction because line-of-sight 425 remains perpendicular to the X-axis while angle 430 changes with respect to the Y-axis due to the change in zenith angle.
[0052] FIG. 5A shows imaging platform 105 having line-of-sight 525 moving with velocity V (i.e., approaching the reader in a direction perpendicular to the plane of the page). FIG. 5B shows initial FOV 110 and subsequent FOV 520 caused by skew change. FIG. 6A shows an alternative depiction of skew as a vector field. The lengths of the vectors correspond to magnitudes of the displacement from the line of sight.
[0053] These and other detected inter-frame image changes due to movement of the imaging platform-based sensor may be initially corrected as a first step using the imaging system and method described herein, in one or more embodiments, which digitally transforms successive images with respect to a common FOV such that the successive images appear to be viewed from the same non-moving platform. The pixel size and orientation of pixels of each transformed image are the same or common. After transformation, the scene may contain residual motion that can then be measured and used to compute and correct pointing errors.
[0054] FIGS. 6A-6F show vector fields associated with various eigenmode change transformations for providing the stationary view. In particular, they illustrate skew (FIG. 6A), linear motion in the X-direction (FIG. 6B), linear motion in the Y-direction (FIG. 6C), rotation (FIG. 6D), scale or gain (FIG. 6E), and anamorphic stretch (FIG. 6F), respectively, which may be performed by the imaging system (and method) according to embodiments.
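As a non-limiting aid to understanding (not part of the original disclosure), the six eigenmode vector fields of FIGS. 6A-6F can be written down directly. In the following Python sketch the normalization, grid size, and dictionary keys are illustrative assumptions; a scene-wide warp is then simply a linear combination of these fields scaled by the Eigen transformation coefficients.

    import numpy as np

    def eigenmode_fields(nx=16, ny=16):
        """Return (dx, dy) displacement fields, one per eigenmode, on a normalized grid."""
        x, y = np.meshgrid(np.linspace(-1.0, 1.0, nx), np.linspace(-1.0, 1.0, ny))
        zero = np.zeros_like(x)
        return {
            "x_translation": (np.ones_like(x), zero),   # FIG. 6B: linear X-motion
            "y_translation": (zero, np.ones_like(y)),   # FIG. 6C: linear Y-motion
            "rotation":      (-y, x),                   # FIG. 6D: rotation about the center
            "scale":         (x, y),                    # FIG. 6E: scale (gain)
            "anamorphic":    (x, -y),                   # FIG. 6F: anamorphic stretch
            "skew":          (y, zero),                 # FIG. 6A: x-shift proportional to y
        }

    def combined_field(coeffs, fields):
        """Linear combination of eigenmode fields scaled by their coefficients."""
        dx = sum(c * fx for c, (fx, fy) in zip(coeffs, fields.values()))
        dy = sum(c * fy for c, (fx, fy) in zip(coeffs, fields.values()))
        return dx, dy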
[0055] FIG. 7 shows a schematic of an exemplary imaging system 700 for residual geolocation error root cause identification and/or pointing solution correction for 3-D persistent observations, according to an embodiment.
[0056] System 700 captures one or more images of scene 705 via sensor optics 710, which may comprise multiple reflective and/or transmissive lens elements. Images of scene 705, as modified by sensor optics 710, are directed onto sensor 720. More particularly, sensor optics 710 receives electromagnetic radiation (light) from scene 705 and focuses the received electromagnetic radiation (light) onto sensor 720. In one implementation, sensor optics 710 may include an objective lens, or other conventional optics, such as one or more mirrors and/or lenses. Imaging platform 105 may use high precision gimbal mounts (not shown) to achieve a desired pointing solution for sensor optics 710 and/or sensor 720.
[0057] Sensor 720 may be mounted on a moving platform, such as an airborne or space-based imaging platform 105 (shown in FIGS. 1A-5B), that is configured to collect image frames.
Sensor 720 may include any two-dimensional (2-D) sensor configured to detect electromagnetic radiation (light) corresponding to the entering light of interest and generate image frames, whether still or video images. Exemplary electromagnetic radiation detectors may include complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), or other detectors having sufficient spectral response to detect electromagnetic radiation (light) of interest, for example, in the infrared (IR), visible (VIS), and/or ultraviolet (UV) spectra. In one implementation, sensor 720 may be a focal plane array (FPA) sensor.
[0058] Relative motion between imaging platform 105 and scene 705 may be determined to minimize motion, oscillation, or vibration induced frame-to-frame image changes. A variety of sources can provide input data 715 describing the relative motion of the imaging platform to the target and the viewing geometry of the sensor relative to the imaging platform 105. For example, imaging platform 105 may have a predetermined ground track (e.g., deterministic path) for imaging selected terrain. Accordingly, input data 715 may comprise control data specifying the route and/or trajectory of imaging platform 105. Input data 715 may also be provided by one or more trajectory sensors (not shown), either alone or in combination with control data, to directly detect the motion of imaging platform 105 or the relative motion between imaging platform 105 and scene 705. According to various embodiments, trajectory sensors can include inertial, global positioning system (GPS), image processors, velocity (speed), acceleration, etc. They may include mechanical, electro-mechanical, piezoelectric, optical, lasers, radar (ladar), or the like, which are included with the flight systems or avionics of imaging platform 105 or otherwise separately provided. Trajectory sensor(s) may be configured to provide various data, including one or more of: velocity (speed), directional heading, and angular heading, for example, of moving imaging platform 105. Data output from sensor 720 may be configured for Cartesian coordinates, polar coordinates, cylindrical or spherical coordinates, and/or other reference coordinate frames and systems. In one implementation, imaging platform 105 may implement a World Geodetic System WGS-84 oblate Earth coordinate frame model.
[0059] An image processor 730 may be configured to receive image frames from sensor 720 (and other data gathering devices, such as trajectory sensors or the like) and perform image processing, as discussed herein. Image processor 730 may include hardware, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that processor 730 may, in whole or in part, be equivalently implemented in integrated circuits, as one or more computer programs having computer-executable instructions or code running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure.
[0060] In some embodiments, image processor 730 may be located directly on imaging platform 105 and/or with sensor 720. As such, the transformed imagery can be directly transmitted to users who can view the imagery without the need for any additional image processing. However, this need not be the case. Thus, in some embodiments (as shown), image processor 730 may be separate from imaging platform 105. For instance, image processor 730 may be ground-based (such as, at a command center). In another instance, image processor 730 may be vehicle-based, such as, for example, in an automobile, tank, helicopter, airplane, ship, submarine, or the like. Of course, image processor 730 might also be located with users, such as within a display device 750, user terminal 755 or other portable device.
[0061] Sensor 720 and image processor 730 may communicate and/or share information and data, preferably, in "real-time," via one or more connections and/or networks therebetween. Sensor 720 may transmit image frames, trajectory information, and sensor viewing information to image processor 730 by any means (including, for instance, radio, microwave, or other electromagnetic radiation means, optical, electrical, wired or wireless transmissions or the like). In addition, networked communication over one or more digital networks, such as intranets and the Internet, is possible.
[0062] In some instances, memory device 725 (which may also be referred to as a cache or stack) may temporarily or permanently store image frames collected by sensor 720 for subsequent processing by image processor 730. Memory device 725 may be located, for example, with sensor 720 or alternatively with image processor 730.
[0063] FIG. 8 illustrates a flow diagram of an exemplary process 800 using system 700, according to an embodiment. Referring to both FIGS. 7 and 8, in step 805, coefficients of eigenfunctions may be computed, based on imaging platform 105 geometry and pointing information. Exemplary techniques for performing this operation are described in the '755 patent cited above.
[0064] According to an embodiment, computing the image transformation coefficients may involve determining frame-to-frame changes for persistent video frames 728 acquired by sensor 720, for example, based on a function of platform trajectory and sensor pointing angles with respect to a fixed FOV. Inter-frame changes for a persistent video collection may be determined or computed for image frame sets (i.e., sequences of images) as well as super-frame sets (i.e., multiple frame sets). As used herein, "inter-frame" refers to aspects between image frames, also referred to as "frame-to-frame."
[0065] The image frames 728 may be collected by the sensor 720 at different times or instances. In some instances, these frames 728 may be adjacent or successive image frames, such as in the case for typical video. In others, the frames may be processed at different times but not necessarily in the order collected by the sensor 720.
[0066] Many short exposure images (e.g., 1 to 100 ms) of the scene 705 may be taken by the sensor 720. The exposures are selected to be sufficiently short that the imaging platform motion within one exposure period (or image) is expected to be relatively small. Successive frames are then manipulated or transformed to have the appearance of being viewed by a stationary observer.
[0067] It will be appreciated that the sensor 720 need not be trained on any particular location in the scene 705. Rather, the transformations and geolocation error correction may provide a scene that appears to be taken from a non-moving platform (with the exception of actual moving objects), and accounting for geolocation errors. Truly moving objects may be more readily detected by an observer since the background is approximately stationary.
[0068] As shown, image processor 730 may include a look up table (LUT) builder 726, geometry prediction module 732, image frame transform module 734, residual error module 736, resolution enhancement module 738, LUT comparison module 742 and pointing error module 744. According to various embodiments, the processes described can be implemented with a variety of microprocessors and/or software, for example. In some implementations, one or more modules (or their functionality) may be combined or omitted. Other modules and functions are also possible. Further, image processor 730 can be implemented onboard and/or off-site of imaging platform 105 (e.g., at a ground location physically separated from imaging platform 105).
[0069] Image processor 730 may be configured to utilize planar, spherical, or oblate earth models, relief or topographic models, 3-D models of man-made objects, and/or terrain elevation maps.
[0070] During operation of step 805, geometry prediction module 732 may be configured to determine the nature and degree of change between different images collected by sensor 720 by receiving input data 715 and determining one or more transformation functions 733 which mathematically describe the inter-frame change due to movement of imaging platform 105 and/or sensor 720 relative to a target in scene 705. In one embodiment, the transformation functions 733 may be Eigen transformations, with each eigenfunction being directly translatable into a digital adjustment of image frame data for counteracting and/or preventing the determined inter-frame changes.
[0071] Moreover, geometry prediction module 732 may receive input data 715 indicating the relative motion and trajectory of imaging platform 105 and the sensor viewing geometry, which is used to output one or more model eigenfunctions to correct for image change. Geometry prediction module 732 may compute from the received input data 715 the inter-frame FOV mapping to the ground for each set of image frames 728. This may include, for example, taking the difference between different image frames on a pixel-by-pixel basis. For video, these may be successive frames. Geometry prediction module 732 may select one or more image transformations to correct for the inter-frame differences (gradients) in the FOV. For instance, the changes between the initial and subsequent FOV may be modeled by Eigen transformations describing a set of adjustments which are capable of compensating for all image changes induced by platform motion. In particular, they may comprise one or more of the eigenfunction transformations shown in FIGS. 6A-6F for scaling with Eigen transformation coefficients. Geometry prediction module 732 may then perform modeling to find "best-fit" Eigen transformation coefficients for each Eigen mode for the one or more selected Eigen transformations. The transformations may be optimized by calculating "best fits" or coefficients to minimize mean-square error (MSE) or the maximum error, for example. After calculating best fits, the modeled Eigen transformations characterizing existing image distortion are outputted to LUT builder 726.
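A minimal sketch of the "best-fit" computation described above is given below (not part of the original disclosure). It assumes a predicted inter-frame displacement field (dx_obs, dy_obs) and the eigenmode fields from the earlier sketch, and solves for the coefficients that minimize mean-square error by ordinary least squares.

    import numpy as np

    def fit_eigen_coefficients(dx_obs, dy_obs, fields):
        """Least-squares fit of eigenmode coefficients to an observed displacement field."""
        # Each eigenmode contributes one column: its stacked (dx, dy) samples.
        A = np.column_stack([np.concatenate([fx.ravel(), fy.ravel()])
                             for fx, fy in fields.values()])
        b = np.concatenate([dx_obs.ravel(), dy_obs.ravel()])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return dict(zip(fields.keys(), coeffs))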
[0072] Optionally, prior to determining the Eigen transformations and coefficients, geometry prediction module 732 may identify in the frames 728 information representing one or more truly moving objects, and remove the identified information (for addition back to the image frames at the end of image processing).
[0073] With reference again to FIG. 8, in step 810 LUT builder 726 computes a LUT 743 of residual errors representing sensor optics pointing errors, trajectory/altitude errors, or other conceivable errors. These residual errors comprise the residual errors not attributable to relative motion between the imaging platform 105 and scene 705. The LUT 743 may contain many sets of Eigen transformation coefficients, each associated with a different pointing error. For example, any 100 collected frames of data may include sets of Eigen transformation coefficients for each frame and for each hypothetical error. These may be computed a priori or off line, and would be specific to a given nominal platform trajectory and nominal set of pointing angles (e.g., with respect to nadir). After collecting the 100 frames, the motion residuals would be measured and then, if above a threshold, the motion residuals would be fit to the eigenfunctions to compute the residual transformation coefficients. The computed residual transformation coefficients may then be compared to the LUT (e.g., using a nearest neighbor comparison, etc.) to determine the nature of the pointing errors.
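The nearest-neighbor LUT comparison suggested above can be sketched as follows (illustrative only, not the patent's implementation). The LUT is assumed to map a hypothesized error label to the residual Eigen coefficient vector it would produce, and the measured coefficients are matched to the closest stored entry.

    import numpy as np

    def nearest_error_hypothesis(measured_coeffs, lut):
        """Return the LUT entry whose coefficient vector is closest to the measurement."""
        best_label, best_dist = None, np.inf
        for label, entry in lut.items():
            dist = np.linalg.norm(np.asarray(measured_coeffs) - np.asarray(entry))
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label, best_dist

    # Toy usage with made-up coefficient vectors (values are illustrative only):
    lut = {
        "nadir_pointing_bias": [0.80, 0.10, 0.0, 0.0, 0.0, 0.0],
        "surface_height_error": [0.02, 0.65, 0.0, 0.0, 0.0, 0.0],
    }
    label, _ = nearest_error_hypothesis([0.78, 0.12, 0.0, 0.0, 0.0, 0.0], lut)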
[0074] In step 815, image frame transform module 734 applies the selected image Eigen transformations (shown in FIGS. 6A-6F), multiplied by the eigenfunction coefficients, to the image frames 728 so as to digitally transform the image frames 728 of scene 705 with respect to a common FOV. This results in image frames that appear to be collected by a non-moving imaging platform, in which the pixel size and orientation of pixels are the same and in which any motion remaining in the digitally transformed frames comprises motion residuals.
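For concreteness (not part of the original disclosure), step 815 can be sketched as a resampling of each frame by the combined eigenmode field. The helper names come from the earlier sketches, the displacement units and interpolation order are arbitrary choices, and scipy's generic map_coordinates resampler stands in for whatever warping an actual implementation uses.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def apply_eigen_warp(frame, coeffs):
        """Resample `frame` so that the modeled platform-induced motion is removed."""
        ny, nx = frame.shape
        fields = eigenmode_fields(nx, ny)                      # from the earlier sketch
        dx, dy = combined_field([coeffs[k] for k in fields], fields)
        rows, cols = np.mgrid[0:ny, 0:nx].astype(float)
        # Sample the source frame at the displaced coordinates (row = y, col = x).
        return map_coordinates(frame, [rows + dy, cols + dx], order=1, mode="nearest")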
[0075] In step 820, residual error module 736 computes apparent motion residuals by determining inter-frame differences/gradients between the digitally transformed frames output by image frame transform module 734. This technique involves measuring line of sight error using scene data in the digitally transformed frames only, i.e., no additional a priori platform motion information is required.
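The inter-frame gradient measurement of step 820 can be illustrated with a simple scene-wide, gradient-based shift estimate (a Lucas-Kanade-style calculation chosen here only for illustration; the patent does not prescribe a specific estimator).

    import numpy as np

    def translation_residual(frame_a, frame_b):
        """Estimate a single scene-wide (dx, dy) shift, in pixels, between two frames."""
        gy, gx = np.gradient(frame_a.astype(float))              # spatial gradients
        gt = frame_b.astype(float) - frame_a.astype(float)       # temporal (inter-frame) gradient
        A = np.column_stack([gx.ravel(), gy.ravel()])
        b = -gt.ravel()
        (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
        return dx, dy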
[0076] In step 825, residual error module 736 fits the Eigen transformation functions for linear motion translation in the linear X-direction (FIG. 6B) and linear Y-direction (FIG. 6C) to the motion residuals to estimate residual translation coefficients. Residual error module 736 is configured to determine the nature and degree of change between successive digitally transformed image frames 728 and to apply the linear transformation functions that mathematically describe the inter-frame change to create difference or gradient images. Residual error module 736 may, prior to comparing successive frames, co-register the successive digitally transformed image frames, based on the fixed, known spatial relationship between or among the co-registered residual error frames. Residual error module 736 fits the scene-wide residual motion difference or gradient images to the X-direction and Y-direction linear eigenfunctions (described above and shown in FIGS. 6B-6C) to determine the appropriate one or more transformation(s) and a corresponding optimal set of translation residuals Eigen coefficients for compensating for geolocation errors. Residual error transform modeling may be performed to find "best-fit" residuals transformation coefficients for each eigenfunction for the one or more translation eigenfunction transformations. For example, the transformations may be optimized by calculating "best fit" residuals transformation coefficients that minimize mean-square error (MSE) or the maximum error. In step 830, residual error module 736 may determine whether the residuals are greater than a selected threshold. The residuals have a size measured in scene pixels or ground sample distance. If any of the residuals are larger than a user-selected threshold, the operations continue. If the residuals are smaller than the threshold, indicating that the linear translation transformations were sufficient to compensate for the residual errors, the translation residuals transformation coefficients and linear transformations are passed to resolution enhancement module 738. If, however, the residuals are greater than the selected threshold, then in step 830 one or more additional transformations, e.g., skew (FIG. 6A), rotation (FIG. 6D), scale (FIG. 6E), and/or anamorphic stretch (FIG. 6F) (others may be possible), are fit to the differences/gradients associated with the digitally transformed image frames 728. After calculating best fits, the modeled eigenfunction transformations using all six eigenfunctions and their computed residuals transformation coefficients may be output to resolution enhancement module 738.
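A control-flow sketch of steps 825-830 (not part of the original disclosure) is shown below; it reuses the fitting helpers from the earlier sketches, and the residual-magnitude metric and threshold units are assumptions.

    import numpy as np

    LINEAR_MODES = ["x_translation", "y_translation"]
    ALL_MODES = LINEAR_MODES + ["rotation", "scale", "anamorphic", "skew"]

    def residual_magnitude(dx_res, dy_res, fields, coeffs):
        """RMS flow left over after removing the fitted eigenmode combination."""
        fit_dx, fit_dy = combined_field([coeffs[k] for k in fields], fields)
        return np.sqrt(np.mean((dx_res - fit_dx) ** 2 + (dy_res - fit_dy) ** 2))

    def fit_residuals(dx_res, dy_res, fields, threshold_px):
        """Fit translations first; expand to all six eigenmodes only if needed."""
        linear = {k: fields[k] for k in LINEAR_MODES}
        coeffs = fit_eigen_coefficients(dx_res, dy_res, linear)
        if residual_magnitude(dx_res, dy_res, linear, coeffs) > threshold_px:
            full = {k: fields[k] for k in ALL_MODES}
            coeffs = fit_eigen_coefficients(dx_res, dy_res, full)
        return coeffs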
[0077] The inventors have determined that the application of the two linear eigenfunctions (X-motion and Y-motion) may prove sufficient for characterizing the vast majority of pointing/geolocation motion (e.g., caused by gimbal pointing error and/or height error) still present in the post-digitally transformed frame images 728. The "expected" residual errors typically manifest in easily detected linear translation errors, though other transformation motions may be fit as needed. For example, FIGS. 9A-9C provide a simulation experiment illustrating this principle. FIG. 9A represents an optical flow for an intended pointing solution.
FIG. 9B depicts the optical flow actually observed after purposefully introducing a gimbal pointing error comprising a nadir angle error of 0.01 degrees (174 urad) from the nominal nadir angle (azimuth 0°). FIG. 9C represents the residual optical flow, which is almost entirely a linear translation transformation for many of the errors.
[0078] Those of skill in the art of image processing will readily appreciate that the estimation of residual transformation(s) performed by residual error module 736, including, for example, the two to six identified eigenfunction transformations (and/or others) utilized and the associated residuals transformation coefficients for estimating and correcting the residual geolocation error, using scene-wide changes as described herein, could be performed with alternative techniques, such as by using sub-frame groups of pixels; however, such approaches would likely be computationally more burdensome.
[0079] Referring again to FIGS. 7 and 8, in step 835, resolution enhancement module 738 applies the two or more computed eigenfunctions and associated residuals transformation coefficients to the digitally transformed image frames to remove the estimated residual motion from the digitally transformed frames. Resolution enhancement module 738 is configured to enhance the resolution of transformed image frames, for example, by interpolating and transforming imagery to remove residual motion of successive frames, increasing the sampling of aggregate images due to naturally occurring movement of pixels as mapped to the ground. This may be further aided by deterministic frame shifting.
[0080] In one implementation, a resolution enhancement process may be implemented by resolution enhancement module 738. Images of improved resolution, for example, may be generated by interpolating and aggregating images according to known algorithms, such as frequency or space domain algorithms. The images are not highly oversampled per se, but a sequence of images that are ultimately aggregated become highly oversampled by virtue of recognizing the naturally occurring changes in the sensor FOV and then creating a tailored, non-uniformly spaced interpolation grid based on these naturally occurring changes. One benefit of super-resolution processing is improved edge contrasts. In some instances, the enhanced images may enable a high "rating" according to the National Imagery Interpretability Rating Scale (NIIRS). Additional sub-pixel steering of the field of view may be employed to further enhance the sampling of the scene.
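One simple space-domain realization of the aggregation described above is a shift-and-add accumulation onto a finer grid, sketched below for illustration only; the interpolation grid described in the text is non-uniform and derived from the sensed FOV changes, whereas this toy version simply rounds to the nearest upsampled pixel.

    import numpy as np

    def shift_and_add(frames, shifts, upsample=2):
        """frames: list of HxW arrays; shifts: per-frame (dy, dx) sub-pixel offsets."""
        h, w = frames[0].shape
        accum = np.zeros((h * upsample, w * upsample))
        count = np.zeros_like(accum)
        rows, cols = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, shifts):
            # Place each pixel at its shifted location on the upsampled grid.
            r = np.clip(np.round((rows + dy) * upsample), 0, h * upsample - 1).astype(int)
            c = np.clip(np.round((cols + dx) * upsample), 0, w * upsample - 1).astype(int)
            np.add.at(accum, (r, c), frame)
            np.add.at(count, (r, c), 1)
        return accum / np.maximum(count, 1)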
[0081] One or more users may interface with system 700. Users typically will be located remotely from imaging platform 105 and/or image processor 730, for instance. Of course, users may also be located on imaging platform 105, and/or at a location near image processor 730. In one or more implementations, users can communicate with, and/or share information and data with, image processor 730 by any means (including, for instance, radio, microwave, or other electromagnetic radiation means, optical, electrical, wired, and wireless transmissions or the like). In addition, networked communication over one or more digital networks, such as intranets and the Internet, is possible.
[0082] User display 750 may be configured to enable one or more users to view motion and geolocation error corrected image frames (e.g., stills or video) output from image processor 730.
User display 750 may include, for instance, any display device configured for displaying video and/or image frames. Televisions, computer monitors, laptops, tablet computing devices, smart phones, personal digital assistants (PDAs) and/or other displays and computing devices may be used. User terminal 755 may be configured to enable users to interact with image processor 730.
In some implementations, users may be presented with one or more data acquisition planning tools.
[0083] Video sequences of the transformed imagery may be displayed, in which static, moving, and/or 3-D objects may be identified (e.g., highlighted, color-coded, annotated, etc.) in the displayed image(s) of the scene. As such, human and machine interpretation is greatly facilitated. No additional digital image processing may be required once the images are transformed, in many instances.
[0084] In step 845, LUT comparison module 742 compares the computed residual transformation coefficients to residual transformation coefficients stored in residual motion LUT 743. Trends in residual motion revealed by this comparison permit estimation of pointing (geolocation) errors in the present pointing solution. FIGS. 10A and 10B illustrate experimental simulated testing results using the satellite imaging platform and Earth model shown in FIG. 3C, wherein the satellite is initially positioned along the x-axis and has an initial velocity V in the positive y-axis direction. The FOV initially projects onto Earth pointed northward with a 5° nadir angle. FIG. 10A illustrates the time evolution (trend) of residual motion error (e.g., similar data possibly being stored in the LUT 743 for comparison purposes) with an induced 50 urad pointing error in the nadir direction. The graph demonstrates that virtually all of the induced error may be measured using the two linear translation (X-motion, Y-motion) eigenimages. FIG. 10B illustrates the trend over time of residual motion errors with an induced 10 meter surface altitude (height) error. The graph illustrates that this error may be virtually entirely measured in one translation (Y-motion) eigenimage.
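The trending mentioned in step 845 can be illustrated with a simple per-eigenmode polynomial fit over the frame sequence (an assumed form of trend model, used here only for illustration), whose shape can then be compared against LUT entries for hypothesized error sources.

    import numpy as np

    def trend_coefficients(coeff_history, deg=2):
        """coeff_history: dict mapping eigenmode name -> per-frame coefficient values."""
        trends = {}
        for mode, values in coeff_history.items():
            values = np.asarray(values, dtype=float)
            frames = np.arange(len(values))
            trends[mode] = np.polyfit(frames, values, deg)   # low-order trend per mode
        return trends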
[0085] In step 850, pointing error module 744 may optionally interpret such trending and curve-fitting of the residual transformation coefficients representing residual motion errors computed for a current frame against previously stored residual motion data, so as to determine which of several possible root causes of the residual error is most likely responsible. LUT 743 may comprise sets of points and/or curve-fits for a given number of frames. FIG. 11 is a plot of the simulation results for the difference in eigenfunction coefficient amplitudes obtained between the ideal case and the induced 10 meter surface height knowledge error case. The results demonstrate how virtually all of the residual motion error may be measured with the two translation eigenfunctions. Pointing error module 744 may provide feedback regarding whether sensor 720 is staring perfectly at a point on the Earth. The computed pointing (geolocation) errors may be output to update pointing calibration and/or pointing solution information at sensor optics 710. This enables adjustments in the pointing solution for future scene imaging with perfect persistent observations ("staring") and/or image frame freezing, such as is useful in motion detection. Pointing error module 744 may also output an indication of the root cause of the pointing error, such as biases in gimbal angle and/or trajectory error (especially height). The computed residuals transformation coefficients may also be used to adjust the one or more eigenfunction transformations that compensate for the apparent motion in the scene induced by relative motion between the scene and the movable imaging platform.
[0086] The scene-wide transformations applied to enable "freezing" of imagery may be used for enhanced motion detection, and frame stacking may be used to enhance SNR. In step 840, resolution enhancement module 738 may optionally add or sum a plurality of successive compensated digitally transformed frames to obtain higher SNR scene imagery and/or to detect truly moving objects in the imagery. If "true mover" information had previously been identified and removed from the captured image frames 728, such information may be added back into the compensated digitally transformed frames. In some instances, the enhanced images may enable a high "rating" according to the National Imagery Interpretability Rating Scale (NIIRS).
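The frame stacking of step 840 amounts to summing or averaging co-registered compensated frames; a minimal sketch (illustrative only, with an assumed threshold-based mover test) is:

    import numpy as np

    def stack_frames(frames):
        """Average co-registered, compensated frames; SNR grows roughly as sqrt(N)
        for uncorrelated noise."""
        return np.mean(np.stack([f.astype(float) for f in frames], axis=0), axis=0)

    def detect_movers(frames, threshold):
        """Flag pixels that depart from the stacked (approximately static) background."""
        background = stack_frames(frames)
        return [np.abs(f.astype(float) - background) > threshold for f in frames]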
[0087] FIGS. 12A through 12C illustrate simulated frames of video data of an exemplary scene as if rendered by a sensor from an airborne imaging platform. FIG. 12A shows a simulated initial frame with no motion or geolocation error correction applied. The scene includes a plurality of vehicles, including pick-up trucks and mobile (e.g., SCUD) missile launchers. FIG. 12B shows a simulated frame of video data of the same scene as shown in FIG. 12A, at a second instance. The images are taken shortly apart, and thus have a different angle separation. The image depicted in FIG. 12B has changed slightly with regard to scale, rotation, and/or viewing angle. Moreover, the image appears slightly more stretched in one dimension (horizontal) than the other. FIG. 12C shows the residual motion error after platform motion induced error has been removed from the frame. The plot maps the movement of the pixels to the ground. The length and direction of the vector arrows show the movement of pixels from one frame to another.
[0088] FIGS. 13A through 13E are results from additional simulation testing undertaken to explore the dependence of residual motion error on the magnitude of induced pointing bias. The simulation assumed conditions of a satellite imaging platform at 400 km altitude, a sensor looking down at a nadir angle of 20°, the sensor looking broadside perpendicular to the velocity vector V, a collection time of 10 seconds, a FOV of 1°, a sensor controller attempting to keep the center of the FOV pointed at the same location on the ground for the duration of the collection time, perfect knowledge of the satellite position and planar Earth surface altitude, a gimbal azimuth singularity at the horizon, and two sensor bias cases (0 and 50 urad in the nadir direction).
FIG. 13A shows the motion of image points in a focal plane frame of reference. With perfect knowledge of the satellite ephemeris, surface altitude and sensor pointing, the error at the center of the FOV is zero. FIG. 13B shows the image motion with a pointing bias of 50 urad. A consequence of the pointing bias is that even the center point of the FOV exhibits a small amount of motion on the focal plane. The true ground intersection point is not stationary, as expected, due to the pointing bias. FIG. 13C shows the difference in image motion between the ideal and actual cases. The image motion difference between the expected and actual motion over the 10 second collection time is about 24 urad. FIG. 13D shows the residual motion once the common platform motion displayed on the chart of FIG. 13C is removed. FIG. 13E shows the dependence of the residual motion error on the magnitude of the induced nadir angle pointing bias. Similar curves could be used to determine the size of pointing bias in non-simulation image processing.
[0089] Although the above disclosure discusses what is currently considered to be a variety of useful examples, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed examples, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims.
[0090] One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. The scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
[0091] No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles "a" and "an" are intended to include one or more items, and may be used interchangeably with "one or more." Furthermore, as used herein, the term "set" is intended to include one or more items, and may be used interchangeably with "one or more." Where only one item is intended, the term "one" or similar language is used. Also, as used herein, the terms "has," "have," "having," or the like are intended to be open-ended terms. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.

Claims (1)

1. VIDEO GEOLOCATION OUND [0001] This application generally relates to image processing, and more particularly, identifying and correcting errors in pointing solutions for persistent observation sensors. [0002] There is a desire to collect persistent video (i.e., multiple image sequences) of a target from overhead platform-based (e.g., airborne or space-based) sensors that can easily be viewed, and/ or interpreted, via displays. This may be especially important for military personnel and/ or for other persons, using portable devices that may have limited processing capabilities. Conventional persistent video sensors generally stay fixed to (or focus on) a single point, for instance, on the , while the overhead rm is in motion. [0003] The motion of the platform, r, causes changes in scale, perspective (e. g. parallax), on, and/or other changes in viewing geometry. These changes can complicate or prevent human and/or machine interpretation of targets, features, and s. Conventional persistent video relies on human interpretation to ignore changes in the measured scene that result from platform motion and/or imperfect sensor g. [0004] Prior approaches that attempt to correct for errors in ng solutions have included very computationally intensive and laborious techniques for iteratively determining platform on and sensor ght pointing. U.S. patent no. 8,471,915, issued June 25, 2013, entitled “Self-Correcting Adaptive Long-Stare Electro-Optical System“, discloses ques for calculating transformations to prevent image intra-frame distortion caused by a relative motion between the scene and the imaging platform, and preventing ric differences from manifesting as smear within an integration time, thus preventing intra-frame distortion. However, this system relies upon controlling an optical element based on the ormation to prevent the image tion, and may require more computations for intra-frame motion prevention. [0004A] US 9,294,755 discloses an imaging platform that minimizes inter-frame image changes when there is relative motion of the imaging platform with respect to the scene being , where the imaging platform may be particularly susceptible to image change, especially when it is red with a wide field of view or high angular rate of movement. In one embodiment, a system is configured to capture images and comprises: a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels; and an image processor configured to: digitally transform captured images with respect to a common ?eld of view (FOV) such that the transformed images appear to be taken by a non-moving imaging platform, wherein the pixel size and orientation of pixels of each transformed image are the same. A method for measuring and displaying 3-D features is also disclosed. [0004B] US 2012/0307901 discloses a system and method for processing images of a scene captured by an imaging rm include a correction processor configured to determine a plurality of coefficients associated with transformations that substantially correct ed inter- frame changes in the images caused by relative motion between the scene and the imaging platform; a transformation processor configured to transform the captured images using the plurality of coefficients and ormations so as to substantially correct said expected inter- frame changes; and a module configured to store the plurality of coef?cients in image metadata associated with the images. 
[0004C] US 2013/0216144 discloses a system, a method, and a computer readable medium having instructions for processing images. For example, the method includes receiving, at an image processor, a set of images corresponding to a scene changing with time; decomposing, at the image processor, the set of images to detect static objects, leaner objects, and mover objects in the scene, the mover objects being objects that change spatial orientation in the scene with time; and processing, using the image processor, the mover objects in the scene separately at a rate different from that of the static objects and the leaner objects for storage and/or transmission.

[0005] Thus, systems and methods for providing feedback as to whether an electro-optical/infrared sensor is staring perfectly are desired without the aforementioned drawbacks. For example, a system that can determine whether errors exist in the sensor pointing solution, that may facilitate identification of one or more root cause(s) of such errors (e.g., biases in gimbal angle, trajectory error (particularly height), etc.), and that can improve image quality by correcting such errors instantly and in future image acquisition in situations which are particularly susceptible to inter-frame changes (e.g., imaging platforms having a wide field of view and/or high angular rates of movement with respect to the ground) would be greatly appreciated.

SUMMARY

[0006] According to one or more embodiments, closed-loop systems and/or methods are provided that enable image frames to be captured by a moving platform-based sensor and to be displayed and/or processed as if the platform motion never occurred and as if no geolocation errors (e.g., sensor gimbal angle, trajectory error including altitude, etc.) were present. In addition, the system and method can provide feedback (e.g., for pointing solution calibration purposes) on whether an electro-optical infrared sensor is staring perfectly at a point on the Earth, and help determine the root cause of errors in the imaging process. The identified errors may be fed back to the host system to enable perfect staring in future image acquisition and/or improved “freezing” of imagery for enhanced signal to noise ratio (SNR). This greatly facilitates and simplifies both human and machine target recognition when displayed.

[0007] In one embodiment, a system is provided for pointing error compensated imaging by a movable imaging platform including an imaging sensor (e.g., a focal plane array sensor configured to point at a constant point on Earth). The pointing error may include sensor gimbal pointing errors, errors in knowledge of scene mean altitude, or platform altitude knowledge errors. One or more imaging processors may be configured to receive frames of a scene captured by the imaging sensor, wherein each frame comprises a plurality of scene pixels. The captured frames may be digitally transformed with respect to a common field of view (FOV), applying one or more transformations that compensate for apparent motion in the captured frames induced by relative motion between the scene and the movable imaging platform, such that the pixel size and orientation of pixels of the digitally transformed frames are the same. The processor(s) may then calculate any motion residuals, comprising any apparent motion remaining in the digitally transformed frames, based on inter-frame scene gradients between the digitally transformed frames.
If any motion residuals are determined to remain in the digitally transformed frames, the processor(s) may fit a set of image eigenfunctions to the calculated motion residuals, in order to compute residual transformation coefficients representing a pointing error of the imaging sensor. The processor(s) may then apply the set of image eigenfunctions scaled by the residual transformation coefficients to the digitally transformed frames to compensate for the pointing error, and output the compensated digitally transformed frames.

[0008] In another embodiment, the imaging processor(s) may compare the computed residual transformation coefficients to residual transformation coefficients previously computed and stored in a database of motion residuals, in order to determine one or more causes of the pointing error (as described below). In certain embodiments, the imaging processor(s) may previously populate the database with residual transformation coefficients based on known or expected relative motion of the platform to the scene and on a known or expected pointing angle.

[0009] In another embodiment, the imaging processor(s) may correct the transformations applied to future image acquisitions based on the computed residual transformation coefficients.

[0010] In certain embodiments, only linear transformations are needed as image eigenfunctions to successfully identify the pointing error(s). However, in alternate embodiments the calculated motion residuals are compared to a selected or defined threshold value, and if the motion residuals exceed the threshold, additional eigenfunctions may be utilized, including rotation, scale, anamorphic stretch, skew and/or jitter.

[0011] In yet another embodiment, the imaging processor(s) may identify in the captured frames information representing one or more moving targets, remove that truly moving target information from the captured frames prior to digitally transforming the captured frames, and later add back in the information to the compensated digitally transformed frames.

[0012] In other embodiments, the digital transformations comprise homography functions or eigenfunctions scaled by coefficients computed based on a known trajectory of the movable imaging platform and a known imaging sensor pointing angle relative to the scene being imaged.

[0013] In other implementations, methods may be provided for pointing error compensated imaging by performing some or all of the processing steps described above as performed by one or more image processors.

[0014] In yet another implementation, a non-transient computer readable medium may be provided having stored therein program instructions that, when executed by one or more processors, cause the processor(s) to provide for pointing error compensated imaging by performing some or all of the processing steps according to any of the methods described above.

[0015] These and other features and advantages of the system and method will be apparent from this disclosure. It is to be understood that the summary, drawings, and detailed description are not restrictive of the scope of the inventive concept described herein.

BRIEF DESCRIPTION OF THE FIGURES

[0016] The foregoing and other objects, features and advantages will be apparent from the following, more particular description of the embodiments, as illustrated in the accompanying figures, wherein like reference characters generally refer to identical or structurally and/or functionally similar parts throughout the different views. The figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments, wherein:
[0017] FIG. 1A shows an imaging platform and its initial field of view (FOV);
[0018] FIG. 1B shows changes between the initial FOV and a subsequent FOV;
[0019] FIG. 2A shows an imaging platform and its initial FOV about a staring point;
[0020] FIG. 2B shows a subsequent FOV due to the movement of the imaging platform between the initial and subsequent imaging times;
[0021] FIG. 3A shows an imaging platform and its initial FOV;
[0022] FIG. 3B shows a change in scale of a subsequent FOV of the imaging platform due to movement of the imaging platform toward the area being imaged;
[0023] FIG. 3C shows a perspective view of an exemplary use environment of an imaging platform relative to the Earth;
[0024] FIG. 4A shows an imaging platform as both its altitude and angle from the zenith are reduced;
[0025] FIG. 4B shows a subsequent FOV scaled in both the X- and Y-directions due to the reduction in altitude and zenith angle;
[0026] FIG. 5A shows an imaging platform as it approaches the reader in a direction perpendicular to the plane of the page;
[0027] FIG. 5B shows a subsequent FOV due to skew;
[0028] FIG. 6A shows a vector field of an exemplary transformation comprising a skew;
[0029] FIG. 6B shows a vector field of an exemplary transformation comprising linear X-motion;
[0030] FIG. 6C shows a vector field of an exemplary transformation comprising linear Y-motion;
[0031] FIG. 6D shows a vector field of an exemplary transformation comprising a rotation;
[0032] FIG. 6E shows a vector field of an exemplary transformation comprising a change in scale;
[0033] FIG. 6F shows a vector field of an exemplary transformation comprising an anamorphic stretch;
[0034] FIG. 7 shows a schematic of an exemplary system for identifying and correcting errors in pointing solutions for persistent observation sensors;
[0035] FIG. 8 shows an exemplary schematic for processing performed by a system in accordance with an embodiment;
[0036] FIGS. 9A-9C show examples of intended optical flow, actual optical flow, and residual optical flow, respectively;
[0037] FIGS. 10A and 10B are plots illustrating experimental simulated testing results of a model of a satellite imaging platform over the Earth;
[0038] FIG. 11 is a plot of example simulation results showing differences in eigenfunction coefficient amplitudes obtained between an ideal case and an induced error case;
[0039] FIGS. 12A through 12C illustrate simulated frames of video data of an exemplary scene as if rendered by a sensor from an airborne imaging platform; and
[0040] FIGS. 13A through 13E are plots of experimental simulation test results.

DETAILED DESCRIPTION

[0041] In the description that follows, like components may be given the same reference characters, regardless of whether they are shown in different examples. To illustrate an example(s) of the present disclosure in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form. Features that are described and/or illustrated with respect to one example may be used in the same way or in a similar way in one or more other examples and/or in combination with or instead of the features of the other examples.

[0042] A system configured to capture images may include a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels, and one or more image processors for executing instructions for practicing the techniques described below. One technique involves the digital transformation of captured images with respect to a common field of view (FOV) so as to “freeze” the imagery.
The pixel size and orientation of the pixels of each transformed image are the same in the common FOV.

[0043] The images may include, for example, video images and/or multiple intermittent still images generated by a sensor. In one or more implementations, the sensor may be a camera. The frame rate for video may be, for example, 30 frames per second (fps), i.e., 30 Hz. Frame rates may also be higher, such as, for example, 60 fps. Image frames may be digital data and include a plurality of pixels, whether supporting various colors (e.g., red-green-blue (RGB) or cyan-yellow-magenta-black (CYMK)) or monochrome, and that are of sufficient resolution to permit a viewer to appreciate what is depicted therein. For example, the resolution may be 480 pixels in both width and height, or greater, such as 640×480, 800×800, 1024×768 or 1280×800, for example. Other resolutions (e.g., smaller and larger) are also possible.

[0044] U.S. patent no. 9,294,755, issued March 22, 2016, entitled “Correcting Frame-to-Frame Image Changes Due to Motion for Three Dimensional Persistent Observations” (the ’755 patent), showed that a priori information of platform trajectory and sensor pointing solution could be used to implement scene-wide transformations to enable “freezing” of imagery for enhanced signal to noise ratio (SNR) and motion detection. Satellites and aircraft have very precise knowledge of their location and are equipped with precision pointing systems, yet this knowledge may contain errors that detract from such techniques being able to ensure persistent staring at a point on the Earth. In situations where the system may not be making correct geolocation measurements of inter-frame geometric changes, due to such errors, residual image eigenfunction transformation coefficients (also referred to herein interchangeably as “Eigen coefficients”) may be calculated based on inter-frame scene gradients between the digitally transformed frames, and the changes in these coefficients trended in order to estimate sensor pointing errors.

[0045] FIGS. 1-6 illustrate image change problems due to a moving imaging platform-based sensor. As mentioned above, persistent image and video sensors generally stay fixed to (or stare at, or focus on) a single point being tracked, for instance, on the ground, while the overhead imaging platform is in motion. However, motion of the platform and pointing solution errors (e.g., sensing system gimbal errors, altitude errors, etc.) can cause changes in scale, perspective (e.g., parallax), rotation, and/or other changes in viewing geometry. These changes can complicate or prevent human and/or machine interpretation of targets, features, and threats.

[0046] FIG. 1A shows imaging platform 105 (in this case, a satellite), having initial field of view (FOV) 110, capturing images while gazing at staring point 115 with a pointing solution. An initial image is sensed at initial detector points (e.g., pixels) (shown as open circles). However, in a subsequent image, the FOV of imaging platform 105 may change due to relative movement between the scene and imaging platform 105.

[0047] FIG. 1B shows that, due to the motion of imaging platform 105, a subsequent FOV 120 is no longer coextensive with initial FOV 110 in a later image capture. For instance, while it is possible to align (center) staring point 115, the detector points (shown as darkened circles) are shifted with respect to the initial detector points. As a result, an image, or a composite image formed by combining images, may be blurred.
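To make the common-FOV “freezing” of paragraph [0042] above concrete, the following minimal Python/numpy sketch resamples a captured frame onto a common output grid under an affine approximation of the platform-induced geometry change. The warp_to_common_fov helper and the example affine matrix are illustrative assumptions, not the patent's actual transformation chain, which is built from the eigenmode transformations discussed below.

```python
import numpy as np

def warp_to_common_fov(frame, affine, out_shape):
    """Resample `frame` onto a common-FOV grid.

    `affine` is a 2x3 matrix mapping output (row, col) coordinates back
    into input-frame coordinates (inverse mapping), standing in for the
    platform-motion-induced change in viewing geometry.
    """
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]].astype(float)
    src_r = affine[0, 0] * rows + affine[0, 1] * cols + affine[0, 2]
    src_c = affine[1, 0] * rows + affine[1, 1] * cols + affine[1, 2]

    # Bilinear interpolation, zero-filled outside the captured frame.
    r0 = np.floor(src_r).astype(int)
    c0 = np.floor(src_c).astype(int)
    dr, dc = src_r - r0, src_c - c0
    valid = (r0 >= 0) & (r0 < frame.shape[0] - 1) & \
            (c0 >= 0) & (c0 < frame.shape[1] - 1)
    r0c = np.clip(r0, 0, frame.shape[0] - 2)
    c0c = np.clip(c0, 0, frame.shape[1] - 2)
    top = (1 - dc) * frame[r0c, c0c] + dc * frame[r0c, c0c + 1]
    bot = (1 - dc) * frame[r0c + 1, c0c] + dc * frame[r0c + 1, c0c + 1]
    out = (1 - dr) * top + dr * bot
    return np.where(valid, out, 0.0)

# Hypothetical example: a slight scale change plus a sub-pixel translation.
frame = np.random.rand(480, 640)
affine = np.array([[1.001, 0.0, 1.7],
                   [0.0, 1.001, -0.4]])
frozen = warp_to_common_fov(frame, affine, frame.shape)
```

Inverse mapping with bilinear interpolation is used so that every pixel of the common grid is defined exactly once, which is what makes the pixel size and orientation identical across the transformed frames.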
[0048] FIGS. 2A-5C show examples of physical motions which may cause image change. FIG. 2A, for example, shows initial FOV 110 as imaging platform 105 stares at point 115 while the platform moves at velocity V. FIG. 2B shows a change of subsequent FOV 220 due to the overall motion.

[0049] The changes in the size and orientation of the FOV are decomposed into a series of eigenmodes.

[0050] FIG. 3A shows initial FOV 110 as the altitude of imaging platform 105 is reduced. FIG. 3B shows scale changes of subsequent FOV 320. In this example, the change in scale is equal in both the horizontal and vertical directions since imaging platform 105 moves directly toward FOV 110. However, in general, the change in scale may be different along each axis. Changes in scale of the FOV also result in changes in the mapping of individual image pixels to the scene. FIG. 3C illustrates an additional perspective of the viewing geometry between imaging platform 105 and staring point 115 on the surface of the Earth 120. While global positioning systems (GPS) may provide very accurate information about the distance of imaging platform 105, measured from the center of the Earth 120, inaccuracies (e.g., due to non-uniform Earth surface elevation, etc.) in the relative distance between imaging platform 105 and the staring point 115 on the surface of the Earth may introduce residual errors in imagery not effectively compensated by previous techniques based on a priori platform motion information alone. In this example, which is the basis for simulation testing described below, imaging platform 105 is initially positioned along the x-axis (e.g., at an altitude of 400 km above a spherical Earth with radius 6371 km) and has an initial velocity V in the positive y-axis direction. FOV 110 is shown projected onto the Earth pointed northward with a 5° nadir angle.

[0051] FIG. 4A shows imaging platform 105 approaching both the zenith and the area being imaged. FIG. 4B shows an anamorphic scale change of subsequent FOV 420. In particular, subsequent FOV 420 is scaled in both the X- and Y-directions due to the reduction in altitude of imaging platform 105. Further, subsequent FOV 420 is scaled in the Y-direction more than in the X-direction because line-of-sight 425 remains perpendicular to the X-axis while angle 430 changes with respect to the Y-axis due to the change in zenith angle.

[0052] FIG. 5A shows imaging platform 105 having line-of-sight 525 moving with velocity V (i.e., approaching the reader in a direction perpendicular to the plane of the page). FIG. 5B shows initial FOV 110 and subsequent FOV 520 caused by skew change. FIG. 6A shows an alternative depiction of skew as a vector field. The lengths of the vectors correspond to magnitudes of the displacement from the line of sight.

[0053] These and other detected inter-frame image changes due to movement of the imaging platform-based sensor may be initially corrected, as a first step, using the imaging system and method described herein, in one or more embodiments, which digitally transforms successive images with respect to a common FOV such that the successive images appear to be viewed from the same non-moving platform. The pixel size and orientation of pixels of each transformed image are the same or common. After transformation, the scene may contain residual motion that can then be measured and used to compute and correct pointing errors.

[0054] FIGS. 6A-6F show vector fields associated with various eigenmode change transformations for providing the stationary view. In particular, they illustrate skew (FIG. 6A), linear motion in the X-direction (FIG. 6B), linear motion in the Y-direction (FIG. 6C), rotation (FIG. 6D), scale or gain (FIG. 6E), and anamorphic stretch (FIG. 6F), respectively, which may be performed by the imaging system (and method) according to embodiments.
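As a concrete handle on the vector fields of FIGS. 6A-6F, the sketch below evaluates one plausible parameterization of the six eigenmodes on a pixel grid. The unit coefficients and frame-centered origin are illustrative assumptions; the patent does not fix a particular normalization.

```python
import numpy as np

def eigenmode_basis(height, width):
    """Return six flow-field eigenmodes sampled on the pixel grid as an
    array of shape (6, height, width, 2) holding (dx, dy) per pixel.
    Ordering: X-motion, Y-motion, rotation, scale, anamorphic stretch,
    skew (an assumed ordering used by the later sketches)."""
    y, x = np.mgrid[0:height, 0:width].astype(float)
    x -= (width - 1) / 2.0   # put the origin at the frame center
    y -= (height - 1) / 2.0

    zeros, ones = np.zeros_like(x), np.ones_like(x)
    modes = np.stack([
        np.stack([ones,  zeros], axis=-1),   # linear X-motion    (FIG. 6B)
        np.stack([zeros, ones],  axis=-1),   # linear Y-motion    (FIG. 6C)
        np.stack([-y,    x],     axis=-1),   # rotation           (FIG. 6D)
        np.stack([x,     y],     axis=-1),   # scale / gain       (FIG. 6E)
        np.stack([x,    -y],     axis=-1),   # anamorphic stretch (FIG. 6F)
        np.stack([y,     x],     axis=-1),   # skew               (FIG. 6A)
    ])
    return modes
```

Scaling each mode by a coefficient and summing the results yields a scene-wide displacement field, which is the sense in which the Eigen coefficients parameterize the frame-to-frame change.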
[0055] FIG. 7 shows a schematic of an exemplary imaging system 700 for residual geolocation error root cause identification and/or pointing solution correction for 3-D persistent observations, according to an embodiment.

[0056] System 700 captures one or more images of scene 705 via sensor optics 710, which may comprise multiple reflective and/or transmissive lens elements. Images of scene 705, as modified by sensor optics 710, are focused onto sensor 720. More particularly, sensor optics 710 receives electromagnetic radiation (light) from scene 705 and focuses the received electromagnetic radiation (light) onto sensor 720. In one implementation, sensor optics 710 may include an objective lens, or other conventional optics, such as one or more mirrors and/or lenses. Imaging platform 105 may use high precision gimbal mounts (not shown) to achieve a desired pointing solution for sensor optics 710 and/or sensor 720.

[0057] Sensor 720 may be mounted on a moving platform, such as an airborne or space-based imaging platform 105 (shown in FIGS. 1A-5B), that is configured to collect image frames. Sensor 720 may include any two-dimensional (2-D) sensor configured to detect electromagnetic radiation (light) corresponding to the entering light of interest and generate image frames, whether still or video images. Exemplary electromagnetic radiation detectors may include complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), or other detectors having sufficient spectral response to detect electromagnetic radiation (light) of interest, for example, in the infrared (IR), visible (VIS), and/or ultraviolet (UV) spectra. In one implementation, sensor 720 may be a focal plane array (FPA) sensor.

[0058] Relative motion between imaging platform 105 and scene 705 may be determined to minimize motion, oscillation, or vibration induced frame-to-frame image changes. A variety of sources can provide input data 715 describing the relative motion of imaging platform 105 to the target and the viewing geometry of the sensor relative to the imaging platform 105. For example, imaging platform 105 may have a predetermined ground track (e.g., a deterministic path) for imaging selected terrain. Accordingly, input data 715 may comprise control data specifying the route and/or trajectory of imaging platform 105. Input data 715 may also be provided by one or more trajectory sensors (not shown), either alone or in combination with control data, to directly detect the motion of imaging platform 105 or the relative motion between imaging platform 105 and scene 705. According to various embodiments, trajectory sensors can include inertial, global positioning system (GPS), image processors, velocity (speed), acceleration, etc. They may include mechanical, electro-mechanical, piezoelectric, optical sensors, radar (ladar), or the like, which are included with the flight systems or avionics of imaging platform 105 or otherwise separately provided. Trajectory sensor(s) may be configured to provide various data, including one or more of: velocity (speed), directional heading, and angular heading, for example, of moving imaging platform 105. Data output from sensor 720 may be configured for Cartesian coordinates, polar coordinates, cylindrical or spherical coordinates, and/or other reference coordinate frames and systems. In one implementation, imaging platform 105 may implement a World Geodetic System WGS-84 oblate Earth coordinate frame model.
[0059] An image processor 730 may be configured to receive image frames from sensor 720 (and other data producing devices, such as trajectory sensors or the like) and perform image processing, as discussed herein. Image processor 730 may include hardware, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that processor 730 may, in whole or in part, be equivalently implemented in integrated circuits, as one or more computer programs having computer-executable instructions or code running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure.

[0060] In some embodiments, image processor 730 may be located directly on imaging platform 105 and/or with sensor 720. As such, the transformed imagery can be directly transmitted to users who can view the imagery without the need for any additional image processing. However, this need not be the case. Thus, in some embodiments (as shown), image processor 730 may be separate from imaging platform 105. For instance, image processor 730 may be ground-based (such as at a command center). In another instance, image processor 730 may be vehicle-based, such as, for example, in an automobile, tank, helicopter, airplane, ship, submarine, or the like. Of course, image processor 730 might also be located with users, such as within a display device 750, user terminal 755 or other portable device.

[0061] Sensor 720 and image processor 730 may communicate and/or share information and data, preferably in “real-time,” via one or more connections and/or networks between them. Sensor 720 may transmit image frames, trajectory information, and sensor viewing information to image processor 730 by any means (including, for instance, radio, microwave, or other electromagnetic radiation means, optical, electrical, wired or wireless transmissions or the like). In addition, networked communication over one or more digital networks, such as intranets and the Internet, is possible.

[0062] In some instances, memory device 725 (which may also be referred to as a cache or stack) may temporarily or permanently store image frames collected by sensor 720 for subsequent processing by image processor 730. Memory device 725 may be located, for example, with sensor 720 or alternatively with image processor 730.

[0063] FIG. 8 illustrates a flow diagram of an exemplary process 800 using system 700, according to an embodiment. Referring to both FIGS. 7 and 8, in step 805, coefficients of eigenfunctions may be computed, based on imaging platform 105 geometry and pointing information. Exemplary techniques for performing this operation are described in the ’755 patent cited above.

[0064] According to an embodiment, computing the image transformation coefficients may involve determining frame-to-frame changes for persistent video frames 728 acquired by sensor 720, for example, based on a function of platform trajectory and sensor pointing angles with respect to a fixed FOV. Inter-frame changes for a persistent video collection may be determined or estimated for image frame sets (i.e., sequences of images) as well as super-frame sets (i.e., multiple frame sets). As used herein, “inter-frame” refers to changes between image frames, also referred to as “frame-to-frame.”
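A hedged sketch of the coefficient computation of step 805: once a predicted inter-frame flow field has been derived from the platform trajectory and sensor pointing angles, "best-fit" Eigen coefficients (see paragraph [0071] below) can be obtained by a least-squares fit against the eigenmode basis sketched earlier. The fit_eigen_coefficients helper is a hypothetical illustration of such a fit, not the ’755 patent's prescribed method.

```python
import numpy as np

def fit_eigen_coefficients(predicted_flow, modes):
    """Least-squares fit of a predicted inter-frame flow field of shape
    (height, width, 2) to an eigenmode basis of shape
    (n_modes, height, width, 2), minimizing mean-square error."""
    A = modes.reshape(modes.shape[0], -1).T        # (H*W*2, n_modes)
    b = predicted_flow.reshape(-1)                 # (H*W*2,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    unmodeled = b - A @ coeffs                     # flow the basis cannot explain
    return coeffs, unmodeled.reshape(predicted_flow.shape)
```

The same machinery can be reused later for the residual fit: only the input flow field changes from the predicted scene-wide motion to the measured post-transformation residual motion.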
[0065] The image frames 728 may be collected by the sensor 720 at different times or instances. In some instances, these frames 728 may be adjacent or successive image frames, such as in the case for typical video. In others, the frames may be processed at different times but not necessarily in the order collected by the sensor 720.

[0066] Many short exposure images (e.g., 1 to 100 ms) of the scene 705 may be taken by the sensor 720. The exposures are selected to be sufficiently short that the imaging platform motion within one exposure period (or image) is expected to be relatively small. Successive frames are then manipulated or transformed to have the appearance of being viewed by a stationary viewer.

[0067] It will be appreciated that the sensor 720 need not be trained on any particular location in the scene 705. Rather, the transformations and geolocation error correction may provide a scene that appears to be taken from a non-moving platform (with the exception of actual moving objects), while accounting for geolocation errors. Truly moving objects may be more readily detected by an observer since the background is approximately stationary.

[0068] As shown, image processor 730 may include a look up table (LUT) builder 726, geometry prediction module 732, image frame transform module 734, residual error module 736, resolution enhancement module 738, LUT comparison module 742 and pointing error module 744. According to various embodiments, the processes described can be implemented with a variety of microprocessors and/or software, for example. In some implementations, one or more modules (or their functionality) may be combined or omitted. Other modules and functions are also possible. Further, image processor 730 can be implemented onboard and/or off-site of imaging platform 105 (e.g., at a ground location physically separated from imaging platform 105).

[0069] Image processor 730 may be configured to utilize planar, spherical, or oblate earth models, relief or topographic models, 3-D models of man-made objects, and/or terrain elevation maps.

[0070] During operation of step 805, geometry prediction module 732 may be configured to determine the nature and degree of change between different images collected by sensor 720 by receiving input data 715 and determining one or more transformation functions 733 which mathematically describe the frame change due to movement of imaging platform 105 and/or sensor 720 relative to a target in scene 705. In one embodiment, the transformation functions 733 may be Eigen transformations, with each eigenfunction being directly translatable into a digital adjustment of image frame data for counteracting and/or preventing the determined inter-frame changes.

[0071] Moreover, geometry prediction module 732 may receive input data 715 indicating the relative motion, trajectory of imaging platform 105 and sensor viewing geometry, which is used to output one or more model eigenfunctions to correct for image change. Geometry prediction module 732 may compute from the received input data 715 inter-frame FOV mapping to the ground for each set of image frames 728. This may include, for example, taking the difference between different image frames on a pixel-by-pixel basis. For video, these may be successive frames. Geometry prediction module 732 may select one or more image transformations to correct for the inter-frame differences (gradients) in the FOV.
For instance, the changes between the initial and subsequent FOV may be modeled by Eigen transformations describing a set of adjustments which are capable of compensating for all image changes induced by platform motion. In particular, they may comprise one or more of the eigenfunction transformations shown in FIGS. 6A-6F for scaling with Eigen transformation coefficients. Geometry prediction module 732 may then perform modeling to find “best-fit” Eigen transformation coefficients for each Eigen mode for the one or more selected Eigen transformations. The transformations may be optimized by calculating “best fits” or coefficients to minimize mean-square error (MSE) or the maximum error, for example. After calculating best fits, the modeled Eigen transformations characterizing the correction of image distortion are outputted to LUT builder 726.

[0072] Optionally, prior to determining the Eigen transformations and coefficients, geometry prediction module 732 may identify in the frames 728 information representing one or more truly moving objects, and remove the identified information (for addition back to the image frames at the end of image processing).

[0073] With reference again to FIG. 8, in step 810 LUT builder 726 computes a LUT 743 of residual errors representing sensor optics pointing error, height/altitude errors, or other conceivable errors. These residual errors comprise the residual errors not attributable to relative motion between the imaging platform 105 and scene 705. The LUT 743 may contain many sets of Eigen transformation coefficients, each associated with a different pointing error. For example, any 100 collected frames of data may include sets of Eigen transformation coefficients for each frame and for each hypothetical error. These may be computed a priori or off line, and would be specific to a given nominal platform geometry and nominal set of pointing angles (e.g., with respect to nadir). After collecting the 100 frames, the motion residuals would be measured and then, if above a threshold, the motion residuals would be fit to the eigenfunctions to compute the residual transformation coefficients. The computed residual transformation coefficients may then be compared to the LUT (e.g., using a nearest neighbor comparison, etc.) to determine the nature of the pointing errors.

[0074] In step 815, image frame transform module 734 applies the selected image Eigen transformations (shown in FIGS. 6A-6F), multiplied by the eigenfunction coefficients, to the image frames 728 so as to digitally transform the image frames 728 of scene 705 with respect to a common FOV. This results in image frames that appear to be collected by a non-moving imaging platform, in which the pixel size and orientation of pixels are the same and any motion remaining in the digitally transformed frames comprises motion residuals.

[0075] In step 820, residual error module 736 computes apparent motion residuals by determining frame differences/gradients between the digitally transformed frames output by image frame transform module 734. This technique involves measuring line of sight error using scene data in the digitally transformed frames only, i.e., no additional a priori platform motion information is required.
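Paragraph [0075] measures residual motion purely from scene data, using frame differences and gradients. One plausible, minimal reading of that step is a gradient-based (Lucas-Kanade-style) solve for a global residual shift between two digitally transformed frames, sketched below; the patent does not mandate this particular estimator.

```python
import numpy as np

def global_residual_shift(frame_a, frame_b):
    """Estimate a global (dx, dy) residual shift, in pixels, between two
    digitally transformed frames from their inter-frame difference and
    spatial gradients (a normal-equation solve of the brightness-constancy
    relation gx*dx + gy*dy = -dt)."""
    gy, gx = np.gradient(frame_a)                 # gradients along rows, cols
    dt = frame_b - frame_a                        # inter-frame difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -dt.ravel()
    shift, *_ = np.linalg.lstsq(A, b, rcond=None)
    return shift                                   # approximate (dx, dy)
```

Because the frames have already been "frozen" to a common FOV, any shift this estimator reports is attributable to the pointing/geolocation residual rather than to the known platform motion.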
[0076] In step 825, residual error module 736 fits the Eigen transformation functions for linear motion in the X-direction (FIG. 6B) and the Y-direction (FIG. 6C) to the motion residuals to estimate residual translation coefficients. Residual error module 736 is configured to determine the nature and degree of change between successive digitally transformed image frames 728 and to apply the linear transformation functions that mathematically describe the inter-frame change to create difference or gradient images. Residual error module 736 may, prior to comparing successive frames, co-register the successive digitally transformed image frames, based on the fixed, known spatial relationship between or among the co-registered residual error frames. Residual error module 736 fits the scene-wide residual motion difference or gradient images to the X-direction and Y-direction linear eigenfunctions (described above and shown in FIGS. 6B-6C) to determine the appropriate one or more transformation(s) and a corresponding optimal set of translation residuals Eigen coefficients for compensating for geolocation errors. Residual error transform modeling may be performed to find “best-fit” residuals transformation coefficients for each eigenfunction for the one or more translation eigenfunction transformations. For example, the transformations may be optimized by calculating “best fit” residuals transformation coefficients that minimize mean-square error (MSE) or the maximum error. In step 830, residual error module 736 may determine whether the residuals are greater than a selected threshold. The residuals have a size measured in scene pixels or ground sample distance. If any of the residuals are larger than a user-selected threshold, the operations continue. If the residuals are smaller than the threshold, meaning that the linear translation transformations were sufficient to compensate for the residual motion, the translation residuals transformation coefficients and linear transformations are passed to resolution enhancement module 738. If, however, the residuals are greater than the selected threshold, then in step 830 one or more additional transformations, e.g., skew (FIG. 6A), rotation (FIG. 6D), scale (FIG. 6E), and/or anamorphic stretch (FIG. 6F) (others may be possible), are fit to the differences/gradients associated with the digitally transformed image frames 728. After calculating best fits, the modeled eigenfunction transformations using all six eigenfunctions and their computed residuals transformation coefficients may be output to resolution enhancement module 738.

[0077] The inventors have determined that the application of the two linear eigenfunctions (X-motion and Y-motion) may prove sufficient for characterizing the vast majority of pointing/geolocation motion (e.g., caused by gimbal pointing error and/or height error) still present in the post-digitally-transformed frame images 728. The “expected” residual errors typically manifest as easily detected linear translation errors, though other transformation motions may be fit as needed. For example, FIGS. 9A-9C provide a simulation experiment illustrating this principle. FIG. 9A represents the optical flow for an intended pointing solution. FIG. 9B depicts the optical flow actually observed after purposefully introducing a gimbal pointing error comprising a nadir angle error of 0.01 degrees (174 µrad) from a nominal nadir of 30 degrees (azimuth 0°). FIG. 9C represents the residual optical flow, which is almost entirely a linear translation transformation for many of the errors.
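Steps 825 and 830 can be read as a staged least-squares fit: try the two linear translation eigenmodes first, and only if what remains still exceeds a user-selected threshold, expand the fit to all six eigenmodes. The sketch below assumes the eigenmode ordering of the earlier basis sketch, and the 0.25-pixel default threshold is an illustrative assumption, not a value taken from the patent.

```python
import numpy as np

def fit_residuals(residual_flow, modes, threshold_px=0.25):
    """Fit the measured residual flow (H, W, 2) first to the two linear
    translation eigenmodes (modes[0], modes[1]); if the RMS of the
    unexplained motion still exceeds `threshold_px`, refit with all six
    eigenmodes."""
    def lstsq_fit(basis):
        A = basis.reshape(basis.shape[0], -1).T
        b = residual_flow.reshape(-1)
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        rms = np.sqrt(np.mean((b - A @ coeffs) ** 2))
        return coeffs, rms

    coeffs, rms = lstsq_fit(modes[:2])        # X-motion and Y-motion only
    if rms > threshold_px:
        coeffs, rms = lstsq_fit(modes)        # expand to all six eigenmodes
    return coeffs, rms
```

This staging mirrors the observation in paragraph [0077] that the two translation eigenfunctions usually capture the bulk of the pointing/geolocation residual, so the richer fit is needed only occasionally.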
[0078] Those of skill in the art of image processing will readily appreciate that the estimation of the residual transformation(s) performed by residual error module 736, including, for example, the two to six identified eigenfunction transformations (and/or others) utilized and the associated residuals transformation coefficients for estimating and correcting the residual geolocation error using scene-wide changes as described herein, could be performed with alternative techniques, such as by using sub-frame groups of pixels; however, such approaches would likely be computationally more burdensome.

[0079] Referring again to FIGS. 7 and 8, in step 835, resolution enhancement module 738 applies the two or more selected eigenfunctions and associated residuals transformation coefficients to the digitally transformed image frames to remove the estimated residual motion from the digitally transformed frames. Resolution enhancement module 738 is configured to enhance the resolution of transformed image frames, for example, by interpolating and transforming imagery to remove residual motion of successive frames, increasing sampling of aggregate images due to naturally occurring movement of pixels as mapped to the ground. This may be further aided by deterministic frame shifting.

[0080] In one implementation, a resolution enhancement process may be implemented by resolution enhancement module 738. Images of improved resolution may, for example, be generated by interpolating and aggregating images according to known algorithms, such as frequency or space domain algorithms. The images are not highly oversampled per se, but a sequence of images that are ultimately aggregated becomes highly oversampled by virtue of characterizing the naturally occurring changes in the sensor FOV and then creating a tailored, non-uniformly spaced interpolation grid based on these naturally occurring changes. One benefit of super-resolution processing is improved edge contrast. In some instances, the enhanced images may enable a high “rating” according to the National Imagery Interpretability Rating Scale (NIIRS). Additional sub-pixel dithering of the field of view may be employed to further improve the sampling of the scene.
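The aggregation described in paragraph [0080] can be pictured with the crude sketch below, which accumulates co-registered frames onto a finer grid using each frame's known sub-pixel shift. A nearest-bin accumulation stands in here for the tailored, non-uniformly spaced interpolation grid, so this is an intuition aid rather than the patent's algorithm; the function name and the 4x upsample default are hypothetical.

```python
import numpy as np

def aggregate_on_fine_grid(frames, subpixel_shifts, upsample=4):
    """Accumulate a sequence of co-registered 2-D frames onto a grid that
    is `upsample` times finer, placing each frame's samples according to
    its known sub-pixel (dy, dx) shift, then normalizing by the number of
    samples that landed in each fine-grid bin."""
    h, w = frames[0].shape
    acc = np.zeros((h * upsample, w * upsample))
    hits = np.zeros_like(acc)
    rows, cols = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, subpixel_shifts):
        r = np.clip(np.round((rows + dy) * upsample).astype(int), 0, h * upsample - 1)
        c = np.clip(np.round((cols + dx) * upsample).astype(int), 0, w * upsample - 1)
        np.add.at(acc, (r, c), frame)
        np.add.at(hits, (r, c), 1)
    return acc / np.maximum(hits, 1)
```

The fine grid is only usefully filled because the naturally occurring FOV drift scatters the samples of successive frames across different sub-pixel positions, which is the effect paragraph [0080] exploits.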
[0081] One or more users may interface with system 700. Users typically will be located remotely from imaging platform 105 and/or image processor 730, for instance. Of course, users may also be located on imaging platform 105, and/or at a location near image processor 730. In one or more implementations, users can communicate with, and/or share information and data with, image processor 730 by any means (including, for instance, radio, microwave, or other electromagnetic radiation means, optical, electrical, wired, and wireless transmissions or the like). In addition, networked communication over one or more digital networks, such as intranets and the Internet, is possible.

[0082] User display 750 may be configured to enable one or more users to view motion and geolocation error corrected image frames (e.g., stills or video) output from image processor 730. User display 750 may include, for instance, any display device configured for displaying video and/or image frames. Televisions, computer monitors, laptops, tablet computing devices, smart phones, personal digital assistants (PDAs) and/or other displays and computing devices may be used. User terminal 755 may be configured to enable users to interact with image processor 730. In some implementations, users may be presented with one or more data acquisition planning tools.

[0083] Video sequences of the transformed imagery may be displayed, in which static, moving, and/or 3-D objects may be identified (e.g., highlighted, color-coded, annotated, etc.) in the displayed frame(s) of the scene. As such, human and machine interpretation is greatly facilitated. No additional digital image processing may be required once the images are transformed, in many instances.

[0084] In step 845, LUT comparison module 742 compares the computed residual transformation coefficients to the residual transformation coefficients stored in residual motion LUT 743. Trends in residual motion revealed by this comparison permit estimation of pointing (geolocation) errors in the present pointing solution. FIGS. 10A and 10B illustrate experimental simulated testing results using the satellite imaging platform and Earth model shown in FIG. 3C, wherein the satellite is initially positioned along the x-axis and has an initial velocity V in the positive y-axis direction. The FOV initially projects onto the Earth pointed northward with a 5° nadir angle. FIG. 10A illustrates the time evolution (trend) of residual motion error (e.g., similar data possibly being stored in the LUT 743 for comparison purposes) with an induced 50 µrad pointing error in the nadir direction. The graph demonstrates that virtually all of the induced error may be measured using the two linear translation (X-motion, Y-motion) eigenimages. FIG. 10B illustrates the trend over time of residual motion errors with an induced 10 meter surface altitude (height) error. The graph illustrates that this error may be virtually entirely measured in one translation (Y-motion) eigenimage.

[0085] In step 850, pointing error module 744 may optionally interpret such trending and curve-fitting of the residual transformation coefficients representing residual motion errors computed for a current frame against previously stored residual motion data, so as to determine which of several possible root causes of the residual error is most likely responsible. LUT 743 may comprise sets of points and/or curve-fits for a given number of frames. FIG. 11 is a plot of the simulation results for the difference in eigenfunction coefficient amplitudes obtained between the ideal case and the induced 10 meter surface height knowledge error case. The results demonstrate how virtually all of the residual motion error may be measured with the two translation eigenfunctions. Pointing error module 744 may provide feedback regarding whether sensor 720 is staring perfectly at a point on the Earth. The computed pointing (geolocation) errors may be output to update pointing calibration and/or pointing solution information at sensor optics 710. This enables adjustments in the pointing solution for future scene imaging with perfect persistent observations (“staring”) and/or image frame freezing, such as is useful in motion detection. Pointing error module 744 may also output an indication of the root cause of the pointing error, such as biases in gimbal angle and/or trajectory error (especially height). The computed residuals transformation coefficients may also be used to adjust the one or more eigenfunction transformations that compensate for the apparent motion in the scene induced by relative motion between the scene and the movable imaging platform.
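Steps 845 and 850 compare the measured residual transformation coefficients against LUT 743 to suggest a root cause. A minimal nearest-neighbor version of that comparison is sketched below; the LUT keys, entries, and two-coefficient vectors are hypothetical placeholders, and an operational table would hold per-frame coefficient sets for each hypothesized error, as paragraph [0073] describes.

```python
import numpy as np

def classify_pointing_error(measured_coeffs, lut):
    """Nearest-neighbor comparison of measured residual transformation
    coefficients against a LUT of pre-computed coefficient sets, each
    labeled with a hypothesized root cause."""
    best_label, best_dist = None, np.inf
    for label, stored_coeffs in lut.items():
        dist = np.linalg.norm(measured_coeffs - stored_coeffs)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

# Hypothetical LUT entries (X-motion, Y-motion coefficients; arbitrary units).
lut_743 = {
    "nadir-direction gimbal pointing bias": np.array([0.8, 0.1]),
    "scene surface height knowledge error": np.array([0.0, 0.6]),
    "no error":                             np.array([0.0, 0.0]),
}
cause, distance = classify_pointing_error(np.array([0.75, 0.12]), lut_743)
```

The distance returned alongside the label gives a rough sense of how well the nearest stored hypothesis actually explains the measured coefficients, which is useful before feeding a correction back to the pointing solution.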
[0086] The scene-wide transformations employed to enable “freezing” of imagery may be used for enhanced motion detection, and frame stacking may be used to enhance SNR. In step 840, resolution enhancement module 738 may optionally add or sum a plurality of successive compensated digitally transformed frames to obtain higher SNR scene imagery and/or to detect truly moving objects in the imagery. If “true mover” information had previously been identified and removed from the captured image frames 728, such information may be added back into the compensated digitally transformed frames. In some instances, the enhanced images may enable a high “rating” according to the National Imagery Interpretability Rating Scale (NIIRS).

[0087] FIGS. 12A through 12C illustrate simulated frames of video data of an exemplary scene as if rendered by a sensor from an airborne imaging platform. FIG. 12A shows a simulated initial frame with no motion or geolocation error correction applied. The scene includes a plurality of vehicles, including pick-up trucks and mobile (e.g., SCUD) missile launchers. FIG. 12B shows a simulated frame of video data of the same scene as shown in FIG. 12A, at a second instance. The images are taken shortly apart, and thus have a different angle separation. The image depicted in FIG. 12B has changed slightly with regard to scale, rotation, and/or viewing angle. Moreover, the image appears slightly more stretched in one dimension (horizontal) than the other. FIG. 12C shows the residual motion error after platform motion induced error has been removed from the frame. The plot maps the movement of the pixels to the ground. The length and direction of the vector arrows show the movement of pixels from one frame to another.

[0088] FIGS. 13A through 13E are results from additional simulation testing undertaken to explore the dependence of residual motion error on the magnitude of induced pointing bias. The simulation assumed conditions of a satellite imaging platform at 400 km altitude, a sensor looking down at a nadir angle of 20°, the sensor looking broadside perpendicular to the velocity vector V, a collection time of 10 s, an FOV of 1°, a sensor controller attempting to keep the center of the FOV pointed at the same location on the ground for the duration of the collection time, perfect knowledge of the satellite position and planar Earth surface altitude, gimbal azimuth singularity at the horizon, and two sensor bias cases, 0 and 50 µrad in the nadir direction. FIG. 13A shows the motion of image points in a focal plane frame of reference. With perfect knowledge of the satellite ephemeris, surface altitude and sensor pointing, the error at the center of the FOV is zero. FIG. 13B shows the image motion with a pointing bias of 50 µrad. A consequence of the pointing bias is that even the center point of the FOV exhibits a small amount of motion on the focal plane. The true ground intersection point is not stationary, as expected, due to the pointing bias. FIG. 13C shows the difference in image motion between the ideal and actual cases. The image motion difference between the expected and actual motion over the 10 second collection time is about 24 µrad. FIG. 13D shows the residual motion once the common platform motion displayed on the chart of FIG. 13C is removed. FIG. 13E shows the dependence of the residual motion error on the magnitude of the induced nadir angle pointing bias. Similar curves could be used to determine the size of pointing bias in simulation image processing.
[0089] Although the above disclosure discusses what is currently considered to be a variety of useful examples, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed examples, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims.

[0090] One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. Scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

[0091] No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
IL264714A 2016-08-22 2019-02-07 Video geolocation IL264714B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/243,216 US9794483B1 (en) 2016-08-22 2016-08-22 Video geolocation
PCT/US2017/022136 WO2018038768A1 (en) 2016-08-22 2017-03-13 Video geolocation

Publications (2)

Publication Number Publication Date
IL264714A true IL264714A (en) 2019-03-31
IL264714B IL264714B (en) 2019-09-26

Family

ID=58455660

Family Applications (1)

Application Number Title Priority Date Filing Date
IL264714A IL264714B (en) 2016-08-22 2019-02-07 Video geolocation

Country Status (4)

Country Link
US (1) US9794483B1 (en)
EP (1) EP3501004A1 (en)
IL (1) IL264714B (en)
WO (1) WO2018038768A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10235817B2 (en) * 2015-09-01 2019-03-19 Ford Global Technologies, Llc Motion compensation for on-board vehicle sensors
KR101953626B1 (en) * 2017-06-29 2019-03-06 서강대학교산학협력단 Method of tracking an object based on multiple histograms and system using the method
CN109344878B (en) * 2018-09-06 2021-03-30 北京航空航天大学 Eagle brain-like feature integration small target recognition method based on ResNet
US11138696B2 (en) * 2019-09-27 2021-10-05 Raytheon Company Geolocation improvement of image rational functions via a fit residual correction
US11019265B1 (en) * 2020-11-04 2021-05-25 Bae Systems Information And Electronic Systems Integration Inc. Optimized motion compensation via fast steering mirror and roll axis gimbal
CN115623242A (en) * 2022-08-30 2023-01-17 华为技术有限公司 Video processing method and related equipment thereof

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6422508B1 (en) 2000-04-05 2002-07-23 Galileo Group, Inc. System for robotic control of imaging data having a steerable gimbal mounted spectral sensor and methods
IL165190A (en) 2004-11-14 2012-05-31 Elbit Systems Ltd System and method for stabilizing an image
US7548659B2 (en) 2005-05-13 2009-06-16 Microsoft Corporation Video enhancement
US7862188B2 (en) 2005-07-01 2011-01-04 Flir Systems, Inc. Image detection improvement via compensatory high frequency motions of an undedicated mirror
JP4695972B2 (en) * 2005-12-14 2011-06-08 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing method
US8073196B2 (en) * 2006-10-16 2011-12-06 University Of Southern California Detection and tracking of moving objects from a moving platform in presence of strong parallax
US8400619B1 (en) * 2008-08-22 2013-03-19 Intelligent Automation, Inc. Systems and methods for automatic target tracking and beam steering
WO2010116366A1 (en) * 2009-04-07 2010-10-14 Nextvision Stabilized Systems Ltd Video motion compensation and stabilization gimbaled imaging system
US8471915B2 (en) 2009-04-16 2013-06-25 Raytheon Company Self-correcting adaptive long-stare electro-optical system
US9294755B2 (en) * 2010-10-20 2016-03-22 Raytheon Company Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations
US8923401B2 (en) * 2011-05-31 2014-12-30 Raytheon Company Hybrid motion image compression
US9230333B2 (en) * 2012-02-22 2016-01-05 Raytheon Company Method and apparatus for image processing
US8908090B2 (en) 2013-03-15 2014-12-09 Freefly Systems, Inc. Method for enabling manual adjustment of a pointing direction of an actively stabilized camera

Also Published As

Publication number Publication date
EP3501004A1 (en) 2019-06-26
WO2018038768A1 (en) 2018-03-01
IL264714B (en) 2019-09-26
US9794483B1 (en) 2017-10-17

Similar Documents

Publication Publication Date Title
US9794483B1 (en) Video geolocation
US8964047B2 (en) Self-correcting adaptive long-stare electro-optical system
US9294755B2 (en) Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations
EP2791868B1 (en) System and method for processing multi-camera array images
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
US8111294B2 (en) Hybrid image stabilization method and apparatus
US7071970B2 (en) Video augmented orientation sensor
CN102741706B (en) The geographical method with reference to image-region
US7313252B2 (en) Method and system for improving video metadata through the use of frame-to-frame correspondences
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
KR101282718B1 (en) Absolute misalignment calibration method between attitude sensors and linear array image sensor
JPH11252440A (en) Method and device for ranging image and fixing camera to target point
CN104501779A (en) High-accuracy target positioning method of unmanned plane on basis of multi-station measurement
CN107560603B (en) Unmanned aerial vehicle oblique photography measurement system and measurement method
US10341565B2 (en) Self correcting adaptive low light optical payload
Sturzl A lightweight single-camera polarization compass with covariance estimation
Stow et al. Evaluation of geometric elements of repeat station imaging and registration
Zhou et al. Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera
US8559757B1 (en) Photogrammetric method and system for stitching and stabilizing camera images
US11415990B2 (en) Optical object tracking on focal plane with dynamic focal length
Zhang et al. Precise alignment method of time-delayed integration charge-coupled device charge shifting direction in aerial panoramic camera
Cai et al. Distortion measurement and geolocation error correction for high altitude oblique imaging using airborne cameras
Ma et al. Variable motion model for lidar motion distortion correction
Vasilyuk Calculation of motion blur trajectories in a digital image as a special problem of inertial navigation
Ringaby et al. Co-aligning aerial hyperspectral push-broom strips for change detection

Legal Events

Date Code Title Description
FF Patent granted
KB Patent renewed