CN110211054A - Distortion-free image generation method for a spaceborne push-broom optical sensor - Google Patents


Info

Publication number
CN110211054A
CN110211054A (application CN201910350812.1A); granted as CN110211054B
Authority
CN
China
Prior art keywords
camera
image
coordinate system
virtual
virtual camera
Prior art date
Legal status: Granted (assumed; Google has not performed a legal analysis)
Application number
CN201910350812.1A
Other languages
Chinese (zh)
Other versions
CN110211054B (en)
Inventor
孙向东
张过
蒋永华
Current Assignee: Individual
Original Assignee: Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910350812.1A
Publication of CN110211054A
Application granted
Publication of CN110211054B
Status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a distortion-free image generation method for a spaceborne push-broom optical sensor, belonging to the technical field of satellite image production. The method builds a virtual-camera geometric positioning model from the satellite's solved orbit and attitude states, and finally generates a distortion-free image by virtual re-imaging. Because the true body coordinate system cannot be known exactly, the invention defines an equivalent body coordinate system by a fixed rule, then resolves attitude and interior orientation elements for every scene in the same way while guaranteeing that the final interior orientation elements are stable, so that attitude and interior orientation elements can be solved. The virtual-camera geometric positioning model is then constructed from the orbit and attitude states, and virtual resampling produces the distortion-free image. By establishing a ground control point library and using semi-automatic point measurement, control points are acquired quickly, saving time and improving efficiency.

Description

Distortion-free image generation method for a spaceborne push-broom optical sensor
Technical field
The present invention relates to a distortion-free image generation method for a spaceborne push-broom optical sensor, and belongs to the technical field of satellite image production.
Background art
Even after in-orbit geometric calibration and high-frequency error elimination, lens distortion and residual high-frequency error still affect the image; the internal distortion of the image seriously hampers subsequent satellite image processing and reduces satellite product quality.
Prior research shows that F. D. Lussy et al. fitted satellite high-frequency jitter with a superposition of sine/cosine functions, decomposing and solving waveform functions of different frequencies from the registration errors computed from parallel observations. May Stéphane and Latry Christophe, on a similar principle, used parallel observation of the SPOT5 panchromatic and multispectral CCD arrays to eliminate attitude jitter and obtained a DTM of good accuracy under a small base-to-height ratio. S. Mattson, A. Boyd et al. likewise removed HiRISE attitude jitter using parallel observation, finally obtaining an almost distortion-free image and improving DEM accuracy from >5 m to <0.5 m. These methods, however, address only platform high-frequency jitter.
Summary of the invention
The technical problem solved by the present invention is to eliminate the complex internal deformation of the image caused by high-frequency error, camera distortion, and the like, realize distortion-free imaging of a spaceborne push-broom optical sensor, and provide a distortion-free image generation method for a spaceborne push-broom optical sensor.
The purpose of the present invention is achieved through the following technical solutions:
A distortion-free image generation method for a spaceborne push-broom optical sensor builds a virtual-camera geometric positioning model from the satellite's solved orbit and attitude states, and finally generates the distortion-free image by virtual re-imaging. After defining the body coordinate system, each scene's attitude and interior orientation elements are resolved in the same way, guaranteeing stable final interior orientation elements, so that attitude and interior orientation elements can be solved. The virtual-camera geometric positioning model is then constructed from the orbit and attitude states, and virtual resampling produces the distortion-free image.
A distortion-free image generation method for a spaceborne push-broom optical sensor includes the following steps:
Step 1: by solving the satellite orbit, recover the position and attitude of the photography ray, i.e. realize positioning and orientation of the ray.
Method one: calculate, through the inverse form of the rational function model, the geodetic coordinates of the two intersections of the ground ray with the highest-elevation plane and the lowest-elevation plane, as shown below:
Lat = P5(x, y, H) / P6(x, y, H)
Lon = P7(x, y, H) / P8(x, y, H)   (1)
where P5, P6, P7, P8 are the polynomials of the inverse solution for the geodetic coordinates; (x, y) is the image coordinate of the intersection; H is the elevation; Lat and Lon are the latitude and longitude of the intersection in the geodetic coordinate system.
The two ground intersections can be converted from geodetic coordinates to the Earth-centred (geocentric) rectangular coordinate system. Once the object-space coordinates of the two intersections are determined, the direction of the photography ray in the geocentric rectangular coordinate system is simply the difference of the two positions.
First, let the photography ray OA intersect the highest- and lowest-elevation planes at two points with geodetic coordinates [Lat1, Lon1, H1] and [Lat2, Lon2, H2]; both intersections share the image coordinate (x, y). [Lat1, Lon1] and [Lat2, Lon2] are solved inversely according to formula (1). The two intersection coordinates are then converted from geodetic coordinates to geocentric rectangular coordinates, [X1, Y1, Z1] and [X2, Y2, Z2].
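As an illustration of Method One, the sketch below intersects one image ray with the highest- and lowest-elevation planes through a toy inverse RFM and converts both points to geocentric rectangular (ECEF) coordinates; the ray direction is then the normalized difference of the two points. The polynomials standing in for P5-P8 and all numeric values are hypothetical, since the patent does not list its RPC coefficients.

```python
import numpy as np

WGS84_A = 6378137.0          # semi-major axis [m]
WGS84_E2 = 6.69437999014e-3  # first eccentricity squared

def rfm_inverse(x, y, H):
    """Toy inverse RFM: Lat = P5/P6, Lon = P7/P8 evaluated at (x, y, H).
    Real RPCs are 20-term cubic ratios; a linear stand-in is used here."""
    lat = 0.001 * x + 30.0 + 1e-6 * H    # placeholder for P5/P6
    lon = 0.001 * y + 110.0 + 1e-6 * H   # placeholder for P7/P8
    return lat, lon

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Standard WGS-84 geodetic -> ECEF conversion."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)
    X = (N + h) * np.cos(lat) * np.cos(lon)
    Y = (N + h) * np.cos(lat) * np.sin(lon)
    Z = (N * (1.0 - WGS84_E2) + h) * np.sin(lat)
    return np.array([X, Y, Z])

def ray_direction(x, y, h_max, h_min):
    """Photography-ray direction in ECEF: difference of the intersections
    with the highest and lowest elevation planes, normalized."""
    p1 = geodetic_to_ecef(*rfm_inverse(x, y, h_max), h_max)
    p2 = geodetic_to_ecef(*rfm_inverse(x, y, h_min), h_min)
    d = p1 - p2
    return d / np.linalg.norm(d)

d = ray_direction(x=1024.0, y=512.0, h_max=4000.0, h_min=0.0)
```

The same two ECEF points also serve as [X1, Y1, Z1] and [X2, Y2, Z2] in the attitude solution of step 2.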
Method two: the geocentric rectangular coordinates can also be obtained from the rigorous model, shown below:
[XS, YS, ZS]T = [X(t), Y(t), Z(t)]T + m · R(t) · [tan(ψx), tan(ψy), 1]T   (2)
where m is a scale coefficient; ψx and ψy are the pointing angles obtained by decomposing the detector's imaging ray along-track and across-track; R(t) is the rotation matrix into the geocentric coordinate system; [X(t), Y(t), Z(t)] is the position of the perspective centre in the geocentric rectangular coordinate system; and [XS, YS, ZS] is the object point.
Since the intersections with the highest- and lowest-elevation planes share the same image point, the two intersections correspond to the same [X(t), Y(t), Z(t)], R(t) and [tan(ψx), tan(ψy)], while [XS, YS, ZS] and m differ. The rigorous imaging model of the two intersections is therefore expressed as:
[Xi, Yi, Zi]T = [X(t), Y(t), Z(t)]T + mi · R(t) · [tan(ψx), tan(ψy), 1]T,  i = 1, 2   (3)
where [Xi, Yi, Zi] are the geocentric rectangular coordinates of intersection i. Substituting the two intersections into formula (3) and taking the quotient yields:
(X1 − X(t)) / (X2 − X(t)) = (Y1 − Y(t)) / (Y2 − Y(t)) = (Z1 − Z(t)) / (Z2 − Z(t))   (4)
In the above formula, let k = m1/m2. Formula (4) can then be rewritten as:
X1 − X(t) = k (X2 − X(t)),  Y1 − Y(t) = k (Y2 − Y(t)),  Z1 − Z(t) = k (Z2 − Z(t))   (5)
Eliminating the unknown k from formula (5) gives:
(X1 − X(t))(Y2 − Y(t)) − (X2 − X(t))(Y1 − Y(t)) = 0
(X1 − X(t))(Z2 − Z(t)) − (X2 − X(t))(Z1 − Z(t)) = 0   (6)
Formula (6) expands to the linear form (the quadratic cross terms cancel):
(Y2 − Y1) X(t) + (X1 − X2) Y(t) = X1Y2 − X2Y1
(Z2 − Z1) X(t) + (X1 − X2) Z(t) = X1Z2 − X2Z1   (7)
There are three unknown numbers in formula (7), but a light is only capable of obtaining two independent equations, therefore in order to solve Position [X (t), Y (t), Z (t)], also needs another light repetition methods two, then solves formula (7) jointly.
Step 2: attitude solution. The attitude solution determines the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system.
The coupling between attitude and interior orientation elements is resolved using the concept of an equivalent body coordinate system.
The equivalent body coordinate system is defined as follows: the X axis points in the flight direction; the Z axis points toward the ground, along the resultant of the unit vectors of the two photography rays. The directions of the two rays are obtained from step 1, i.e. [X2 − X1, Y2 − Y1, Z2 − Z1]; the direction of the Z axis in the geocentric rectangular coordinate system is shown below:
Z = (uOA + uOB) / |uOA + uOB|   (8)
where uOA and uOB are the unit vectors of the two photography rays OA and OB.
The X axis of the equivalent body coordinate system is perpendicular to plane OAB; its direction in the geocentric rectangular coordinate system is:
X = (uOA × uOB) / |uOA × uOB|   (9)
The Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule:
Y = Z × X   (10)
After the directions of the three axes of the equivalent body coordinate system in the Earth-fixed geocentric frame are determined, R(t) is constructed as:
R(t) = [X Y Z]   (11)
with the three axis vectors as its columns.
This rotation matrix can be converted into quaternion or Euler-angle form to describe the attitude state of the satellite, completing the attitude solution.
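The construction of the equivalent body frame and of R(t) can be sketched as follows; the two ray unit vectors are synthetic, and the matrix-to-quaternion conversion uses the standard trace-based branch.

```python
import numpy as np

def equivalent_body_frame(u_oa, u_ob):
    """Build R(t): equivalent-body axes (columns) expressed in ECEF."""
    z = u_oa + u_ob                    # Z: resultant of the two unit rays
    z /= np.linalg.norm(z)
    x = np.cross(u_oa, u_ob)           # X: normal to plane OAB (flight dir.)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                 # Y: completes the right-handed frame
    return np.column_stack([x, y, z])

def rot_to_quat(R):
    """Rotation matrix -> unit quaternion (w, x, y, z), w > 0 branch."""
    w = 0.5 * np.sqrt(max(0.0, 1.0 + np.trace(R)))
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

# Synthetic ray unit vectors standing in for OA and OB.
u_oa = np.array([0.05, 0.1, -1.0]); u_oa /= np.linalg.norm(u_oa)
u_ob = np.array([-0.05, 0.1, -1.0]); u_ob /= np.linalg.norm(u_ob)
R = equivalent_body_frame(u_oa, u_ob)
q = rot_to_quat(R)
```

Because X is normal to both rays and Y = Z × X, the columns are mutually orthogonal unit vectors with determinant +1, i.e. a proper rotation.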
Step 3: solve the interior orientation elements.
Using the ray directions obtained in step 1 and the attitude obtained in step 2, the direction of each photography ray is transformed from the geocentric rectangular coordinate system into the equivalent body coordinate system. The solution is carried out ray by ray.
The direction of a sensor detector in the body coordinate system is expressed as:
[X, Y, Z]bodyT = R(t)T · [X, Y, Z]T   (12)
where [X, Y, Z]body is the direction of the photography ray in the body coordinate system. Following the definition of the detector pointing angles, the interior orientation elements of the current detector are expressed independently as:
tan(ψx) = Xbody / Zbody,  tan(ψy) = Ybody / Zbody   (13)
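A minimal sketch of step 3, under the assumption that the pointing angles follow the common tan(ψ) = component / boresight-component convention (the patent's exact formula (13) does not survive in this text); the identity attitude used for the check is hypothetical.

```python
import numpy as np

def interior_orientation(ray_ecef, R_body_to_ecef):
    """Rotate a ray direction from ECEF into the (equivalent) body frame
    and express it as the two detector pointing angles psi_x, psi_y."""
    v = R_body_to_ecef.T @ (ray_ecef / np.linalg.norm(ray_ecef))
    psi_x = np.arctan2(v[0], v[2])   # along-track pointing angle
    psi_y = np.arctan2(v[1], v[2])   # across-track pointing angle
    return psi_x, psi_y

R = np.eye(3)                        # identity attitude, for the check only
psi_x, psi_y = interior_orientation(np.array([0.01, 0.02, 1.0]), R)
```

With an identity attitude, tan(psi_x) and tan(psi_y) simply recover the ratios 0.01 and 0.02 of the input direction, which makes the convention easy to verify.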
Step 4: construct the virtual-camera interior orientation model.
Define a virtual camera whose platform shoots the same area synchronously with the real satellite from a nearly identical orbital position and attitude. This camera is free of lens distortion, its linear array is an ideal straight line, and it push-scans steadily with a constant integration time; the image acquired by the virtual camera is regarded as distortion-free. By establishing the imaging-geometry relationship between the virtual camera and the real camera and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated.
Since the virtual-camera platform shoots synchronously with the real satellite on a nearly identical orbit and attitude, and runs smoothly without being affected by attitude jitter, the discrete orbit and attitude data solved for the real satellite are fitted with polynomials, and the fitted polynomials serve as the orbit-attitude model of the virtual-camera platform.
As for the line time, the virtual camera exposes the ground with a constant integration time. If the virtual camera and the real camera start imaging simultaneously at time t0, then for virtual-camera image line l the imaging time t is:
t = t0 + τ·l   (14)
where τ, the virtual-camera integration time, is the average integration time of the real camera.
The image obtained by a multi-chip CCD array is discontinuous; stitching must remove the overlapping pixels between adjacent CCDs and the along-track misalignment to obtain a continuous image. Four common positional relationships between the virtual CCD array and the real CCD array are used, respectively, for a panchromatic single camera, field-stitched panchromatic dual cameras, multispectral band registration, and single-satellite dual-camera stitching. The virtual camera contains only a single, ideally straight CCD, so the image it acquires is continuous. To keep the resolution of the virtual image close to that of the real image, the virtual camera is given the same principal distance as the real camera. Meanwhile, to reduce the direction difference between the real and virtual imaging rays, the camera coordinates (xc, yc) of each real detector are taken as observations and a straight line xc = a·yc + b is fitted in the camera coordinate system as the virtual CCD, serving as the virtual-camera interior orientation model.
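The two models of step 4, the constant line time of formula (14) and the straight-line virtual CCD xc = a·yc + b, can be sketched with synthetic detector coordinates standing in for a calibrated, slightly arched multi-chip array:

```python
import numpy as np

def line_time(t0, tau, l):
    """Formula (14): imaging time of virtual-camera line l."""
    return t0 + tau * l

def fit_virtual_ccd(xc, yc):
    """Least-squares fit of the line x_c = a*y_c + b through the real
    detectors' camera coordinates."""
    A = np.column_stack([yc, np.ones_like(yc)])
    (a, b), *_ = np.linalg.lstsq(A, xc, rcond=None)
    return a, b

# Synthetic detectors of a distorted multi-chip CCD, scattered about a line
# (the small sine term mimics the "arch" of a real calibrated array).
yc = np.linspace(-0.02, 0.02, 9)                 # focal-plane metres
xc = 0.001 * yc + 0.03 + 1e-6 * np.sin(50 * yc)  # tiny arch residual
a, b = fit_virtual_ccd(xc, yc)
t = line_time(t0=100.0, tau=0.0003, l=1000)
```

The fitted (a, b) define the virtual CCD; keeping it close to the real detectors minimizes the real-versus-virtual ray direction differences discussed above.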
Step 5: realize virtual re-imaging. On the basis of the imaging-geometry relationship established between the virtual camera and the real camera, the image acquired by the real camera is resampled to obtain the virtual, distortion-free image.
Step 5.1: using the virtual-camera interior orientation model of step 4, construct the virtual-camera geometric positioning model;
Step 5.2: using the model built in step 5.1, compute for each pixel (x, y) of the virtual image its corresponding ground coordinate (X, Y, Z);
Step 5.3: using the real-camera geometric positioning model, back-project (X, Y, Z) into the real image coordinate (x', y');
Step 5.4: resample the grey value at (x', y') and assign it to pixel (x, y);
Step 5.5: traverse all pixels of the virtual image, repeating steps 5.2-5.4, to generate the whole scene, i.e. the distortion-free image;
Step 5.6: based on the virtual-camera geometric positioning model, generate the RPC file of the distortion-free image, completing its production.
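Steps 5.1-5.6 can be sketched as a resampling loop; the two positioning models below are toy affine stand-ins (the real ones would be the rigorous/RPC models of the previous steps, with the ground point taken from SRTM-DEM), and bilinear interpolation plays the role of the grey-value resampling of step 5.4.

```python
import numpy as np

def virtual_to_ground(x, y):
    """Placeholder virtual-camera positioning model (step 5.2)."""
    return np.array([x * 10.0, y * 10.0, 0.0])

def ground_to_real(P):
    """Placeholder real-camera back-projection (step 5.3); the constant
    sub-pixel offset stands in for the real image's internal distortion."""
    return P[0] / 10.0 + 0.3, P[1] / 10.0 + 0.2

def bilinear(img, x, y):
    """Bilinear grey-value resampling (step 5.4); (x, y) must be in range."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def reimage(real_img, h, w):
    """Traverse all virtual pixels (step 5.5) and fill the virtual image."""
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            P = virtual_to_ground(float(x), float(y))
            xr, yr = ground_to_real(P)
            out[y, x] = bilinear(real_img, xr, yr)
    return out

real_img = np.arange(100.0).reshape(10, 10)   # toy 10x10 "real" image
virtual_img = reimage(real_img, 8, 8)
```

For the linear toy image (value 10·row + col), the resampled pixel (x, y) should read 10·(y + 0.2) + (x + 0.3), which the bilinear kernel reproduces exactly.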
The resampling of step 5.4 is carried out on the basis of SRTM-DEM data.
Since the imaging-ray angles of the virtual camera and the real camera differ, the resampling process may introduce a relief displacement because of object-point height error. The same ground point (X, Y, Z), observed by the real satellite and the virtual satellite, yields image coordinates (x, y) and (x', y') respectively. With θ0 and θ1 the imaging angles of the real and virtual rays, Δh the object-point height error, and Δx the relief displacement introduced into re-imaging by Δh, the geometric relationship gives:
Δx = Δh · (tan θ0 − tan θ1)   (15)
Owing to the placement of the virtual CCD, the difference between θ0 and θ1 is small; for the optical satellite camera studied here, when SRTM-DEM data are used, the relief displacement introduced by the height error in re-imaging is negligible.
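A quick numeric check of formula (15) illustrates why the displacement is negligible: with the virtual CCD placed close to the real one the viewing-angle difference is tiny, so even a DEM height error of several metres maps to a millimetre-level shift. The angle values below are hypothetical.

```python
import math

def height_displacement(dh, theta0_deg, theta1_deg):
    """Formula (15): relief displacement from height error dh when the real
    and virtual imaging angles are theta0 and theta1 (degrees)."""
    return dh * (math.tan(math.radians(theta0_deg)) -
                 math.tan(math.radians(theta1_deg)))

# e.g. a 10 m DEM error with viewing angles differing by 0.01 deg near nadir
dx = height_displacement(10.0, 0.51, 0.50)   # roughly a couple of millimetres
```

Equal angles give exactly zero displacement, and the displacement grows linearly in both the height error and the tangent difference.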
Beneficial effects:
1. The distortion-free image generation method for a spaceborne push-broom optical sensor disclosed by the invention establishes a ground control point library and uses semi-automatic point measurement, realizing rapid acquisition of control points, saving time and improving efficiency;
2. The distortion-free image generation method for a spaceborne push-broom optical sensor disclosed by the invention compensates satellite orbit and attitude errors through a bias matrix, improving exterior orientation accuracy and realizing the generation of the distortion-free image of the push-broom optical sensor.
Description of the drawings
Fig. 1 shows the relative positions of the virtual-camera CCD and the real-camera CCD in step 4 of the invention: (a) panchromatic single camera, (b) field-stitched panchromatic dual cameras, (c) multispectral single camera, (d) field-stitched multispectral dual cameras;
Fig. 2 is a schematic diagram of the virtual re-imaging of step 5;
Fig. 3 is a schematic diagram of the relief displacement introduced by height error in the virtual re-imaging of step 5;
Fig. 4 is a local view of the imaged "distortion" of an XX6A artificial target;
Fig. 5 shows the registration result of the ideal distortion-free image;
Fig. 6 shows the fusion result of the ideal distortion-free image.
Specific embodiments
To better illustrate the objects and advantages of the present invention, the content of the invention is further explained below with reference to the accompanying drawings and an example.
Two scenes over the Yunnan region, acquired by XX6A on 9 December 2014 and 3 January 2015, were used as experimental data. The imaged region is mainly mountainous, with a mean elevation of 2854.5 m and a maximum height difference of about 1698 m. Several rectangular artificial targets were laid out in the scene; however, since the XX6A satellite uses the CAST2000 small-satellite platform, whose stability is low, the rectangular targets appear distorted in the image because of platform jitter during imaging, as shown in Fig. 4.
The panchromatic B camera of XX6A contains 4 CCD arrays. According to the interior orientation elements obtained by geometric calibration, the 4 CCDs are arranged in an arch; adjacent CCDs overlap by about 200 pixels and misalign along-track by about 2 pixels, and the parallel-observation imaging time interval of a same point is about 0.0003 s.
Embodiment 1:
The distortion-free image generation method for a spaceborne push-broom optical sensor disclosed in this embodiment is implemented as follows.
Step 1: satellite orbit and attitude solution.
The true geometric positioning model of the satellite image can be realized through the rational function model, which serves as a high-accuracy fit of the rigorous imaging model. The key step is to recover the position and attitude of the photography ray, i.e. to position and orient the ray. The geodetic coordinates of the intersections of the ground ray with the highest- and lowest-elevation planes can be computed through the inverse form of the rational function model, and the two ground intersections can be converted from geodetic coordinates to the geocentric rectangular coordinate system. Once the object-space coordinates of the two intersections are determined, the direction of the photography ray in the geocentric rectangular coordinate system is simply the difference of their positions.
First, let the photography ray OA intersect the highest- and lowest-elevation planes at two points with geodetic coordinates [Lat1, Lon1, H1] and [Lat2, Lon2, H2]; both points share the image coordinate (x, y). [Lat1, Lon1] and [Lat2, Lon2] are solved inversely from the above formula. The coordinates are then converted from geodetic coordinates to geocentric rectangular coordinates, [X1, Y1, Z1] and [X2, Y2, Z2]. This process can also be solved through the rigorous model, as shown below.
Since the image point of the two elevation-plane intersections is the same, the two points share [X(t), Y(t), Z(t)], R(t) and [tan(ψx), tan(ψy)], while [XS, YS, ZS] and m differ; the rigorous imaging model of the two intersections can therefore be written once per intersection, with [Xi, Yi, Zi] denoting geocentric rectangular coordinates. Substituting the two intersections into the model and taking the component-wise quotient introduces a single scale ratio k = m1/m2; eliminating k leaves two linear equations in the perspective-centre position.
There are three unknowns in the above equations, but one ray yields only two independent equations; therefore, to solve the position [X(t), Y(t), Z(t)], another ray is needed and the equations are solved jointly. The process is a linear solution requiring neither initial values nor iteration. This completes the orbit solution.
Step 2: after the orbit is solved, the attitude can be solved. The attitude solution determines the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system. Since the actual body coordinate system cannot be obtained from the rational function model, the coupling between attitude and interior orientation elements is resolved using the concept of the equivalent body coordinate system.
The equivalent body coordinate system is designed as follows: the Z axis points toward the ground, along the resultant of the unit vectors of rays OA and OB. The directions of OA and OB are obtained from the positions solved above, i.e. [X2 − X1, Y2 − Y1, Z2 − Z1]; the direction of the Z axis in the geocentric rectangular coordinate system is the normalized resultant vector.
The X axis of the equivalent body coordinate system points in the flight direction, i.e. perpendicular to plane OAB; its direction in the geocentric rectangular coordinate system is the normalized normal of that plane.
The Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule.
After the directions of the three axes of the equivalent body coordinate system in the Earth-fixed geocentric frame are determined, R(t) can be constructed with the three axis vectors as its columns.
This rotation matrix can be converted into quaternion or Euler-angle form to describe the attitude state of the satellite.
Step 3: after the orbit and attitude states have been solved, the interior orientation elements are solved. The direction of a ray in the geocentric rectangular coordinate system can be obtained from the rational function model. Since the interior orientation elements merely represent the direction of the photography ray in the body coordinate system, the solved attitude is used to transform the ray direction from the geocentric rectangular coordinate system into the equivalent body coordinate system. Note that this method can recover only relative interior orientation elements, because the equivalent body coordinate system was introduced in the attitude solution to decouple attitude from interior orientation. The solution is carried out ray by ray, and the interior orientation elements are expressed in the detector pointing-angle form.
The direction of sensor detector A in the body coordinate system is expressed as above, where [X, Y, Z]body denotes the direction of the photography ray in the body coordinate system. Following the definition of the detector pointing angles, the interior orientation elements of the current detector A can be expressed independently.
The same-point intersection errors before and after high-frequency error elimination were calculated and compared, with the following results:
Step 4: virtual-camera geometric positioning model construction. After in-orbit geometric calibration and high-frequency error elimination are completed, a distortion-free geometric positioning model can be obtained; however, the influence of lens distortion, high-frequency error and the like in the image itself is not eliminated, and the complex deformation present in the image reduces the effect of subsequent applications such as registration and fusion. In linear-array push-broom imaging, the main factors causing complex deformation of the on-board image are: 1) interior-orientation errors such as lens distortion, which deviate the imaging ray from its ideal direction and cause high-order deformation (image-point offsets relative to the reference interior orientation); 2) integration-time jumps, which change the along-track resolution of the image; 3) imaging "distortion" caused by attitude jitter.
Assume a virtual camera whose platform shoots the same area synchronously with the real satellite from a nearly identical orbital position and attitude; the camera is free of lens distortion, its linear array is an ideal straight line, and it push-scans steadily with a constant integration time, so the image it acquires is regarded as distortion-free. By establishing the virtual-camera/real-camera imaging geometry and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated.
The key to constructing the virtual-camera geometric positioning model is to establish its platform orbit, attitude, line-time, and interior orientation models.
Since the virtual-camera platform shoots synchronously with the real satellite on a nearly identical orbit and attitude, and runs smoothly without being affected by attitude jitter, the downlinked discrete orbit and attitude data of the real satellite can be fitted with polynomials, which serve as the orbit-attitude model of the virtual-camera platform.
As for the line time, the virtual camera exposes the ground with a constant integration time. Assume the virtual camera and the real camera start imaging simultaneously at time t0; then for virtual-camera image line l the imaging time is t = t0 + τ·l, where τ, the virtual-camera integration time, is taken as the average integration time of the real camera.
In general, the image obtained by a multi-chip CCD array is discontinuous, and stitching must remove the overlapping pixels between adjacent CCDs and the along-track misalignment to obtain a continuous image; the virtual camera contains only a single, ideally straight CCD, so the image it acquires is continuous. To keep the resolutions of the virtual and real images close, the virtual camera is given the same principal distance as the real camera. Meanwhile, to reduce the direction difference between the real and virtual imaging rays, the camera coordinates (xc, yc) of each real detector are taken as observations and the straight line xc = a·yc + b is fitted as the virtual CCD in the camera coordinate system, serving as the virtual-camera interior orientation model. Fig. 1 shows schematic diagrams of four common positional relationships between the virtual CCD array and the real CCD array, used respectively for single-camera stitching, multispectral band registration, and single-satellite dual-camera stitching, ultimately generating the distortion-free image.
The ideal distortion-free image generated by the method of the invention eliminates the complex deformation caused by camera distortion, high-frequency error and the like, and the stitched image has high internal accuracy; the internal accuracy of the image determines the effect of subsequent applications such as registration and fusion. Fig. 5 shows the registration result of the XX10 panchromatic and multispectral stitched images produced by the proposed method: the registration points of the ideal distortion-free image are evenly distributed, and the finally produced fusion image is good, as shown in Fig. 6.
Step 5: realize virtual re-imaging. The virtual re-imaging algorithm establishes the virtual-camera/real-camera imaging geometry and resamples the image acquired by the real camera to obtain the virtual, distortion-free image. The algorithm flow is as follows:
1) construct the virtual-camera geometric positioning model;
2) for any pixel (x, y) of the distortion-free image, compute its corresponding ground coordinate (X, Y, Z) using the geometric positioning model built in 1);
3) using the real-camera geometric positioning model, back-project (X, Y, Z) into the real image coordinate (x', y');
4) resample the grey value at (x', y') and assign it to pixel (x, y);
5) traverse all pixels of the distortion-free image, repeating 2)-4), to generate the whole scene;
6) based on the virtual-camera geometric positioning model, generate the RPC file of the virtual image.
Since the imaging-ray angles of the virtual camera and the real camera differ, the re-imaging process may introduce a relief displacement because of object-point height error. As shown below, with θ0 and θ1 the imaging angles of the real and virtual rays, Δh the object-point height error, and Δx the relief displacement introduced into re-imaging by Δh, the geometric relationship in the figure gives:
Δx = Δh · (tan θ0 − tan θ1)
Owing to the placement of the virtual CCD, the difference between θ0 and θ1 is small; for the domestic optical satellite camera studied here, when SRTM-DEM data are used, the relief displacement introduced by the height error in re-imaging can be ignored.
Using the control points of each scene image, exterior orientation is performed with an image-plane affine model based on the RPC of the distortion-free image, and the image positioning accuracy is verified by the orientation accuracy; the results are listed in the table below (where Original denotes the stitched image generated from the raw attitude data, and Corrected denotes the distortion-free image generated after elimination of the high-frequency error). As can be seen from the table, owing to the XX10 high-frequency error, the positioning accuracy of the original attitude is only at the level of several or even more than ten pixels, the positioning residuals are randomly distributed, and conventional processing methods can hardly eliminate the influence of this error; with the method of the invention, the influence of the high-frequency error is eliminated without any additional control data and the positioning accuracy is improved to about 1 pixel, which verifies the correctness of the method.
The specific description above further elaborates the purpose, technical solution, and beneficial effects of the invention. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit its protection scope; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (3)

1. A method for manufacturing a distortion-free image of a satellite-borne push-broom optical sensor, characterized by comprising the following steps:
Step 1: by solving the satellite orbit, recover the position and attitude of the ray at the moment of photography, i.e., realize the positioning and orientation of the ray;
Method one: compute, via the inverse form of the rational function model, the geodetic coordinates of two intersection points, namely the intersections of the ground ray with the highest-elevation plane and the lowest-elevation plane, as shown in the following formula:
P5, P6, P7 and P8 are the polynomials used to solve for the geodetic coordinates; (x, y) is the image-point coordinate of the intersection, H is the elevation, Lat is the latitude of the intersection in the geodetic coordinate system, and Lon is the longitude of the intersection in the geodetic coordinate system;
The two ground intersection points can then be transformed from the geodetic coordinate system into the Earth geocentric rectangular coordinate system; once the object-space coordinates of the two intersection points are determined, the direction of the photographing ray in the Earth geocentric rectangular coordinate system is simply the difference of the positions of the two intersection points;
First, the photographing ray OA intersects the highest-elevation plane and the lowest-elevation plane at two points, whose geodetic coordinates are [Lat1,Lon1,H1] and [Lat2,Lon2,H2] respectively; the image-point coordinates of the two intersections are both (x, y); [Lat1,Lon1] and [Lat2,Lon2] are solved inversely according to formula (1); the two intersection coordinates are then transformed from the geodetic coordinate system into the Earth geocentric rectangular coordinate system, giving [X1,Y1,Z1] and [X2,Y2,Z2] respectively;
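The transformation of the two intersection points into the Earth geocentric rectangular coordinate system, and the ray direction as the difference of their positions, can be sketched as follows. `geodetic_to_ecef` is the standard WGS-84 conversion (not a formula from the patent), and the intersection coordinates are hypothetical values.

```python
import math

A = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Standard geodetic (Lat, Lon, H) -> Earth geocentric rectangular (X, Y, Z)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return (x, y, z)

# Intersections of one photographing ray with the highest/lowest elevation planes
p1 = geodetic_to_ecef(30.0, 114.0, 8000.0)   # [Lat1, Lon1, H1] (hypothetical values)
p2 = geodetic_to_ecef(30.001, 114.001, 0.0)  # [Lat2, Lon2, H2]

# Direction of the photographing ray = difference of the two positions
ray = tuple(b - a for a, b in zip(p1, p2))
```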
Step 2: attitude solution, solving the rotation matrix that transforms from the body coordinate system into the Earth geocentric rectangular coordinate system;
The coupling between the attitude and the interior orientation elements is resolved using the concept of an equivalent body coordinate system;
The equivalent body coordinate system is defined as follows: the X axis points in the flight direction; the Z axis points toward the ground and is the resultant of the unit vectors of the two photographing rays; the directions of the two photographing rays are obtained from Step 1, i.e., [X2-X1,Y2-Y1,Z2-Z1], and the direction of the Z axis in the Earth geocentric rectangular coordinate system is given by the following formula:
The X axis of the equivalent body coordinate system is perpendicular to the plane OAB; the direction of the X axis in the Earth geocentric rectangular coordinate system is given by the following formula:
The Y axis of the equivalent body coordinate system is determined from the X axis and the Z axis according to the right-hand rule, as shown in the following formula:
After the directions of the three axes of the equivalent body coordinate system in the Earth geocentric fixed coordinate system are determined, R(t) is constructed according to the following formula:
This rotation matrix can be converted into quaternion form or Euler-angle form and used to describe the attitude state of the satellite, thereby realizing the attitude solution;
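The axis construction of Step 2 can be sketched as follows: Z is the normalized resultant of the two unit ray vectors, X is normal to the plane OAB spanned by the rays, and Y completes the right-handed triad; the two ray directions below are hypothetical.

```python
import numpy as np

def equivalent_body_rotation(ray_a, ray_b):
    """Build R(t): columns are the equivalent-body X, Y, Z axes expressed in ECEF."""
    ua = ray_a / np.linalg.norm(ray_a)
    ub = ray_b / np.linalg.norm(ray_b)
    z = ua + ub                       # Z: resultant of the two unit ray vectors
    z /= np.linalg.norm(z)
    x = np.cross(ua, ub)              # X: perpendicular to plane OAB
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                # Y: completes the right-handed frame
    return np.column_stack([x, y, z])

# Two hypothetical photographing rays, both pointing roughly "down"
R = equivalent_body_rotation(np.array([0.1, 0.0, -1.0]),
                             np.array([-0.1, 0.0, -1.0]))
```

Converting R to a quaternion or Euler angles is then a standard operation on an orthonormal rotation matrix.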
Step 3: solve the interior orientation elements;
According to the direction obtained in Step 1 and the attitude obtained in Step 2, the direction of the photographing ray is transformed from the Earth geocentric rectangular coordinate system into the equivalent body coordinate system; the solution is carried out ray by ray;
The direction of a sensor detector element in the body coordinate system is expressed by the following formula:
where [X, Y, Z]body denotes the direction of the photographing ray in the body coordinate system; according to the definition of the detector look angle, the interior orientation elements of the current detector element are independently expressed by the following formula:
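The look-angle form of the interior orientation elements can be sketched as follows; the tangent convention tan ψx = X/Z, tan ψy = Y/Z is an assumption standing in for the patent's formula (which is given in the figures), and the body-frame direction is hypothetical.

```python
import math

def look_angles(direction_body):
    """Interior orientation of a detector element as look angles (psi_x, psi_y),
    assuming the common convention tan(psi_x) = X/Z, tan(psi_y) = Y/Z."""
    X, Y, Z = direction_body
    return math.atan2(X, Z), math.atan2(Y, Z)

# Hypothetical ray direction already rotated into the equivalent body frame
psi_x, psi_y = look_angles((0.01, 0.002, 1.0))
```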
Step 4: construct the virtual-camera interior orientation model;
A virtual camera is defined whose platform photographs the same region synchronously with the real satellite from a nearly identical orbital position and attitude; this camera is unaffected by lens distortion, its linear array is an ideal straight line, and it push-scans steadily and images with a constant integration time; the image acquired by the virtual camera is regarded as the distortion-free image; by establishing the imaging-geometry relationship between the virtual camera and the real camera and resampling the image acquired by the real camera, the complex deformations in the real image can be eliminated;
Since the virtual-camera platform photographs synchronously with the real satellite on a nearly identical orbit and attitude, and the virtual-camera platform runs smoothly without the influence of attitude jitter, the discrete orbit and attitude data solved for the real satellite are fitted with polynomials, and the fitted polynomials serve as the attitude-orbit model of the virtual-camera platform;
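The polynomial smoothing of the discrete orbit/attitude samples can be sketched as follows (a minimal sketch using `numpy.polyfit`; the orbit values, jitter level, and polynomial degree are hypothetical):

```python
import numpy as np

def fit_platform_model(times, samples, degree=3):
    """Fit one polynomial per orbit/attitude component over imaging time; the
    fitted polynomials serve as the smooth virtual-platform attitude-orbit model."""
    return [np.polyfit(times, comp, degree) for comp in np.asarray(samples).T]

def evaluate(model, t):
    """Evaluate every fitted component at time t."""
    return [np.polyval(c, t) for c in model]

# Hypothetical discrete orbit positions with a little jitter (m)
t = np.linspace(0.0, 10.0, 50)
pos = np.column_stack([7000e3 + 10.0 * t,          # X drifts linearly
                       100.0 * t,                  # Y
                       1e3 * np.sin(0.01 * t)])    # Z slowly varying
pos_jittered = pos + np.random.default_rng(0).normal(0.0, 0.05, pos.shape)

model = fit_platform_model(t, pos_jittered)
smooth = evaluate(model, 5.0)   # jitter-free position at t = 5 s
```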
As for the line-scan interval, the virtual camera images the ground with a constant integration time; if the virtual camera and the real camera start imaging simultaneously at time t0, then for imaging line l of the virtual camera, the imaging time t is:
t = t0 + τ·l    (14)
where τ is the integration time of the virtual camera, taken as the average integration time of the real camera;
The image acquired by a multi-chip CCD linear array is discontinuous; stitching is required to remove the overlapping pixels between adjacent CCDs and the along-track dislocation so as to obtain a continuous image; there are four common positional relationships between the virtual CCD array and the real CCD array, used respectively to realize panchromatic single-camera stitching, panchromatic dual-camera field-of-view stitching, multispectral band registration, and single-satellite dual-camera stitching; the virtual camera contains only a single CCD arranged as an ideal straight line, and the image it acquires is continuous; to make the image resolutions of the virtual camera and the real camera close, the principal distance of the virtual camera is set equal to that of the real camera; meanwhile, to reduce the direction difference between the real imaging rays and the virtual imaging rays, the camera coordinates (xc,yc) of each detector element of the real camera are taken as observations and the line equation xc=ayc+b of the virtual CCD in the camera coordinate system is fitted, which serves as the virtual-camera interior orientation model;
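The straight-line fit of the virtual CCD from the real detector-element camera coordinates can be sketched as follows; the detector coordinates, with a small bending term standing in for distortion, are hypothetical values.

```python
import numpy as np

def fit_virtual_ccd(xc, yc):
    """Least-squares fit of the virtual CCD line x_c = a*y_c + b from the
    camera coordinates of the real detector elements (the observations)."""
    a, b = np.polyfit(yc, xc, 1)
    return a, b

# Hypothetical real detector elements: a slightly bent line in camera coordinates
yc = np.linspace(-0.02, 0.02, 200)
xc = 0.001 * yc + 5e-4 + 1e-6 * np.sin(yc * 300.0)  # small distortion term

a, b = fit_virtual_ccd(xc, yc)
```

The fitted pair (a, b) plays the role of the virtual-camera interior orientation model: the virtual rays lie on the fitted line, close to the real rays.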
Step 5: realize virtual re-imaging; on the basis of the established imaging-geometry relationship between the virtual camera and the real camera, the image acquired by the real camera is resampled to obtain the virtual distortion-free image;
Step 5.1: using the virtual-camera interior orientation model of Step 4, construct the virtual-camera geometric positioning model;
Step 5.2: using the virtual-camera geometric positioning model established in Step 5.1, compute the ground coordinates (X, Y, Z) corresponding to any pixel (x, y) on the virtual image;
Step 5.3: according to the real-camera geometric positioning model, back-project (X, Y, Z) to the real-image coordinates (x', y');
Step 5.4: resample the gray value at (x', y') and assign it to the pixel (x, y) described in Step 5.2;
Step 5.5: traverse all pixels on the virtual image, repeating Steps 5.2-5.4, to generate the full-scene image, i.e., the distortion-free image;
Step 5.6: based on the virtual-camera geometric positioning model, generate the RPC file corresponding to the distortion-free image, completing the manufacture of the distortion-free image.
2. A method for manufacturing a distortion-free image of a satellite-borne push-broom optical sensor, characterized by comprising the following steps:
Step 1: by solving the satellite orbit, recover the position and attitude of the ray at the moment of photography, i.e., realize the positioning and orientation of the ray;
The Earth geocentric rectangular coordinates can also be obtained from the rigorous model, as shown in the following formula:
where m is a scale factor; Ψx and Ψy are the angles obtained by decomposing the pointing of the detector-element imaging ray into the along-track and across-track directions; R(t) is the coordinate rotation matrix of the geodetic coordinate system; [X(t), Y(t), Z(t)] are the coordinates of the camera center in the Earth geocentric rectangular coordinate system;
Since the image-point positions of the intersections with the highest-elevation plane and the lowest-elevation plane are identical, the [X(t), Y(t), Z(t)], R(t) and [tan(ψx),tan(ψy)] corresponding to the two intersection points are the same, while [XS,YS,ZS] and m differ; the rigorous imaging models of the two intersection points are therefore expressed as follows:
where [Xi,Yi,Zi] are coordinates in the Earth geocentric rectangular coordinate system; substituting the two intersection points into formula (3) and taking the quotient yields the following formula:
In the above formula, the intermediate unknown k is defined as shown; rewriting formula (4) yields the following formula:
Eliminating the unknown k from formula (5) gives the following formula:
A rearranged form of formula (6) is as follows:
Formula (7) contains three unknowns, but a single ray provides only two independent equations; therefore, in order to solve for the position [X(t), Y(t), Z(t)], another ray is needed to repeat method two, after which formula (7) is solved jointly;
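The joint two-ray solution for the position can be illustrated generically as follows; since the exact form of formula (7) is given in the figures, this sketch uses the standard least-squares closest-point-to-lines formulation instead of the patent's equations, with hypothetical ray geometry.

```python
import numpy as np

def nearest_point_to_lines(points, directions):
    """Least-squares point closest to a set of 3-D lines: each line contributes
    the projector (I - d d^T) to a small normal-equation system. This is a
    standard formulation, not the patent's formula (7)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two hypothetical photographing rays through a common projection center
center = np.array([7000e3, 0.0, 0.0])
d1 = np.array([-1.0, 0.02, 0.0]);  d1 /= np.linalg.norm(d1)
d2 = np.array([-1.0, -0.02, 0.01]); d2 /= np.linalg.norm(d2)
p1 = center + 500e3 * d1   # a point on ray 1 (e.g. an elevation-plane intersection)
p2 = center + 500e3 * d2

est = nearest_point_to_lines([p1, p2], [d1, d2])
```

Two non-parallel lines determine the position uniquely; with noisy rays the same system gives the least-squares estimate.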
Step 2: attitude solution, solving the rotation matrix that transforms from the body coordinate system into the Earth geocentric rectangular coordinate system;
The coupling between the attitude and the interior orientation elements is resolved using the concept of an equivalent body coordinate system;
The equivalent body coordinate system is defined as follows: the X axis points in the flight direction; the Z axis points toward the ground and is the resultant of the unit vectors of the two photographing rays; the directions of the two photographing rays are obtained from Step 1, i.e., [X2-X1,Y2-Y1,Z2-Z1], and the direction of the Z axis in the Earth geocentric rectangular coordinate system is given by the following formula:
The X axis of the equivalent body coordinate system is perpendicular to the plane OAB; the direction of the X axis in the Earth geocentric rectangular coordinate system is given by the following formula:
The Y axis of the equivalent body coordinate system is determined from the X axis and the Z axis according to the right-hand rule, as shown in the following formula:
After the directions of the three axes of the equivalent body coordinate system in the Earth geocentric fixed coordinate system are determined, R(t) is constructed according to the following formula:
This rotation matrix can be converted into quaternion form or Euler-angle form and used to describe the attitude state of the satellite, thereby realizing the attitude solution;
Step 3: solve the interior orientation elements;
According to the direction obtained in Step 1 and the attitude obtained in Step 2, the direction of the photographing ray is transformed from the Earth geocentric rectangular coordinate system into the equivalent body coordinate system; the solution is carried out ray by ray;
The direction of a sensor detector element in the body coordinate system is expressed by the following formula:
where [X, Y, Z]body denotes the direction of the photographing ray in the body coordinate system; according to the definition of the detector look angle, the interior orientation elements of the current detector element are independently expressed by the following formula:
Step 4: construct the virtual-camera interior orientation model;
A virtual camera is defined whose platform photographs the same region synchronously with the real satellite from a nearly identical orbital position and attitude; this camera is unaffected by lens distortion, its linear array is an ideal straight line, and it push-scans steadily and images with a constant integration time; the image acquired by the virtual camera is regarded as the distortion-free image; by establishing the imaging-geometry relationship between the virtual camera and the real camera and resampling the image acquired by the real camera, the complex deformations in the real image can be eliminated;
Since the virtual-camera platform photographs synchronously with the real satellite on a nearly identical orbit and attitude, and the virtual-camera platform runs smoothly without the influence of attitude jitter, the discrete orbit and attitude data solved for the real satellite are fitted with polynomials, and the fitted polynomials serve as the attitude-orbit model of the virtual-camera platform;
As for the line-scan interval, the virtual camera images the ground with a constant integration time; if the virtual camera and the real camera start imaging simultaneously at time t0, then for imaging line l of the virtual camera, the imaging time t is:
t = t0 + τ·l    (14)
where τ is the integration time of the virtual camera, taken as the average integration time of the real camera;
The image acquired by a multi-chip CCD linear array is discontinuous; stitching is required to remove the overlapping pixels between adjacent CCDs and the along-track dislocation so as to obtain a continuous image; there are four common positional relationships between the virtual CCD array and the real CCD array, used respectively to realize panchromatic single-camera stitching, panchromatic dual-camera field-of-view stitching, multispectral band registration, and single-satellite dual-camera stitching; the virtual camera contains only a single CCD arranged as an ideal straight line, and the image it acquires is continuous; to make the image resolutions of the virtual camera and the real camera close, the principal distance of the virtual camera is set equal to that of the real camera; meanwhile, to reduce the direction difference between the real imaging rays and the virtual imaging rays, the camera coordinates (xc,yc) of each detector element of the real camera are taken as observations and the line equation xc=ayc+b of the virtual CCD in the camera coordinate system is fitted, which serves as the virtual-camera interior orientation model;
Step 5: realize virtual re-imaging; on the basis of the established imaging-geometry relationship between the virtual camera and the real camera, the image acquired by the real camera is resampled to obtain the virtual distortion-free image;
Step 5.1: using the virtual-camera interior orientation model of Step 4, construct the virtual-camera geometric positioning model;
Step 5.2: using the virtual-camera geometric positioning model established in Step 5.1, compute the ground coordinates (X, Y, Z) corresponding to any pixel (x, y) on the virtual image;
Step 5.3: according to the real-camera geometric positioning model, back-project (X, Y, Z) to the real-image coordinates (x', y');
Step 5.4: resample the gray value at (x', y') and assign it to pixel (x, y);
Step 5.5: traverse all pixels on the virtual image, repeating Steps 5.2-5.4, to generate the full-scene image, i.e., the distortion-free image;
Step 5.6: based on the virtual-camera geometric positioning model, generate the RPC file corresponding to the distortion-free image, completing the manufacture of the distortion-free image.
3. The method for manufacturing a distortion-free image of a satellite-borne push-broom optical sensor according to claim 1 or 2, characterized in that: the resampling described in Step 5.4 is performed on the basis of SRTM-DEM data;
Since the imaging-ray angles of the virtual camera and the real camera differ, the resampling process may introduce a height-induced displacement due to object-point elevation error; the same ground point (X, Y, Z) is observed by the real satellite and the virtual satellite, yielding the image coordinates (x, y) and (x', y') respectively; as shown in the following formula, θ0 and θ1 are the imaging angles of the real ray and the virtual ray, Δh is the object-point elevation error, and Δx is the height displacement introduced by Δh during re-imaging; from the geometric relationship:
Δ x=Δ h (tan θ0-tanθ1) (15)
Owing to the placement of the virtual CCD, the difference between θ0 and θ1 is small; for the optical satellite camera studied in this work, when SRTM-DEM data are used, the height displacement introduced by elevation error during re-imaging can be ignored.
CN201910350812.1A 2019-04-28 2019-04-28 Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor Active CN110211054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910350812.1A CN110211054B (en) 2019-04-28 2019-04-28 Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor


Publications (2)

Publication Number Publication Date
CN110211054A true CN110211054A (en) 2019-09-06
CN110211054B CN110211054B (en) 2021-01-15

Family

ID=67786541


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105380A (en) * 2020-02-07 2020-05-05 武汉玄景科技有限公司 Method and system for linear recovery of rigorous imaging model
CN111127564A (en) * 2019-12-23 2020-05-08 中电科新型智慧城市研究院有限公司 Video image correction method based on geometric positioning model
CN111612693A (en) * 2020-05-19 2020-09-01 中国科学院微小卫星创新研究院 Method for correcting rotary large-width optical satellite sensor
CN112419380A (en) * 2020-11-25 2021-02-26 湖北工业大学 High-precision registration method for static orbit satellite sequence images based on cloud mask
CN113393499A (en) * 2021-07-12 2021-09-14 自然资源部国土卫星遥感应用中心 Automatic registration method for panchromatic image and multispectral image of high-resolution seven-satellite
CN113536485A (en) * 2021-07-20 2021-10-22 中国科学院西安光学精密机械研究所 Ionosphere imaging detector image geographic coordinate calculating method
CN115046571A (en) * 2022-08-16 2022-09-13 成都国星宇航科技股份有限公司 Star sensor installation error correction method and device based on remote sensing image
CN115829879A (en) * 2022-12-15 2023-03-21 二十一世纪空间技术应用股份有限公司 Attitude quaternion processing method, device and equipment for agile satellite
CN116664430A (en) * 2023-05-30 2023-08-29 自然资源部国土卫星遥感应用中心 Method for improving geometric accuracy of large-range satellite image under ground-free control condition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914808A (en) * 2014-03-14 2014-07-09 国家测绘地理信息局卫星测绘应用中心 Method for splicing ZY3 satellite three-line-scanner image and multispectral image
CN105091906A (en) * 2015-06-30 2015-11-25 武汉大学 High-resolution optical push-broom satellite steady-state reimaging sensor calibration method and system
US20160343118A1 (en) * 2013-03-15 2016-11-24 Eric Olsen Systems and methods for producing temperature accurate thermal images
CN106403902A (en) * 2016-08-31 2017-02-15 武汉大学 Satellite-ground cooperative in-orbit real-time geometric positioning method and system for optical satellites
CN106895851A (en) * 2016-12-21 2017-06-27 中国资源卫星应用中心 A kind of sensor calibration method that many CCD polyphasers of Optical remote satellite are uniformly processed
CN109668579A (en) * 2019-01-23 2019-04-23 张过 Spaceborne push away based on angular displacement sensor clears off load high frequency error compensation method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WENCHAO HUANG ET AL.: "Compensation for Distortion of Basic Satellite Images Based on Rational Function Model", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
WENCHAO HUANG ET AL.: "Robust Approach for Recovery of Rigorous Sensor Model Using Rational Function Model", IEEE Transactions on Geoscience and Remote Sensing *
ZHOU PING: "Key Technologies for High-Precision Geometric Processing of ZY-3 Satellite Remote Sensing Imagery and Methods for Evaluating Mapping Performance", China Doctoral Dissertations Full-text Database, Basic Sciences *
HUANG WENCHAO: "Research on *** Error Compensation Methods for Basic Remote Sensing Products", China Doctoral Dissertations Full-text Database, Basic Sciences *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant