CN105374009A - Remote sensing image splicing method and apparatus - Google Patents


Info

Publication number
CN105374009A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN201410568949.1A
Other languages
Chinese (zh)
Inventor
陈科杰
马灵霞
汪红强
王剑
陈元伟
肖倩
Current Assignee
Space Star Technology Co Ltd
Original Assignee
Space Star Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Space Star Technology Co Ltd
Priority to CN201410568949.1A
Publication of CN105374009A

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image splicing method and apparatus. The method comprises the following steps: establishing, according to the imaging parameters of each CCD, a first forward model and a first inverse model between image points and ground points on that CCD; generating imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and establishing, according to the imaging parameters of the virtual CCD, a second forward model and a second inverse model between image points and ground points on the virtual CCD; establishing, according to the first forward model, the first inverse model, the second forward model and the second inverse model, a third forward model and a third inverse model between image points on each CCD and image points on the virtual CCD; reading the image data of each CCD and determining, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image-point coordinates on that CCD; and resampling each determined image point to obtain the spliced image. The method and apparatus improve the quality of remote sensing image splicing.

Description

Remote sensing image splicing method and apparatus
Technical field
The present invention relates to the field of image processing, and in particular to a remote sensing image splicing method and apparatus.
Background
With the continuous improvement of the observation resolution of optical remote sensing satellites, the swath of a single linear-array CCD is limited by its pixel count and can no longer meet observation requirements. To improve the efficiency of Earth observation while guaranteeing images of a certain swath width, using multiple CCDs to achieve a larger swath through optical splicing or field-of-view stitching has become an important trend in the development of spaceborne high-resolution optical cameras.
In the related art, splicing builds an image-space transformation model based on tie points to register the images in the overlap region of adjacent CCDs. Image-space splicing has at least the following disadvantages:
First, the stitching algorithms all depend on tie-point information between CCD pieces; if the lateral overlap region of adjacent CCD images lacks texture features, such algorithms are of limited applicability.
Second, such algorithms lack a rigorous theoretical foundation as guidance and cannot eliminate or weaken the internal image distortions caused by sensor geometric deformation, so they cannot fundamentally improve the internal geometric accuracy of virtual scan-scene products.
Third, when the terrain in the lateral overlap region of the raw image undulates strongly, the splicing accuracy cannot be guaranteed.
Summary of the invention
To address the low quality of image splicing in the related art, the present invention provides a remote sensing image splicing method and apparatus, so as at least to solve the above problems.
According to one aspect of the present invention, a remote sensing image splicing method is provided, comprising:
establishing, according to the imaging parameters of each CCD, a first forward model and a first inverse model between image points and ground points on that CCD, generating imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and establishing, according to the imaging parameters of the virtual CCD, a second forward model and a second inverse model between image points and ground points on the virtual CCD;
establishing, according to the first forward model, the first inverse model, the second forward model and the second inverse model, a third forward model and a third inverse model between image points on each CCD and image points on the virtual CCD;
reading the image data of each CCD, and determining, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image-point coordinates on that CCD;
resampling each determined image point to obtain the spliced image.
According to another aspect of the present invention, an image splicing apparatus is provided, comprising:
a first establishing module, configured to establish, according to the imaging parameters of each CCD, a first forward model and a first inverse model between image points and ground points on that CCD, to generate imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and to establish, according to the imaging parameters of the virtual CCD, a second forward model and a second inverse model between image points and ground points on the virtual CCD;
a second establishing module, configured to establish, according to the first forward model, the first inverse model, the second forward model and the second inverse model, a third forward model and a third inverse model between image points on each CCD and image points on the virtual CCD;
a determination module, configured to read the image data of each CCD and to determine, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image-point coordinates on that CCD;
a processing module, configured to resample each determined image point to obtain the spliced image.
With the present invention, an image-point coordinate transformation between the virtual scan scene and the raw image is established, enabling seamless internal field-of-view stitching of the imaging data and improving the quality of image splicing.
Brief description of the drawings
The drawings described herein provide a further understanding of the present invention and form part of this application; the schematic embodiments of the present invention and their description explain the present invention and do not limit it improperly. In the drawings:
Fig. 1 is a flowchart of the remote sensing image splicing method according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the remote sensing image splicing apparatus according to an embodiment of the present invention; and
Fig. 3 is a schematic flowchart of an optional embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments. It should be noted that, where no conflict arises, the embodiments of this application and the features therein may be combined with one another.
An embodiment of the present invention provides a remote sensing image splicing method.
Fig. 1 is a flowchart of the remote sensing image splicing method according to an embodiment of the present invention. As shown in Fig. 1, the method comprises steps 101 to 104.
Step 101: establish, according to the imaging parameters of each CCD, a first forward model and a first inverse model between image points and ground points on that CCD; generate imaging parameters of a virtual CCD according to the imaging parameters of each CCD; and establish, according to the imaging parameters of the virtual CCD, a second forward model and a second inverse model between image points and ground points on the virtual CCD.
Step 102: establish, according to the first forward model, the first inverse model, the second forward model and the second inverse model, a third forward model and a third inverse model between image points on each CCD and image points on the virtual CCD.
Step 103: read the image data of each CCD and determine, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image-point coordinates on that CCD.
Step 104: resample each determined image point to obtain the spliced image.
In an optional embodiment of the present invention, the same method can be used to determine the forward and inverse models for each CCD and for the virtual CCD. In step 101, the respective forward and inverse models can be established from the respective imaging parameters, specifically comprising:
establishing the forward model between each image point on the CCD and the ground point according to the imaging parameters of that image point;
calculating control-point-pair parameters according to the forward model of each image point, establishing the affine transformation model between image points and ground points on the CCD according to the control-point-pair parameters, and establishing the inverse model between image points and ground points on the CCD through successive iterative calculation.
Optionally, for each image point, establishing the forward model between the image point on the CCD and the ground point according to the imaging parameters of the image point comprises:
obtaining the position [X_s, Y_s, Z_s] of the satellite in the conventional geocentric coordinate system at the moment the image point was captured;
obtaining the angle psiX between the principal-optical-axis unit vector of the camera pixel corresponding to the image point and the X-axis of the satellite body coordinate system, and the angle psiY with the Y-axis;
obtaining the installation matrix M0 of the satellite body coordinate system relative to the camera;
obtaining the rotation matrix M1 from the satellite body coordinate system to the orbital coordinate system at the capture moment of the image point;
obtaining the rotation matrix M2 from the orbital coordinate system to the J2000.0 coordinate system at the capture moment of the image point;
obtaining the rotation matrix M3 from the J2000.0 coordinate system to the WGS84 coordinate system at the capture moment of the image point;
determining the forward model between the image point and the ground point as:
[X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u · M3 · M2 · M1 · M0 · [tan(psiX), -tan(psiY), 1]^T,
where [X_G, Y_G, Z_G] is the coordinate, in the conventional geocentric coordinate system, of the ground object point corresponding to the image point, and u is a scale factor.
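The forward model above can be sketched in a few lines of NumPy; the function name and argument layout are illustrative assumptions, not part of the invention:

```python
import numpy as np

def forward_model(sat_pos, M0, M1, M2, M3, psiX, psiY, u):
    """Image-to-ground (forward) model:
    [X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u * M3 M2 M1 M0 [tan psiX, -tan psiY, 1]^T
    sat_pos: satellite position in the geocentric frame (3-vector);
    M0..M3: 3x3 installation/rotation matrices; psiX, psiY: pointing angles (rad);
    u: scale factor fixing where the look ray intersects the ground."""
    look = np.array([np.tan(psiX), -np.tan(psiY), 1.0])  # look vector in the camera frame
    return sat_pos + u * (M3 @ M2 @ M1 @ M0 @ look)
```

In practice u is solved by intersecting the look ray with an Earth ellipsoid or DEM; here it is simply passed in.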
It should be noted that there is no required order among the above parameter-acquisition steps; the order can be adjusted according to actual conditions.
Optionally, calculating control-point-pair parameters according to the forward model of each image point, establishing the affine transformation model between image points and ground points on the CCD according to the control-point-pair parameters, and establishing the inverse model between image points and ground points on the CCD through successive iterative calculation comprises:
choosing four corner points within a predetermined radius around the image centre-point coordinates, and using the rigorous geometric model to calculate the ground-point longitude and latitude corresponding to each of the four corner points;
establishing an affine transformation model Model1 between image points and ground points from the four corner points and their corresponding ground-point longitudes and latitudes;
calculating the image point (i1, j1) corresponding to the ground point (lat, lon) according to Model1;
choosing four corner points within the predetermined radius around image point (i1, j1), calculating the ground-point longitude and latitude corresponding to each, and establishing an affine transformation model Model2 between image points and ground points from these four corner points and their corresponding ground-point longitudes and latitudes;
calculating the image point (i2, j2) corresponding to the ground point (lat, lon) according to Model2;
determining the distance L between image points (i2, j2) and (i1, j1), and judging whether L is greater than a predetermined threshold;
if L is greater than the predetermined threshold, continuing to build affine transformation models near point (i2, j2) and iterating until L is less than or equal to the threshold; if L is less than or equal to the threshold, ending the iteration and taking the latest image point as the one corresponding to ground point (lat, lon).
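The iterative inverse calculation described above can be sketched as follows. `forward` stands for any image-to-ground model (e.g. the rigorous model); the helper names, default radius and tolerance are hypothetical:

```python
import numpy as np

def fit_affine(img_pts, gnd_pts):
    """Least-squares affine map from ground (lat, lon) to image (i, j),
    built from the four corner-point pairs."""
    A = np.column_stack([gnd_pts, np.ones(len(gnd_pts))])  # rows [lat, lon, 1]
    coef, *_ = np.linalg.lstsq(A, img_pts, rcond=None)     # 3x2 coefficient matrix
    return coef

def inverse_by_iteration(forward, lat, lon, i0, j0, radius=5.0, tol=0.1, max_iter=20):
    """Refine the image point of ground point (lat, lon): fit a local affine
    model from four corners around the current estimate, re-solve, and stop
    once successive estimates differ by <= tol pixels."""
    i, j = float(i0), float(j0)
    for _ in range(max_iter):
        corners = np.array([[i - radius, j - radius], [i - radius, j + radius],
                            [i + radius, j - radius], [i + radius, j + radius]])
        gnd = np.array([forward(ci, cj) for ci, cj in corners])
        coef = fit_affine(corners, gnd)
        i2, j2 = np.array([lat, lon, 1.0]) @ coef
        if np.hypot(i2 - i, j2 - j) <= tol:
            return i2, j2
        i, j = i2, j2
    return i, j
```

Because each local affine model is refit around the latest estimate, the loop converges quickly wherever the forward model is locally close to affine.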
In an optional embodiment of the present invention, for each image point, step 102 of establishing, according to the first forward model, the first inverse model, the second forward model and the second inverse model, the third forward model and the third inverse model between image points on each CCD and image points on the virtual CCD comprises:
1. Forward model
calculating the ground point (lat, lon) of image point (i, j) on a CCD according to the first forward model of that CCD, and calculating the image point (i1, j1) on the virtual CCD corresponding to ground point (lat, lon) according to the second inverse model of the virtual CCD, thereby obtaining the third forward model for image point (i, j); and
2. Inverse model
A. calculating the ground point (lat, lon) corresponding to image point (i1, j1) on the virtual CCD according to the second forward model of the virtual CCD;
B. calculating the image point (i2, j2) corresponding to ground point (lat, lon) according to the first inverse model of CCD M, where M is a default piece index;
C. judging from the value of i2 whether ground point (lat, lon) lies on CCD M, wherein:
if i2 is less than overlap/2, ground point (lat, lon) is considered to lie to the left of CCD M; the piece index M is set to M-1 and the process returns to A, where overlap is the number of overlapping image points between two adjacent CCDs;
if i2 is greater than the width of CCD M minus overlap/2, ground point (lat, lon) lies to the right of this CCD; the piece index M is set to M+1 and the process returns to A;
if i2 lies between overlap/2 and the width of CCD M minus overlap/2, ground point (lat, lon) lies within CCD M, and the image point on CCD M corresponding to image point (i1, j1) on the virtual CCD is (i2, j2), which yields the third inverse model for image point (i1, j1).
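Steps A to C amount to a small walk across the CCD pieces; a sketch, with illustrative function and parameter names:

```python
def locate_piece(inverse_models, widths, overlap, lat, lon, M):
    """Walk left/right across CCD pieces until ground point (lat, lon) falls
    inside the valid (non-overlap) region of piece M.
    inverse_models[M](lat, lon) -> (i2, j2) on piece M; widths[M] is the
    piece width in pixels; overlap is the inter-piece overlap in pixels."""
    while True:
        i2, j2 = inverse_models[M](lat, lon)
        if i2 < overlap / 2:
            M -= 1                            # ground point lies left of piece M
        elif i2 > widths[M] - overlap / 2:
            M += 1                            # ground point lies right of piece M
        else:
            return M, (i2, j2)                # inside piece M: done
```

Splitting the overlap strip at overlap/2 makes the piece assignment unambiguous: each ground point is claimed by exactly one CCD piece.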
In an optional embodiment of the present invention, generating the imaging parameters of the virtual CCD according to the imaging parameters of each CCD in step 101 comprises: smoothing and fitting the imaging parameters of each CCD to obtain the imaging parameters of the virtual CCD. Optionally, the interior-orientation parameters of each CCD are fitted linearly by least squares to establish the interior-orientation elements of the virtual CCD.
An embodiment of the present invention further provides a remote sensing image splicing apparatus.
Fig. 2 is a structural block diagram of the remote sensing image splicing apparatus according to an embodiment of the present invention. As shown in Fig. 2, the apparatus comprises:
a first establishing module 201, configured to establish, according to the imaging parameters of each CCD, a first forward model and a first inverse model between image points and ground points on that CCD, to generate imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and to establish, according to the imaging parameters of the virtual CCD, a second forward model and a second inverse model between image points and ground points on the virtual CCD;
a second establishing module 202, connected to the first establishing module 201 and configured to establish, according to the first forward model, the first inverse model, the second forward model and the second inverse model, a third forward model and a third inverse model between image points on each CCD and image points on the virtual CCD;
a determination module 203, connected to the second establishing module 202 and configured to read the image data of each CCD and to determine, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image-point coordinates on that CCD;
a processing module 204, connected to the determination module 203 and configured to resample each determined image point to obtain the spliced image.
Optionally, the first establishing module 201 is configured to establish the forward and inverse models according to the imaging parameters of each CCD or of the virtual CCD; the first establishing module 201 comprises:
a forward-model determining unit, configured to establish the forward model between each image point on the CCD and the ground point according to the imaging parameters of that image point;
an inverse-model determining unit, configured to calculate control-point-pair parameters according to the forward model of each image point, to establish the affine transformation model between image points and ground points on the CCD according to the control-point-pair parameters, and to establish the inverse model between image points and ground points on the CCD through successive iterative calculation.
Optionally, the forward-model determining unit is specifically configured to:
obtain the position [X_s, Y_s, Z_s] of the satellite in the conventional geocentric coordinate system at the moment the image point was captured;
obtain the angle psiX between the principal-optical-axis unit vector of the camera pixel corresponding to the image point and the X-axis of the satellite body coordinate system, and the angle psiY with the Y-axis;
obtain the installation matrix M0 of the satellite body coordinate system relative to the camera;
obtain the rotation matrix M1 from the satellite body coordinate system to the orbital coordinate system at the capture moment of the image point;
obtain the rotation matrix M2 from the orbital coordinate system to the J2000.0 coordinate system at the capture moment of the image point;
obtain the rotation matrix M3 from the J2000.0 coordinate system to the WGS84 coordinate system at the capture moment of the image point;
determine the forward model between the image point and the ground point as:
[X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u · M3 · M2 · M1 · M0 · [tan(psiX), -tan(psiY), 1]^T,
where [X_G, Y_G, Z_G] is the coordinate, in the conventional geocentric coordinate system, of the ground object point corresponding to the image point, and u is a scale factor.
Optionally, the inverse-model determining unit is specifically configured to:
choose four corner points within a predetermined radius around the image centre-point coordinates, and use the rigorous geometric model to calculate the ground-point longitude and latitude corresponding to each of the four corner points;
establish an affine transformation model Model1 between image points and ground points from the four corner points and their corresponding ground-point longitudes and latitudes;
calculate the image point (i1, j1) corresponding to the ground point (lat, lon) according to Model1;
choose four corner points within the predetermined radius around image point (i1, j1), calculate the ground-point longitude and latitude corresponding to each, and establish an affine transformation model Model2 between image points and ground points from these four corner points and their corresponding ground-point longitudes and latitudes;
calculate the image point (i2, j2) corresponding to the ground point (lat, lon) according to Model2;
determine the distance L between image points (i2, j2) and (i1, j1), and judge whether L is greater than a predetermined threshold;
if L is greater than the predetermined threshold, continue building affine transformation models near point (i2, j2) and iterate until L is less than or equal to the threshold; if L is less than or equal to the threshold, end the iteration and take the latest image point as the one corresponding to ground point (lat, lon).
Optionally, the second establishing module 202 is specifically configured to:
calculate the ground point (lat, lon) of image point (i, j) on a CCD according to the first forward model of that CCD, and calculate the image point (i1, j1) on the virtual CCD corresponding to ground point (lat, lon) according to the second inverse model of the virtual CCD, thereby obtaining the third forward model for image point (i, j); and
A. calculate the ground point (lat, lon) corresponding to image point (i1, j1) on the virtual CCD according to the second forward model of the virtual CCD;
B. calculate the image point (i2, j2) corresponding to ground point (lat, lon) according to the first inverse model of CCD M, where M is a default piece index;
C. judge from the value of i2 whether ground point (lat, lon) lies on CCD M, wherein:
if i2 is less than overlap/2, ground point (lat, lon) is considered to lie to the left of CCD M; the piece index M is set to M-1 and the process returns to A, where overlap is the number of overlapping image points between two adjacent CCDs;
if i2 is greater than the width of CCD M minus overlap/2, ground point (lat, lon) lies to the right of this CCD; the piece index M is set to M+1 and the process returns to A;
if i2 lies between overlap/2 and the width of CCD M minus overlap/2, ground point (lat, lon) lies within CCD M, and the image point on CCD M corresponding to image point (i1, j1) on the virtual CCD is (i2, j2), which yields the third inverse model for image point (i1, j1).
The parts identical to the method described above are not repeated here.
An optional embodiment of the present invention is described in detail below with reference to Fig. 3.
Fig. 3 is a schematic flowchart of an optional embodiment of the present invention. As shown in Fig. 3, this optional embodiment may comprise the following process:
1. Obtain the interior-orientation elements, orbit, attitude, line-time and other ancillary data at each image point, and establish the forward model between each image point and the ground point, i.e. the rigorous geometric model at the image point. This specifically comprises the following steps:
(1) Parse the orbit, attitude, line-time and other data within the imaging time range from the raw data downlinked by the satellite;
(2) For any image point Points_i {sample, line, lat, lon, height}, obtain the corresponding photography time scanTime from the image coordinates (sample, line) of the image point;
The photography time of an image point can be parsed directly from the ancillary data downlinked by the satellite: the imaging time carried in the ancillary data of line `line` is the photography time of that image point.
(3) Use the Lagrange interpolation algorithm to compute the satellite orbital position (PX, PY, PZ, VX, VY, VZ) at photography time scanTime. The satellite downlinks orbital data at a certain frequency, so the orbital position at scanTime must be interpolated from several groups of orbital data before and after the photography moment. The present invention adopts the Lagrange interpolation algorithm, using the three groups of orbital data around the photography time to calculate the orbital position at that time.
a) Starting from the first group of orbital data, compare scanTime with the generation time of each group and that of the next group; if scanTime is greater than the generation time of group i and less than the generation time of group i+1, record i as the index of the orbital-data group nearest the photography time.
b) Using groups i-1, i and i+1 of the orbital data, calculate the orbital position and velocity at the photography moment with the Lagrange algorithm. The Lagrange interpolation algorithm is expressed as follows:
for a known function table of y = f(x), (x_i, f(x_i)) (i = 0, 1, ..., n), and any x in the range [x_0, x_n]:
P_n(x) = Σ_{i=0}^{n} y_i · Π_{j=0, j≠i}^{n} (x - x_j) / (x_i - x_j)
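A minimal implementation of this formula (the function name is illustrative):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial
    P_n(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j)
    through the sample points (xs[i], ys[i]) at abscissa x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial factor
        total += term
    return total
```

With the three orbital-data groups i-1, i, i+1 as sample points, this reproduces any quadratic variation of position or velocity over the interval exactly.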
(4) Use the Lagrange interpolation algorithm to compute the three-axis attitude angles (Roll, Pitch, Yaw) of the camera relative to the orbital coordinate system at photography time scanTime;
(5) According to the camera laboratory measurements, read the optical-axis pointing angles (psiX, psiY) of the detector element of CCD column sample corresponding to the image coordinates (sample, line) of the image point;
(6) Establish the rigorous geometric model at the image point: for any image point, construct the rigorous geometric imaging model Model of the remote sensing image from its interior-orientation, orbit, attitude, line-time and other data.
The rigorous imaging geometric model of a linear-array push-broom camera is as follows:
[X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u · M3 · M2 · M1 · M0 · [tan(psiX), -tan(psiY), 1]^T
Wherein:
[X_s, Y_s, Z_s] is the position of the satellite in the conventional geocentric coordinate system at this moment, i.e. the orbital position (PX, PY, PZ) at the image point;
[X_G, Y_G, Z_G] is the coordinate, in the conventional geocentric coordinate system, of the ground object point corresponding to this pixel;
psiX and psiY are the angles between the principal-optical-axis unit vector of the camera pixel corresponding to the image and the X-axis and Y-axis of the satellite body coordinate system, respectively;
u is a scale factor;
M0 is the installation matrix of the satellite body coordinate system relative to the camera, obtained by ground measurement before satellite launch;
M1 is the rotation matrix from the satellite body coordinate system to the orbital coordinate system at this moment, constructed from the attitude angles measured on board;
M2 is the rotation matrix from the orbital coordinate system to the J2000.0 coordinate system at this moment, constructed from the right ascension of the ascending node, the orbit inclination, the argument of latitude, etc.;
M3 is the rotation matrix from the J2000.0 coordinate system to the WGS84 coordinate system at this moment, which requires corrections for precession, nutation, Greenwich sidereal time and polar motion.
2. Calculate control-point-pair parameters using the rigorous geometric model, establish the affine transformation model between image points and ground points, and establish the inverse model by successive iterative calculation.
(1) Choose four corner points within a 5×5 radius around the image centre-point coordinates, use the rigorous geometric model to calculate the ground-point longitude and latitude corresponding to each of the four corner points, and establish the affine transformation model Model1 between image and ground coordinates from the four corner points and their corresponding ground-point longitudes and latitudes;
(2) Use Model1 to calculate the image-point coordinates (i1, j1) corresponding to the ground coordinates (lat, lon);
(3) Choose four corner points within a 5×5 radius around image point (i1, j1), calculate the ground-point longitude and latitude corresponding to each, and establish the affine transformation model Model2 between image and ground coordinates from these four corner points and their corresponding ground-point longitudes and latitudes;
(4) Use Model2 to calculate the image-point coordinates (i2, j2) corresponding to the ground coordinates (lat, lon);
(5) Calculate the distance L between (i2, j2) and (i1, j1). If L is greater than the set threshold, continue building affine transformation models near point (i2, j2) and iterate; if L is less than the threshold, end the iteration, and the image-point coordinates calculated in step (4) are the coordinates corresponding to the ground point.
3. Smooth and fit the interior-orientation parameters of each CCD of the actual camera, together with the actual orbit, attitude and line-time data, to establish the imaging parameters of an ideal (virtual) CCD.
This group of parameters is characterised by each parameter varying smoothly, either linearly or as a low-order polynomial; the target for establishing each parameter is as follows:
(1) Virtual CCD interior-orientation elements: fit the calibrated true interior-orientation parameters of each camera CCD linearly by least squares to establish ideal, distortion-free interior-orientation elements;
(2) Virtual image line-time series: divide the interval between the imaging start and end times equally by the number of imaging lines, so that the integration interval between image lines is equal;
(3) Virtual image orbit: smooth the downlinked GPS measurement data (position and velocity) and fit them with a straight line or low-order polynomial curve;
(4) Virtual image attitude: smooth the downlinked or computed attitude-angle data and fit them with a straight line or low-order polynomial curve.
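The fitting targets above can be sketched with NumPy's least-squares polynomial fit; the function names and degree defaults are illustrative:

```python
import numpy as np

def fit_virtual_parameter(t, values, deg=1):
    """Least-squares fit of a per-line parameter series (interior orientation,
    GPS position/velocity, attitude angles) by a straight line (deg=1) or a
    low-order polynomial, yielding the smooth virtual-CCD parameter."""
    return np.poly1d(np.polyfit(t, values, deg))

def virtual_line_times(t_start, t_end, n_lines):
    """Divide the imaging interval equally so every virtual image line has
    the same integration interval."""
    return np.linspace(t_start, t_end, n_lines)
```

The returned polynomial can then be evaluated at any virtual line time, which is what makes the virtual CCD's parameters free of the jitter and distortion present in the measured ones.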
4. According to the forward/inverse models of each camera CCD and those of the virtual scan CCD, establish a direct forward/inverse solving model between image points on each camera CCD and image points on the virtual CCD by mapping through identical ground coordinates.
Forward model: let N denote the piece index of a CCD. For an image point (i, j) on piece CCD N, where i denotes the line number and j the column number, the corresponding virtual-CCD image-point coordinates (i1, j1) are computed as follows:
a) Calculate the ground coordinates (lat, lon) of image point (i, j) using the image-point/ground-point forward model of this CCD piece;
b) Calculate the image-point coordinates (i1, j1) of the virtual CCD corresponding to ground coordinates (lat, lon) using the image-point/ground-point inverse model of the virtual CCD.
Inverse model: let N denote the number of CCD strips. For a virtual CCD image point (i1, j1), the corresponding strip index and the image point coordinates (i, j) on that strip are computed as follows:
a) Compute the ground coordinates (lat, lon) corresponding to the virtual CCD image point (i1, j1) using the forward model between image points and ground points of the virtual CCD;
b) Initialize the strip index ccdID to N/2;
c) Compute the image point coordinates (i2, j2) corresponding to the ground coordinates (lat, lon) using the inverse model between image points and ground points of strip ccdID;
d) Judge from the value of i2 whether the ground point falls on this strip;
e) If i2 is less than overlap/2 (where overlap is the number of overlapping pixels between two adjacent CCD strips), the ground point lies to the left of this strip; set ccdID to ccdID - 1 and repeat from step c);
f) If i2 is greater than (the width of this strip - overlap/2), the ground point lies to the right of this strip; set ccdID to ccdID + 1 and repeat from step c);
g) If i2 lies within (overlap/2, strip width - overlap/2), the ground point falls inside this strip, and the virtual image point (i1, j1) corresponds to the image point (i2, j2) on this strip.
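The strip-search loop of steps b) through g) can be sketched as follows (an illustrative assumption, with a toy strip geometry standing in for the real inverse models; all names are hypothetical):

```python
def find_strip(ground_pt, inverse_models, width, overlap, n_strips):
    """Locate the CCD strip containing ground_pt (steps b-g above).

    inverse_models[k](ground_pt) returns the coordinate i2 the ground
    point would have on strip k; a result outside the interval
    [overlap/2, width - overlap/2] means the point lies on a neighbour.
    """
    ccd_id = n_strips // 2                      # b) start from the middle strip
    while 0 <= ccd_id < n_strips:
        i2 = inverse_models[ccd_id](ground_pt)  # c) project onto this strip
        if i2 < overlap / 2:                    # e) point lies to the left
            ccd_id -= 1
        elif i2 > width - overlap / 2:          # f) point lies to the right
            ccd_id += 1
        else:                                   # g) point is inside this strip
            return ccd_id, i2
    raise ValueError("ground point outside the camera swath")

# Toy setup: 4 strips of width 100 laid end to end with a 10-pixel overlap.
width, overlap, n = 100, 10, 4
models = [lambda g, k=k: g - k * (width - overlap) for k in range(n)]
strip, i2 = find_strip(100.0, models, width, overlap, n)
```

Starting from the middle strip, the loop steps left once and stops on strip 1, where the projected coordinate falls inside the valid interval.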
5. According to the mapping between the image points of each camera CCD and those of the virtual CCD, gray-level resampling is performed for every image point of the virtual scan scene to obtain the stitched image. Specifically, the raw image data acquired by each CCD are read; for each pixel of the virtual CCD, the strip index and the image point coordinates on the corresponding original CCD are determined, the pixels are resampled one by one (using multiple threads), and the resampled, seamlessly stitched image is output.
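A minimal sketch of the per-pixel resampling in step 5, assuming a hypothetical mapping function standing in for the third inverse model; nearest-neighbour resampling is shown for brevity, whereas a practical system would typically use bilinear or cubic interpolation:

```python
import numpy as np

def stitch(virtual_shape, raw_strips, mapping):
    """Fill the virtual scene pixel by pixel (step 5).

    mapping(i1, j1) -> (strip_id, i, j) plays the role of the third
    inverse model; nearest-neighbour resampling for brevity.
    """
    out = np.zeros(virtual_shape, dtype=raw_strips[0].dtype)
    for i1 in range(virtual_shape[0]):
        for j1 in range(virtual_shape[1]):
            sid, i, j = mapping(i1, j1)
            out[i1, j1] = raw_strips[sid][int(round(i)), int(round(j))]
    return out

# Toy example: two 2x2 strips copied side by side into a 2x4 scene.
strips = [np.arange(4).reshape(2, 2), 10 + np.arange(4).reshape(2, 2)]
scene = stitch((2, 4), strips, lambda i1, j1: (j1 // 2, i1, j1 % 2))
```

Because each virtual pixel is computed independently, the outer loops parallelize naturally, matching the multithreaded processing mentioned above.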
As can be seen from the above description, the present invention achieves the following technical effects: using the geometric constraints between the CCD strips during imaging and the continuity of the object space, the image point coordinate mapping between the virtual scan scene and the raw imagery is established, thereby achieving seamless inner field-of-view stitching of the imaging data; at the same time, least-squares fitting is used to obtain ideal CCD interior orientation parameters, effectively reducing the internal geometric distortion of the image. This overcomes the limitations of traditional stitching methods and provides a rigorous and well-founded technical scheme for producing high-quality virtual scan scene products.
Obviously, those skilled in the art should understand that the modules and steps of the present invention described above may be implemented on a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given herein, or they may be implemented as separate integrated circuit modules, or multiple modules or steps among them may be implemented as a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.
The above are only the preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A remote sensing image splicing method, characterized by comprising:
establishing a first forward model and a first inverse model between image points and ground points on each CCD according to imaging parameters of each CCD, generating imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and establishing a second forward model and a second inverse model between image points and ground points on the virtual CCD according to the imaging parameters of the virtual CCD;
establishing a third forward model and a third inverse model between the image points on each CCD and the image points on the virtual CCD according to the first forward model, the first inverse model, the second forward model and the second inverse model;
reading image data of each CCD, and determining, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image point coordinates on that CCD; and
resampling each determined image point to obtain the spliced image.
2. The method according to claim 1, characterized in that, for each CCD and the virtual CCD, establishing the respective forward model and inverse model according to the respective imaging parameters comprises:
establishing the forward model between each image point on the CCD and the ground point according to the imaging parameters of each image point; and
computing control point pair parameters according to the forward model of each image point, establishing an affine transform model between each image point on the CCD and the ground point according to the control point pair parameters, and establishing the inverse model between each image point on the CCD and the ground point through multiple iterations.
3. The method according to claim 2, characterized in that, for each image point, establishing the forward model between the image point on the CCD and the ground point according to the imaging parameters of the image point comprises:
obtaining the position [X_s, Y_s, Z_s] of the satellite in the conventional geocentric coordinate system at the imaging moment of the image point;
obtaining the angle psiX between the unit vector of the camera primary optical axis corresponding to the image point and the X axis of the satellite body coordinate system, and the angle psiY with respect to the Y axis;
obtaining the installation matrix M_0 of the satellite body coordinate system relative to the camera;
obtaining the rotation matrix M_1 from the satellite body coordinate system to the orbital coordinate system at the imaging moment of the image point;
obtaining the rotation matrix M_2 from the orbital coordinate system to the J2000.0 coordinate system at the imaging moment of the image point;
obtaining the rotation matrix M_3 from the J2000.0 coordinate system to the WGS84 coordinate system at the imaging moment of the image point; and
determining the forward model between the image point and the ground point as:
[X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u * M_3 * M_2 * M_1 * M_0 * [tan(psiX), -tan(psiY), 1]^T, wherein [X_G, Y_G, Z_G] are the coordinates, in the conventional geocentric coordinate system, of the ground target point corresponding to the image point, and u is a scale factor.
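The forward model above can be evaluated numerically as in the following sketch, where the rotation matrices are illustrative identity matrices rather than real calibration data, and all names are assumptions:

```python
import numpy as np

def forward_model(Xs, M3, M2, M1, M0, psiX, psiY, u):
    # [XG, YG, ZG]^T = [Xs, Ys, Zs]^T + u * M3 M2 M1 M0 [tan psiX, -tan psiY, 1]^T
    look = np.array([np.tan(psiX), -np.tan(psiY), 1.0])
    return Xs + u * (M3 @ M2 @ M1 @ M0 @ look)

# Illustrative values only: identity rotations, boresight along +Z,
# satellite position on the X axis, scale factor u = 500 km.
I = np.eye(3)
XG = forward_model(np.array([7.0e6, 0.0, 0.0]), I, I, I, I, 0.0, 0.0, 5.0e5)
```

With zero look angles the ground point simply lies u metres along the boresight from the satellite position.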
4. The method according to claim 2 or 3, characterized in that computing control point pair parameters according to the forward model of each image point, establishing the affine transform model between each image point on the CCD and the ground point according to the control point pair parameters, and establishing the inverse model between each image point on the CCD and the ground point through multiple iterations comprises:
choosing four corner points within a predetermined radius around the image center point coordinates, and computing the ground point latitudes and longitudes corresponding to the four corner points respectively using the rigorous geometric model;
establishing an affine transform model Model1 between image points and ground points from the four corner points and their corresponding ground point latitudes and longitudes;
computing the image point (i1, j1) corresponding to the ground point (lat, lon) according to the affine transform model Model1;
choosing four corner points within the predetermined radius around the image point (i1, j1), computing the ground point latitudes and longitudes corresponding to the four corner points respectively, and establishing an affine transform model Model2 between image points and ground points from these four corner points and their corresponding ground point latitudes and longitudes;
computing the image point (i2, j2) corresponding to the ground point (lat, lon) according to the affine transform model Model2;
determining the distance L between image point (i2, j2) and image point (i1, j1), and judging whether the distance L is greater than a predetermined threshold; and
if the distance L is greater than the predetermined threshold, continuing to select affine transform models near the point (i2, j2) and continuing the iterative computation until the distance L is less than or equal to the predetermined threshold; if the distance L is less than or equal to the predetermined threshold, the ground point (lat, lon) is the ground point corresponding to image point (i1, j1).
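The iterative affine inversion described above can be sketched as follows; the forward model here is a hypothetical affine map chosen so the behaviour is easy to verify, and all function names are assumptions:

```python
import numpy as np

def affine_inverse(forward, lat_lon, guess, radius=2.0, tol=1e-6, max_iter=20):
    """Invert a forward model (image -> ground) by local affine fitting.

    Four corner points around the current estimate are projected with
    the rigorous forward model, an affine model ground -> image is
    fitted to them by least squares, and the estimate is refined until
    the update step is below tol.
    """
    est = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        corners = est + radius * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
        ground = np.array([forward(c) for c in corners])
        # Fit image = [lat, lon, 1] @ A (the local affine model).
        G = np.column_stack([ground, np.ones(4)])
        A, _, _, _ = np.linalg.lstsq(G, corners, rcond=None)
        new_est = np.array(lat_lon + (1.0,)) @ A
        if np.linalg.norm(new_est - est) <= tol:
            return new_est
        est = new_est
    return est

# Toy forward model: a known affine map, so the inverse is exact and
# the iteration converges in two passes.
fwd = lambda p: np.array([0.01 * p[0] + 30.0, 0.02 * p[1] + 110.0])
img = affine_inverse(fwd, (30.5, 111.0), (0.0, 0.0))
```

For the toy map, the ground point (30.5, 111.0) corresponds to the image point (50, 50), which the iteration recovers.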
5. The method according to claim 1, characterized in that, for each image point, establishing the third forward model and the third inverse model between the image points on each CCD and the image points on the virtual CCD according to the first forward model, the first inverse model, the second forward model and the second inverse model comprises:
computing the ground point (lat, lon) at image point (i, j) on a CCD according to the first forward model of the CCD, and computing the image point (i1, j1) of the virtual CCD corresponding to the ground point (lat, lon) according to the second inverse model of the virtual CCD, obtaining the third forward model for image point (i, j); and
A. computing the ground point (lat, lon) corresponding to image point (i1, j1) on the virtual CCD according to the second forward model of the virtual CCD;
B. computing the image point (i2, j2) on CCD M corresponding to the ground point (lat, lon) according to the first inverse model of CCD M, wherein M is a preset strip index;
C. judging from the value of i2 whether the ground point (lat, lon) falls on CCD M, wherein:
if i2 is less than overlap/2, the ground point (lat, lon) lies to the left of CCD M, the strip index M is set to M - 1, and step A is returned to, wherein overlap is the number of overlapping image points between two adjacent CCD strips;
if i2 is greater than the width of CCD M minus overlap/2, the ground point (lat, lon) lies to the right of CCD M, the strip index M is set to M + 1, and step A is returned to; and
if i2 lies between overlap/2 and the width of CCD M minus overlap/2, the ground point (lat, lon) falls within CCD M, the image point (i1, j1) on the virtual CCD corresponds to the image point (i2, j2) on CCD M, and the third inverse model for image point (i1, j1) is obtained.
6. The method according to claim 1, characterized in that generating the imaging parameters of the virtual CCD according to the imaging parameters of each CCD comprises:
smoothing and fitting the imaging parameters of each CCD to obtain the imaging parameters of the virtual CCD.
7. The method according to claim 6, characterized in that the imaging parameters of each CCD comprise the interior orientation elements of each CCD, the imaging parameters of the virtual CCD comprise the interior orientation elements of the virtual CCD, and smoothing and fitting the imaging parameters of each CCD to obtain the imaging parameters of the virtual CCD comprises:
fitting the interior orientation parameters of each CCD linearly by least squares to establish the interior orientation elements of the virtual CCD.
8. A remote sensing image splicing apparatus, characterized by comprising:
a first establishing module, configured to establish a first forward model and a first inverse model between image points and ground points on each CCD according to imaging parameters of each CCD, generate imaging parameters of a virtual CCD according to the imaging parameters of each CCD, and establish a second forward model and a second inverse model between image points and ground points on the virtual CCD according to the imaging parameters of the virtual CCD;
a second establishing module, configured to establish a third forward model and a third inverse model between the image points on each CCD and the image points on the virtual CCD according to the first forward model, the first inverse model, the second forward model and the second inverse model;
a determining module, configured to read image data of each CCD and determine, according to the third forward model and the third inverse model, the CCD corresponding to each image point on the virtual CCD and the image point coordinates on that CCD; and
a processing module, configured to resample each determined image point to obtain the spliced image.
9. The apparatus according to claim 8, characterized in that the first establishing module is configured to establish the forward model and the inverse model according to the imaging parameters of each CCD or the imaging parameters of the virtual CCD, and the first establishing module comprises:
a forward model determining unit, configured to establish the forward model between each image point on the CCD and the ground point according to the imaging parameters of each image point; and
an inverse model determining unit, configured to compute control point pair parameters according to the forward model of each image point, establish an affine transform model between each image point on the CCD and the ground point according to the control point pair parameters, and establish the inverse model between each image point on the CCD and the ground point through multiple iterations;
or
the second establishing module is specifically configured to:
compute the ground point (lat, lon) at image point (i, j) on a CCD according to the first forward model of the CCD, and compute the image point (i1, j1) of the virtual CCD corresponding to the ground point (lat, lon) according to the second inverse model of the virtual CCD, obtaining the third forward model for image point (i, j); and
A. compute the ground point (lat, lon) corresponding to image point (i1, j1) on the virtual CCD according to the second forward model of the virtual CCD;
B. compute the image point (i2, j2) on CCD M corresponding to the ground point (lat, lon) according to the first inverse model of CCD M, wherein M is a preset strip index;
C. judge from the value of i2 whether the ground point (lat, lon) falls on CCD M, wherein:
if i2 is less than overlap/2, the ground point (lat, lon) lies to the left of CCD M, the strip index M is set to M - 1, and step A is returned to, wherein overlap is the number of overlapping image points between two adjacent CCD strips;
if i2 is greater than the width of CCD M minus overlap/2, the ground point (lat, lon) lies to the right of CCD M, the strip index M is set to M + 1, and step A is returned to; and
if i2 lies between overlap/2 and the width of CCD M minus overlap/2, the ground point (lat, lon) falls within CCD M, the image point (i1, j1) on the virtual CCD corresponds to the image point (i2, j2) on CCD M, and the third inverse model for image point (i1, j1) is obtained.
10. The apparatus according to claim 9, characterized in that the forward model determining unit is configured to:
obtain the position [X_s, Y_s, Z_s] of the satellite in the conventional geocentric coordinate system at the imaging moment of the image point;
obtain the angle psiX between the unit vector of the camera primary optical axis corresponding to the image point and the X axis of the satellite body coordinate system, and the angle psiY with respect to the Y axis;
obtain the installation matrix M_0 of the satellite body coordinate system relative to the camera;
obtain the rotation matrix M_1 from the satellite body coordinate system to the orbital coordinate system at the imaging moment of the image point;
obtain the rotation matrix M_2 from the orbital coordinate system to the J2000.0 coordinate system at the imaging moment of the image point;
obtain the rotation matrix M_3 from the J2000.0 coordinate system to the WGS84 coordinate system at the imaging moment of the image point; and
determine the forward model between the image point and the ground point as:
[X_G, Y_G, Z_G]^T = [X_s, Y_s, Z_s]^T + u * M_3 * M_2 * M_1 * M_0 * [tan(psiX), -tan(psiY), 1]^T, wherein [X_G, Y_G, Z_G] are the coordinates, in the conventional geocentric coordinate system, of the ground target point corresponding to the image point, and u is a scale factor;
or
the inverse model determining unit is configured to:
choose four corner points within a predetermined radius around the image center point coordinates, and compute the ground point latitudes and longitudes corresponding to the four corner points respectively using the rigorous geometric model;
establish an affine transform model Model1 between image points and ground points from the four corner points and their corresponding ground point latitudes and longitudes;
compute the image point (i1, j1) corresponding to the ground point (lat, lon) according to the affine transform model Model1;
choose four corner points within the predetermined radius around the image point (i1, j1), compute the ground point latitudes and longitudes corresponding to the four corner points respectively, and establish an affine transform model Model2 between image points and ground points from these four corner points and their corresponding ground point latitudes and longitudes;
compute the image point (i2, j2) corresponding to the ground point (lat, lon) according to the affine transform model Model2;
determine the distance L between image point (i2, j2) and image point (i1, j1), and judge whether the distance L is greater than a predetermined threshold; and
if the distance L is greater than the predetermined threshold, continue to select affine transform models near the point (i2, j2) and continue the iterative computation until the distance L is less than or equal to the predetermined threshold; if the distance L is less than or equal to the predetermined threshold, the ground point (lat, lon) is the ground point corresponding to image point (i1, j1).
CN201410568949.1A 2014-10-22 2014-10-22 Remote sensing image splicing method and apparatus Pending CN105374009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410568949.1A CN105374009A (en) 2014-10-22 2014-10-22 Remote sensing image splicing method and apparatus


Publications (1)

Publication Number Publication Date
CN105374009A true CN105374009A (en) 2016-03-02

Family

ID=55376182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410568949.1A Pending CN105374009A (en) 2014-10-22 2014-10-22 Remote sensing image splicing method and apparatus

Country Status (1)

Country Link
CN (1) CN105374009A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014488A1 (en) * 2004-07-09 2007-01-18 Ching-Chien Chen Automatically and accurately conflating road vector data, street maps, and orthoimagery
CN103679711A (en) * 2013-11-29 2014-03-26 航天恒星科技有限公司 Method for calibrating in-orbit exterior orientation parameters of push-broom optical cameras of remote sensing satellite linear arrays
CN103914808A (en) * 2014-03-14 2014-07-09 国家测绘地理信息局卫星测绘应用中心 Method for splicing ZY3 satellite three-line-scanner image and multispectral image
US20140233809A1 (en) * 2011-05-13 2014-08-21 Beijing Electric Power Economic Research Institute Method and Device for Processing Geological Information


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU, Bin et al.: "Verification and Analysis of the Positioning Accuracy of ZY-3 Satellite Sensor-Corrected Products", Remote Sensing for Land & Resources *
TANG, Xinming et al.: "Triple Linear-Array Imaging Geometry Model of the ZY-3 Surveying Satellite and Its Preliminary Accuracy Validation", Acta Geodaetica et Cartographica Sinica *
ZHANG, Guo et al.: "Inner FOV Stitching of Spaceborne Optical Sensors Based on a Virtual CCD Line Array", Journal of Image and Graphics *
PAN, Hongbo et al.: "Geometric Model of the Sensor-Corrected Products of the ZY-3 Surveying Satellite", Acta Geodaetica et Cartographica Sinica *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106895851A (en) * 2016-12-21 2017-06-27 中国资源卫星应用中心 A kind of sensor calibration method that many CCD polyphasers of Optical remote satellite are uniformly processed
CN106895851B (en) * 2016-12-21 2019-08-13 中国资源卫星应用中心 A kind of sensor calibration method that the more CCD polyphasers of Optical remote satellite are uniformly processed
CN108242047A (en) * 2017-12-23 2018-07-03 北京卫星信息工程研究所 Optical satellite remote sensing image data bearing calibration based on CCD
CN112816184A (en) * 2020-12-17 2021-05-18 航天恒星科技有限公司 Uncontrolled calibration method and device for optical remote sensing satellite

Similar Documents

Publication Publication Date Title
CN104897175B (en) Polyphaser optics, which is pushed away, sweeps the in-orbit geometric calibration method and system of satellite
CN107504981B (en) Satellite attitude error correction method and device based on laser height measurement data
US8494225B2 (en) Navigation method and apparatus
US6735348B2 (en) Apparatuses and methods for mapping image coordinates to ground coordinates
Hu et al. Understanding the rational function model: methods and applications
Li et al. Rigorous photogrammetric processing of HiRISE stereo imagery for Mars topographic mapping
CN109115186B (en) 360-degree measurable panoramic image generation method for vehicle-mounted mobile measurement system
CN102741706B (en) The geographical method with reference to image-region
CN107144293A (en) A kind of geometric calibration method of video satellite area array cameras
CN107192376B (en) Unmanned plane multiple image target positioning correction method based on interframe continuity
CN106403902A (en) Satellite-ground cooperative in-orbit real-time geometric positioning method and system for optical satellites
CN102735216B (en) CCD stereoscopic camera three-line imagery data adjustment processing method
CN102410831B (en) Design and positioning method of multi-stripe scan imaging model
CN102519436B (en) Chang'e-1 (CE-1) stereo camera and laser altimeter data combined adjustment method
CN105091906A (en) High-resolution optical push-broom satellite steady-state reimaging sensor calibration method and system
CN102737357B (en) Method for generating simulation data of lunar three-linear array camera images
CN103673995A (en) Calibration method of on-orbit optical distortion parameters of linear array push-broom camera
CN101114022A (en) Navigation multiple spectrum scanner geometric approximate correction method under non gesture information condition
CN106895851A (en) A kind of sensor calibration method that many CCD polyphasers of Optical remote satellite are uniformly processed
CN105444778B (en) A kind of star sensor based on imaging geometry inverting is in-orbit to determine appearance error acquisition methods
CN103310487B (en) A kind of universal imaging geometric model based on time variable generates method
CN103697864A (en) Narrow-view-field double-camera image fusion method based on large virtual camera
CN106525054A (en) Single satellite autonomous orbit measuring method adopting on-satellite push-broom remote sensing image information
CN110986888A (en) Aerial photography integrated method
CN114858133B (en) Attitude low-frequency error correction method under fixed star observation mode

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160302