JPS61193269A - Satellite image correction processing system - Google Patents

Satellite image correction processing system

Info

Publication number
JPS61193269A
JPS61193269A (application number JP60032640A)
Authority
JP
Japan
Prior art keywords
point
convergence
image
distortion correction
corrected image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP60032640A
Other languages
Japanese (ja)
Inventor
Yoichi Seto
洋一 瀬戸
Hiroyuki Saito
博之 斉藤
Fuminobu Furumura
文伸 古村
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP60032640A priority Critical patent/JPS61193269A/en
Publication of JPS61193269A publication Critical patent/JPS61193269A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To correct observed images regardless of the state of the convergence conditions by making it possible to avoid the occurrence of non-convergence points. CONSTITUTION: For example, in order to obtain the coordinates of the non-convergence point in the 5th row, 1st column (i=5, j=1), the distortion correction coefficients a0-a3, b0-b3 of the block comprising (3,1), (4,1), (3,2), (4,2) are used. That is, the pixel-direction coordinate value x51 of the (5,1) lattice point is obtained from a0+a1*2*xL+X31, and the line-direction coordinate y51 from b0+b1*2*xL+Y31. X31 is the pixel-direction coordinate value of the lattice point (3,1) on the corrected image, while Y31 is the line-direction coordinate value of the lattice point (3,1) on the corrected image.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Application of the Invention]

The present invention relates to satellite image correction processing, and in particular to a satellite image correction processing method suited to efficiently determining convergence points when calculating distortion correction coefficients.

[Background of the Invention]

As shown in FIG. 1, satellite image correction processing consists of two stages: a process 20 that calculates distortion correction coefficients using the satellite attitude and orbit data 10, and a distortion correction (interpolation) process 40 that applies the calculated distortion correction coefficients 30 to the observed image data 10 to obtain a corrected image 50.

The principle of satellite image correction processing is explained below with reference to FIG. 2.

1. Distortion correction coefficient calculation process 20: First, the geometric correspondence between the coordinate system of the corrected image 60 and that of the observed image 70 is expressed by a mapping function Φ⁻¹.

Here Φ⁻¹ is a function of the satellite orbit and attitude data, the scanning-mirror deflection angle, the shape of the earth, and so on; (x, y) are coordinate values on the corrected image and (l, p) are coordinate values on the observed image. Computing Φ⁻¹ rigorously point by point takes too much processing time, so the image is divided into equally spaced blocks 80 as shown in FIG. 2: only the block grid points are computed rigorously, and points within a block are obtained from the linear approximation (piecewise bilinear equation) shown in equation (2), reducing the processing time.

a_i: piecewise bilinear coefficients in the pixel direction; b_i: piecewise bilinear coefficients in the line direction (i = 0, 1, 2, 3). These a_i, b_i are called the distortion correction coefficients.

2. Distortion correction process 40: The point Q(l, p) on the observed image 70 corresponding to a point P(x′, y′) on the corrected image 60 is determined using the distortion correction coefficients.

Once the coordinates (l, p) are determined, the pixel value (luminance level) of the observed image 70 at that point can be obtained.

In general, however, (l, p) does not take integer values, so the pixel value at the coordinates (x, y) is obtained by interpolation using several pixel values surrounding (l, p).

Interpolation methods include the cubic convolution method and the nearest neighbor method.
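As a concrete illustration, the nearest neighbor method can be sketched as below. This is a minimal stand-alone sketch, not the patent's implementation; the function name and the toy image are invented for the example.

```python
import numpy as np

def nearest_neighbor(image, l, p):
    """Sample an observed image at non-integer (line, pixel) coordinates
    by rounding to the nearest grid position (the nearest neighbor method)."""
    li = int(round(l))
    pi = int(round(p))
    # Clamp to the image bounds so points near the edge remain valid.
    li = min(max(li, 0), image.shape[0] - 1)
    pi = min(max(pi, 0), image.shape[1] - 1)
    return image[li, pi]

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 observed image
print(nearest_neighbor(img, 1.3, 2.7))  # samples pixel (1, 3) -> 7.0
```

Cubic convolution would instead weight a 4x4 neighborhood with a piecewise cubic kernel, trading speed for smoothness.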

By performing the above processing over the entire corrected image, the observed image can be corrected.

Next, the drawbacks of the conventional distortion correction coefficient calculation method are described. The mapping Φ⁻¹ from the corrected image to the observed image is obtained by projecting an initial point set on the observed image onto the corrected image with the mapping T given by the geometric model, and then repeatedly correcting the initial point on the observed image and recomputing T until the error with respect to the grid point set on the corrected image becomes smaller than a specified error amount. This method raises the following problems.

(1) When the domain of the corrected image extends beyond the region actually captured, the point on the observed image corresponding to a point on the corrected image cannot be determined and the convergence calculation does not converge. If even one point fails to converge, the distortion correction coefficients cannot be obtained and correction processing becomes impossible.

(2) Points with poor convergence behavior require many iterations before converging, so the calculation takes a long time.

Known examples that describe the shape distortion correction processing of satellite images in relatively fine detail include the following.

(a) Hitachi Review, Kiyoshi Tsuchiya et al., "High-Precision Earth Observation Image Information Processing Technology by an All-Digital Method". (b) Image Processing and Analysis, edited by the Japan Remote Sensing Study Group, p. 172, Kyoritsu Shuppan, 1981 (Showa 56).

[Object of the Invention]

An object of the present invention is to provide a method that, when calculating distortion correction coefficients, solves both the non-convergence problem and the problem of increased processing time in the convergence calculation that finds points on the observed image corresponding to the corrected image.

[Summary of the Invention]

When correcting the shape distortion of an image captured by a satellite, the relationship between the domain of the corrected image and the captured region of the observed image can make projection from the observed image onto the corrected image impossible during the distortion correction coefficient calculation, so the calculation may fail to converge.

In that case, if even one non-convergent point exists, the distortion correction coefficients cannot be calculated and the captured image cannot be corrected.

In contrast, in the present invention the coordinates of a grid point that does not converge are obtained by interpolation or extrapolation from the coordinates of the surrounding converged grid points. This eliminates non-convergent points and avoids the situation in which the distortion correction coefficients cannot be determined. Furthermore, by using the point obtained by interpolation or extrapolation as the initial point, that is, an initial point close to the true convergence point, a further convergence calculation is performed to find the convergence point, which makes the convergence calculation more efficient.

[Embodiment of the Invention] An embodiment of the present invention is described below with reference to the drawings.

As shown in FIG. 1, the configuration of the image distortion correction processing of the present invention for NOAA satellite (US meteorological satellite) images consists of a distortion correction coefficient calculation section 20 and a distortion correction section 40.

The processing is as follows.

(1) Distortion correction coefficient calculation process 20: A mapping function relating the corrected image to the observed image is obtained from the satellite orbit and attitude data and the deflection angle of the scanning mirror. The mapping function is expressed by the bilinear equation shown in equation (2).
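Equation (2) itself is not legible in this text. A standard piecewise bilinear form consistent with the coefficients a0-a3, b0-b3 used later, with (x, y) the corrected-image coordinates and (l, p) the observed-image line and pixel coordinates, would be the following (an assumption, not the patent's exact equation):

```latex
% Assumed bilinear form of equation (2); the exact original is not legible here.
p = a_0 + a_1 x + a_2 y + a_3 x y \qquad\text{(pixel direction)}
l = b_0 + b_1 x + b_2 y + b_3 x y \qquad\text{(line direction)}
```

Four coefficient values per axis match the four corner grid points of each block.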

To determine the coefficients of the bilinear equation, the grid point coordinate values on the corrected image and the coordinate values of the corresponding grid points on the observed image are required. The grid points on the corrected image may be set arbitrarily; the grid point coordinate values on the observed image are calculated by repeatedly performing the mapping from the observed image to the corrected image using information such as the satellite orbit and attitude data and the scanning-mirror deflection angle.
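Once the four corner correspondences of a block are known, the coefficients for one output axis (for instance the pixel-direction coefficients a0-a3) follow from a 4x4 linear solve. A minimal sketch, assuming the standard bilinear form a0 + a1*x + a2*y + a3*x*y; the function name and sample values are invented for illustration.

```python
import numpy as np

def bilinear_coeffs(xy, lp):
    """Solve for one axis of distortion correction coefficients (a0..a3)
    from four corrected-image grid points xy = [(x, y), ...] and the matching
    observed-image coordinate values lp = [l_or_p, ...]."""
    # One row [1, x, y, x*y] per grid point gives a 4x4 system.
    A = np.array([[1.0, x, y, x * y] for x, y in xy])
    return np.linalg.solve(A, np.asarray(lp, dtype=float))

# Toy block: unit-cell corners and a known mapping p = 2 + 3x + 4y + 0.5xy.
corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
observed = [2 + 3 * x + 4 * y + 0.5 * x * y for x, y in corners]
a = bilinear_coeffs(corners, observed)  # recovers a0=2, a1=3, a2=4, a3=0.5
```

The line-direction coefficients b0-b3 come from the same solve with the observed line coordinates on the right-hand side.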

FIG. 3 shows the flow of the series of processes for calculating the distortion correction coefficients.

(a) Calculation 140 of the grid point coordinate values on the observed image corresponding to the grid points on the corrected image: FIG. 4 outlines the corresponding point calculation process.

To find the grid point coordinates 240 on the observed image corresponding to a grid point 230 on the corrected image, an initial point 210 is set on the observed image, and its corresponding point 220 on the corrected image is found using the satellite orbit and attitude information and the deflection angle of the scanning mirror. Next, the difference x′ between the corresponding point 220 and the target point 230 is found and converted into a deviation amount on the observed image. In general, the corrected image coordinates are expressed as distances in meters and the observed image in numbers of pixels, so a distance in meters is converted into a number of pixels by dividing it by the size of one pixel.

The point 240 obtained by adding the deviation amount to the initial point 210 is likewise projected onto the corrected image to obtain a corresponding point 235.

Next, it is determined whether the corresponding point calculation satisfies the convergence condition:

|x_P − x_P′| < ε ……(3)

where x_P is the grid point coordinate set on the corrected image, x_P′ is the corresponding point projected from the observed image, and ε is the convergence tolerance. When expression (3) is satisfied, the point 240 on the observed image is regarded as the grid point corresponding to the point 230 on the corrected image. When expression (3) is not satisfied, the above calculation is repeated until it is; in general the calculation converges after several iterations. In some cases, however, it does not converge within the set iteration limit.
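The correction loop described above can be sketched in one dimension as follows. This is a simplified stand-in: T below is a toy affine mapping in place of the geometric-model projection, and all names and values are invented for the example.

```python
def find_corresponding_point(T, target_x, x0, eps=1e-6, max_iter=20, to_pixels=1.0):
    """Iteratively correct an initial observed-image point q until its
    projection T(q) onto the corrected image falls within eps of the target
    grid point (convergence condition (3)).  to_pixels converts a
    corrected-image deviation (meters) into pixels.  Returns (point, flag)
    so non-convergence can be recorded instead of aborting."""
    q = x0
    for _ in range(max_iter):
        xp = T(q)                      # project onto the corrected image
        deviation = target_x - xp      # error against the grid point
        if abs(deviation) < eps:
            return q, True             # condition (3) satisfied
        q += deviation * to_pixels     # correct the initial point and retry
    return q, False                    # iteration limit hit: non-convergent

# Toy "geometric model": a shift-and-scale mapping with fixed point q = 50.
T = lambda q: 0.9 * q + 5.0
pt, ok = find_corresponding_point(T, target_x=50.0, x0=0.0)
```

Returning a flag rather than raising on failure matches the flag table of step (b) below, where non-convergent grid points are simply marked for later interpolation.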

Next, examples of non-convergence are described. There are the following two cases.

Case 1: No corresponding point exists. An initial grid point 250 is set for finding the point on the observed image corresponding to the grid point 270 on the corrected image.

The projection point 260 of the initial grid point 250 onto the corrected image is found. The deviation between the grid point 270 and the projection point 260 is calculated and converted onto the observed image to obtain a corresponding point 280. In this case, because the corresponding point 280 does not lie on the observed image, the imaging time, satellite orbit and attitude data, and scanning-mirror deflection angle cannot be determined, so projection onto the corrected image is impossible.

Case 2: The iteration limit of the convergence calculation is exceeded. A point on the observed image corresponding to the corrected image exists, but if the calculation does not converge within the iteration limit imposed by computation-time constraints, the point becomes a non-convergent point.

In this process, the coordinates of the point on the observed image corresponding to each grid point on the corrected image are calculated according to the following procedure.

(b) Writing process 150 to the convergence calculation information table: FIG. 5 outlines the table, assuming 5 grid points in the vertical direction and 5 in the horizontal direction.

When the convergence calculation for a grid point converges, a flag of 1 (290) is set; when it does not converge, a flag of 0 (295) is set.

(c) Interpolation calculation process 160 for the non-convergence point coordinates: Non-convergent points are retrieved from the convergence calculation information table, that is, the grid points whose flag is 0 are searched for.

A non-convergent point is calculated by interpolation using the coordinates of the adjacent converged grid points.

The details are as follows.

(i) Calculate the distortion correction coefficients using the converged points.

As shown in FIG. 6, the distortion correction coefficients of equation (2) are calculated using the grid point coordinates of the four converged points 310 (● marks) on the observed image and the four points 320 (○ marks) on the corrected image. (FIG. 6 shows an example of the convergence calculation in the horizontal direction of the two-dimensional vertical and horizontal coordinates.)

That is, the distortion correction coefficients are calculated by substituting the grid point coordinate values on the observed image and on the corrected image into equation (2).

Here x_L is the grid interval in the pixel (horizontal) direction on the corrected image and y_L is the grid interval in the line (vertical) direction; the coefficients b_0 to b_3 can be calculated in the same way.

(ii) Calculate the coordinate values of the non-convergent point.

For example, to find the coordinates of the non-convergent point in the 5th row, 1st column (i = 5, j = 1; written (5,1) below), the distortion correction coefficients a_0 to a_3 and b_0 to b_3 of the block formed by (3,1), (4,1), (3,2), (4,2) are used.

That is, the pixel-direction coordinate value x_51 of the (5,1) grid point is obtained approximately from the adjacent grid point coordinate values and equation (2) as

x_51 = a_0 + a_1 × 2x_L + X_31 ……(4)

and the line-direction coordinate value y_51 as

y_51 = b_0 + b_1 × 2x_L + Y_31 ……(5)

where X_31 is the pixel-direction coordinate value of the grid point (3,1) on the corrected image and Y_31 is its line-direction coordinate value.

(iii) Using the non-convergent point coordinate values obtained in (ii), that is, the provisional convergence points, as initial values, calculate the grid point coordinate values on the observed image corresponding to the non-convergent grid points on the corrected image.
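The worked example of equations (4) and (5) amounts to a two-line evaluation. A minimal sketch with invented coefficient and coordinate values (the patent gives no numeric values):

```python
def provisional_point(a, b, x_L, X31, Y31):
    """Provisional (pixel, line) coordinates of the non-convergent grid point
    (5,1), extrapolated per equations (4) and (5) from the coefficients of
    the adjacent converged block (3,1)-(4,2):
        x51 = a0 + a1 * 2*x_L + X31   (pixel direction)
        y51 = b0 + b1 * 2*x_L + Y31   (line direction)
    """
    x51 = a[0] + a[1] * 2 * x_L + X31
    y51 = b[0] + b[1] * 2 * x_L + Y31
    return x51, y51

# Hypothetical block coefficients and grid values, for illustration only.
a = [0.5, 1.02, 0.0, 0.0]   # pixel-direction coefficients a0..a3
b = [0.2, 0.01, 0.0, 0.0]   # line-direction coefficients b0..b3
x51, y51 = provisional_point(a, b, x_L=100.0, X31=300.0, Y31=500.0)
print(x51, y51)  # -> 504.5 502.2
```

The result is only a provisional convergence point; step (iii) refines it by running the convergence calculation again from this much closer initial value.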

The same processing as in (a) above is then performed.

(d) Determination 170 of the convergence state: It is checked whether the processing of (c) applied to the non-convergent points has converged.

(e) Processing 180 when converged: The coordinate values of the convergence point are taken as the projection point of the grid point on the corrected image.

(f) Processing 190 when not converged: The grid point coordinate values on the observed image obtained in step (ii) of (c) are regarded as the convergence point.

(g) Calculation 200 of the distortion correction coefficients: The distortion correction coefficients shown in equation (2) are calculated from the convergence points on the observed image obtained above and the grid point coordinate values on the corrected image.

(2) Distortion correction process 40: Using the distortion correction coefficients obtained above, the coordinate values on the observed image corresponding to each point on the corrected image are found, and the pixel values of the observed image are transferred onto the corrected image.

In this case, the coordinate values obtained on the observed image are not integers, so the pixel values are calculated using an appropriate interpolation method.
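Putting the two stages together, the correction pass can be sketched as below. This assumes the standard bilinear form for equation (2) and substitutes nearest-neighbor sampling for the interpolation step; all names and the identity-mapping test values are invented.

```python
import numpy as np

def correct_image(observed, a, b, out_shape):
    """Distortion correction process 40 (sketch): map each corrected-image
    point (x, y) to observed-image coordinates (l, p) with the bilinear
    coefficients, then sample the observed image by nearest neighbor."""
    out = np.zeros(out_shape, dtype=observed.dtype)
    H, W = observed.shape
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            p = a[0] + a[1] * x + a[2] * y + a[3] * x * y  # pixel direction
            l = b[0] + b[1] * x + b[2] * y + b[3] * x * y  # line direction
            li = min(max(int(round(l)), 0), H - 1)         # clamp to bounds
            pi = min(max(int(round(p)), 0), W - 1)
            out[y, x] = observed[li, pi]
    return out

obs = np.arange(36, dtype=float).reshape(6, 6)
# Identity mapping (p = x, l = y) as the simplest check.
corrected = correct_image(obs, a=[0, 1, 0, 0], b=[0, 0, 1, 0], out_shape=(6, 6))
```

In a real pipeline the coefficients would differ per block, and cubic convolution would replace the rounding step.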

As described above, according to this embodiment, the problems of non-convergence and increased processing time in the convergence calculation that finds points on the observed image corresponding to the corrected image can be solved when calculating the distortion correction coefficients.

[Effects of the Invention]

According to the present invention: (1) Since the occurrence of non-convergent points can be avoided, the observed image can be corrected regardless of the state of the convergence conditions.

(2) By setting the provisional convergence point as the initial value, high-speed, high-accuracy distortion correction processing is possible.

The invention provides the above effects.

[Brief Description of the Drawings]

FIG. 1 is an overview diagram of satellite image correction processing; FIG. 2 shows the relationship between the corrected image and the observed image; FIG. 3 is a flow diagram of the distortion correction coefficient calculation process; FIG. 4 is an explanatory diagram of the convergence calculation process; FIG. 5 shows the convergence calculation information table; and FIG. 6 is an explanatory diagram of the distortion correction coefficient calculation process.

10: satellite orbit and attitude data and captured image; 20: distortion correction coefficient calculation process; 30: distortion correction coefficients; 40: distortion correction process; 140: calculation of grid point coordinate values on the uncorrected image corresponding to grid points on the corrected image; 150: writing to the convergence calculation information table; 155: iteration over the number of grid points; 160: interpolation calculation of non-convergence point coordinates; 170: determination of the convergence state; 180: setting of the convergence point (I); 190: setting of the convergence point (II); 220: calculation of the distortion correction coefficients.

Claims (1)

[Claims] 1. A satellite image correction processing method characterized in that, in a captured-image shape distortion calculation process that finds the intersection of the earth and the sensor line of sight by a convergence calculation from the satellite position and velocity, the sensor line-of-sight angle, and earth shape data, the position coordinates of a point that does not converge are obtained by interpolation from the position coordinates of the surrounding converged points.
JP60032640A 1985-02-22 1985-02-22 Satellite image correction processing system Pending JPS61193269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60032640A JPS61193269A (en) 1985-02-22 1985-02-22 Satellite image correction processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60032640A JPS61193269A (en) 1985-02-22 1985-02-22 Satellite image correction processing system

Publications (1)

Publication Number Publication Date
JPS61193269A true JPS61193269A (en) 1986-08-27

Family

ID=12364445

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60032640A Pending JPS61193269A (en) 1985-02-22 1985-02-22 Satellite image correction processing system

Country Status (1)

Country Link
JP (1) JPS61193269A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8231103B2 (en) 2007-01-19 2012-07-31 Kyungdong Navien Co. Flow control valve


Similar Documents

Publication Publication Date Title
CN109903352B (en) Method for making large-area seamless orthoimage of satellite remote sensing image
US5878174A (en) Method for lens distortion correction of photographic images for texture mapping
Morgan Epipolar resampling of linear array scanner scenes
Hartley et al. Linear pushbroom cameras
JP2003323611A (en) Ortho-correction processing method for satellite photographic image
JPS59133667A (en) Processing system of picture correction
CN108562900B (en) SAR image geometric registration method based on elevation correction
CN110555813A (en) rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
CN111508028A (en) Autonomous in-orbit geometric calibration method and system for optical stereo mapping satellite camera
Lee et al. Georegistration of airborne hyperspectral image data
CN107705272A (en) A kind of high-precision geometric correction method of aerial image
CN109579796B (en) Area network adjustment method for projected image
CN117092621A (en) Hyperspectral image-point cloud three-dimensional registration method based on ray tracing correction
JPS61193269A (en) Satellite image correction processing system
JPH0830194A (en) Method for forming geospecific texture
KR20090072030A (en) An implicit geometric regularization of building polygon using lidar data
CN102509275B (en) Resample method for remote sensing image composited based on image element imaging areas
CN111862332A (en) Method and system for correcting fitting error of satellite image general imaging model
CN108830781B (en) Wide baseline image straight line matching method under perspective transformation model
CN113034572B (en) Epipolar extraction method based on eight-parameter epipolar model
JPH05215848A (en) Image distortion correcting method and device therefor
JP4262830B2 (en) Three-dimensional object image analysis method and related technology
CN113487540B (en) Correction method and device for space-based large-dip-angle image
CN114972013B (en) Fisheye image rapid orthorectification method based on spherical geometry single transformation
Makisara et al. Geometric correction of airborne imaging spectrometer data