CN113379648B - Joint adjustment method for GF-7 and ZY-3 stereoscopic images - Google Patents

Joint adjustment method for GF-7 and ZY-3 stereoscopic images

Info

Publication number
CN113379648B
CN113379648B (application CN202110778949.4A)
Authority
CN
China
Prior art keywords
image
resolution
stereoscopic image
resource
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110778949.4A
Other languages
Chinese (zh)
Other versions
CN113379648A (en)
Inventor
Yue Qingxing (岳庆兴)
Tang Xinming (唐新明)
Wang Xia (王霞)
Xue Yucai (薛玉彩)
Wang Yiying (王艺颖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Original Assignee
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ministry Of Natural Resources Land Satellite Remote Sensing Application Center filed Critical Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority to CN202110778949.4A priority Critical patent/CN113379648B/en
Publication of CN113379648A publication Critical patent/CN113379648A/en
Application granted granted Critical
Publication of CN113379648B publication Critical patent/CN113379648B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a joint adjustment method for GF-7 (Gaofen-7) and ZY-3 (Ziyuan-3) stereoscopic images, which comprises the following steps: performing joint free-network adjustment of the GF-7 two-line-array stereoscopic images and the ZY-3 three-line-array stereoscopic images; re-correcting the RPC parameters of the ZY-3 three-line-array stereoscopic images based on the offset between the small-area GF-7 DSM and the ZY-3 DSM corresponding to the ground coordinates of the GF-7 laser points; acquiring laser control points of the GF-7 two-line-array stereoscopic images and taking them as control information; and performing joint block adjustment of the GF-7 two-line-array stereoscopic images and the ZY-3 three-line-array stereoscopic images. The accuracy of the RPCs is improved through quick-view DSM generation and preprocessing, and the absolute elevation accuracy of the GF-7 and ZY-3 stereoscopic images is improved by jointly introducing the corresponding image points and the laser control points into the adjustment.

Description

Joint adjustment method for GF-7 and ZY-3 stereoscopic images
Technical Field
The invention relates to the technical field of image processing, and in particular to a joint adjustment method for GF-7 (Gaofen-7) and ZY-3 (Ziyuan-3) stereoscopic images.
Background
At present, the foreign Pleiades and WorldView constellations, relying on the technical advantages of multi-satellite observation and agile imaging, can achieve high-resolution multi-view stereoscopic imaging and even satellite oblique photography, and can carry out true three-dimensional modeling of complex ground objects and terrain, including buildings and steep slopes. The next-generation constellations Pleiades Neo and WorldView Legion will be built with higher resolution and shorter revisit times, further enhancing global three-dimensional data acquisition capability. China's stereoscopic surveying and mapping satellites mainly adopt the single-platform, multi-camera mode, with ZY-3 and GF-7 as representatives. Given that resolution and imaging agility still lag behind, how to give full play to the existing stereo satellites, improve the accuracy of stereo products and expand the application of satellite stereo products through joint processing is a problem worth considering; joint block adjustment of multi-satellite stereoscopic images is the basic link of such joint processing and fundamentally determines the accuracy of the subsequent processing steps. The basic theory of satellite stereoscopic image adjustment can be traced to the photogrammetric principles of three-line-array CCD satellite imagery; although that theory mainly describes adjustment based on rigorous imaging models, the basic idea of alternately executing forward intersection and resection is also applicable to the general RPC model. Regarding multi-satellite joint adjustment, Xing Shuai et al. proposed block adjustment methods combining different types of satellite remote sensing images based on the basic principle of block adjustment and realized joint adjustment of SPOT4/5, ERS2, Radarsat1 and other satellites; Xu studied the block adjustment theory of Tianhui-1 and ZY-3 imagery and implemented RPC-based bundle block adjustment; Jie Zhigang et al. proposed and verified a joint adjustment method without ground control for Tianhui-1 and ZY-3 based on the rational function model and an image-space affine correction model; Zheng Yi et al. analyzed and verified the feasibility and achievable accuracy of joint adjustment of ZY-3 and Tianhui-1 satellite images without ground control; Zheng Maoteng carried out block adjustment tests on the Tianhui-1 and ZY-3 three-line-array images respectively and verified the adjustment accuracy of the orientation-image model; Ji Song et al. proposed a constrained block adjustment method for three-line-array satellite images assisted by a small area array, using small area-array image data to assist the block adjustment and improve the overall positioning accuracy of the three-line-array images.
It can be seen that current studies on joint adjustment of stereoscopic images mainly target lower-resolution stereoscopic images such as ZY-3 and Tianhui-1, whose viewing angles are relatively close and whose coverage is large, so tie points that meet the requirements are easy to obtain. The GF-7 sub-meter stereoscopic mapping satellite differs considerably from ZY-3 in resolution and viewing angle, which increases the difficulty of acquiring corresponding image points. The joint processing of GF-7 laser footprint images, laser ranging data, GF-7 stereoscopic images and ZY-3 stereoscopic images is a brand-new technical problem.
Disclosure of Invention
In view of the above, the present invention provides a joint adjustment method for GF-7 and ZY-3 stereoscopic images, which can substantially obviate one or more problems due to limitations and disadvantages of the related art.
The invention provides a joint adjustment method for GF-7 and ZY-3 stereoscopic images, characterized by comprising the following steps:
Step 1, performing joint free-network adjustment of the GF-7 two-line-array stereoscopic images and the ZY-3 three-line-array stereoscopic images, wherein in the joint free-network adjustment, corresponding image points of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images are extracted, intersection control points are obtained by forward intersection of the corresponding image points, and the RPC parameters of the GF-7 and ZY-3 stereoscopic images are corrected with the intersection control points;
Step 2, re-correcting the RPC parameters of the ZY-3 three-line-array stereoscopic images based on the offset between the small-area GF-7 DSM and the ZY-3 DSM corresponding to the ground coordinates of the GF-7 laser points, so as to eliminate systematic errors to the greatest extent;
Step 3, obtaining laser control points of the GF-7 two-line-array stereoscopic images and using them as control information to improve the accuracy of the block adjustment;
Step 4, performing joint block adjustment of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images based on the ZY-3 stereoscopic images whose RPC parameters were re-corrected in Step 2 and the GF-7 laser control points obtained in Step 3.
Preferably, Step 1 specifically comprises the following substeps:
Step 1.1, generating quick views of the ZY-3 three-line-array stereoscopic images and quick views of the GF-7 two-line-array stereoscopic images, and calculating the RPC parameters of the ZY-3 quick views and of the GF-7 quick views;
Step 1.2, correcting the RPC parameters of the GF-7 quick views and the RPC parameters of the ZY-3 quick views respectively;
Step 1.3, calculating the quick-view DSM of the GF-7 two-line-array stereoscopic images and the quick-view DSM of the ZY-3 three-line-array stereoscopic images respectively, so as to narrow the search range for corresponding image points between the GF-7 and ZY-3 stereoscopic images;
Step 1.4, processing the GF-7 quick-view DSMs and the ZY-3 quick-view DSMs respectively to eliminate the relative errors between overlapping GF-7 quick-view DSMs and between overlapping ZY-3 quick-view DSMs, thereby further narrowing the search range for corresponding image points between the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images and improving the efficiency and accuracy of corresponding image point extraction;
Step 1.5, extracting the corresponding image points of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images within that search range, performing forward intersection of the corresponding image points to obtain intersection control points, and correcting the RPC parameters of the GF-7 and ZY-3 stereoscopic images with the intersection control points by least squares.
Preferably, in Step 1.1, the RPC parameters of the ZY-3 quick views and of the GF-7 quick views are calculated according to the following formulas:
LINE_OFF_pre=(LINE_OFF_org-n/2.0+0.5)/n
SAMP_OFF_pre=(SAMP_OFF_org-n/2.0+0.5)/n
LINE_SCALE_pre=LINE_SCALE_org/n
SAMP_SCALE_pre=SAMP_SCALE_org/n
where LINE_OFF_pre is the line offset of the quick view, SAMP_OFF_pre is the column offset of the quick view, LINE_SCALE_pre is the line scale of the quick view, SAMP_SCALE_pre is the column scale of the quick view, LINE_OFF_org is the line offset of the original image, SAMP_OFF_org is the column offset of the original image, LINE_SCALE_org is the line scale of the original image, SAMP_SCALE_org is the column scale of the original image, and n is the reduction factor; here a quick view is a quick view of the ZY-3 three-line-array or GF-7 two-line-array stereoscopic images, and an original image is a ZY-3 three-line-array or GF-7 two-line-array stereoscopic image.
Preferably, in Step 1.2, correcting the RPC parameters of the GF-7 quick views specifically comprises the following substeps:
Step 1.2.1, taking the GF-7 backward-view quick view as the master image, and obtaining corresponding image points between the GF-7 backward-view quick view and the GF-7 forward-view quick view by the SIFT matching algorithm;
Step 1.2.2, calculating the image-space residuals of the corresponding image points in the master image, the image-space residuals comprising line-coordinate residuals and column-coordinate residuals;
Step 1.2.3, filtering out gross error points and obtaining intersection control points based on the image-space residuals, and correcting the respective RPC parameters of the GF-7 forward-view and backward-view quick views using the obtained intersection control points;
Step 1.2.4, re-acquiring the intersection control points using the corrected RPC parameters of the GF-7 forward-view and backward-view quick views, and then updating the respective RPC parameters of the GF-7 forward-view and backward-view quick views using the re-acquired intersection control points;
Step 1.2.5, iterating Steps 1.2.3-1.2.4 to obtain the final corrected RPC parameters of the GF-7 two-line-array quick views.
Correcting the RPC parameters of the ZY-3 quick views specifically comprises the following substeps:
Step 1.2.6, taking the ZY-3 nadir-view quick view as the master image, and obtaining corresponding image points between the ZY-3 nadir-view quick view and the forward-view and backward-view quick views by the SIFT matching algorithm, the corresponding image points obtained among the three views being called three-view corresponding image points;
Step 1.2.7, calculating the image-space residuals of the three-view corresponding image points in the master image, the image-space residuals comprising line-coordinate residuals and column-coordinate residuals;
Step 1.2.8, filtering out gross error points and obtaining intersection control points based on the image-space residuals, and correcting the respective RPC parameters of the ZY-3 nadir-view, forward-view and backward-view quick views using the obtained intersection control points;
Step 1.2.9, re-acquiring the intersection control points using the corrected RPC parameters of the ZY-3 nadir-view, forward-view and backward-view quick views, and updating the respective RPC parameters of the ZY-3 nadir-view, forward-view and backward-view quick views using the re-acquired intersection control points;
Step 1.2.10, iterating to obtain the final corrected RPC parameters of the ZY-3 three-line-array quick views.
Preferably, Step 1.4 specifically comprises the following substeps (an illustrative sketch of the iteration follows this list):
Step 1.4.1, calculating, for the overlapping models corresponding to the overlapping quick-view DSMs, the viewing angles of the images and the included angles between them, selecting the image with the smallest viewing angle whose included angle is smaller than a preset threshold (preferably 7 degrees) as the master image of the overlapping model, and matching corresponding points between the master images of the overlapping models by the SIFT matching algorithm; here a quick-view DSM refers to a GF-7 quick-view DSM or a ZY-3 quick-view DSM;
Step 1.4.2, projecting the corresponding points between the master images of the overlapping models onto the quick-view DSM to obtain object-space corresponding points;
Step 1.4.3, applying RANSAC gross-error filtering to the object-space corresponding points, removing points whose errors exceed a certain threshold, obtaining the object-space three-dimensional coordinate transformation coefficients between overlapping models by least squares, and, based on these coefficients, calculating the relative offsets in longitude, latitude and elevation between the overlapping models using the object-space three-dimensional transformation formula;
Step 1.4.4, eliminating the relative offsets in longitude, latitude and elevation between the overlapping models by an iterative correction algorithm for the planimetric and height offsets of the overlapping models, correcting the RPC parameters of each overlapping quick view, and calculating and recording the mean square errors σ_L, σ_B, σ_H of the relative offsets in longitude, latitude and elevation between all the overlapping models;
Step 1.4.5, repeating Steps 1.4.2-1.4.4 until the mean square errors σ_L, σ_B, σ_H of the relative offsets in longitude, latitude and elevation between all the overlapping models are smaller than a preset threshold, at which point the relative errors between the overlapping models are considered eliminated.
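As a non-authoritative illustration only, the following Python sketch shows the shape of the iterative offset-elimination loop of Steps 1.4.2-1.4.5. It assumes that the object-space corresponding points of each overlapping model pair are available as (longitude, latitude, elevation) arrays and that helper callbacks for re-projection and for shifting a model's RPC offsets are supplied by the caller; the simple mean-offset model and all function names are assumptions, not the patent's exact formulation.

```python
import numpy as np

def relative_offsets(pts_a, pts_b):
    """Mean offsets in longitude, latitude and elevation between the object-space
    corresponding points of two overlapping models (a stand-in for the least-squares
    3-D transformation of Step 1.4.3). pts_a, pts_b: (N, 3) arrays of (lon, lat, h)."""
    return (pts_b - pts_a).mean(axis=0)            # (dL, dB, dH)

def eliminate_model_offsets(model_pairs, reproject, apply_offset,
                            tol=(1e-6, 1e-6, 0.1), max_iter=20):
    """model_pairs: list of (model_a, model_b) overlapping quick-view DSM models.
    reproject(model_a, model_b) -> (pts_a, pts_b): object-space corresponding points
    recomputed with the current RPCs. apply_offset(model, dL, dB, dH): folds a
    planimetric/height shift into the model's RPC offsets. Both are user supplied."""
    sigma = None
    for _ in range(max_iter):
        offsets = []
        for model_a, model_b in model_pairs:
            pts_a, pts_b = reproject(model_a, model_b)
            dL, dB, dH = relative_offsets(pts_a, pts_b)
            # split the relative offset between the two models of the pair
            apply_offset(model_a, +dL / 2, +dB / 2, +dH / 2)
            apply_offset(model_b, -dL / 2, -dB / 2, -dH / 2)
            offsets.append((dL, dB, dH))
        sigma = np.sqrt(np.mean(np.square(offsets), axis=0))   # sigma_L, sigma_B, sigma_H
        if np.all(sigma < np.asarray(tol)):                    # Step 1.4.5 stopping rule
            break
    return sigma
```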
Preferably, Step 1.5 specifically comprises the following substeps:
Step 1.5.1, restoring the RPC parameters of the GF-7 two-line-array quick views to RPC parameters of the full-resolution GF-7 two-line-array stereoscopic images, and restoring the RPC parameters of the ZY-3 three-line-array quick views to RPC parameters of the full-resolution ZY-3 three-line-array stereoscopic images;
Step 1.5.2, taking the ZY-3 nadir-view image and a GF-7 image as master images and the ZY-3 forward-view and backward-view images as auxiliary images, orthorectifying the master and auxiliary images, and then performing least-squares matching between the master and auxiliary images to obtain matching points;
Step 1.5.3, projecting the matching points onto the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images using the RPC parameters restored in Step 1.5.1, so as to obtain the corresponding image points of the GF-7 two-line-array stereoscopic images and of the ZY-3 three-line-array stereoscopic images;
Step 1.5.4, applying RANSAC gross-error filtering to the obtained corresponding image points of the GF-7 and ZY-3 stereoscopic images and removing gross error points;
Step 1.5.5, performing forward intersection of the filtered corresponding image points to calculate intersection control points;
Step 1.5.6, correcting the RPC parameters of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images using the intersection control points;
Step 1.5.7, iterating Steps 1.5.5-1.5.6 until the mean square errors σ_L, σ_B, σ_H of the intersection control point residuals are smaller than a preset threshold, thereby obtaining the final corrected RPC parameters of the GF-7 and ZY-3 stereoscopic images.
Preferably, Step 2 specifically comprises the following substeps:
Step 2.1, determining the image-plane coordinates, on the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images, corresponding to the ground coordinates of the GF-7 laser points;
Step 2.2, extracting small areas separately from the GF-7 and ZY-3 image regions corresponding to those image-plane coordinates, calculating the small-area DSM of the extracted GF-7 regions and the small-area DSM of the extracted ZY-3 regions respectively, and taking the calculated small-area DSMs as the GF-7 DSM and the ZY-3 DSM;
Step 2.3, calculating the elevation offset between the GF-7 DSM and the ZY-3 DSM based on an affine transformation formula;
Step 2.4, correcting the RPC parameters of the ZY-3 three-line-array stereoscopic images based on the elevation offset.
Preferably, Step 3 specifically comprises the following substeps:
Step 3.1, obtaining corresponding image points between the GF-7 two-line-array stereoscopic images and the GF-7 laser footprint images by the SIFT matching algorithm;
Step 3.2, obtaining the ground three-dimensional coordinates of the GF-7 corresponding image points by intersecting them and projecting them onto the DSM of the GF-7 two-line-array stereoscopic images;
Step 3.3, obtaining the corresponding image points of the laser points, and forming laser control points from the laser-point corresponding image points and the ground three-dimensional coordinates of the GF-7 corresponding image points.
Preferably, Step 4 specifically comprises the following substeps:
Step 4.1, based on the ZY-3 three-line-array stereoscopic images whose RPC parameters were re-corrected in Step 2 and the GF-7 two-line-array stereoscopic images of Step 3, calculating the intersection control points of the tie points between the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images, and calculating the intersection ground points of the laser-point corresponding image points from those image points and the RPC parameters of the GF-7 images on which they lie;
Step 4.2, determining the offsets between the intersection points of the laser-point corresponding image points and the laser control points;
Step 4.3, correcting, according to the determined offsets, the intersection control points of the tie points between the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images and the intersection points of the laser-point corresponding image points obtained in Step 4.1, where a tie point is an image point located on at least two of the GF-7 forward-view and backward-view images and the ZY-3 nadir-view, forward-view and backward-view images;
Step 4.4, combining the corrected intersection control points of the tie points and the intersection points of the laser-point corresponding image points into control points for each of the GF-7 forward-view and backward-view images and the ZY-3 nadir-view, forward-view and backward-view images, and correcting the RPC parameters of the GF-7 and ZY-3 stereoscopic images using these control points.
Compared with the prior art, by adopting the above technical scheme the invention has the following advantages and positive effects:
1. The accuracy of the RPCs can be improved through quick-view DSM generation and preprocessing, and an object-space reference is established from the corrected DSMs, which guarantees the accuracy and efficiency of the subsequent point matching and adjustment;
2. Corresponding image points can be obtained between multi-source, multi-view stereoscopic images with large differences in resolution and observation angle, and high-accuracy free-network adjustment can be performed;
3. The accurate positions of the GF-7 laser points on the GF-7 two-line-array stereoscopic images can be obtained, forming laser control point data;
4. The absolute elevation accuracy of the GF-7 and ZY-3 stereoscopic images is improved by jointly introducing the corresponding image points and the laser control points into the adjustment.
Drawings
FIG. 1 is a flow chart of the GF-7 and ZY-3 joint adjustment method according to an embodiment of the invention;
FIG. 2(a) is the quick-view DSM error distribution before the iterative model correction according to an embodiment of the present invention;
FIG. 2(b) is the error distribution after the iterative model correction according to an embodiment of the present invention.
Detailed Description
The present invention will be described more fully hereinafter, in which exemplary embodiments of the invention are shown.
According to the invention, through a coarse-to-fine matching strategy and with the assistance of terrain data, the matching difficulties caused by the differences in sensor, scale and angle among the GF-7 images, the ZY-3 images and the laser footprint images are overcome, and stable block adjustment is achieved. Through whole-area matching within the extent of the footprint images, accurate positioning of the laser center points on the images is realized, and high-accuracy joint block adjustment of the four types of data is then achieved.
The GF-7 and ZY-3 joint adjustment method provided by the embodiment of the invention specifically comprises the following steps:
Step 1, performing joint free-network adjustment of the GF-7 two-line-array stereoscopic images and the ZY-3 three-line-array stereoscopic images. In the joint free-network adjustment, corresponding image points of the GF-7 and ZY-3 stereoscopic images are extracted, intersection control points are obtained by forward intersection of the corresponding image points, and the RPC parameters of the GF-7 and ZY-3 stereoscopic images are corrected with the intersection control points.
Joint adjustment is contrasted here with "independent" adjustment: extracting corresponding image points only within the GF-7 two-line-array stereoscopic images and correcting only the GF-7 RPC parameters, and likewise extracting corresponding image points only within the ZY-3 three-line-array stereoscopic images and correcting only the ZY-3 RPC parameters, with no link between GF-7 and ZY-3, constitutes "independent" adjustment. The invention extracts corresponding image points across the GF-7 and ZY-3 images, and the intersected ground points use all the corresponding image points and all the RPC parameters, so the adjustment is joint.
The free-network adjustment of Step 1 mainly consists of the iterative execution of two steps: (1) obtaining intersection control points by forward intersection of the corresponding image points, and (2) correcting the RPC parameters of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images with the intersection control points. The difficulty lies in obtaining reliable corresponding image points between the GF-7 and ZY-3 stereoscopic images. The method therefore first generates quick views of the GF-7 and ZY-3 stereoscopic images, corrects the RPC parameters of these quick views, and then restores the corrected quick-view RPC parameters to RPC parameters of the full-resolution GF-7 and ZY-3 images; the quick-view processing yields initial RPC parameters that are as accurate as possible, so as to improve the accuracy of the subsequent block adjustment.
Step 1 is described in detail below. The GF-7 two-line-array stereoscopic images comprise a backward-view image and a forward-view image, specifically a backward-view camera image with a mounting angle of -5 degrees and a forward-view camera image with a mounting angle of 26 degrees. The resolution of the GF-7 backward view is about 0.65 m and that of the forward view about 0.8 m. The ZY-3 three-line-array stereoscopic images comprise a nadir-view image, a backward-view image and a forward-view image, specifically a nadir-view image with a mounting angle of 0 degrees, a backward-view image with a mounting angle of -22 degrees and a forward-view image with a mounting angle of 22 degrees. The resolution of the ZY-3 nadir view is about 2.1 m, and that of the forward and backward views about 3.5 m (about 2.5 m for the ZY-3 02 and 03 satellites).
The prerequisite for joint free-network adjustment of the GF-7 two-line-array and ZY-3 three-line-array stereoscopic images is obtaining high-accuracy corresponding image points between two sets of stereoscopic images with large differences in mounting angle and resolution. The goal of the free-network adjustment is that corresponding image points intersect at the same ground point. Corresponding image points that cannot intersect at the same ground point after the initial free-network adjustment, i.e. whose intersection residuals exceed a threshold, are regarded as gross error points and must be removed. The gross-error threshold is gradually reduced over several iterations until the corresponding image points no longer contain gross errors. At this point all retained corresponding image points can be considered to intersect at their ground points. Those ground point coordinates often still deviate from the true ground coordinates of the image points, but the spatial distribution of the deviation is systematic or gradually varying, so absolute control of all images can be achieved by adding a certain number of well-distributed control points. In general, absolute correction of a single stereo pair typically requires more than 4 relatively evenly distributed control points. Because the tie points between the stereo pairs that have undergone free-network adjustment link all images into a whole, the control points only need to be evenly distributed over the whole block to achieve absolute correction of all images.
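The iterative gross-error rejection described in the preceding paragraph can be sketched as follows. This is a minimal, assumed realization: the `intersect` callback returning the object-space intersection residual of one tie point and the decreasing threshold schedule are illustrative choices, not values prescribed by the patent.

```python
import numpy as np

def reject_gross_errors(tie_points, intersect, thresholds=(10.0, 5.0, 2.0, 1.0)):
    """tie_points: list of corresponding-image-point sets; intersect(tp) is assumed
    to return the object-space intersection residual of one set (in meters).
    The gross-error threshold is reduced step by step, as described above."""
    kept = list(tie_points)
    for thr in thresholds:
        residuals = np.array([intersect(tp) for tp in kept])
        kept = [tp for tp, r in zip(kept, residuals) if r <= thr]
    return kept   # points whose observations intersect consistently at a ground point
```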
The GF-7 two-line-array stereoscopic images comprise a backward-view image and a forward-view image, and each image has a corresponding rational polynomial coefficients (RPC) file. The role of the RPC is to establish the transformation between ground coordinates (expressed by longitude L, latitude B and elevation H) and image-plane coordinates (expressed by line coordinate r and column coordinate c). The structure of the RPC is as follows:
The first 10 parameters are normalization parameters, comprising 5 normalization offsets and 5 normalization scales; they are followed by 4 groups of 20 polynomial coefficients for the line and column directions, specifically:
name of the name Meaning of Unit (B)
LINE_OFF Line offset Pixel arrangement
SAMP_OFF Column offset Pixel arrangement
LAT_OFF Latitude offset Degree of
LONG_OFF Longitude offset Degree of
HEIGHT_OFF Elevation offset Rice
LINE_SCALE Line scaling Pixel arrangement
SAMP_SCALE Column scaling Pixel arrangement
LAT_SCALE Latitude scaling Degree of
LONG_SCALE Longitude scaling amount Degree of
HEIGHT_SCALE Elevation scaling Rice
LINE_NUM_COEFF_i 20 row transform polynomial molecular terms (i=1, …, 20)
LINE_DEN_COEFF_i 20 row transform polynomial denominator terms (i=1, …, 20)
SAMP_NUM_COEFF_i 20 column transform polynomial molecular terms (i=1, …, 20)
SAMP_DEN_COEFF_i 20 column transform polynomial denominator terms (i=1, …, 20)
The transformation relationship between the normalized ground coordinates (V, U, W) and the original ground coordinates (L, B, H) is:
V=(L-LONG_OFF)/LONG_SCALE
U=(B-LAT_OFF)/LAT_SCALE
W=(H-HEIGHT_OFF)/HEIGHT_SCALE
where L is the longitude, B the latitude, H the elevation, V the normalized longitude, U the normalized latitude, W the normalized (ellipsoidal) height, LONG_OFF the longitude offset, LAT_OFF the latitude offset, HEIGHT_OFF the height offset, LONG_SCALE the longitude scale, LAT_SCALE the latitude scale, and HEIGHT_SCALE the height scale.
The transformation relationship between the normalized image plane coordinates (l, s) and the original image plane coordinates (r, c) is:
l=(r-LINE_OFF)/LINE_SCALE
s=(c-SAMP_OFF)/SAMP_SCALE
where r is the row coordinate, c is the column coordinate, l is the normalized row coordinate, s is the normalized column coordinate, LINE_OFF is the row offset, SAMP_OFF is the column offset, LINE_SCALE is the row scaling, and SAMP_SCALE is the column scaling.
The conversion relationship between the normalized ground coordinates (V, U, W) and the normalized image plane coordinates (l, s) is:
l=F(V,U,W,LINE_NUM_COEFF_i)/F(V,U,W,LINE_DEN_COEFF_i),(i=1,…,20)
s=F(V,U,W,SAMP_NUM_COEFF_i)/F(V,U,W,SAMP_DEN_COEFF_i),(i=1,…,20)
where (V, U, W) are the normalized ground coordinates, (l, s) are the normalized image-plane coordinates, LINE_NUM_COEFF_i is the i-th line numerator polynomial coefficient, LINE_DEN_COEFF_i is the i-th line denominator polynomial coefficient, SAMP_NUM_COEFF_i is the i-th column numerator polynomial coefficient, SAMP_DEN_COEFF_i is the i-th column denominator polynomial coefficient, and F denotes the cubic polynomial in the variables V, U, W; NumL = F(V, U, W, LINE_NUM_COEFF_i) is the numerator of the line relation, DenL = F(V, U, W, LINE_DEN_COEFF_i) is the denominator of the line relation, NumS = F(V, U, W, SAMP_NUM_COEFF_i) is the numerator of the column relation, and DenS = F(V, U, W, SAMP_DEN_COEFF_i) is the denominator of the column relation.
That is, two relational equations can be established between the normalized ground coordinates (V, U, W) and the normalized image-plane coordinates (l, s): if (V, U, W) is known, (l, s) is uniquely determined, but knowing (l, s) alone does not uniquely determine (V, U, W). However, if (l, s, W) is known, (V, U) can be uniquely determined; that is, given the image-plane coordinates and an elevation value, the longitude and latitude are uniquely determined.
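To make the preceding relations concrete, here is a minimal Python sketch of the RPC forward model (ground to image) built from the normalization and rational-polynomial formulas above. The ordering of the 20 cubic terms follows the common RPC convention and is an assumption that must be checked against the actual GF-7/ZY-3 RPC files; the dictionary keys are illustrative.

```python
import numpy as np

def rpc_terms(V, U, W):
    """20 cubic polynomial terms of the normalized ground coordinates.
    Ordering assumed to follow the common RPC convention; verify against the RPC file."""
    return np.array([
        1, V, U, W, V*U, V*W, U*W, V*V, U*U, W*W,
        U*V*W, V**3, V*U*U, V*W*W, V*V*U, U**3, U*W*W, V*V*W, U*U*W, W**3,
    ])

def ground_to_image(L, B, H, rpc):
    """RPC forward model: ground (lon L, lat B, height H) -> image (line r, column c).
    rpc is assumed to be a dict holding the offsets, scales and the four 20-term
    coefficient arrays named as in the table above."""
    # normalize the ground coordinates
    V = (L - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
    U = (B - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    W = (H - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    t = rpc_terms(V, U, W)
    # normalized image-plane coordinates l, s as ratios of cubic polynomials
    l = np.dot(rpc["LINE_NUM_COEFF"], t) / np.dot(rpc["LINE_DEN_COEFF"], t)
    s = np.dot(rpc["SAMP_NUM_COEFF"], t) / np.dot(rpc["SAMP_DEN_COEFF"], t)
    # de-normalize to the original line/column coordinates
    r = l * rpc["LINE_SCALE"] + rpc["LINE_OFF"]
    c = s * rpc["SAMP_SCALE"] + rpc["SAMP_OFF"]
    return r, c
```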
Whether for the GF-7 two-line-array stereoscopic images or the ZY-3 three-line-array stereoscopic images, the same ground area is imaged from different angles, so one ground coordinate (L, B, H) corresponds to 5 image-plane coordinates (r_j, c_j) (j=1, …, 5) on the 5 images. The image-plane coordinates (r_j, c_j) (j=1, …, 5) are generally called corresponding (same-name) image points and are the basic data for stereoscopic adjustment. Corresponding image points are usually image points with obvious features (such as road-intersection centers or field corners) and are extracted by feature-point operators such as the Harris or Forstner operator. After a feature point is extracted on one image, the same feature point is obtained on the other images by matching. The word "same" is qualified because, on the one hand, feature points are described by windows of a certain size and generally refer to the window centers, and, on the other hand, matching is always accompanied by some error, so the physically identical point cannot be located exactly.
Projecting n corresponding image points from different viewing angles onto an ellipsoid of height H yields n longitude-latitude coordinates. Taking the mean of the n longitudes and the mean of the n latitudes gives a mean longitude-latitude coordinate (La, Ba), and the n distances Di (i=1, …, n) from the n longitude-latitude coordinates to (La, Ba) can be calculated. Let sgmD be the root mean square of the n distances; as H varies over a range (e.g. from a minimum H0 to a maximum H1) with a certain step dH, sgmD also varies. Let Hm be the elevation at which sgmD reaches its minimum (denoted sgmDm); the coordinate (La, Ba, Hm) formed by Hm and the corresponding mean longitude and latitude (La, Ba) is the object-space intersection point of the corresponding image points, and sgmDm is called the object-space intersection residual. From (La, Ba, Hm) and the RPC of each image the corresponding image-space coordinates can be computed, and between each computed image-space coordinate and the matched image-space coordinate there is an image-plane distance di (i=1, …, n). Let sgmd be the root mean square of the di, called the image-space intersection residual. sgmd is expressed in pixels, and when the image resolutions of the different views differ greatly, sgmd cannot properly reflect the intersection accuracy. Therefore the invention uses the object-space intersection residual sgmD as the basis for evaluating the intersection accuracy of a single set of corresponding image points. If N sets of corresponding image points are acquired on the images participating in the computation, the root mean square sgmDa of their N sgmD values reflects the overall intersection accuracy.
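A sketch, under stated assumptions, of the object-space intersection by elevation search described above: `image_to_ground(rpc, r, c, H)` is assumed to invert the RPC for a fixed elevation (possible because, as noted earlier, (l, s, W) uniquely determines (V, U)); for brevity the point scatter is measured in degrees rather than converted to meters, and the coarse-to-fine schedule mirrors the one given later in the text.

```python
import numpy as np

def intersection_residual(obs, H, image_to_ground):
    """obs: list of (rpc, r, c) observations of one tie point on n images.
    Returns (La, Ba, sgmD): mean lon/lat at elevation H and the RMS distance
    of the n projected lon/lat coordinates to that mean (object-space residual)."""
    lonlat = np.array([image_to_ground(rpc, r, c, H) for rpc, r, c in obs])
    La, Ba = lonlat.mean(axis=0)
    d = np.hypot(lonlat[:, 0] - La, lonlat[:, 1] - Ba)
    return La, Ba, np.sqrt(np.mean(d * d))

def intersect_by_elevation_search(obs, image_to_ground, h0=0.0, h1=9000.0):
    """Coarse-to-fine elevation search (steps 50 m -> 5 m -> 0.5 m -> 0.05 m)."""
    step, lo, hi = 50.0, h0, h1
    best_H = lo
    for _ in range(4):
        heights = np.arange(lo, hi + step, step)
        residuals = [intersection_residual(obs, H, image_to_ground)[2] for H in heights]
        best_H = heights[int(np.argmin(residuals))]
        lo, hi = best_H - step / 2, best_H + step / 2
        step /= 10.0
    La, Ba, sgmD = intersection_residual(obs, best_H, image_to_ground)
    return (La, Ba, best_H), sgmD
```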
Suppose the i-th image (i=1, …, n) contains M corresponding image points. A corresponding image point (r_{i,k}, c_{i,k}) on the i-th image and its object-space intersection point (La_{i,k}, Ba_{i,k}, Hm_{i,k}) together form an "intersection control point" (k=1, …, M), so the i-th image has M intersection control points. The polynomial coefficients of the RPC often contain a certain error, and the purpose of the adjustment is to correct the polynomial coefficients of the RPC so that the overall intersection residual of all corresponding image points is minimized. The invention corrects the numerator coefficients LINE_NUM_COEFF_i (i=1, …, 8) and SAMP_NUM_COEFF_i (i=1, …, 8) of the RPC polynomials with the intersection control points by least squares.
From the above description it can be seen that the key to stereoscopic image adjustment is obtaining high-accuracy corresponding image points; when the RPC contains errors, the best matching point has to be found by a two-dimensional search within a certain search range. The larger the search range, the larger the computational load and the more easily mismatches occur, which increases the difficulty of subsequent processing.
Therefore the invention takes constraining the search space for corresponding-image-point matching as the main breakthrough point. Specifically, Step 1 comprises the following substeps:
Step 1.1, generating quick views of the ZY-3 three-line-array stereoscopic images and quick views of the GF-7 two-line-array stereoscopic images, and calculating the RPC parameters of the ZY-3 quick views and of the GF-7 quick views.
According to the preferred embodiment of the invention, the reduced images (hereinafter referred to as quick views) of the ZY-3 three-line-array and GF-7 two-line-array stereoscopic images are generated by averaging the pixel gray values within a square window, whose side length n (in pixels) equals the reduction factor. The generated ZY-3 quick views comprise the ZY-3 nadir-view, backward-view and forward-view quick views, and the generated GF-7 quick views comprise the GF-7 backward-view and forward-view quick views.
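A minimal sketch of the window-averaging reduction just described, assuming the image is available as a 2-D numpy array; the handling of incomplete edge windows (simply discarded here) is an illustrative simplification.

```python
import numpy as np

def make_quick_view(image, n):
    """Reduce an image by the factor n using n x n block gray-value averaging.
    image: 2-D numpy array; edge rows/columns that do not fill a complete
    window are discarded in this simple sketch."""
    h, w = (image.shape[0] // n) * n, (image.shape[1] // n) * n
    blocks = image[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))

# e.g., with the reduction factors given later in the text:
# zy3_quick = make_quick_view(zy3_nadir_image, 4)      # ZY-3: factor 4
# gf7_quick = make_quick_view(gf7_backward_image, 10)  # GF-7: factor 10
```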
According to the preferred embodiment of the invention, the RPC parameters of the ZY-3 quick views and of the GF-7 quick views are generated according to the following formulas:
LINE_OFF_pre=(LINE_OFF_org-n/2.0+0.5)/n
SAMP_OFF_pre=(SAMP_OFF_org-n/2.0+0.5)/n
LINE_SCALE_pre=LINE_SCALE_org/n
SAMP_SCALE_pre=SAMP_SCALE_org/n
where LINE_OFF_pre is the line offset of the quick view, SAMP_OFF_pre is the column offset of the quick view, LINE_SCALE_pre is the line scale of the quick view, SAMP_SCALE_pre is the column scale of the quick view, LINE_OFF_org is the line offset of the original image, SAMP_OFF_org is the column offset of the original image, LINE_SCALE_org is the line scale of the original image, SAMP_SCALE_org is the column scale of the original image, and n is the reduction factor; here a quick view is a quick view of the ZY-3 three-line-array or GF-7 two-line-array stereoscopic images, and an original image is a ZY-3 three-line-array or GF-7 two-line-array stereoscopic image.
The constants 2.0 and 0.5 establish the correspondence between the line/column offsets of the original image and those of the reduced image, which is not a simple factor-of-n relation, because pixel (0, 0) of the reduced image corresponds not to pixel (0, 0) of the original image but to (n/2.0-0.5, n/2.0-0.5). Therefore n/2.0-0.5 is subtracted from both LINE_OFF_org and SAMP_OFF_org before dividing by n.
It can be seen that the original-image RPC and the quick-view RPC differ only in the line offset, column offset, line scale and column scale, and that the differences depend only on the reduction factor.
According to the preferred embodiment of the invention, the reduction factor of the ZY-3 quick views relative to the ZY-3 three-line-array stereoscopic images is set to 4, and the reduction factor of the GF-7 quick views relative to the GF-7 two-line-array stereoscopic images is set to 10.
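The quick-view RPC derivation above can be written directly from the formulas; the sketch below assumes the RPC is held in a dictionary keyed by the parameter names of the table, which is an illustrative representation only.

```python
def quick_view_rpc(rpc_org, n):
    """Derive quick-view RPC parameters from the original-image RPC using the formulas
    above; only the line/column offsets and scales change, all other parameters are
    copied unchanged. rpc_org: dict keyed by the RPC parameter names."""
    rpc_pre = dict(rpc_org)
    rpc_pre["LINE_OFF"] = (rpc_org["LINE_OFF"] - n / 2.0 + 0.5) / n
    rpc_pre["SAMP_OFF"] = (rpc_org["SAMP_OFF"] - n / 2.0 + 0.5) / n
    rpc_pre["LINE_SCALE"] = rpc_org["LINE_SCALE"] / n
    rpc_pre["SAMP_SCALE"] = rpc_org["SAMP_SCALE"] / n
    return rpc_pre

# reduction factors from the preferred embodiment: ZY-3 n = 4, GF-7 n = 10
# zy3_quick_rpc = quick_view_rpc(zy3_rpc, 4)
# gf7_quick_rpc = quick_view_rpc(gf7_rpc, 10)
```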
The above describes the generation of the quick views of the ZY-3 three-line-array and GF-7 two-line-array stereoscopic images; the following describes how the RPC parameters are corrected on the basis of these quick views.
Step 1.2, correcting the RPC parameters of the GF-7 quick views and the RPC parameters of the ZY-3 quick views respectively.
The RPC parameters of the original images contain initial errors, so the RPC parameters of the corresponding quick views also contain initial errors, and the intersection residuals of the corresponding image points of the stereoscopic quick views may exceed a certain threshold, e.g. 0.5 pixel. This degrades the quality of the DSM or even prevents its generation. The purpose of the RPC parameter correction is to make the intersection residuals of the corresponding image points of the stereoscopic quick views smaller than a set threshold so as to guarantee the quality of DSM generation; this is also one of the main improvements of the invention.
Correcting the RPC parameters of the GF-7 quick views specifically comprises the following substeps:
Step 1.2.1, taking the GF-7 backward-view quick view as the master image, and obtaining corresponding image points between the GF-7 backward-view quick view and the GF-7 forward-view quick view by the SIFT matching algorithm.
Step 1.2.2, calculating the image-space residuals of the corresponding image points in the master image, the residuals comprising line-coordinate residuals and column-coordinate residuals.
Specifically, let P_i(r_i, c_i) (i=1, …, N1) denote the feature points of the master image, where N1 is the number of feature points, and let P'_i(r'_i, c'_i) be the image-space coordinates computed with the master-image RPC from the ground point obtained by intersecting the corresponding image points. The image-space residuals are then E_i(Er_i, Ec_i) (i=1, …, N1), with Er_i = r'_i - r_i and Ec_i = c'_i - c_i.
Step 1.2.3, filtering out gross error points and obtaining intersection control points based on the image-space residuals, and correcting the respective RPC parameters of the GF-7 forward-view and backward-view quick views using the obtained intersection control points.
Specifically, the line-coordinate and column-coordinate residuals are added to the line and column coordinates of the corresponding master-image points, and RANSAC gross-error filtering is carried out between the master-image feature points P_i and the residual-shifted points PE_i(r_i + Er_i, c_i + Ec_i) to remove gross error points that do not conform to local consistency; the corresponding image points remaining after filtering are forward-intersected to obtain intersection control points, which are used to correct the respective RPC parameters of the GF-7 forward-view and backward-view quick views.
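The patent only states that RANSAC is used to enforce local consistency; one plausible realization is to fit an affine transform between the feature points and the residual-shifted points and keep the inliers, as in the following sketch. The affine model, iteration count and pixel tolerance are assumptions.

```python
import numpy as np

def ransac_filter(pts, pts_shifted, n_iter=500, tol=1.0, seed=0):
    """Illustrative RANSAC gross-error filter between master-image feature points and
    their residual-shifted counterparts. pts, pts_shifted: (N, 2) float arrays of
    (line, column) coordinates. Returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    N = len(pts)
    A = np.hstack([pts, np.ones((N, 1))])          # homogeneous [r, c, 1]
    best = np.zeros(N, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(N, 3, replace=False)      # minimal sample for an affine fit
        M = np.linalg.lstsq(A[idx], pts_shifted[idx], rcond=None)[0]
        err = np.linalg.norm(A @ M - pts_shifted, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best                                    # mask of points kept (non-gross-error)
```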
In this step, the invention determines the intersection control points by an elevation search method. For a given elevation H, according to the definition of the RPC the two relations for line and column can be set up, and the corresponding ground longitude and latitude (L, B) can be obtained by least squares. If a corresponding image point lies on N1 images, N1 coordinate pairs (L, B) are obtained. The mean coordinate (Lavg, Bavg) is computed, and then the dispersion sgm of the individual coordinates relative to the mean coordinate is computed as:
sgm = ((L1-Lavg)^2 + (B1-Bavg)^2 + (L2-Lavg)^2 + (B2-Bavg)^2 + … + (LN1-Lavg)^2 + (BN1-Bavg)^2) / N1
The coordinates are computed at regular elevation intervals over a certain elevation range (e.g. 0 to 9000 meters), and the (L, B, H) corresponding to the smallest sgm is the desired intersection point. Accuracy and speed are improved by gradually reducing the elevation interval: the first pass uses an interval of 50 meters over the search range 0 to 9000 meters, and the elevation with the smallest sgm is denoted H1; the second pass uses an interval of 5 meters over the range H1-25 m to H1+25 m, giving H2; the third pass uses an interval of 0.5 meters over the range H2-2.5 m to H2+2.5 m, giving H3; the fourth pass uses an interval of 0.05 meters over the range H3-0.25 m to H3+0.25 m, giving H4. Taking H4 as the final elevation, the corresponding mean longitude and latitude together with H4 form an intersection control point (Lavg, Bavg, H4).
The RPC correction method of the invention corrects the numerator terms of the RPC by least squares using a number of corresponding image points and their intersection control points; the corrected terms are the first 6 coefficients of LINE_NUM_COEFF_i and the first 6 coefficients of SAMP_NUM_COEFF_i. Each intersection control point yields the two relations for line and column, and the coefficients of the line relation and of the column relation are solved independently. Let the normalized line and column coordinates of an intersected ground point be (l, s) and its normalized ground coordinates be (U, V, W). Let (l0, s0) be the normalized line and column coordinates computed from (U, V, W) with the RPC parameters to be corrected, and let (denl, dens) be the denominators of the line and column relations. The normal equation coefficients corresponding to the line relation are:
(1/denl, V/denl, U/denl, V*U/denl, U*U/denl, V*V/denl), with the residual being l-l0;
the normal equation coefficients corresponding to the column relation are:
(1/dens, V/dens, U/dens, V*U/dens, U*U/dens, V*V/dens), with the residual being s-s0;
Assume there are N intersection control points in total; the line relation then yields a normal-equation coefficient matrix B of N rows and 6 columns and a residual matrix L of N rows and 1 column. Let Bt be the transpose of B and compute the matrices:
BtB=Bt*B
BtL=Bt*L
The inverse matrix BtB1 of BtB is computed, and BtB1 is multiplied by BtL to obtain a 6-row, 1-column matrix datL; the 6 elements of datL are the corrections to the first 6 coefficients of LINE_NUM_COEFF_i.
The normal-equation coefficient matrix and residual matrix of the column relation are computed in the same way, yielding the corrections to the first 6 coefficients of SAMP_NUM_COEFF_i. Applying the corrections to the 6 coefficients of the line relation and the 6 coefficients of the column relation completes the RPC correction.
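The normal-equation solution just described can be written compactly as follows. This is a sketch: the six polynomial terms are taken to be those listed for the column relation above, the array names are illustrative, and a real implementation would call the same routine once for the line relation and once for the column relation.

```python
import numpy as np

def numerator_corrections(V, U, obs, calc, den):
    """Corrections to the first 6 numerator coefficients of one relation (line or column).
    V, U: normalized longitude/latitude of the N intersection control points;
    obs: observed normalized line (or column) coordinates l (or s);
    calc: the same coordinates computed with the current RPC, l0 (or s0);
    den: denominator values of the relation. All arguments are length-N arrays."""
    B = np.column_stack([np.ones_like(V), V, U, V * U, U * U, V * V]) / den[:, None]
    L = obs - calc                      # residual vector (l - l0 or s - s0)
    BtB = B.T @ B                       # 6 x 6 normal matrix
    BtL = B.T @ L
    return np.linalg.inv(BtB) @ BtL     # datL: 6 corrections to the numerator coefficients
```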
Step 1.2.4, re-acquiring the intersection control points using the corrected RPC parameters of the GF-7 forward-view and backward-view quick views, and then updating the RPC parameters of the GF-7 forward-view and backward-view quick views using the re-acquired intersection control points.
That is, the RPC parameters of the GF-7 forward-view and backward-view quick views are recomputed from the re-acquired intersection control points, and the recomputed RPC parameters replace the previous ones, which realizes the update of the RPC parameters.
Step 1.2.5, iterating Steps 1.2.3-1.2.4 to obtain the final corrected RPC parameters of the GF-7 two-line-array quick views.
Each time the computation of the forward-intersection control points is completed, the root mean square of all dispersions sgm is computed; the iteration terminates when the absolute difference between the current root mean square and that of the previous iteration is smaller than a threshold, which according to the preferred embodiment of the invention is set to 0.0000001 degrees.
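A minimal sketch of the outer iteration just described; the two callbacks are placeholders for the intersection-and-correction pass and for the root mean square of the dispersions sgm, and only the 1e-7 degree threshold comes from the text.

```python
def iterate_rpc_correction(correct_once, rms_sgm, tol=1e-7, max_iter=50):
    """Repeat the intersection / RPC-correction pass (Steps 1.2.3-1.2.4) until the RMS
    of the dispersions sgm changes by less than tol degrees between two passes."""
    prev = None
    for _ in range(max_iter):
        correct_once()                  # one forward-intersection + correction pass
        cur = rms_sgm()                 # current RMS of the dispersions sgm
        if prev is not None and abs(cur - prev) < tol:
            break
        prev = cur
    return cur
```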
The PRC parameter for correcting the fast view of the resource three-wire array stereoscopic image specifically comprises the following substeps:
and 1.2.6, taking the resource third forward-looking fast view as a main image, and obtaining an identical image point between the resource third forward-looking fast view and the backward-looking fast view through a SIFT matching algorithm, wherein the obtained identical image point between the resource third forward-looking fast view and the backward-looking fast view is called a three-vision identical image point. Specifically, three-view homonymy points are screened out according to whether the homonymy points are the same in the point positions of the main image.
And 1.2.7, calculating the image side residual difference of the triple-view homonymous image point in the main image, wherein the image side residual difference comprises row coordinate residual differences and column coordinate residual differences. Wherein, P is used i (r i ,c i ) (i=1, …, N1) represents the feature points of the main image, and N1 represents the number of feature points. The residual difference of the image space is expressed as E i (Er i ,Ec i )(i=1,…,N1)。
Step 1.2.8, filtering the rough difference point and obtaining the intersection control point based on the image space residual difference, and correcting the respective RPC parameters of the forward, forward and backward view of the resource No. three by using the obtained intersection control point.
Specifically, the row coordinate residual error and the column coordinate residual error are respectively added to the row and column coordinates of the pixel point of the main image, and the characteristic point P of the main image is obtained i And pixel PE added with the image space residual error i (r i +Rr i ,c i +Ec i ) And carrying out RANSAC rough difference filtering to remove rough difference points which do not accord with local consistency, and filtering intersection in front of the same-name image points of the rough difference points to obtain intersection control points and correcting respective RPC parameters of the high-resolution seventh forward-looking fast view and the backward-looking fast view by using the obtained intersection control points.
And 1.2.9, re-acquiring the intersection control points by using the corrected RPC parameters of the resource three forward-looking, front-view and backward-looking fast views, and then updating the RPC parameters of the resource three forward-looking, front-view and backward-looking fast views with the re-acquired intersection control points.
And step 1.2.10, performing loop iteration to obtain the final corrected RPC parameters of the fast view of the resource three three-linear-array stereoscopic image.
Step 1.3, respectively calculating a quick view DSM of the high-resolution seven-linear array stereoscopic image and a quick view DSM of the resource three-linear array stereoscopic image so as to reduce the searching range of homonymous image points of the high-resolution seven-linear array stereoscopic image and the resource three-linear array stereoscopic image;
specifically, the quick view DSM of the high-resolution seven two-linear-array stereoscopic image is calculated through semi-global optimization matching by utilizing the fast views of the high-resolution seven two-linear-array stereoscopic image (namely, the high-resolution seven backward-looking and forward-looking fast views) and the corrected RPC parameters of those fast views; and the quick view DSM of the resource three three-linear-array stereoscopic image is calculated through semi-global optimization matching by utilizing the fast views of the resource three three-linear-array stereoscopic image (namely, the forward-looking, front-view and backward-looking fast views) and the corrected RPC parameters of those fast views.
The step 1.3 specifically comprises the following steps:
Quick view and RPC import of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image: the images and their corresponding RPC (rational polynomial coefficient) parameters are distinguished according to the file name.
Generating an image pyramid: an image pyramid is generated by wavelet transformation and stored in a diagonal region of the data volume.
CENSUS-based top-level matching: CENSUS transformation is performed on the top-level images and matching based on semi-global optimization is carried out, after which suspicious regions are detected. If suspicious regions caused by large texture-less areas appear and the elevations of these regions are close to the maximum or minimum of the search range, the search range was set too small; the search range is enlarged and the erroneous regions are matched again. If the erroneous regions no longer appear as large areas, top-level matching is finished; if large erroneous regions remain after the regions shrink, the search range is enlarged again until no suspicious regions caused by the search range exist (i.e., two successive matching results show no obvious difference). The matching result is the elevation of each image point of the front-view image.
Layer-by-layer matching, suspicious region detection and refined matching based on CENSUS and mutual information (Mutual Information, hereinafter abbreviated as MI): block matching is adopted, with a certain overlapping area between adjacent image blocks. For each layer below the top layer, layer by layer, CENSUS transformation is performed, the forward- and backward-view corrected images are calculated from the elevation obtained for each front-view image point of the upper layer, and the MI lookup table is computed. The initial elevation of each point of the current layer is taken from the matched elevation of the previous layer, and the search range of each point is determined by the elevations in its neighborhood (the neighborhood size depends on the local elevation difference), with a certain margin reserved. Semi-global optimization matching based on MI and CENSUS is then carried out, and suspicious regions are determined by methods such as varying the smoothness parameter and comparing the independent multi-path matching results with the aggregated matching result. The mismatching regions are classified according to the texture features of the front-view image and of the forward- and backward-view corrected images in those regions, the elevations around them, their matching measures, and so on; suspicious regions of different types are then matched in a refined way, and during refined matching the MI lookup table is calculated without using the suspicious regions.
Generating a DSM: after bottom-layer matching is completed, ground points are calculated from the front-view image points, their corresponding elevations and the RPC, and the DSM is generated at the set DSM grid spacing by the inverse-distance weighting method.
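As an illustration of the inverse-distance weighting step just described, here is a minimal Python sketch under the assumption of a regular geographic grid with its origin at the lower-left corner; the parameter names and the search radius are illustrative, not values prescribed by the patent.

import numpy as np

def grid_dsm_idw(points_lbh, l0, b0, step, width, height, radius, power=1.0):
    # Inverse-distance-weighted gridding of intersected ground points
    # (lon, lat, elev) onto a regular DSM grid; cells with no points stay NaN.
    # Grid origin (l0, b0) is the lower-left corner; rows increase with latitude.
    dsm = np.full((height, width), np.nan)
    wsum = np.zeros((height, width))
    acc = np.zeros((height, width))
    for lon, lat, h in points_lbh:
        c0 = (lon - l0) / step
        r0 = (lat - b0) / step
        rmin, rmax = int(max(0, r0 - radius)), int(min(height - 1, r0 + radius))
        cmin, cmax = int(max(0, c0 - radius)), int(min(width - 1, c0 + radius))
        for r in range(rmin, rmax + 1):
            for c in range(cmin, cmax + 1):
                d = ((r - r0) ** 2 + (c - c0) ** 2) ** 0.5
                w = 1.0 / (d ** power + 1e-6)   # inverse-distance weight
                acc[r, c] += w * h
                wsum[r, c] += w
    mask = wsum > 0
    dsm[mask] = acc[mask] / wsum[mask]
    return dsm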
Which regions are mismatching regions is determined from the stability of the matching result under changes of the global optimization parameters; mismatching regions are generally caused by thick cloud, water bodies and shadows. These regions are set to invalid values in the quick view DSM and appear as hole regions. The quick view DSM, the corresponding quick-view stereoscopic images, the RPC parameters of those images and the homonymous points of the quick-view stereoscopic images are together called a model, that is, one stereoscopic image pair corresponds to one model. In the preferred embodiment of the invention, the high-resolution seven quick view DSM, the corresponding high-resolution seven two-linear-array stereoscopic image with its RPC parameters, and the high-resolution seven quick-view stereoscopic homonymous points are referred to as a high-resolution seven model; the resource three quick view DSM, the corresponding resource three three-linear-array stereoscopic image with its RPC parameters, and the resource three quick-view stereoscopic homonymous points are referred to as a resource three model. Certain errors exist between overlapping model DSMs; by eliminating the relative errors between overlapping DSMs, the RPC of the quick-view stereoscopic images can be corrected at the same time, which provides good initial conditions for homonymous point acquisition and free-net adjustment between the original stereoscopic images.
Step 1.4, the high-resolution seventh-sized quick view DSM and the resource third-sized quick view DSM are respectively processed to eliminate the relative error between the high-resolution seventh-sized quick views DSM overlapped with each other and the relative error between the resource third-sized quick views DSM overlapped with each other, so that the searching range of the same-name image points of the high-resolution seventh-sized two-linear array stereoscopic image and the resource third-linear array stereoscopic image is further reduced, and the efficiency and the accuracy for acquiring the same-name image points are improved;
specifically, based on the model corresponding to the high-resolution seventh view DSM and the model corresponding to the resource third view DSM respectively, the relative error between the high-resolution seventh view DSM with overlap and the relative error between the resource third view DSM with overlap are eliminated.
As described above, the model corresponding to the high-score seventh view DSM is a high-score seventh model, which includes: the high-resolution seven-speed view DSM, the corresponding high-resolution seven-two-linear-array stereoscopic image and RPC parameters of the image, and the high-resolution seven-speed-view stereoscopic image with the same name point. The model corresponding to the resource third quick view DSM is a resource third model, which includes: the method comprises the steps of resource three-speed view DSM, corresponding resource three-linear array stereoscopic images and RPC parameters of the images, and homonymous points of the resource three-speed view stereoscopic images.
After the quick view DSM is generated, the search range of the homonymous points of the original images can be reduced significantly through this object-space reference information, and the influence of the relative deformation of the matching window caused by terrain relief can be eliminated as far as possible: an image point on one image is projected onto the DSM through the RPC to obtain its object-space coordinates, and these object-space coordinates are then projected onto the other images to obtain image coordinates, which serve as the center of the search range of the homonymous point. The four corners of the matching window are projected onto the other images in the same way to obtain the four corner image point coordinates of the projection window, and the other projected image point coordinates within the window are obtained by linear interpolation of the four corner coordinates. In order to increase the data utilization rate as much as possible, the relative errors between the overlapping quick view DSMs (of the mutually overlapping models) must be eliminated; on this basis the holes of the single-scene quick view DSMs are repaired, which provides a basic condition for connection point matching on the original images.
Step 1.4 specifically comprises the following sub-steps:
step 1.4.1, calculating the visual angle and the included angle of images between overlapped models corresponding to the overlapped quick views DSM, selecting the image with the minimum visual angle and the included angle smaller than a preset threshold (preferably, the preset threshold is 7 degrees) as the main image of the overlapped model, and matching the same name points between the main images of the overlapped model through a SIFT matching algorithm; wherein, the quick view DSM refers to a high-score seventh quick view DSM or a resource third quick view DSM.
The viewing angle refers to the observation angle of the image, and is obtained by calculating with the zenith direction of 0 degree through RPC:
take two ground coordinates (including longitude, latitude, altitude), P1 (L1, B1, H) and P2 (L2, B2, H), where l2=l1+0.0001, b2=b1+0.0001.
Two image plane coordinates P1 (r 1, c 1) and P2 (r 2, c 2) are calculated by RPC for P1 and P2.
The image plane distance d between p1 and p2 is calculated in pixels.
And converting the two coordinates into rectangular geocentric coordinates to obtain (X1, Y1, Z1) and (X2, Y2, Z2).
And calculating the distance D between two geocentric rectangular coordinates, wherein the unit is meter.
The ground resolution res = D/d is calculated, in meters per pixel.
Take two ground coordinates (including longitude, latitude, altitude), P1 (L1, B1, H) and P3 (L1, B1, h+1).
Two image plane coordinates P1 (r 1, c 1) and P3 (r 3, c 3) are calculated by RPC for P1 and P3.
Two angles are calculated:
Ct0=atan((r3-r1)*res)/3.14159265*180
Ct1=atan((c3-c1)*res)/3.14159265*180
atan represents the arctangent function, and the larger absolute value of Ct0 and Ct1 is used as the observation angle.
The included angle is the difference between the observation angles of the two images: the Ct0 and Ct1 of each image are calculated through its RPC, the difference of the two Ct0 values gives dCt0, the difference of the two Ct1 values gives dCt1, and the larger of |dCt0| and |dCt1| is taken as the included angle of the two images.
Specifically, the included angle between two images is calculated by the following steps, wherein the two images are represented as a first image and a second image:
(1) Projecting the central coordinate of the first image onto the second image through a first elevation plane H1 to obtain an image point p1;
(2) Projecting the central coordinate of the first image onto the second image through a second elevation plane H2 to obtain an image point p2, wherein H2 is greater than H1;
(3) Projecting the image point P1 and the image point P2 onto a first elevation plane H1 to obtain ground three-dimensional coordinates P1 and P2, and calculating a distance D between the P1 and the P2;
(4) Calculating an included angle ct between the first image and the second image according to the following formula:
ct=atan(D/(H2-H1))
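The following Python sketch illustrates the observation-angle calculation above and the Ct-difference form of the included angle (the two-elevation-plane projection variant is not shown); the RPC projection is passed in as a callable, and the geocentric-rectangular distance of the text is replaced by a spherical approximation (EARTH_R is an assumed constant), so this is an illustrative sketch rather than the patent's exact procedure.

import math

EARTH_R = 6378137.0  # approximate Earth radius in metres (assumption of this sketch)

def view_angles(rpc_project, L1, B1, H):
    # Ct0 / Ct1 of one image, zenith direction = 0 degrees.
    # rpc_project(lon, lat, h) -> (row, col) is an assumed callable wrapping the RPC.
    r1, c1 = rpc_project(L1, B1, H)
    r2, c2 = rpc_project(L1 + 0.0001, B1 + 0.0001, H)
    d_pix = math.hypot(r2 - r1, c2 - c1)                  # image distance, pixels
    # ground length of the 0.0001-degree offset; spherical approximation of the
    # geocentric-rectangular distance used in the text
    D_m = math.radians(0.0001) * EARTH_R * math.hypot(1.0, math.cos(math.radians(B1)))
    res = D_m / d_pix                                     # ground resolution, m/pixel
    r3, c3 = rpc_project(L1, B1, H + 1.0)                 # same point raised by 1 m
    ct0 = math.degrees(math.atan((r3 - r1) * res))
    ct1 = math.degrees(math.atan((c3 - c1) * res))
    return ct0, ct1

def observation_and_included_angle(proj_a, proj_b, L1, B1, H):
    # Observation angle of each image = the Ct value with larger absolute value;
    # included angle = larger absolute difference of the two images' Ct0 / Ct1.
    a0, a1 = view_angles(proj_a, L1, B1, H)
    b0, b1 = view_angles(proj_b, L1, B1, H)
    obs_a = a0 if abs(a0) > abs(a1) else a1
    obs_b = b0 if abs(b0) > abs(b1) else b1
    return obs_a, obs_b, max(abs(a0 - b0), abs(a1 - b1))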
step 1.4.2, the homonymy points between the main images of the overlapped model are projected onto a quick view DSM to obtain object side homonymy points;
and 1.4.3, carrying out RANSAC gross-error filtering on the object-space homonymous points and removing the homonymous points whose errors exceed a certain threshold; at the same time, the object-space three-dimensional coordinate transformation coefficients between overlapping models are obtained by the least squares method, and based on these coefficients the relative offsets of object-space longitude, latitude and elevation between the overlapping models are calculated by the object-space three-dimensional transformation formula.
Specifically, assuming that the current model sequence number is i and the model sequence number overlapped with the current model sequence number is j, the three-dimensional transformation formula of the object space is:
dL_i = a_i,j + b_i,j*L_j + c_i,j*B_j + d_i,j*L_j*B_j
dB_i = e_i,j + f_i,j*L_j + g_i,j*B_j + h_i,j*L_j*B_j
dH_i = k_i,j + m_i,j*L_j + n_i,j*B_j + o_i,j*L_j*B_j
wherein a_i,j, b_i,j, c_i,j, d_i,j, e_i,j, f_i,j, g_i,j, h_i,j, k_i,j, m_i,j, n_i,j and o_i,j are the object-space three-dimensional transformation coefficients obtained by least squares from the object-space homonymous points; L_j, B_j and H_j denote the object-space longitude, latitude and elevation of the j-th model; and dL_i, dB_i and dH_i denote the relative offsets of object-space longitude, latitude and elevation between the overlapping models.
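As a sketch of how such coefficients can be estimated, the following Python fragment fits the three offset equations by least squares from the object-space homonymous points of two overlapping models; interpreting the offsets as the coordinate differences of the homonymous points is an assumption of this sketch.

import numpy as np

def fit_object_space_transform(pts_j, pts_i):
    # Fit dX = a + b*L + c*B + d*L*B separately for the longitude, latitude and
    # elevation offsets between two overlapping models; pts_j and pts_i are
    # (N, 3) arrays of (L, B, H) of the same object-space homonymous points as
    # given by model j and model i.
    pts_j = np.asarray(pts_j, dtype=float)
    pts_i = np.asarray(pts_i, dtype=float)
    L, B = pts_j[:, 0], pts_j[:, 1]
    A = np.column_stack([np.ones_like(L), L, B, L * B])   # design matrix
    d = pts_i - pts_j                                     # dL, dB, dH offsets
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)        # 4 x 3 coefficient block
    return coeffs  # columns correspond to (a,b,c,d), (e,f,g,h), (k,m,n,o)

def apply_object_space_transform(coeffs, L, B):
    # Evaluate the fitted transform at (L, B) to get (dL, dB, dH).
    return np.array([1.0, L, B, L * B]) @ coeffs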
Step 1.4.4, eliminating the relative offsets of object-space longitude, latitude and elevation between the overlapping models by an iterative correction algorithm for the plane and elevation offsets of the overlapping models, correcting the RPC parameters of each overlapping quick view, and calculating and recording the mean square errors σ_L, σ_B and σ_H of the relative offsets of longitude, latitude and elevation between all overlapping models.
The following is a detailed description with a specific example. Assume that M models overlap the i-th model (i = 0, …, N-1); there are then M sets of three-dimensional transformation parameters. Assume that the longitude and latitude of the lower-left corner of the i-th model DSM are (Lcn0, Bcn0), the DSM width is dw, the height is dh, and the sampling interval in both the row and column directions is dr; the longitude and latitude coordinates of the remaining three corners are then:
Lcn1=Lcn0+(dw-1)*dr,Bcn1=Bcn0
Lcn2=Lcn0+(dw-1)*dr,Bcn2=Bcn0+(dh-1)*dr
Lcn3=Lcn0,Bcn3=Bcn0+(dh-1)*dr
substituting longitude and latitude (hereinafter referred to as "input longitude and latitude") of four corners of the DSM into the right side of the three-dimensional transformation formula, and calculating M groups of three-dimensional coordinate offsets of the four corners by using M groups of three-dimensional transformation parameters. And respectively averaging M groups of longitude, latitude and elevation offset corresponding to each calculated angle to obtain average longitude, latitude and elevation offset (hereinafter referred to as 'output three-dimensional coordinate offset') of each angle. And re-fitting the three-dimensional transformation coefficient of the object space by using the input longitude and latitude and the output three-dimensional coordinate offset of the four corners.
And carrying out coordinate transformation on the object-space intersection point of the connection point of the single model through the re-fitted three-dimensional object-space transformation coefficient, reconstructing a control point by an image point corresponding to the stereoscopic quick view, and correcting RPC parameters of the stereoscopic quick view.
For each image plane coordinate (r_crnt, c_crnt) of the quick view DSM and its corresponding plane longitude-latitude coordinate (L_crnt, B_crnt), the three-dimensional coordinate offset (dL_crnt, dB_crnt, dH_crnt) is obtained from the re-fitted object-space three-dimensional transformation coefficients. (L_crnt, B_crnt) is corrected by adding the offset (dL_crnt, dB_crnt), the corresponding quick view DSM image plane coordinates are then calculated, and the elevation H_crnt is obtained by bilinear interpolation. H_crnt + dH_crnt is taken as the transformed elevation assigned to the DSM point (r_crnt, c_crnt). After the transformed elevations of all DSM points are calculated, the transformed quick view DSM is obtained.
Step 1.4.5, repeating steps 1.4.2-1.4.4 until the mean square errors σ_L, σ_B and σ_H of the relative offsets of longitude, latitude and elevation between all overlapping models are smaller than a preset threshold, at which point the relative errors between the overlapping models are eliminated.
Fig. 2(a) shows the error distribution of the quick view DSMs before the model iterative correction, and Fig. 2(b) shows the error distribution after the model iterative correction. It can be seen that through the model iterative correction step the model error is reduced from several tens of meters to a level that is negligible relative to the quick-view resolution, so that the search range for the homonymous image points of the original images can be kept very small, thereby improving the matching efficiency and accuracy.
In the method, the accuracy of the RPC is improved through quick view DSM acquisition and preprocessing, the search range of the homonymous image points is controlled, and an object-space reference is established through the corrected DSM, which guarantees the accuracy and efficiency of the subsequent matching and adjustment processing; this is one of the main improvement points of the invention.
And 1.5, extracting the homonymous image points of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image based on the homonymous image point search range, performing forward intersection on the homonymous image points to obtain intersection control points, and correcting the RPC parameters of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image with the intersection control points by the least squares method.
Step 1.5 specifically comprises the following sub-steps:
and 1.5.1, restoring the RPC parameters of the fast view of the high-resolution seven two-linear-array stereoscopic image into the RPC parameters of the high-resolution seven two-linear-array stereoscopic image itself, and restoring the RPC parameters of the fast view of the resource three three-linear-array stereoscopic image into the RPC parameters of the resource three three-linear-array stereoscopic image itself.
Since the RPC of the fast view has already been optimized, converting it back into the RPC of the original image provides a better initial value. In this step, the RPC parameters of the fast view are restored to the RPC parameters of the original image; the subscript pre denotes the RPC parameters of the fast view, and the subscript org denotes the RPC parameters of the original image.
Specifically, the RPC parameters of the fast view are converted into RPC parameters of the original image according to the following formula, wherein the fast view refers to a fast view of a high-resolution seven-two-linear-array stereoscopic image or a fast view of a resource three-linear-array stereoscopic image, and the original image refers to a high-resolution seven-two-linear-array stereoscopic image or a resource three-linear-array stereoscopic image:
LINE_OFF_org=LINE_OFF_pre*n-0.5+n/2.0
SAMP_OFF_org=SAMP_OFF_pre*n-0.5+n/2.0
LINE_SCALE_org=LINE_SCALE_pre*n
SAMP_SCALE_org=SAMP_SCALE_pre*n
wherein line_off_pre is a LINE offset of the fast view, samp_off_pre is a column offset of the fast view, line_scale_pre is a LINE scaling of the fast view, samp_scale_pre is a column scaling of the fast view, line_off_org is a LINE offset of the original image, samp_off_org is a column offset of the original image, line_scale_org is a LINE scaling of the original image, samp_scale_org is a column scaling of the original image, and n is a reduction multiple.
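A minimal Python sketch of this fast-view-to-original restoration is given below; the dictionary keys are an assumed representation of the RPC fields, and only the offset and scale fields named in the formulas are changed.

def fastview_rpc_to_original(rpc_pre, n):
    # Restore the image-space offsets / scales of a fast-view RPC (image reduced
    # n times) to the original image; the polynomial coefficients and the ground
    # offsets / scales are unchanged.
    rpc_org = dict(rpc_pre)
    rpc_org["LINE_OFF"] = rpc_pre["LINE_OFF"] * n - 0.5 + n / 2.0
    rpc_org["SAMP_OFF"] = rpc_pre["SAMP_OFF"] * n - 0.5 + n / 2.0
    rpc_org["LINE_SCALE"] = rpc_pre["LINE_SCALE"] * n
    rpc_org["SAMP_SCALE"] = rpc_pre["SAMP_SCALE"] * n
    return rpc_org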
And 1.5.2, taking the resource three front-view image and the high-resolution seven backward-looking image as the main images, and taking the resource three forward-looking and backward-looking images and the high-resolution seven forward-looking image as the auxiliary images, performing orthorectification on the main images and the auxiliary images, and then performing least squares matching between the main and auxiliary images to obtain matching points.
Although step 1.4 can keep the search range small, a further factor, the determination of the initial search position, may affect the matching efficiency: if every main-image point were projected onto the auxiliary image by projective transformation, the efficiency would drop significantly. For this reason, the main and auxiliary images are orthorectified before matching, which saves the computation of the projection transformations and also removes the relative image deformation caused by terrain, so that matching efficiency and success rate are improved markedly. The invention uses a 25 x 25 matching window to search, with a step of 1 pixel within an 11 x 11 pixel range, for the point with the maximum correlation coefficient, and then performs least squares matching starting from that initial position to obtain sub-pixel matching points. A correlation coefficient threshold is set, and a matching point is adopted only if its correlation coefficient is larger than the threshold; the threshold is set to 0.6.
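The following Python sketch illustrates the pixel-level correlation search just described (25 x 25 window, 11 x 11 search range, 1-pixel step, threshold 0.6); the subsequent least squares refinement to sub-pixel accuracy is noted in a comment but not implemented, and the function and parameter names are illustrative.

import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def match_point(master, slave, r, c, win=25, search=11, threshold=0.6):
    # Pixel-level match of master point (r, c) in the slave orthoimage; returns
    # the best slave position if the maximum correlation exceeds the threshold,
    # else None. Least squares refinement to sub-pixel level would follow and is
    # omitted here; image borders are not specially handled in this sketch.
    h = win // 2
    s = search // 2
    tpl = master[r - h:r + h + 1, c - h:c + h + 1]
    best, best_rc = -1.0, None
    for dr in range(-s, s + 1):
        for dc in range(-s, s + 1):
            patch = slave[r + dr - h:r + dr + h + 1, c + dc - h:c + dc + h + 1]
            if patch.shape != tpl.shape:
                continue
            score = ncc(tpl, patch)
            if score > best:
                best, best_rc = score, (r + dr, c + dc)
    return best_rc if best > threshold else None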
And step 1.5.3, projecting the matching points onto the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image by utilizing the RPC parameters restored in step 1.5.1, so as to obtain the homonymous image points of the high-resolution seven two-linear-array stereoscopic image and the homonymous image points of the resource three three-linear-array stereoscopic image.
After the matching based on the orthoimages is completed, the matched points are projected back through the RPC onto the original images (namely the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image) to obtain the homonymous image points of the original images; to distinguish them from the laser points described later, these may be called conventional homonymous image points or connection points. Through step 1.5.3, the homonymous image points (namely connection points) of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image are obtained.
And 1.5.4, carrying out RANSAC gross-error filtering on the obtained homonymous image points of the high-resolution seven two-linear-array stereoscopic image and of the resource three three-linear-array stereoscopic image, and removing the gross-error points.
The gross-error removal method for the homonymous image points of the original images is the same as that for the homonymous image points of the fast views. Each homonymous image point lies simultaneously on two or more of the high-resolution seven and resource three original images.
Step 1.5.5, performing forward intersection on the same-name image points subjected to coarse difference filtering to calculate intersection control points;
and 1.5.6, correcting RPC parameters of the high-resolution two-wire array stereoscopic image and the resource three-wire array stereoscopic image by using the intersection control point.
Step 1.5.7, performing loop iteration of steps 1.5.5-1.5.6 until the mean square errors σ_L, σ_B and σ_H of the intersection control point residuals are smaller than a preset threshold, so as to obtain the final corrected RPC parameters of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image.
And step 2, correcting the RPC parameters of the resource three three-linear-array stereoscopic image again based on the offset between the small-area high-resolution seven DSM and the small-area resource three DSM corresponding to the ground coordinates of the laser points of the high-resolution seven two-linear-array stereoscopic image, so as to eliminate systematic errors to the greatest extent.
Step 2 specifically comprises the following substeps:
and 2.1, determining the image plane coordinates of the ground coordinates of the laser points of the high-resolution two-linear array stereoscopic image on the high-resolution two-linear array stereoscopic image and the resource three-linear array stereoscopic image.
Assume that the row and column coordinates of the laser points of the high-resolution seven two-linear-array stereoscopic image on the high-resolution seven laser footprint image are (rld_i, cld_i) (i = 1, …, N2) and that the ground coordinates of the laser points are (Lld_i, Bld_i, Hld_i) (i = 1, …, N2), where N2 is the number of laser points.
Specifically, the ground coordinates of the laser points are projected through the RPC of each high-resolution seven and resource three image to obtain the corresponding image plane coordinates on each image. The ground coordinates (Lld_i, Bld_i, Hld_i) of a laser point are first normalized with the LONG_OFF, LONG_SCALE, LAT_OFF, LAT_SCALE, HEIGHT_OFF and HEIGHT_SCALE coefficients of the RPC to obtain the normalized ground coordinates (U_i, V_i, W_i). The normalized row and column coordinates (l_i, s_i) are then calculated from (U_i, V_i, W_i) and the row-coordinate numerator, row-coordinate denominator, column-coordinate numerator and column-coordinate denominator term coefficients of the RPC. Finally, (l_i, s_i) are de-normalized with the LINE_OFF, LINE_SCALE, SAMP_OFF and SAMP_SCALE coefficients of the RPC to obtain the original image plane row and column coordinates (r, c).
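The following Python sketch shows such a ground-to-image RPC projection; the standard RPC00B 20-term polynomial ordering and the dictionary key names are assumptions of the sketch, since the patent does not spell them out, and V, U, W follow the normalized longitude, latitude and height naming used later in the text.

def rpc_poly(coeff, V, U, W):
    # 20-term cubic RPC polynomial (standard RPC00B term ordering assumed);
    # V, U, W are the normalized longitude, latitude and height.
    terms = [1, V, U, W, V*U, V*W, U*W, V*V, U*U, W*W,
             U*V*W, V*V*V, V*U*U, V*W*W, V*V*U, U*U*U, U*W*W, V*V*W, U*U*W, W*W*W]
    return sum(c * t for c, t in zip(coeff, terms))

def ground_to_image(rpc, lon, lat, h):
    # Project ground coordinates to image row/column with an RPC dictionary
    # holding the offset/scale values and the four 20-coefficient lists
    # (the key names are assumptions of this sketch).
    V = (lon - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
    U = (lat - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    W = (h - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    l = rpc_poly(rpc["LINE_NUM_COEFF"], V, U, W) / rpc_poly(rpc["LINE_DEN_COEFF"], V, U, W)
    s = rpc_poly(rpc["SAMP_NUM_COEFF"], V, U, W) / rpc_poly(rpc["SAMP_DEN_COEFF"], V, U, W)
    r = l * rpc["LINE_SCALE"] + rpc["LINE_OFF"]
    c = s * rpc["SAMP_SCALE"] + rpc["SAMP_OFF"]
    return r, c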
And 2.2, respectively and independently extracting small areas from the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image areas corresponding to the image plane coordinates, respectively calculating the small areas DSM of the high-resolution two-line array stereoscopic image and the small areas DSM of the resource three-line array stereoscopic image, and taking the small areas DSM calculated respectively as the high-resolution two-line array stereoscopic image DSM and the resource three-line array stereoscopic image DSM.
In this step the DSM of the entire image is not calculated, because the computation load would be large; several independent small-area DSMs are sufficient for the subsequent parameter calculation, which improves computational efficiency.
According to a preferred embodiment of the present invention, a small region DSM is calculated from a partial stereoscopic image of 2000×2000 pixels size centered on the image plane coordinates (r, c) of the main image (the high-resolution seventh-size back-view image, the resource third-size front-view image).
And 2.3, calculating the elevation offset between the high-resolution seven-linear array stereoscopic image DSM and the resource three-linear array stereoscopic image DSM based on an affine transformation formula.
Specifically, the affine transformation formula is:
Δh=a×L+b×B+c×L×B
wherein Δh represents the elevation offset between the stereoscopic image DSM of the seventh high-resolution stereoscopic image and the stereoscopic image DSM of the third high-resolution stereoscopic image, L and B represent longitude and latitude, respectively, and a, B and c are affine transformation coefficients.
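A minimal least squares sketch for estimating the coefficients a, b and c of this formula from sampled elevation differences is given below; the sample layout is an assumption of the sketch.

import numpy as np

def fit_elevation_offset(samples):
    # Fit dh = a*L + b*B + c*L*B (note: no constant term, as in the formula
    # above) from samples of (L, B, dh), where dh is the elevation difference
    # between the high-resolution seven and resource three small-area DSMs at (L, B).
    samples = np.asarray(samples, dtype=float)
    L, B, dh = samples[:, 0], samples[:, 1], samples[:, 2]
    A = np.column_stack([L, B, L * B])
    (a, b, c), *_ = np.linalg.lstsq(A, dh, rcond=None)
    return a, b, c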
And 2.4, correcting the RPC parameter of the three-wire array stereoscopic image of the resource third based on the elevation offset.
Specifically, the elevation offsets calculated by the affine transformation formula of step 2.3 are added to the intersection ground points corresponding to the connection points of the resource three three-linear-array stereoscopic image (i.e. the forward-looking, front-view and backward-looking images), and the RPC parameters of the resource three three-linear-array stereoscopic image are corrected one last time, so as to eliminate systematic errors to the greatest extent.
And step 3, acquiring laser control points of the high-resolution two-linear array stereoscopic image, and taking the laser control points as control information to improve the accuracy of the regional network adjustment.
The high-resolution seventh laser data comprise redundant information, and the high-resolution seventh laser data for adjustment adopted by the invention comprise laser footprint images and RPC parameters thereof, row-column coordinates of laser points on the footprint images and ground coordinates of the laser points. The invention adopts the laser control point as control information to improve the precision of the regional network adjustment, which is one of the main improvement points of the invention.
The step 3 specifically comprises the following sub-steps:
and 3.1, obtaining the same name image point between the high-resolution seven-linear array stereoscopic image and the high-resolution seven-laser footprint image through a SIFT matching algorithm.
And 3.2, obtaining the ground three-dimensional coordinates of the high-resolution seven homonymous image points by intersection-projecting the high-resolution seven homonymous image points onto the DSM of the high-resolution seven two-linear-array stereoscopic image.
And 3.3, obtaining a laser point homonymy image point, and forming a laser control point by the laser point homonymy image point and the ground three-dimensional coordinates of the high-resolution seventh homonymy image point.
Specifically, the homonymous image points between the high-resolution seven two-linear-array stereoscopic image and the high-resolution seven laser footprint image obtained in step 3.1 include high-resolution seven image points and laser-point image points. Obtaining the high-resolution seven laser-point homonymous image points in step 3.3 includes: projecting the laser-point image points onto the small-area DSM with the corrected RPC parameters to obtain coordinates (L1_i, B1_i, H1_i) (i = 1, …, N2), and projecting the obtained coordinates (L1_i, B1_i, H1_i) (i = 1, …, N2) onto each image to obtain the image point coordinates (rld1_i,j, cld1_i,j) (i = 1, …, N2; j = 1, …, 5), where j indexes the 5 images consisting of the two high-resolution seven images and the 3 resource three images.
And 4, carrying out joint regional adjustment on the high-resolution two-linear-array stereoscopic image and the resource three-linear-array stereoscopic image based on the resource three-linear-array stereoscopic image with the RPC parameters corrected again in the step 2 and the laser control point of the high-resolution two-linear-array stereoscopic image obtained in the step 3.
The regional adjustment in the step 4 is similar to the calculation process of the free adjustment performed in the step 1, specifically, the step 4 includes the following sub-steps:
and 4.1, calculating an intersection control point of the connection point of the high-resolution two-linear array stereoscopic image and the resource three-linear array stereoscopic image based on the resource three-linear array stereoscopic image with the RPC parameter corrected again in the step 2 and the high-resolution seven-linear array stereoscopic image in the step 3, and calculating an intersection surface point of the laser point and the same-name image point according to the RPC parameter of the high-resolution seven-linear array stereoscopic image with the same-name image point and the same-name image point.
And 4.2, determining the offset between the intersection control point of the laser point and the same-name image point and the laser control point.
The intersection control points of the laser-point homonymous image points are denoted (Lcld_i, Bcld_i, Hcld_i), and the object-space coordinates of the laser control points are denoted (Lld_i, Bld_i, Hld_i). The offsets (Loff_i, Boff_i, Hoff_i) (i = 1, …, N2) between the intersection control points of the laser-point homonymous image points and the object-space coordinates of the laser control points are:
Loff_i = Lld_i - Lcld_i,
Boff_i = Bld_i - Bcld_i,
Hoff_i = Hld_i - Hcld_i,
wherein N2 is the number of laser control points.
And 4.3, correcting, according to the determined offsets, the intersection control points of the connection points of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image obtained in step 4.1 and the intersection control points of the laser-point homonymous image points, wherein the homonymous image points lie on at least two of the high-resolution seven forward-looking and backward-looking images and the resource three forward-looking, front-view and backward-looking images.
Specifically, the offsets (Lldoff_i, Bldoff_i, Hldoff_i) (i = 1, …, nld) of the laser intersection control points within a certain distance range D centered on each object-space intersection point are counted, where nld is the number of laser points whose distance to the object-space intersection point is smaller than D, and the correction values (Loff_j, Boff_j, Hoff_j) (j = 1, …, nt) of the object-space intersection points are obtained by the inverse-distance weighted average method, where nt is the number of connection points.
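A minimal Python sketch of this inverse-distance weighted correction of the object-space intersection points is given below; the tuple-based data layout and the small epsilon guarding against zero distance are assumptions of the sketch, and the distance units must match those of D.

import math

def correct_tie_points(tie_pts, laser_ctrl, max_dist):
    # Inverse-distance-weighted correction of the object-space intersection
    # points of the connection points (step 4.3). tie_pts: list of (L, B, H);
    # laser_ctrl: list of ((Lcld, Bcld, Hcld), (Loff, Boff, Hoff)) pairs, i.e.
    # each laser intersection point and its offset to the laser control point.
    corrected = []
    for L, B, H in tie_pts:
        wsum, dl, db, dh = 0.0, 0.0, 0.0, 0.0
        for (Lc, Bc, _), (loff, boff, hoff) in laser_ctrl:
            d = math.hypot(L - Lc, B - Bc)        # planar distance to the laser point
            if d >= max_dist:
                continue
            w = 1.0 / (d + 1e-12)                 # inverse-distance weight
            wsum += w
            dl += w * loff
            db += w * boff
            dh += w * hoff
        if wsum > 0:                              # laser points found within D
            L, B, H = L + dl / wsum, B + db / wsum, H + dh / wsum
        corrected.append((L, B, H))
    return corrected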
And 4.4, forming, from the corrected intersection control points of the connection points and the intersection control points of the laser-point homonymous image points, the control points of each of the high-resolution seven forward-looking and backward-looking images and the resource three forward-looking, front-view and backward-looking images, wherein each image control point comprises ground coordinates and two or more image-space coordinates; the RPC parameters of the high-resolution seven and resource three stereoscopic images are then corrected with these control points.
The RPC correction method of the invention corrects the numerator terms of the RPC by the least squares method using a number of control points, including the first 6 coefficients of LINE_NUM_COEFF_i and the first 6 coefficients of SAMP_NUM_COEFF_i.
The specific correction process is as follows. Assume that one of the high-resolution seven and resource three images has a total of N intersection control points (Lj, Bj, Hj, rj, cj) (j = 0, …, N-1). They are normalized to (Vj, Uj, Wj, lj, sj):
Vj=(Lj-LONG_OFF)/LONG_SCALE
Uj=(Bj-LAT_OFF)/LAT_SCALE
Wj=(Hj-HEIGHT_OFF)/HEIGHT_SCALE
lj=(rj-LINE_OFF)/LINE_SCALE
sj=(cj-SAMP_OFF)/SAMP_SCALE
1. correction of row direction parameters
(1) The projected normalized image point row coordinate lpj (j = 0, …, N-1) of each intersection control point is computed in turn, where F(V, U, W, COEFF) denotes the RPC cubic polynomial in the normalized coordinates evaluated with the coefficient set COEFF:
lpj=F(Vj,Uj,Wj,LINE_NUM_COEFF_i)/F(Vj,Uj,Wj,LINE_DEN_COEFF_i)
(2) The row direction normal equation coefficients of all the intersection control points are calculated to obtain a normal equation coefficient matrix B, and the coefficient calculation method corresponding to the jth intersection control point comprises the following steps:
Bj0=1/F(Vj,Uj,Wj,LINE_DEN_COEFF_i),
Bj1=Vj/F(Vj,Uj,Wj,LINE_DEN_COEFF_i),
Bj2=Uj/F(Vj,Uj,Wj,LINE_DEN_COEFF_i),
Bj3=Vj*Uj/F(Vj,Uj,Wj,LINE_DEN_COEFF_i),
Bj4=Uj*Uj/F(Vj,Uj,Wj,LINE_DEN_COEFF_i),
Bj5=Vj*Vj/F(Vj,Uj,Wj,LINE_DEN_COEFF_i)。
The resulting matrix B is an N row 6 column matrix.
(3) Calculating a residual matrix L, wherein the corresponding residual of the j-th intersection control point is as follows:
Lj=lj-lpj。
thus, L is an N row 1 column matrix.
(4) The transposed matrix Bt of B is calculated.
Multiplying the Bt matrix by the B matrix yields a 6 row 6 column matrix BtB.
Multiplying the Bt matrix by the L matrix to obtain a 6 row 1 column matrix BtL.
The inverse matrix BtB1 of BtB is calculated.
Multiplying the BtB1 matrix by the BtL matrix yields a 6-row, 1-column matrix da.
(5) Let the 6 parameters of the da matrix be dai (i=0, …, 5). Line_num_coeff_i (i=0, …, 5) is added to the corresponding dai to complete the 6 parameter corrections in the row direction.
2. Column direction parameter correction
(1) The projected normalized image point column coordinates spj (j=0, …, N-1) for each intersection control point are computed in turn:
spj=F(Vj,Uj,Wj,SAMP_NUM_COEFF_i)/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i)
(2) The column direction normal equation coefficients of all the intersection control points are calculated to obtain a normal equation coefficient matrix B, and the coefficient calculation method corresponding to the jth intersection control point comprises the following steps:
Bj0=1/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i),
Bj1=Vj/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i),
Bj2=Uj/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i),
Bj3=Vj*Uj/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i),
Bj4=Uj*Uj/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i),
Bj5=Vj*Vj/F(Vj,Uj,Wj,SAMP_DEN_COEFF_i)。
the resulting matrix B is an N row 6 column matrix.
(3) Calculating a residual matrix S, wherein the corresponding residual of the j-th intersection control point is as follows:
Sj=sj-spj。
Thus, S is an N-row, 1-column matrix.
(4) The transposed matrix Bt of B is calculated.
Multiplying the Bt matrix by the B matrix yields a 6 row 6 column matrix BtB.
Multiplying the Bt matrix by the S matrix to obtain a 6-row 1-column matrix Bts.
The inverse matrix BtB1 of BtB is calculated.
Multiplying the BtB1 matrix by the Bts matrix yields a 6-row, 1-column matrix da.
(5) Let the 6 parameters of the da matrix be dai (i=0, …, 5). Samp_num_coeff_i (i=0, …, 5) is added to the corresponding dai to complete the column direction 6 parameter corrections.
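To make the normal-equation construction above concrete, the following Python sketch builds the matrix B and the residual vector as described, solves (BtB)*da = BtL for both the row and column directions, and applies da to the corresponding numerator coefficients. It reuses rpc_poly() and the dictionary keys from the earlier ground-to-image sketch; the coefficient index mapping is an assumption tied to that sketch's RPC00B ordering, whereas the patent simply refers to the first 6 coefficients in its own ordering.

import numpy as np

# Basis terms of the correction (from the normal-equation coefficients above)
# and their positions in the RPC00B ordering assumed by rpc_poly():
# 1, V, U, V*U, U*U, V*V  ->  indices 0, 1, 2, 4, 8, 7.
CORRECTED_TERM_INDEX = [0, 1, 2, 4, 8, 7]

def correct_rpc_numerator(rpc, ctrl_pts):
    # Least squares correction of six numerator coefficients of the row (LINE)
    # and column (SAMP) RPC expressions from intersection control points
    # (L, B, H, r, c), following the normal-equation steps above.
    norm = []
    for (L, B, H, r, c) in ctrl_pts:
        V = (L - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
        U = (B - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
        W = (H - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
        l = (r - rpc["LINE_OFF"]) / rpc["LINE_SCALE"]
        s = (c - rpc["SAMP_OFF"]) / rpc["SAMP_SCALE"]
        norm.append((V, U, W, l, s))

    for num_key, den_key, obs_idx in (("LINE_NUM_COEFF", "LINE_DEN_COEFF", 3),
                                      ("SAMP_NUM_COEFF", "SAMP_DEN_COEFF", 4)):
        B_rows, resid = [], []
        for pt in norm:
            V, U, W = pt[0], pt[1], pt[2]
            den = rpc_poly(rpc[den_key], V, U, W)
            proj = rpc_poly(rpc[num_key], V, U, W) / den        # lpj or spj
            B_rows.append(np.array([1.0, V, U, V * U, U * U, V * V]) / den)
            resid.append(pt[obs_idx] - proj)                    # lj - lpj / sj - spj
        B_mat = np.vstack(B_rows)                               # N x 6 matrix B
        L_vec = np.array(resid)                                 # N x 1 residuals
        da = np.linalg.solve(B_mat.T @ B_mat, B_mat.T @ L_vec)  # (BtB)^-1 * BtL
        for k, idx in enumerate(CORRECTED_TERM_INDEX):
            rpc[num_key][idx] += da[k]                          # apply corrections
    return rpc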
According to a preferred embodiment of the present invention, steps 3.1-3.3 are performed iteratively, and the mean square error of the corrected RPC parameters obtained after each iteration is calculated, until the difference between two successive mean square errors is smaller than a preset threshold, which is set to 0.000001.
The foregoing is merely exemplary of the present invention and should not be construed as limiting it; those skilled in the art may make modifications to the specific embodiments and the application scope in light of the teachings of the present invention.

Claims (7)

1. A method for combining and leveling high-resolution seven-size and resource three-size stereoscopic images is characterized by comprising the following steps:
step 1, carrying out joint free net adjustment on a high-resolution two-line array stereoscopic image and a resource three-line array stereoscopic image, wherein in the joint free net adjustment process, homonymous image points of the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image are extracted, an intersection control point is obtained through intersection in front of the homonymous image points, and RPC parameters of the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image are corrected by utilizing the intersection control point;
Step 2, correcting the RPC parameters of the resource three three-linear-array stereoscopic image again based on the offset between the small-area high-resolution seven DSM and resource three DSM corresponding to the ground coordinates of the laser points of the high-resolution seven two-linear-array stereoscopic image, so as to eliminate systematic errors to the greatest extent;
step 3, obtaining laser control points of the high-resolution two-line array stereoscopic image, and taking the laser control points as control information to improve the accuracy of the area network adjustment;
step 4, carrying out joint regional adjustment on the high-resolution two-linear array stereoscopic image and the resource three-linear array stereoscopic image based on the resource three-linear array stereoscopic image with the RPC parameters corrected again in the step 2 and the laser control point of the high-resolution two-linear array stereoscopic image obtained in the step 3; wherein,
step 1 specifically comprises the following substeps:
step 1.1, generating a fast view of the resource three-line array stereoscopic image and a fast view of the high-resolution seven-line array stereoscopic image, and calculating RPC parameters of the fast view of the resource three-line array stereoscopic image and RPC parameters of the fast view of the high-resolution seven-line array stereoscopic image;
Step 1.2, correcting the RPC parameters of the fast view of the high-resolution seven two-linear-array stereoscopic image and the RPC parameters of the fast view of the resource three three-linear-array stereoscopic image respectively;
step 1.3, respectively calculating a quick view DSM of the high-resolution two-linear array stereoscopic image and a quick view DSM of the resource three-linear array stereoscopic image;
step 1.4, the high-resolution seventh-sized quick view DSM and the resource third-sized quick view DSM are respectively processed to eliminate the relative error between the high-resolution seventh-sized quick views DSM overlapped with each other and the relative error between the resource third-sized quick views DSM overlapped with each other, so that the searching range of the same-name image points of the high-resolution seventh-sized two-linear array stereoscopic image and the resource third-linear array stereoscopic image is reduced, and the efficiency and the accuracy for acquiring the same-name image points are improved;
step 1.5, extracting homonymous image points of the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image based on the searching range of the homonymous image points, carrying out front intersection on the homonymous image points to obtain intersection control points, and correcting RPC parameters of the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image by using the intersection control points through a least square method;
Step 4 specifically comprises the following substeps:
step 4.1, calculating intersection control points for the connection points of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image, based on the resource three three-linear-array stereoscopic image whose RPC parameters were corrected again in step 2 and the high-resolution seven two-linear-array stereoscopic image of step 3, and calculating intersection ground points of the laser-point homonymous image points according to those homonymous image points and the RPC parameters of the high-resolution seven two-linear-array stereoscopic images on which they lie;
step 4.2, determining the offset between the intersection control point of the identical image point of the laser point and the laser control point;
step 4.3, correcting the intersection control point of the connection point of the high-resolution seventh two-linear array stereoscopic image and the resource third-linear array stereoscopic image and the intersection control point of the laser point homonymous image point, which are obtained in the step 4.1, according to the determined offset, wherein the homonymous image point is positioned on at least two images of the high-resolution seventh forward-looking image, the rear-looking image and the resource third-numbered forward-looking image, the rear-looking image and the forward-looking image;
and 4.4, forming the corrected intersection control point of the connecting point and the intersection control point of the laser point with the same-name image point into a control point of each image of the high-resolution seventh forward-looking image, the rear-looking image, the forward-looking image of the resource third, the rear-looking image and the forward-looking image, and correcting RPC parameters of the high-resolution seventh stereoscopic image and the resource third stereoscopic image by using the control points.
2. The method according to claim 1, wherein in step 1.1, RPC parameters of the fast view of the resource three-line stereoscopic image and the fast view of the high-resolution two-line stereoscopic image are calculated according to the following formula:
LINE_OFF_pre= (LINE_OFF_org-n/2.0+0.5)/n
SAMP_OFF_pre= (SAMP_OFF_org-n/2.0+0.5)/n
LINE_SCALE_pre= LINE_SCALE_org/n
SAMP_SCALE_pre= SAMP_SCALE_org/n
the line_off_pre is a LINE offset of a fast view, the samp_off_pre is a column offset of the fast view, the line_scale_pre is a LINE scaling of the fast view, the samp_scale_pre is a column scaling of the fast view, the line_off_org is a LINE offset of an original image, the samp_off_org is a column offset of the original image, the line_scale_org is a LINE scaling of the original image, the samp_scale_org is a column scaling of the original image, and n is a reduction multiple, wherein the fast view refers to a fast view of a resource three-LINE three-array stereoscopic image or a high-resolution seven-LINE-array stereoscopic image, and the original image refers to a resource three-LINE-array stereoscopic image or a high-resolution seven-LINE-array stereoscopic image.
3. The method according to claim 1, wherein in step 1.2, "correcting the RPC parameters of the fast view of the high-resolution seven two-linear-array stereoscopic image" specifically includes the following sub-steps:
step 1.2.1, taking a high-resolution seventh back-view fast view as a main image, and obtaining a homonymic image point between the high-resolution seventh back-view fast view and the high-resolution seventh front-view fast view through a SIFT matching algorithm;
Step 1.2.2, calculating an image side residual difference of the same-name image point in the main image, wherein the image side residual difference comprises a row coordinate residual difference and a column coordinate residual difference;
step 1.2.3, filtering coarse difference points and obtaining intersection control points based on the image space residual difference, and correcting respective RPC parameters of the high-resolution seventh forward-looking fast view and the backward-looking fast view by using the obtained intersection control points;
step 1.2.4, re-acquiring the intersection control point by using the respective RPC parameters of the corrected high-resolution seventh forward-looking fast view and the corrected backward-looking fast view, and then updating the respective RPC parameters of the high-resolution seventh forward-looking fast view and the high-resolution seventh backward-looking fast view by using the re-acquired intersection control point again;
step 1.2.5, performing loop iteration of steps 1.2.3-1.2.4 to obtain the final corrected RPC parameters of the fast view of the high-resolution seven two-linear-array stereoscopic image;
correcting the RPC parameters of the fast view of the resource three three-linear-array stereoscopic image specifically comprises the following substeps:
step 1.2.6, taking the resource three front-view fast view as the main image, and obtaining homonymous image points between the main image and the forward-looking and backward-looking fast views through a SIFT matching algorithm, wherein the homonymous image points shared by the three views are called three-view homonymous image points;
Step 1.2.7, calculating an image side residual difference of the triple-view homonymous image point in the main image, wherein the image side residual difference comprises a row coordinate residual difference and a column coordinate residual difference;
step 1.2.8, filtering out gross-error points and obtaining intersection control points based on the image-space residuals, and correcting the respective RPC parameters of the resource three forward-looking, front-view and backward-looking fast views by using the obtained intersection control points;
step 1.2.9, re-acquiring the intersection control points by using the corrected respective RPC parameters of the resource three forward-looking, front-view and backward-looking fast views, and updating the respective RPC parameters of the resource three forward-looking, front-view and backward-looking fast views with the re-acquired intersection control points;
and step 1.2.10, performing loop iteration to obtain the final corrected RPC parameters of the fast view of the resource three three-linear-array stereoscopic image.
4. The method according to claim 1, characterized in that step 1.4 comprises in particular the sub-steps of:
step 1.4.1, calculating the visual angle and the included angle of images between overlapped models corresponding to the mutually overlapped quick views DSM, selecting the image with the minimum visual angle and the included angle smaller than a preset threshold value as an overlapped model main image, and matching homonymous points between the overlapped model main images through a SIFT matching algorithm; the quick view DSM refers to a high-definition seventh quick view DSM or a resource third quick view DSM;
Step 1.4.2, the homonymy points between the main images of the overlapped model are projected onto a quick view DSM to obtain object side homonymy points;
step 1.4.3, carrying out RANSAC rough difference filtering on the object space with the same name point, removing the same name point with the error exceeding a certain threshold, simultaneously obtaining object space three-dimensional coordinate transformation coefficients between overlapped models through a least square method, and calculating relative offset of longitude, latitude and elevation of the object space between the overlapped models by utilizing an object space three-dimensional transformation formula based on the object space three-dimensional coordinate transformation coefficients between the overlapped models;
step 1.4.4, eliminating the relative offsets of object-space longitude, latitude and elevation between the overlapping models by an iterative correction algorithm for the plane and elevation offsets of the overlapping models, correcting the RPC parameters of each overlapping quick view, and calculating and recording the mean square errors σ_L, σ_B and σ_H of the relative offsets of longitude, latitude and elevation between all overlapping models;
step 1.4.5, repeating steps 1.4.2-1.4.4 until the mean square errors σ_L, σ_B and σ_H of the relative offsets of longitude, latitude and elevation between all overlapping models are smaller than a preset threshold, so as to eliminate the relative errors between the overlapping models.
5. The method according to claim 1, characterized in that step 1.5 comprises in particular the sub-steps of:
Step 1.5.1, restoring the RPC parameters of the fast view of the high-resolution seven two-linear-array stereoscopic image into the RPC parameters of the high-resolution seven two-linear-array stereoscopic image itself, and restoring the RPC parameters of the fast view of the resource three three-linear-array stereoscopic image into the RPC parameters of the resource three three-linear-array stereoscopic image itself;
step 1.5.2, taking the resource three front-view image and the high-resolution seven backward-looking image as the main images and the resource three forward-looking and backward-looking images and the high-resolution seven forward-looking image as the auxiliary images, performing orthorectification on the main images and the auxiliary images, and then performing least squares matching between the main and auxiliary images to obtain matching points;
step 1.5.3, projecting the matching point to a high-resolution two-line array stereoscopic image and a resource three-line array stereoscopic image by utilizing the RPC parameter restored in the step S1.5.1 so as to obtain a homonymous image point of the high-resolution two-line array stereoscopic image and a homonymous image point of the resource three-line array stereoscopic image;
1.5.4, carrying out RANSAC coarse difference filtering on the obtained homonymous image points of the high-resolution seventh two-linear array stereoscopic image and the homonymous image points of the resource third-linear array stereoscopic image, and removing coarse difference points;
Step 1.5.5, performing forward intersection on the same-name image points subjected to coarse difference filtering to calculate intersection control points;
step 1.5.6, correcting RPC parameters of the high-resolution seven-linear array stereoscopic image and the resource three-linear array stereoscopic image by using the intersection control point;
step 1.5.7, performing loop iteration of steps 1.5.5-1.5.6 until the mean square errors σ_L, σ_B and σ_H of the intersection control point residuals are smaller than a preset threshold, so as to obtain the final corrected RPC parameters of the high-resolution seven two-linear-array stereoscopic image and the resource three three-linear-array stereoscopic image.
6. The method according to claim 1, characterized in that step 2 comprises in particular the sub-steps of:
step 2.1, determining the image plane coordinates of the ground coordinates of the laser points of the high-resolution two-linear array stereoscopic image on the high-resolution two-linear array stereoscopic image and the resource three-linear array stereoscopic image;
step 2.2, small areas are respectively and independently extracted from the high-resolution two-line array stereoscopic image and the resource three-line array stereoscopic image areas corresponding to the image plane coordinates, the small areas DSM of the high-resolution two-line array stereoscopic image and the small areas DSM of the resource three-line array stereoscopic image which are extracted are respectively calculated, and the small areas DSM which are respectively calculated are used as the high-resolution two-line array stereoscopic image DSM and the resource three-line array stereoscopic image DSM;
Step 2.3, calculating the elevation offset between the high-resolution seven-size two-linear array stereoscopic image DSM and the resource three-size three-linear array stereoscopic image DSM based on an affine transformation formula;
and 2.4, correcting the RPC parameter of the three-wire array stereoscopic image of the resource third based on the elevation offset.
7. The method according to claim 1, characterized in that step 3 comprises the following sub-steps:
step 3.1, obtaining the same name image point between the high-resolution seven-linear array stereoscopic image and the high-resolution seven-laser footprint image through a SIFT matching algorithm;
step 3.2, obtaining the ground three-dimensional coordinates of the high-resolution seven homonymous image points by intersection-projecting the high-resolution seven homonymous image points onto the DSM of the high-resolution seven two-linear-array stereoscopic image;
and 3.3, obtaining a laser point homonymy image point, and forming a laser control point by the laser point homonymy image point and the ground three-dimensional coordinates of the high-resolution seventh homonymy image point.
CN202110778949.4A 2021-07-09 2021-07-09 High-resolution seven-number and resource three-number stereoscopic image joint adjustment method Active CN113379648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778949.4A CN113379648B (en) 2021-07-09 2021-07-09 High-resolution seven-number and resource three-number stereoscopic image joint adjustment method

Publications (2)

Publication Number Publication Date
CN113379648A CN113379648A (en) 2021-09-10
CN113379648B true CN113379648B (en) 2023-12-19

Family

ID=77581583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110778949.4A Active CN113379648B (en) 2021-07-09 2021-07-09 High-resolution seven-number and resource three-number stereoscopic image joint adjustment method

Country Status (1)

Country Link
CN (1) CN113379648B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114689015B * 2021-11-29 2023-01-17 Chengdu University of Technology Method for improving elevation precision of optical satellite stereoscopic image DSM

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102168972A (en) * 2010-12-15 2011-08-31 China Centre for Resources Satellite Data and Application RPC-based method for improving and calibrating block adjustment of three-linear array three-dimensional satellite
CN106960174A (en) * 2017-02-06 2017-07-18 Chinese Academy of Surveying and Mapping High score image laser radar vertical control point is extracted and its assisted location method
CN107504981A (en) * 2017-07-25 2017-12-22 Satellite Surveying and Mapping Application Center, National Administration of Surveying, Mapping and Geoinformation A kind of attitude of satellite error correcting method and equipment based on laser-measured height data
CN109977344A (en) * 2019-03-20 2019-07-05 Wuhan University A kind of block adjustment method of spaceborne noctilucence remote sensing image
CN110388898A (en) * 2019-06-27 2019-10-29 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences Construct the multiple coverage remote sensing image error compensation method of multi-source of virtual controlling point constraint
CN111354040A (en) * 2020-01-16 2020-06-30 Jinggangshan University Optical satellite image block adjustment method based on Partial EIV model
CN112597428A (en) * 2020-12-22 2021-04-02 Tongji University Flutter detection correction method based on beam adjustment and image resampling of RFM model
CN112765095A (en) * 2020-12-24 2021-05-07 Shandong Provincial Institute of Land Surveying and Mapping Method and system for filing image data of stereo mapping satellite

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107167786B * 2017-06-05 2021-01-01 Chinese Academy of Surveying and Mapping Method for auxiliary extraction of elevation control points from satellite laser height measurement data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A Planimetric Location Method for Laser Footprints of the Chinese Gaofen-7 Satellite Using Laser Spot Center Detection and Image Matching to Stereo Image Product";Huan Xie et al;《IEEE Transactions on Geoscience and Remote Sensing 》;全文 *
"星载激光测距数据与线阵影像的联合平差";刘玉梅;《 CNKI优秀硕士学位论文全文库》;全文 *
高分辨率光学卫星测绘技术综述;李德仁;王密;;航天返回与遥感(第02期);全文 *

Also Published As

Publication number Publication date
CN113379648A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN107705329B (en) High-resolution optical satellite staring image registration method based on geometric constraint
CN110388898B (en) Multisource multiple coverage remote sensing image adjustment method for constructing virtual control point constraint
CN113358091B (en) Method for producing digital elevation model DEM (digital elevation model) by using three-linear array three-dimensional satellite image
US8577139B2 (en) Method of orthoimage color correction using multiple aerial images
CN109709551B (en) Area network plane adjustment method for satellite-borne synthetic aperture radar image
CN112419380B (en) Cloud mask-based high-precision registration method for stationary orbit satellite sequence images
CN108876861B (en) Stereo matching method for extraterrestrial celestial body patrolling device
Ozcanli et al. A comparison of stereo and multiview 3-D reconstruction using cross-sensor satellite imagery
CN108444451B (en) Planet surface image matching method and device
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN111583330B (en) Multi-scale space-time Markov remote sensing image sub-pixel positioning method and system
CN114998399B (en) Heterogeneous optical remote sensing satellite image stereopair preprocessing method
CN113379648B (en) High-resolution seven-number and resource three-number stereoscopic image joint adjustment method
CN110660099B (en) Rational function model fitting method for remote sensing image processing based on neural network
CN107516291B (en) Night scene image ortho-rectification processing method
CN112132971B (en) Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN115719320B (en) Tilt correction dense matching method based on remote sensing image
CN117030620A (en) Method and device for adjusting regional network based on multisource optical remote sensing satellite image
CN113469899B (en) Optical remote sensing satellite relative radiation correction method based on radiation energy reconstruction
CN114359389A (en) Large image blocking epipolar line manufacturing method based on image surface epipolar line pair
Previtali et al. An automatic multi-image procedure for accurate 3D object reconstruction
Zhang et al. Tests and performance evaluation of DMC images and new methods for their processing
Zhao et al. Epipolar line generation from IKONOS imagery based on rational function model
CN110307858B (en) Adaptive correction method for surveying and mapping satellite intersection error

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 1 Baisheng Village, Zizhuyuan, Haidian District, Beijing, 100089

Applicant after: Ministry of Natural Resources Land Satellite Remote Sensing Application Center

Address before: 100048 No.1 Baisheng village, Zizhuyuan, Haidian District, Beijing

Applicant before: Ministry of Natural Resources Land Satellite Remote Sensing Application Center

GR01 Patent grant