CN113516606A - Slope radar image and optical photo matching fusion method assisted by oblique photography data - Google Patents

Slope radar image and optical photo matching fusion method assisted by oblique photography data

Info

Publication number
CN113516606A
CN113516606A CN202110244381.8A CN202110244381A CN113516606A CN 113516606 A CN113516606 A CN 113516606A CN 202110244381 A CN202110244381 A CN 202110244381A CN 113516606 A CN113516606 A CN 113516606A
Authority
CN
China
Prior art keywords
image
radar
shot
matching
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110244381.8A
Other languages
Chinese (zh)
Other versions
CN113516606B (en)
Inventor
郑翔天
何秀凤
李金旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202110244381.8A priority Critical patent/CN113516606B/en
Publication of CN113516606A publication Critical patent/CN113516606A/en
Application granted granted Critical
Publication of CN113516606B publication Critical patent/CN113516606B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for matching and fusing slope radar images and optical photographs with the assistance of oblique photography data. First, geometric mapping is completed using the spatial relationship between the ground-based radar station-setting information and the oblique imagery. Then, an instantaneous view-angle image is obtained from the oblique photography data by perspective projection transformation according to the coordinates and orientation of the photograph shooting point; common points between the instantaneous view-angle image and the shot photograph are extracted, and least-squares-optimized affine transformation matching is applied to obtain the pixel-level mapping relation between the radar measurement image and the shot photograph. Finally, an image fusion algorithm is introduced to fuse the real-time deformation with the color space of the shot photograph, yielding a photograph-deformation fusion image. The method uses the three-dimensional information of the oblique imagery to assist the matching of the ground-based radar image with the optical photograph, avoiding the large matching errors that arise in conventional methods, which apply a two-dimensional image transformation directly and ignore the difference in imaging principles.

Description

Slope radar image and optical photo matching fusion method assisted by oblique photography data
Technical Field
The invention relates to a method for matching and fusing slope radar images and optical photographs with the assistance of oblique photography data, and belongs to the technical field of open-pit mine slope deformation monitoring.
Background
Landslides are among the most frequent geological disasters and, after earthquakes, cause the most serious losses. After a landslide, many emergency rescue personnel and engineering vehicles are on site, and the losses would be incalculable if a secondary disaster occurred. Continuously monitoring the residual rock mass of the landslide, analyzing its deformation characteristics, having experts assess and locate dangerous deformation zones, and issuing timely early warnings is the widely accepted technical route for on-site emergency monitoring after a landslide disaster. Ground-based interferometric synthetic aperture radar (GB-InSAR) has been a research hotspot in recent years and has proven to be a powerful tool for regional deformation monitoring, with short revisit intervals (down to the order of minutes). Existing landslide monitoring cases show that its measurement accuracy can reach the sub-millimeter level, so GB-InSAR holds great promise for landslide emergency monitoring and early warning. However, the radar images in a polar coordinate system; its results are convenient for human interpretation and for locating dangerous areas only after being mapped into three-dimensional space, and are therefore not suitable for direct use by safety monitoring personnel. Consequently, the measurement results often need to be matched with photographs taken by the monitoring personnel.
The existing matching approach has low accuracy: the ground-based radar image and the shot photograph are matched by a two-dimensional image transformation, but the imaging principles of the two differ greatly, so an accurate correspondence cannot be achieved and the result is difficult to use as an interpretation reference. Slope sites, however, often have three-dimensional data from laser scanning (LiDAR) or unmanned aerial vehicle oblique photography. On one hand, such three-dimensional data can be matched to the radar image by geometric mapping based on range-Doppler analysis; on the other hand, the instantaneous view-angle image of the oblique data can be matched to the shot photograph. Matching and fusion of the ground-based slope deformation monitoring radar image and the shot photograph can therefore be completed with the assistance of oblique photography.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for matching and fusing slope radar deformation images and optical photographs in which the slope deformation monitoring radar image and the shot photograph are each matched to oblique photography data serving as intermediate data.
To solve the above technical problems, the invention adopts the following technical scheme:
the slope radar image and optical photo matching fusion method assisted by oblique photography data comprises the following specific steps:
step 1, continuously measuring the antenna center coordinates of each stop bit in the process of moving the foundation radar one by one to obtainArrival set PrailPoint set PrailCarrying out linear fitting to obtain an antenna track vector
Figure BDA0002963546690000011
Step 2, calculating a three-dimensional oblique photography data point set PmapRelative slope and relative azimuth angle between each point and radar to form two-dimensional point set Pmap3D
Step 3, calculating Pmap3DThe Euclidean distance between each point and each pixel point in the radar image, and the pixel point corresponding to the minimum Euclidean distance form the most adjacent pixel point set PITo obtain Pmap3DAnd PIMapping table T between oneI
Step 4, from the optical photograph IshotGeographic coordinates P of internally-read optical photograph shooting pointshotA 1 is to PmapIs set at PshotAn instantaneous image I obtained by adjusting the viewing angle of three-dimensional oblique photography data by using a projective transformation methodtempPreservation of PmapAnd ItempMapping table T between pointstemp
Step 5, visual interpretation of ItempAnd IshotObtaining a rough matching common point set P by two-dimensional coordinates of the middle and obvious ground object targetcoWherein P iscoIs composed of two sub-point sets, one is ItempA set of coordinate sub-points of the inner salient object, another one is IshotEndo-and ItempA set of coordinate sub-points of the same salient object target;
step 6, adding PcoInputting an image affine transformation equation f to obtain initial values of transformation parameters of the image affine transformation equation, wherein the transformation parameters comprise rotation factors, translation factors and scaling factors;
step 7, replacing the initial values of the transformation parameters in the step 6 back to f to form a transformation equation f1, and converting I into IshotSubstituting all the pixel points into f1 to obtain a coarse matching image IroughForm IshotAnd IroughCoarse matching mapping table T between pixelsroughCompleting coarse matching;
step 8, extracting I by using a characteristic extraction methodtempAnd IroughImage features of the interior, get a number of ItempAnd IroughForm a fine-matching common point pair set Pfine,PfineWherein the element is ItempAnd IroughPixel coordinates of the same characteristic points;
step 9, utilizing P in step 8fineEstimating the optimal transformation parameters of the affine transformation equation f of the image by using least square iteration according to the initial values of the transformation parameters in the step 6, and replacing the optimal transformation parameters back to the f to form a transformation equation f 2;
step 10, adding IroughSubstituting transformation equation f2 to obtain IfineForm IroughAnd IfineFine matching mapping table T between pixelsfineObtaining I by resampling and interpolation methodshotAnd radar image PIMapping relation T betweenfinal
Step 11, search for TfinalMapping table for combining the deformation value and I of each pixel point in radar image by image fusion methodshotFusing the middle RGB color channels to obtain a fused image IfusionAnd completing the matching fusion process.
Further, in step 1 the antenna track vector v_rail is directed from the radar starting point to the end point.
Further, in step 2 the relative slant range and relative azimuth between each point of P_map and the radar are calculated using the range-Doppler algorithm.
Further, the relative slant range between the i-th vertex A_i of P_map and the radar is:

R_{3D}^{i} = \sqrt{(x_i - x_s)^2 + (y_i - y_s)^2 + (z_i - z_s)^2}, \quad i = 1, 2, \ldots, N

where N is the number of vertices in P_map, (x_i, y_i, z_i) are the three-dimensional coordinates of A_i, and (x_s, y_s, z_s) are the three-dimensional coordinates of the radar synthetic aperture centre O_s.

The relative azimuth between the i-th vertex A_i of P_map and the radar is:

\theta_{3D}^{i} = \arcsin\!\left( \frac{\overrightarrow{O_s A_i'} \cdot \overrightarrow{O_s P_2}}{\,|\overrightarrow{O_s A_i}|\,|\overrightarrow{O_s P_2}|\,} \right)

where A_i' is the foot of the perpendicular from A_i onto the antenna track vector v_rail, |·| denotes the modulus of a vector, and P_2 is the end point of the radar travel.
Further, the image affine transformation equation f in step 6 is:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \rho R \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} T_x \\ T_y \end{pmatrix}

where (x', y') are the target coordinates, (x, y) are the coordinates to be matched, R is the rotation factor, T_x and T_y are the translation factors, and ρ is the scaling factor.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects:
(1) the three-dimensional information of the oblique imagery assists the matching of the ground-based radar image with the optical photograph, avoiding the large matching errors caused by conventional methods that apply a two-dimensional image transformation directly and ignore the difference in imaging principles;
(2) salient objects in the scene are identified to first coarsely match the instantaneous image of the oblique data with the optical photograph, and a large number of common points are then identified from features to optimize the affine transformation parameters, overcoming the difficulty of initial-value selection in conventional transformation methods;
(3) the feature-based identification of a large number of common points and the least-squares iterative optimization of the transformation parameters are easy to implement, run fast, and achieve pixel-level matching fusion accuracy;
(4) the deformation measurement image of the slope deformation monitoring radar is matched in real time with photographs shot by geological disaster reconnaissance personnel; the method is highly usable and can provide data support for landslide risk assessment and disaster early warning.
Drawings
FIG. 1 is a schematic diagram of slope deformation monitoring radar image and optical photograph matching fusion assisted by oblique photography data;
fig. 2 is a flowchart of a slope deformation monitoring radar image and optical photo matching fusion method assisted by oblique photography data according to an embodiment of the present invention.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to a method for matching and fusing slope radar images and optical photographs with the assistance of oblique photography data. First, geometric mapping is completed using the spatial relationship between the station-setting coordinates and pitch-angle information of the ground-based radar and the oblique imagery. Then, an instantaneous view-angle image is obtained from the oblique photography data by perspective projection transformation according to the coordinates and orientation of the photograph shooting point; common points between the instantaneous view-angle image and the shot optical photograph are extracted, and least-squares-optimized affine transformation matching is applied to obtain the pixel-level mapping relation between the radar images and the shot optical photograph. Finally, an image fusion algorithm is introduced to fuse the slope deformation monitoring radar image with the color space of the shot optical photograph, yielding a photograph-deformation fusion image.
Examples
This embodiment follows the flow chart of the slope deformation monitoring radar image and optical photograph matching fusion method assisted by oblique photography data shown in fig. 2.
The antenna centre coordinates at each stop position during the stop-by-stop movement of the ground-based radar are measured continuously with common auxiliary survey equipment such as a total station, three-dimensional laser scanning, or a static global navigation satellite system (GPS/GNSS) receiver. The measurement point set of the first radar run is P_rail^1, the measurement point set of the n-th run is P_rail^n, and their union

P_{rail} = P_{rail}^{1} \cup P_{rail}^{2} \cup \ldots \cup P_{rail}^{n}

gives the point set P_rail. A conventional straight-line fitting method is applied to the point set P_rail to obtain the radar antenna track vector v_rail, whose direction points from the radar starting point to the end point.
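As an illustration of this step, the following minimal numpy sketch fits the track line by a principal-axis (SVD) fit, one common way to realize the "conventional straight-line fitting method"; the function name and array layout are assumptions, not part of the patent:

```python
import numpy as np

def fit_antenna_track(p_rail):
    """Fit a straight line to the antenna stop-position point set P_rail.

    p_rail: (N, 3) array of antenna centre coordinates from all runs.
    Returns (centroid, v_rail): a point on the line and the unit track
    vector, oriented from the radar starting point to the end point.
    """
    centroid = p_rail.mean(axis=0)
    # Principal axis of the centred point cloud = best-fit line direction
    _, _, vt = np.linalg.svd(p_rail - centroid)
    v_rail = vt[0]
    # Orient the vector from the first measured stop towards the last
    if np.dot(p_rail[-1] - p_rail[0], v_rail) < 0:
        v_rail = -v_rail
    return centroid, v_rail / np.linalg.norm(v_rail)
```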
The antenna track vector v_rail and the three-dimensional oblique photography data point set P_map are taken as input parameters of the geometric mapping algorithm, and the relative slant range R_3D and relative azimuth θ_3D between each point of P_map and the radar are calculated with the conventional range-Doppler algorithm.

Let the radar synthetic aperture centre be O_s. Traverse every vertex (x_i, y_i, z_i) of P_map and compute its Euclidean distance to the aperture centre O_s(x_s, y_s, z_s):

R_{3D}^{i} = \sqrt{(x_i - x_s)^2 + (y_i - y_s)^2 + (z_i - z_s)^2}, \quad i = 1, 2, \ldots, N

where R_3D^i denotes the slant range of the i-th point relative to the aperture centre O_s and N is the total number of model points.

Traverse every model vertex of P_map and compute its azimuth relative to the aperture centre O_s; the azimuth is taken negative to the left of the radar's vertical central plane:

\theta_{3D}^{i} = \arcsin\!\left( \frac{\overrightarrow{O_s A_i'} \cdot \overrightarrow{O_s P_2}}{\,|\overrightarrow{O_s A_i}|\,|\overrightarrow{O_s P_2}|\,} \right)

where θ_3D^i is the azimuth of the i-th point relative to the aperture centre, A_i' is the foot of the perpendicular from the i-th point onto the track axis v_rail, P_2 is the end point of the radar travel, and |·| denotes the modulus of a vector. This forms the two-dimensional point set P_map3D: {(R_3D, θ_3D)_1, (R_3D, θ_3D)_2, ..., (R_3D, θ_3D)_N}. The Euclidean distance between each point of P_map3D and each grid point of the radar image I(r, θ) is computed, and the minimum-distance points give the nearest-pixel set P_I: {(R_2D, θ_2D)_1, (R_2D, θ_2D)_2, ..., (R_2D, θ_2D)_N}, yielding a one-to-one mapping table T_I between P_map3D and P_I.
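A brute-force numpy sketch of this geometric mapping and nearest-pixel search follows; it assumes the azimuth convention reconstructed above (arcsine of the signed along-track component over the slant range) and illustrative names, and a KD-tree would replace the dense distance matrix for large models:

```python
import numpy as np

def geometric_mapping(p_map, o_s, v_rail, img_r, img_theta):
    """Map each oblique-photography vertex to the nearest radar pixel.

    p_map: (N, 3) model vertices; o_s: (3,) synthetic aperture centre;
    v_rail: (3,) unit antenna track vector; img_r, img_theta: 1-D grids
    of the radar image I(r, theta). Returns (r3d, th3d, nearest_idx),
    i.e. the point set P_map3D and, for each vertex, the index of its
    nearest radar pixel (the mapping table T_I).
    """
    d = p_map - o_s                                    # offsets from aperture centre
    r3d = np.linalg.norm(d, axis=1)                    # relative slant range R_3D
    along = d @ v_rail                                 # signed along-track offset |O_s A_i'|
    th3d = np.arcsin(np.clip(along / r3d, -1.0, 1.0))  # azimuth, negative left of centre
    # Nearest radar grid point in (r, theta) space for every vertex (brute force)
    pix = np.stack(np.meshgrid(img_r, img_theta, indexing="ij"), axis=-1).reshape(-1, 2)
    pts = np.stack([r3d, th3d], axis=1)
    dists = np.linalg.norm(pts[:, None, :] - pix[None, :, :], axis=2)
    nearest_idx = dists.argmin(axis=1)                 # T_I: vertex -> pixel index
    return r3d, th3d, nearest_idx
```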
In this embodiment the geographic coordinates of the place where photograph I_shot was taken are obtained by GPS/GNSS, and the shooting-point coordinates P_shot are read. The observation point of the oblique photography data is set at P_shot, and the line-of-sight direction v_view is then determined, from which the instantaneous two-dimensional image I_temp of the oblique photography data can be determined. In this embodiment v_view is the vector from the viewpoint P_shot to the scene centre P_R0. Projective transformation adjusts the observation view angle of the oblique photography data so that the instantaneous image I_temp is initially consistent in content with the photograph I_shot. In the projection transformation of this embodiment, the oblique photography data are initially in the world coordinate system O-XYZ and are first transformed into a target coordinate system P_shot-XeYeZe centred on the observation point, whose negative Ze axis points along the viewing direction v_view; the world and target coordinate systems are both right-handed three-dimensional rectangular systems. P_map is then transformed from the world system into the target system; this transformation is equivalent to the translation and rotation operations that bring the target coordinate system into coincidence with the world coordinate system, after which perspective projection is applied. The plane parallel to XeOeYe at distance f from the viewpoint is taken as the projection plane of the instantaneous image I_temp. The coordinates (X_m, Y_m) of a target point on the I_temp projection plane can then be calculated from:

\begin{pmatrix} X_e \\ Y_e \\ Z_e \end{pmatrix} = R(\theta, \alpha) \begin{pmatrix} X_M - X_s \\ Y_M - Y_s \\ Z_M - Z_s \end{pmatrix}, \qquad X_m = f\,\frac{X_e}{Z_e}, \quad Y_m = f\,\frac{Y_e}{Z_e}

where (X_s, Y_s, Z_s) are the coordinates of P_shot in the world coordinate system O-XYZ, R(θ, α) is the rotation determined by the azimuth angle θ and pitch angle α of the I_temp projection plane relative to the plane XOY, f is the distance between the observation point P_shot and the I_temp projection plane, and (X_M, Y_M, Z_M) is an arbitrary target point in the world coordinate system. The point-to-grid mapping table T_temp between P_map and I_temp is saved.
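The sketch below illustrates this world-to-eye rotation plus perspective divide. The exact signs of R(θ, α) depend on the axis conventions, so the rotation used here (about Z by θ, then about X by α) is an assumed convention, not the patented one:

```python
import numpy as np

def project_to_instant_image(p_world, p_shot, theta, alpha, f):
    """Perspective-project world points onto the I_temp projection plane.

    p_world: (N, 3) target points (X_M, Y_M, Z_M) in the world system
    O-XYZ; p_shot: (3,) viewpoint (X_s, Y_s, Z_s); theta, alpha: azimuth
    and pitch of the projection plane relative to XOY, in radians;
    f: viewpoint-to-plane distance. Returns (N, 2) coordinates (X_m, Y_m).
    """
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    # World -> eye frame: rotate about Z by theta, then about X by alpha
    rz = np.array([[ ct,  st, 0.0],
                   [-st,  ct, 0.0],
                   [0.0, 0.0, 1.0]])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  ca,  sa],
                   [0.0, -sa,  ca]])
    eye = (rx @ rz @ (p_world - p_shot).T).T   # (X_e, Y_e, Z_e) per point
    # Perspective divide onto the plane at distance f along the view axis
    return f * eye[:, :2] / eye[:, 2:3]
```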
The two-dimensional coordinates of salient ground-object targets in the instantaneous image I_temp and the photograph I_shot are obtained by visual interpretation, giving the coarse-matching common point set P_co. The point set P_co comprises a sub-set of coordinates of the salient objects in I_temp and a sub-set of coordinates of the same salient ground-object targets in I_shot, with n_p common points in total.
The coarse-matching common point set P_co is input into the image affine transformation equation

f:\;\begin{pmatrix} x' \\ y' \end{pmatrix} = \rho R \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} T_x \\ T_y \end{pmatrix}

to obtain the initial parameter values ρ_0, R_0, T_x^0, T_y^0. In equation f, (x', y') are the target coordinates, (x, y) are the coordinates to be matched, R is the rotation factor, T_x and T_y are the translation factors, and ρ is the scaling factor.
The initial parameter values are brought back into equation f to form the transformation equation f1, and the pixels of I_shot are substituted into f1 to obtain the coarse-matching image grid I_rough, giving the coarse-matching mapping table T_rough and completing the coarse matching.
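As a sketch of how the initial values ρ_0, R_0, T_x^0, T_y^0 could be obtained from the coarse common points: the model above is linear in a = ρ·cosθ and b = ρ·sinθ, so a single least-squares solve suffices. Function and variable names are illustrative, assuming numpy:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Estimate rho, R, Tx, Ty of equation f from common point pairs.

    src: (n, 2) coordinates to be matched; dst: (n, 2) target coordinates
    of the same objects. Solves linearly for a = rho*cos(t), b = rho*sin(t),
    Tx, Ty, since x' = a*x - b*y + Tx and y' = b*x + a*y + Ty.
    """
    n = len(src)
    a_mat = np.zeros((2 * n, 4))
    l_vec = dst.reshape(-1)
    a_mat[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    a_mat[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    (a, b, tx, ty), *_ = np.linalg.lstsq(a_mat, l_vec, rcond=None)
    rho = np.hypot(a, b)                      # scaling factor rho_0
    rot = np.array([[a, -b], [b, a]]) / rho   # rotation factor R_0
    return rho, rot, tx, ty

def apply_f(src, rho, rot, tx, ty):
    """Apply transformation f (here f1 with the initial values)."""
    return (rho * (rot @ src.T)).T + np.array([tx, ty])
```

Applying apply_f to every pixel coordinate of I_shot would then produce the coarse-matching grid I_rough described above.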
Under the control of the coarse-matching result, point, line and surface features of I_temp and I_rough are extracted with a feature extraction method, and a large number of identical feature points are taken as the fine-matching common point set P_fine.
Using the fine-matching common point set P_fine and the initial transformation parameter values ρ_0, R_0, T_x^0, T_y^0, the parameters of the affine transformation equation f1 are optimized by least-squares iteration. In this embodiment, writing the rotation factor as an angle θ_R in a plane right-handed rectangular coordinate system (bilinear interpolation is applied after the affine transformation), the transformation model is:

\hat{x} = \rho\,(x\cos\theta_R - y\sin\theta_R) + T_x, \qquad \hat{y} = \rho\,(x\sin\theta_R + y\cos\theta_R) + T_y

The error equation is established as:

V = B\hat{X} - L

where B is the design matrix assembled from the n common points of P_fine, L is the observation vector, n is the number of common points in P_fine, and the least-squares solution of the model parameters is

\hat{X} = (B^{T} P B)^{-1} B^{T} P L

with P an equal-weight matrix. The least-squares iteration minimizes the residual of the error equation. The iterative calculation is controlled by the rounding error, for which the standard error of unit weight

\hat{\sigma}_0 = \sqrt{\frac{V^{T} P V}{r}}

is selected, the degrees of freedom r being the difference between the number of error equations and the number of unknowns actually required to solve the equations. Each control point is treated as an independent observation, so the weight matrix P is the identity matrix. The optimal estimates \hat{X} of the transformation parameters are solved iteratively and brought back into f to form the transformation equation f2.
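A sketch of this least-squares iteration under the stated equal-weight assumption, in Gauss-Newton form with the rotation factor parameterized by the angle θ_R as above; function and variable names are illustrative:

```python
import numpy as np

def refine_affine(src, dst, rho, theta, tx, ty, tol=1e-8, max_iter=50):
    """Gauss-Newton refinement of f from the coarse initial values.

    src, dst: (n, 2) fine-matching common points P_fine. With P the
    identity (equal weights), each iteration reduces to an ordinary
    least-squares solve of B dX = V for the parameter update.
    """
    x = np.array([rho, theta, tx, ty], float)
    n = len(src)
    sigma0_prev = np.inf
    for _ in range(max_iter):
        rho, theta, tx, ty = x
        c, s = np.cos(theta), np.sin(theta)
        pred_x = rho * (c * src[:, 0] - s * src[:, 1]) + tx
        pred_y = rho * (s * src[:, 0] + c * src[:, 1]) + ty
        v = np.concatenate([dst[:, 0] - pred_x, dst[:, 1] - pred_y])  # residuals
        # Jacobian of the predictions w.r.t. (rho, theta, Tx, Ty)
        b = np.zeros((2 * n, 4))
        b[:n] = np.column_stack([c * src[:, 0] - s * src[:, 1],
                                 rho * (-s * src[:, 0] - c * src[:, 1]),
                                 np.ones(n), np.zeros(n)])
        b[n:] = np.column_stack([s * src[:, 0] + c * src[:, 1],
                                 rho * (c * src[:, 0] - s * src[:, 1]),
                                 np.zeros(n), np.ones(n)])
        dx, *_ = np.linalg.lstsq(b, v, rcond=None)
        x += dx
        r = 2 * n - 4                     # degrees of freedom
        sigma0 = np.sqrt(v @ v / r)       # standard error of unit weight
        if abs(sigma0_prev - sigma0) < tol:
            break
        sigma0_prev = sigma0
    return x  # optimal estimates of (rho, theta_R, Tx, Ty)
```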
I_rough is substituted into the transformation equation f2 to obtain I_fine, and the mapping relation is recorded as T_fine. At this point the following mappings are available: the one-to-one mapping table T_I between the slope deformation monitoring radar image P_I and P_map3D (P_map3D and P_map being in one-to-one correspondence), the point-to-point mapping table T_temp between P_map and I_temp, the coarse-matching mapping table T_rough between I_shot and I_rough, and the fine-matching mapping table T_fine between I_rough and I_fine. The mapping relation T_final between the photograph I_shot and the radar image P_I is obtained by conventional resampling and interpolation.
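One way to realize the "conventional resampling and interpolation" that yields T_final is to push each radar pixel through the chained tables and resample onto the photo grid. The sketch below assumes the chained coordinates have already been computed and uses SciPy's nearest-neighbour gridding:

```python
import numpy as np
from scipy.interpolate import griddata

def compose_to_final(radar_def, chain_xy, photo_xy):
    """Resample radar deformation onto the I_shot pixel grid (T_final).

    radar_def: (N,) deformation value of each mapped radar pixel;
    chain_xy: (N, 2) position of those pixels in I_shot coordinates after
    chaining T_I -> T_temp -> T_rough -> T_fine; photo_xy: (H*W, 2) pixel
    centres of I_shot. Nearest-neighbour interpolation fills the table.
    """
    return griddata(chain_xy, radar_def, photo_xy, method="nearest")
```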
The T_final mapping table is looked up, and the grid-point deformation values of the slope deformation radar image are fused with the red-green-blue (RGB) color channels of the I_shot photograph by an image fusion method. This embodiment adopts a common fusion scheme in which the deformation measured by the radar replaces the red (R) channel of the optical photograph:

I_{fusion} = (I_R,\; I_{VG},\; I_{VB})

where the fused image has red, green and blue channels: the red channel uses the deformation values I_R of the radar image P_I, while the green and blue channels use the green channel I_VG and blue channel I_VB of the optical photograph I_shot. The fused image I_fusion is thus obtained and the matching fusion process is complete. A schematic diagram of the oblique-photography-assisted matching fusion of the slope deformation monitoring radar image and the optical photograph is shown in fig. 1.

Claims (5)

1. The slope radar image and optical photo matching fusion method assisted by oblique photography data is characterized by comprising the following specific steps of:
step 1, continuously measuring the antenna centre coordinates at each stop position while the ground-based radar moves stop by stop to obtain a point set P_rail, and fitting a straight line to P_rail to obtain the antenna track vector v_rail;
step 2, calculating the relative slant range and relative azimuth between each point of the three-dimensional oblique photography data point set P_map and the radar, forming a two-dimensional point set P_map3D;
step 3, calculating the Euclidean distance between each point of P_map3D and each pixel of the radar image, the pixels corresponding to the minimum Euclidean distances forming the nearest-pixel set P_I, to obtain a one-to-one mapping table T_I between P_map3D and P_I;
step 4, reading the geographic coordinates P_shot of the shooting point from the optical photograph I_shot, setting the viewpoint of P_map at P_shot, adjusting the viewing angle of the three-dimensional oblique photography data by a projective transformation method to obtain an instantaneous image I_temp, and saving the point-to-point mapping table T_temp between P_map and I_temp;
step 5, visually interpreting the two-dimensional coordinates of salient ground-object targets in I_temp and I_shot to obtain a coarse-matching common point set P_co, wherein P_co consists of two sub-point sets: the coordinates of the salient objects in I_temp, and the coordinates of the same salient objects in I_shot;
step 6, inputting P_co into an image affine transformation equation f to obtain initial values of its transformation parameters, the transformation parameters comprising a rotation factor, translation factors and a scaling factor;
step 7, substituting the initial parameter values of step 6 back into f to form a transformation equation f1, and substituting all pixels of I_shot into f1 to obtain a coarse-matching image I_rough, forming a pixel-wise coarse-matching mapping table T_rough between I_shot and I_rough and completing the coarse matching;
step 8, extracting image features from I_temp and I_rough with a feature extraction method to obtain a number of identical feature points of I_temp and I_rough, forming a fine-matching common point pair set P_fine whose elements are the pixel coordinates of the same feature points in I_temp and I_rough;
step 9, using P_fine of step 8 and the initial parameter values of step 6, estimating the optimal transformation parameters of the image affine transformation equation f by least-squares iteration, and substituting the optimal parameters back into f to form a transformation equation f2;
step 10, substituting I_rough into the transformation equation f2 to obtain I_fine, forming a pixel-wise fine-matching mapping table T_fine between I_rough and I_fine, and obtaining the mapping relation T_final between I_shot and the radar image P_I by resampling and interpolation;
step 11, looking up the T_final mapping table and fusing the deformation value of each pixel of the radar image with the RGB color channels of I_shot by an image fusion method to obtain a fused image I_fusion, completing the matching fusion process.
2. The oblique photography data aided slope radar image and optical photograph matching fusion method of claim 1, wherein in step 1 the antenna track vector v_rail is directed from the radar starting point to the end point.
3. The oblique photography data aided slope radar image and optical photograph matching fusion method of claim 1, wherein in step 2 the relative slant range and relative azimuth between each point of P_map and the radar are calculated using the range-Doppler algorithm.
4. The oblique photography data aided slope radar image and optical photograph matching fusion method of claim 3, wherein the relative slant range between the i-th vertex A_i of P_map and the radar is:

R_{3D}^{i} = \sqrt{(x_i - x_s)^2 + (y_i - y_s)^2 + (z_i - z_s)^2}, \quad i = 1, 2, \ldots, N

where N is the number of vertices in P_map, (x_i, y_i, z_i) are the three-dimensional coordinates of A_i, and (x_s, y_s, z_s) are the three-dimensional coordinates of the radar synthetic aperture centre O_s;

the relative azimuth between the i-th vertex A_i of P_map and the radar is:

\theta_{3D}^{i} = \arcsin\!\left( \frac{\overrightarrow{O_s A_i'} \cdot \overrightarrow{O_s P_2}}{\,|\overrightarrow{O_s A_i}|\,|\overrightarrow{O_s P_2}|\,} \right)

where A_i' is the foot of the perpendicular from A_i onto the antenna track vector v_rail, |·| denotes the modulus of a vector, and P_2 is the end point of the radar travel.
5. The oblique photography data aided slope radar image and optical photograph matching fusion method of claim 1, wherein the image affine transformation equation f in step 6 is:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \rho R \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} T_x \\ T_y \end{pmatrix}

where (x', y') are the target coordinates, (x, y) are the coordinates to be matched, R is the rotation factor, T_x and T_y are the translation factors, and ρ is the scaling factor.
CN202110244381.8A 2021-03-05 2021-03-05 Slope radar image and optical photo matching fusion method assisted by oblique photographing data Active CN113516606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110244381.8A CN113516606B (en) 2021-03-05 2021-03-05 Slope radar image and optical photo matching fusion method assisted by oblique photographing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110244381.8A CN113516606B (en) 2021-03-05 2021-03-05 Slope radar image and optical photo matching fusion method assisted by oblique photographing data

Publications (2)

Publication Number Publication Date
CN113516606A true CN113516606A (en) 2021-10-19
CN113516606B CN113516606B (en) 2023-12-12

Family

ID=78061163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110244381.8A Active CN113516606B (en) 2021-03-05 2021-03-05 Slope radar image and optical photo matching fusion method assisted by oblique photographing data

Country Status (1)

Country Link
CN (1) CN113516606B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127683A (en) * 2016-06-08 2016-11-16 中国电子科技集团公司第三十八研究所 A kind of real-time joining method of unmanned aerial vehicle SAR image
KR102028324B1 (en) * 2019-02-26 2019-11-04 엘아이지넥스원 주식회사 Synthetic Aperture Radar Image Enhancement Method and Calculating Coordinates Method
CN112001955A (en) * 2020-08-24 2020-11-27 深圳市建设综合勘察设计院有限公司 Point cloud registration method and system based on two-dimensional projection plane matching constraint
CN112102458A (en) * 2020-08-31 2020-12-18 湖南盛鼎科技发展有限责任公司 Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wu Ruijuan; He Xiufeng: "Coastal wetland change detection by fusing high-resolution radar and optical imagery" (高分雷达与光学影像融合的滨海湿地变化检测), Science of Surveying and Mapping (测绘科学), no. 011
Li Wanli; Wang Wenjia: "Geometric feature map extraction method for laser SLAM based on the Hough transform" (基于Hough变换的激光SLAM几何特征地图提取方法), Mechatronics (机电一体化), no. 07
Yang Jun; Qi Yaolong; Tan Weixian; Wang Yanping; Hong Wen: "Three-dimensional matching method of ground-based SAR images and terrain data by geometric mapping" (地基SAR图像与地形数据的几何映射三维匹配方法), Journal of University of Chinese Academy of Sciences (中国科学院大学学报), vol. 32, no. 3

Also Published As

Publication number Publication date
CN113516606B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
EP1242966B1 (en) Spherical rectification of image pairs
JP4794019B2 (en) Apparatus and method for providing a three-dimensional map representation of a region
Li Potential of high-resolution satellite imagery for national mapping products
US8107722B2 (en) System and method for automatic stereo measurement of a point of interest in a scene
US20170310892A1 (en) Method of 3d panoramic mosaicing of a scene
CN106767706A (en) A kind of unmanned plane reconnoitres the Aerial Images acquisition method and system of the scene of a traffic accident
Eisenbeiss et al. Photogrammetric documentation of an archaeological site (Palpa, Peru) using an autonomous model helicopter
CN106444837A (en) Obstacle avoiding method and obstacle avoiding system for unmanned aerial vehicle
CN104363438B (en) Full-view stereo making video method
CN106408601A (en) GPS-based binocular fusion positioning method and device
KR20090064679A (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN102519436A (en) Chang'e-1 (CE-1) stereo camera and laser altimeter data combined adjustment method
CN110986888A (en) Aerial photography integrated method
CN112184786A (en) Target positioning method based on synthetic vision
CN114923477A (en) Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
CN116883604A (en) Three-dimensional modeling technical method based on space, air and ground images
CN112461204B (en) Method for satellite to dynamic flying target multi-view imaging combined calculation of navigation height
Liu et al. A new approach to fast mosaic UAV images
Bertram et al. Generation the 3D model building by using the quadcopter
CN102538764A (en) Combined type image pair three-dimensional location method
Maurice et al. A photogrammetric approach for map updating using UAV in Rwanda
Al-Rawabdeh et al. A robust registration algorithm for point clouds from UAV images for change detection
CN112785686A (en) Forest map construction method based on big data and readable storage medium
Choi et al. Precise geometric registration of aerial imagery and LIDAR data
CN113516606B (en) Slope radar image and optical photo matching fusion method assisted by oblique photographing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant