CN111127564A - Video image correction method based on geometric positioning model - Google Patents


Publication number
CN111127564A
Authority
CN
China
Prior art keywords
camera
image
coordinates
model
ground
Prior art date
Legal status
Granted
Application number
CN201911338034.0A
Other languages
Chinese (zh)
Other versions
CN111127564B (en)
Inventor
崔子豪
邢力
肖骥
赵李明
李小俊
刘驰
Current Assignee
Smart City Research Institute Of China Electronics Technology Group Corp
Original Assignee
Smart City Research Institute Of China Electronics Technology Group Corp
Priority date
Filing date
Publication date
Application filed by Smart City Research Institute Of China Electronics Technology Group Corp
Priority to CN201911338034.0A
Publication of CN111127564A
Application granted
Publication of CN111127564B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A video image correction method based on a geometric positioning model comprises the following steps: S1: constructing an initial geometric positioning model of the camera; S2: improving the positioning accuracy of the geometric positioning model through calibration; S3: constructing a geometric positioning model of a virtual camera; S4: resampling the real image with the virtual camera model to generate a virtual image. When a video image is projected onto a map for display, the invention eliminates the projection errors caused by camera imaging parameter errors and by elevation errors of the projection surface, so that the projected image and the map base map overlap completely.

Description

Video image correction method based on geometric positioning model
Technical Field
The invention relates to the technical field of video image correction, in particular to a video image correction method based on a geometric positioning model.
Background
In a camera's geometric positioning model, the internal and external orientation elements of the camera cannot be measured exactly, so positions computed with the model carry a certain positioning deviation, and the deviation may differ across different parts of the image. Imaging parameter errors divide into external orientation element errors and internal orientation element errors: the former comprise the mounting position error and the shooting angle error of the video camera; the latter comprise the focal length error of the camera, the size error of the detector element, and lens distortion.
The current practice is to calibrate the camera, compute these imaging parameter errors, and thereby recover an accurate geometric positioning model. However, in current three-dimensional GIS platforms such as Skyline, the parameter configuration for video projection supports setting only the camera's position, shooting angle, and horizontal view angle; the complete set of geometric positioning model parameters cannot be supplied. The Skyline configuration is therefore valid only for an ideal pinhole camera with no internal orientation element errors, which does not match real imaging conditions; because of the imaging parameter errors, the projected image is displaced and deformed. In addition, when the video image is projected onto a reference plane, or onto uneven terrain whose elevation data is inaccurate, the elevation error shifts the projected image as well.
In sum, the camera's imaging parameter errors, the GIS platform's restrictions on camera projection settings, and the influence of elevation errors together prevent a video projection accessed by the GIS platform from overlapping the map base map exactly. Even if manually adjusting the imaging parameters removes the errors contributed by the external orientation elements, the projected image still shows local deformation, which hampers the stitching of multiple video channels and the extraction of moving-target motion trajectories.
Disclosure of Invention
(I) Objects of the invention
In order to solve the technical problems described in the background art, the invention provides a video image correction method based on a geometric positioning model that, when a video image is projected onto a map for display, eliminates the projection errors caused by camera imaging parameter errors and by projection surface elevation errors; within a given accuracy range, the projected image and the map base map overlap completely.
(II) Technical scheme
In order to solve the above problems, the present invention provides a video image correction method based on a geometric positioning model, comprising the following steps:
S1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the camera's initial external orientation element parameters and its internal orientation element parameters; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
S102: constructing the geometric positioning model from the camera's initial internal and external orientation elements, as shown in formula A:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW}\,R_U \begin{bmatrix} x - x_0 + \Delta x \\ y - y_0 + \Delta y \\ -f \end{bmatrix} \qquad \text{(A)}$$

In formula A, x and y are the image point coordinates; Δx and Δy are the distortion corrections of the internal orientation elements; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; RU is an offset matrix that absorbs the mounting error of the camera load; m is a scale factor;
S2: improving the positioning accuracy of the geometric positioning model through calibration;
S201: calculating the camera's compensation parameters from ground control point data and recovering the correct imaging parameters; the control points are points with known ground coordinates, and for each control point both its ground coordinates (X, Y, Z) and its image point coordinates (x, y) on the image are obtained;
S202: solving for the camera's position compensation parameters, attitude compensation parameters, lens distortion parameters and detector element size with the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve its positioning accuracy;
S3: constructing a geometric positioning model of a virtual camera, the construction of the virtual camera eliminating the camera imaging parameter errors; the geometric imaging model of the virtual camera has the specific form shown in formula B:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW} \begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} \qquad \text{(B)}$$

In formula B, x and y are the image point coordinates; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor.
S4: resampling the real image with the virtual camera model to generate the virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (xP, yP) of a virtual image pixel and converting them into the image point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model for the virtual camera's geometric positioning model, which can be a chosen reference plane or the real terrain surface; substituting the image point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image point coordinates (x', y') on the real image, and converting them into the real image pixel coordinates (xP', yP');
S404: obtaining the grey value at the real image pixel coordinates (xP', yP') by bilinear interpolation and assigning it to the virtual image pixel (xP, yP);
S405: traversing all pixels of the virtual image, repeating S401-S404, to generate the final virtual camera image.
Preferably, in S1, the camera imaging geometric model establishes a one-to-one correspondence between the object space coordinates of a ground point and its image space coordinates, and comprises two calculation processes: a model forward calculation and a model back calculation;
the model forward calculation computes object coordinates from image coordinates; formula A expresses this forward calculation, in which the scale factor m is unknown, so the ground coordinates (X, Y, Z) are solved iteratively as follows:
s1: compute initial geodetic coordinates for the ground point: assume its initial elevation is zero, i.e. h0 = 0, combine formula A with the earth ellipsoid formula, and compute the initial geodetic coordinates (X0, Y0, Z0) corresponding to the image point coordinates (x, y);
s2: convert the geodetic coordinates (X0, Y0, Z0) into the longitude-latitude-elevation representation (lon0, lat0, h0); read the actual elevation value h' from the ground elevation data; the difference between h' and the initial elevation h0 is dh = |h' - h0|;
s3: take the elevation of the ground point as h' and recompute the geodetic coordinates (X, Y, Z) corresponding to the image point (x, y);
s4: repeat s1-s3 iteratively until dh falls below a preset tolerance, then stop and take the final geodetic coordinates (X, Y, Z);
the model back calculation computes the image point coordinates corresponding to the geodetic coordinates of a given point, i.e. maps from object space to image space, as follows:
s21: establish an initial mapping between ground point coordinates and image point coordinates, and make an initial prediction of the image point coordinates corresponding to the given ground point coordinates;
s22: run the predicted image point coordinates through the model forward calculation to obtain the corresponding ground point coordinates, compare them with the actual ground point coordinates, and update the parameters of the prediction model accordingly; iterate until the deviation between the forward-calculated and the actual ground point coordinates falls below a preset tolerance, then take the resulting image point coordinates as final; the initial prediction of the image point coordinates is obtained by solving a homography transformation matrix.
Preferably, in S3, the geometric imaging model of the virtual camera is an ideal pinhole model with no lens distortion, and the imaging range and resolution of the virtual camera are essentially the same as those of the real camera. Because the GIS platform can configure only the camera's horizontal view angle, the virtual camera's detector element is square, i.e. its length and width are equal. The external orientation elements of the virtual camera equal those of the real camera, i.e. its projection center coordinates and attitude angles coincide with the real camera's; its focal length equals the real camera's focal length; its focal-plane image sensor has the same size as the real camera's, i.e. the total area of the detector array is the same; and its detector element size equals the mean of the length and width of the real camera's detector element.
By constructing the virtual camera, the invention simulates the image that would be formed under ideal pinhole imaging, eliminating the image deformation caused by the camera's imaging parameter errors and the image point offsets caused by elevation errors of the projection reference surface.
The invention improves the positioning accuracy of the geometric positioning model, which facilitates accurate extraction of the motion trajectories of moving targets.
The invention also makes it convenient to feed video images into a three-dimensional GIS platform for projected display: the projected image and the map base map coincide closely, facilitating seamless stitched display of multiple channels of surveillance video.
Drawings
Fig. 1 is a schematic flow chart of a video image correction method based on a geometric positioning model according to the present invention.
Fig. 2 is a schematic diagram illustrating a corresponding relationship between virtual image coordinates and real image coordinates in the geometric orientation model-based video image correction method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
As shown in fig. 1-2, the present invention provides a method for correcting a video image based on a geometric orientation model, which comprises the following steps:
S1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the camera's initial external orientation element parameters and its internal orientation element parameters; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
S102: constructing the geometric positioning model from the camera's initial internal and external orientation elements, as shown in formula A:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW}\,R_U \begin{bmatrix} x - x_0 + \Delta x \\ y - y_0 + \Delta y \\ -f \end{bmatrix} \qquad \text{(A)}$$

In formula A, x and y are the image point coordinates; Δx and Δy are the distortion corrections of the internal orientation elements; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; RU is an offset matrix that absorbs the mounting error of the camera load; m is a scale factor;
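The forward use of formula A can be sketched as a short routine. This is an illustrative sketch only, with hypothetical parameter names, assuming the rotation matrices and orientation elements are already known and that the scale factor m is supplied from outside:

```python
import numpy as np

def geolocate_image_point(x, y, cam, m):
    """Formula A (sketch): map an image point (x, y) to geodetic ground
    coordinates (X, Y, Z) for a given scale factor m. The `cam` dict
    bundles the orientation elements; its key names are illustrative."""
    # Interior orientation: principal point x0/y0, distortion corrections
    # dx/dy, and principal distance f along the -z axis of the camera frame.
    ray_cam = np.array([x - cam["x0"] + cam["dx"],
                        y - cam["y0"] + cam["dy"],
                        -cam["f"]])
    # Exterior orientation: mount-offset matrix R_u, then the rotation
    # R_cw from the camera coordinate system to the geodetic system.
    R = cam["R_cw"] @ cam["R_u"]
    center = np.array([cam["Xc"], cam["Yc"], cam["Zc"]])
    # Scale the rotated ray by m and offset from the projection center.
    return center + m * R @ ray_cam
```

For a real camera m is unknown; it is fixed by intersecting the ray with the elevation surface, as the iterative forward calculation described later in this embodiment does.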
S2: improving the positioning accuracy of the geometric positioning model through calibration;
S201: calculating the camera's compensation parameters from ground control point data and recovering the correct imaging parameters; the control points are points with known ground coordinates, and for each control point both its ground coordinates (X, Y, Z) and its image point coordinates (x, y) on the image are obtained;
S202: solving for the camera's position compensation parameters, attitude compensation parameters, lens distortion parameters and detector element size with the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve its positioning accuracy;
S3: constructing a geometric positioning model of a virtual camera, the construction of the virtual camera eliminating the camera imaging parameter errors; the geometric imaging model of the virtual camera has the specific form shown in formula B:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW} \begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} \qquad \text{(B)}$$

In formula B, x and y are the image point coordinates; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor.
S4: resampling the real image with the virtual camera model to generate the virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (xP, yP) of a virtual image pixel and converting them into the image point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model for the virtual camera's geometric positioning model, which can be a chosen reference plane or the real terrain surface; substituting the image point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image point coordinates (x', y') on the real image, and converting them into the real image pixel coordinates (xP', yP');
S404: obtaining the grey value at the real image pixel coordinates (xP', yP') by bilinear interpolation and assigning it to the virtual image pixel (xP, yP);
S405: traversing all pixels of the virtual image, repeating S401-S404, to generate the final virtual camera image.
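The resampling loop of steps S401 to S405 can be sketched as follows. The mapping `virt_to_real`, which folds together the virtual-model forward projection (formula B, steps S401-S402) and the real-model back projection (formula A, step S403), is a hypothetical stand-in; only the bilinear sampling of S404 and the pixel traversal of S405 are spelled out:

```python
import numpy as np

def bilinear_sample(img, xp, yp):
    """Grey value of the real image at fractional pixel (xp, yp),
    by bilinear interpolation (step S404)."""
    i0, j0 = int(np.floor(yp)), int(np.floor(xp))
    if not (0 <= j0 < img.shape[1] - 1 and 0 <= i0 < img.shape[0] - 1):
        return 0.0                          # outside the real image
    dx, dy = xp - j0, yp - i0
    return ((1 - dx) * (1 - dy) * img[i0, j0] +
            dx * (1 - dy) * img[i0, j0 + 1] +
            (1 - dx) * dy * img[i0 + 1, j0] +
            dx * dy * img[i0 + 1, j0 + 1])

def resample_virtual_image(real_img, shape, virt_to_real):
    """Steps S401-S405: traverse every virtual pixel, map it to real-image
    pixel coordinates via `virt_to_real`, and sample the real image."""
    virt = np.zeros(shape)
    for yp in range(shape[0]):
        for xp in range(shape[1]):
            xr, yr = virt_to_real(xp, yp)                 # S401-S403
            virt[yp, xp] = bilinear_sample(real_img, xr, yr)  # S404
    return virt
```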
By constructing the virtual camera, the invention simulates the image that would be formed under ideal pinhole imaging, eliminating the image deformation caused by the camera's imaging parameter errors and the image point offsets caused by elevation errors of the projection reference surface.
The invention improves the positioning accuracy of the geometric positioning model, which facilitates accurate extraction of the motion trajectories of moving targets.
The invention also makes it convenient to feed video images into a three-dimensional GIS platform for projected display: the projected image and the map base map coincide closely, facilitating seamless stitched display of multiple channels of surveillance video.
In an alternative embodiment, in S1, the camera imaging geometric model establishes a one-to-one correspondence between the object space coordinates of a ground point and its image space coordinates, and comprises two calculation processes: a model forward calculation and a model back calculation;
the model forward calculation computes object coordinates from image coordinates; formula A expresses this forward calculation, in which the scale factor m is unknown, so the ground coordinates (X, Y, Z) are solved iteratively as follows:
s1: compute initial geodetic coordinates for the ground point: assume its initial elevation is zero, i.e. h0 = 0, combine formula A with the earth ellipsoid formula, and compute the initial geodetic coordinates (X0, Y0, Z0) corresponding to the image point coordinates (x, y);
s2: convert the geodetic coordinates (X0, Y0, Z0) into the longitude-latitude-elevation representation (lon0, lat0, h0); read the actual elevation value h' from the ground elevation data; the difference between h' and the initial elevation h0 is dh = |h' - h0|;
s3: take the elevation of the ground point as h' and recompute the geodetic coordinates (X, Y, Z) corresponding to the image point (x, y);
s4: repeat s1-s3 iteratively until dh falls below a preset tolerance, then stop and take the final geodetic coordinates (X, Y, Z);
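The iterative elevation refinement of steps s1 to s4 can be sketched as below. The two callables are illustrative stand-ins: `ray_ground_at_h` hides the intersection of the image ray (formula A plus the ellipsoid formula) with a surface of constant elevation, and `elevation_of` hides the ground elevation data:

```python
def forward_locate(ray_ground_at_h, elevation_of, tol=0.01, max_iter=20):
    """Model forward calculation (steps s1-s4): intersect the image ray
    with the terrain by iterating on the elevation.
    ray_ground_at_h(h) -> (X, Y) where the ray crosses elevation h;
    elevation_of(X, Y) -> terrain elevation at that ground position."""
    h = 0.0                            # s1: start from zero elevation
    for _ in range(max_iter):
        X, Y = ray_ground_at_h(h)      # intersect the ray with plane h
        h_true = elevation_of(X, Y)    # s2: read the terrain elevation h'
        dh = abs(h_true - h)           # s2: elevation discrepancy dh
        h = h_true                     # s3: recompute at the new elevation
        if dh < tol:                   # s4: stop once dh is within tolerance
            break
    return X, Y, h
```

For terrain that varies smoothly under the ray, each pass shrinks dh, so the loop converges in a handful of iterations.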
the model back calculation computes the image point coordinates corresponding to the geodetic coordinates of a given point, i.e. maps from object space to image space, as follows:
s21: establish an initial mapping between ground point coordinates and image point coordinates, and make an initial prediction of the image point coordinates corresponding to the given ground point coordinates;
s22: run the predicted image point coordinates through the model forward calculation to obtain the corresponding ground point coordinates, compare them with the actual ground point coordinates, and update the parameters of the prediction model accordingly; iterate until the deviation between the forward-calculated and the actual ground point coordinates falls below a preset tolerance, then take the resulting image point coordinates as final; the initial prediction of the image point coordinates is obtained by solving a homography transformation matrix.
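A minimal sketch of the back calculation (s21, s22) follows. It simplifies the patent's scheme: instead of re-fitting the prediction model each pass, the fixed homography H is re-aimed at the target shifted by the ground residual. All names are hypothetical, and `forward(x, y)` stands in for the model forward calculation:

```python
import numpy as np

def back_project(ground_xy, H, forward, tol=0.5, max_iter=10):
    """Model back calculation (sketch): predict the image point for a
    ground point with homography H (s21), then correct the prediction
    until the forward model reproduces the ground point (s22)."""
    X, Y = ground_xy
    # s21: initial prediction of the image point via the homography
    p = H @ np.array([X, Y, 1.0])
    x, y = p[0] / p[2], p[1] / p[2]
    for _ in range(max_iter):
        Xc, Yc = forward(x, y)            # s22: forward-calculate ground
        dX, dY = X - Xc, Y - Yc           # deviation from the target
        if np.hypot(dX, dY) < tol:        # stop below the tolerance
            break
        # Correct: aim the homography at the target shifted by the residual.
        q = H @ np.array([X + dX, Y + dY, 1.0])
        x, y = q[0] / q[2], q[1] / q[2]
    return x, y
```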
In an alternative embodiment, the number of pixels of the virtual camera image may be determined from the virtual camera model constructed in S3: the pixel count in the width direction is the image sensor width divided by the virtual detector element size, rounded, and the pixel count in the height direction is the image sensor height divided by the virtual detector element size, rounded. The grey value of each virtual image pixel is obtained by resampling the grey value of the corresponding real image pixel, and the coordinate correspondence between the virtual image and the real image is computed from the positioning models of the two images;
in S3, the geometric imaging model of the virtual camera is an pinhole imaging model in an ideal state, no lens distortion exists, and the imaging range and resolution of the virtual camera are substantially the same as those of the real camera; because the GIS platform can only configure the transverse visual angle of the camera, the length and the width of the imaging probe element of the virtual camera are the same, and the external orientation element of the virtual camera is equal to the external orientation element of the real camera, namely the projection center coordinate and the attitude angle of the virtual camera are consistent with those of the real camera; the focal length of the virtual camera is equal to the focal length of the real camera; the size of an image sensor of a focal plane of the virtual camera is the same as that of an image sensor of a real camera, namely the total area of the area array imaging probe element is the same; the imaging probe size of the virtual camera is equal to the average of the length and width of the imaging probe of the real camera.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (3)

1. A video image correction method based on a geometric positioning model is characterized by comprising the following steps:
S1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the camera's initial external orientation element parameters and its internal orientation element parameters; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
S102: constructing the geometric positioning model from the camera's initial internal and external orientation elements, as shown in formula A:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW}\,R_U \begin{bmatrix} x - x_0 + \Delta x \\ y - y_0 + \Delta y \\ -f \end{bmatrix} \qquad \text{(A)}$$

In formula A, x and y are the image point coordinates; Δx and Δy are the distortion corrections of the internal orientation elements; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; RU is an offset matrix that absorbs the mounting error of the camera load; m is a scale factor;
S2: improving the positioning accuracy of the geometric positioning model through calibration;
S201: calculating the camera's compensation parameters from ground control point data and recovering the correct imaging parameters; the control points are points with known ground coordinates, and for each control point both its ground coordinates (X, Y, Z) and its image point coordinates (x, y) on the image are obtained;
S202: solving for the camera's position compensation parameters, attitude compensation parameters, lens distortion parameters and detector element size with the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve its positioning accuracy;
S3: constructing a geometric positioning model of a virtual camera, the construction of the virtual camera eliminating the camera imaging parameter errors; the geometric imaging model of the virtual camera has the specific form shown in formula B:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + m\,R_{CW} \begin{bmatrix} x - x_0 \\ y - y_0 \\ -f \end{bmatrix} \qquad \text{(B)}$$

In formula B, x and y are the image point coordinates; f is the principal distance of the camera; x0 and y0 are the principal point coordinates; X, Y, Z are the geodetic coordinates of the ground point; XC, YC, ZC are the geodetic coordinates of the camera's projection center; RCW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor.
S4: resampling the real image with the virtual camera model to generate the virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (xP, yP) of a virtual image pixel and converting them into the image point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model for the virtual camera's geometric positioning model, which can be a chosen reference plane or the real terrain surface; substituting the image point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image point coordinates (x', y') on the real image, and converting them into the real image pixel coordinates (xP', yP');
S404: obtaining the grey value at the real image pixel coordinates (xP', yP') by bilinear interpolation and assigning it to the virtual image pixel (xP, yP);
S405: traversing all pixels of the virtual image, repeating S401-S404, to generate the final virtual camera image.
2. The method for correcting video image according to claim 1, wherein in S1, the camera images a geometric model for establishing a one-to-one correspondence relationship between coordinates of the ground object point in the object space and coordinates of the ground object point in the image space, which comprises two calculation processes: a model forward calculation process and a model backward calculation process;
a model forward calculation process for calculating object-space coordinates from image coordinates; formula A is the forward calculation process of the imaging geometric model, in which the scale factor m is an unknown quantity, so the ground point coordinates (X, Y, Z) are solved iteratively as follows:
S1: calculating the initial geodetic coordinates of the ground object point, assuming its initial elevation is zero, i.e. h0 = 0; combining formula A with the Earth ellipsoid formula, the initial geodetic coordinates (X0, Y0, Z0) corresponding to the image point coordinates (x, y) are calculated;
S2: converting the geodetic coordinates (X0, Y0, Z0) into the longitude-latitude representation (lon0, lat0, h0); reading the actual elevation value h' from the ground elevation data, the difference between h' and the initial elevation h0 being dh = |h' - h0|;
S3: taking the elevation of the ground object point as h', and recalculating the geodetic coordinates (X, Y, Z) corresponding to the image point (x, y);
S4: repeating steps S1-S3 iteratively until dh is smaller than a preset tolerance, then stopping the iteration to obtain the final geodetic coordinates (X, Y, Z);
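The elevation iteration S1-S4 can be sketched as below. `intersect_at_height` (formula A intersected with the Earth ellipsoid at a fixed elevation) and `dem_height` (lookup into the ground elevation data) are hypothetical placeholder names, not the patent's functions.

```python
def forward_solve(x, y, intersect_at_height, dem_height, tol=0.01, max_iter=50):
    """Iteratively solve the ground coordinates (X, Y, Z) for image point (x, y).

    intersect_at_height(x, y, h) -> (X, Y, Z, lon, lat)   placeholder for
        formula A combined with the Earth ellipsoid at elevation h
    dem_height(lon, lat) -> h'                            placeholder DEM lookup
    """
    h = 0.0                                    # S1: initial elevation h0 = 0
    X = Y = Z = 0.0
    for _ in range(max_iter):
        X, Y, Z, lon, lat = intersect_at_height(x, y, h)
        h_true = dem_height(lon, lat)          # S2: read actual elevation h'
        dh = abs(h_true - h)                   # S2: dh = |h' - h0|
        h = h_true                             # S3: recompute with h = h'
        if dh < tol:                           # S4: stop when dh < tolerance
            break
    return X, Y, Z
```

A synthetic model where the ground X shifts by the elevation and the DEM is flat at 5 m converges in two iterations.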
a model backward calculation process for calculating the corresponding image point coordinates from the given geodetic coordinates, i.e. from object space to image space, comprising the following steps:
S21: establishing an initial mapping relation between ground object point coordinates and image point coordinates, and making an initial prediction of the image point coordinates corresponding to the ground object point coordinates;
S22: calculating the ground object point coordinates corresponding to the predicted image point coordinates through the model forward calculation, comparing them with the actual ground object coordinates, and updating the parameters of the prediction model accordingly; the iteration stops when the deviation between the ground object point coordinates calculated from the predicted image point coordinates and the actual ground object point coordinates is smaller than a preset tolerance, yielding the image point coordinates corresponding to the final ground object point coordinates; here the image point coordinates are predicted by solving a homography transformation matrix.
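One way to realize S21-S22 is sketched below: a homography supplies the initial prediction, and the prediction is refined against the forward model. The finite-difference Jacobian update is my own illustrative choice, not stated in the patent; `forward` is a placeholder for the model forward calculation restricted to planimetric coordinates.

```python
import numpy as np

def backward_solve(Xg, Yg, forward, H, tol=1e-6, max_iter=20):
    """Predict the image point for ground point (Xg, Yg), then refine it.

    forward(x, y) -> (X, Y)   placeholder model forward calculation
    H             3x3 homography used for the initial prediction (S21)
    """
    # S21: initial prediction via the homography transformation matrix
    p = H @ np.array([Xg, Yg, 1.0])
    x, y = p[0] / p[2], p[1] / p[2]
    for _ in range(max_iter):                  # S22: iterative refinement
        X, Y = forward(x, y)
        err = np.array([Xg - X, Yg - Y])
        if np.hypot(*err) < tol:
            break
        # local Jacobian of the forward model by finite differences
        eps = 1e-3
        Xx, Yx = forward(x + eps, y)
        Xy, Yy = forward(x, y + eps)
        J = np.array([[(Xx - X) / eps, (Xy - X) / eps],
                      [(Yx - Y) / eps, (Yy - Y) / eps]])
        dx, dy = np.linalg.solve(J, err)
        x, y = x + dx, y + dy
    return x, y
```

For an affine forward model the update is exact, so even a poor initial homography (identity) converges in one correction step.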
3. The video image correction method based on a geometric positioning model according to claim 1, wherein in S3 the imaging geometric model of the virtual camera is an ideal pinhole model with no lens distortion, and the imaging range and resolution of the virtual camera substantially coincide with those of the real camera; because the GIS platform can only configure the horizontal view angle of the camera, the imaging detector elements of the virtual camera have equal length and width; the exterior orientation elements of the virtual camera are equal to those of the real camera, i.e. the projection center coordinates and attitude angles of the virtual camera are consistent with those of the real camera; the focal length of the virtual camera is equal to that of the real camera; the focal-plane image sensor of the virtual camera is the same size as that of the real camera, i.e. the total area of the area-array detector is the same; and the detector element size of the virtual camera is equal to the average of the length and width of the real camera's detector element.
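The interior-geometry rules of claim 3 can be sketched as a small derivation, assuming only the detector parameters below; the `Camera` structure and field names are hypothetical, and the exterior orientation elements (projection center, attitude angles) are simply copied from the real camera elsewhere.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    focal_mm: float      # principal distance (focal length)
    pix_w_um: float      # detector element width
    pix_h_um: float      # detector element height
    cols: int            # detector elements per row
    rows: int            # detector elements per column

def make_virtual_camera(real: Camera) -> Camera:
    """Derive the virtual pinhole camera's interior geometry per claim 3."""
    # square detector element: mean of the real element's length and width
    pix = 0.5 * (real.pix_w_um + real.pix_h_um)
    # keep the focal-plane size (hence total area) of the real sensor
    sensor_w = real.cols * real.pix_w_um
    sensor_h = real.rows * real.pix_h_um
    return Camera(
        focal_mm=real.focal_mm,              # same focal length
        pix_w_um=pix, pix_h_um=pix,          # equal length and width
        cols=round(sensor_w / pix),
        rows=round(sensor_h / pix),
    )
```

For example, a real camera with 4 um x 6 um elements on a 1000 x 800 array yields a virtual camera with square 5 um elements on an 800 x 960 array, preserving the 4 mm x 4.8 mm focal plane.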
CN201911338034.0A 2019-12-23 2019-12-23 Video image correction method based on geometric positioning model Active CN111127564B (en)

Publications (2)

Publication Number Publication Date
CN111127564A true CN111127564A (en) 2020-05-08
CN111127564B CN111127564B (en) 2023-02-28

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947653A (en) * 2021-09-27 2022-01-18 四川大学 Simulation method of real texture hair
CN117934346A (en) * 2024-03-21 2024-04-26 安徽大学 Geometric processing method of airborne linear array hyperspectral remote sensing image without stable platform

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103345737A (en) * 2013-06-04 2013-10-09 北京航空航天大学 UAV high resolution image geometric correction method based on error compensation
EP2966863A1 (en) * 2014-07-10 2016-01-13 Seiko Epson Corporation Hmd calibration with direct geometric modeling
CN107144293A (en) * 2017-04-07 2017-09-08 武汉大学 A kind of geometric calibration method of video satellite area array cameras
CN110211054A (en) * 2019-04-28 2019-09-06 张过 A kind of undistorted making video method of spaceborne push-broom type optical sensor

Non-Patent Citations (1)

Title
HU ZHENLONG et al.: "On-orbit geometric calibration of the 'Tianhui-1' satellite based on a digital calibration field", Spacecraft Recovery & Remote Sensing *

Similar Documents

Publication Publication Date Title
CN106403902B (en) A kind of optical satellite in-orbit real-time geometry location method and system cooperateed with to star
CN109115186B (en) 360-degree measurable panoramic image generation method for vehicle-mounted mobile measurement system
CN104897175B (en) Polyphaser optics, which is pushed away, sweeps the in-orbit geometric calibration method and system of satellite
JP6211157B1 (en) Calibration apparatus and calibration method
CN107644435B (en) Attitude correction-considered agile optical satellite field-free geometric calibration method and system
KR101346323B1 (en) Method for self-calibration of non-metric digital camera using ground control point and additional parameter
CN107014399B (en) Combined calibration method for satellite-borne optical camera-laser range finder combined system
KR20190026452A (en) A method of automatic geometric correction of digital elevation model made from satellite images and provided rpc
CN107144293A (en) A kind of geometric calibration method of video satellite area array cameras
CN106709944B (en) Satellite remote sensing image registration method
CN110555813B (en) Rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
CN110736447B (en) Vertical-direction horizontal position calibration method for integrated image acquisition equipment
US20120063668A1 (en) Spatial accuracy assessment of digital mapping imagery
CN111127564B (en) Video image correction method based on geometric positioning model
JP2002156229A (en) Mobile displacement measuring method and device for structure
CN107967700A (en) The in-orbit geometric correction of the wide working distance binocular camera of big visual field and precision test method
CN110853140A (en) DEM (digital elevation model) -assisted optical video satellite image stabilization method
KR101346192B1 (en) Aviation surveying system for correction realtime of aviation image
CN110986888A (en) Aerial photography integrated method
CN113947638A (en) Image orthorectification method for fisheye camera
CN115311366A (en) RPC model-based geometric calibration method and system for space-borne segmented linear array sensor
KR100520275B1 (en) Method for correcting geometry of pushbroom image using solidbody rotation model
CN108955642B (en) Large-breadth equivalent center projection image seamless splicing method
KR101346206B1 (en) Aviation surveying system for processing the aviation image in gps
CN111402315A (en) Three-dimensional distance measuring method for adaptively adjusting base line of binocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant