CN111667413A - Image despinning method and system based on multi-source sensing data fusion processing - Google Patents
- Publication number
- CN111667413A (application number CN202010461520.8A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- pixel
- axis
- image
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
- G06T3/608—Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
Abstract
The invention provides an image despinning method and system based on multi-source sensing data fusion processing. The method obtains the course angle and pitch angle of the rotary table in real time through an encoder inside the rotary table, obtains the inclination angle in the roll direction in real time through the roll gyro of the rotary table azimuth frame, and acquires the attitude angles of the geographic coordinate system, including roll, course and pitch, in real time through an inertial navigation system. Four coordinate systems are established, namely the geographic, camera, imaging and pixel coordinate systems, and the geographic coordinate system is converted into the pixel coordinate system by deriving the transformations layer by layer. The angle information is thereby converted into coordinate values in image pixel coordinates, and an image processing algorithm compensates the pixel deviation in each direction, realizing image despinning with multi-source sensing data fusion processing.
Description
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to an image despinning method and system based on multi-source sensing data fusion processing.
Background
In the tracking and shooting process of a vehicle-mounted photoelectric platform, attitude changes of the carrier and relative motion of the target cause relative motion in azimuth, pitch and other directions of the platform's visual axis, so the captured image rotates, which troubles the observer and hinders target observation. To overcome this effect, the rotated image must be despun. With the rapid development of computer image processing technology, image despinning has been widely applied in fields such as shells, missiles, aerospace, remote sensing and telemetry, scientific detection, and image monitoring. The traditional optical despinning system suffers from high processing difficulty, high power consumption, low angular resolution and large volume. Electronic despinning algorithms make up for these defects and have attracted wide attention as an effective despinning method; their greatest advantage is improving the accuracy and speed of image despinning.
Disclosure of Invention
The purpose of the invention is as follows: to solve the technical problems in the background art, the invention provides an image despinning method based on multi-source sensing data fusion processing (multi-source sensing here means that the three-dimensional angle information at the acquisition moment is obtained together with the video), which can realize image despinning in complex scenes with different course angles, pitch angles, roll angles and so on. The method comprises the following steps:
step 1: acquiring real-time image data, acquiring a course angle and a pitch angle of the rotary table in real time through an encoder inside the rotary table, and acquiring an inclination angle in a rolling direction in real time through a rolling gyroscope of a rotary table azimuth frame; acquiring attitude angles of a geographic coordinate system in real time through an inertial navigation system, wherein the attitude angles comprise a roll angle, a course angle and a depression angle;
The rotary table, the gyro of the rotary table azimuth frame, and the inertial navigation system mentioned here are provided by an optoelectronic system, a product of a special equipment company.
Step 2: establishing a camera coordinate system, an imaging coordinate system and a pixel coordinate system;
Step 3: converting the geographic coordinate system into a final pixel coordinate system;
Step 4: converting the angle information into coordinate values of image pixel coordinates;
Step 5: compensating the pixel deviation in each direction with an image processing algorithm to realize image despinning with multi-source sensing data fusion processing.
In step 1, the geographic coordinate system is the three-dimensional space coordinate system in which the ground object target is located. The ground is selected as the Z = 0 plane, the X and Y coordinate axes may be chosen at any position within the Z = 0 plane, and the axes satisfy the right-hand coordinate system.
The step 2 comprises the following steps:
the camera coordinate system is a three-dimensional coordinate system, the optical center of the camera is selected as the origin, the optical axis of the camera is the Z axis, and the X axis and the Y axis of the camera coordinate system are respectively parallel to the width and the height of the area array detector; the area array detector refers to a plane sensor inside the camera;
the imaging coordinate system is a two-dimensional coordinate system, the intersection point of the optical axis and the image plane is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the imaging coordinate system represents the absolute coordinate of the projection of the three-dimensional point to the image plane;
the pixel coordinate system is a two-dimensional coordinate system, the upper left corner of the focal plane of the image is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the coordinate values on the X axis and the Y axis are normalized to take the width and the height of a single pixel as a unit.
The step 3 comprises the following steps:
Step 3-1, converting the geographic coordinate system into the camera coordinate system: let the coordinates of the geographic coordinate system center O in the camera coordinate system be (x0, y0, z0); then the translation vector from the geographic coordinate system origin to the camera coordinate system origin is T = [x0, y0, z0]. The camera coordinate system is obtained from the geographic coordinate system by rotating by the matrix R and translating by T. The coordinates (X, Y, Z) of a three-dimensional point in the camera coordinate system and its coordinates in the geographic coordinate system satisfy:
wherein C is a point in the camera coordinate system;
R represents the rotation matrix of the spatial transformation from the geographic coordinate system to the camera coordinate system; R(YZ) corresponds to the roll angle, R(XY) to the course angle, and R(XZ) to the pitch angle:
R = R(YZ)R(XZ)R(XY), where α, β and γ respectively represent the roll angle, course angle and pitch angle;
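As a concrete illustration, the composite rotation R = R(YZ)R(XZ)R(XY) can be sketched in Python. The exact axis conventions and sign choices below are assumptions for illustration, since the patent's displayed matrices are not reproduced in the text:

```python
import numpy as np

def rotation_matrix(alpha, gamma, beta):
    """Compose R = R_YZ(alpha) @ R_XZ(gamma) @ R_XY(beta):
    roll in the YZ plane, pitch in the XZ plane, course in the
    XY plane. Angles in radians; conventions are illustrative."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    R_yz = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # roll
    R_xz = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])   # pitch
    R_xy = np.array([[cb, -sb, 0], [sb, cb, 0], [0, 0, 1]])   # course
    return R_yz @ R_xz @ R_xy
```

Whatever the convention, the composite remains a proper rotation (orthogonal, determinant 1), which is what the coordinate-system derivation relies on.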
Step 3-2, converting the camera coordinate system into the imaging coordinate system: for a three-dimensional point M in the camera coordinate system and its imaging point m, the projection process is expressed as:
P=NC,
wherein f represents the image distance, namely the distance between the image plane and the lens; the coordinates of the three-dimensional point M are (X, Y, Z), the coordinates of the imaging point m are (xp, yp), N is a 3 × 3 matrix, and P is the point converted from the camera coordinate system to the imaging coordinate system;
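A minimal sketch of the projection P = NC under the pinhole model, assuming N carries the image distance f on its diagonal (the patent's displayed matrix is not reproduced in the text, so this form is a reconstruction):

```python
import numpy as np

def project(C, f):
    """Project a camera-frame point C = (X, Y, Z) to the image
    plane. N encodes the pinhole model with image distance f;
    dividing by the homogeneous coordinate gives
    x_p = f*X/Z, y_p = f*Y/Z."""
    N = np.array([[f,   0.0, 0.0],
                  [0.0, f,   0.0],
                  [0.0, 0.0, 1.0]])
    P = N @ np.asarray(C, dtype=float)
    return P[:2] / P[2]
```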
Step 3-3, converting the imaging coordinate system into the pixel coordinate system: the coordinate values on the X axis and Y axis of the imaging coordinate system are normalized to units of pixel size, namely the X-axis and Y-axis coordinate values are divided by the width and height of a single pixel respectively; the center of the imaging coordinate system is then translated to the origin of the pixel coordinate system. Let the pixel coordinates of the imaging coordinate system center be (cx, cy, 1) and the pixel width and height be px and py respectively; the matrix M represents the transformation of the image from the imaging coordinate system to the pixel coordinate system:
wherein, cx and cy are respectively a horizontal coordinate and a vertical coordinate of a pixel where the center of the imaging coordinate system is located;
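The imaging-to-pixel step can be sketched with a homogeneous 3 × 3 matrix M built from (cx, cy) and the pixel pitch (px, py). The matrix layout below is an assumption consistent with the description, since the patent's displayed M is not in the text:

```python
import numpy as np

def imaging_to_pixel(xp, yp, cx, cy, px, py):
    """Map an imaging-plane point (xp, yp) to pixel coordinates:
    divide by the pixel width/height (px, py), then translate the
    imaging-center to pixel (cx, cy)."""
    M = np.array([[1.0 / px, 0.0,      cx],
                  [0.0,      1.0 / py, cy],
                  [0.0,      0.0,      1.0]])
    u, v, _ = M @ np.array([xp, yp, 1.0])
    return u, v
```

The imaging-plane origin (0, 0) maps to the image center (cx, cy), as the definition of the pixel coordinate system requires.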
step 3-4, converting the geographic coordinate system into a pixel coordinate system by the following formula:
wherein XW, YW, ZW respectively represent coordinate values of an X axis, a Y axis, and a Z axis in the geographic coordinate system.
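Collecting steps 3-1 through 3-3, the chain from geographic to pixel coordinates can be written out in one place. The patent's displayed formulas are not reproduced in the text, so the following is a reconstruction consistent with the surrounding definitions (R, T, f, px, py, cx, cy as defined above):

```latex
\begin{aligned}
C &= R\,W + T, && W = (X_W,\,Y_W,\,Z_W)^{\top} \ \text{(geographic point)}\\
P &= N\,C, && N = \begin{pmatrix} f & 0 & 0\\ 0 & f & 0\\ 0 & 0 & 1 \end{pmatrix},\qquad
(x_p,\,y_p) = \left(\tfrac{P_1}{P_3},\,\tfrac{P_2}{P_3}\right)\\
\begin{pmatrix} u\\ v\\ 1 \end{pmatrix} &= M \begin{pmatrix} x_p\\ y_p\\ 1 \end{pmatrix},
&& M = \begin{pmatrix} 1/p_x & 0 & c_x\\ 0 & 1/p_y & c_y\\ 0 & 0 & 1 \end{pmatrix}
\end{aligned}
```

Here (u, v) are the final pixel coordinates of the geographic point W.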
Step 4 comprises the following: compensating the roll angle (3° in this embodiment) in the camera coordinate system, and converting the camera coordinate system into the pixel coordinate system by the method in step 3;
in the pixel coordinate system, the following calculation is made:
when the course angle deviates by 0.09°, the number of pixels actually moved in the pixel coordinate system is 1920/18.9 × 0.09 ≈ 9 pixels;
when the pitch angle deviates by 0.08°, the number of pixels actually moved in the pixel coordinate system is 1080/10.65 × 0.08 ≈ 8 pixels.
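The two offset calculations above divide the frame extent in pixels by the field of view in degrees and multiply by the angular offset; the 1920 × 1080 frame and 18.90° × 10.65° field of view are taken from the embodiment:

```python
def angle_to_pixels(extent_px, fov_deg, offset_deg):
    """Pixels-per-degree times the angular offset, rounded to the
    nearest whole pixel (small-angle approximation)."""
    return round(extent_px / fov_deg * offset_deg)

dx = angle_to_pixels(1920, 18.9, 0.09)   # course-angle offset
dy = angle_to_pixels(1080, 10.65, 0.08)  # pitch-angle offset
```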
In step 5, the image processing algorithm operates as follows: the course-angle offset of 9 pixels and the pitch-direction offset of 8 pixels calculated in step 4 are compensated in each direction, 9 pixels in the horizontal direction and 8 pixels in the vertical direction, to obtain the final despinning result.
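The compensation in step 5 amounts to an integer-pixel translation of the frame. A minimal sketch, assuming zero fill at the exposed border (the patent does not specify the border handling):

```python
import numpy as np

def compensate(image, dx, dy):
    """Translate a 2-D image by dx pixels horizontally and dy
    pixels vertically; border pixels exposed by the shift are
    zero-filled."""
    out = np.zeros_like(image)
    h, w = image.shape[:2]
    # Copy the overlapping region, shifted by (dy, dx)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out
```

With dx = 9 and dy = 8 this applies exactly the horizontal and vertical corrections computed in step 4 to a 1920 × 1080 frame.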
The invention also provides an image despinning system based on multi-source sensing data fusion processing, which is characterized by comprising an acquisition module, a preprocessing module, a coordinate system construction module, a coordinate system conversion module, an image despinning module and an imaging module;
the acquisition module is used for acquiring real-time image data, acquiring a course angle and a pitch angle of the rotary table in real time through an encoder inside the rotary table, and acquiring an inclination angle in the roll direction in real time through a roll gyro of the rotary table azimuth frame; and for acquiring attitude angles of the geographic coordinate system in real time through an inertial navigation system, wherein the attitude angles comprise a roll angle, a course angle and a pitch angle;
the preprocessing module is used for preprocessing the acquired real-time image data, including noise reduction, contrast enhancement and brightness enhancement;
The preprocessing is applied as the situation requires: if the image is noisy, smoothing is performed, and if the image contrast is too low, contrast stretching is performed.
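The conditional preprocessing could be sketched as follows; the 3 × 3 box filter and min-max stretch are illustrative choices, not taken from the patent:

```python
import numpy as np

def preprocess(image, smooth=False, stretch=False):
    """Optional preprocessing: 3x3 box smoothing when the image is
    noisy, min-max contrast stretching to [0, 255] when contrast
    is low. Expects a 2-D grayscale array."""
    img = image.astype(float)
    if smooth:
        # Edge-pad, then average each 3x3 neighborhood
        p = np.pad(img, 1, mode='edge')
        h, w = img.shape
        img = sum(p[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    if stretch:
        lo, hi = img.min(), img.max()
        if hi > lo:
            img = (img - lo) / (hi - lo) * 255.0
    return img
```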
The coordinate system building module is used for building a camera coordinate system, an imaging coordinate system and a pixel coordinate system;
the coordinate system conversion module is used for converting the geographic coordinate system into a final pixel coordinate system and converting the angle information into coordinate values of image pixel coordinates;
the image despinning module is used for compensating the pixel deviation in each direction by using an image processing algorithm so as to realize image despinning of multi-source sensing data fusion processing;
and the imaging module is used for presenting the image result after the despinning to a user.
The geographic coordinate system, namely the three-dimensional space coordinate system in which the ground object target is located, takes the ground as the Z = 0 plane; the X and Y coordinate axes may be chosen at any position within the Z = 0 plane and satisfy the right-hand coordinate system.
The coordinate system building module is used for building a camera coordinate system, an imaging coordinate system and a pixel coordinate system, and specifically comprises the following steps: the camera coordinate system is a three-dimensional coordinate system, the optical center of the camera is selected as the origin, the optical axis of the camera is the Z axis, and the X axis and the Y axis of the camera coordinate system are respectively parallel to the width and the height of the area array detector;
the area array detector refers to a plane sensor inside a camera.
The imaging coordinate system is a two-dimensional coordinate system, the intersection point of the optical axis and the image plane is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the imaging coordinate system represents the absolute coordinate of the projection of the three-dimensional point to the image plane;
the pixel coordinate system is a two-dimensional coordinate system, the upper left corner of the focal plane of the image is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the coordinate values on the X axis and the Y axis are normalized to take the width and the height of a single pixel as a unit.
The coordinate system conversion module is used for converting the geographic coordinate system into a final pixel coordinate system and converting the angle information into coordinate values of image pixel coordinates, and specifically comprises:
Step 3-1, converting the geographic coordinate system into the camera coordinate system: let the coordinates of the geographic coordinate system center O in the camera coordinate system be (x0, y0, z0); then the translation vector from the camera coordinate system origin to the geographic coordinate system origin is T = [x0, y0, z0]. The geographic coordinate system is obtained from the camera coordinate system by rotating by the matrix R and translating by T. The coordinates (X, Y, Z) of a three-dimensional point in the camera coordinate system and its coordinates in the geographic coordinate system satisfy:
wherein C is a point in the camera coordinate system;
R represents the rotation matrix of the spatial transformation from the geographic coordinate system to the camera coordinate system; R(YZ) corresponds to the roll angle, R(XY) to the course angle, and R(XZ) to the pitch angle:
R = R(YZ)R(XZ)R(XY), where α, β and γ respectively represent the roll angle, course angle and pitch angle.
Step 3-2, converting the camera coordinate system into an imaging coordinate system: for a three-dimensional point M and an imaging point M in the camera coordinate system, the projection process is expressed as:
P=NC,
where f represents the image distance, i.e. the distance between the image plane and the lens; the coordinates of the three-dimensional point M are (X, Y, Z), the coordinates of the imaging point m are (xp, yp), N is a 3 × 3 matrix, and P is the point converted from the camera coordinate system to the imaging coordinate system;
Step 3-3, converting the imaging coordinate system into the pixel coordinate system: the coordinate values on the X axis and Y axis of the imaging coordinate system are normalized to units of pixel size, namely the X-axis and Y-axis coordinate values are divided by the width and height of a single pixel respectively; the center of the imaging coordinate system is then translated to the origin of the pixel coordinate system. Let the pixel coordinates of the imaging coordinate system center be (cx, cy, 1) and the pixel width and height be px and py respectively; the matrix M represents the transformation of the image from the imaging coordinate system to the pixel coordinate system:
wherein, cx and cy are respectively a horizontal coordinate and a vertical coordinate of a pixel where the center of the imaging coordinate system is located;
step 3-4, converting the geographic coordinate system into a pixel coordinate system by the following formula:
wherein XW, YW, ZW respectively represent coordinate values of an X axis, a Y axis, and a Z axis in the geographic coordinate system.
The roll angle (3° in this embodiment) is compensated in the camera coordinate system, and the camera coordinate system is converted into the pixel coordinate system by the method of step 3;
in the pixel coordinate system, the following calculation is made:
when the course angle deviates by 0.09°, the number of pixels actually moved in the pixel coordinate system is 1920/18.9 × 0.09 ≈ 9 pixels;
when the pitch angle deviates by 0.08°, the number of pixels actually moved in the pixel coordinate system is 1080/10.65 × 0.08 ≈ 8 pixels.
The image despinning module compensates the pixel deviation in each direction with an image processing algorithm to realize image despinning with multi-source sensing data fusion processing, specifically: the calculated course-angle offset of 9 pixels and pitch-direction offset of 8 pixels are compensated in each direction, 9 pixels in the horizontal direction and 8 pixels in the vertical direction, to obtain the final despinning result.
Beneficial effects: the image despinning method based on multi-source sensing data fusion processing can solve the image despinning problem in complex environments. Compared with traditional despinning methods, it achieves a good despinning effect, realizes image despinning with multi-source sensing data fusion through the conversion relations among four different coordinate systems, and meets practical use requirements.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart for converting a geographic coordinate system to a pixel coordinate system.
Fig. 3 is a schematic diagram of the conversion of a geographic coordinate system to a camera coordinate system.
Fig. 4 is a schematic diagram of perspective transformation of a camera coordinate system into an imaging coordinate system.
Fig. 5 is a schematic diagram of the conversion of the imaging coordinate system to the pixel coordinate system.
FIG. 6 is a graph showing the effect of image despinning using the method of the present invention.
Detailed Description
Examples
The conditions of this example are as follows: a 1080P30 camera is selected, with a field of view of 18.90° × 10.65°. In a suburb with clear weather, the course angle and pitch angle of the rotary table are obtained in real time through the encoder inside the rotary table; meanwhile, the inclination angle in the roll direction is obtained in real time through the roll gyro of the rotary table azimuth frame, and the attitude angles of the geographic coordinate system, including roll, course and pitch, are acquired in real time through an inertial navigation system. The course angle deviates by 0.09°, the pitch angle deviates by 0.08°, and the roll angle is 3°. Four coordinate-system transformations are performed on the acquired data to obtain the final despun image. Note: the gyro sensor used in this experiment is a model TL740D sensor from Resifen.
The method comprises the following specific steps:
step 1: and a course angle and a pitch angle of the rotary table are obtained in real time through an encoder inside the rotary table. Meanwhile, the inclination angle in the rolling direction is obtained in real time through the rolling gyro of the rotary table azimuth frame. And acquiring attitude angles including roll, course and pitch of the geographic coordinate system in real time through an inertial navigation system. The geographic coordinate system is a three-dimensional space coordinate system where the ground object target is located. The invention selects the ground as the plane Z-0. X, Y the coordinate axis direction can be selected from the coordinate axes of any position on the plane where Z is 0, and satisfies the right-hand coordinate system.
Step 2: establishing a camera coordinate system, an imaging coordinate system and a pixel coordinate system;
the camera coordinate system is a three-dimensional coordinate system. The invention selects the optical center of the camera as the origin, the optical axis of the camera as the Z axis, and the X axis and the Y axis of the camera coordinate system are respectively parallel to the width and the height of the area array detector.
The imaging coordinate system is a two-dimensional coordinate system with the intersection of the optical axis and the image plane as the origin (0,0), and the X axis and Y axis along the image width and height directions respectively. The imaging coordinate system represents the absolute coordinates of the projection of a three-dimensional point onto the image plane. The process of mapping three-dimensional points in the camera coordinate system to the imaging coordinate system is called perspective transformation.
The area array detector herein refers to a planar sensor inside the camera.
The pixel coordinate system is a two-dimensional coordinate system. And taking the upper left corner of the focal plane of the image as an origin (0,0), respectively taking the X axis and the Y axis along the width direction and the height direction of the image, and normalizing the coordinate values on the X axis and the Y axis into a unit of the width and the height of a single pixel.
The process of converting the geographic coordinate system to the pixel coordinate system is shown in fig. 2.
Step 3: the geographic coordinate system is converted into the final pixel coordinate system through layer-by-layer derivation.
1. The geographic coordinate system is converted into the camera coordinate system: in the figure, O-XYZ is the geographic coordinate system, S-XYZ is the camera coordinate system, and plane π1 is the image plane. Let the coordinates of the geographic coordinate system center O in the camera coordinate system be (x0, y0, z0); then the translation vector from the camera coordinate system origin to the geographic coordinate system origin is T = [x0, y0, z0]. The geographic coordinate system is derived from the camera coordinate system by rotating by R and translating by T. The coordinates (X, Y, Z) of a three-dimensional point in the camera coordinate system and its coordinates in the geographic coordinate system satisfy:
Note: R is the rotation matrix including the roll angle, pitch angle and course angle, and T is the displacement vector. C is a point in camera coordinates. The conversion of the geographic coordinate system into the camera coordinate system is shown in fig. 3.
2. The camera coordinate system is converted into the imaging coordinate system: for a three-dimensional point M (X, Y, Z) in the camera coordinate system and its imaging point m (xp, yp), the projection process is represented as
P=NC
Note: N is a 3 × 3 matrix, and P is the point converted from the camera coordinate system to the imaging coordinate system. The perspective transformation of the camera coordinate system into the imaging coordinate system is shown in FIG. 4, where plane π2 is the imaging coordinate system plane.
3. The imaging coordinate system is converted into the pixel coordinate system in two steps. First, the coordinate values on the x axis and y axis of the imaging coordinate system are normalized to units of pixel size, namely the x and y coordinates are divided by the width and height of a single pixel respectively; second, the center of the imaging coordinate system is translated to the origin of the pixel coordinate system. Let the pixel coordinates of the imaging coordinate system center be (cx, cy, 1) and the pixel width and height be px and py; the matrix M represents the transformation of the image from the imaging coordinate system to the pixel coordinate system.
Note: M is a 3 × 3 matrix. The conversion of the imaging coordinate system to the pixel coordinate system is shown in FIG. 5, where plane π3 is the pixel coordinate system plane.
Step 4: the angle information is converted into coordinate values of the image pixel coordinates.
The roll angle is 3°, so 3° must be compensated in the camera coordinate system; the camera coordinate system is then converted into the pixel coordinate system through the coordinate-system conversion formulas above.
In the pixel coordinate system, the following calculation needs to be made:
when the course angle deviates by 0.09°, the number of pixels actually moved in the pixel coordinate system is 1920/18.9 × 0.09 ≈ 9 pixels;
when the pitch angle deviates by 0.08°, the number of pixels actually moved in the pixel coordinate system is 1080/10.65 × 0.08 ≈ 8 pixels.
And 5: and performing compensation on the pixel deviation in each direction by using an image processing algorithm to realize image racemization of multi-source sensing data fusion processing.
The course angle deviation of 9 pixels and the pitch direction deviation of 8 pixels are calculated through the step 4, then compensation is performed in each direction, 9 pixels are compensated in the horizontal direction, 8 pixels are compensated in the vertical direction, and the final despinning result is obtained, as shown in fig. 6.
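The embodiment's combined correction (the 3° roll plus the 9- and 8-pixel shifts) can be sketched end to end. Nearest-neighbor inverse mapping with zero fill is an illustrative choice; the patent does not specify the resampling scheme:

```python
import numpy as np

def despin(image, roll_deg, dx, dy):
    """Despin a 2-D image: inverse-map each output pixel through a
    rotation of roll_deg about the image center, then a shift of
    (dx, dy) pixels. Nearest-neighbor sampling; pixels mapped from
    outside the frame are zero-filled."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(roll_deg)
    c, s = np.cos(t), np.sin(t)
    v, u = np.mgrid[0:h, 0:w]          # output pixel grid
    uu, vv = u - dx - cx, v - dy - cy  # undo shift, center
    src_x = np.rint(c * uu + s * vv + cx).astype(int)
    src_y = np.rint(-s * uu + c * vv + cy).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(image)
    out[valid] = image[src_y[valid], src_x[valid]]
    return out
```

For the embodiment's values the call would be `despin(frame, 3.0, 9, 8)` on a 1920 × 1080 frame.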
The present invention provides an image despinning method and system based on multi-source sensing data fusion processing; there are many methods and ways to implement this technical scheme, and the above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.
Claims (10)
1. An image despinning method based on multi-source sensing data fusion processing, characterized by comprising the following steps:
step 1: acquiring real-time image data, acquiring a course angle and a pitch angle of the rotary table in real time through an encoder inside the rotary table, and acquiring an inclination angle in a rolling direction in real time through a rolling gyroscope of a rotary table azimuth frame; acquiring attitude angles of a geographic coordinate system in real time through an inertial navigation system, wherein the attitude angles comprise a roll angle, a course angle and a depression angle;
step 2: establishing a camera coordinate system, an imaging coordinate system and a pixel coordinate system;
Step 3: converting the geographic coordinate system into a final pixel coordinate system;
Step 4: converting the angle information into coordinate values of image pixel coordinates;
Step 5: compensating the pixel deviation in each direction with an image processing algorithm to realize image despinning with multi-source sensing data fusion processing.
2. The method according to claim 1, characterized in that in step 1 the geographic coordinate system, i.e. the three-dimensional space coordinate system in which the ground object target is located, takes the ground as the Z = 0 plane; the X and Y coordinate axes are chosen at any position within the Z = 0 plane and satisfy the right-hand coordinate system.
3. The method of claim 2, wherein step 2 comprises:
the camera coordinate system is a three-dimensional coordinate system, the optical center of the camera is selected as the origin, the optical axis of the camera is the Z axis, and the X axis and the Y axis of the camera coordinate system are respectively parallel to the width and the height of the area array detector; the area array detector refers to a plane sensor inside the camera;
the imaging coordinate system is a two-dimensional coordinate system, the intersection point of the optical axis and the image plane is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the imaging coordinate system represents the absolute coordinate of the projection of the three-dimensional point to the image plane;
the pixel coordinate system is a two-dimensional coordinate system, the upper left corner of the focal plane of the image is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the coordinate values on the X axis and the Y axis are normalized to take the width and the height of a single pixel as a unit.
4. The method of claim 3, wherein step 3 comprises:
step 3-1, converting the geographic coordinate system into the camera coordinate system: let the coordinates of the geographic coordinate system origin O in the camera coordinate system be (x0, y0, z0); then the translation vector from the geographic coordinate system origin to the camera coordinate system origin is T = [x0, y0, z0]; the camera coordinate system is obtained from the geographic coordinate system by the rotation matrix R and the translation T, and the coordinates (X, Y, Z) of a three-dimensional point in the camera coordinate system and its coordinates in the geographic coordinate system satisfy:
wherein C is a point in the camera coordinate system;
R represents the rotation matrix describing the spatial change from the geographic coordinate system to the camera coordinate system, where R(YZ) is the roll rotation, R(XY) the heading rotation, and R(XZ) the pitch rotation:
R = R(YZ) R(XZ) R(XY), where α, β and γ represent the roll angle, heading angle and pitch angle, respectively;
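The composition R = R(YZ) R(XZ) R(XY) above can be sketched in NumPy. The individual matrices are not printed in this extract, so the axis assignments (roll about X, pitch about Y, heading about Z) and the sign conventions below are assumptions, not the patent's exact definitions:

```python
import numpy as np

def rot_x(a):
    # rotation in the Y-Z plane, i.e. about the X axis (roll)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    # rotation in the X-Z plane, i.e. about the Y axis (pitch)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    # rotation in the X-Y plane, i.e. about the Z axis (heading)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_matrix(alpha, beta, gamma):
    # R = R(YZ) @ R(XZ) @ R(XY): roll alpha, pitch gamma, heading beta,
    # composed right-to-left as in the claim
    return rot_x(alpha) @ rot_y(gamma) @ rot_z(beta)
```

Whatever the exact conventions, the product of three axis rotations is itself a proper rotation (orthogonal, determinant 1), which is easy to check numerically.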
step 3-2, converting the camera coordinate system into the imaging coordinate system: for a three-dimensional point M in the camera coordinate system and its imaging point m, the projection process is expressed as:
P=NC,
wherein f represents the image distance, namely the distance between the image plane and the lens; the coordinates of the three-dimensional point M are (X, Y, Z), the coordinates of the imaging point m are (xp, yp), N is a 3 × 3 projection matrix, and P is the point converted from the camera coordinate system into the imaging coordinate system;
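A minimal sketch of the projection P = N C follows. The entries of N are not reproduced in this extract, so the standard pinhole form N = [[f, 0, 0], [0, f, 0], [0, 0, 1]], with division by the depth Z to land on the imaging plane, is an assumption:

```python
import numpy as np

def project(point_cam, f):
    """Project a camera-frame point C = (X, Y, Z) to the imaging plane via P = N @ C.

    N is the assumed 3x3 pinhole projection matrix with image distance f;
    dividing the homogeneous result by its last component yields (xp, yp)."""
    N = np.array([[f, 0.0, 0.0],
                  [0.0, f, 0.0],
                  [0.0, 0.0, 1.0]])
    P = N @ np.asarray(point_cam, dtype=float)
    return P[0] / P[2], P[1] / P[2]
```

For example, a point at (1, 2, 10) with image distance 5 projects to (0.5, 1.0), the familiar xp = fX/Z, yp = fY/Z relation.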
step 3-3, converting the imaging coordinate system into a pixel coordinate system: the coordinate values on the X axis and the Y axis in the imaging coordinate system are normalized to take the pixel size as a unit, namely the coordinate values on the X axis and the Y axis are divided by the width and the height of a single pixel respectively; translating the center of the imaging coordinate system to the origin of the pixel coordinate system; let the pixel coordinate of the center of the imaging coordinate system be (cx, cy,1), the width and height of the pixel be px and py, respectively, and use the matrix M to represent the change of the image from the imaging coordinate system to the pixel coordinate system:
wherein, cx and cy are respectively a horizontal coordinate and a vertical coordinate of a pixel where the center of the imaging coordinate system is located;
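Step 3-3 can be sketched as below. The extract omits the entries of M, so a conventional form is assumed: divide by the pixel width px and height py, then translate by the pixel (cx, cy) at which the imaging-coordinate-system centre sits:

```python
import numpy as np

def imaging_to_pixel(xp, yp, px, py, cx, cy):
    """Convert imaging-plane coordinates (xp, yp) to pixel coordinates.

    M normalizes the axes to single-pixel units (divide by px, py) and then
    shifts the origin to the top-left corner via the centre pixel (cx, cy)."""
    M = np.array([[1.0 / px, 0.0, cx],
                  [0.0, 1.0 / py, cy],
                  [0.0, 0.0, 1.0]])
    u, v, w = M @ np.array([xp, yp, 1.0])
    return u / w, v / w
```

By construction the imaging-plane origin (0, 0), i.e. the optical axis, maps to the pixel (cx, cy).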
step 3-4, converting the geographic coordinate system into a pixel coordinate system by the following formula:
wherein XW, YW, ZW respectively represent coordinate values of an X axis, a Y axis, and a Z axis in the geographic coordinate system.
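Chaining steps 3-1 through 3-3 gives the full geographic-to-pixel conversion of claim 4. The rigid-transform form C = R @ W + T and the pinhole matrix N are assumptions consistent with the sketches above, since the extract does not print the combined formula:

```python
import numpy as np

def geographic_to_pixel(W, R, T, f, px, py, cx, cy):
    """Geographic -> camera -> imaging -> pixel, following steps 3-1 to 3-3.

    W is the geographic point (XW, YW, ZW); R and T are the assumed rotation
    and translation of step 3-1; f is the image distance; px/py are the pixel
    width/height; (cx, cy) is the pixel at the imaging-system centre."""
    C = R @ np.asarray(W, dtype=float) + np.asarray(T, dtype=float)  # step 3-1
    N = np.array([[f, 0.0, 0.0], [0.0, f, 0.0], [0.0, 0.0, 1.0]])
    P = N @ C                                                        # step 3-2
    xp, yp = P[0] / P[2], P[1] / P[2]
    return xp / px + cx, yp / py + cy                                # step 3-3
```

With R the identity and T zero, a point on the optical axis lands exactly on the centre pixel (cx, cy), which serves as a quick sanity check.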
5. The method of claim 4, wherein step 4 comprises: compensating the three attitude angles in the camera coordinate system, and converting the camera coordinate system into the pixel coordinate system by the method of step 3;
in the pixel coordinate system, the following calculation is made:
when the heading angle deviates by 0.09°, the number of pixels actually moved in the pixel coordinate system is 1920/18.9 × 0.09 ≈ 9 pixels;
when the pitch angle deviates by 0.08°, the number of pixels actually moved in the pixel coordinate system is 1080/10.65 × 0.08 ≈ 8 pixels.
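Both pixel counts follow one formula, pixels moved ≈ (sensor pixels / field of view in degrees) × angular offset, which implies an assumed 18.9° horizontal by 10.65° vertical field of view over a 1920 × 1080 detector:

```python
def angle_to_pixels(offset_deg, sensor_pixels, fov_deg):
    # pixels of apparent image motion = (pixels per degree) * angular offset,
    # rounded to the nearest whole pixel as in the claim's worked numbers
    return round(sensor_pixels / fov_deg * offset_deg)
```

Plugging in the claim's values reproduces the 9-pixel heading and 8-pixel pitch offsets.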
6. The method according to claim 5, wherein in step 5 the image processing algorithm comprises: taking the 9-pixel heading offset and the 8-pixel pitch offset calculated in step 4, and compensating in each direction, namely 9 pixels in the horizontal direction and 8 pixels in the vertical direction, to obtain the final despinning result.
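A zero-padded integer translation is one way to apply the compensation of claim 6. The claim does not specify how uncovered border pixels are handled, so zero fill is an assumption here:

```python
import numpy as np

def despin_shift(image, dx, dy):
    """Translate the image by (-dx, -dy) pixels to cancel the measured drift.

    Positive dx/dy mean the scene drifted right/down, so content is moved
    back left/up; border pixels uncovered by the shift are zero-filled."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    sx, sy = int(dx), int(dy)
    # source region still visible after the shift, and where it lands
    src_x = slice(max(sx, 0), w + min(sx, 0))
    src_y = slice(max(sy, 0), h + min(sy, 0))
    dst_x = slice(max(-sx, 0), w + min(-sx, 0))
    dst_y = slice(max(-sy, 0), h + min(-sy, 0))
    out[dst_y, dst_x] = image[src_y, src_x]
    return out
```

For the claim's numbers this would be called as despin_shift(frame, 9, 8), shifting 9 pixels horizontally and 8 vertically.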
7. An image despinning system based on multi-source sensing data fusion processing is characterized by comprising an acquisition module, a preprocessing module, a coordinate system construction module, a coordinate system conversion module, an image despinning module and an imaging module;
the acquisition module is used for acquiring real-time image data, acquiring the heading angle and pitch angle of the rotary table in real time through the encoder inside the rotary table, and acquiring the tilt angle in the roll direction in real time through the roll gyro of the rotary table azimuth frame; and for acquiring the attitude angles of the geographic coordinate system in real time through an inertial navigation system, the attitude angles comprising a roll angle, a heading angle and a pitch angle;
the preprocessing module is used for preprocessing the acquired real-time image data, including noise reduction, contrast enhancement and brightness enhancement;
the coordinate system building module is used for building a camera coordinate system, an imaging coordinate system and a pixel coordinate system;
the coordinate system conversion module is used for converting the geographic coordinate system into a final pixel coordinate system and converting the angle information into coordinate values of image pixel coordinates;
the image despinning module is used for compensating the pixel deviation in each direction by using an image processing algorithm so as to realize image despinning of multi-source sensing data fusion processing;
and the imaging module is used for presenting the image result after the despinning to a user.
8. The system of claim 7, wherein the geographic coordinate system is the three-dimensional space coordinate system in which the ground object is located: the ground is selected as the Z = 0 plane, the Z coordinate axis is perpendicular to it, and the X and Y coordinate axes are selected at any position in the Z = 0 plane such that the three axes satisfy a right-handed coordinate system.
9. The system according to claim 8, wherein the coordinate system building module is configured to build a camera coordinate system, an imaging coordinate system, and a pixel coordinate system, and specifically includes: the camera coordinate system is a three-dimensional coordinate system, the optical center of the camera is selected as the origin, the optical axis of the camera is the Z axis, and the X axis and the Y axis of the camera coordinate system are respectively parallel to the width and the height of the area array detector;
the imaging coordinate system is a two-dimensional coordinate system, the intersection point of the optical axis and the image plane is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the imaging coordinate system represents the absolute coordinate of the projection of the three-dimensional point to the image plane;
the pixel coordinate system is a two-dimensional coordinate system, the upper left corner of the focal plane of the image is taken as an origin (0,0), the X axis and the Y axis are respectively along the width direction and the height direction of the image, and the coordinate values on the X axis and the Y axis are normalized to take the width and the height of a single pixel as a unit.
10. The system according to claim 9, wherein the coordinate system transformation module is configured to transform the geographic coordinate system into a final pixel coordinate system, and transform the angle information into coordinate values of image pixel coordinates, and specifically includes:
step 3-1, converting the geographic coordinate system into the camera coordinate system: let the coordinates of the geographic coordinate system origin O in the camera coordinate system be (x0, y0, z0); then the translation vector from the geographic coordinate system origin to the camera coordinate system origin is T = [x0, y0, z0]; the camera coordinate system is obtained from the geographic coordinate system by the rotation matrix R and the translation T, and the coordinates (X, Y, Z) of a three-dimensional point in the camera coordinate system and its coordinates in the geographic coordinate system satisfy:
wherein C is a point in the camera coordinate system;
R represents the rotation matrix describing the spatial change from the geographic coordinate system to the camera coordinate system, where R(YZ) is the roll rotation, R(XY) the heading rotation, and R(XZ) the pitch rotation:
R = R(YZ) R(XZ) R(XY), where α, β and γ represent the roll angle, heading angle and pitch angle, respectively;
step 3-2, converting the camera coordinate system into the imaging coordinate system: for a three-dimensional point M in the camera coordinate system and its imaging point m, the projection process is expressed as:
P=NC,
where f represents the image distance, namely the distance between the image plane and the lens; the coordinates of the three-dimensional point M are (X, Y, Z), the coordinates of the imaging point m are (xp, yp), N is a 3 × 3 projection matrix, and P is the point converted from the camera coordinate system into the imaging coordinate system;
step 3-3, converting the imaging coordinate system into a pixel coordinate system: the coordinate values on the X axis and the Y axis in the imaging coordinate system are normalized to take the pixel size as a unit, namely the coordinate values on the X axis and the Y axis are divided by the width and the height of a single pixel respectively; translating the center of the imaging coordinate system to the origin of the pixel coordinate system, setting the pixel coordinate where the center of the imaging coordinate system is located as (cx, cy,1), the width and the height of the pixel as px and py respectively, and representing the change of the image from the imaging coordinate system to the pixel coordinate system by using a matrix M:
wherein, cx and cy are respectively a horizontal coordinate and a vertical coordinate of a pixel where the center of the imaging coordinate system is located;
step 3-4, converting the geographic coordinate system into a pixel coordinate system by the following formula:
wherein XW, YW and ZW respectively represent coordinate values of an X axis, a Y axis and a Z axis in a geographic coordinate system;
compensating the three attitude angles in the camera coordinate system, and converting the camera coordinate system into the pixel coordinate system by the method of step 3;
in the pixel coordinate system, the following calculation is made:
when the heading angle deviates by 0.09°, the number of pixels actually moved in the pixel coordinate system is 1920/18.9 × 0.09 ≈ 9 pixels;
when the pitch angle deviates by 0.08°, the number of pixels actually moved in the pixel coordinate system is 1080/10.65 × 0.08 ≈ 8 pixels;
the image despinning module is used for compensating the pixel deviation in each direction by using an image processing algorithm, so as to realize image despinning with multi-source sensing data fusion processing, and specifically comprises: taking the calculated 9-pixel heading offset and 8-pixel pitch offset, and compensating in each direction, namely 9 pixels in the horizontal direction and 8 pixels in the vertical direction, to obtain the final despinning result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010461520.8A CN111667413A (en) | 2020-05-27 | 2020-05-27 | Image despinning method and system based on multi-source sensing data fusion processing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111667413A true CN111667413A (en) | 2020-09-15 |
Family
ID=72384752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010461520.8A Pending CN111667413A (en) | 2020-05-27 | 2020-05-27 | Image despinning method and system based on multi-source sensing data fusion processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111667413A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106342328B (en) * | 2008-05-23 | 2012-07-25 | 中国航空工业集团公司洛阳电光设备研究所 | Electronics racemization method for parallel processing based on TIDSP |
CN103064430A (en) * | 2012-12-18 | 2013-04-24 | 湖南华南光电(集团)有限责任公司 | Mechanical and electrical integration type image stabilization device |
CN109658337A (en) * | 2018-11-21 | 2019-04-19 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of FPGA implementation method of image real-time electronic racemization |
Non-Patent Citations (1)
Title |
---|
WANG TING: "Research on despinning control technology for airborne CCD images", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113050108A (en) * | 2021-03-23 | 2021-06-29 | 湖南盛鼎科技发展有限责任公司 | Electronic boundary address vision measurement system and measurement method |
CN113050108B (en) * | 2021-03-23 | 2024-01-09 | 湖南盛鼎科技发展有限责任公司 | Electronic world site vision measurement system and measurement method |
CN114663480A (en) * | 2022-02-10 | 2022-06-24 | 上海卫星工程研究所 | Synchronous image rotation elimination and channel registration method and system for 45-degree rotary scanning space camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109360240B (en) | Small unmanned aerial vehicle positioning method based on binocular vision | |
CN110842940A (en) | Building surveying robot multi-sensor fusion three-dimensional modeling method and system | |
CN105716542B (en) | A kind of three-dimensional data joining method based on flexible characteristic point | |
CN110033480B (en) | Aerial photography measurement-based airborne photoelectric system target motion vector estimation method | |
CN111968228B (en) | Augmented reality self-positioning method based on aviation assembly | |
CN113850126A (en) | Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle | |
JPH11252440A (en) | Method and device for ranging image and fixing camera to target point | |
CN112132908B (en) | Camera external parameter calibration method and device based on intelligent detection technology | |
CN112184812B (en) | Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system | |
CN112927133B (en) | Image space projection splicing method based on integrated calibration parameters | |
CN112184786B (en) | Target positioning method based on synthetic vision | |
CN108830811A (en) | A kind of aviation image real-time correction method that flight parameter is combined with camera internal reference | |
CN113177918B (en) | Intelligent and accurate inspection method and system for electric power tower by unmanned aerial vehicle | |
CN111915685B (en) | Zoom camera calibration method | |
CN111667413A (en) | Image despinning method and system based on multi-source sensing data fusion processing | |
CN113313659A (en) | High-precision image splicing method under multi-machine cooperative constraint | |
CN113496503A (en) | Point cloud data generation and real-time display method, device, equipment and medium | |
CN114549629A (en) | Method for estimating three-dimensional pose of target by underwater monocular vision | |
CN111260736B (en) | In-orbit real-time calibration method for internal parameters of space camera | |
CN111696155A (en) | Monocular vision-based multi-sensing fusion robot positioning method | |
CN111222586A (en) | Inclined image matching method and device based on three-dimensional inclined model visual angle | |
CN112017303B (en) | Equipment maintenance auxiliary method based on augmented reality technology | |
CN112577463B (en) | Attitude parameter corrected spacecraft monocular vision distance measuring method | |
CN108986025B (en) | High-precision different-time image splicing and correcting method based on incomplete attitude and orbit information | |
Kim et al. | Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200915 |