CN115861463A - Method for generating panoramic orthographic image of multispectral ground observation device and program product - Google Patents



Publication number
CN115861463A
Authority
CN
China
Prior art keywords
multispectral
image
coordinates
foc1
ground
Prior art date
Legal status
Pending
Application number
CN202211327058.8A
Other languages
Chinese (zh)
Inventor
王一豪
李海巍
陈铁桥
宋丽瑶
刘松
陈军宇
Current Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202211327058.8A
Publication of CN115861463A



Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the field of panoramic orthographic image generation, and provides a method for generating a panoramic orthographic image of a multispectral ground observation device, together with a computer program product. It addresses a technical problem of current space-air-ground integrated remote sensing monitoring: from a single multispectral remote sensing image, a ground multispectral camera can hardly locate an observation target accurately, which restricts the integrated remote sensing monitoring capability.

Description

Method for generating panoramic orthographic image of multispectral ground observation device and program product
Technical Field
The invention belongs to the field of panoramic orthographic image generation, and particularly relates to a method for generating a panoramic orthographic image of a multispectral ground observation device, and to a computer program product.
Background
Multispectral remote sensing divides the electromagnetic radiation from surface features into several narrow spectral bands and acquires information in these bands for the same target simultaneously, by photographing or scanning. It can distinguish ground objects not only by differences in image form and structure but also by differences in spectral characteristics, which enlarges the information content of remote sensing. It is therefore widely applied in agricultural monitoring, geological exploration, environmental monitoring, marine research, and other fields.
Space-air-ground integrated remote sensing monitoring combines space satellite remote sensing, aerial unmanned aerial vehicle (UAV) remote sensing, and ground observation remote sensing, and can provide omnidirectional, multi-scale, multi-element data for multispectral remote sensing applications. The three observation modes complement each other in many fields. Satellite remote sensing performs spectral observation through a multispectral imaging payload carried by a satellite; it offers a wide observation range (a swath of hundreds of kilometers) and unrestricted observation regions (any region at home or abroad can be monitored), but suffers from low spatial resolution (only tens of meters), a long revisit period (one to two weeks), difficult radiometric correction (atmospheric influence), susceptibility to weather (e.g. observation is impossible under thick cloud), and high overall cost. Aerial UAV remote sensing usually mounts a miniature multispectral camera, mostly covering the visible to near-infrared range, on a payload-capable UAV platform (such as the DJI M600); it offers a larger observation range than ground observation, higher flexibility, and less weather influence, but the camera performance is limited by the platform's carrying capacity, the observation time is limited by the UAV's flight endurance, and professional pilots are required, so labor cost is high.
Ground remote sensing monitoring usually uses a surface feature spectrometer to obtain crop spectral information, together with contact sensors or chemical methods to obtain physicochemical information. Its advantages are: (1) a short radiation transmission link with little interference, allowing quantification at the highest precision; (2) real-time data collection; (3) crop physicochemical parameters at the same time phase; (4) a higher degree of automation than UAV remote sensing; and (5) relatively low cost. It still has disadvantages: flexibility is lower than that of a UAV, and the measurement range of a single device is limited.
In order to make up for the small field of view of ground multispectral remote sensing observation, a panoramic turntable can be configured for the multispectral camera, giving it 360-degree panoramic observation capability. However, for a single multispectral remote sensing image it is still difficult to visually determine the observed spatial range and to locate an observation target accurately, which restricts the space-air-ground integrated remote sensing monitoring capability.
Disclosure of Invention
The invention provides a method for generating a panoramic orthographic image of a multispectral ground observation device, and a computer program product, aiming to solve the technical problem that, in current space-air-ground integrated remote sensing monitoring, a ground multispectral camera can hardly locate an observation target accurately from a single multispectral remote sensing image, which restricts the integrated remote sensing monitoring capability.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method for generating a panoramic orthographic image of a multispectral ground observation device is characterized by comprising the following steps:
s1, respectively carrying out geometric registration and combination on image data of each channel of each multispectral image of a multispectral camera to obtain a multispectral image subjected to registration and combination;
s2, cutting the boundary area of the registered and combined multispectral image to obtain a cut multispectral image;
s3, defining a farthest observation boundary line on the cut multispectral image to obtain an observation area of each multispectral image;
s4, determining coordinates of the focus of the multispectral camera and four corner points of a multispectral camera sensor in an observation area in a world coordinate system by combining the farthest observation boundary line;
s5, respectively acquiring world coordinate system vectors from the focus of the multispectral camera to four corner points of the multispectral camera sensor in the observation area;
s6, respectively acquiring ground coordinates of four corner points of the multispectral camera sensor in each multispectral image observation area;
s7, respectively generating ground projection transformation matrixes of the multispectral images according to the ground coordinates obtained in the step S6;
s8, generating a ground base map for projection embedding of the multispectral images under different visual angles of the multispectral ground observation device;
and S9, according to the ground projection transformation matrix of each multispectral image obtained in the step S7, projecting and embedding the multispectral images of different visual angles of the multispectral ground observation device on a ground base map to generate a panoramic orthographic image.
Further, step S1 specifically includes:
s1.1, respectively calculating characteristic points of image data of each channel of each multispectral image;
s1.2, selecting any channel image data as a reference image, and using other channel image data as an image to be matched;
s1.3, respectively matching the image to be matched with the reference image according to the characteristic points of the image data of each channel obtained in the step S1.1;
s1.4, respectively calculating a projection transformation matrix from each image to be matched to a reference image by adopting a least square method;
and S1.5, respectively carrying out projection transformation on the images to be matched according to the projection transformation matrix from the images to be matched to the reference image, and merging the image data of all channels.
Further, step S2 specifically comprises: cropping the registered and merged multispectral image inward by a fixed width, referenced to the center of the reference image.
Further, step S4 is specifically to obtain coordinates of the multispectral camera focus and four corner points of the multispectral camera sensor in the world coordinate system according to the following formula:
Foc1=Rz·Rx·Foc+T;
C11=Rz·Rx·C1+T;
C21=Rz·Rx·C2+T;
C31=Rz·Rx·C3+T;
C41=Rz·Rx·C4+T;
wherein Foc1 represents the coordinates of the multispectral camera focus in the world coordinate system; C11, C21, C31 and C41 respectively represent the coordinates of the four corner points of the multispectral camera sensor in the world coordinate system; Rz represents the azimuth rotation matrix, Rx the pitch rotation matrix, and T the height vector;
Foc represents the initial coordinates of the multispectral camera focus in the world coordinate system:
Foc=[0,-f,0]′
wherein f represents the multispectral camera sensor focal length;
C1, C2, C3 and C4 respectively represent the initial coordinates of the four corner points of the multispectral camera sensor in the world coordinate system:
C1=[-cmos_w/2, 0, cmos_h/2-cmos_h·top_h/img_h]′
C2=[cmos_w/2, 0, cmos_h/2-cmos_h·top_h/img_h]′
C3=[cmos_w/2, 0, -cmos_h/2]′
C4=[-cmos_w/2, 0, -cmos_h/2]′
wherein cmos_w represents the width of the multispectral camera sensor, cmos_h represents the height of the multispectral camera sensor, img_h represents the height of the cropped multispectral image, and top_h represents the row number of the horizontal farthest observation boundary line, counted from the top of the multispectral image.
Further, step S5 is specifically to obtain vectors from the focus of the multispectral camera to four corner points of the multispectral camera sensor according to the following formula:
V1=C11-Foc1
V2=C21-Foc1
V3=C31-Foc1
V4=C41-Foc1
where V1 represents the world coordinate system vector from the multispectral camera focus to C11, V2 the vector from the focus to C21, V3 the vector from the focus to C31, and V4 the vector from the focus to C41.
Further, step S6 is specifically to obtain the ground coordinates of the four corner points of each multispectral image observation region according to the following formula:
G1(1)=Foc1(1)-Foc1(3)×(C11(1)-Foc1(1))/(C11(3)-Foc1(3))
G1(2)=Foc1(2)-Foc1(3)×(C11(2)-Foc1(2))/(C11(3)-Foc1(3))
G2(1)=Foc1(1)-Foc1(3)×(C21(1)-Foc1(1))/(C21(3)-Foc1(3))
G2(2)=Foc1(2)-Foc1(3)×(C21(2)-Foc1(2))/(C21(3)-Foc1(3))
G3(1)=Foc1(1)-Foc1(3)×(C31(1)-Foc1(1))/(C31(3)-Foc1(3))
G3(2)=Foc1(2)-Foc1(3)×(C31(2)-Foc1(2))/(C31(3)-Foc1(3))
G4(1)=Foc1(1)-Foc1(3)×(C41(1)-Foc1(1))/(C41(3)-Foc1(3))
G4(2)=Foc1(2)-Foc1(3)×(C41(2)-Foc1(2))/(C41(3)-Foc1(3))
wherein G1(1) and G1(2) represent the world-coordinate-system X-axis and Y-axis coordinates of ground coordinate G1, and G2(1), G2(2), G3(1), G3(2), G4(1) and G4(2) likewise represent the X-axis and Y-axis coordinates of ground coordinates G2, G3 and G4; Foc1(1), Foc1(2) and Foc1(3) represent the X-axis, Y-axis and Z-axis coordinates of the final world coordinate Foc1 of the multispectral camera focus; C11(1), C11(2) and C11(3) represent the X-axis, Y-axis and Z-axis coordinates of the sensor corner point whose final world coordinate is C11, and the components of C21, C31 and C41 are defined in the same way; the ground coordinates G1, G2, G3 and G4 correspond to the corner points C11, C21, C31 and C41 respectively.
Further, step S7 specifically comprises: solving, by the least square method, the projection transformation matrix TForm that maps the image coordinates of the four corner points of the multispectral image to the ground coordinates of the four corner points of the multispectral camera sensor in the corresponding observation area:
G1=TForm·I1
G2=TForm·I2
G3=TForm·I3
G4=TForm·I4;
wherein, I1 represents the corner image coordinates of the multispectral image corresponding to G1, I2 represents the corner image coordinates of the multispectral image corresponding to G2, I3 represents the corner image coordinates of the multispectral image corresponding to G3, and I4 represents the corner image coordinates of the multispectral image corresponding to G4.
Further, in step S7, the image coordinates of the four corner points of the multispectral image are:
I1=[1,1]′
I2=[img_w,1]′
I3=[img_w,img_h-top_h]′
I4=[1,img_h-top_h]′
wherein img_w represents the width of the cropped multispectral image.
Further, in step S9, before the panoramic orthographic image is generated, oversampled or undersampled pixel regions are filled by linear fitting.
In addition, the invention also provides a computer program product comprising a computer program, characterized in that, when executed by a processor, the program implements the steps of the above method for generating a panoramic orthographic image of a multispectral ground observation device.
Compared with the prior art, the invention has the following beneficial effects:
1. The method for generating a panoramic orthographic image of a multispectral ground observation device provided by the invention compensates for the small observation field of view of the original multispectral image and the unclear spatial position of the observation target. The spatial position and extent of the observation area can be read directly from the panoramic orthographic image, supporting space-air-ground integrated remote sensing observation. In addition, an azimuth compass and a scale bar are placed on the output multispectral panoramic orthographic image, facilitating interpretation of the azimuth and distance of the observation area.
2. By defining the farthest observation boundary line on the multispectral remote sensing image, the panoramic orthographic image generation method excludes observation regions that are too far away and occupy too few image pixels, and avoids the problem that areas near the skyline and in the sky cannot be projected and transformed to the ground coordinate system.
3. The panoramic orthographic image generation method of the invention requires few parameters, computes quickly, and uses a robust algorithm, so the multispectral ground-observation panoramic orthographic image can be generated automatically.
4. The invention also provides a computer program product capable of executing the steps of the method, which allows the method of the invention to be popularized, applied, and integrated on corresponding hardware equipment.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a method for generating a panoramic orthographic image of a multispectral ground observation device according to the present invention;
FIG. 2 is a diagram illustrating channel data of a single multi-spectral image according to an embodiment of the present invention; wherein, (a) is a red light wave band channel, (b) is a green light wave band channel, (c) is a blue light wave band channel, (d) is a red side wave band channel, and (e) is a near infrared wave band channel;
FIG. 3 is a graph illustrating feature point matching for two channel image data according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating boundary clipping after registration and merging of channels of a single multispectral image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating dimensions of a single multi-spectral image after registration and merging of channels and boundary clipping according to an embodiment of the present invention;
FIG. 6 shows the full set of registered and merged multispectral images obtained in an embodiment of the present invention, wherein 1.GIF-12.GIF are multispectral images collected at different horizontal angles respectively;
FIG. 7 is a schematic diagram illustrating the definition of the farthest observation boundary of the multi-spectral image according to the embodiment of the present invention;
FIG. 8 is a schematic diagram of the positional relationship between the four corner points of the multispectral camera sensor and the focus of the multispectral camera in an embodiment of the present invention;
FIG. 9 is a schematic diagram of the ground coordinates of the four corner points of a multispectral image observation region in an embodiment of the present invention;
FIG. 10 is a ground ortho image of a single multi-spectral image obtained according to an embodiment of the present invention;
FIG. 11 is a panoramic orthographic image of the multispectral ground observation device in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
The invention provides a method for generating a panoramic orthographic image of a multispectral ground observation device. By projecting and transforming multiple original observation images collected at different angles by the device, it generates a panoramic orthographic image from a top-down viewing angle. This solves the problems that a single multispectral original image has a small observation range and a non-uniform observation angle; the observation area and observation range can be displayed visually, providing a basis for accurate spatial positioning of observation targets of interest and support for space-air-ground integrated remote sensing observation.
The panoramic orthographic image generation method takes as input multiple frames of per-channel original image data collected by the multispectral ground observation device, the observation geometry data of the multispectral camera, and the camera parameter data; it outputs multispectral panoramic orthographic image data. The multispectral ground observation device can perform 360-degree panoramic horizontal and pitch observation; the M collected multispectral original images must cover every observation angle in the horizontal direction, with overlapping fields of view between adjacent images; a multispectral camera is arranged in the device. The multispectral camera comprises N spectral channels with geometric field-of-view deviations between them, so inter-channel registration and fusion are required. The camera observation geometry data comprise the camera pitch angle Ax, azimuth angle Az and height above ground H; the camera parameter data comprise the sensor size cmos_w × cmos_h and the focal length f.
The specific steps are shown in figure 1:
S1, registering each channel of a single multispectral image
The N channel images of a single multispectral image are geometrically registered and merged, specifically: (1) compute the feature points of the N channel images, for example with the SURF algorithm; (2) select one channel image as the reference image, such as the N/2-th channel; (3) match the other channel images to the reference image by feature points; (4) compute the projection transformation matrix from each other channel image to the reference image by the least square method; (5) apply the projection transformations to the other channel images and merge all channel images.
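Step (4), the least-squares projection transformation between channels, can be sketched in self-contained Python via the direct linear transform (an illustrative implementation, not taken from the patent; `fit_homography` and `warp_point` are hypothetical names):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: least-squares 3x3 projection matrix H
    such that dst ~ H @ [x, y, 1]' for matched feature-point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the smallest right singular vector minimizes |A h| subject to |h| = 1
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply a homography to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

A real pipeline would wrap this in RANSAC to reject bad feature matches before the least-squares fit.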
S2, clipping the registered multispectral image boundary region
The boundary area of the registered and merged multispectral image is cropped to remove the influence of unregistered border regions. Referenced to the center of the single-channel reference image, the registered and merged multispectral image is cropped inward by a fixed width L on all sides; the size of the cropped multispectral image is denoted img_w × img_h.
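The fixed-width crop itself is a single slicing operation; a minimal sketch, assuming the merged image is held as a NumPy array and using the embodiment's sizes (1280 × 960 cropped by L = 40 on each side to 1200 × 880; the function name `crop_border` is hypothetical):

```python
import numpy as np

def crop_border(img, L=40):
    """Crop a fixed border of width L pixels from every side of a
    registered multispectral image (rows x cols x channels array)."""
    return img[L:-L, L:-L, :]

merged = np.zeros((960, 1280, 5))        # e.g. a 5-channel merged frame
cropped = crop_border(merged, L=40)
print(cropped.shape)                     # (880, 1200, 5)
```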
S3, defining the farthest observation boundary line for the cut multispectral image
The farthest observation boundary line is defined on the cropped multispectral image, so that observation regions that are too far away and therefore occupy too few image pixels can be excluded. The boundary line is parallel to the horizontal direction of the multispectral image, and its row number is denoted top_h. When the multispectral image is later projected onto the ground base map, this boundary line delimits the farthest extent.
S4, converting the multispectral camera coordinate system into a world coordinate system
The world coordinate system is the east-north-up coordinate system: the X axis points east, the Y axis points north, and the Z axis points to the zenith.
Using the multispectral camera observation geometry data and camera parameter data, the multispectral camera coordinate system is transformed to the world coordinate system. The center of the multispectral camera sensor is taken as the world origin; the initial world coordinate of the camera focus is Foc, and the initial world coordinates of the four corner points on the sensor are C1, C2, C3 and C4. Since the farthest observation boundary line has been defined on the image, C1 and C2 are the sensor end points corresponding to that boundary line. The following formulas are adopted:
Foc=[0,-f,0]′
C1=[-cmos_w/2, 0, cmos_h/2-cmos_h·top_h/img_h]′
C2=[cmos_w/2, 0, cmos_h/2-cmos_h·top_h/img_h]′
C3=[cmos_w/2, 0, -cmos_h/2]′
C4=[-cmos_w/2, 0, -cmos_h/2]′
Considering the observation geometry of the multispectral camera, namely the camera pitch angle Ax, azimuth angle Az and height above ground H, the initial world coordinates of the focus and corner points must be multiplied by the pitch rotation matrix Rx and the azimuth rotation matrix Rz, and the height vector T added. The final world coordinate of the camera focus is then Foc1, and the final world coordinates of the four corner points on the sensor are C11, C21, C31 and C41:
Foc1=Rz·Rx·Foc+T
C11=Rz·Rx·C1+T
C21=Rz·Rx·C2+T
C31=Rz·Rx·C3+T
C41=Rz·Rx·C4+T
Rx=[1, 0, 0; 0, cos(Ax), -sin(Ax); 0, sin(Ax), cos(Ax)]
Rz=[cos(Az), -sin(Az), 0; sin(Az), cos(Az), 0; 0, 0, 1]
T=[0 0 H]′
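A hedged Python sketch of this coordinate transformation (the corner ordering and the rotation-matrix sign conventions are assumptions using standard rotations about the X and Z axes; the function name is illustrative):

```python
import numpy as np

def camera_world_coords(f, cmos_w, cmos_h, img_h, top_h, Ax, Az, H):
    """Final world (east-north-up) coordinates of the camera focus Foc1
    and sensor corner points C11..C41 (step S4). The initial corner
    coordinates are reconstructions, not taken verbatim from the patent."""
    z_top = cmos_h / 2 - cmos_h * top_h / img_h     # sensor row of the boundary line
    Foc = np.array([0.0, -f, 0.0])                  # focus sits behind the sensor
    corners0 = [np.array([-cmos_w / 2, 0.0, z_top]),        # C1
                np.array([cmos_w / 2, 0.0, z_top]),         # C2
                np.array([cmos_w / 2, 0.0, -cmos_h / 2]),   # C3
                np.array([-cmos_w / 2, 0.0, -cmos_h / 2])]  # C4
    ca, sa = np.cos(Ax), np.sin(Ax)
    cz, sz = np.cos(Az), np.sin(Az)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # pitch about X
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])   # azimuth about Z
    T = np.array([0.0, 0.0, H])                             # height vector
    Foc1 = Rz @ Rx @ Foc + T
    return Foc1, [Rz @ Rx @ c + T for c in corners0]
```

With Ax = Az = 0 and height H, the focus lands at [0, -f, H], as expected from the formulas above.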
S5, generating vectors from the multispectral camera focus to the sensor corner points
generating world coordinate system vectors V1, V2, V3 and V4 from the multispectral camera focus to the sensor corner points as follows:
V1=C11-Foc1
V2=C21-Foc1
V3=C31-Foc1
V4=C41-Foc1.
S6, generating ground coordinates of the single multispectral image
The ground coordinates G1, G2, G3 and G4 of the four sensor corner points in the single multispectral image observation area are generated using the following formulas for the intersection of a spatial ray with the ground plane:
G1(1)=Foc1(1)-Foc1(3)×(C11(1)-Foc1(1))/(C11(3)-Foc1(3))
G1(2)=Foc1(2)-Foc1(3)×(C11(2)-Foc1(2))/(C11(3)-Foc1(3))
G2(1)=Foc1(1)-Foc1(3)×(C21(1)-Foc1(1))/(C21(3)-Foc1(3))
G2(2)=Foc1(2)-Foc1(3)×(C21(2)-Foc1(2))/(C21(3)-Foc1(3))
G3(1)=Foc1(1)-Foc1(3)×(C31(1)-Foc1(1))/(C31(3)-Foc1(3))
G3(2)=Foc1(2)-Foc1(3)×(C31(2)-Foc1(2))/(C31(3)-Foc1(3))
G4(1)=Foc1(1)-Foc1(3)×(C41(1)-Foc1(1))/(C41(3)-Foc1(3))
G4(2)=Foc1(2)-Foc1(3)×(C41(2)-Foc1(2))/(C41(3)-Foc1(3))
wherein G1(1) and G1(2) represent the world-coordinate-system X-axis and Y-axis coordinates of ground coordinate G1, and G2(1), G2(2), G3(1), G3(2), G4(1) and G4(2) likewise represent the X-axis and Y-axis coordinates of ground coordinates G2, G3 and G4; Foc1(1), Foc1(2) and Foc1(3) represent the X-axis, Y-axis and Z-axis coordinates of the final world coordinate Foc1 of the multispectral camera focus; C11(1), C11(2) and C11(3) represent the X-axis, Y-axis and Z-axis coordinates of the sensor corner point whose final world coordinate is C11, and the components of C21, C31 and C41 are defined in the same way.
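The component formulas above amount to intersecting the ray from the focus through a sensor corner with the ground plane Z = 0; a direct transcription in Python (the function name is hypothetical):

```python
import numpy as np

def ground_point(Foc1, C):
    """Ground coordinate of one sensor corner (step S6): intersection of
    the ray from the focus Foc1 through corner C with the plane Z = 0,
    written exactly as the patent's component formulas."""
    gx = Foc1[0] - Foc1[2] * (C[0] - Foc1[0]) / (C[2] - Foc1[2])
    gy = Foc1[1] - Foc1[2] * (C[1] - Foc1[1]) / (C[2] - Foc1[2])
    return np.array([gx, gy])
```

For example, a focus at height 2 with the ray passing through a point one unit away at height 1 reaches the ground two units away, which the formula reproduces.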
S7, generating a ground projection transformation matrix of the single multispectral image
The projection transformation matrix TForm, which maps the image coordinates I1, I2, I3 and I4 of the four corner points of the multispectral image to the ground coordinates G1, G2, G3 and G4 of the corresponding observation area, is computed by the least square method. The image coordinate unit is the pixel and the ground coordinate unit is the centimeter, so the ground coordinates must additionally be multiplied by a conversion coefficient rate (unit: pixel/centimeter); the value of rate can be chosen empirically according to the desired size of the transformed image.
G1=TForm·I1
G2=TForm·I2
G3=TForm·I3
G4=TForm·I4
I1=[1,1]′
I2=[img_w,1]′
I3=[img_w,img_h-top_h]′
I4=[1,img_h-top_h]′.
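With exactly four corner correspondences the least-squares problem has an exact solution; a sketch that builds the 8×8 linear system for TForm, including the rate conversion (the default rate value and the function name are illustrative assumptions):

```python
import numpy as np

def ground_transform(G, img_w, img_h, top_h, rate=1.0):
    """Projection matrix TForm mapping the corner image coordinates
    I1..I4 to the rate-scaled ground coordinates G1..G4 (step S7)."""
    I = [(1, 1), (img_w, 1), (img_w, img_h - top_h), (1, img_h - top_h)]
    A, b = [], []
    for (x, y), (gx, gy) in zip(I, G):
        u, v = gx * rate, gy * rate    # centimeters -> base-map pixels
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # h33 fixed to 1
```

Applying the returned TForm to a corner in homogeneous coordinates and dividing by the third component reproduces the corresponding ground coordinate.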
S8, generating a ground base map
A ground base map is generated for the projection embedding of the multispectral images of all viewing angles. Its size must exceed the ground projection range of the multispectral images of all viewing angles; the specific value can be adjusted empirically.
S9, projecting and embedding the multispectral image on the ground base map
The multispectral images of all viewing angles of the multispectral ground observation device are projected and embedded on the ground base map: each multispectral image is transformed pixel by pixel with its corresponding ground projection transformation matrix and assigned to the ground image, and oversampled or undersampled pixel regions are then filled by linear fitting.
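A minimal sketch of the pixel-by-pixel projection and assignment for one view (nearest-pixel assignment only; the linear-fitting fill and the loop over all viewing angles are omitted, and the function name is hypothetical):

```python
import numpy as np

def embed(base, img, TForm):
    """Step S9 sketch: push every pixel of one view through its ground
    projection matrix TForm and assign it to the base map."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[1:h + 1, 1:w + 1]              # 1-based image coordinates
    q = TForm @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    u = np.rint(q[0] / q[2]).astype(int)             # base-map column
    v = np.rint(q[1] / q[2]).astype(int)             # base-map row
    ok = (u >= 0) & (u < base.shape[1]) & (v >= 0) & (v < base.shape[0])
    base[v[ok], u[ok]] = img.reshape(h * w, -1)[ok] if img.ndim == 3 \
        else img.ravel()[ok]
    return base
```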
With this panoramic orthographic image generation method, the output multispectral panoramic orthographic image data carries azimuth information and scale information. The Y-axis direction of the ground base map is true north, so an azimuth compass is added; the scale relating base map pixel count to physical distance is then determined from the conversion coefficient rate between the ground world physical coordinate system and the ground base map image pixel coordinate system.
The following is a specific embodiment of the method for generating a panoramic orthographic image of a multispectral ground observation device according to the invention:
A multispectral ground observation device in the agricultural monitoring application field of a certain area is selected for the experiment; the acquisition device carries a 5-channel multispectral camera mounted in a panoramic observation turntable, enabling observation over a 360° horizontal azimuth range and a 0° to 90° pitch range.
Fig. 2 shows the 5 channels of data of a single multispectral image, comprising red, green, blue, red-edge and near-infrared band channels. For a single multispectral image, SURF feature points are calculated for each of the 5 channel images; taking the 3rd channel as reference, feature point matching is performed for the other 4 channels, a projection transformation matrix is calculated by the least squares method, the other 4 channels are registered to the 3rd channel, and the channels are merged. Fig. 3 is a schematic diagram of feature point matching between two channels of image data. Referring to fig. 4, the boundary area of the single multispectral image after channel registration and merging is clipped. Centred on the 3rd channel, the original resolution of a single-channel image is 1280 × 960; the image is cropped by L = 40 pixels from the periphery toward the centre, so that, as shown in fig. 5, the size of the cropped multispectral image is img_w × img_h = 1200 × 880. As shown in fig. 6, to obtain panoramic data of the observation area, 12 multispectral images, numbered 1 to 12 in fig. 6, are collected at different horizontal angles in this embodiment. As shown in fig. 7, the farthest observation boundary line (the solid line in fig. 7) is defined on the registered and merged multispectral image; it is parallel to the horizontal direction of the multispectral image, and its ordinate is the 200th pixel. As in fig. 8, the multispectral camera coordinate system is transformed to the world coordinate system using the multispectral camera observation geometry data and camera parameter data.
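The boundary clipping described above amounts to slicing L = 40 pixels off every border; a minimal sketch (NumPy assumed, zero-filled placeholder data in place of real image channels) reproduces the embodiment's 1280 × 960 → 1200 × 880 sizes:

```python
import numpy as np

L_CROP = 40                                  # pixels cropped from each border (L in the text)
stack = np.zeros((960, 1280, 5))             # registered 5-channel image, original 1280 x 960
cropped = stack[L_CROP:-L_CROP, L_CROP:-L_CROP, :]
img_h, img_w = cropped.shape[:2]
print(img_w, img_h)                          # -> 1200 880
```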
The initial world coordinates Foc of the camera focus are [0, -0.55, 0]; the initial world coordinates C1, C2, C3, C4 of the four corner points on the sensor are [-0.24, 0, 0.0986], [0.24, 0, 0.0986], [-0.24, 0, -0.18] and [0.24, 0, -0.18]; the height vector T is [0, 0, 300]; the pitch angle Ax is -15° and the azimuth angle Az is 15°. The pitch rotation matrix Rx and the azimuth rotation matrix Rz are as follows:
Rx = [[1, 0, 0], [0, cos Ax, -sin Ax], [0, sin Ax, cos Ax]] = [[1, 0, 0], [0, 0.9659, 0.2588], [0, -0.2588, 0.9659]]
Rz = [[cos Az, -sin Az, 0], [sin Az, cos Az, 0], [0, 0, 1]] = [[0.9659, -0.2588, 0], [0.2588, 0.9659, 0], [0, 0, 1]]
After applying the pitch, azimuth and altitude transformations, Foc1 is [0.1375, -0.5132, 300.1424], and C11, C21, C31 and C41 are [-0.2384, -0.0375, 300.0952], [0.2252, 0.0868, 300.0952], [-0.2198, -0.1071, 299.8261] and [0.2439, 0.0171, 299.8261], respectively.
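These values can be reproduced with a short pure-Python sketch of the step-S4 transformation Foc1 = Rz · Rx · Foc + T. The rotation-matrix convention below is inferred from the published numbers, and the function names are illustrative:

```python
import math

def rot_x(deg):
    """Pitch rotation matrix Rx about the X axis (angle in degrees)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(deg):
    """Azimuth rotation matrix Rz about the Z axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def to_world(p, ax=-15.0, az=15.0, t=(0.0, 0.0, 300.0)):
    """Final world coordinates Rz . Rx . p + T, per step S4."""
    v = mat_vec(rot_z(az), mat_vec(rot_x(ax), p))
    return [v[i] + t[i] for i in range(3)]

foc1 = to_world([0.0, -0.55, 0.0])    # ~ [0.1375, -0.5132, 300.1424]
```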
The world coordinate system vectors V1, V2, V3 and V4 from the multispectral camera focus to the sensor corner points are [-0.3759, 0.4757, -0.0471], [0.0877, 0.5999, -0.0471], [-0.3573, 0.4060, -0.3162] and [0.1064, 0.5303, -0.3162], respectively.
As shown in fig. 9, using the formula for the intersection of a space vector with a plane, the ground coordinates G1, G2, G3 and G4 of the four corner points corresponding to the observation area of the single multispectral image are [-2394.5, 3029.6], [558.9, 3820.9], [-338.9, 384.9] and [101.1, 502.8], respectively. The projection transformation matrix TForm from the image coordinates I1, I2, I3 and I4 of the four corner points of the multispectral image to the corresponding ground coordinates G1, G2, G3 and G4 of the observation area is then calculated by the least squares method, where I1, I2, I3 and I4 are [1,1], [1200,1], [1200,681] and [1,681] respectively, giving TForm as follows:
(the numeric 3 × 3 projection transformation matrix TForm is given as an image in the original publication)
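The space-vector-and-plane intersection used above for step S6 (the ray from the camera focus through a sensor corner, cut with the ground plane z = 0) can be sketched as follows; `ground_point` is an illustrative name and the usage values are a toy example, not the embodiment's:

```python
def ground_point(foc1, c):
    """Ground-plane (z = 0) intersection of the ray from camera focus Foc1
    through sensor corner point C, both in final world coordinates (step S6)."""
    gx = foc1[0] - foc1[2] * (c[0] - foc1[0]) / (c[2] - foc1[2])
    gy = foc1[1] - foc1[2] * (c[1] - foc1[1]) / (c[2] - foc1[2])
    return gx, gy

# toy example: focus 10 units above the origin, corner 1 unit off-axis and
# 1 unit lower; the ray reaches the ground 10 units from the origin
print(ground_point([0.0, 0.0, 10.0], [1.0, 0.0, 9.0]))   # -> (10.0, 0.0)
```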
The single multispectral image is multiplied pixel by pixel by the corresponding projection transformation matrix TForm, transformed and assigned to the base image, and the oversampled or undersampled pixel regions are filled by linear fitting, yielding the ground orthographic image of the single multispectral image shown in fig. 10.
As shown in fig. 11, the same method as described above is applied to 12 multispectral images to generate multispectral ground observation device panoramic orthographic image data, and azimuth information and scale information are attached thereto.
In addition, the method for generating a panoramic orthographic image of a multispectral ground observation device of the present invention may also form a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method for generating a panoramic orthographic image of a multispectral ground observation device.
According to the panoramic ortho-image generation method, a plurality of original observation images with different angles, which are acquired by the multispectral ground observation device, are subjected to projection transformation to generate the panoramic ortho-image under the overlooking visual angle, so that the problems that the observation range of a single multispectral original image is small and the observation angles are not uniform are solved, the generated panoramic ortho-image can visually display the observation area and the observation range, a foundation is provided for accurate positioning of an interested observation target space, and a support is provided for air, space and ground integrated remote sensing observation.
The present invention has been described in terms of the preferred embodiment, and it is not intended to be limited to the embodiment. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for generating a panoramic orthographic image of a multispectral ground observation device, characterized by comprising the following steps:
S1, respectively carrying out geometric registration and combination on image data of each channel of each multispectral image of a multispectral camera to obtain a multispectral image subjected to registration and combination;
S2, cutting the boundary area of the registered and combined multispectral image to obtain a cut multispectral image;
S3, defining a farthest observation boundary line on the cut multispectral image to obtain an observation area of each multispectral image;
S4, determining coordinates of the focus of the multispectral camera and four corner points of a multispectral camera sensor in an observation area in a world coordinate system by combining the farthest observation boundary line;
S5, respectively acquiring world coordinate system vectors from the focus of the multispectral camera to four corner points of the multispectral camera sensor in the observation area;
S6, respectively obtaining ground coordinates of four corner points of the multispectral camera sensor in each multispectral image observation area;
S7, respectively generating ground projection transformation matrixes of the multispectral images according to the ground coordinates obtained in step S6;
S8, generating a ground base map for projection embedding of the multispectral images under different visual angles of the multispectral ground observation device;
S9, according to the ground projection transformation matrix of each multispectral image obtained in step S7, projecting and embedding the multispectral images of different visual angles of the multispectral ground observation device on a ground base map to generate a panoramic orthographic image.
2. The method for generating a panoramic orthographic image of the multispectral ground observation device according to claim 1, wherein the step S1 specifically comprises:
S1.1, respectively calculating characteristic points of image data of each channel of each multispectral image;
S1.2, selecting any channel image data as a reference image, and using other channel image data as an image to be matched;
S1.3, respectively matching the image to be matched with the reference image according to the characteristic points of the image data of each channel obtained in step S1.1;
S1.4, respectively calculating a projection transformation matrix from each image to be matched to the reference image by adopting a least square method;
S1.5, respectively carrying out projection transformation on the images to be matched according to the projection transformation matrix from the images to be matched to the reference image, and merging the image data of all channels.
3. The method for generating a panoramic orthographic image of the multispectral ground observation device according to claim 2, wherein: step S2 specifically comprises cropping the registered and combined multispectral image by a fixed width from the boundary toward the center of the reference image.
4. The method for generating a panoramic orthographic image of the multispectral ground observation device according to any one of claims 1 to 3, wherein the step S4 is to obtain coordinates of the focal point of the multispectral camera and four corner points of the sensor of the multispectral camera in a world coordinate system by the following formula:
Foc1=Rz·Rx·Foc+T;
C11=Rz·Rx·C1+T;
C21=Rz·Rx·C2+T;
C31=Rz·Rx·C3+T;
C41=Rz·Rx·C4+T;
wherein Foc1 represents the coordinates of the multispectral camera focus in the world coordinate system, C11, C21, C31 and C41 represent the coordinates of the four corner points of the multispectral camera sensor in the world coordinate system respectively, Rz represents the azimuth rotation matrix, Rx represents the pitch rotation matrix, and T represents the altitude vector;
Foc represents the initial coordinates of the multispectral camera focus in the world coordinate system:
Foc=[0,-f,0]′
wherein f represents the multispectral camera sensor focal length;
C1, C2, C3 and C4 respectively represent the initial coordinates of the four corner points of the multispectral camera sensor in the world coordinate system:
C1 = [-coms_w/2, 0, coms_h/2 - top_h·coms_h/img_h]′
C2 = [coms_w/2, 0, coms_h/2 - top_h·coms_h/img_h]′
C3 = [-coms_w/2, 0, -coms_h/2]′
C4 = [coms_w/2, 0, -coms_h/2]′
the image cutting device comprises a multispectral camera sensor, a cutting device and a top-edge device, wherein coms _ w represents the width of the multispectral camera sensor, coms _ h represents the height of the multispectral camera sensor, img _ h represents the height of a cut multispectral image, and top _ h represents the number of lines between a farthest observation boundary line and the multispectral image in the horizontal direction.
5. The method for generating a panoramic orthographic image for the multispectral ground observation device according to claim 4, wherein the step S5 is to obtain vectors from the focal point of the multispectral camera to the four corner points of the sensor of the multispectral camera according to the following formula:
V1=C11-Foc1
V2=C21-Foc1
V3=C31-Foc1
V4=C41-Foc1
where V1 represents the world coordinate system vector from the multispectral camera focus to C11, V2 represents the world coordinate system vector from the multispectral camera focus to C21, V3 represents the world coordinate system vector from the multispectral camera focus to C31, and V4 represents the world coordinate system vector from the multispectral camera focus to C41.
6. The method for generating a panoramic orthographic image for the multispectral ground observation device according to claim 5, wherein the step S6 is to obtain the ground coordinates of the four corner points of each multispectral image observation area by the following formula:
G1(1)=Foc1(1)-Foc1(3)×(C11(1)-Foc1(1))/(C11(3)-Foc1(3))
G1(2)=Foc1(2)-Foc1(3)×(C11(2)-Foc1(2))/(C11(3)-Foc1(3))
G2(1)=Foc1(1)-Foc1(3)×(C21(1)-Foc1(1))/(C21(3)-Foc1(3))
G2(2)=Foc1(2)-Foc1(3)×(C21(2)-Foc1(2))/(C21(3)-Foc1(3))
G3(1)=Foc1(1)-Foc1(3)×(C31(1)-Foc1(1))/(C31(3)-Foc1(3))
G3(2)=Foc1(2)-Foc1(3)×(C31(2)-Foc1(2))/(C31(3)-Foc1(3))
G4(1)=Foc1(1)-Foc1(3)×(C41(1)-Foc1(1))/(C41(3)-Foc1(3))
G4(2)=Foc1(2)-Foc1(3)×(C41(2)-Foc1(2))/(C41(3)-Foc1(3))
wherein G1(1) and G1(2) represent the world coordinate system X-axis and Y-axis coordinates of ground coordinate G1, G2(1) and G2(2) those of ground coordinate G2, G3(1) and G3(2) those of ground coordinate G3, and G4(1) and G4(2) those of ground coordinate G4; Foc1(1), Foc1(2) and Foc1(3) represent the X-axis, Y-axis and Z-axis coordinates of Foc1, the final world coordinates of the multispectral camera focus Foc; C11(1), C11(2) and C11(3) represent the X-axis, Y-axis and Z-axis coordinates of the sensor corner point whose final world coordinates are C11, and C21, C31 and C41 are indexed in the same way for the sensor corner points whose final world coordinates are C21, C31 and C41; the ground coordinate G1 corresponds to C11, G2 corresponds to C21, G3 corresponds to C31, and G4 corresponds to C41.
7. The method for generating a panoramic orthographic image of the multispectral ground observation device according to claim 6, wherein in step S7 the projection transformation matrix TForm from the image coordinates of the four corner points of the multispectral image to the ground coordinates of the four corner points of the multispectral camera sensor in the corresponding observation area is obtained by the least square method according to the following formula:
G1=TForm·I1
G2=TForm·I2
G3=TForm·I3
G4=TForm·I4;
wherein, I1 represents the corner image coordinates of the multispectral image corresponding to G1, I2 represents the corner image coordinates of the multispectral image corresponding to G2, I3 represents the corner image coordinates of the multispectral image corresponding to G3, and I4 represents the corner image coordinates of the multispectral image corresponding to G4.
8. The method according to claim 7, wherein in step S7, the coordinates of the four corner point images of the multispectral image are respectively:
I1=[1,1]′
I2=[img_w,1]′
I3=[img_w,img_h-top_h]′
I4=[1,img_h-top_h]′
wherein img _ w represents the width of the clipped multispectral image.
9. The method for generating a panoramic orthographic image of the multispectral ground observation device according to claim 8, wherein: in step S9, before generating the panoramic orthographic image, the oversampled or undersampled pixel regions are filled by linear fitting.
10. A computer program product comprising a computer program, characterized in that: when the program is executed by a processor, the steps of the method for generating a panoramic orthographic image of the multispectral ground observation device according to any one of claims 1 to 9 are realized.
CN202211327058.8A 2022-10-26 2022-10-26 Method for generating panoramic orthographic image of multispectral ground observation device and program product Pending CN115861463A (en)

Publications (1)

Publication Number Publication Date
CN115861463A true CN115861463A (en) 2023-03-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination