CN114331834A - Panoramic image splicing method in optical simulation training system


Info

Publication number
CN114331834A
CN114331834A
Authority
CN
China
Prior art keywords
image
point
images
angle
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111523383.7A
Other languages
Chinese (zh)
Inventor
贾涛
李玲
钟坚
马蕾
李福林
葛超
董圆
王子鹏
刘洋
丁焕玉
张衍滨
朱晓兵
汪向阳
金毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pla 63861 Unit
Original Assignee
Pla 63861 Unit
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pla 63861 Unit filed Critical Pla 63861 Unit
Priority to CN202111523383.7A priority Critical patent/CN114331834A/en
Publication of CN114331834A publication Critical patent/CN114331834A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing and relates to a panoramic image splicing method in an optical simulation training system. The method comprises the following steps: shooting with a photoelectric theodolite along a preset theoretical track; preprocessing the acquired images; establishing an association mapping model between images to realize image registration; establishing a panoramic image splicing mapping table; fusing the images according to the mapping relation to form a panoramic image; and searching the panoramic image splicing mapping table by a secondary addressing method according to the angle to be displayed, realizing local display of the image. The method solves two problems of existing panoramic stitching approaches: the excessive cost and degraded image precision that arise when multiple sets of photoelectric theodolites are used to collect data, and the inability to use feature-point matching when the shot images contain no obvious features.

Description

Panoramic image splicing method in optical simulation training system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a panoramic image splicing method in an optical simulation training system.
Background
The photoelectric theodolite provides photoelectric tracking and measuring functions and is widely applied to image and attitude measurement tasks. Traditional training methods are greatly limited by time and cost, so in practice simulation training is adopted in more and more training work.
To make the simulation training process more realistic, the optical simulation training system adopts a live-action image splicing method, avoiding the unrealistic training scenes produced by mathematical modeling approaches such as 3D modeling used in other simulation training systems.
Images shot by the photoelectric theodolite have very high resolution but a very small field of view. To meet the system's requirements for the training scene, the field-of-view range of the imagery must be enlarged so that images of different fields of view can be called up during training; ideally, a 360-degree panoramic image is obtained.
In the shooting process of the photoelectric theodolite, in order to ensure the integrity of images, partial overlapping exists between the images at adjacent angles. Therefore, a panoramic image stitching technology is required.
In the present method, only a single set of photoelectric theodolite is used for shooting, and panoramic images are stitched in both the horizontal and vertical directions.
Owing to practical requirements, most working backgrounds of photoelectric theodolites contain no obvious features, so stitching cannot be achieved by matching feature points.
A splicing method for field-shaped panoramic images in an array camera (application No. 202010769524.2, grant publication No. CN111738925B) splices the images of 4 cameras arranged in a field shape using existing registration transformation parameters, solving the prior inability to process field-shaped images simultaneously. A panoramic video synchronous splicing system, method and display device (application No. 201710764095.8, application publication No. CN107396068A) likewise uses multiple cameras, arranged on the same plane in a preset order with the angles of adjacent cameras chosen so that the shot video ranges overlap, and combines the cameras to shoot panoramic video images. From the standpoint of economy and reliability, using the photoelectric theodolite as the shooting device rules out an array camera or multi-camera acquisition scheme: shooting simultaneously with multiple sets of photoelectric theodolites places high demands on the deployment of data acquisition equipment and is costly, and because theodolite shooting carries systematic error, even different devices of the same model degrade the precision of the stitched image.
A panoramic image splicing method and device (application No. 201811071685.3, application publication No. CN109389555A) extracts feature points from the two images to be spliced, matches the two resulting key-point sets to obtain matched feature-point pairs, divides the images into grids, and performs stitching by locally adaptive homography estimation and similar methods. That method applies only when the images contain obvious features; when the background is kept as clean and interference-free as possible, it cannot achieve the expected effect.
Disclosure of Invention
(I) technical problems to be solved by the invention
To address these technical problems, the invention provides a panoramic image splicing method in an optical simulation training system. Data are acquired with a single set of photoelectric theodolite together with a corresponding panoramic splicing method, avoiding the excessive cost and degraded image precision that may occur when existing panoramic stitching methods acquire data with multiple sets of photoelectric theodolites. During stitching, a mapping model is established from the position information attached to the images shot by the photoelectric theodolite, solving the problem that the shot images contain no obvious feature objects and a feature-point matching and splicing method cannot be used.
(II) the complete technical scheme provided by the invention
The invention provides a panoramic image splicing method in an optical simulation training system, which comprises the following steps:
S1, shooting with the photoelectric theodolite along a preset theoretical track to complete data acquisition;
S2, preprocessing the acquired images;
S3, establishing an association mapping model between the images to realize image registration;
S4, establishing a panoramic image splicing mapping table;
S5, fusing the images according to the mapping relation to form a panoramic image;
and S6, searching the panoramic image splicing mapping table by a secondary addressing method according to the angle to be displayed, realizing local display of the image.
Further, the theoretical trajectory preset in step S1 is angle guidance data of the electro-optic theodolite.
Further, in step S1, the position of the photoelectric theodolite is kept fixed during shooting; the preset theoretical trajectory guides the theodolite to rotate 360 degrees at a constant speed at each fixed pitch angle, in order of pitch angle from low to high, and, based on the field-of-view size of the shot images, the ranges of the shot video images at adjacent azimuth and pitch angles are guaranteed to overlap.
Further, the preprocessing in step S2 includes the following steps:
S21, judging the type of the collected image;
if the image is an infrared image, proceeding to step S22;
if the image is a visible light image, proceeding to step S24;
S22, performing image enhancement on the infrared image;
S23, performing brightness enhancement, then proceeding to step S24;
and S24, denoising the collected image by median filtering.
Further, the step S3 of establishing an association mapping model between the images includes the following steps:
S31, estimating the size of the panoramic image from the field-of-view range (horizontal α, vertical β) and the pixel size μ of the photoelectric theodolite: Width = 360/(α·μ) horizontally, Height = 90/(β·μ) vertically;
S32, allocating a Width × Height space in memory, recorded as the mosaic image M;
S33, reading the video images shot by the photoelectric theodolite frame by frame, and extracting the encoder angle, pixel size and other information attached in the file header of each image;
S34, inversely calculating, from the synthetic angle of each pixel point in an image, the position of that pixel point in the mosaic image M, and writing the position into M;
and S35, repeating steps S33–S34 until the positions in the mosaic image M of all pixel points in the video images are obtained, thereby establishing the association mapping model between the images.
Further, step S34 of inversely calculating the position of each pixel point in the mosaic image M from the synthetic angle of that pixel point in the image comprises the following steps:
S341, establishing the synthetic target angle formulas as follows, where A denotes the azimuth angle, E the pitch angle, f the focal length, (A0, E0) the angle of the image center, and (x, y) the offset of a point from the image center on the image plane:

$$\tan\Delta A = \frac{x}{f\cos E_0 - y\sin E_0}$$

$$A = A_0 + \Delta A$$

$$\tan E = \frac{\left(f\sin E_0 + y\cos E_0\right)\cos\Delta A}{f\cos E_0 - y\sin E_0}$$

S342, simplifying the above to obtain the approximate synthetic target angle formulas:

$$A \approx A_0 + \frac{x}{f\cos E_0}$$

$$E \approx E_0 + \frac{y}{f}$$
S343, according to the above formulas, if the angular offset of a certain point relative to the central point is known, the positional relationship between the two points can be calculated inversely, realizing conversion between the angle value (A, E) and the coordinate value (x, y) of any point on the image.
Further, the step S4 of establishing the panoramic image stitching mapping table includes the following steps:
S41, recording the corresponding pitch angle value during the splicing process;
S42, taking 0.5-degree intervals of pitch angle starting from 0 degrees as pitch angle zones; on each zone, with azimuth starting from 0 degrees and sampled at 3-degree intervals, recording the real image angle value and pixel point position closest in pixel distance to each sampled angle;
and S43, repeating S41 and S42 until the image mapping relation is completely established.
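For concreteness, the sketch below shows one possible Python realization of this two-level table: entries are keyed by 0.5-degree pitch zone and 3-degree azimuth grid line as in S41–S42. The nested-dict layout, the function name, and the sample record format are assumptions of the sketch, not prescribed by the method.

```python
# A minimal sketch of the stitching mapping table of S41-S43, assuming
# samples of (pitch, azimuth, px, py) produced while writing the mosaic.
# Only the 0.5-degree pitch zones and the 3-degree azimuth grid come
# from the method; the data layout is illustrative.

PITCH_ZONE_DEG = 0.5   # width of each pitch-angle zone (S42)
AZ_STEP_DEG = 3.0      # azimuth sampling interval within a zone (S42)

def build_mapping_table(samples):
    """Return {pitch_zone: {grid_azimuth: (real_azimuth, px, py)}},
    keeping for each 3-degree grid azimuth the sample whose real
    azimuth is closest to the grid line."""
    table = {}
    for pitch, az, px, py in samples:
        zone = int(pitch // PITCH_ZONE_DEG)                   # S41
        grid = (round(az / AZ_STEP_DEG) * AZ_STEP_DEG) % 360.0
        best = table.setdefault(zone, {}).get(grid)
        if best is None or abs(az - grid) < abs(best[0] - grid):
            table[zone][grid] = (az, px, py)                  # S42
    return table
```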
Further, the image fusion in step S5 includes the following steps:
S51, judging from the established mapping model whether a point on the image lies in an overlapping region;
if it lies in an overlapping region, proceeding to step S52;
if not, proceeding to step S54;
S52, calculating the weights I'1, I'2, …, I'n of the point in all images S1, S2, …, Sn to which it belongs, where n is the number of images overlapping at the point;
S53, multiplying the gray values R1, R2, …, Rn of the point in the different images by the corresponding weights I'1, I'2, …, I'n and summing, to obtain the fused gray value of the point

$$R_0 = \sum_{i=1}^{n} I'_i R_i$$

then proceeding to step S55;
S54, retaining the gray value R1 of the point as its fused gray value, R0 = R1, then proceeding to step S55;
S55, obtaining the gray value R0 corresponding to the point in the final stitched image;
And S56, processing all the points in the mapping model according to the steps of S51-S54 respectively to obtain a fused image.
Further, the step S52 of calculating the weight of the point in all the images to which the point belongs specifically includes:
S521, for the point (x, y) in each of the images to which it belongs, determining the distance between the point and the corresponding image center O1(x1, y1), O2(x2, y2), …, On(xn, yn):

$$d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$$

where n is the number of images overlapping at the point;
S522, calculating the weight Ii corresponding to the point [formula shown only as an image in the source; Ii is a function of the distance di];
S523, repeating steps S521–S522 to obtain all weights I1, I2, …, In corresponding to the point;
S524, normalizing all the weights to obtain the final weights I'1, I'2, …, I'n of the point in each overlapping image.
Further, step S6 of locally displaying the image comprises the following steps:
S61, performing a table lookup in the panoramic image splicing mapping table according to the center-point angle value of the image to be displayed; suppose the shooting angle corresponding to the center point of a certain local image to be displayed is azimuth angle A and pitch angle E;
S62, querying the mapping table by the angle value of pitch angle E to obtain the corresponding vertical-axis addressing reference, and recording its pixel ordinate as Y;
S63, in the angle value table corresponding to pitch angle E, determining the 3-degree interval to which azimuth angle A belongs at that pitch angle, the boundary azimuth values of the interval being An and An+1; comparing |A − An| with |An+1 − A|: if |A − An| is smaller, taking An as the addressing reference, otherwise taking An+1 as the addressing reference, and recording the pixel abscissa of the addressing reference as X;
S64, after the azimuth addressing reference is determined, addressing the pixel coordinates a second time: the pixel coordinate offsets ΔX and ΔY of the point relative to the addressing reference are obtained by the miss-distance calculation method, giving the pixel position (X + ΔX, Y + ΔY) of the point on the panoramic image;
and S65, taking the queried pixel point as the center of the displayed image, and selecting half of the image display size along each of the horizontal and vertical axes of the center point for display.
The technical scheme of the invention has the following beneficial effects:
according to the image splicing method, only a single set of photoelectric theodolite is used for shooting; compared with common multi-camera splicing methods, this saves equipment cost, overcomes the influence of systematic errors between different devices, and improves the imaging accuracy of the panoramic image;
a panoramic image splicing method matched to the actual shooting conditions of the photoelectric theodolite is designed, solving the problem that feature-point registration cannot be used during image registration because the scene contains few feature markers.
Drawings
FIG. 1 is a flow chart of an implementation of a panoramic image stitching method in an optical simulation training system according to the present invention;
FIG. 2 is a flow chart of image preprocessing in an optical simulation training system provided by the present invention;
FIG. 3 is a flow chart of image registration in the optical simulation training system provided by the present invention;
FIG. 4 is a flow chart of image fusion in the optical simulation training system provided by the present invention.
Detailed Description
The following describes in detail a specific embodiment of the panoramic image stitching method in the optical simulation training system according to the present invention with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides a panoramic image stitching method in an optical simulation training system, including the following steps:
S1, shooting with the photoelectric theodolite along a preset theoretical track to complete data acquisition;
S2, preprocessing the acquired images;
S3, establishing an association mapping model between the images to realize image registration;
S4, establishing a panoramic image splicing mapping table;
S5, fusing the images according to the mapping relation to form a panoramic image;
and S6, searching the panoramic image splicing mapping table by a secondary addressing method according to the angle to be displayed, realizing local display of the image.
It should be noted that the theoretical trajectory preset in step S1 is the angle guidance data of the photoelectric theodolite, compiled according to the required field-of-view range. Because the method has no real-time requirement, windless weather can be chosen for shooting, minimizing the interference of weather conditions with the image quality. During shooting, the position of the photoelectric theodolite is kept fixed, and the preset theoretical trajectory guides the theodolite to rotate 360 degrees at a constant speed at each fixed pitch angle, in order of pitch angle from low to high. Based on the field-of-view size of the shot images, the ranges of the shot video images at adjacent azimuth and pitch angles are guaranteed to overlap.
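As a rough illustration, the guidance data can be generated as a sequence of (azimuth, pitch) waypoints, one 360-degree ring per pitch angle; the overlap fraction and step sizes below are assumptions of the sketch, since the method only requires that adjacent frames overlap.

```python
# Illustrative generator of angle guidance data for S1: one full
# 360-degree azimuth sweep per fixed pitch ring, rings ordered from low
# to high pitch. Steps are chosen smaller than the field of view so that
# adjacent frames overlap; the 20% overlap is an assumed value.

def guidance_track(fov_h_deg, fov_v_deg, overlap=0.2, max_pitch=90.0):
    """Yield (azimuth_deg, pitch_deg) waypoints for the theodolite."""
    az_step = fov_h_deg * (1.0 - overlap)      # horizontal overlap
    pitch_step = fov_v_deg * (1.0 - overlap)   # vertical overlap
    pitch = 0.0
    while pitch <= max_pitch:
        az = 0.0
        while az < 360.0:                      # constant-speed sweep
            yield az, pitch
            az += az_step
        pitch += pitch_step
```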
Fig. 2 is a flowchart of a specific implementation manner of an image stitching method in an embodiment of the present invention. As shown in fig. 2, the preprocessing in step S2 of the image stitching method may include the following steps:
S21, judging the type of the collected image;
if the image is an infrared image, proceeding to step S22;
if the image is a visible light image, proceeding to step S24;
S22, performing image enhancement on the infrared image;
S23, performing brightness enhancement, then proceeding to step S24;
and S24, denoising the collected image by median filtering.
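A minimal sketch of this branch, assuming 8-bit grayscale frames and OpenCV: the method fixes median filtering for S24 but does not name the enhancement algorithms, so histogram equalization and a fixed brightness offset stand in for S22–S23 here.

```python
import cv2

# Sketch of the preprocessing flow S21-S24. Median filtering (S24) is
# specified by the method; histogram equalization and the +30 brightness
# offset are assumed stand-ins for the unspecified S22-S23 enhancement.

def preprocess(frame, is_infrared):
    """frame: 8-bit single-channel image (assumed)."""
    if is_infrared:                                  # S21: branch by type
        frame = cv2.equalizeHist(frame)              # S22: enhancement
        frame = cv2.convertScaleAbs(frame, alpha=1.0, beta=30)  # S23
    return cv2.medianBlur(frame, 3)                  # S24: denoise
```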
Fig. 3 is a flowchart of a specific implementation manner of an image stitching method in an embodiment of the present invention. As shown in fig. 3, the image registration in step S3 of the image stitching method may include the following steps:
S31, estimating the size of the panoramic image from the field-of-view range (horizontal α, vertical β) and the pixel size μ of the photoelectric theodolite: Width = 360/(α·μ), Height = 90/(β·μ);
S32, allocating a Width × Height space in memory, recorded as the mosaic image M;
S33, reading the video images shot by the photoelectric theodolite frame by frame, and extracting the encoder angle, pixel size and other information attached in the file header of each image;
S34, inversely calculating, from the synthetic angle of each pixel point in an image, the position of that pixel point in the mosaic image M, and writing the position into M;
and S35, repeating steps S33–S34 until the positions in the mosaic image M of all pixel points in the video images are obtained, thereby establishing the association mapping model between the images.
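A compact sketch of S31–S32, taking the formulas Width = 360/(α·μ) and Height = 90/(β·μ) as written (μ interpreted as angle subtended per pixel) and allocating the mosaic M; the 8-bit grayscale buffer is an assumption.

```python
import numpy as np

# Sketch of S31-S32: estimate the panorama size from the field of view
# (alpha horizontal, beta vertical) and pixel size mu, then allocate the
# mosaic M. The formulas follow the method as written.

def allocate_mosaic(alpha, beta, mu):
    width = int(360.0 / (alpha * mu))    # S31: 360-degree azimuth span
    height = int(90.0 / (beta * mu))     # S31: 90-degree pitch span
    return np.zeros((height, width), dtype=np.uint8)  # S32: mosaic M
```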
Further, in this embodiment, step S34 of inversely calculating the position of each pixel point in the mosaic image M from the synthetic angle of that pixel point in the image comprises the following steps:
S341, establishing the synthetic target angle formulas, where A denotes the azimuth angle, E the pitch angle, f the focal length, (A0, E0) the angle of the image center, and (x, y) the offset of a point from the image center on the image plane:

$$\tan\Delta A = \frac{x}{f\cos E_0 - y\sin E_0}$$

$$A = A_0 + \Delta A$$

$$\tan E = \frac{\left(f\sin E_0 + y\cos E_0\right)\cos\Delta A}{f\cos E_0 - y\sin E_0}$$

S342, simplifying the above to obtain the approximate synthetic target angle formulas:

$$A \approx A_0 + \frac{x}{f\cos E_0}$$

$$E \approx E_0 + \frac{y}{f}$$
S343, according to the above formulas, if the angular offset of a certain point relative to the central point is known, the positional relationship between the two points can be calculated inversely, realizing conversion between the angle value (A, E) and the coordinate value (x, y) of any point on the target surface.
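A small Python sketch of these formulas, with angles in radians and x, y, f in the same units (e.g. pixels): the forward direction implements the exact formulas of S341, and the inverse uses the S342 approximations; treating the inverse this way is an interpretation of S343.

```python
import math

# Sketch of the synthetic target angle formulas of S341-S343. (A0, E0)
# is the angle of the image center, (x, y) a point's offset from the
# center on the image plane, f the focal length; angles in radians.

def pixel_to_angle(A0, E0, x, y, f):
    """Exact formulas of S341: pixel offset -> target angle (A, E)."""
    dA = math.atan2(x, f * math.cos(E0) - y * math.sin(E0))
    E = math.atan((f * math.sin(E0) + y * math.cos(E0)) * math.cos(dA)
                  / (f * math.cos(E0) - y * math.sin(E0)))
    return A0 + dA, E

def angle_to_pixel(A0, E0, A, E, f):
    """Inverse conversion of S343 via the S342 approximations:
    A ~ A0 + x/(f cos E0), E ~ E0 + y/f."""
    return (A - A0) * f * math.cos(E0), (E - E0) * f
```

For instance, with f = 10000 pixels and E0 = 0, a point 100 pixels to the right of center corresponds to an azimuth offset of about 0.01 rad.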
Fig. 4 is a flowchart of a specific implementation manner of an image stitching method in an embodiment of the present invention. As shown in fig. 4, the image fusion in step S5 of the image stitching method includes the following steps:
S51, judging from the established mapping model whether a point on the image lies in an overlapping region;
if it lies in an overlapping region, proceeding to step S52;
if not, proceeding to step S54;
S52, calculating the weights I'1, I'2, …, I'n of the point in all images S1, S2, …, Sn to which it belongs, where n is the number of images overlapping at the point;
S53, multiplying the gray values R1, R2, …, Rn of the point in the different images by the corresponding weights I'1, I'2, …, I'n and summing, to obtain the fused gray value of the point

$$R_0 = \sum_{i=1}^{n} I'_i R_i$$

then proceeding to step S55;
S54, retaining the gray value R1 of the point as its fused gray value, R0 = R1, then proceeding to step S55;
S55, obtaining the gray value R0 corresponding to the point in the final stitched image;
And S56, processing all the points in the mapping model according to the steps of S51-S54 respectively to obtain a fused image.
Further, the step S52 of calculating the weight of the point in all the images to which the point belongs specifically includes:
S521, for the point (x, y) in each of the images to which it belongs, determining the distance between the point and the corresponding image center O1(x1, y1), O2(x2, y2), …, On(xn, yn):

$$d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$$

where n is the number of images overlapping at the point;
S522, calculating the weight Ii corresponding to the point [formula shown only as an image in the source; Ii is a function of the distance di];
S523, repeating steps S521–S522 to obtain all weights I1, I2, …, In corresponding to the point;
S524, normalizing all the weights to obtain the final weights I'1, I'2, …, I'n of the point in each overlapping image.
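The following sketch combines the fusion of S51–S56 with the weights of S521–S524. Because the raw weight formula of S522 appears only as an image in the source, inverse distance to each image center is used here as an assumed stand-in, followed by the normalization of S524.

```python
import math

# Sketch of weighted fusion (S51-S56) with per-point weights
# (S521-S524). The inverse-distance raw weight is an assumption; the
# method's own S522 formula is not reproduced in the text.

def fuse_point(point, grays, centers):
    """point: (x, y) of the pixel; grays: gray values R_1..R_n in the n
    overlapping images; centers: image centers O_i = (x_i, y_i)."""
    if len(grays) == 1:
        return grays[0]                              # S54: keep R_1
    x, y = point
    d = [math.hypot(x - cx, y - cy) for cx, cy in centers]   # S521
    raw = [1.0 / (di + 1e-9) for di in d]            # S522 (assumed)
    total = sum(raw)
    w = [r / total for r in raw]                     # S524: normalize
    return sum(wi * ri for wi, ri in zip(w, grays))  # S53: R_0
```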
Further, in this embodiment, step S6 of locally displaying the image may comprise the following steps:
S61, performing a table lookup in the panoramic image splicing mapping table according to the center-point angle value of the image to be displayed; suppose the shooting angle corresponding to the center point of a certain local image to be displayed is azimuth angle A and pitch angle E;
S62, querying the mapping table by the angle value of pitch angle E to obtain the corresponding vertical-axis addressing reference, and recording its pixel ordinate as Y;
S63, in the angle value table corresponding to pitch angle E, determining the 3-degree interval to which azimuth angle A belongs at that pitch angle, the minimum and maximum azimuth values of the interval being An and An+1; comparing |A − An| with |An+1 − A|: if |A − An| is smaller, taking An as the addressing reference, otherwise taking An+1 as the addressing reference, and recording the pixel abscissa of the addressing reference as X;
S64, after the azimuth addressing reference is determined, addressing the pixel coordinates a second time: the pixel coordinate offsets ΔX and ΔY of the point relative to the addressing reference are obtained by the miss-distance calculation method, giving the pixel position (X + ΔX, Y + ΔY) of the point on the panoramic image;
and S65, taking the queried pixel point as the center of the displayed image, and selecting half of the image display size along each of the horizontal and vertical axes of the center point for display.
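A sketch of the secondary-addressing lookup of S61–S65, reusing the table layout from the build_mapping_table sketch above; deriving the ΔX, ΔY offsets from the S342 approximations is an interpretation of the miss-distance computation named in S64.

```python
import math

# Sketch of the S61-S65 lookup. The table is
# {pitch_zone: {grid_azimuth: (real_azimuth, px, py)}} as in the earlier
# build_mapping_table sketch; f is the focal length in pixels.

def locate_center(table, A, E, f):
    """A, E: requested display-center angles in degrees. Returns the
    (x, y) position of the display center on the panorama."""
    zone_idx = int(E // 0.5)                     # S62: pitch zone -> Y
    ref_pitch = zone_idx * 0.5                   # zone reference pitch
    zone = table[zone_idx]
    grid = min(zone, key=lambda a: abs(A - a))   # S63: nearest boundary
    real_az, X, Y = zone[grid]                   # addressing reference
    # S64: secondary addressing via the S342 approximations (an
    # interpretation of the miss-distance computation).
    dX = math.radians(A - real_az) * f * math.cos(math.radians(E))
    dY = math.radians(E - ref_pitch) * f
    return X + dX, Y + dY
```

The displayed window then spans half the display size on each side of this center point, per S65.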
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (10)

1. A panoramic image splicing method in an optical simulation training system is characterized by comprising the following steps:
S1, shooting with the photoelectric theodolite along a preset theoretical track to complete data acquisition;
S2, preprocessing the acquired images;
S3, establishing an association mapping model between the images to realize image registration;
S4, establishing a panoramic image splicing mapping table;
S5, fusing the images according to the mapping relation to form a panoramic image;
and S6, searching the panoramic image splicing mapping table by a secondary addressing method according to the angle to be displayed, realizing local display of the image.
2. The method for stitching panoramic images in an optical simulation training system as claimed in claim 1, wherein the theoretical trajectory preset in step S1 is angle guiding data of the electro-optic theodolite.
3. The method for stitching panoramic images in an optical simulation training system as claimed in claim 2, wherein in step S1, the position of the electro-optic theodolite is kept still during shooting, the preset theoretical trajectory guides the electro-optic theodolite to rotate at a constant speed for 360 degrees at each fixed pitch angle in the order of the pitch angles from low to high, and the overlapping area of the range of the shot video images at the adjacent azimuth and pitch angle is ensured according to the size of the field of view of the shot images.
4. The method for stitching panoramic images in an optical simulation training system according to claim 1, wherein the preprocessing in the step S2 includes the following steps:
S21, judging the type of the collected image;
if the image is an infrared image, proceeding to step S22;
if the image is a visible light image, proceeding to step S24;
S22, performing image enhancement on the infrared image;
S23, performing brightness enhancement, then proceeding to step S24;
and S24, denoising the collected image by median filtering.
5. The method for stitching panoramic images in an optical simulation training system according to claim 1, wherein the step S3 of establishing a correlation mapping model between the images comprises the following steps:
S31, estimating the size of the panoramic image from the field-of-view range (horizontal α, vertical β) and the pixel size μ of the photoelectric theodolite: Width = 360/(α·μ) horizontally, Height = 90/(β·μ) vertically;
S32, allocating a Width × Height space in memory, recorded as the mosaic image M;
S33, reading the video images shot by the photoelectric theodolite frame by frame, and extracting the encoder angle, pixel size and other information attached in the file header of each image;
S34, inversely calculating, from the synthetic angle of each pixel point in an image, the position of that pixel point in the mosaic image M, and writing the position into M;
and S35, repeating steps S33–S34 until the positions in the mosaic image M of all pixel points in the video images are obtained, thereby establishing the association mapping model between the images.
6. The method of stitching panoramic images in an optical simulation training system according to claim 5, wherein the step S34 of back-calculating the position information of each pixel point in the stitched image M according to the composite angle of the pixel point in the image comprises the following steps:
S341, establishing the synthetic target angle formulas as follows, where A denotes the azimuth angle, E the pitch angle, f the focal length, (A0, E0) the angle of the image center, and (x, y) the offset of a point from the image center on the image plane:

$$\tan\Delta A = \frac{x}{f\cos E_0 - y\sin E_0}$$

$$A = A_0 + \Delta A$$

$$\tan E = \frac{\left(f\sin E_0 + y\cos E_0\right)\cos\Delta A}{f\cos E_0 - y\sin E_0}$$

S342, simplifying the above to obtain the approximate synthetic target angle formulas:

$$A \approx A_0 + \frac{x}{f\cos E_0}$$

$$E \approx E_0 + \frac{y}{f}$$
S343, according to the above formulas, if the angular offset of a certain point relative to the central point is known, the positional relationship between the two points can be calculated inversely, realizing conversion between the angle value (A, E) and the coordinate value (x, y) of any point on the image.
7. The method for stitching panoramic images in an optical simulation training system according to claim 1, wherein the step S4 of creating a panoramic image stitching mapping table comprises the following steps:
S41, recording the corresponding pitch angle value during the splicing process;
S42, taking 0.5-degree intervals of pitch angle starting from 0 degrees as pitch angle zones; on each zone, with azimuth starting from 0 degrees and sampled at 3-degree intervals, recording the real image angle value and pixel point position closest in pixel distance to each sampled angle;
and S43, repeating S41 and S42 until the image mapping relation is completely established.
8. The method for stitching the panoramic images in the optical simulation training system according to claim 1, wherein the image fusion in the step S5 comprises the following steps:
S51, judging from the established mapping model whether a point on the image lies in an overlapping region;
if it lies in an overlapping region, proceeding to step S52;
if not, proceeding to step S54;
S52, calculating the weights I'1, I'2, …, I'n of the point in all images S1, S2, …, Sn to which it belongs, where n is the number of images overlapping at the point;
S53, multiplying the gray values R1, R2, …, Rn of the point in the different images by the corresponding weights I'1, I'2, …, I'n and summing, to obtain the fused gray value of the point

$$R_0 = \sum_{i=1}^{n} I'_i R_i$$

then proceeding to step S55;
S54, retaining the gray value R1 of the point as its fused gray value, R0 = R1, then proceeding to step S55;
S55, obtaining the gray value R0 corresponding to the point in the final stitched image;
And S56, processing all the points in the mapping model according to the steps of S51-S54 respectively to obtain a fused image.
9. The method for stitching panoramic images in the optical simulation training system according to claim 6, wherein the step S52 of calculating the weight of the point in all the images to which it belongs specifically comprises:
S521, for the point (x, y) in each of the images to which it belongs, determining the distance between the point and the corresponding image center O1(x1, y1), O2(x2, y2), …, On(xn, yn):

$$d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$$

where n is the number of images overlapping at the point;
S522, calculating the weight Ii corresponding to the point [formula shown only as an image in the source; Ii is a function of the distance di];
S523, repeating steps S521–S522 to obtain all weights I1, I2, …, In corresponding to the point;
S524, normalizing all the weights to obtain the final weights I'1, I'2, …, I'n of the point in each overlapping image.
10. The method for stitching panoramic images in an optical simulation training system according to claim 1, wherein the step S6 of displaying the images locally comprises the following steps:
S61, performing a table lookup in the panoramic image splicing mapping table according to the center-point angle value of the image to be displayed; suppose the shooting angle corresponding to the center point of a certain local image to be displayed is azimuth angle A and pitch angle E;
S62, querying the mapping table by the angle value of pitch angle E to obtain the corresponding vertical-axis addressing reference, and recording its pixel ordinate as Y;
S63, in the angle value table corresponding to pitch angle E, determining the 3-degree interval to which azimuth angle A belongs at that pitch angle, the boundary azimuth values of the interval being An and An+1; comparing |A − An| with |An+1 − A|: if |A − An| is smaller, taking An as the addressing reference, otherwise taking An+1 as the addressing reference, and recording the pixel abscissa of the addressing reference as X;
S64, after the azimuth addressing reference is determined, addressing the pixel coordinates a second time: the pixel coordinate offsets ΔX and ΔY of the point relative to the addressing reference are obtained by the miss-distance calculation method, giving the pixel position (X + ΔX, Y + ΔY) of the point on the panoramic image;
and S65, taking the queried pixel point as the center of the displayed image, and selecting half of the image display size along each of the horizontal and vertical axes of the center point for display.
CN202111523383.7A 2021-12-08 2021-12-08 Panoramic image splicing method in optical simulation training system Pending CN114331834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111523383.7A CN114331834A (en) 2021-12-08 2021-12-08 Panoramic image splicing method in optical simulation training system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111523383.7A CN114331834A (en) 2021-12-08 2021-12-08 Panoramic image splicing method in optical simulation training system

Publications (1)

Publication Number Publication Date
CN114331834A true CN114331834A (en) 2022-04-12

Family

ID=81050974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111523383.7A Pending CN114331834A (en) 2021-12-08 2021-12-08 Panoramic image splicing method in optical simulation training system

Country Status (1)

Country Link
CN (1) CN114331834A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117116113A (en) * 2023-10-19 2023-11-24 中国科学院长春光学精密机械与物理研究所 Ship-borne photoelectric theodolite simulation training device
CN117116113B (en) * 2023-10-19 2024-01-02 中国科学院长春光学精密机械与物理研究所 Ship-borne photoelectric theodolite simulation training device

Similar Documents

Publication Publication Date Title
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
US5073819A (en) Computer assisted video surveying and method thereof
US5259037A (en) Automated video imagery database generation using photogrammetry
US7751651B2 (en) Processing architecture for automatic image registration
US7479982B2 (en) Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
US20100245571A1 (en) Global hawk image mosaic
DE03725555T1 (en) AIR EDUCATION SYSTEM
CN108195472B (en) Heat conduction panoramic imaging method based on track mobile robot
CN107192376B (en) Unmanned plane multiple image target positioning correction method based on interframe continuity
DE112010000812T5 (en) Methods and systems for determining angles and locating points
CN104584032A (en) Hybrid precision tracking
SG191452A1 (en) Automatic calibration method and apparatus
WO2015093147A1 (en) Multi-camera imaging system and method for combining multi-camera captured images
CN112629431A (en) Civil structure deformation monitoring method and related equipment
CN112927133B (en) Image space projection splicing method based on integrated calibration parameters
CN206611521U (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN108846084B (en) System and method for generating live-action map
CN106403900A (en) Flyer tracking and locating system and method
CN111486868B (en) Photoelectric telescope azimuth-free expansion calibration method based on ground feature
CN111915685B (en) Zoom camera calibration method
CN105182678A (en) System and method for observing space target based on multiple channel cameras
CN114331834A (en) Panoramic image splicing method in optical simulation training system
CN114659523A (en) Large-range high-precision attitude measurement method and device
CN104700367B (en) A kind of ship carries the geometric correction method of EO-1 hyperion push-broom imaging data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination