CN103096043B - Mine safety monitoring method based on parallel video splicing technology

Info

Publication number: CN103096043B
Application number: CN201310055823.XA
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN103096043A
Inventor: 方贤勇
Current Assignee: Anhui University
Original Assignee: Anhui University
Application filed by Anhui University
Priority to CN201310055823.XA
Publication of CN103096043A, application granted, publication of CN103096043B
Legal status: Expired - Fee Related

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

To address the shortcomings of existing mine safety monitoring based on the parallel video splicing technology, the invention provides a free, sparse-viewpoint parallel video splicing method that reduces both the required density of viewpoints and the technical demands on the horizontal movement direction and viewing direction of the sampling equipment. The beneficial technical effects of the invention are: it solves the problem of free-angle monitoring of scenes with large longitudinal extension and achieves large-field-of-view real-time monitoring without deformation, blind areas or overlapping areas; the layout is simple, easy to implement, and more economical and safe, and clear monitoring of the whole scene can be achieved with only a small number of cameras.

Description

Mine safety monitoring method based on parallel video splicing technology
Technical Field
The invention relates to the technical field of mine safety monitoring, in particular to safety monitoring that uses video as its means, and more particularly to a mine safety monitoring method based on parallel video splicing with sparse viewpoints.
Background
Mine safety monitoring is a key research and development technology for national production safety. In mines dozens or even hundreds of meters underground the working environment is extremely harsh, and how to carry out real-time, accurate, reliable and efficient safety monitoring and early warning over the whole production process and at key working sections is a particularly important research subject. Current underground monitoring technology takes two main forms: first, sensors that acquire the temperature, humidity or harmful-gas concentration on site; second, cameras that capture the working section in real time. The former requires a large number of sensors to be deployed on site and only feeds back parameters such as gas concentration, so the information obtained is not intuitive, and it cannot reflect the real-time situation in an underground emergency. The latter overcomes the lack of intuitiveness of sensor monitoring, but in its existing form it is merely simple monitoring by several independent cameras: to improve monitoring precision, a large number of cameras must be laid out in the narrow underground space, which is neither economical nor practical; if the monitoring points are instead arranged sparsely, there will be blind areas that no observation point can cover, overlapping areas monitored repeatedly by adjacent observation points, and image distortion when a camera at a monitoring point rotates to scan, all of which can cause misjudgment in mine safety monitoring. It is therefore necessary to develop a low-cost, easy-to-arrange and high-precision underground video monitoring technology. Video splicing, which combines several independent cameras into one large-field-of-view monitoring video by means of an algorithm, offers a solution to this need.
The main steps of an existing video splicing method are viewpoint initialization, input of source synchronous frames, synchronous frame registration, synchronous frame fusion, and display and storage of the composite frame. Specifically: first, viewpoint initialization is performed, i.e., viewpoint layout and related preparation; then each synchronous frame is registered in turn to find the mutually overlapping positions, applying a geometric transformation of the viewpoints in this step if necessary to achieve correct registration; the registered synchronous frames are then fused to generate a stitched composite frame; finally, the composite frame is displayed and stored, and processing moves on to the registration of the next synchronous frame, starting a new round of splicing. In this process, the registration and fusion of the synchronous frames are the key steps, and the core technology is how to perform the image stitching.
Referring to fig. 2, a conventional image mosaic is formed by taking a circle of images around a central point. It overcomes the small field of view of a single image and has been widely studied and applied. However, the viewpoint of conventional stitching is fixed; for long and narrow scenes such as streets and underground roadways, the farther the scene is from the camera, the more deformed the image becomes, so the overall appearance cannot be displayed.
Referring to fig. 3, an improved scheme that overcomes these disadvantages processes long and narrow scenes with parallel mosaicing, a slice-based stitching technique that photographs the scene from several parallel viewpoints and stitches a slice taken at each viewpoint position into one large-field-of-view image. Parallel mosaicing yields wide-angle images and overcomes the limited field of view of conventional single-viewpoint stitching; it is particularly suitable for long, deep scenes that can only be covered completely by moving the viewpoint, such as underground tunnels and streets, and therefore has important practical value.
Existing parallel mosaicing methods mainly comprise route panoramas and parallel-perspective stereo mosaics. Both have two disadvantages: 1) both schemes require the camera to move continuously while shooting the scene so that the viewpoints are dense enough to extract sufficient slices for seamless stitching, which demands either a dense camera layout or special equipment to keep the camera moving continuously; 2) during image acquisition, every viewpoint must face the scene squarely (i.e., the viewing direction must be perpendicular to the horizontal movement direction of the camera) to approximate an orthogonal projection, or the viewpoint deflection angle must be kept small, so that the extracted slices deform as little as possible, which in turn requires a large amount of tedious adjustment. In other words, dense viewpoints increase the shooting workload, and keeping every viewpoint direction perpendicular to the horizontal movement direction increases the complexity and instability of the work, reduces the flexibility of stitching, and, because the viewpoint rotation angle cannot be large, limits the range of the stitched scene. In summary, existing parallel video splicing methods are not suitable for safety monitoring of long, narrow roadways in complex underground environments.
Disclosure of Invention
To address the shortcomings of existing mine safety monitoring based on the parallel video splicing technology, the invention provides a free, sparse-viewpoint parallel video splicing method that reduces both the required density of viewpoints and the technical demands on the horizontal movement direction and viewing direction of the sampling equipment. The specific method comprises the following steps:
the mine safety monitoring method based on the parallel video splicing technology is carried out according to the following steps:
(1) arranging sampling points: let the depth of the monitored scene surface requiring safety monitoring be $L_V$, the distance between the cameras and the monitored scene surface be $L_D$, and the distance between two adjacent cameras be $L_C$. The distance $L_D$ between a camera and the monitored scene surface is 1.0 to 10.0 times the depth $L_V$ of the monitored scene surface, and the distance $L_C$ between two adjacent cameras is 1.0 to 3.0 times the depth $L_V$ of the monitored scene surface. All cameras are installed at the same height and keep the same rotation angle when rotating; the adjacent viewpoint images formed when two adjacent cameras each deflect within a range of ±45° must overlap, and the line connecting the cameras that monitor the same scene surface must be parallel to that scene surface. All cameras are connected through data lines to a computer responsible for image processing, which uniformly processes the synchronous frames obtained from each camera;
(2) initializing the camera focal length: each camera uses the same frame resolution, and the focal length $f$ is computed from the frame resolution, i.e., with frame width $w$ and height $h$ the focal length is $f = \sqrt{w^2 + h^2}$; (3) initializing the overlapping position: all cameras face the monitored scene surface squarely and capture images, giving the initial synchronization frame $F_0^n$ of each camera, where $n$ denotes the $n$-th camera; the horizontal length $L_{(n,n+1)}$ of the overlapping region between the initial synchronization frames of two adjacent cameras is obtained by a semi-transparent overlay method; (4) debugging the image brightness: Gamma correction is applied to the initial synchronization frame $F_0^n$ of each camera with the formula $I_{out} = a I_{in}^{\gamma}$, where $I_{in}$ is an original pixel of $F_0^n$, $I_{out}$ is the corrected pixel, the parameter $a$ is 1, and the parameter $\gamma$ ranges from 1.5 to 3.0;
(5) recording the camera deflection angle: all cameras are set to the same deflection angle $\theta_s$, which is the angle between the actual facing direction of a camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely; the deflection angle $\theta_s$ corresponds to the real-time synchronization frame $F_s^n$ of each camera, where $F_s^n$ is the image monitored by the $n$-th camera at the $s$-th second after monitoring begins; (6) performing the synchronous frame virtual transformation: the real-time synchronization frame $F_s^n$ is virtually transformed to obtain the virtual synchronization frame $F'^n_s$, where the virtual transformation formula is
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{f}{-i\sin\theta_s + \left(\frac{w}{2}\sin\theta_s + f\cos\theta_s\right)} \begin{bmatrix} \cos\theta_s + \sin\theta_s\tan(\theta_s-\alpha) & 0 & -\dfrac{w\left(\cos\theta_s + \sin\theta_s\tan(\theta_s-\alpha)\right)}{2} + f\left(\sin\theta_s - \cos\theta_s\tan(\theta_s-\alpha)\right) \\[4pt] \dfrac{fh\sin\theta_s}{2} & 1 & -\dfrac{wfh\sin\theta_s}{4} - \dfrac{h}{2} - \dfrac{f^2 h\cos\theta_s}{2} \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix};$$
where $f$ is the camera focal length calculated in step (2); $\theta_s$ is the angle between the actual facing direction of the camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely; $\alpha$ is one half of the camera's viewing angle; the point $P(i, j)$ is a point on the real-time synchronization frame $F_s^n$, with $i$ the horizontal coordinate and $j$ the vertical coordinate; $w$ and $h$ are respectively the width and height of $F_s^n$; and the point $P'(i', j')$ is the point on the virtual synchronization frame $F'^n_s$ corresponding to $P(i, j)$;
(7) adjacent virtual synchronization frame correction and registration: for the virtual synchronization frames $F'^n_s$ and $F'^{n+1}_s$ acquired in turn from two adjacent cameras, a correction is applied to calculate the horizontal length $L'^s_{(n,n+1)}$ of the overlapping region between the virtual synchronization frames, and this length is then used to register $F'^n_s$ with $F'^{n+1}_s$. The correction between two adjacent virtual synchronization frames computes the horizontal length of the virtual overlapping region by the formula
$$L'^s_{(n,n+1)} = f\tan(\alpha+\theta_s) + f\tan(\alpha-\theta_s) - 2f\tan\alpha + L_{(n,n+1)},$$
where $f$ is the focal length calculated in step (2), $L_{(n,n+1)}$ is the horizontal length of the overlapping region between the initial adjacent synchronization frames from step (3), $\theta_s$ is the angle between the actual facing direction of the camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely, and $\alpha$ is one half of the camera's viewing angle. The new offset obtained after correction, i.e., the horizontal length $L'^s_{(n,n+1)}$ of the virtual overlapping region, is used to register all adjacent virtual synchronization frames $F'^n_s$ and $F'^{n+1}_s$ under the current deflection angle $\theta_s$;
(8) generating a composite frame: a weighted-average fusion method is used to create a smooth transition and generate the composite frame $F_s$ of the whole monitored scene at the $s$-th second, namely: the overlapping region of two adjacent virtual synchronization frames is weighted and fused according to its size, and the fused overlapping region is copied to the corresponding position of the newly stitched image, ensuring that the final composite frame is visually consistent and free of obvious seams; this completes the processing of the images of the monitored scene surface at the $s$-th second; (9) real-time video synthesis and output: steps (4) to (8) are repeated, i.e., the real-time synchronization frame $F_s^n$ of each camera at each moment is processed to obtain a composite frame $F_s$ for each moment, and the composite frames $F_s$ are played one by one and written into a video file, yielding a spliced video generated in real time. In step (5), the deflection angles $\theta_s$ of all cameras can be adjusted simultaneously and uniformly at any moment during real-time rotating monitoring; when monitoring at a fixed angle, the deflection angle is adjusted only when that fixed angle is monitored for the first time. In step (7), the correction calculation of the virtual synchronization frame offset is performed only when the first frames of adjacent cameras are spliced under the current deflection angle.
The invention has the following beneficial technical effects:
Addressing the requirements of mine production safety and building on the advantages of parallel mosaicing, the invention provides a new parallel video splicing method for mine safety. The method achieves parallel video splicing with free rotation of the viewing angle using only a small number of cameras arranged at the site to be monitored. Taking as an example a coal mining face whose depth variation is less than 0.5 m and whose transverse span is 1.5 m, the method needs only two cameras to achieve blind-area-free, large-field-of-view monitoring of the scene at any angle between -45° and +45°, and it is convenient to operate and control. By contrast, the traditional monitoring method with independent cameras cannot avoid blind areas, repeated monitoring and image distortion when the cameras scan; with the existing parallel mosaicing methods, a scene of this size could only be covered with a fixed viewpoint direction parallel to the scene surface by moving the camera over the 1.5 meters while shooting at about 24 frames per second, i.e., moving the viewpoint 24 times per second, which is impractical in a narrow and complicated underground roadway; and if the camera is not moved, then to overcome the fact that only a narrow strip in the middle of each camera image can be extracted for head-on shooting, more than ten cameras would have to be concentrated in the 1.5-meter interval to achieve the same technical effect as the invention, which is equally unrealistic.
The innovation points of the invention can be further summarized as follows:
1) addressing the long and narrow underground working space, the invention is the first to propose a parallel video splicing method that generates a monitoring video with a long field of view, solving the problem of free-angle monitoring of scenes with large longitudinal extension and realizing large-field-of-view real-time monitoring without deformation, blind areas or overlapping areas;
2) the invention is the first to propose a sparse camera layout and free-angle stitching; compared with previous parallel mosaicing methods based on a fixed viewpoint direction and dense shooting, it saves cost, is flexible to control and is easier to lay out;
3) compared with existing underground monitoring technology that requires a large number of cameras, clear monitoring of the whole scene is achieved with only a small number of cameras, so the layout is simple, the implementation is easy, and the method is more economical and safer;
4) compared with current underground monitoring technology based on sensors, the invention effectively extends real-time visual monitoring underground, so that the user can obtain complete and clear roadway site information in time;
5) the method adopts a two-stage strategy, virtual transformation followed by synchronous frame correction and registration, to realize image stitching under sparse viewpoints and free viewpoint rotation; the invention can further be applied to the monitoring, roaming and display of other scenes with long depth, and therefore has a broad application prospect.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
Fig. 2 is a schematic diagram of conventional image stitching.
Fig. 3 is a schematic diagram of a conventional parallel tiling.
Fig. 4 is a schematic top view of the arrangement position of the cameras in the present invention.
Fig. 5 is a schematic diagram of the principle of virtual transformation.
Fig. 6 is a schematic top view of the virtual transformation principle of fig. 5.
Fig. 7 is a schematic diagram of the correction of adjacent virtual sync frames.
Fig. 8 is a schematic diagram of the weighted average fusion principle.
Fig. 9 is an example of the original synchronization frames respectively acquired by deflecting two cameras 15 ° to the right in the first embodiment.
Fig. 10 is a virtual sync frame obtained by virtually transforming the original sync frame in the first embodiment.
Fig. 11 is the final stitched composite frame of the first embodiment.
Fig. 12 is an example of original synchronization frames for a second example, obtained when the optical axes of the cameras of embodiment 1 are horizontally rotated by 30° to the right.
Fig. 13 is the composite frame of fig. 12 after final splicing is completed.
Fig. 14 is the final video stitching result for the second embodiment using six cameras according to the method of the present invention.
Detailed Description
The specific method of the present invention will now be described in detail with reference to the accompanying drawings.
Example 1
The mine safety monitoring method based on the parallel video splicing technology specifically comprises the following steps:
(1) arranging sampling points: referring to fig. 1 and 4, in this embodiment the depth of the monitored scene surface requiring safety monitoring is $L_V = 1$ m, the distance between the cameras and the monitored scene surface is $L_D = 6$ m, and the distance between two adjacent cameras is $L_C = 1.5$ m; that is, $L_D$ is 6 times the depth $L_V$ of the monitored scene surface and $L_C$ is 1.5 times $L_V$. Two Logitech C170 cameras are used for image capture; this camera model has a resolution of 640 × 480. All cameras are installed at the same height and keep the same rotation angle when rotating; the adjacent viewpoint images formed when the two adjacent cameras each deflect within ±45° overlap, i.e., the range of monitored viewing angles is not less than 90°, and the line connecting the cameras monitoring the same scene surface is parallel to that scene surface. All cameras are connected through data lines to a computer responsible for image processing, which uniformly processes the synchronous frames obtained from each camera;
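As a quick illustration of the layout constraints of step (1), the minimal Python sketch below checks a proposed layout against the ranges given in the disclosure ($L_D$ between 1.0 and 10.0 times $L_V$, $L_C$ between 1.0 and 3.0 times $L_V$); the function name is illustrative only and the values are those of embodiment 1.

```python
# Minimal sketch: check a camera layout against the ranges from step (1).

def check_layout(l_v: float, l_d: float, l_c: float) -> bool:
    """Return True if the layout satisfies the ratios given in step (1)."""
    ok_depth = 1.0 * l_v <= l_d <= 10.0 * l_v    # camera-to-scene distance: 1.0-10.0 x L_V
    ok_spacing = 1.0 * l_v <= l_c <= 3.0 * l_v   # adjacent-camera spacing: 1.0-3.0 x L_V
    return ok_depth and ok_spacing

if __name__ == "__main__":
    # Embodiment 1: L_V = 1 m, L_D = 6 m, L_C = 1.5 m.
    print(check_layout(l_v=1.0, l_d=6.0, l_c=1.5))   # True
```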
(2) initializing the camera focal length: each camera uses the same frame resolution, and the focal length is computed from the frame resolution, i.e., with frame width $w$ and height $h$ the camera focal length is
$$f = \sqrt{w^2 + h^2} \qquad \text{(i)}$$
In this embodiment, formula (i) gives $f = 800$; (3) initializing the overlapping position: all cameras face the monitored scene surface squarely and capture images, and the synchronization frame of each camera is taken as its initial synchronization frame $F_0^n$, where $n$ denotes the $n$-th camera. The horizontal length $L_{(n,n+1)}$ of the overlapping region between the initial synchronization frames of two adjacent cameras is obtained by a semi-transparent overlay method: the initial synchronization frames $F_0^n$ and $F_0^{n+1}$ are acquired from the two adjacent cameras, rendered semi-transparently, and overlaid at the same position; the repeated image area is the overlapping region of the initial synchronization frames. Since all cameras are on the same horizontal plane, the vertical offset of this overlapping region is 0 and only the horizontal offset needs to be recorded; in this embodiment the horizontal length of the initial synchronization frame overlapping region is $L_{(1,2)} = 507$;
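A minimal sketch of steps (2) and (3) follows, assuming 8-bit grayscale frames held as NumPy arrays; the 50/50 blend mimics the semi-transparent overlay used here, from which the operator reads off the horizontal overlap length by eye, and the helper names are illustrative only.

```python
import numpy as np

def focal_length(w: int, h: int) -> float:
    """Step (2): focal length simulated from the frame resolution, f = sqrt(w^2 + h^2)."""
    return float(np.hypot(w, h))          # 640 x 480 -> 800.0, matching formula (i)

def semi_transparent_overlay(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Step (3): 50/50 blend of two initial synchronization frames.

    Displaying this overlay lets the operator see the repeated image area and
    record its horizontal length L_(n,n+1) (507 pixels in embodiment 1).
    """
    blend = 0.5 * frame_a.astype(np.float32) + 0.5 * frame_b.astype(np.float32)
    return blend.astype(np.uint8)

if __name__ == "__main__":
    print(focal_length(640, 480))                          # 800.0
    a = np.random.randint(0, 256, (480, 640), np.uint8)    # stand-ins for F_0^1 and F_0^2
    b = np.random.randint(0, 256, (480, 640), np.uint8)
    print(semi_transparent_overlay(a, b).shape)            # (480, 640)
```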
(4) debugging the image brightness: performing Gamma correction on the initial synchronization frame acquired from each camera, wherein the formula is as follows:
$$I_{out} = a I_{in}^{\gamma} \qquad \text{(ii)}$$
where $I_{in}$ is the original pixel intensity of the initial synchronization frame and $I_{out}$ is the corrected pixel intensity; in this embodiment the parameter $a$ is 1 and the parameter $\gamma$ is 2.1, i.e., a standard brightness-histogram statistic is used in the image processing to ensure that the images from the two adjacent cameras are displayed at the same brightness level;
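The Gamma correction of step (4) can be sketched as below; it assumes 8-bit pixels normalized to [0, 1] before the power law $I_{out} = a I_{in}^{\gamma}$ is applied, which is a common convention but is not spelled out in the text.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, gamma: float = 2.1, a: float = 1.0) -> np.ndarray:
    """Apply I_out = a * I_in ** gamma on intensities normalized to [0, 1] (assumption)."""
    norm = frame.astype(np.float32) / 255.0
    out = a * np.power(norm, gamma)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640), np.uint8)   # stand-in for F_0^n
    corrected = gamma_correct(frame, gamma=2.1, a=1.0)         # embodiment 1: a = 1, gamma = 2.1
    print(corrected.dtype, corrected.shape)
```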
(5) recording the camera deflection angle: all cameras are started to monitor the scene surface, and all cameras are set to the same deflection angle $\theta_s$, which is the angle between the actual facing direction of a camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely; the deflection angle $\theta_s$ corresponds to the real-time synchronization frame $F_s^n$ of each camera, where $F_s^n$ is the image monitored by the $n$-th camera at the $s$-th second after monitoring begins. The monitoring state of the cameras in this step can follow two modes. In the first mode, all cameras are rotated to a uniform deflection angle $\theta_s$ and the viewing angle is then left unchanged, i.e., camera images are acquired at a fixed angle, and only that one deflection angle $\theta_s$ and the images returned by the corresponding cameras need to be recorded. In the second mode, all cameras rotate periodically at an angular velocity throughout monitoring while acquiring monitoring images in real time, i.e., the rotation angle of every camera changes uniformly at each acquisition moment, and the deflection angle $\theta_s$ corresponding to each image returned by the cameras must be recorded in real time. In this embodiment the two cameras rotate at a fixed angular velocity from startup, and their deflection angle at the 5th second is $\theta_5 = 15°$; the images acquired by the two cameras at that moment are the real-time synchronization frames $F_5^1$ and $F_5^2$. Referring to fig. 9, the original images acquired by the two cameras deflected 15° to the right clearly show the trapezoidal deformation caused by the deflection angle; because of this deformation, $F_5^1$ and $F_5^2$ do not image the scene at the same scale, so they cannot be spliced directly; (6) performing the synchronous frame virtual transformation: the real-time synchronization frames $F_5^1$ and $F_5^2$ obtained from the two cameras in step (5) are each processed according to the virtual transformation formula, and the images deformed by the deflection angle $\theta_s$ are restored to the appearance of real-time synchronization frames taken at $\theta_s = 0$; the virtual transformation formula is:
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{f}{-i\sin\theta_s + \left(\frac{w}{2}\sin\theta_s + f\cos\theta_s\right)} \begin{bmatrix} \cos\theta_s + \sin\theta_s\tan(\theta_s-\alpha) & 0 & -\dfrac{w\left(\cos\theta_s + \sin\theta_s\tan(\theta_s-\alpha)\right)}{2} + f\left(\sin\theta_s - \cos\theta_s\tan(\theta_s-\alpha)\right) \\[4pt] \dfrac{fh\sin\theta_s}{2} & 1 & -\dfrac{wfh\sin\theta_s}{4} - \dfrac{h}{2} - \dfrac{f^2 h\cos\theta_s}{2} \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \qquad \text{(iii)}$$
where $f$ is the camera focal length calculated in step (2); $\theta_s$ is the angle between the actual facing direction of the camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely; $\alpha$ is one half of the camera's viewing angle; the point $P(i, j)$ is a point on the real-time synchronization frame $F_s^n$, with $i$ the horizontal coordinate and $j$ the vertical coordinate; $w$ and $h$ are respectively the width and height of $F_s^n$; and the point $P'(i', j')$ is the point on the virtual synchronization frame $F'^n_s$ corresponding to $P(i, j)$. In this example, the real-time synchronization frames $F_5^1$ and $F_5^2$ are virtually transformed into the virtual synchronization frames $F'^1_5$ and $F'^2_5$. Fig. 10 shows the result of applying the virtual transformation to fig. 9; the black areas in fig. 10 are the regions left without information after the deflection. At this point the trapezoidal deformation present in the original fig. 9 has been overcome, i.e., the virtual synchronization frames $F'^1_5$ and $F'^2_5$ can be spliced.
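For the second monitoring mode (continuous rotation), the deflection angle must be recorded for every frame. The small sketch below assumes, as suggested by this embodiment, that rotation starts from 0° at camera startup and proceeds at a constant angular velocity, so that $\theta_5 = 15°$ corresponds to 3° per second; that rate is inferred here, not stated explicitly in the text.

```python
def deflection_angle(s: float, omega_deg_per_s: float = 3.0) -> float:
    """Deflection angle theta_s (degrees) s seconds after monitoring begins,
    assuming rotation at a constant angular velocity starting from 0 degrees."""
    return omega_deg_per_s * s

if __name__ == "__main__":
    # theta_5 = 15 degrees, as used for the real-time synchronization frames F_5^1 and F_5^2.
    print(deflection_angle(5))   # 15.0
    # In fixed-angle mode (mode 1), theta_s is simply a constant recorded once.
```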
The principle of virtual transformation in the present invention is now explained with a single camera:
FIG. 5 is a schematic diagram of the virtual transformation for a single camera, and FIG. 6 is a top view of FIG. 5. As shown in fig. 5, the point O is the optical center of the camera; the camera coordinate system before transformation is Oxyz, with the optical axis as the z-axis, the plane xOz horizontal, and the y-axis pointing vertically upward. The deflection angle between the monitoring direction of the camera before transformation and its direction when facing the monitored scene is $\theta$ (i.e., $\theta$ equals the deflection angle $\theta_s$); half of the horizontal viewing angle is $\alpha$, and the imaging plane is I. C is the intersection of the camera optical axis with the imaging plane I and is taken as the center of I; correspondingly, the length of the segment OC is the camera focal length $f$. As can be seen from fig. 5, for the region indicated by the segment AB, which lies on the imaging plane I in the plane through C parallel to the xOz plane, the portion of the monitored scene surface corresponding to the segment AC is narrower than that corresponding to the segment CB when the deflection angle $\theta$ is not zero. It follows that the image of the scene on the imaging plane I shows a trapezoidal deformation, namely: 1) the image of scene content close to the imaging plane I is smaller, while the image of scene content far from the imaging plane I is larger; and 2) the larger the deflection angle, the more obvious the trapezoidal deformation. Under such radial imaging, adjacent cameras have no consistent common area, so their images cannot be stitched together. The cause of this phenomenon is that the camera does not face the scene squarely, so the imaging plane I is not parallel to the scene surface, and the same scene therefore undergoes perspective transformations of different degrees in different cameras. Consequently, although increasing the camera deflection angle can enlarge the visible range, the trapezoidal deformation problem must first be solved;
The idea of the solution is to compensate the image obtained after deflection and restore it to the virtual imaging result that would be obtained when facing the scene squarely; since images taken facing the scene undergo no inconsistent perspective transformation, i.e., no trapezoidal deformation, they can be stitched. Concretely, a virtual camera with the same focal length but facing the scene squarely is constructed from the focal length of the current camera. The field of view of this virtual camera contains the field of view of the original camera, so its imaging plane naturally contains the imaging plane of the original camera; but the virtual imaging plane now faces the scene, i.e., there is no trapezoidal deformation, and adjacent cameras can therefore be stitched. As shown in fig. 5 and 6, the coordinate system Ox'yz' of the virtual camera is obtained by horizontally rotating the original camera coordinate system back by the angle $\theta$; half of its viewing angle becomes $(\alpha + \theta)$, and the corresponding virtual imaging plane is I'. Corresponding to the intersection C on the imaging plane I, the intersection of the virtual imaging plane I' with the optical axis is C', and the length of the segment OC' equals the focal length $f$. Corresponding to the segment AB through C on the imaging plane I, a segment A'B' through C' is obtained on the virtual imaging plane I'. Following this idea, the goal is to compute, for the image on the imaging plane I, the corresponding image on the virtual imaging plane I', thereby obtaining the virtual synchronization frame after virtual transformation; for example, for the pixels on the segment AB of the imaging plane I, the goal is to solve for the pixels on the segment A'B' of the virtual imaging plane I'. How the virtual synchronization frame on the virtual imaging plane I' is calculated is explained in detail below.
Assume the origin of pixel coordinates is at the lower-left corner of the imaging plane, and let a point of the scene correspond to the point $P(i, j)$ on the imaging plane I and to the point $P'(i', j')$ on the virtual imaging plane I'. The coordinates $(x_p, y_p, z_p)$ of the point P in the original camera coordinate system Oxyz are expressed as:
$$\begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = \begin{bmatrix} 1 & 0 & -\tfrac{w}{2} \\ 0 & 1 & -\tfrac{h}{2} \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \qquad \text{(iv)}$$
The coordinates $(x_{\ddot{p}}, y_{\ddot{p}}, z_{\ddot{p}})$ of the point P in the virtual camera coordinate system can be calculated from the rotational relationship between Oxyz and Ox'yz':
$$\begin{bmatrix} x_{\ddot{p}} \\ y_{\ddot{p}} \\ z_{\ddot{p}} \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} \qquad \text{(v)}$$
Since the point P' is the intersection of the straight line OP with the virtual imaging plane I', the coordinates $(x'_{p'}, y'_{p'}, z'_{p'})$ of P' in the coordinate system Ox'yz' can be derived as:
$$\begin{bmatrix} x'_{p'} \\ y'_{p'} \\ z'_{p'} \end{bmatrix} = \begin{bmatrix} x_{\ddot{p}} \cdot \dfrac{f}{z_{\ddot{p}}} \\[4pt] y_{\ddot{p}} \cdot \dfrac{f}{z_{\ddot{p}}} \\[4pt] f \end{bmatrix} = \frac{f}{z_{\ddot{p}}} \begin{bmatrix} x_{\ddot{p}} \\ y_{\ddot{p}} \\ z_{\ddot{p}} \end{bmatrix} \qquad \text{(vi)}$$
Further, the coordinates (I ', j') of the point P 'on the virtual imaging plane I' are expressed as:
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \begin{bmatrix} x'_{p'} - f\tan(\theta-\alpha) \\ y'_{p'} - \tfrac{1}{2}h \end{bmatrix} = \begin{bmatrix} 1 & 0 & -f\tan(\theta-\alpha) \\ 0 & 1 & -\tfrac{h}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{z'_{p'}} \end{bmatrix} \begin{bmatrix} x'_{p'} \\ y'_{p'} \\ z'_{p'} \end{bmatrix} \qquad \text{(vii)}$$
Since $z'_{p'} = f$ is known from formula (vi), formula (vii) can be rewritten as
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \begin{bmatrix} 1 & 0 & -f\tan(\theta-\alpha) \\ 0 & 1 & -\tfrac{h}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{f} \end{bmatrix} \begin{bmatrix} x'_{p'} \\ y'_{p'} \\ z'_{p'} \end{bmatrix} = \begin{bmatrix} 1 & 0 & -\tan(\theta-\alpha) \\ 0 & 1 & -\tfrac{fh}{2} \end{bmatrix} \begin{bmatrix} x'_{p'} \\ y'_{p'} \\ z'_{p'} \end{bmatrix} \qquad \text{(viii)}$$
Considering that P' corresponds to a point P on I, from equations (iv), (v) and (vi), equation (viii) can be written in the form of P point coordinates (I, j) as follows:
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{f}{z_{\ddot{p}}} \begin{bmatrix} 1 & 0 & -\tan(\theta-\alpha) \\ 0 & 1 & -\tfrac{fh}{2} \end{bmatrix} \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & -\tfrac{w}{2} \\ 0 & 1 & -\tfrac{h}{2} \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \qquad \text{(ix)}$$
according to the formulas (iv) and (v), the above formula can be rewritten as
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{f}{-i\sin\theta + \left(\frac{w}{2}\sin\theta + f\cos\theta\right)} \begin{bmatrix} \cos\theta + \sin\theta\tan(\theta-\alpha) & 0 & -\dfrac{w\left(\cos\theta + \sin\theta\tan(\theta-\alpha)\right)}{2} + f\left(\sin\theta - \cos\theta\tan(\theta-\alpha)\right) \\[4pt] \dfrac{fh\sin\theta}{2} & 1 & -\dfrac{wfh\sin\theta}{4} - \dfrac{h}{2} - \dfrac{f^2 h\cos\theta}{2} \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix} \qquad \text{(x)}$$
Thus, the position in the virtual imaging plane I' of each pixel on the original imaging plane I can be found using formula (x), and the virtually transformed image can be calculated.
In the present embodiment, substituting θ = 15°, α = 21.8014°, f = 800, w = 640 and h = 480 into the virtual transformation formula yields the simplified formula for a rotation angle of 15°:
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{800}{-0.2588\,i + 855.5628} \begin{bmatrix} 0.9351 & 0 & -0.0000 \\ 49693.2567 & 1 & -164268289.0491 \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix}$$
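The sketch below is a direct transcription of formula (x) into Python; for the parameters of embodiment 1 it should reproduce the simplified matrix above up to rounding. The function names are illustrative, and generating the full virtual synchronization frame would additionally require resampling of the image (e.g., inverse mapping and interpolation), which is not shown here.

```python
import math

def virtual_transform_matrix(theta_deg, alpha_deg, f, w, h):
    """2x3 matrix of formula (x); [i', j'] equals the scale factor below
    times this matrix times [i, j, 1]."""
    t = math.radians(theta_deg)
    a = math.radians(alpha_deg)
    c, s = math.cos(t), math.sin(t)
    k = c + s * math.tan(t - a)                 # cos(theta) + sin(theta) * tan(theta - alpha)
    row0 = [k, 0.0, -w * k / 2.0 + f * (s - c * math.tan(t - a))]
    row1 = [f * h * s / 2.0, 1.0, -w * f * h * s / 4.0 - h / 2.0 - f * f * h * c / 2.0]
    return [row0, row1]

def scale_factor(i, theta_deg, f, w):
    """Prefactor f / (-i*sin(theta) + (w/2*sin(theta) + f*cos(theta))) of formula (x)."""
    t = math.radians(theta_deg)
    return f / (-i * math.sin(t) + (w / 2.0 * math.sin(t) + f * math.cos(t)))

if __name__ == "__main__":
    # Embodiment 1: theta = 15 deg, alpha = 21.8014 deg, f = 800, w = 640, h = 480.
    M = virtual_transform_matrix(15.0, 21.8014, 800.0, 640, 480)
    print(M)   # approximately [[0.9351, 0, -0.0000], [49693.26, 1, -164268290]]
    print(scale_factor(0, 15.0, 800.0, 640))   # approximately 800 / 855.5628
```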
(7) adjacent virtual synchronization frame correction and registration: for the virtual synchronization frames $F'^n_s$ and $F'^{n+1}_s$ acquired in turn from two adjacent cameras, a correction is applied to calculate the horizontal length $L'^s_{(n,n+1)}$ of the overlapping region between the virtual synchronization frames, and this length is then used to register $F'^n_s$ with $F'^{n+1}_s$. The correction between two adjacent virtual synchronization frames computes the horizontal length of the virtual overlapping region by the formula
$$L'^s_{(n,n+1)} = f\tan(\alpha+\theta_s) + f\tan(\alpha-\theta_s) - 2f\tan\alpha + L_{(n,n+1)} \qquad \text{(xi)}$$
where $f$ is the focal length calculated in step (2), $L_{(n,n+1)}$ is the horizontal length of the overlapping region between the initial adjacent synchronization frames from step (3), $\theta_s$ is the angle between the actual facing direction of the camera at the $s$-th second after monitoring begins and its direction when facing the scene squarely, and $\alpha$ is one half of the camera's viewing angle. The new offset obtained after correction, i.e., the horizontal length $L'^s_{(n,n+1)}$ of the virtual overlapping region, is used to register all adjacent virtual synchronization frames $F'^n_s$ and $F'^{n+1}_s$ under the current deflection angle $\theta_s$;
According to the principle of the virtual transformation, the virtual imaging plane is parallel to the line of viewpoints and its distance from the viewpoint optical center is the focal length $f$. When the viewpoint is not rotated (i.e., the viewpoint optical axis is perpendicular to the scene surface), the imaging plane is also at distance $f$ from the optical center, so the imaging plane I of the unrotated viewpoint lies in the same plane as the virtual imaging plane I' obtained after the virtual transformation. The offset $L_{(n,n+1)}$ of the overlapping region of adjacent viewpoint images when the viewpoint optical axes are perpendicular to the scene surface has already been obtained in the initialization of step (3); this offset can be used to calculate the horizontal length $L'^s_{(n,n+1)}$ of the virtually transformed overlapping region between adjacent cameras when the cameras are deflected, thereby realizing the correction of the offset between adjacent virtual synchronization frames under the current deflection angle.
Referring to fig. 7, the principle of the synchronous frame correction is illustrated with a top view of two adjacent cameras at a deflection angle $\theta$. The point $O_1$ is the optical center of the first camera and the point $O_2$ is the optical center of the second camera; both cameras have focal length $f$ and are both rotated by the angle $\theta$. $C_1$ and $C_2$ denote the centers of the two imaging planes, and the rays $O_1C_1$ and $O_2C_2$ are the optical axes of the first and second cameras respectively. $A'_1B'_1$ and $A'_2B'_2$ are the virtual synchronization frame imaging planes obtained after the virtual transformation of step (6) is applied to the two cameras deflected by $\theta$; the overlapping region of the two virtual imaging planes is $A'_2B'_1$, and its length is the quantity to be calculated by the adjacent virtual synchronization frame correction. $M_1N_1$ and $M_2N_2$ denote the imaging planes of the two viewpoints when they face the scene squarely (i.e., without deflection), and their overlapping region $M_2N_1$ is the initial synchronization frame overlapping region calculated in step (3);
As can be seen from fig. 7, the initial synchronization frame overlapping region $M_2N_1$ computed in step (3) has initial length $L_{M_2N_1}$, and the length $L_{A'_2B'_1}$ of the overlapping region $A'_2B'_1$ of the two virtual imaging planes after deflection by the angle $\theta$ is:
$$L_{A'_2B'_1} = L_{N_1B'_1} - L_{N_1A'_2} = L_{N_1B'_1} - \left(L_{M_2A'_2} - L_{M_2N_1}\right) \qquad \text{(xii)}$$
where $L_{N_1B'_1}$, $L_{N_1A'_2}$, $L_{M_2A'_2}$ and $L_{M_2N_1}$ denote the lengths of the segments $N_1B'_1$, $N_1A'_2$, $M_2A'_2$ and $M_2N_1$ respectively; $L_{N_1B'_1}$ and $L_{M_2A'_2}$ can each be calculated as follows:
$$L_{N_1B'_1} = L_{C'_1B'_1} - L_{C'_1N_1} = f\tan(\alpha+\theta) - f\tan\alpha \qquad \text{(xiii)}$$
L_{M2A'2} = L_{M2C'2} − L_{A'2C'2} = f·tanα − f·tan(α−θ)    (xiv)
where L_{C'1B'1}, L_{C'1N1}, L_{M2C'2}, and L_{A'2C'2} denote the lengths of the segments C'1B'1, C'1N1, M2C'2, and A'2C'2, respectively. Therefore,
L_{A'2B'1} = f·tan(α+θ) + f·tan(α−θ) − 2·f·tanα + L_{M2N1}    (xv)
If L_{(n,n+1)} denotes the length of the initial synchronization-frame overlap region obtained during the initialization of step (3) between the n-th and (n+1)-th cameras (L_{(1,2)} in this example), then formula (xv) yields the horizontal length L'_{(n,n+1)} of the overlap region between the corresponding adjacent virtual synchronization frames of any number of adjacent cameras after the virtual transformation:
L'_{(n,n+1)} = f·tan(α+θ) + f·tan(α−θ) − 2·f·tanα + L_{(n,n+1)}    (xvi)
Since rotation of the viewpoint does not affect the longitudinal position, the longitudinal offset remains unchanged.
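For reference, the following is a minimal Python sketch (not part of the original disclosure) of the corrected overlap length in formula (xvi); the function name, parameter names, and the numeric values in the usage example are illustrative assumptions.

```python
import math

def corrected_overlap(f: float, alpha: float, theta: float, l_init: float) -> float:
    """Horizontal overlap length L'(n,n+1) between adjacent virtual
    synchronization frames at deflection angle theta, per formula (xvi).

    f      -- camera focal length (pixels), from step (2)
    alpha  -- half of the camera's viewing angle (radians)
    theta  -- common deflection angle of the cameras (radians)
    l_init -- initial overlap length L(n,n+1) measured in step (3) (pixels)
    """
    return (f * math.tan(alpha + theta)
            + f * math.tan(alpha - theta)
            - 2.0 * f * math.tan(alpha)
            + l_init)

# Example: f = 500 px, a 60-degree field of view (alpha = 30 deg),
# theta = 30 deg, initial overlap 120 px (all values assumed).
print(corrected_overlap(500.0, math.radians(30), math.radians(30), 120.0))
```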
(8) Composite-frame generation: a weighted-average fusion method is used for smooth transitions to generate the composite frame of the whole monitoring scene at the s-th second, namely: the overlap region of two adjacent virtual synchronization frames is fused with weights determined by the size of the overlap region, and the fused region is copied to the corresponding position of the stitched image, ensuring that the resulting composite frame is visually consistent and free of obvious seams; this completes the processing of the monitored image of the monitoring scene plane at the s-th second.
As shown in Fig. 8, A is a point in the overlap region; its intensities in the left and right frames are I_{A1} and I_{A2}, and l1 and l2 are the distances from A to the boundaries of the two adjacent (left and right) synchronization frames. Its intensity I_A in the composite frame is computed according to the following formula:
I_A = l2/(l1+l2) · I_{A1} + l1/(l1+l2) · I_{A2}    (xvii)
Finally, the fused composite frame of the s-th second is played and written into the video file, which completes the processing of the current synchronization frame.
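For reference, a minimal NumPy sketch of the weighted-average fusion in formula (xvii), assuming the overlap regions of the left and right virtual synchronization frames have already been cropped to equal-sized arrays; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def blend_overlap(left_overlap: np.ndarray, right_overlap: np.ndarray) -> np.ndarray:
    """Weighted-average fusion of an overlap region, in the spirit of formula (xvii).

    Both inputs cover the same overlap region and have identical shape
    (H x W for grayscale, H x W x C for color). l1 is the horizontal distance
    of a column to the left boundary of the overlap and l2 its distance to
    the right boundary, so the left frame dominates near the left boundary
    and fades out toward the right boundary.
    """
    _h, w = left_overlap.shape[:2]
    l1 = np.arange(w, dtype=np.float64)          # distance to the left boundary
    l2 = (w - 1) - l1                            # distance to the right boundary
    weight_left = l2 / (l1 + l2 + 1e-9)          # l2 / (l1 + l2), as in (xvii)
    weight_left = weight_left.reshape((1, w) + (1,) * (left_overlap.ndim - 2))
    return weight_left * left_overlap + (1.0 - weight_left) * right_overlap
```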
referring to fig. 12, fig. 12 is a view screen image captured by the two cameras at the 15 th second: real-time synchronization frameAnd real-time synchronization frameAt the moment, the rotation angles theta of the two cameras15All 30 degrees, processed according to the methods of the steps (5) to (8), and firstly obtaining the virtual synchronous frameAnd virtual synchronization frameRe-fusion processing to obtain a composite frameAs shown in the figure13 is shown in the figure; comparing fig. 12 and fig. 13, it can be seen that the problem of significant trapezoidal distortion originally existing in fig. 12 is well suppressed in fig. 13 after the virtual processing and the correction registration. Furthermore, we can also confirm that: when the method is used for processing the large-angle rotation detection of the camera, the problems of obvious distortion and deformation during image splicing can be well solved.
(9) Real-time video synthesis and output: steps (4) to (8) are repeated, processing the synchronization frame I_n of each camera at each moment; the resulting composite frame of the monitoring scene plane at each time point is played and written into the video file, yielding a stitched video generated in real time. Note that in step (5) the camera deflection angle can be adjusted at any time point during real-time rotating monitoring, while for fixed-angle monitoring it is set only on the first execution; and the offset correction of step (7) is computed only when the first video frame at the current deflection angle is stitched.
In this embodiment, the composite frames obtained by processing from the 5th second to the 15th second are played continuously and written into the video file, yielding a continuous monitoring video lasting 10 seconds.
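For reference, a self-contained sketch of how the per-time-step pass described above might be combined; the Gamma-correction normalization to 8-bit pixels, the linear blending weights, and all function names are assumptions for illustration rather than the patent's implementation.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, a: float = 1.0, gamma: float = 2.0) -> np.ndarray:
    """Step (4): Gamma correction I_out = a * I_in ** gamma on pixels
    normalized to [0, 1]; the 8-bit scaling is an assumption."""
    normalized = frame.astype(np.float64) / 255.0
    return np.clip(a * normalized ** gamma, 0.0, 1.0) * 255.0

def compose_frames(virtual_frames, overlaps):
    """Steps (7)-(8): place the virtual synchronization frames left to right,
    blending each overlap of (rounded) width overlaps[n] with a linear
    weighted average.

    virtual_frames -- list of equally sized arrays, already virtually
                      transformed as in step (6)
    overlaps       -- list of corrected overlap lengths L'(n,n+1) in pixels
    """
    canvas = virtual_frames[0].astype(np.float64)
    for frame, overlap in zip(virtual_frames[1:], overlaps):
        nxt = frame.astype(np.float64)
        ov = max(int(round(overlap)), 0)
        if ov == 0:
            canvas = np.concatenate([canvas, nxt], axis=1)
            continue
        weight = np.linspace(1.0, 0.0, ov).reshape((1, ov) + (1,) * (nxt.ndim - 2))
        blended = weight * canvas[:, -ov:] + (1.0 - weight) * nxt[:, :ov]
        canvas = np.concatenate([canvas[:, :-ov], blended, nxt[:, ov:]], axis=1)
    return canvas

# Per time step (repeated in step (9)), after gamma correction (step (4)) and
# the virtual transformation (step (6)) have produced virtual_frames:
#   panorama = compose_frames(virtual_frames, corrected_overlaps)
```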
Example 2
Referring to Fig. 14, the figure shows the result of stitching video from six cameras according to the method of the invention. In this embodiment the scene depth is 0.5 m, the scene length is 7 m, the distance between the six cameras and the scene being photographed is 2 m, the spacing between cameras is 1 m, and the fixed rotation angle is 30°. If a single camera were used to capture an image with the effect shown in Fig. 14, it would have to be moved along the scene plane at least 24 times per second to cover the whole scene, so the whole scene plane could not be monitored in real time; a single camera therefore cannot achieve the panoramic monitoring target of this method. If shooting were done with the traditional parallel stitching approach, about 20 cameras would have to be arranged, each capturing a narrow vertical strip that is then stitched together. Compared with these traditional techniques, the method uses a small number of cameras (monitoring points), captures the monitoring scene plane at close range with little image distortion, and is better suited to video monitoring of key working sections in long, narrow mine roadways and the like.

Claims (1)

1. A mine safety monitoring method based on the parallel video splicing technology, characterized by comprising the following steps:
(1) Arranging sampling points: let the depth of the monitoring scene plane requiring safety monitoring be L_V, the distance between the cameras and the monitoring scene plane be L_D, and the distance between two adjacent cameras be L_C; the distance L_D between the cameras and the monitoring scene plane is 1.0 to 10.0 times the depth L_V of the monitoring scene plane, and the distance L_C between two adjacent cameras is 1.0 to 3.0 times the depth L_V of the monitoring scene plane; all cameras are mounted at the same height and keep the same rotation angle when rotating; the adjacent viewpoint images formed when two adjacent cameras each deflect within a range of ±45° must overlap, and the line connecting the cameras that monitor the same monitoring scene plane must be parallel to that plane; all cameras are connected by data lines to a computer responsible for image processing, which uniformly processes the synchronization frames obtained from each camera;
(2) Initializing the camera focal length: each camera uses the same frame resolution, and the camera focal length f is simulated from the frame resolution, that is, with the frame width and height being w and h respectively, the value of the camera focal length f is determined from w and h;
(3) Initializing the overlap position: all cameras face the monitoring scene squarely and capture images, obtaining the initial synchronization frame of each camera, where n denotes the n-th camera; a semi-transparent overlay method is used to obtain the horizontal length L_{(n,n+1)} of the overlap region between the initial synchronization frames of two adjacent cameras;
(4) Adjusting image brightness: Gamma correction is applied to the initial synchronization frame obtained from each camera, with the formula I_out = a·I_in^γ, where I_in is an original pixel of the initial synchronization frame and I_out is the corresponding corrected pixel; the parameter a is 1 and the parameter γ takes values in the range 1.5 to 3.0;
(5) Recording the camera deflection angle: all cameras are set to the same deflection angle θ_s, the deflection angle θ_s being the angle between the actual facing direction of a camera in the s-th second after monitoring begins and its direction when facing the scene squarely; the deflection angle θ_s corresponds to the real-time synchronization frame of each camera, the real-time synchronization frame being the image monitored by the n-th camera in the s-th second after monitoring begins;
(6) Synchronization-frame virtual transformation: the real-time synchronization frame is virtually transformed to obtain the virtually transformed virtual synchronization frame, where the virtual transformation formula is
$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \frac{f}{-i\sin\theta_s + \left(\frac{w}{2}\sin\theta_s + f\cos\theta_s\right)} \begin{bmatrix} \cos\theta_s + \sin\theta_s & 0 & -\frac{w\left(\cos\theta_s + \sin\theta_s\tan(\theta_s-\alpha)\right)}{2}\, f\left(\sin\theta_s - \cos\theta_s\tan(\theta_s-\alpha)\right) \\ \frac{f h \sin\theta_s}{2} & 1 & -\frac{w f h \sin\theta_s}{4} - \frac{h}{2} - \frac{f^2 h \cos\theta_s}{2} \end{bmatrix} \begin{bmatrix} i \\ j \\ 1 \end{bmatrix}$$
where f is the camera focal length calculated in step (2); θ_s is the angle between the actual facing direction of the camera in the s-th second after monitoring begins and its direction when facing the scene squarely; α is one half of the camera's viewing angle; the point P(i, j) is a point of the real-time synchronization frame, with i its horizontal coordinate and j its vertical coordinate, and w and h are the width and height of the real-time synchronization frame, respectively; the point P'(i', j') is the point of the virtual synchronization frame corresponding to P(i, j);
(7) Correction and registration of adjacent virtual synchronization frames: for each pair of virtual synchronization frames acquired by two adjacent cameras in turn, the horizontal length L'^s_{(n,n+1)} of the overlap region between the two virtual synchronization frames is computed by the correction, and this horizontal length is then used to register the two virtual synchronization frames; the correction formula for the horizontal length of the virtual overlap region between two adjacent virtual synchronization frames is L'^s_{(n,n+1)} = f·tan(α+θ_s) + f·tan(α−θ_s) − 2·f·tanα + L_{(n,n+1)}, where f is the focal length calculated in step (2), L_{(n,n+1)} is the horizontal length of the overlap region between the initial adjacent synchronization frames in step (3), θ_s is the angle between the actual facing direction of the camera in the s-th second after monitoring begins and its direction when facing the scene squarely, and α is one half of the camera's viewing angle; the new offset obtained after correction, i.e. the horizontal length L'^s_{(n,n+1)} of the virtual overlap region, is used to register all adjacent virtual synchronization frames at the current deflection angle θ_s;
(8) Composite-frame generation: a weighted-average fusion method is used for smooth transitions to generate the composite frame of the whole monitoring scene at the s-th second, namely: the overlap region of two adjacent virtual synchronization frames is fused with weights determined by the size of the overlap region, and the fused region is copied to the corresponding position of the stitched image, ensuring that the resulting composite frame is visually consistent and free of obvious seams; this completes the processing of the monitored image of the monitoring scene plane at the s-th second;
(9) Real-time video synthesis and output: steps (4) to (8) are repeated, processing the real-time synchronization frame of each camera at each moment to obtain the composite frame of each moment; the composite frames are played one by one and written into a video file, yielding a stitched video generated in real time; where: in step (5), the deflection angles θ_s of all cameras can be adjusted simultaneously and uniformly at any time point during real-time rotating monitoring, while for fixed-angle monitoring the deflection angle is set only on the first execution; and the virtual-synchronization-frame offset correction of step (7) is computed only when the first frames of adjacent cameras at the current deflection angle are stitched.
CN201310055823.XA 2013-02-21 2013-02-21 Mine safety monitoring method based on parallel video splicing technology Expired - Fee Related CN103096043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310055823.XA CN103096043B (en) 2013-02-21 2013-02-21 Mine safety monitoring method based on parallel video splicing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310055823.XA CN103096043B (en) 2013-02-21 2013-02-21 Mine safety monitoring method based on parallel video splicing technology

Publications (2)

Publication Number Publication Date
CN103096043A CN103096043A (en) 2013-05-08
CN103096043B true CN103096043B (en) 2015-08-05

Family

ID=48208126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310055823.XA Expired - Fee Related CN103096043B (en) 2013-02-21 2013-02-21 Mine safety monitoring method based on parallel video splicing technology

Country Status (1)

Country Link
CN (1) CN103096043B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013114998A1 (en) * 2013-12-31 2015-07-02 Marco Systemanalyse Und Entwicklung Gmbh Method for generating a panoramic image
CN104038668B (en) * 2014-06-30 2017-11-10 Tcl集团股份有限公司 A kind of panoramic video display methods and system
WO2019075617A1 (en) * 2017-10-16 2019-04-25 深圳市大疆创新科技有限公司 Video processing method, control terminal and mobile device
CN111476716B (en) * 2020-04-03 2023-09-26 深圳力维智联技术有限公司 Real-time video stitching method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6577335B2 (en) * 1997-10-20 2003-06-10 Fujitsu Limited Monitoring system and monitoring method
JP2004056497A (en) * 2002-07-19 2004-02-19 Sumitomo Electric Ind Ltd Image processing apparatus and method therefor, and vehicle supervision system
CN101312526A (en) * 2008-06-26 2008-11-26 天津市亚安科技电子有限公司 Full-view cooperative video monitoring apparatus and full-view image splicing method
CN101499166A (en) * 2009-03-16 2009-08-05 北京中星微电子有限公司 Image splicing method and apparatus
CN202634608U (en) * 2012-06-13 2012-12-26 黄炳 Video monitoring device for coal mine

Also Published As

Publication number Publication date
CN103096043A (en) 2013-05-08

Similar Documents

Publication Publication Date Title
US11350073B2 (en) Disparity image stitching and visualization method based on multiple pairs of binocular cameras
CN102984453B (en) Single camera is utilized to generate the method and system of hemisphere full-view video image in real time
Zhu et al. Mosaic-based 3D scene representation and rendering
CN106296783B (en) A kind of space representation method of combination space overall situation 3D view and panoramic pictures
CN104835117A (en) Spherical panorama generating method based on overlapping way
CN104809719B (en) The method of virtual view synthesis based on homography matrix segmentation
CN103096043B (en) Mine safety monitoring method based on parallel video splicing technology
CN104299261A (en) Three-dimensional imaging method and system for human body
CN105678693A (en) Panorama video browsing-playing method
CN104067144A (en) Perimeter-monitoring device for operating machine
CN103226838A (en) Real-time spatial positioning method for mobile monitoring target in geographical scene
JPH11331874A (en) Image processing unit, depth image measuring device, composite reality presenting system, image processing method, depth image measuring method, composite reality presenting method and storage medium for program
US20110249095A1 (en) Image composition apparatus and method thereof
US11812009B2 (en) Generating virtual reality content via light fields
CN102927917A (en) Multi-view vision measurement method of iron tower
CN106534670A (en) Panoramic video generating method based on fixedly connected fisheye lens camera unit
JP2004265396A (en) Image forming system and image forming method
TWI526670B (en) Device and method for measuring three-dimensional images of tunnel deformation
KR101680367B1 (en) CG image product system by synchronization of simulator camera and virtual camera
CN106023066A (en) 4-path borehole wall video cylinder panoramic image generation method and device
CA3179772A1 (en) Systems and methods for image capture
CN106355546A (en) Vehicle panorama generating method and apparatus
CN109272445B (en) Panoramic video stitching method based on spherical model
CN111080523B (en) Infrared peripheral vision search system and infrared peripheral vision image splicing method based on angle information
CN101329453A (en) Large visual field high resolution imaging apparatus based on optical fiber and split joint method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150805

Termination date: 20180221

CF01 Termination of patent right due to non-payment of annual fee