CN117309856A - Smoke screen effect monitoring method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117309856A
Authority
CN
China
Prior art keywords
infrared
smoke
image
visible light
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311105669.2A
Other languages
Chinese (zh)
Inventor
蒋样明
赵辉辉
王拓
王大成
曾孟华
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS
Priority to CN202311105669.2A
Publication of CN117309856A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U: UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U20/00: Constructional aspects of UAVs
    • B64U20/80: Arrangement of on-board electronics, e.g. avionics systems or wiring
    • B64U20/87: Mounting of imaging devices, e.g. mounting of gimbals
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U: UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00: Type of UAV
    • B64U10/25: Fixed-wing aircraft
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N25/00: Investigating or analyzing materials by the use of thermal means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/17: Terrestrial scenes taken from planes or by drones
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U: UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00: UAVs specially adapted for particular uses or applications
    • B64U2101/30: UAVs specially adapted for particular uses or applications for imaging, photography or videography


Abstract

The application provides a smoke screen effect monitoring method and device, electronic equipment, and a computer-readable storage medium. The method comprises the following steps: acquiring a binocular infrared polarized image and a binocular visible light image of a smoke screen and the scene in which it is applied; calculating an infrared three-dimensional point cloud from the binocular infrared polarized image, and calculating a visible light three-dimensional point cloud from the binocular visible light image; constructing a stereoscopic fusion image of the smoke screen and its application scene from the infrared and visible light three-dimensional point clouds; and analyzing the effect of the smoke screen from the stereoscopic fusion image. With this method and device, smoke screen release and its shielding effect can be analyzed comprehensively and accurately.

Description

Smoke screen effect monitoring method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of smoke screens, and in particular to a smoke screen effect monitoring method and device, electronic equipment, and a computer-readable storage medium.
Background
Smoke screens are a technical means commonly used to achieve masking and confusion effects. The various effects of a smoke screen need to be evaluated to determine whether its release is effective. However, existing smoke screen effect evaluation techniques are limited to simple image processing: the accuracy of the effect analysis is limited, and the effect and state of the smoke screen cannot be evaluated comprehensively and in real time. There is therefore a need in the art for a monitoring technique that can fully and accurately evaluate the effect of smoke screen release.
Disclosure of Invention
To this end, the present application aims to provide a smoke screen effect monitoring method and device, an electronic apparatus, and a computer-readable storage medium capable of comprehensively and accurately evaluating the effect of a smoke screen.
In one aspect, the present application provides a smoke screen effect monitoring method, comprising: acquiring a binocular infrared polarized image and a binocular visible light image of a smoke screen and the scene in which it is applied; calculating an infrared three-dimensional point cloud from the binocular infrared polarized image, and calculating a visible light three-dimensional point cloud from the binocular visible light image; constructing a stereoscopic fusion image of the smoke screen and its application scene from the infrared and visible light three-dimensional point clouds; and analyzing the effect of the smoke screen from the stereoscopic fusion image.
In a particular embodiment of the present application, computing the infrared three-dimensional point cloud from the binocular infrared polarized image includes: calculating the three-dimensional coordinates of each pixel of the binocular infrared polarized image from its parallax (disparity) information, and constructing the infrared three-dimensional point cloud from those coordinates. Likewise, computing the visible light three-dimensional point cloud from the binocular visible light image includes: calculating the three-dimensional coordinates of each pixel of the binocular visible light image from its parallax information, and constructing the visible light three-dimensional point cloud from those coordinates.
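As an illustrative sketch (not the patent's actual implementation), the disparity-to-coordinates step above can be written with NumPy for a rectified stereo pair; the focal length, baseline, and principal-point parameters are assumed to come from stereo calibration, and all names are hypothetical:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Convert a disparity map (in pixels) to a 3-D point cloud.

    f        -- focal length in pixels (assumed identical for both cameras)
    baseline -- distance between the two optical centres, in metres
    cx, cy   -- principal point of the reference (left) camera
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                       # disparity 0 means no match
    z = np.zeros_like(disparity, dtype=np.float64)
    z[valid] = f * baseline / disparity[valid]  # depth from the stereo model
    x = (u - cx) * z / f                        # back-project to camera X
    y = (v - cy) * z / f                        # back-project to camera Y
    points = np.stack([x, y, z], axis=-1)       # H x W x 3 coordinate grid
    return points[valid]                        # N x 3 valid points

# Tiny synthetic check: one pixel with disparity 10 px,
# f = 1000 px, baseline = 0.5 m -> depth = 50 m.
d = np.zeros((4, 4))
d[2, 2] = 10.0
pts = disparity_to_point_cloud(d, f=1000.0, baseline=0.5, cx=2.0, cy=2.0)
print(pts)  # one point at approximately (0, 0, 50)
```

The same routine would apply unchanged to the visible light pair, since both paths are rectified to coplanar imaging surfaces.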
In a particular embodiment of the present application, constructing the stereoscopic fusion image of the smoke screen and its application scene from the infrared and visible light three-dimensional point clouds includes: generating an infrared triangular mesh and a visible light triangular mesh from the infrared and visible light three-dimensional point clouds, respectively; obtaining the textures of the triangle vertices of the infrared mesh from the binocular infrared polarized image, and the textures of the triangle vertices of the visible light mesh from the binocular visible light image; and obtaining the textures of the edges and faces of the triangles in each mesh by interpolating the textures of that mesh's triangle vertices.
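A minimal sketch of the vertex-texture interpolation step, assuming barycentric interpolation inside each triangle (the patent does not name the interpolation scheme, and the function and variable names here are illustrative):

```python
import numpy as np

def interpolate_triangle_texture(p, verts, vert_tex):
    """Interpolate a texture value at point p inside a triangle.

    verts    -- 3 x 2 array of triangle vertex coordinates
    vert_tex -- length-3 sequence of texture values (e.g. grey levels)
                sampled at the vertices
    Returns the barycentric blend of the three vertex textures.
    """
    a, b, c = verts
    # Solve p = a + s*(b - a) + t*(c - a) for the barycentric weights.
    m = np.column_stack([b - a, c - a])
    s, t = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    w = np.array([1.0 - s - t, s, t])            # barycentric weights
    return float(w @ np.asarray(vert_tex, dtype=float))

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tex = [10.0, 20.0, 40.0]
val = interpolate_triangle_texture([1 / 3, 1 / 3], tri, tex)
print(val)  # centroid blends the three vertices equally: (10+20+40)/3
```

Evaluating this at every interior pixel of a triangle fills in the edge and face textures from the vertex textures alone.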
In a particular embodiment of the present application, after acquiring the binocular infrared polarized image and the binocular visible light image of the smoke screen and its application scene, the method further comprises: calculating an infrared depth map from the binocular infrared polarized image, and calculating a visible light depth map from the binocular visible light image. Constructing the stereoscopic fusion image of the smoke screen and its application scene then uses the infrared three-dimensional point cloud, the infrared depth map, the visible light three-dimensional point cloud, and the visible light depth map together.
In a particular embodiment of the present application, computing the infrared depth map from the binocular infrared polarized image includes: calculating infrared depth-of-field information from the focal length and image-point vector of the infrared polarization imaging device that captured the binocular infrared polarized image, and generating the infrared depth map from that depth information. Computing the visible light depth map from the binocular visible light image includes: calculating visible light depth information from the binocular visible light image with a binocular imaging depth calculation model, and generating the visible light depth map from the visible light depth information.
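One plausible reading of "depth from the focal length and image-point vector" is the thin-lens imaging relation; this is an assumption for illustration, not the patent's stated formula, and the names are hypothetical:

```python
def object_distance(f_mm, image_distance_mm):
    """Thin-lens estimate of object distance.

    Thin-lens equation: 1/f = 1/u + 1/v, so u = f*v / (v - f), where
    f_mm              -- focal length of the thermal infrared imager
    image_distance_mm -- lens-to-image-point distance, i.e. the magnitude
                         of the image-point vector along the optical axis
    """
    v = image_distance_mm
    return f_mm * v / (v - f_mm)

# f = 50 mm, image formed 52 mm behind the lens -> object about 1300 mm away.
u = object_distance(50.0, 52.0)
print(u)  # 1300.0
```

The visible light path, by contrast, uses the binocular model (depth proportional to focal length times baseline over disparity), as in the point-cloud sketch earlier.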
In a particular embodiment of the present application, after analyzing the effect of the smoke screen from the stereoscopic fusion image, the method further includes: calculating, from the binocular infrared polarized image, the apparent temperature of the shielded target before the smoke screen is applied, the apparent temperature of the target after the smoke screen is applied, the apparent temperature of the smoke screen, and the ambient temperature; and calculating the infrared shielding rate of the smoke screen from these four temperatures.
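The patent does not disclose its shielding-rate formula. A common contrast-based sketch is the fractional loss of apparent thermal contrast between target and background; everything below is an illustrative assumption (the smoke apparent temperature is accepted to mirror the patent's four inputs but is unused in this simplified model):

```python
def infrared_shielding_rate(t_target_before, t_target_after, t_smoke, t_ambient):
    """Illustrative shielding-rate estimate (not the patent's exact formula).

    Compares the target's apparent thermal contrast against the ambient
    background before and after smoke screen application; the shielding
    rate is the fraction of that contrast removed by the smoke.
    t_smoke is kept to mirror the patent's four inputs; this simplified
    contrast model does not use it.
    """
    contrast_before = abs(t_target_before - t_ambient)
    contrast_after = abs(t_target_after - t_ambient)
    if contrast_before == 0:
        return 0.0                      # no contrast to shield
    return 1.0 - contrast_after / contrast_before

# Target at 40 C against a 20 C background; through smoke it appears at 25 C.
rate = infrared_shielding_rate(40.0, 25.0, 22.0, 20.0)
print(rate)  # 0.75: three quarters of the thermal contrast is suppressed
```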
In a particular embodiment of the present application, after analyzing the effect of the smoke screen from the stereoscopic fusion image, the method further includes: acquiring a plurality of continuously captured binocular infrared polarized images to form an infrared polarized image sequence, and calculating the shielding-effect feature quantities and motion feature quantities of the smoke screen from that sequence.
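A hedged sketch of per-frame feature extraction from such a sequence: the smoke region is segmented by a simple intensity threshold (an assumed rule; the patent does not specify the segmentation), and the mean per-frame growth of the smoke area serves as a crude diffusion-speed proxy:

```python
import numpy as np

def smoke_features(frames, threshold):
    """Per-frame geometric features of a smoke region in an image sequence.

    frames    -- list of 2-D arrays (e.g. infrared polarization intensity)
    threshold -- pixels above this value are treated as smoke (assumption)
    Returns a list of (width_px, height_px, area_px) per frame, plus the
    mean per-frame area growth as a diffusion-speed proxy.
    """
    feats = []
    for img in frames:
        mask = img > threshold
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            feats.append((0, 0, 0))     # no smoke detected in this frame
            continue
        width = int(xs.max() - xs.min() + 1)
        height = int(ys.max() - ys.min() + 1)
        feats.append((width, height, int(mask.sum())))
    areas = [a for _, _, a in feats]
    growth = float(np.diff(areas).mean()) if len(areas) > 1 else 0.0
    return feats, growth

# Two synthetic frames: the "smoke" blob grows from 4 to 9 pixels.
f1 = np.zeros((8, 8)); f1[2:4, 2:4] = 1.0
f2 = np.zeros((8, 8)); f2[2:5, 2:5] = 1.0
feats, growth = smoke_features([f1, f2], threshold=0.5)
print(feats, growth)  # [(2, 2, 4), (3, 3, 9)] 5.0
```

Scaling pixel widths and areas by the ground sampling distance would convert these to physical smoke width, height, and shielded area.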
In another aspect, the present application provides a smoke screen effect monitoring device comprising: an acquisition module for acquiring binocular infrared polarized images and binocular visible light images of the smoke screen and its application scene; a computing module for computing an infrared three-dimensional point cloud from the binocular infrared polarized image and a visible light three-dimensional point cloud from the binocular visible light image; a construction module for constructing a stereoscopic fusion image of the smoke screen and its application scene from the infrared and visible light three-dimensional point clouds; and an analysis module for analyzing the effect of the smoke screen from the stereoscopic fusion image.
In another aspect, the present application provides an electronic device comprising: a processor; a memory; an application program stored in the memory and configured to be executed by the processor, the application program comprising instructions for performing the smoke effect monitoring method described above.
In another aspect, the present application provides a computer readable storage medium storing a computer program for executing the above-described smoke effect monitoring method.
With the smoke screen effect monitoring method and device, electronic equipment, and computer-readable storage medium of the present application, the release effect of a smoke screen can be accurately evaluated under various observation conditions by fusing visible light and infrared images, avoiding the limitations of using visible light or infrared images alone. In addition, forming a stereoscopic fusion image with binocular imaging lets the effect of the smoke screen be observed more comprehensively and intuitively, so that a user can quickly and accurately judge the quality of the smoke screen.
Drawings
The following detailed description of specific embodiments of the present application refers to the accompanying drawings, in which:
FIG. 1 is a flow chart of a smoke effect monitoring method according to an embodiment of the present application;
FIG. 2 shows a schematic structural view of a drone for use in the smoke effect monitoring apparatus of the embodiment of FIG. 1;
FIG. 3 shows a schematic structural view of a pod for use in the smoke effect monitoring apparatus of the embodiment of FIG. 1;
FIG. 4 shows a schematic structural view of a ground station for use in the smoke effect monitoring apparatus of the embodiment of FIG. 1;
FIG. 5 shows a schematic view of the internal structure of the nacelle shown in FIG. 3;
FIG. 6 shows a schematic diagram of the calculation of infrared polarized image depth information according to the embodiment of FIG. 1;
FIG. 7 is a schematic diagram of a smoke effect monitoring device according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the ideas and concepts of the present application clearly understood by those skilled in the art, the present application is described in detail below in conjunction with specific embodiments. It should be understood that the embodiments presented herein are only a part of all the embodiments that the application may have. After reading the present specification, those skilled in the art will be able to make improvements, modifications, or substitutions to some or all of the embodiments described below, and these are also included within the scope of the present application.
The terms "a," "an," and similar words do not mean that only one of a thing exists, but that the description addresses one instance of a thing, of which there may be one or more. In this document, the terms "comprise," "include," and similar words denote a logical relationship, not a spatial structural relationship. For example, "A includes B" means that B logically belongs to A, not that B is spatially located inside A. In addition, "comprising," "including," and similar terms should be construed as open-ended rather than closed: "A includes B" means that B belongs to A, but B does not necessarily constitute all of A, and A may also include other elements such as C, D, and E.
The terms "first," "second," and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms "embodiment," "this embodiment," "an embodiment," "one embodiment," and the like herein do not denote that the descriptions are merely applicable to one particular embodiment, but rather denote that the descriptions are also applicable to one or more other embodiments. It will be appreciated by those skilled in the art that any descriptions of one embodiment herein may be substituted, combined, or otherwise combined with those described in relation to another embodiment or embodiments, such substitution, combination, or other combination resulting in a new embodiment as would be apparent to one of ordinary skill in the art and would be within the scope of this application.
In embodiments of the present application, the smoke screen effect may refer to a measure of the degree to which a smoke screen achieves its particular purpose. For example, the effect of a smoke screen used for masking is its masking effect, and the effect of a smoke screen used for hiding or confusion is its confusion effect.
In modern air combat, smoke-generating equipment must be pre-deployed in an array so that the smoke screen can respond quickly and provide maximum shielding. Infrared smoke screen munitions have excellent electro-optical countermeasure effects: the smoke absorbs, reflects, and scatters infrared radiation, greatly attenuating the energy received by an infrared imaging system and reducing its detection capability. The morphological characteristics, motion characteristics, and shielding-rate distribution of the infrared smoke screen are therefore important indexes. At present, evaluation of the smoke screen release effect mainly proceeds as follows: video of the formation and diffusion of the smoke munition is acquired with instruments such as a thermal infrared imager, traditional image processing methods are applied, and the characteristic parameters of the infrared smoke screen are extracted manually, making the extraction time-consuming and inaccurate. In order to accurately capture the smoke release scene, visually analyze the dynamic process by which an infrared smoke screen interferes with thermal infrared imaging, and compute accurate shielding-effect parameters for the smoke release, the invention provides a three-dimensional visual dynamic monitoring method for the smoke screen release effect.
Some embodiments of the application provide a three-dimensional visual dynamic monitoring method for the smoke screen application effect, which acquires stereoscopic pairs of visible light and thermal infrared image data of the smoke screen application scene in real time, displays the three-dimensional visible light and thermal infrared situation of the scene, and calculates the shielding-effect feature quantities and motion feature quantities of the smoke screen. The method is suitable for evaluating the shielding effect of smoke munition discharge and for optimizing precise decisions on smoke screen release.
The three-dimensional visual dynamic monitoring method uses a purpose-designed monitoring device that integrates an unmanned aerial vehicle platform, an autopilot, a radio link, a visible light and thermal infrared binocular stereoscopic imaging pod system, a high-definition image transmission data link, wireless data transmission, and a ground station integrated system. Aimed at a smoke screen release scene, the aerial survey camera and high-performance thermal infrared polarization imager integrated in the binocular stereoscopic imaging pod system simultaneously and dynamically acquire two imaging-coplanar paths: visible light images and thermal infrared polarized images. Parallax data for the visible light and thermal infrared images of the smoke screen scene are calculated by a parallax network and a depth estimation network, and on that basis a stereoscopic fusion image of the visible light and thermal infrared images of the release scene is dynamically constructed, which the ground station integrated system can display in real time to show the situation of the smoke screen release.
Thermal infrared image depth information is calculated from the focal length and image-point vector of the polarized thermal infrared imager; the binocular visible light depth map and the thermal infrared depth map are then combined to accurately compute the depth map of the fused image, and a three-dimensional fused scene of the visible light and thermal infrared imagery of the smoke screen release is constructed. On the basis of this fused scene, motion and shielding parameters such as the width, height, shielded area, thermal infrared shielding rate, shielding duration, and diffusion speed of the smoke screen are calculated, so that the fused scene can be used to analyze and rehearse the effect of the smoke screen.
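The combination of the two depth maps is not specified in detail; one simple sketch, under the assumption that invalid depths are encoded as 0 and that overlapping valid estimates are averaged, is:

```python
import numpy as np

def fuse_depth_maps(depth_visible, depth_infrared):
    """Merge a visible-light and a thermal-infrared depth map (sketch).

    Invalid depths are encoded as 0 (an assumption).  Where both maps
    are valid, average them; where only one is valid, take that one.
    """
    dv = np.asarray(depth_visible, dtype=float)
    di = np.asarray(depth_infrared, dtype=float)
    valid_v, valid_i = dv > 0, di > 0
    fused = np.zeros_like(dv)
    both = valid_v & valid_i
    fused[both] = 0.5 * (dv[both] + di[both])    # average where both agree
    fused[valid_v & ~valid_i] = dv[valid_v & ~valid_i]
    fused[valid_i & ~valid_v] = di[valid_i & ~valid_v]
    return fused

dv = np.array([[10.0, 0.0], [12.0, 0.0]])
di = np.array([[12.0, 8.0], [0.0, 0.0]])
fused = fuse_depth_maps(dv, di)
print(fused)  # [[11. 8.] [12. 0.]]
```

In practice a confidence-weighted blend (e.g. favouring visible light in clear air and thermal infrared inside the smoke) would likely replace the plain average.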
Fig. 1 shows a flow chart of a smoke effect monitoring method according to an embodiment of the present application.
According to the present embodiment, the smoke effect monitoring method includes steps S110 to S140, and each step is described in detail below.
S110, acquiring a binocular infrared polarized image and a binocular visible light image of a smoke screen and a scene applied by the smoke screen.
Specifically, the smoke screen application scene data can be acquired dynamically with the three-dimensional visual dynamic monitoring device. Over the smoke screen release site, the airborne binocular stereoscopic imaging pod system dynamically acquires, from the air, full-scene stereoscopic pairs of visible light and thermal infrared time-sequence images of the smoke screen's release and diffusion, and the monitoring data are transmitted in real time to the ground station integration subsystem over the high-definition image data transmission link.
Specifically, the three-dimensional visual dynamic monitoring device can comprise an unmanned aerial vehicle subsystem (see fig. 2), a binocular stereoscopic imaging pod subsystem (see fig. 3), and a ground station integration subsystem (see fig. 4).
The unmanned aerial vehicle subsystem integrates an unmanned aerial vehicle platform, an autopilot and a radio link.
The binocular stereoscopic imaging pod subsystem integrates a high-performance thermal infrared polarization imager, a high-definition aerial survey camera and a high-definition image data transmission link. The nacelle is mounted in the belly position of the unmanned aerial vehicle, i.e. below the unmanned aerial vehicle fuselage. The nacelle comprises a first channel 310 and a second channel 320.
The ground station integrated subsystem integrates unmanned aerial vehicle flight control, pod control, wireless data transmission, high-definition image transmission, industrial personal computers and the like. The ground station is arranged on the ground and used for remotely controlling the unmanned aerial vehicle and the nacelle.
Referring to fig. 5, a set of imaging devices includes a binocular visible light imaging device and a binocular infrared polarization imaging device. The binocular visible light imaging device comprises a left-field visible light camera 511 and a right-field visible light camera 521; the binocular infrared polarization imaging device comprises a left-field infrared polarization camera (thermal imager) 512 and a right-field infrared polarization camera 522. Light entering the left field of view is split by the left-field dichroic mirror 513 into two mutually perpendicular paths, one received by the left-field visible light camera 511 and the other by the left-field infrared polarization camera 512. Light entering the right field of view is split by the right-field dichroic mirror 523 into two mutually perpendicular paths, one received by the right-field visible light camera 521 and the other by the right-field infrared polarization camera 522.
The electro-optical stabilization platform of the binocular stereoscopic imaging pod system adopts a two-axis frame structure and uses the dichroic mirrors to superimpose, in each of the left and right fields of view, the optical axes of the aerial survey camera and the thermal infrared polarization imager. The left and right fields of view are arranged symmetrically so that the optical axes are parallel and the imaging surfaces are coplanar. Imaging-surface coplanarity here may mean that the imaging plane of the left-field visible light device is coplanar with that of the right-field visible light device, and/or that the imaging planes of the left-field and right-field infrared polarization devices are coplanar. The imaging coordinate conversion works as follows: the camera's sensor can be regarded as the plane of a pixel coordinate system, and after imaging the real position of a pixel in the world coordinate system is computed. First, the pixel position (u, v) on a photo is converted by perspective projection to image physical (imaging-plane) coordinates (x, y); then, based on the camera's intrinsic parameters, the imaging physical coordinates (x, y) are converted to camera coordinates (Xc, Yc, Zc); finally, a rotation-translation relationship constructed from the camera's extrinsic parameters converts the camera coordinates to world coordinates (Xw, Yw, Zw). Imaging coplanarity means that the image physical coordinate planes (imaging planes) of the two visible light devices lie in the same plane of the world coordinate system.
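The pixel-to-world chain described above can be sketched as follows, assuming a standard pinhole model with intrinsic matrix K and extrinsics (R, t) such that X_cam = R·X_world + t; the depth value and all names are illustrative inputs, not the patent's notation:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project a pixel to world coordinates (pinhole-model sketch).

    K     -- 3x3 intrinsic matrix (camera coordinates -> pixels)
    R, t  -- extrinsics with X_cam = R @ X_world + t
    depth -- Zc, the point's depth in the camera frame (must be known,
             e.g. from the stereo disparity)
    """
    # pixel -> normalized imaging-plane ray -> camera coordinates
    x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # camera -> world: invert the rigid transform
    return np.linalg.inv(R) @ (x_cam - t)

K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)          # camera placed at the world origin
p = pixel_to_world(320.0, 240.0, 50.0, K, R, t)
print(p)  # the principal-point ray: (0, 0, 50)
```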
Because the imaging surfaces are coplanar, homonymous (corresponding) image points in the left and right field visible light images can be identified rapidly, improving the efficiency and accuracy of image fusion and three-dimensional reconstruction.
The pitch and azimuth angles of the optical axes of the left and right field aerial survey cameras and thermal infrared polarization imagers remain stable. During smoke screen release training, the equipment can simultaneously acquire two paths of thermal infrared polarized images and two paths of visible light images, with the two paths imaged coplanar, and transmit them back to the ground station in real time.
Integrating the aerial survey cameras and high-performance thermal infrared polarization imagers in the binocular stereoscopic imaging pod system means that two paths of thermal infrared polarized images and two paths of visible light images can be acquired simultaneously with coplanar imaging, which improves the efficiency and precision of image fusion and three-dimensional reconstruction. This avoids the loss of stereoscopic imaging precision caused by separate single-band stereoscopic imaging systems that cannot be matched seamlessly. Two-path imaging coplanarity here may mean that the left-field visible light device images coplanar with the left-field thermal infrared device, and the right-field visible light device images coplanar with the right-field thermal infrared device; since the left and right field visible light cameras also image coplanar, the thermal infrared and visible light paths are coplanar as a whole. This coplanarity allows rapid identification of homonymous image points between the left-field visible light and thermal infrared images, and between the right-field visible light and thermal infrared images, improving image fusion efficiency and precision; it likewise allows rapid identification of homonymous image points between the visible light/thermal infrared images of the left and right fields, improving the efficiency and accuracy of three-dimensional reconstruction.
Specifically, before step S110, it may further include: the pod system calibration and three-dimensional correction functions are adopted, so that the angle fine adjustment and the accurate correction of the aerial survey camera and the thermal infrared polarization imager are realized, and the parallel coplanarity of two paths of visible light and thermal infrared images is ensured. Comprises the following steps:
(1) Obtaining a visible light image sequence of a thermal infrared checkerboard black and white calibration plate and an image sequence of a light emitting diode array thermal infrared imaging by adopting a binocular nacelle system from different angles, wherein the black and white checkerboard grid position of the calibration plate and the light emitting diode array position are fixed and are positioned on the same plane;
(2) Based on the visible light image sequence of the calibration plate, calculating an internal parameter matrix M of the aerial survey camera by adopting a checkerboard automatic identification extraction method and a Zhang Zhengyou plane calibration method v And a lens distortion parameter Dist v (k v1 ,k v2 ,p v1 ,p v2 );
(3) Based on the thermal infrared image sequence of the LED array, extracting the array of diode center positions by binarization, and calculating the internal parameter matrix M_Infrared and the distortion parameters Dist_Infrared(k_1, k_2, p_1, p_2) of the thermal infrared polarization imager with the Zhang Zhengyou planar calibration method;
(4) Calculating, with the PnP algorithm, the rotation R_vi and translation T_vi of the aerial survey camera relative to the reference coordinate system of each visible light image, and the rotation R_Infraredi and translation T_Infraredi of the thermal infrared polarization imager relative to the reference coordinate system of each thermal infrared image;
(5) Using two adjacent visible light image pairs, obtaining the rotation and translation parameters R_vi, T_vi, R_vj, T_vj of each image relative to the aerial survey camera;
(6) Using two adjacent thermal infrared image pairs, obtaining the rotation and translation parameters R_Infraredi, T_Infraredi, R_Infraredj, T_Infraredj of each pair of thermal infrared images relative to the thermal infrared imager;
(7) Calculating the relative rotation R_Infrared-V and translation T_Infrared-V between the aerial survey camera and the thermal infrared polarization imager with the hand-eye calibration method;
(8) Accordingly, angle fine adjustment and accurate correction are performed, and two paths of visible light and thermal infrared images are enabled to be parallel and coplanar.
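The pose-chaining relation that underlies steps (4) and (7) can be sketched as follows (a minimal numpy illustration, not the patent's full procedure; the function name and the single-view simplification are assumptions, and a practical hand-eye calibration would aggregate this estimate over many views):

```python
import numpy as np

def relative_pose(R_v, T_v, R_ir, T_ir):
    """One calibration view gives X_v = R_v X_w + T_v and X_ir = R_ir X_w + T_ir,
    so the visible-to-infrared transform X_ir = R_rel X_v + T_rel follows as
    R_rel = R_ir R_v^T and T_rel = T_ir - R_rel T_v."""
    R_rel = R_ir @ R_v.T
    T_rel = T_ir - R_rel @ T_v
    return R_rel, T_rel
```

Applying this per calibration view and averaging (e.g., on the rotation manifold) yields the fixed visible-to-thermal transform used for rectification.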
S120, calculating infrared three-dimensional point clouds according to the binocular infrared polarized image, and calculating visible light three-dimensional point clouds according to the binocular visible light image.
As an example, computing an infrared three-dimensional point cloud from a binocular infrared polarized image includes: according to parallax information of the binocular infrared polarized image, calculating three-dimensional coordinates of each pixel point of the binocular infrared polarized image; and constructing an infrared three-dimensional point cloud according to the three-dimensional coordinates.
As an example, calculating a visible light three-dimensional point cloud from a binocular visible light image includes: according to parallax information of the binocular visible light image, calculating three-dimensional coordinates of each pixel point of the binocular visible light image; and constructing a visible light three-dimensional point cloud according to the three-dimensional coordinates.
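The per-pixel back-projection described above can be sketched as follows (a minimal numpy illustration assuming a rectified, coplanar stereo pair; the function name and parameters are illustrative):

```python
import numpy as np

def disparity_to_points(disp, f, b, u0, v0):
    """Back-project a disparity map into 3-D camera coordinates using the
    rectified binocular model Z = f*b/d, X = (u-u0)*Z/f, Y = (v-v0)*Z/f."""
    v, u = np.indices(disp.shape)
    mask = disp > 0                      # drop unmatched pixels
    Z = f * b / disp[mask]
    X = (u[mask] - u0) * Z / f
    Y = (v[mask] - v0) * Z / f
    return np.stack([X, Y, Z], axis=1)   # (N, 3) point cloud
```

The same routine serves both the visible light and the thermal infrared pair, since both are rectified to the coplanar model.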
S130, constructing a stereoscopic fusion image of the smoke curtain and the application scene according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud.
In this embodiment, a stereoscopic fusion image of a smoke screen and a scene applied to the smoke screen is constructed according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud, and the stereoscopic fusion image can be used for real-time display and on-site command of a ground station integrated system when the smoke screen is released.
Specifically, S130 may further include: based on SLAM video image rapid splicing algorithm, combining POS data, topographic data and map data, and displaying the three-dimensional visible light situation of the smoke screen in real time. The method specifically comprises the following steps:
(1) The SLAM video image rapid stitching system mounted on the ground station integration subsystem receives the aerial images of the unmanned aerial vehicle, then estimates the camera pose in real time, while generating a 3D point cloud map and fitting a fusion plane;
(2) Extracting key frame images with a camera tracking method, and performing key frame local optimization and map local optimization. The camera tracking method uses the FAST feature point operator and the BRIEF feature descriptor to rapidly extract feature points and, according to the 3D point cloud map, searches for the correspondence set between 3D map points and 2D feature points between the current frame and the last key frame through a search window, thereby estimating the camera pose of the current frame;
(3) Transforming the key frame images according to their poses, and realizing real-time generation and fusion of the orthographic image with a multi-band algorithm using adaptive weights. The main processing flow is: 1) calculating the rectangular border of the image according to its pose, and expanding the fusion area when the border exceeds it; 2) adaptively calculating a weight image according to height, field-of-view angle and pixel position, calculating a homography matrix, and transforming the color image and the weight image with this matrix so that the fused image is as close to an orthographic projection as possible; 3) calculating a Laplacian pyramid and a weight pyramid from the transformed image, and fusing the image block with the optimal weight into the globally stitched image block; 4) superimposing the fused image, the topographic data and the map data, and displaying the three-dimensional visible light situation of the smoke screen in real time.
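The weighted multi-band (Laplacian pyramid) fusion in the flow above can be sketched as follows (a toy numpy illustration; a 2x2 box filter stands in for the Gaussian pyramid kernel, and the per-pixel weight map is assumed to be given):

```python
import numpy as np

def _down(img):
    # 2x2 box-filter downsample (simple stand-in for a Gaussian pyramid step)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

def _up(img, shape):
    # nearest-neighbour upsample, cropped to the target shape
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def multiband_blend(img_a, img_b, weight, levels=3):
    """Blend two aligned images with a per-pixel weight map using Laplacian
    pyramids: low frequencies are mixed smoothly, high frequencies sharply."""
    la, lb, gw = [], [], [weight]
    a, b = img_a.astype(float), img_b.astype(float)
    for _ in range(levels - 1):
        a2, b2 = _down(a), _down(b)
        la.append(a - _up(a2, a.shape))
        lb.append(b - _up(b2, b.shape))
        gw.append(_down(gw[-1]))
        a, b = a2, b2
    la.append(a)
    lb.append(b)
    out = la[-1] * gw[-1] + lb[-1] * (1 - gw[-1])
    for i in range(levels - 2, -1, -1):
        out = _up(out, la[i].shape) + la[i] * gw[i] + lb[i] * (1 - gw[i])
    return out
```

A production stitcher would use a proper Gaussian kernel and handle color channels, but the pyramid decomposition and per-level weighted sum are the same.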
As an example, after S110, further including: and calculating an infrared depth map according to the binocular infrared polarized image, and calculating a visible light depth map according to the binocular visible light image. At this time, S130 may further include: and constructing a stereoscopic fusion image of the smoke curtain and the application scene thereof according to the infrared three-dimensional point cloud, the infrared depth map, the visible light three-dimensional point cloud and the visible light depth map.
In this example, according to the infrared three-dimensional point cloud, the infrared depth map, the visible light three-dimensional point cloud and the visible light depth map, a stereoscopic fusion image of a smoke screen and a scene applied thereto is constructed, and the stereoscopic fusion image can be used for accurately analyzing and deducting the effect applied to the smoke screen after the smoke screen is released.
Specifically, parallax data and a depth map of visible light and thermal infrared images of a smoke scene are calculated based on a parallax and depth estimation network. The binocular visible light image depth map and the binocular thermal infrared depth map are obtained through an SGBM semi-global binocular stereo matching algorithm, a 3D convolution cost aggregation algorithm, a SOFT ARGMIN parallax calculation algorithm, a depth estimation network algorithm based on a multi-scale discrete convolution conditional random field and the like based on a parallel coplanar visible light image and thermal infrared image stereo pair. The method comprises the following specific steps:
(1) Preprocessing a visible light image and a thermal infrared image by adopting a SOBEL operator;
(2) Extracting information such as the shape, texture and color of the target from the image with a Faster R-CNN feature extraction network model;
(3) Image stitching is carried out by adopting a characteristic-based image stitching method;
(4) Calculating a cost (SAD, SSD, NCC) by adopting a 3D convolution cost aggregation algorithm;
(5) Completing stereo matching with the SGBM semi-global binocular stereo matching algorithm (SAD and SSD take the minimum value, NCC the maximum), and reconstructing and densifying the disparity with a WLS disparity filtering method;
(6) Calculating the disparity values of the binocular visible light image and the thermal infrared image with the SOFT ARGMIN disparity calculation model:

d* = Σ_{d=0..Dmax} d · σ(−c_d)

wherein d is the disparity, c_d is the predicted cost corresponding to disparity d, σ is the softmax normalization function, and D_max is the maximum disparity;
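The SOFT ARGMIN disparity regression can be sketched as follows (a minimal numpy illustration over a cost volume of shape (D_max+1, H, W); the max-subtraction for a numerically stable softmax is an implementation choice):

```python
import numpy as np

def soft_argmin(cost_volume):
    """Softmax over negated costs gives a per-pixel probability for each
    candidate disparity; the expected value yields a sub-pixel,
    differentiable disparity estimate."""
    neg = -cost_volume
    e = np.exp(neg - neg.max(axis=0, keepdims=True))   # stable softmax
    p = e / e.sum(axis=0, keepdims=True)
    d = np.arange(cost_volume.shape[0], dtype=float).reshape(-1, 1, 1)
    return (p * d).sum(axis=0)
```

Because the result is an expectation rather than a hard argmin, it is differentiable, which is why this operator is used at the end of learned stereo cost volumes.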
(7) Performing parallax refinement by adopting a parallax correction method based on K-means cluster image segmentation, and performing interpolation processing on a parallax image by adopting a parabolic interpolation method so as to ensure parallax continuity;
(8) Extracting depth information according to the binocular imaging principle with the binocular imaging distance calculation model:

Z = b · f / d

wherein Z is the depth information, b is the baseline distance between the two cameras, f is the camera focal length, and d is the target disparity.
Finally, based on the depth information, a depth map may be obtained.
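The matching-cost, winner-take-all and depth-extraction steps above can be sketched in simplified form as follows (a toy numpy illustration using plain SAD block matching as a stand-in for SGBM, without cost aggregation or WLS filtering; all names and parameters are illustrative):

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, win=1):
    """Winner-take-all SAD block matching: for each pixel, pick the disparity
    whose (2*win+1)^2-window sum of absolute differences is lowest."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        # box-filter the absolute differences over the matching window
        pad = np.pad(diff, win, mode='edge')
        sad = np.zeros_like(diff)
        for dy in range(2 * win + 1):
            for dx in range(2 * win + 1):
                sad += pad[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
        cost[d, :, d:] = sad
    return cost.argmin(axis=0).astype(float)

def depth_from_disparity(disp, f, b):
    """Binocular depth model Z = f*b/d for matched pixels (d > 0)."""
    return np.where(disp > 0, f * b / np.maximum(disp, 1e-9), np.inf)
```

SGBM additionally aggregates costs along multiple scanline directions with smoothness penalties, which is what makes it robust on low-texture smoke regions; the disparity-to-depth step is identical.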
As an example, calculating an infrared depth map from a binocular infrared polarized image includes: calculating infrared depth of field information according to a focal length and an image point vector of an infrared polarization imaging device for shooting binocular infrared polarization images; and generating an infrared depth map according to the depth information. Wherein, calculate the visible light depth map according to the binocular visible light image, include: calculating visible light depth information according to the binocular visible light image and the binocular imaging depth calculation model; and generating a visible light depth map according to the visible light depth information.
Specifically, the thermal infrared image depth of field information is calculated based on the focal length and image point vector of the polarized thermal infrared imager. The thermal infrared cameras of the binocular stereoscopic imaging pod system obtain coplanar thermal infrared polarized images. The SURF feature matching algorithm is adopted to match the images of the stereo pair. Based on the geometric relation of coplanar imaging at different focal lengths and the triangulation principle, the thermal infrared image depth of field information is calculated from the focal length and image point vector of the polarized thermal infrared imager; the imaging characteristics are shown in fig. 6. The focal length of the thermal infrared polarization imager can be varied by adjusting the physical position of the lens and may take multiple values; here a dual focal length approach is used.
The thermal infrared image depth of field calculation model is:
wherein f_1 is the smaller focal length of the infrared polarization thermal imager, f_2 is the larger focal length, d_1 is the vector magnitude between two object points in the coplanar plane space imaged at the smaller focal length, and d_2 is the vector magnitude at the larger focal length.
Specifically, S130 may further include: based on the image parallax data and the depth map, a visible light and thermal infrared three-dimensional fusion image of a smoke screen application scene is dynamically constructed and displayed in real time on a ground station integrated system. The method comprises the following specific steps:
(1) According to the principle of imaging-coplanar binocular stereoscopic vision, the three-dimensional coordinates of a spatial point P are calculated with the following model:

X = b · (u_1 − u_0) / d
Y = b · a_x · (v_1 − v_0) / (a_y · d)
Z = b · a_x / d

wherein (u_1, v_1) and (u_2, v_2) are the image pixel coordinates of the spatial point P on the same image pair, u_0, v_0, a_x and a_y are camera internal parameters, d = u_1 − u_2 is the target disparity, and b is the baseline distance between the two cameras;
(2) Using the above method, based on the imaging coplanar stereopair, calculating the space coordinates of each pixel point of the visible light image and the thermal infrared image, and generating three-dimensional point cloud data;
(3) Generating Delaunay triangulation based on a point-by-point insertion method of a Bowyer-Watson algorithm;
(4) Visible light and thermal infrared three-dimensional texture mapping. According to the principle of binocular stereoscopic vision, pixel points of a texture image and space points obtained through reconstruction have a one-to-one correspondence, the vertex of the Delaunay triangle can be directly matched with texture information from the texture image, and the textures of the sides and the surfaces of the Delaunay triangle are obtained by utilizing interpolation of the vertex textures.
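The vertex-texture interpolation over the sides and faces of a Delaunay triangle can be sketched with barycentric coordinates as follows (a minimal numpy illustration; the triangulation itself and the per-vertex texture samples are assumed to be available):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    w1 = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
    w2 = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
    return w1, w2, 1.0 - w1 - w2

def interp_texture(p, verts, vert_tex):
    """Texture value at p inside a triangle, interpolated from the texture
    values sampled at its three vertices."""
    w = barycentric(p, *verts)
    return sum(wi * ti for wi, ti in zip(w, vert_tex))
```

The same interpolation works for visible light colors and for thermal values, which is what lets both modalities share one Delaunay mesh.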
As an example, constructing a stereoscopic fusion image of a smoke curtain and an application scene thereof according to an infrared three-dimensional point cloud, an infrared depth map, a visible three-dimensional point cloud and a visible depth map, including: calculating initial illumination chromaticity of the binocular visible light image; correcting the binocular visible light image according to the initial illumination chromaticity to obtain a corrected visible light image; calculating a weighting coefficient according to the corrected visible light image; and updating the illumination chromaticity of the binocular visible light image according to the weighting coefficient.
Specifically, the method may include the steps of:
(1) Initial illumination chromaticity (Lx) estimation is performed for visible light image data:
wherein x is the pixel coordinate and P_c is the R, G or B pixel value at that point;
(2) Performing illumination estimation of the scene with the Grey Edge illumination chromaticity estimation algorithm based on higher-order derivative image structure and the Stephen Lin single-image camera response curve solving algorithm, and updating the illumination chromaticity of the binocular visible light image based on the illumination estimation result.
The present embodiment relates to constructing a three-dimensional visual scene (i.e., a virtual reality scene) based on binocular visible light images and thermal infrared polarized images. The three-dimensional visual scene requires illumination consistency to ensure that virtual objects are rendered under an illumination environment similar to that of the real scene, so that the virtual object surfaces show correct bright and dark areas, correct colors and other illumination effects. A core problem of illumination consistency is illumination chromaticity estimation. Directly taking the color of the light source in the image as the illumination chromaticity is not advisable, because the camera's sensitivity, exposure parameters and the like affect the light source color in the image; the scene illumination therefore needs to be estimated and the illumination chromaticity of the visible light image updated.
The illumination chromaticity adjustment can synchronously influence the brightness, shadow, texture and the like of the visible light image, and the image after the illumination chromaticity adjustment is used as a texture map to the surface of the three-dimensional object.
Assuming the input image is A, the initial illumination chromaticity is Lx, and the number of iteration rounds is C, illumination chromaticity estimation is carried out with a weighted Grey Edge algorithm based on photometric feature edges, as follows:
(1) correcting the image A by using illumination Lx to obtain a calibration image B;
(2) calculating weighting coefficients W of three edges, namely a mirror surface edge, a shadow edge and an object edge, through the image B;
(3) calculating illumination chromaticity Lx' by using a weighted Grey Edge algorithm;
(4) updating the illumination chromaticity Lx = Lx × Lx';
(5) updating the counter C = C − 1; if C is not zero, returning to (1).
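The iterative loop above can be sketched as follows (a minimal numpy illustration; the plain Grey-World estimate stands in for the weighted Grey Edge estimator, and the specular/shadow/object edge weight maps W are omitted, so this shows only the correct-estimate-accumulate structure of steps (1)-(5)):

```python
import numpy as np

def grey_world_chromaticity(img):
    """Per-channel illumination estimate (Grey-World assumption: the scene
    average is achromatic), normalized so the channel mean is 1."""
    m = img.reshape(-1, 3).mean(axis=0)
    return m / m.mean()

def iterative_illumination(img, rounds=3):
    """Iteratively: (1) correct image A with the current estimate Lx,
    (2)-(3) re-estimate the chromaticity Lx' on the corrected image B,
    (4) accumulate Lx = Lx * Lx', (5) repeat for the given rounds."""
    Lx = np.ones(3)
    A = img.astype(float)
    for _ in range(rounds):
        B = A / Lx                          # (1) calibration image
        Lxp = grey_world_chromaticity(B)    # (2)+(3) estimate on B
        Lx = Lx * Lxp                       # (4) update
    return Lx
```

At the fixed point the corrected image is achromatic on average, so the accumulated Lx is the estimated illumination chromaticity.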
As an example, S130 may further include: respectively generating an infrared triangular net and a visible light triangular net according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud; obtaining textures of triangle vertexes in an infrared triangle network from a binocular infrared polarized image, and obtaining textures of triangle vertexes in a visible triangle network from a binocular visible image; obtaining the textures of the sides and the faces of the triangles in the infrared triangular net according to the interpolation of the textures of the triangle vertices in the infrared triangular net, and obtaining the textures of the sides and the faces of the triangles in the visible triangular net according to the interpolation of the textures of the triangle vertices in the visible triangular net.
Specifically, mapping a binocular visible light image depth map to a three-dimensional space to obtain a three-dimensional point cloud, and generating Delaunay triangulation (triangulation network); the vertex of the Delaunay triangle can be directly matched with texture information from a visible light image, and the texture of the sides and the surfaces of the Delaunay triangle can be obtained by interpolation of the vertex texture. In addition, mapping the thermal infrared polarized image depth map to a three-dimensional space to obtain a three-dimensional point cloud, and generating Delaunay triangulation; the Delaunay triangle vertices directly match the texture information from the thermal infrared image and the texture of the sides and faces of the Delaunay triangle is obtained by interpolation of the vertex textures.
The texture is used to reflect the realism of the object surface. For example, a building constructed from the Delaunay triangulation generated from the three-dimensional point cloud is only a black-and-white shell formed by a series of small triangles, with many holes; texture mapping pastes photographs onto the model according to the corresponding resolution and spatial position, so that the model looks like a real house. Texturing projects the photo taken by the visible light camera, together with the thermal values of the corresponding thermal infrared image (the values on a thermal infrared image reflect the object's surface temperature and may be called thermal values), to the corresponding spatial positions, so that a commander can visually see which target is at which position, as well as its thermal infrared signature (thermal infrared detection and reconnaissance aim to find such targets): for example, the metal characteristics of a tank and the heat it generates in operation make the tank contour readily visible on a thermal infrared imager. Mapping the texture of the visible light image and the texture (thermodynamic diagram) of the thermal infrared image to the corresponding spatial positions to form a comprehensive three-dimensional stereoscopic scene is the fusion process.
Specifically, after S130, the method may further include: acquiring 3D information of the smoke screen application scene with a VoxelNet backbone 3D network model. This realizes multi-modal semantic fusion of the visible light and thermal infrared images, makes full use of the respective strengths of the high-definition visible light camera and the thermal infrared polarization imager in data acquisition, and enables rapid perception of target 3D information. The main processing steps are as follows:
(1) extracting a bird's-eye-view feature map from the three-dimensional point cloud generated from the depth maps with a VoxelNet backbone 3D network; since unmanned aerial vehicle remote sensing monitoring is used, the bird's-eye view is generally adopted, and this macroscopic view allows the features of the whole monitored scene to be analyzed;
(2) acquiring a thermodynamic diagram, a target size, a target shape, a target direction and a target detection frame of the center position of the target object from the characteristic diagram generated in the step (1) through a thermal infrared image output head and a numerical regression output head;
(3) based on the feature map obtained in step (1) and the target detection frame obtained in step (2), intelligently identifying the center points of objects around the target detection frame, and extracting their point features on the feature map. The surrounding surface objects do not refer to a specific class and are determined by the actual scene; for example, when a smoke screen is released to protect a position, the tanks, armored vehicles and ammunition depots deployed on the position can all be regarded as surface objects.
(4) Based on the point features obtained in step (3), training the VoxelNet backbone 3D network on the point feature data with binary cross-entropy loss, obtaining the confidence of the target detection frame through a fully connected layer, and thereby obtaining more accurate 3D information of the target object. The 3D information is obtained by training and analysis of the VoxelNet backbone 3D network model on the feature point data set, and the confidence is one of the model's outputs. The 3D information may refer to the smoke screen over the protected position and the contour, spatial position, distance from the unmanned aerial vehicle, azimuth and the like of detectable targets (such as tanks, armored vehicles, command posts); the core is obtaining the spatial positions of the feature points of the target object (the object inside the target detection frame). The specific formula is as follows:
wherein the left-hand side is the confidence value of the analysis result, and I_t is the intersection-over-union between the target detection result and the ground-truth value.
S140, analyzing the effect of the smoke screen according to the stereoscopic fusion image.
Specifically, based on the visual fusion scenes of the monitored targets before and after smoke release, the shielding effect of the smoke screen is analyzed: if after smoke release only the depth data of the smoke layer can be observed, and the depth data of the target objects below the smoke layer cannot, the shielding effect is good; if after smoke release the depth map still clearly shows the depth information and contours of the target objects below the smoke shielding layer, the shielding effect is poor.
Specifically, after the smoke release exercise ends, an optimization method is adopted to construct the three-dimensional visible light and thermal infrared stereoscopic scene of the smoke application, and the commander judges the smoke shielding effect from the visual scene (mainly by eye). If the ground feature contours below the smoke shielding layer can be seen in the depth map, the shielding effect is poor; if the depth data in the depth map change little and mainly reflect the smoke shielding layer rather than the ground objects below it, the smoke shielding effect is good.
As an example, after analyzing the effect of the smoke screen according to the stereoscopic fusion image, it further includes: calculating the front apparent temperature of the shielding target before the application of the smoke screen, the rear apparent temperature of the shielding target after the application of the smoke screen, the apparent temperature of the smoke screen and the ambient temperature according to the binocular infrared polarized image; and calculating the infrared shielding rate of the smoke curtain according to the target front apparent temperature, the target rear apparent temperature, the smoke curtain apparent temperature and the ambient environment temperature.
In this embodiment, the apparent temperature may refer to the temperature measured by the infrared imager, which is not necessarily the actual temperature but the temperature of an equivalent blackbody with the same radiance as seen by the thermal imager. In this embodiment, the ambient temperature may be obtained not with a thermometer but from the thermal image, by averaging after removing the highest and lowest values.
Specifically, the smoke curtain thermal infrared shielding rate calculation may operate as follows.
The thermal infrared shielding rate of the smoke screen indicates the change in target detectability before and after smoke shielding, and is an important index for evaluating the smoke screen effect. Based on the test principle of the single-blackbody temperature comparison method, a smoke screen transmittance calculation model is constructed; from the apparent temperatures of the reconnaissance target before and after smoke application (T_m, T_c), the apparent temperature of the smoke screen (T_h) and the ambient temperature (T_b), combined with the band radiation exitance calculation method, the thermal infrared shielding rate of the smoke screen can be calculated. The calculation formula of the thermal infrared shielding rate of the smoke screen is as follows:
wherein ε is the scene emissivity set on the thermal imager, and M(T) is the band radiation exitance, calculated as:

M(T) = ∫_{λ_1}^{λ_2} c_1 · λ^(−5) / (exp(c_2/(λT)) − 1) dλ

wherein λ_1, λ_2 are the lower and upper limits of the corresponding band range, c_1 is the first radiation constant, 3.7415×10^8 W·m^-2·µm^4, and c_2 is the second radiation constant, 1.439×10^4 µm·K.
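The band radiation exitance can be evaluated numerically from the constants given above as follows (a minimal numpy sketch; the 8-14 µm default band is an assumed long-wave infrared range, not stated in the text):

```python
import numpy as np

C1 = 3.7415e8   # first radiation constant, W·m^-2·µm^4
C2 = 1.439e4    # second radiation constant, µm·K

def band_exitance(T, lam1=8.0, lam2=14.0, n=2001):
    """Blackbody band radiation exitance over [lam1, lam2] micrometres:
    Planck's law M_lambda = C1 / (lam^5 * (exp(C2/(lam*T)) - 1)),
    integrated with the trapezoidal rule."""
    lam = np.linspace(lam1, lam2, n)
    m = C1 / (lam ** 5 * np.expm1(C2 / (lam * T)))
    return float(np.sum((m[1:] + m[:-1]) * np.diff(lam)) / 2.0)
```

Evaluating this at the four measured apparent temperatures supplies the exitance terms needed by the shielding rate model.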
As an example, after analyzing the effect of the smoke screen according to the stereoscopic fusion image, it further includes: acquiring a plurality of binocular infrared polarized images which are continuously shot to form an infrared polarized image sequence; and calculating the effect characteristic quantity and the motion characteristic quantity of the smoke curtain according to the infrared polarized image sequence.
In this embodiment, the infrared polarized image sequence may use the images of the left field of view infrared polarized camera or those of the right field of view camera. Because the left and right fields of view are parallel with a high degree of overlap, it is also possible to average the corresponding image point data of the left and right fields of view into a new image and then perform the differential processing.
Specifically, the smoke application effect characteristic amount and the movement characteristic amount can be calculated as follows.
Based on the thermal infrared time-series image pairs, a five-frame temporal differencing method is used to obtain the absolute differences between the middle smoke infrared image and the four adjacent frames; these are binarized with a preset smoke threshold (or a threshold extracted from a reference smoke image). The pixels of the smoke application effect characteristic quantity are obtained with the logical AND operation, determining the effect characteristic region, and the smoke application motion characteristic region is obtained with the logical difference operation. Combining these regions with the spatial position information of the pixels determined in the thermal infrared three-dimensional reconstruction yields the distribution and motion characteristics of the smoke screen in three-dimensional space; from the height and width represented by each pixel, the smoke application effect characteristic quantities (smoke width, smoke height, smoke shielding area, shielding duration) and motion characteristic quantities (diffusion speed in the vertical and horizontal directions) are then calculated. The main calculation formulas are as follows:
wherein f_m(x, y) is the pixel value at position (x, y) of the middle frame of the smoke infrared image sequence, f_{m−i}(x, y) is the pixel value at position (x, y) of the adjacent frame offset by i, i ∈ {±1, ±2}, and T is the smoke threshold.

Smoke application effect characteristic quantity region D_m: the logical AND of the four binarized difference images |f_m − f_{m−i}| > T.

Smoke application motion characteristic quantity region D_motion: the logical difference of the binarized difference images with respect to D_m.
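The five-frame differencing can be sketched as follows (a minimal numpy illustration; interpreting the logical "difference" as pixels changed in some but not all of the four comparisons is an assumption about the operation the text names):

```python
import numpy as np

def five_frame_masks(frames, T):
    """Five-frame temporal differencing: binarize |middle - neighbour| for the
    four neighbours of the middle frame, then combine by logical AND (stable
    smoke region D_m) and by logical difference (moving region D_motion).
    frames: list of 5 equally sized 2-D arrays, middle frame at index 2."""
    f_m = frames[2].astype(float)
    masks = [np.abs(f_m - frames[2 + i].astype(float)) > T
             for i in (-2, -1, 1, 2)]
    d_m = masks[0] & masks[1] & masks[2] & masks[3]
    d_any = masks[0] | masks[1] | masks[2] | masks[3]
    d_motion = d_any & ~d_m        # changed in some frames but not all
    return d_m, d_motion
```

Counting the pixels of each mask, scaled by the per-pixel ground width and height from the three-dimensional reconstruction, gives the shielding area and the diffusion speeds.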
A smoke effect monitoring method according to another embodiment of the present application is described below with reference to fig. 2 to 5.
According to the present embodiment, the smoke effect monitoring method includes steps 1 to 11, and each step is described in detail below.
(1) A smoke curtain applying array is arranged.
(2) Ground station preparation: opening ground station software installed on a ground station integrated system (see fig. 4), loading a mission area map, planning a mission route, landing routes, aerial photography intervals, flight control preparation and the like; the pod software is opened.
(3) Pre-takeoff inspection and confirmation: placing the unmanned aerial vehicle (see fig. 2) at the departure point, removing the pitot tube cover, and confirming that the nose faces the direction of the start point of the planned route; checking whether any part of the unmanned aerial vehicle is damaged and whether it is stable.
(4) The unmanned aerial vehicle takes off and prepares for monitoring the application of smoke curtain: and (3) the ground station integrated system operator sends a take-off instruction, the unmanned aerial vehicle platform executes a flight task according to a planning task, the storage cabin is opened, the binocular stereoscopic imaging nacelle system (see figure 3) is electrified, and the visible light image and the thermal infrared image are started to be acquired and transmitted to the ground station in real time.
(5) Smoke curtain application: and starting a smoke car or a smoke bullet arranged on the smoke discharge array to discharge the smoke.
(6) And (3) smoke screen shielding effect data acquisition and transmission: the binocular stereoscopic imaging pod system acquires two paths of thermal infrared images and two paths of visible light images in real time (see fig. 5); and transmitting the data to the ground station integrated platform in real time through a matched high-definition image data transmission link.
(7) The three-dimensional situation of the smoke shielding effect is displayed, and the smoke shielding effect can be displayed in the following two modes.
Mode 1: and displaying the three-dimensional situation of the visible light of the smoke screen shielding effect. Based on SLAM video image rapid splicing algorithm, combining POS data, topographic data and map data, and displaying the three-dimensional visible light situation of the smoke screen in real time.
Mode 2: and displaying the three-dimensional fusion situation of the smoke screen shielding effect visible light and thermal infrared. The method comprises the following specific steps:
preprocessing a visible light image and a thermal infrared image by adopting a SOBEL operator;
extracting information such as the shape, texture and color of the target from the image with a Faster R-CNN feature extraction network model;
image stitching is carried out by adopting a characteristic-based image stitching method;
calculating a cost (SAD, SSD, NCC) by adopting a 3D convolution cost aggregation algorithm;
completing stereo matching with the SGBM semi-global binocular stereo matching algorithm (SAD and SSD take the minimum value, NCC the maximum), and reconstructing and densifying the disparity with a WLS disparity filtering method;
And calculating the parallax value of the binocular visible light image and the thermal infrared image by adopting a SOFT ARGMIN parallax calculation model.
Performing parallax refinement by adopting a parallax correction method based on K-means cluster image segmentation, and performing interpolation processing on a parallax image by adopting a parabolic interpolation method so as to ensure parallax continuity;
extracting distance information by adopting a binocular imaging distance calculation model according to a binocular imaging principle;
according to the principle of imaging coplanar binocular stereoscopic vision, based on the imaging coplanar stereoscopic pair, calculating the space coordinates of each pixel point of the visible light image and the thermal infrared image, and generating three-dimensional point cloud data;
generating Delaunay triangulation based on a point-by-point insertion method of a Bowyer-Watson algorithm;
and performing visible light and thermal infrared three-dimensional texture mapping based on the Delaunay triangle subdivision vertex texture interpolation method.
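The cost computation and winner-take-all selection underlying the stereo matching steps above can be sketched in pure NumPy. This is only a minimal illustration of SAD block matching, not the patent's implementation: the actual pipeline uses SGBM semi-global aggregation with WLS disparity filtering, and the function name, window size and disparity range below are illustrative assumptions.

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=1):
    """Winner-take-all disparity from SAD matching costs (SAD takes the
    minimum, per the stereo matching step above). left/right are rectified
    grayscale arrays; a scene point at column x in the left image appears
    at column x - d in the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            l_patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = []
            for d in range(max_disp + 1):
                r_patch = right[y - win:y + win + 1,
                                x - d - win:x - d + win + 1]
                # Sum of absolute differences over the matching window.
                costs.append(np.abs(l_patch - r_patch).sum())
            disp[y, x] = int(np.argmin(costs))  # minimum SAD cost wins
    return disp
```

In the full pipeline this per-pixel winner-take-all result would be replaced by SGBM's semi-global aggregation and then refined with WLS filtering and the K-means-based disparity correction described above.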
(8) The smoke screen release ends.
(9) The unmanned aerial vehicle subsystem platform returns according to the planned landing plan; the binocular stereoscopic imaging pod system closes its image acquisition function; the related power supplies of the unmanned aerial vehicle subsystem, the binocular stereoscopic imaging pod system and the ground station integrated system are switched off, and the equipment is boxed for storage.
(10) After the remote sensing monitoring data of the smoke screen release effect are acquired, the related image data are transmitted to a server through a WAN interface or a USB interface for backup management.
(11) The smoke screen release effect is analyzed and evaluated with the ground station software installed on the ground station integrated system, whose main functions are as follows:
Smoke screen thermal infrared shielding rate calculation: based on the single blackbody temperature comparison test principle, the smoke screen thermal infrared shielding rate is computed with a smoke screen thermal infrared shielding rate calculation model, from the apparent temperature of the detection target, the apparent temperature of the smoke screen and the ambient temperature obtained by the thermal infrared imager before and after the smoke screen is released, combined with a band radiant exitance calculation method.
Smoke screen release effect characteristic quantity calculation (smoke screen width, smoke screen height, smoke screen shielding area and shielding time): based on the thermal infrared time-series image pairs, a five-frame temporal difference method obtains the absolute differences between the middle smoke screen infrared image and its four adjacent frames; the differences are binarized with a set smoke screen threshold, and a logical AND operation yields the pixels of the smoke screen release effect characteristic quantities, determining the characteristic quantity region. Combined with the pixel spatial position information determined in the thermal infrared three-dimensional reconstruction, the distribution of the smoke screen in three-dimensional space is obtained, and the characteristic quantities are then computed from the height and width of each pixel.
Smoke screen release motion characteristic quantity calculation (diffusion speed in the vertical and horizontal directions): a logical difference operation yields the smoke screen release motion characteristic quantity region; combined with the pixel spatial position information determined in the thermal infrared three-dimensional reconstruction, the distribution and motion characteristics of the smoke screen in three-dimensional space are obtained, and the motion characteristic quantities (vertical and horizontal diffusion speeds) are computed from the height and width of each pixel.
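The five-frame temporal differencing, thresholding and logical AND described above can be sketched as follows; the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def five_frame_smoke_mask(frames, thresh):
    """Binary smoke screen mask from five consecutive thermal infrared
    frames: absolute differences between the middle frame and each of its
    four neighbours are binarized with a threshold, then combined with a
    logical AND so only pixels that changed relative to every neighbour
    survive."""
    assert len(frames) == 5
    mid = frames[2].astype(np.float64)
    mask = np.ones(mid.shape, dtype=bool)
    for i, frame in enumerate(frames):
        if i == 2:
            continue
        diff = np.abs(mid - frame.astype(np.float64))
        mask &= diff > thresh  # binarize, then logical AND
    return mask
```

The pixel count of the mask, scaled by the per-pixel ground footprint from the three-dimensional reconstruction, gives the shielding area; a logical difference between masks of successive windows isolates the motion region used for the diffusion-speed estimates.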
Fig. 7 shows a schematic structural diagram of a smoke screen effect monitoring device according to an embodiment of the present application.
According to the present embodiment, the smoke screen effect monitoring device 700 includes: an acquisition module 710 configured to acquire a binocular infrared polarized image and a binocular visible light image of a smoke screen and its application scene; a computing module 720 configured to compute an infrared three-dimensional point cloud from the binocular infrared polarized image and a visible light three-dimensional point cloud from the binocular visible light image; a construction module 730 configured to construct a stereoscopic fusion image of the smoke screen from the infrared three-dimensional point cloud and the visible light three-dimensional point cloud; and an analysis module 740 configured to analyze the effect of the smoke screen and its application scene from the stereoscopic fusion image.
In an embodiment, the computing module is further configured to:
According to parallax information of the binocular infrared polarized image, calculating three-dimensional coordinates of each pixel point of the binocular infrared polarized image;
constructing an infrared three-dimensional point cloud according to the three-dimensional coordinates;
wherein the computing module is further configured to:
according to parallax information of the binocular visible light image, calculating three-dimensional coordinates of each pixel point of the binocular visible light image;
and constructing a visible light three-dimensional point cloud according to the three-dimensional coordinates.
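The disparity-to-coordinates step performed by the computing module can be sketched with the standard rectified-binocular back-projection Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. This is the generic textbook model, assumed here because the patent does not spell out its equations; the function name and parameters are illustrative.

```python
import numpy as np

def pixels_to_3d(disp, f, B, cx, cy):
    """Back-project every pixel of a disparity map to 3-D camera
    coordinates. disp: disparity map in pixels; f: focal length in
    pixels; B: baseline in metres; (cx, cy): principal point."""
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    with np.errstate(divide="ignore"):
        # Depth from disparity; zero-disparity pixels map to infinity.
        Z = np.where(disp > 0, f * B / disp, np.inf)
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)  # (h, w, 3) point cloud grid
```

The same routine serves both the infrared and the visible light point clouds, with each camera pair's own calibration parameters.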
In an embodiment, the construction module is further configured to:
respectively generating an infrared triangular net and a visible light triangular net according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud;
obtaining textures of triangle vertexes in an infrared triangle network from a binocular infrared polarized image, and obtaining textures of triangle vertexes in a visible triangle network from a binocular visible image;
obtaining the textures of the sides and the faces of the triangles in the infrared triangular net according to the interpolation of the textures of the triangle vertices in the infrared triangular net, and obtaining the textures of the sides and the faces of the triangles in the visible triangular net according to the interpolation of the textures of the triangle vertices in the visible triangular net.
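One concrete way to realise this vertex-texture interpolation over each triangle is barycentric interpolation; the sketch below is a generic implementation offered as an assumption, since the patent does not specify its interpolation scheme.

```python
import numpy as np

def barycentric_texture(p, tri, vert_tex):
    """Interpolate a texture value at point p inside triangle tri from the
    three vertex texture values, using barycentric weights."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    p = np.asarray(p, dtype=float)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    # Barycentric weights w0, w1, w2 sum to 1 inside the triangle.
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    return w0 * vert_tex[0] + w1 * vert_tex[1] + w2 * vert_tex[2]
```

Evaluating this at points along an edge interpolates the edge texture; evaluating it over the interior fills the face, which matches the edge-and-face interpolation described above.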
In an embodiment, the apparatus further comprises: a second calculation module for calculating an infrared depth map according to the binocular infrared polarized image and a visible light depth map according to the binocular visible light image. Wherein the construction module is further configured to:
And constructing a stereoscopic fusion image of the smoke screen and its application scene according to the infrared three-dimensional point cloud, the infrared depth map, the visible light three-dimensional point cloud and the visible light depth map.
In an embodiment, the second computing module is further configured to:
calculating infrared depth of field information according to a focal length and an image point vector of an infrared polarization imaging device for shooting binocular infrared polarization images;
and generating an infrared depth map according to the infrared depth of field information.
Wherein the second computing module is further configured to:
calculating visible light depth information according to the binocular visible light image and the binocular imaging depth calculation model;
and generating a visible light depth map according to the visible light depth information.
In an embodiment, the apparatus further comprises: a third calculation module for calculating, according to the binocular infrared polarized image, the target front apparent temperature of the shielded target before the smoke screen is released, the target rear apparent temperature of the shielded target after the smoke screen is released, the smoke screen apparent temperature of the smoke screen, and the ambient temperature; and a fourth calculation module for calculating the thermal infrared shielding rate of the smoke screen according to the target front apparent temperature, the target rear apparent temperature, the smoke screen apparent temperature and the ambient temperature.
In an embodiment, the apparatus further comprises: a second acquisition module for acquiring a plurality of continuously shot binocular infrared polarized images to form an infrared polarized image sequence; and a fifth calculation module for calculating the shielding effect characteristic quantity and the motion characteristic quantity of the smoke screen according to the infrared polarized image sequence.
An electronic device according to an embodiment of the present application is described below with reference to fig. 8.
As shown in fig. 8, electronic device 800 includes one or more processors 810 and memory 820.
The processor 810 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 800 to perform desired functions.
Memory 820 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 810 to implement the smoke screen effect monitoring methods of the various embodiments of the present application described above and/or other desired functions.
In one example, the electronic device 800 may further include: an input device 830 and an output device 840, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 830 may be a microphone or an array of microphones for capturing a voice input signal; a communication network connector for receiving the acquired input signal from the cloud or other device; and may also include, for example, a keyboard, mouse, etc.
The output device 840 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 840 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 800 relevant to the present application are shown in fig. 8; components such as buses and input/output interfaces are omitted. In addition, the electronic device 800 may include any other suitable components depending on the particular application.
Embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the smoke screen effect monitoring method according to the various embodiments of the present application described hereinabove.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The concepts and principles of the application have been described above in connection with specific embodiments (including examples and illustrations). It will be appreciated by those skilled in the art that embodiments of the present application are not limited to the forms set forth above, and that after reading the present application, those skilled in the art may make any possible modifications, substitutions and equivalents to the steps, methods, apparatuses and components of the above embodiments, all of which are intended to be within the scope of the present application. The protection scope of the present application is subject only to the claims.

Claims (10)

1. A smoke screen effect monitoring method, comprising:
acquiring a binocular infrared polarized image and a binocular visible light image of the smoke screen and a scene applied by the smoke screen;
calculating an infrared three-dimensional point cloud according to the binocular infrared polarized image, and calculating a visible light three-dimensional point cloud according to the binocular visible light image;
constructing a stereoscopic fusion image of the smoke screen and its application scene according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud;
and analyzing the effect of the smoke screen according to the stereoscopic fusion image.
2. The smoke screen effect monitoring method of claim 1, wherein said calculating an infrared three-dimensional point cloud from said binocular infrared polarized image comprises:
according to the parallax information of the binocular infrared polarized image, calculating the three-dimensional coordinates of each pixel point of the binocular infrared polarized image;
constructing an infrared three-dimensional point cloud according to the three-dimensional coordinates;
wherein the calculating visible light three-dimensional point cloud according to the binocular visible light image includes:
according to the parallax information of the binocular visible light image, calculating the three-dimensional coordinates of each pixel point of the binocular visible light image;
and constructing a visible light three-dimensional point cloud according to the three-dimensional coordinates.
3. The smoke screen effect monitoring method according to claim 1, wherein the constructing a stereoscopic fusion image of the smoke screen and its application scene according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud comprises:
respectively generating an infrared triangular net and a visible light triangular net according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud;
acquiring textures of triangle vertexes in the infrared triangle network from the binocular infrared polarized image, and acquiring textures of triangle vertexes in the visible light triangle network from the binocular visible light image;
obtaining the textures of the sides and the faces of the triangles in the infrared triangular net according to the interpolation of the textures of the triangle vertices in the infrared triangular net, and obtaining the textures of the sides and the faces of the triangles in the visible triangular net according to the interpolation of the textures of the triangle vertices in the visible triangular net.
4. The smoke screen effect monitoring method of claim 1, further comprising, after said acquiring a binocular infrared polarized image and a binocular visible light image of the smoke screen and its application scene:
calculating an infrared depth map according to the binocular infrared polarized image, and calculating a visible light depth map according to the binocular visible light image;
wherein the constructing a stereoscopic fusion image of the smoke screen and its application scene according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud comprises:
and constructing a stereoscopic fusion image of the smoke curtain and the application scene thereof according to the infrared three-dimensional point cloud, the infrared depth map, the visible light three-dimensional point cloud and the visible light depth map.
5. The smoke screen effect monitoring method of claim 4, wherein said calculating an infrared depth map from said binocular infrared polarized image comprises:
calculating infrared depth of field information according to the focal length and image point vector of an infrared polarization imaging device for shooting the binocular infrared polarization image;
generating the infrared depth map according to the infrared depth of field information;
wherein the calculating a visible light depth map according to the binocular visible light image includes:
calculating visible light depth information according to the binocular visible light image and the binocular imaging depth calculation model;
and generating the visible light depth map according to the visible light depth information.
6. The smoke screen effect monitoring method according to claim 1, further comprising, after the analyzing the effect of the smoke screen according to the stereoscopic fusion image:
calculating, according to the binocular infrared polarized image, the target front apparent temperature of the shielded target before the smoke screen is released, the target rear apparent temperature of the shielded target after the smoke screen is released, the smoke screen apparent temperature of the smoke screen, and the ambient temperature;
and calculating the infrared shielding rate of the smoke screen according to the target front apparent temperature, the target rear apparent temperature, the smoke screen apparent temperature and the ambient temperature.
7. The smoke screen effect monitoring method according to claim 1, further comprising, after the analyzing the effect of the smoke screen according to the stereoscopic fusion image:
acquiring a plurality of binocular infrared polarized images which are continuously shot to form an infrared polarized image sequence;
and calculating the shielding effect characteristic quantity and the motion characteristic quantity of the smoke screen according to the infrared polarized image sequence.
8. A smoke screen effect monitoring device, comprising:
the acquisition module is used for acquiring binocular infrared polarized images and binocular visible light images of the smoke screen and the application scene of the smoke screen;
the computing module is used for computing an infrared three-dimensional point cloud according to the binocular infrared polarized image and computing a visible light three-dimensional point cloud according to the binocular visible light image;
the construction module is used for constructing a stereoscopic fusion image of the smoke screen according to the infrared three-dimensional point cloud and the visible light three-dimensional point cloud;
and the analysis module is used for analyzing the effect of the smoke screen and its application scene according to the stereoscopic fusion image.
9. An electronic device, comprising:
a processor;
a memory;
an application program stored in the memory and configured to be executed by the processor, the application program comprising instructions for performing the smoke screen effect monitoring method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program for executing the smoke screen effect monitoring method according to any one of claims 1 to 7.
CN202311105669.2A 2023-08-30 2023-08-30 Smoke screen effect monitoring method and device, electronic equipment and storage medium Pending CN117309856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311105669.2A CN117309856A (en) 2023-08-30 2023-08-30 Smoke screen effect monitoring method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117309856A true CN117309856A (en) 2023-12-29

Family

ID=89283828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311105669.2A Pending CN117309856A (en) 2023-08-30 2023-08-30 Smoke screen effect monitoring method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117309856A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010036026A1 (en) * 2010-08-31 2012-03-01 Rheinmetall Waffe Munition Gmbh Smoke screen effectiveness determining device for protecting e.g. military platform, has measuring sensor system connected with data processing unit, and data processing algorithms provided for analysis of effectiveness of smoke screen
CN115471534A (en) * 2022-08-31 2022-12-13 华南理工大学 Underwater scene three-dimensional reconstruction method and equipment based on binocular vision and IMU


Non-Patent Citations (2)

Title
A. C. HUA ET AL.: "Simulation of IR smoke screen based on physical model", 2018 IEEE CSAA GUIDANCE, NAVIGATION AND CONTROL CONFERENCE (CGNCC), 12 August 2018 (2018-08-12), pages 1 - 6, XP033729448, DOI: 10.1109/GNCC42960.2018.9018683 *
XU SHILONG ET AL.: "Laser point cloud extension and identification method for obscured targets based on time-spectral information", Infrared and Laser Engineering, vol. 52, no. 6, 30 June 2023 (2023-06-30), pages 1 - 9 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN117740186A (en) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method and device and computer equipment
CN117740186B (en) * 2024-02-21 2024-05-10 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method and device and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination