CN111652276B - All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method - Google Patents


Info

Publication number
CN111652276B
CN111652276B · Application CN202010368325.0A
Authority
CN
China
Prior art keywords
module
target
positioning
polarized light
attitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010368325.0A
Other languages
Chinese (zh)
Other versions
CN111652276A (en)
Inventor
白宏阳
梁华驹
郭宏伟
郑浦
李政茂
周育新
胡珂
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010368325.0A priority Critical patent/CN111652276B/en
Publication of CN111652276A publication Critical patent/CN111652276A/en
Application granted granted Critical
Publication of CN111652276B publication Critical patent/CN111652276B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an all-weather portable multifunctional bionic positioning and attitude-determining viewing system and method. The system comprises a laser range finder, a visible light detector, an infrared detector, a polarized light detector, a bionic polarized light navigation sensor, an attitude navigation module, a Beidou positioning module, an intelligent image processing board, a small display screen, a wireless module and a power module. The distance to a target is obtained through the laser range finder; the intelligent image processing board fuses the information from the visible light, infrared and polarized light detectors to extract target features and intelligently detect targets; the position and attitude information of the viewing system is calculated through the Beidou positioning module, the attitude navigation module and the bionic polarized light navigation sensor; the target detection result and the positioning and attitude measurement result are observed on line through the display, and can also be transmitted to other external terminals through the wireless module; the power module provides power conversion for the system.

Description

All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method
Technical Field
The invention belongs to the technical field of sighting, and particularly relates to an all-weather portable multifunctional bionic positioning and attitude-determining sighting system and method.
Background
A viewing system is a device that identifies, tracks and aims at targets and outputs information such as target position. Current viewing systems are essentially single-source: they rely on only one of visible, infrared or polarized image data. Each data source has its advantages for target detection, but each is also limited in both application scenarios and the types of targets it can detect. Visible light images offer finer texture and contour features and stronger semantic information, while infrared and polarized light images help reveal targets of interest against complex backgrounds. Fusing and sharing information from these data sources therefore enables the detector to accomplish target detection under all-weather, complex-background conditions, which is of great significance.
Meanwhile, target detection based on deep convolutional neural networks is currently the mainstream approach, but these methods are mainly developed on ground-based GPU computing platforms and have very large network scale, computation cost and power consumption, which makes them hard to deploy on resource-constrained mobile and embedded platforms. Realizing a lightweight deep neural network with small scale and low computational cost is therefore important.
Disclosure of Invention
The invention aims to provide an all-weather portable multifunctional bionic positioning and attitude-determining viewing system and method that are simple in structure and high in detection precision.
The technical solution for realizing the purpose of the invention is as follows: an all-weather portable multifunctional bionic positioning and attitude-determining viewing system comprises a laser range finder, a visible light detector, an infrared detector, a polarized light detector, a bionic polarized light navigation sensor, an attitude navigation module, a Beidou positioning module, an intelligent image processing board, a display screen, a wireless module and a power module;
the intelligent image processing board fuses the images acquired by the visible light, infrared and polarized light detectors, extracts target features and target candidate detection frames under all-weather conditions, and classifies the targets in the detection frames to generate a final detection result; the laser range finder outputs the distance from a target to the viewing system; the Beidou positioning module and the attitude navigation module resolve the system's current absolute position, speed, pitch angle and roll angle; the bionic polarized light navigation sensor resolves the system's current heading; the display screen, arranged at the upper end of the system, allows on-line real-time observation of the target detection result and the positioning and attitude measurement result; the wireless module can transmit the target detection result and the positioning and attitude measurement result to other external terminals.
An all-weather portable multifunctional bionic positioning and attitude-determining viewing method comprises the following steps:
step 1, information fusion is carried out between the Beidou positioning module and the attitude navigation module to determine the position, speed, pitch angle and roll angle of the viewing system; the bionic polarized light navigation sensor determines the heading angle of the viewing system;
step 2, image information in different wave bands detected by the visible light, infrared and polarized light detectors is fused through the intelligent image processing board, and target detection and target identification are carried out;
step 3, monocular ranging is carried out on the identified stealth target through the visible light detector, and the position of the target is determined;
step 4, the detected target information and the measured distance are displayed in real time;
step 5, the detected target and the measured distance are encrypted throughout transmission and sent to the dedicated encrypted mobile terminal through the wireless module.
Compared with the prior art, the invention has the following remarkable advantages: (1) visible light, infrared and polarization features can be detected simultaneously, so the target can be located and identified more accurately; (2) a lightweight deep neural network model is adopted, which greatly reduces the network parameter scale and computation while maintaining accuracy, occupies few computing resources, and runs in real time; (3) sharing candidate detection frame positions generated from multi-source heterogeneous data enables effective detection of targets against various complex backgrounds, including stealth targets.
Drawings
Fig. 1 is a schematic structural diagram of the all-weather portable multifunctional bionic positioning and attitude-determining viewing system.
Fig. 2 is a flow chart of the all-weather portable multifunctional bionic positioning and attitude-determining viewing method.
Detailed Description
An all-weather portable multifunctional bionic positioning and attitude-determining viewing system comprises a laser range finder, a visible light detector, an infrared detector, a polarized light detector, a bionic polarized light navigation sensor, an attitude navigation module, a Beidou positioning module, an intelligent image processing board, a display screen, a wireless module and a power module;
the intelligent image processing board fuses the images acquired by the visible light, infrared and polarized light detectors, extracts target features and target candidate detection frames under all-weather conditions, and classifies the targets in the detection frames to generate a final detection result; the laser range finder outputs the distance from a target to the viewing system; the Beidou positioning module and the attitude navigation module resolve the system's current absolute position, speed, pitch angle and roll angle; the bionic polarized light navigation sensor resolves the system's current heading; the display screen, arranged at the upper end of the system, allows on-line real-time observation of the target detection result and the positioning and attitude measurement result; the wireless module can transmit the target detection result and the positioning and attitude measurement result to other external terminals.
Further, the laser range finder is connected with the intelligent image processing board through an RS422 serial port; the visible light detector, the infrared detector and the polarized light detector are each connected with the intelligent image processing board through an SDI port; the bionic polarized light navigation sensor is connected with the intelligent image processing board through Ethernet; the attitude navigation module, the Beidou positioning module and the wireless module are each connected with the intelligent image processing board through an RS232 serial port; the power module supplies power to each sub-module.
Furthermore, the visible light, infrared and polarized light detectors are built with CMOS cameras; the detection distance reaches 2 km and the camera diameter is 36 mm. The polarized light detector can observe targets at four polarization angles of 0°, 45°, 90° and 135° by rotating the polarizer at the front end of the polarization camera.
Furthermore, the bionic polarized light navigation sensor imitates the biological directional navigation principle: an array-type liquid crystal polarization camera faces the zenith and performs autonomous heading calculation by collecting the atmospheric polarization pattern, obtaining the angle between the system and geographic north, i.e. the heading angle.
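As an illustration of the heading principle just described, the angle of polarization can be recovered from intensities measured behind four polarizer orientations via the Stokes parameters. The sketch below is a minimal, hypothetical example (the function name and simulated readings are not from the patent; turning the angle of polarization into a geographic heading additionally requires the solar azimuth and resolution of a 90° ambiguity):

```python
import math

def aop_from_four_angles(i0, i45, i90, i135):
    """Angle of polarization (rad) from intensities behind 0/45/90/135-degree polarizers."""
    s1 = i0 - i90    # Stokes parameter S1
    s2 = i45 - i135  # Stokes parameter S2
    return 0.5 * math.atan2(s2, s1)

# Simulate readings for a known angle of polarization of 30 degrees:
# I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))
aop_true = math.radians(30.0)
dop, s0 = 0.6, 1.0  # assumed degree of polarization and total intensity
s1 = dop * s0 * math.cos(2 * aop_true)
s2 = dop * s0 * math.sin(2 * aop_true)
readings = {th: 0.5 * (s0 + s1 * math.cos(2 * math.radians(th))
                          + s2 * math.sin(2 * math.radians(th)))
            for th in (0, 45, 90, 135)}
aop = aop_from_four_angles(readings[0], readings[45], readings[90], readings[135])
```

The recovered angle matches the simulated one, confirming the Stokes-parameter relation.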
Further, the attitude module comprises a triaxial MEMS gyroscope and an accelerometer, from which the pitch angle and roll angle of the viewing system are calculated.
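A minimal sketch of the static pitch/roll computation from an accelerometer's gravity reading follows. These are the standard tilt formulas, hedged as an illustration rather than the patent's implementation; a real attitude module would fuse them with the gyroscope rates (e.g. via a complementary or Kalman filter):

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Static pitch and roll (rad) from a body-frame accelerometer reading of gravity."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

g = 9.81
# Level sensor: gravity lies entirely along the z axis
p0, r0 = pitch_roll_from_accel(0.0, 0.0, g)
# Sensor rolled 30 degrees about x: gravity splits between y and z
p1, r1 = pitch_roll_from_accel(0.0, g * math.sin(math.radians(30)),
                               g * math.cos(math.radians(30)))
```

The level case yields zero pitch and roll, and the tilted case recovers the 30° roll.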
The all-weather portable multifunctional bionic positioning and attitude-determining viewing method based on the above system comprises the following steps:
Step 1, information fusion is carried out between the Beidou positioning module and the attitude navigation module to determine the position, speed, pitch angle and roll angle of the viewing system; the bionic polarized light navigation sensor determines the heading angle of the viewing system.
Step 2, image information in different wave bands detected by the visible light, infrared and polarized light detectors is fused through the intelligent image processing board, and target detection and target identification are carried out.
Step 3, monocular ranging is carried out on the identified stealth target through the visible light detector, and the position of the target is determined.
Step 4, the detected target information and the measured distance are displayed in real time through the display.
Step 5, the detected target and the measured distance are encrypted throughout transmission and sent to the dedicated encrypted mobile terminal through the wireless module.
Further, the target detection method comprises the following steps:
extracting features of a visible light image acquired by a visible light detector through a lightweight deep neural network model, and generating a candidate detection frame;
carrying out feature extraction on an infrared image acquired by an infrared detector through a lightweight deep neural network model, and generating a candidate detection frame;
extracting image features from the polarized light image acquired by the polarized light detector through the SURF operator, and then carrying out image segmentation by the maximum inter-class variance (Otsu) method to generate candidate detection frames;
the method comprises the steps of realizing information fusion of target candidate detection frames generated by visible light, infrared and polarized images, and detecting and identifying target categories in each detection frame;
and removing the detection frame with high overlapping degree by a non-maximum value inhibition method.
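The last step above, non-maximum suppression, can be sketched in a few lines of plain Python. This is a generic greedy NMS, shown as an illustration rather than the patent's exact implementation:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        keep.append(best)
        # drop every remaining box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, thresh=0.5)
```

Here the second box overlaps the first heavily and is suppressed, while the distant third box survives.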
The lightweight deep neural network model is a fully convolutional network containing no pooling layer; it consists of 2 convolution layers, 6 expansion convolution structures and 8 expansion convolution residual structures, and uses an SSD decoder in the target detection process to predict the position and size of the target;
the detection image serves as the input to the first convolution layer; then a first expansion convolution residual module, a second expansion convolution residual module, a third expansion convolution residual module, a fourth expansion convolution module, a fifth expansion convolution residual module, a sixth expansion convolution residual module, a fifth expansion convolution module, a seventh expansion convolution residual module, a sixth expansion convolution module, an eighth expansion convolution residual module, a ninth expansion convolution residual module, a tenth expansion convolution residual module, a seventh expansion convolution module and a second convolution layer are cascaded in sequence to obtain a feature map, which is used as the decoder input for predicting target position, size and category.
The expansion convolution modules are each built from 1×1, 3×3 and 1×1 convolution kernels, i.e. three convolution layers, with the block structure: input layer → first 1×1 convolution layer → first 3×3 convolution layer → second 1×1 convolution layer → output layer. The expansion convolution residual module additionally connects the input layer to the output layer (a skip connection) on top of the expansion convolution module to form the final output layer.
The lightweight deep neural network model predicts from feature maps at multiple scales, whose sizes are set to 38×38, 19×19, 10×10, 5×5, 3×3 and 1×1; horizontal candidate frames of 5 sizes are generated centered at each pixel position of the feature maps, and the preset candidate frame sizes are obtained by clustering.
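The patent only states that the preset candidate frame sizes are obtained by clustering. One common choice, assumed here purely for illustration, is k-means over (width, height) pairs with a 1 − IoU distance, as popularized by YOLO-style detectors:

```python
def box_iou_wh(a, b):
    """IoU of two (w, h) boxes anchored at a common corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(sizes, centroids, iters=20):
    """Cluster (w, h) pairs with 1 - IoU distance; returns refined centroids."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for wh in sizes:
            # assign each box to the centroid with the highest IoU (lowest distance)
            best = max(range(len(centroids)), key=lambda k: box_iou_wh(wh, centroids[k]))
            groups[best].append(wh)
        # update each centroid to the mean width/height of its group
        centroids = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids

sizes = [(10, 10), (12, 12), (100, 100), (110, 90)]
anchors = kmeans_anchors(sizes, centroids=[(10.0, 10.0), (100.0, 100.0)])
```

With two obvious size clusters in the data, the refined anchors settle at the cluster means.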
The specific process of extracting image features with the SURF operator is as follows: compute the Hessian responses of candidate feature points and their surrounding points using box filtering, take the point with the largest Hessian response as the feature point, and then generate the feature description vector by integrating the grey-level distribution information produced by the image's first-order Haar wavelet responses.
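The maximum inter-class variance method used alongside SURF for segmentation is Otsu's method. A minimal pure-Python sketch (illustrative, not the patent's implementation) follows:

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximizing between-class (inter-class) variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(hist[: t + 1])          # pixels at or below threshold
        w1 = total - w0                  # pixels above threshold
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(v * hist[v] for v in range(t + 1)) / w0
        mu1 = sum(v * hist[v] for v in range(t + 1, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy image: a dark background mode and a bright target mode
pixels = [10, 12, 14] * 40 + [200, 210, 220] * 40
t = otsu_threshold(pixels)
```

The threshold lands between the two modes, separating background from target pixels.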
The present invention will be described in detail with reference to examples.
Examples
With reference to fig. 1, the bionic-navigation multispectral polarization synchronous acquisition system forms the all-weather portable multifunctional bionic positioning and attitude-determining viewing system; the whole system comprises a laser range finder, a visible light detector, an infrared detector, a polarized light detector, a bionic polarized light navigation sensor, an attitude navigation module, a Beidou positioning module, an intelligent image processing board, a small display screen, a wireless module and a power module. In fig. 1, 1 denotes the laser range finder, 2 the visible light detector, 3 the infrared detector, 4 the polarized light detector, 5 the bionic polarized light navigation sensor, 6 the attitude navigation module, 7 the Beidou positioning module, 8 the intelligent image processing board, 9 the small display screen, 10 the wireless module, and 11 the power supply module.
The laser range finder is connected with the intelligent image processing board through an RS422 serial port; the visible light detector, the infrared detector and the polarized light detector are each connected with the intelligent image processing board through an SDI port; the bionic polarized light navigation sensor is connected with the intelligent image processing board through Ethernet; the attitude navigation module, the Beidou positioning module and the wireless module are each connected with the intelligent image processing board through an RS232 serial port; the power module supplies power to each sub-module.
The intelligent image processing board fuses the images acquired by the visible light, infrared and polarized light detectors, extracts target features and target candidate detection frames under all-weather conditions, and classifies the targets in the detection frames to generate a final detection result; the laser range finder outputs the distance from a target to the viewing system; the Beidou positioning module and the attitude navigation module resolve the system's current absolute position, speed, pitch angle and roll angle; the bionic polarized light navigation sensor resolves the system's current heading; the display screen, arranged at the upper end of the system, allows on-line real-time observation of the target detection result and the positioning and attitude measurement result; the wireless module can transmit the target detection result and the positioning and attitude measurement result to other external terminals.
Further, the visible light, infrared and polarized light detectors are built with CMOS cameras; the detection distance reaches 2 km and the camera diameter is 36 mm. The polarized light detector can observe targets at four polarization angles of 0°, 45°, 90° and 135° by rotating the polarizer at the front end of the polarization camera.
Further, the bionic polarized light navigation sensor imitates the biological orientation navigation principle: an array-type liquid crystal polarization camera faces the zenith and performs autonomous heading calculation by collecting the atmospheric polarization pattern, obtaining the angle between the system and geographic north, i.e. the heading angle.
Further, the attitude heading module comprises a low-cost triaxial MEMS gyroscope and an accelerometer, and can accurately calculate the pitch angle and roll angle of the viewing system.
Further, the laser range finder is a low-power far-focus laser ranging module; the detection distance reaches 2.5 km, with a range error below 0.3 m.
With reference to fig. 2, the flow of the all-weather portable multifunctional bionic positioning and attitude-determining viewing method is as follows. Feature extraction is performed on the visible light image collected by the visible light detector and the infrared image collected by the infrared detector through the lightweight deep neural network model, and candidate detection frames are generated; the polarized light image acquired by the polarized light detector is segmented by the maximum inter-class variance method after a feature map is extracted by the SURF operator, and regions suspected of containing targets are extracted. The candidate detection frame positions obtained from the visible, infrared and polarized light images are then combined with the multi-scale feature maps of the lightweight deep neural network model to judge the target category in each detection frame, and redundant detection frames are removed by non-maximum suppression to obtain the final detection result. The all-weather portable multifunctional bionic positioning and attitude-determining viewing method comprises the following steps:
Step 1, information fusion is carried out between the Beidou positioning module and the attitude navigation module to finally determine the position, speed, pitch angle and roll angle of the viewing system; the bionic polarized light navigation sensor finally determines the heading angle of the viewing system.
Step 2, image information in different wave bands detected by the visible light, infrared and polarized light detectors is fused through the intelligent image processing board, and accurate target detection and target identification are carried out.
Step 3, monocular ranging is carried out on the identified stealth target through the visible light detector, and the position of the target is determined.
Step 4, the detected target information and the measured distance are displayed in real time through the display.
Step 5, the detected target and the measured distance are transmitted to the mobile phone client with encryption throughout through the wireless module, giving good confidentiality and convenient viewing.
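The monocular ranging of step 3 can be sketched with the usual pinhole-camera similar-triangles relation Z = f·H/h (distance = focal length in pixels × known target height ÷ target height in pixels). The patent does not specify the camera parameters, so the numbers below are assumed for illustration:

```python
def monocular_range(focal_px, target_height_m, pixel_height):
    """Pinhole-model distance estimate: Z = f * H / h."""
    return focal_px * target_height_m / pixel_height

# Assumed values: 1000 px focal length, a 2 m tall target spanning 100 px in the image
dist = monocular_range(1000.0, 2.0, 100.0)
```

With these assumed parameters the target sits 20 m from the camera; in practice the known target height would come from the recognized target class.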
The method for detecting and identifying the target adopts a heterogeneous image fusion target detection method based on visible light, infrared and polarized light, and specifically comprises the following steps:
step 1, extracting features of a visible light image acquired by a visible light detector through a lightweight deep neural network model, and generating a candidate detection frame;
step 2, extracting features of the infrared image acquired by the infrared detector through the lightweight deep neural network model, and generating candidate detection frames;
step 3, extracting image features from the polarized light image acquired by the polarized light detector through the SURF operator, and then carrying out image segmentation by the maximum inter-class variance (Otsu) method to generate candidate detection frames;
step 4, information fusion is realized on target candidate detection frames generated by visible light, infrared and polarized images, and target categories in each detection frame are detected and identified;
and 5, removing the detection frame with high overlapping degree by a non-maximum value inhibition method.
The lightweight deep neural network model used is a fully convolutional network containing no pooling layer; it consists of 2 convolution layers, 6 expansion convolution structures and 8 expansion convolution residual structures, and uses an SSD decoder in the target detection process to predict the position and size of the target;
the detection image serves as the input to the first convolution layer; then a first expansion convolution residual module, a second expansion convolution residual module, a third expansion convolution residual module, a fourth expansion convolution module, a fifth expansion convolution residual module, a sixth expansion convolution residual module, a fifth expansion convolution module, a seventh expansion convolution residual module, a sixth expansion convolution module, an eighth expansion convolution residual module, a ninth expansion convolution residual module, a tenth expansion convolution residual module, a seventh expansion convolution module and a second convolution layer are cascaded in sequence to obtain a feature map, which is used as the decoder input for predicting target position, size and category.
The expansion convolution modules are each built from 1×1, 3×3 and 1×1 convolution kernels, i.e. three convolution layers, with the block structure: input layer → first 1×1 convolution layer → first 3×3 convolution layer → second 1×1 convolution layer → output layer. The expansion convolution residual module additionally connects the input layer to the output layer (a skip connection) on top of the expansion convolution module to form the final output layer.
The lightweight deep neural network model used predicts from feature maps at multiple scales, whose sizes are set to 38×38, 19×19, 10×10, 5×5, 3×3 and 1×1; horizontal candidate frames of 5 sizes are generated centered at each pixel position of the feature maps, and the preset candidate frame sizes are obtained by clustering.
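As a worked arithmetic check on the decoder workload implied by the paragraph above, the total number of prior boxes across the listed feature-map sizes, with 5 candidate frames per cell, is:

```python
# Feature-map sizes listed in the description
map_sizes = [38, 19, 10, 5, 3, 1]
anchors_per_cell = 5  # five candidate-frame sizes per pixel position
total_priors = sum(s * s * anchors_per_cell for s in map_sizes)
# 5 * (1444 + 361 + 100 + 25 + 9 + 1) = 9700 candidate frames per image
```

Roughly ten thousand priors per image is modest by SSD standards, consistent with the lightweight design goal.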
Image features are extracted with the SURF operator as follows: compute the Hessian responses of candidate feature points and their surrounding points using box filtering, take the point with the largest Hessian response as the feature point, and then generate the feature description vector by integrating the grey-level distribution information produced by the image's first-order Haar wavelet responses.

Claims (8)

1. An all-weather portable multifunctional bionic positioning and attitude-determining viewing system, characterized by comprising a laser range finder, a visible light detector, an infrared detector, a polarized light detector, a bionic polarized light navigation sensor, an attitude navigation module, a Beidou positioning module, an intelligent image processing board, a display screen, a wireless module and a power module;
the intelligent image processing board fuses the images acquired by the visible light, infrared and polarized light detectors, extracts target features and target candidate detection frames under all-weather conditions, and classifies the targets in the detection frames to generate a final detection result; the laser range finder outputs the distance from a target to the viewing system; the Beidou positioning module and the attitude navigation module resolve the system's current absolute position, speed, pitch angle and roll angle; the bionic polarized light navigation sensor resolves the system's current heading; the display screen, arranged at the upper end of the system, allows on-line real-time observation of the target detection result and the positioning and attitude measurement result; the wireless module can transmit the target detection result and the positioning and attitude measurement result to other external terminals; the laser range finder is connected with the intelligent image processing board through an RS422 serial port; the visible light detector, the infrared detector and the polarized light detector are each connected with the intelligent image processing board through an SDI port; the bionic polarized light navigation sensor is connected with the intelligent image processing board through Ethernet; the attitude navigation module, the Beidou positioning module and the wireless module are each connected with the intelligent image processing board through an RS232 serial port; the power module supplies power to each sub-module.
2. The all-weather portable multifunctional bionic positioning and attitude-determining sighting system of claim 1, characterized in that the visible light, infrared and polarized light detectors are built around CMOS cameras, the detection distance can reach 2 km, and the camera diameter is 36 mm; the polarized light detector can detect targets at the four polarization angles of 0°, 45°, 90° and 135° by rotating the polarizer at the front end of the polarization camera.
3. The all-weather portable multifunctional bionic positioning and attitude-determining sighting system according to claim 1, wherein the bionic polarized light navigation sensor mimics the directional navigation principle of biological organisms: an array-type liquid-crystal polarization camera faces the zenith and performs autonomous heading calculation by sampling the atmospheric polarization pattern, yielding the angle between the system and geographic north, i.e. the heading angle.
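The patent does not spell out the heading calculation, but with the four-angle polarization measurements of claim 2 the standard first step is to form the linear Stokes parameters and the angle of polarization (AoP). The sketch below shows only that step, under the usual assumptions (ideal polarizers, fully linear polarization model); mapping the per-pixel AoP field to a heading additionally requires the sun vector and is omitted.

```python
import math

def angle_of_polarization(i0, i45, i90, i135):
    """Angle of polarization (radians) from intensities behind
    0°, 45°, 90° and 135° polarizers, via linear Stokes parameters."""
    q = i0 - i90      # Stokes Q
    u = i45 - i135    # Stokes U
    return 0.5 * math.atan2(u, q)

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP in [0, 1]; equals 1 for fully linearly polarized light."""
    q = i0 - i90
    u = i45 - i135
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    return math.hypot(q, u) / s0 if s0 else 0.0
```

For fully polarized light at angle θ, Malus's law gives I(φ) = cos²(θ − φ), and the AoP recovers θ exactly.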
4. The all-weather portable multifunctional bionic positioning and attitude-determining sighting system according to claim 1, wherein the navigation attitude module comprises a triaxial MEMS gyroscope and an accelerometer, and calculates the pitch angle and roll angle of the sighting system.
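Claim 4 does not give the attitude equations. A common sketch, under the assumption of an x-forward, y-right body frame with the accelerometer reading +g on z when level, derives static pitch and roll from the gravity vector alone; the gyroscope is then typically blended in for dynamic motion, e.g. with a complementary filter. Both the axis convention and the filter gain below are assumptions, not taken from the patent.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Static pitch/roll (radians) from accelerometer readings.
    Assumes the sensor reads (0, 0, +g) when level."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def complementary(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a complementary filter: blend the integrated gyro
    rate (fast, drifting) with the accelerometer angle (slow, absolute)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

At rest a level sensor yields zero pitch and roll, and tilting the nose up by θ reproduces pitch = θ.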
5. An all-weather portable multifunctional bionic positioning and attitude-determining sighting method based on the system of any one of claims 1 to 4, which is characterized by comprising the following steps:
step 1, information from the Beidou positioning module and the navigation attitude module is fused to determine the position, velocity, pitch angle and roll angle of the sighting system; the bionic polarized light navigation sensor determines the heading angle of the sighting system;
step 2, the intelligent image processing board fuses the image information in the different wavebands detected by the visible light, infrared and polarized light detectors to perform target detection and target identification;
step 3, monocular ranging is performed on the identified stealth target by the visible light detector to determine the position of the target;
step 4, the detected target information and measured distance are displayed in real time;
step 5, the detected target information and measured distance are encrypted in their entirety and sent to a dedicated encrypted mobile terminal through the wireless module;
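Step 3's monocular ranging is not detailed in the claims. One common sketch, assuming the real-world height of the target class is known a priori, uses the pinhole model: distance = focal length (in pixels) × true height / pixel height of the detection box. This is an illustrative assumption about the method, not the patent's stated approach.

```python
def monocular_range(focal_px, real_height_m, pixel_height):
    """Pinhole-model range estimate: Z = f * H / h.

    focal_px:      camera focal length expressed in pixels
    real_height_m: assumed true height of the target (prior knowledge)
    pixel_height:  height of the detection box in the image, in pixels
    """
    return focal_px * real_height_m / pixel_height
```

For example, a 2 m tall target spanning 20 pixels under a 1000 px focal length sits roughly 100 m away; accuracy degrades with errors in the height prior and box tightness.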
the target detection method comprises the following steps:
features are extracted from the visible light image acquired by the visible light detector through a lightweight deep neural network model, and candidate detection frames are generated;
features are extracted from the infrared image acquired by the infrared detector through a lightweight deep neural network model, and candidate detection frames are generated;
image features are extracted from the polarized light image acquired by the polarized light detector with the SURF operator, then image segmentation is performed with the maximum inter-class variance (Otsu) method to generate candidate detection frames;
the target candidate detection frames generated from the visible light, infrared and polarized images are fused, and the target category in each detection frame is detected and identified;
detection frames with a high degree of overlap are removed by non-maximum suppression.
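The last step above is standard greedy non-maximum suppression. A minimal sketch (the 0.5 IoU threshold is an assumption; the patent does not specify one):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap it above `thresh`, repeat. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

In the fusion setting of this claim, the candidate frames from all three modalities would be pooled before suppression, so duplicated detections of the same target across wavebands collapse to one.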
6. The all-weather portable multifunctional bionic positioning and attitude-determining sighting method according to claim 5, wherein the lightweight deep neural network model is a fully convolutional network containing no pooling layer, composed of 2 convolution layers, 6 expansion convolution structures and 8 expansion convolution residual structures, and uses an SSD decoder as the decoder of the target detection process, predicting the position and size of targets;
the image to be detected serves as the input to the first convolution layer; the first expansion convolution residual module, the second expansion convolution residual module, the third expansion convolution residual module, the fourth expansion convolution module, the fifth expansion convolution residual module, the sixth expansion convolution residual module, the fifth expansion convolution module, the seventh expansion convolution residual module, the sixth expansion convolution module, the eighth expansion convolution residual module, the ninth expansion convolution residual module, the tenth expansion convolution residual module, the seventh expansion convolution module and the second convolution layer are cascaded in sequence, and the output feature map serves as the input of the decoder for predicting target position, size and category information;
each expansion convolution module is built from 1×1, 3×3 and 1×1 convolution kernels as three layers, with the block structure: input layer → first 1×1 convolution layer → first 3×3 convolution layer → second 1×1 convolution layer → output layer; the expansion convolution residual module adds, on top of the expansion convolution module, a skip connection from the input layer to the output layer to obtain the final output layer.
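The 1×1 → 3×3 → 1×1 block of claim 6 resembles an expansion/bottleneck residual (MobileNetV2-style). The patent fixes only the kernel sizes, so the framework-free sketch below just traces channel counts through such a block; the expansion factor of 6 and the shape-matching rule for the residual skip are assumptions, not claim text.

```python
def expansion_block_channels(c_in, expand=6, c_out=None):
    """Trace channel counts through a 1x1 -> 3x3 -> 1x1 expansion block.

    expand: hypothetical expansion factor (MobileNetV2-style assumption);
            the claim only specifies the kernel sizes, not the widths.
    Returns (per-layer channel trace, whether an identity skip is valid).
    """
    c_out = c_in if c_out is None else c_out
    hidden = c_in * expand
    trace = [
        ("input", c_in),
        ("1x1 conv (expand)", hidden),
        ("3x3 conv", hidden),
        ("1x1 conv (project)", c_out),
    ]
    residual_ok = (c_in == c_out)  # identity skip needs matching shapes
    return trace, residual_ok
```

This separation mirrors the claim's two module types: the plain expansion module is the three-layer stack, and the residual variant additionally cascades input and output, which requires the input and output widths to agree.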
7. The all-weather portable multifunctional bionic positioning and attitude-determining sighting method according to claim 6, wherein the lightweight deep neural network model predicts using feature maps at six different scales, of sizes 38×38, 19×19, 10×10, 5×5, 3×3 and 1×1; at each pixel position of each feature map, five horizontal candidate frames of preset sizes are generated, centred on that position, the preset candidate-frame sizes being obtained by clustering.
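The prior-box layout of claim 7 is easy to enumerate: one set of candidate frames per cell of each feature map. The sketch below counts boxes and generates normalized cell centres; it assumes five boxes per cell as stated, with the clustered sizes themselves left out (the patent does not list them).

```python
def num_prior_boxes(map_sizes=(38, 19, 10, 5, 3, 1), boxes_per_cell=5):
    """Total candidate boxes when `boxes_per_cell` horizontal priors are
    placed at every cell of each square feature map."""
    return sum(s * s * boxes_per_cell for s in map_sizes)

def cell_centers(s):
    """Normalized (cx, cy) centres, one per cell of an s x s feature map."""
    return [((i + 0.5) / s, (j + 0.5) / s) for j in range(s) for i in range(s)]
```

With the six scales listed in the claim and five boxes per cell, the decoder scores 9,700 candidate frames per image, which is what makes per-frame class prediction plus NMS tractable on an embedded board.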
8. The all-weather portable multifunctional bionic positioning and attitude-determining sighting method according to claim 7, wherein the specific process by which the SURF operator extracts image features is: the Hessian values of candidate feature points and their surrounding points are computed with box filters, the point with the largest Hessian value is taken as the feature point, and a feature description vector is then generated by integrating the grey-level distribution information produced by the first-order Haar wavelet response of the image.
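The box filtering in claim 8 is what makes SURF fast: every rectangular filter response is evaluated in constant time from an integral image, and the Hessian determinant is approximated as Dxx·Dyy − (0.9·Dxy)². The sketch below shows only the integral-image core in pure Python; the exact filter layouts and the Haar-wavelet descriptor stage are omitted.

```python
def integral_image(img):
    """Summed-area table with a zero top row/left column, so that
    box sums need no boundary special-casing."""
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0.0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of img[y:y+h, x:x+w] in O(1) using four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

Once `box_sum` is available, each second-derivative filter (Dxx, Dyy, Dxy) is a fixed weighted combination of a few box sums, so the Hessian response over the whole image costs a constant number of lookups per pixel regardless of filter scale.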
CN202010368325.0A 2020-04-30 2020-04-30 All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method Active CN111652276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010368325.0A CN111652276B (en) 2020-04-30 2020-04-30 All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method


Publications (2)

Publication Number Publication Date
CN111652276A CN111652276A (en) 2020-09-11
CN111652276B true CN111652276B (en) 2023-05-09

Family

ID=72346545


Country Status (1)

Country Link
CN (1) CN111652276B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115062770B (en) * 2022-08-04 2022-11-08 中国人民解放军国防科技大学 Navigation method based on generalized bionic polarized light navigation model and solution
CN116777926B (en) * 2023-08-21 2023-10-31 华侨大学 Crack segmentation method and device based on left-right sum type light convolutional neural network

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110009569A (en) * 2019-04-17 2019-07-12 中国人民解放军陆军工程大学 Infrared and visible light image fusion method based on lightweight convolutional neural network
CN110764105A (en) * 2019-11-08 2020-02-07 北京煜邦电力技术股份有限公司 Unmanned aerial vehicle's laser radar system and unmanned aerial vehicle system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9726498B2 (en) * 2012-11-29 2017-08-08 Sensor Platforms, Inc. Combining monitoring sensor measurements and system signals to determine device context




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant