CN117745505B - Disaster relief command system and method based on real-time multi-mode data

Publication number: CN117745505B
Authority: CN (China)
Prior art keywords: data, point cloud, model, function, feature
Legal status: Active
Application number: CN202410182841.2A
Other languages: Chinese (zh)
Other versions: CN117745505A
Inventors: 殷永旸, 文博, 林军, 徐金成, 魏伟
Assignees: Nanjing University; Nanjing Panda Electronics Co Ltd
Application filed by Nanjing University and Nanjing Panda Electronics Co Ltd
Priority: CN202410182841.2A
Publications: CN117745505A (application), CN117745505B (grant)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a disaster relief command system and method based on real-time multi-mode data. The system comprises a command center and modularized portable edge equipment: the command center is used for real-time command and data processing at the disaster relief site and comprises a data processing module and a command center data transmission module; the modularized portable edge equipment comprises a data acquisition module, a data preprocessing module, an edge equipment data transmission module and a display module. Through the cooperative work of the modularized portable edge equipment and the command center, the disaster relief command system can acquire multi-mode data from the disaster area in real time, and the command center can process the data rapidly and accurately, thereby improving the effectiveness of rescue actions.

Description

Disaster relief command system and method based on real-time multi-mode data
Technical Field
The invention relates to edge computing, in particular to a disaster relief command system and method based on real-time multi-mode data.
Background
In modern disaster relief command systems, it is important to process multimodal data quickly and accurately and make efficient decisions. The traditional disaster relief command system generally adopts a single communication mode and a fixed algorithm, and cannot meet the requirements of instantaneity and accuracy in complex and changeable rescue environments. In recent years, with the development of wireless communication, internet of things, edge computing, artificial intelligence and other technologies, new technical support is provided for realizing the intellectualization and real-time of a disaster relief command system. However, in practical applications, how to efficiently process real-time multi-modal data and make effective decisions still faces many challenges, including multi-modal data processing and fusion, channel selection strategies, real-time performance optimization, and so on. Therefore, the disaster relief command system and method based on the real-time multi-mode data are researched and designed, and have important practical significance and application value.
Disclosure of Invention
The invention aims to provide a disaster relief command system and method based on real-time multi-mode data, so as to solve problems in the prior art such as insufficient real-time performance and low data processing speed.
The technical scheme is as follows: the invention relates to a disaster relief command system based on real-time multi-mode data, which comprises a command center and modularized portable edge equipment; the command center is used for real-time command and data processing of disaster relief sites and comprises a data processing module and a command center data transmission module; the modularized portable edge equipment comprises a data acquisition module, a data preprocessing module, an edge equipment data transmission module and a display module; the function of each module is as follows:
Data acquisition module: comprises a point cloud data acquisition module and a video data acquisition module. The point cloud data acquisition module can adopt a radar and the video data acquisition module can adopt a binocular camera, where the radar can be a 4D millimeter wave radar or a laser radar; the 4D millimeter wave radar or laser radar can acquire 3D point cloud data, and the binocular camera can acquire 2D image data containing depth information.
Data preprocessing module: comprises a task processor, which cleans and mines the data collected by the point cloud data acquisition module and the video data acquisition module and then sends the cleaned and mined data to the edge device data transmission module; a low-power edge-side AI chip may be employed.
Edge device data transmission module: comprises an edge multi-mode communication module; a low-power edge multi-mode communication module supporting multi-mode data communication such as Bluetooth, WiFi and 4G/5G can be adopted, making the module applicable to various complex scenes. It is used for sending the data processed by the data preprocessing module to the command center, and for receiving the processed image data from the command center and sending it to the display module.
Display module: comprises a portable display; an MR head-mounted display can be adopted for displaying image data from the command center. The received image data can be presented to individual rescue soldiers to assist them in completing rescue actions under conditions of poor visibility, complex environments and unknown situations.
Data processing module: comprises a high-performance processor unit, which imports the data cleaned and mined by the data preprocessing module of the edge equipment into a quantized and pruned lightweight model for processing, efficiently and rapidly identifies and reconstructs the fused multi-mode data, generates image data meaningful to disaster relief individual soldiers, and sends the generated image data to the command center data transmission module.
Command center data transmission module: comprises a multi-mode communication module, which is used for sending the image data received from the data processing module to the edge device data transmission module of the edge equipment.
A disaster relief command method based on real-time multi-mode data adopts the disaster relief command system based on the real-time multi-mode data, and comprises the following steps:
step 1, an individual carries modularized portable edge equipment to enter a disaster area, and point cloud data and video data are respectively acquired through a point cloud data acquisition module and a video data acquisition module;
Step 2, the data preprocessing module performs lightweight data cleaning and dimension reduction on the acquired point cloud data and video data to generate preprocessed data.
The step 2 specifically comprises the following steps:
Step 2.1, adopting a point cloud data cleaning method; the method comprises the following specific steps:
a. Deep learning denoising: a deep autoencoder model is adopted, consisting of an encoder and a decoder. The encoder maps the input point cloud data to a low-dimensional space, and the decoder attempts to reconstruct the original point cloud data from the low-dimensional representation; by minimizing the reconstruction error, the autoencoder learns a denoised representation. This is realized by the following optimization problem:
min_{E,D} ||X - D(E(X))||^2
where X is the input point cloud data, E is the encoder, and D is the decoder.
E(X): the encoder function E converts the input data X into a lower-dimensional representation, or code.
D(E(X)): the decoder function D converts the encoder output E(X) back to the original data space.
X - D(E(X)): the difference between the original data X and its reconstruction D(E(X)), called the reconstruction error; the original data X is reconstructed by encoding and then decoding.
||X - D(E(X))||^2: the squared norm of the reconstruction error, used as the loss function.
min_{E,D}: minimization over the pair of encoder E and decoder D, yielding the pair that minimizes the loss function.
b. Model-based downsampling: first, a geometric model of the data is estimated using a pre-trained deep learning model, and key points are then selectively retained according to the model; specifically, a selection function S is defined such that the retained points are closest to the geometry predicted by the model:
S(X) = argmin_{X'} ||M(X) - X'||^2
where S(X) is the selection function, X is the input point cloud data, X' is a downsampled point cloud, {X'} is the set of candidate downsampled point clouds over which the argmin is taken, argmin denotes the argument at which the function attains its minimum, M is the geometric model prediction function, and ||M(X) - X'||^2 is the loss function.
Step 2.2, using a high-efficiency video data cleaning method; the method comprises the following specific steps:
a. Background subtraction based on deep learning: a deep convolutional neural network model is used that receives video frames as input and predicts the background of the next frame; the predicted background is obtained from the following optimization problem:
min_B ||I_{t+1} - B(I_{1:t})||^2
where I_{1:t} denotes the first t video frames, I_{t+1} is the (t+1)-th video frame, B is the background prediction function, {B} is the set of candidate background predictors over which the minimization is taken, min denotes taking the minimum of the function, and ||I_{t+1} - B(I_{1:t})||^2 is the loss function.
b. Adaptive noise cancellation: first, the noise level is estimated using a deep learning model, and the filter parameters are then adjusted dynamically according to the estimated noise level; specifically, a filter function F is defined such that the filtered frame is closest to the noise-free frame:
F(I) = argmin_{I'} ||N(I) - I'||^2
where F(I) is the filter function, I is the input noisy video frame, I' is a filtered video frame, {I'} is the set of candidate filtered frames over which the argmin is taken, N is the noise estimation function, argmin denotes the argument at which the function attains its minimum, and ||N(I) - I'||^2 is the loss function.
Identifying and eliminating abnormal values, noise and repeated data in the data through multidimensional analysis of the data; compared with the traditional rule-based method, the algorithm provided by the invention is more intelligent, can adapt to data sets in different types and fields, and improves the cleaning effect.
Step 2.3, normalizing the data to eliminate the influence of dimension and numerical differences in the data on model training; the specific steps are as follows:
a. Perform distribution analysis on the dataset, calculating statistics such as mean, variance, skewness and kurtosis.
b. According to the distribution characteristics, automatically select the most suitable normalization method, such as min-max normalization or Z-score normalization.
c. Apply the selected normalization method to the dataset, eliminating dimension and numerical differences.
Compared with the traditional normalization method, the method can automatically select the most suitable normalization mode according to the distribution characteristics of the data, so that the data has better comparability under different scales.
Step 2.4, in the feature selection and dimension reduction stage, perform feature selection based on mutual information:
a. Calculate mutual information between features in the dataset and between each feature and the target variable.
b. According to the mutual information values, evaluate the correlation between features and the degree of association between each feature and the target variable.
c. Retain key information, remove redundant and irrelevant features, and reduce the data dimension.
By calculating mutual information between features, the invention evaluates the correlation between features and the degree of association between features and target variables. The method and the device can remove redundant and irrelevant features while keeping key information, reduce data dimension and improve model training efficiency.
Step 3, through data interaction between the edge device data transmission module and the command center data transmission module, the command center data transmission module sends the preprocessed data to the data processing module of the command center. To achieve higher communication reliability and adaptability, the invention designs a multimode communication interface for compatibility with, and switching between, multiple communication modes. The interface supports a variety of communication protocols (e.g., Wi-Fi, Bluetooth, LoRa, LTE) and physical layer interfaces (e.g., Ethernet, serial). By packaging different communication modules into a unified interface format, switching between different communication modes can be performed conveniently, improving the reliability and adaptability of communication.
The step 3 specifically comprises the following steps:
step 3.1, monitoring the channel state in real time; firstly, collecting communication quality indexes in real time, and evaluating the current channel state, wherein the communication quality indexes comprise received signal strength and signal-to-noise ratio; and secondly, monitoring performance indexes, including network delay and data throughput, to provide basis for channel selection.
Step 3.2, adopting a channel selection algorithm; the algorithm automatically learns the optimal selection strategy under different channel states by establishing a Markov decision process model of channel selection; the specific implementation is as follows:
a. State definition: the communication quality indexes and performance indexes are combined into a multidimensional vector used as the state representation of the reinforcement learning algorithm.
b. Action definition: switching to a different communication mode or keeping the current mode is taken as the action of the reinforcement learning algorithm. For example, the action set may be represented as {keep current mode, switch to Wi-Fi, switch to LoRa, ...}.
c. Reward function design: a reward function is designed to evaluate how good a channel selection is based on the current state and action. The reward function should comprehensively consider factors such as communication quality, delay and data throughput to realize optimal channel selection.
d. Reinforcement learning algorithm: a reinforcement learning algorithm suitable for a continuous state space and a discrete action space is adopted to learn the optimal channel selection strategy.
Step 3.3, dynamic channel switching; dynamic channel switching is realized according to the learned channel selection strategy; when the channel state changes, data transmission is switched to the optimal channel through the multimode communication interface to ensure communication quality and performance. Through the innovative multimode communication interface design and optimal channel selection method, the invention realizes dynamic selection of and switching between different channels in the disaster relief communication system. Compared with the prior art, this provides higher communication reliability and adaptability, helping to improve the effectiveness of rescue actions.
Step 4, after the command center data processing module receives the preprocessed data, multi-mode fusion is carried out, fusing the point cloud data and the video data.
The step 4 specifically comprises the following steps:
Step 4.1, carrying out data fusion based on a graph model; the method comprises the following specific steps:
a. Feature extraction: first, features are extracted from the point cloud data and the image data respectively, obtaining:
X_p = ReLU(MLP(P, W_p) + b_p)
where MLP denotes a multilayer perceptron, P is the data content of the p-th point cloud group, W_p and b_p are the parameters of the MLP for the p-th point cloud group, ReLU is the linear rectification function, and X_p is the resulting point cloud feature of the p-th group.
b. Feature alignment function
Feature alignment is achieved by a linear transformation; letting A be the transformation matrix, we obtain:
X'_p = A × X_p
Here A is the transformation matrix, whose goal is to map the point cloud feature X_p into the same feature space as the i-th image feature X_i, yielding the mapped point cloud feature X'_p.
When a group of point cloud features is paired with image features, a linear transformation is sought such that the transformed point cloud features and the image features are maximally consistent in variance; letting C be the covariance matrix of the point cloud features and the image features, A is obtained by solving the following optimization problem:
A = argmax(Tr(A' × C × A))
where Tr denotes the trace of a matrix, A' is the transpose of A, and argmax denotes the argument at which the function attains its maximum.
c. Feature fusion function
For feature fusion, the feature vectors are directly concatenated, and the final feature representation is then obtained through a fully connected layer; letting W_c and b_c be the parameters of the fully connected layer, we obtain:
X = ReLU(W_c × Concat(X'_p, X_i) + b_c)
where Concat denotes the concatenation operation, ReLU is the linear rectification function, X'_p is the mapped point cloud feature, and X_i is the image feature of the i-th image corresponding to X'_p.
Step 4.2, global optimization; after the spatial features are extracted, global optimization is performed using an improved iterative closest point (ICP) algorithm; the algorithm uses the extracted spatial feature information to calculate an optimal transformation matrix between the point cloud data and the video data, accurately aligning the point cloud data with the depth map. By introducing weight coefficients for feature matching, the convergence speed and fusion precision of the algorithm are improved.
Step 4.3, data visualization: voxelization is performed on the fused multi-mode data to generate an information-dense three-dimensional map; rescue team members can view the map in real time through the MR head-mounted display in order to carry out rescue in complex environments.
Through the innovative multi-mode fusion method, the video and point cloud data can be effectively integrated in a disaster relief scene. Compared with the prior art, the fusion method has higher precision and richer information content, and is beneficial to improving the rescue action effect.
Step 5, the fused multi-mode data is identified and reconstructed efficiently and rapidly using the quantized and pruned lightweight model, generating image data meaningful to disaster relief individual soldiers.
The step 5 specifically comprises the following steps:
Step 5.1, designing a lightweight network structure; the structure adopts lightweight techniques such as grouped convolution and depthwise separable convolution to reduce the computation and parameter count of the model; meanwhile, attention mechanisms are introduced at key parts to enhance feature expression capability and improve model performance.
Step 5.2, fine tuning of a dynamic model: firstly, pretraining on a large amount of synthesized multi-mode data to form a basic model; then, in the actual rescue process, online transfer learning is carried out on the model according to the collected real-time data, so that the model can be quickly adapted to the current scene, and the recognition accuracy is improved.
Step 5.3, model quantization and pruning; the method comprises the following steps:
a. Network pruning: redundant neurons and connections in the model are pruned by a network pruning method based on weight sensitivity. By setting a weight sensitivity threshold, the degree of pruning can be controlled flexibly, balancing model performance against computational complexity.
b. Weight quantization: an adaptive quantization algorithm is introduced to compress the model weights; the quantization level is determined automatically according to the weight distribution, so that the quantized model retains high precision while significantly reducing storage and computation requirements.
c. Structural reorganization: during pruning and quantization, the model is adjusted dynamically. In addition to dynamic adaptation of network width and depth, a supernet with multiple forward paths is established; dynamic routing is performed according to different input samples, and convolution channels of different scales are adaptively activated according to set parameters, improving computational efficiency while maintaining model capacity. Structural reorganization reduces wasted computing resources and further improves computational efficiency.
Step 5.4, a fast inference framework; the framework adopts various hardware acceleration technologies, such as neural processing units (NPUs) and graphics processing units (GPUs), to parallelize and optimize model computation, greatly reducing inference latency.
Through the innovative lightweight model design and quantization-and-pruning method, the invention realizes real-time processing of multi-mode data in the disaster relief command system. Compared with the prior art, it achieves higher computational efficiency and lower latency, helping to improve the real-time performance and effectiveness of rescue actions.
Step 6, the data processing module sends the generated image data back to the command center data transmission module; the edge equipment data transmission module receives the image data sent by the command center data transmission module and then transmits the image data to the display module; the display module presents the received image data to the individual relief soldier so as to assist the individual relief soldier to complete relief actions under the conditions of limited visual field, complex environment and unknown condition.
A computer storage medium having stored thereon a computer program which when executed by a processor implements a disaster relief command system based on real-time multimodal data as described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and operable on the processor, the processor implementing a disaster relief command system based on real-time multimodal data as described above when executing the computer program.
The beneficial effects are that: compared with the prior art, the invention has the following advantages: through the collaborative work of the modularized portable edge equipment and the command center, the disaster relief command system can acquire the multi-mode data of the disaster area in real time, and the command center can process the data rapidly and accurately, so that the rescue action effect is improved.
Drawings
Fig. 1 is a schematic diagram of the system of the present invention.
Fig. 2 is a flow chart of the steps of the method of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, a disaster relief command system based on real-time multi-mode data comprises a command center and modularized portable edge equipment; the command center is used for real-time command and data processing at the disaster relief site and comprises a data processing module and a command center data transmission module; the modularized portable edge equipment is carried by an individual and comprises a data acquisition module, a data preprocessing module, an edge device data transmission module and a display module. The function of each module is as follows:
Data acquisition module: comprises a point cloud data acquisition module and a video data acquisition module. The point cloud data acquisition module can adopt a radar and the video data acquisition module can adopt a binocular camera, where the radar can be a 4D millimeter wave radar or a laser radar; the 4D millimeter wave radar or laser radar can acquire 3D point cloud data, and the binocular camera can acquire 2D image data containing depth information.
Data preprocessing module: comprises a task processor, which cleans and mines the data collected by the point cloud data acquisition module and the video data acquisition module and then sends the cleaned and mined data to the edge device data transmission module; a low-power edge-side AI chip may be employed.
Edge device data transmission module: comprises an edge multi-mode communication module; a low-power edge multi-mode communication module supporting multi-mode data communication such as Bluetooth, WiFi and 4G/5G can be adopted, making the module applicable to various complex scenes. It is used for sending the data processed by the data preprocessing module to the command center, and for receiving the processed image data from the command center and sending it to the display module.
Display module: comprises a portable display; an MR head-mounted display can be adopted for displaying image data from the command center. The received image data can be presented to individual rescue soldiers to assist them in completing rescue actions under conditions of poor visibility, complex environments and unknown situations.
Data processing module: comprises a high-performance processor unit, which imports the data cleaned and mined by the data preprocessing module of the edge equipment into a quantized and pruned lightweight model for processing, efficiently and rapidly identifies and reconstructs the fused multi-mode data, generates image data meaningful to disaster relief individual soldiers, and sends the generated image data to the command center data transmission module.
Command center data transmission module: comprises a multi-mode communication module, which is used for sending the image data received from the data processing module to the edge device data transmission module of the edge equipment.
As shown in fig. 2, a disaster relief command method based on real-time multi-mode data adopts the disaster relief command system based on real-time multi-mode data, and comprises the following steps:
step 1, an individual carries modularized portable edge equipment to enter a disaster area, and point cloud data and video data are respectively acquired through a point cloud data acquisition module and a video data acquisition module;
Step 2, the data preprocessing module performs lightweight data cleaning and dimension reduction on the acquired point cloud data and video data to generate preprocessed data.
The step 2 specifically comprises the following steps:
Step 2.1, adopting a point cloud data cleaning method; the method comprises the following specific steps:
a. Deep learning denoising: a deep autoencoder model is adopted, consisting of an encoder and a decoder. The encoder maps the input point cloud data to a low-dimensional space, and the decoder attempts to reconstruct the original point cloud data from the low-dimensional representation; by minimizing the reconstruction error, the autoencoder learns a denoised representation. This is realized by the following optimization problem:
min_{E,D} ||X - D(E(X))||^2
where X is the input point cloud data, E is the encoder, and D is the decoder.
E(X): the encoder function E converts the input data X into a lower-dimensional representation, or code.
D(E(X)): the decoder function D converts the encoder output E(X) back to the original data space.
X - D(E(X)): the difference between the original data X and its reconstruction D(E(X)), called the reconstruction error; the original data X is reconstructed by encoding and then decoding.
||X - D(E(X))||^2: the squared norm of the reconstruction error, used as the loss function.
min_{E,D}: minimization over the pair of encoder E and decoder D, yielding the pair that minimizes the loss function. A runnable sketch of this denoising autoencoder is given after step 2.1.
b. Model-based downsampling: first, a geometric model of the data is estimated using a pre-trained deep learning model, and key points are then selectively retained according to the model; specifically, a selection function S is defined such that the retained points are closest to the geometry predicted by the model:
S(X) = argmin_{X'} ||M(X) - X'||^2
where S(X) is the selection function, X is the input point cloud data, X' is a downsampled point cloud, {X'} is the set of candidate downsampled point clouds over which the argmin is taken, argmin denotes the argument at which the function attains its minimum, M is the geometric model prediction function, and ||M(X) - X'||^2 is the loss function.
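As an illustration of the denoising autoencoder in step 2.1a, the following is a minimal PyTorch sketch; the fully connected architecture, point count, and training loop are illustrative assumptions, since the patent fixes only the min_{E,D} ||X - D(E(X))||^2 objective:

```python
# Minimal sketch of step 2.1a (hypothetical sizes; the patent fixes no architecture).
import torch
import torch.nn as nn

class PointCloudAutoencoder(nn.Module):
    def __init__(self, n_points: int = 1024, latent_dim: int = 128):
        super().__init__()
        # Encoder E: maps the flattened point cloud X to a low-dimensional code E(X).
        self.encoder = nn.Sequential(
            nn.Linear(n_points * 3, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder D: reconstructs the point cloud from the code, D(E(X)).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, x):
    """One optimization step of min_{E,D} ||X - D(E(X))||^2."""
    optimizer.zero_grad()
    loss = ((x - model(x)) ** 2).sum(dim=1).mean()  # squared reconstruction error
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: noisy clouds of 1024 points, flattened to (batch, 1024 * 3).
model = PointCloudAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(8, 1024 * 3)  # stand-in for real scans
print(train_step(model, opt, batch))
```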
Step 2.2, using a high-efficiency video data cleaning method; the method comprises the following specific steps:
a. Background subtraction based on deep learning: a deep convolutional neural network model is used that receives video frames as input and predicts the background of the next frame; the predicted background is obtained from the following optimization problem:
min_B ||I_{t+1} - B(I_{1:t})||^2
where I_{1:t} denotes the first t video frames, I_{t+1} is the (t+1)-th video frame, B is the background prediction function, {B} is the set of candidate background predictors over which the minimization is taken, min denotes taking the minimum of the function, and ||I_{t+1} - B(I_{1:t})||^2 is the loss function. A runnable sketch of this background predictor is given after step 2.2.
b. Adaptive noise cancellation: first, the noise level is estimated using a deep learning model, and the filter parameters are then adjusted dynamically according to the estimated noise level; specifically, a filter function F is defined such that the filtered frame is closest to the noise-free frame:
F(I) = argmin_{I'} ||N(I) - I'||^2
where F(I) is the filter function, I is the input noisy video frame, I' is a filtered video frame, {I'} is the set of candidate filtered frames over which the argmin is taken, N is the noise estimation function, argmin denotes the argument at which the function attains its minimum, and ||N(I) - I'||^2 is the loss function.
Identifying and eliminating abnormal values, noise and repeated data in the data through multidimensional analysis of the data; compared with the traditional rule-based method, the algorithm provided by the invention is more intelligent, can adapt to data sets in different types and fields, and improves the cleaning effect.
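The following minimal PyTorch sketch illustrates the background predictor of step 2.2a, trained toward the min_B ||I_{t+1} - B(I_{1:t})||^2 objective; the three-layer network and the clip length t = 4 are illustrative assumptions:

```python
# Sketch of step 2.2a: predict the next frame's background from the previous t frames.
import torch
import torch.nn as nn

class BackgroundPredictor(nn.Module):
    def __init__(self, t: int = 4):
        super().__init__()
        # Input: t grayscale frames stacked as channels; output: 1 background frame.
        self.net = nn.Sequential(
            nn.Conv2d(t, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)  # B(I_{1:t})

model = BackgroundPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(2, 4, 64, 64)      # I_{1:t}: batch of 4-frame clips
next_frame = torch.rand(2, 1, 64, 64)  # I_{t+1}
loss = ((next_frame - model(frames)) ** 2).mean()  # ||I_{t+1} - B(I_{1:t})||^2
loss.backward()
opt.step()
# Foreground mask: threshold |current frame - predicted background|.
```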
Step 2.3, normalizing the data to eliminate the influence of dimension and numerical differences in the data on model training; the specific steps are as follows:
a. Perform distribution analysis on the dataset, calculating statistics such as mean, variance, skewness and kurtosis.
b. According to the distribution characteristics, automatically select the most suitable normalization method, such as min-max normalization or Z-score normalization.
c. Apply the selected normalization method to the dataset, eliminating dimension and numerical differences.
Compared with the traditional normalization method, the method can automatically select the most suitable normalization mode according to the distribution characteristics of the data, so that the data has better comparability under different scales.
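A minimal sketch of the automatic normalization selection in step 2.3; the skewness-based decision rule is an assumption, since the patent states only that the method is chosen automatically from the distribution statistics:

```python
# Sketch of step 2.3: pick a normalization by distribution shape.
# The |skewness| > 1 decision rule is an illustrative assumption.
import numpy as np
from scipy import stats

def auto_normalize(x: np.ndarray) -> np.ndarray:
    mean, var = x.mean(), x.var()
    skew, kurt = stats.skew(x), stats.kurtosis(x)  # step a: distribution statistics
    if abs(skew) > 1.0:
        # Strongly skewed column: min-max scaling to [0, 1] avoids assuming symmetry.
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    # Roughly symmetric column: Z-score standardization.
    return (x - mean) / (np.sqrt(var) + 1e-12)

col = np.random.lognormal(size=1000)  # skewed column -> min-max branch
print(auto_normalize(col)[:3])
```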
Step 2.4, in the feature selection and dimension reduction stage, perform feature selection based on mutual information:
a. Calculate mutual information between features in the dataset and between each feature and the target variable.
b. According to the mutual information values, evaluate the correlation between features and the degree of association between each feature and the target variable.
c. Retain key information, remove redundant and irrelevant features, and reduce the data dimension.
By calculating mutual information between features, the invention evaluates the correlation between features and the degree of association between features and target variables. The method and the device can remove redundant and irrelevant features while keeping key information, reduce data dimension and improve model training efficiency.
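A sketch of the mutual-information feature selection in step 2.4 using scikit-learn; the relevance and redundancy thresholds and the histogram discretization are illustrative assumptions:

```python
# Sketch of step 2.4: keep target-relevant features, drop mutually redundant ones.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def select_features(X: np.ndarray, y: np.ndarray):
    # a. MI between each feature and the target variable (relevance).
    relevance = mutual_info_classif(X, y, random_state=0)
    keep = [i for i in np.argsort(relevance)[::-1] if relevance[i] > 0.01]
    # b./c. Drop a feature if it is highly redundant with an already-kept one
    # (MI between the two discretized features above an assumed threshold).
    def bins(j):
        return np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins=16))
    selected = []
    for i in keep:
        if not any(mutual_info_score(bins(i), bins(j)) > 0.9 for j in selected):
            selected.append(i)
    return selected

X = np.random.rand(200, 10)
y = (X[:, 0] + 0.1 * np.random.rand(200) > 0.5).astype(int)
print(select_features(X, y))  # feature 0 should rank highly
```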
Step 3, through data interaction between the edge device data transmission module and the command center data transmission module, the command center data transmission module sends the preprocessed data to the data processing module of the command center. To achieve higher communication reliability and adaptability, the invention designs a multimode communication interface for compatibility with, and switching between, multiple communication modes. The interface supports a variety of communication protocols (e.g., Wi-Fi, Bluetooth, LoRa, LTE) and physical layer interfaces (e.g., Ethernet, serial). By packaging different communication modules into a unified interface format, switching between different communication modes can be performed conveniently, improving the reliability and adaptability of communication.
The step 3 specifically comprises the following steps:
step 3.1, monitoring the channel state in real time; firstly, collecting communication quality indexes in real time, and evaluating the current channel state, wherein the communication quality indexes comprise received signal strength and signal-to-noise ratio; and secondly, monitoring performance indexes, including network delay and data throughput, to provide basis for channel selection.
Step 3.2, adopting a channel selection algorithm; the algorithm automatically learns the optimal selection strategy under different channel states by establishing a Markov decision process model of channel selection; the specific implementation is as follows:
a. State definition: the communication quality indexes and performance indexes are combined into a multidimensional vector used as the state representation of the reinforcement learning algorithm.
b. Action definition: switching to a different communication mode or keeping the current mode is taken as the action of the reinforcement learning algorithm. For example, the action set may be represented as {keep current mode, switch to Wi-Fi, switch to LoRa, ...}.
c. Reward function design: a reward function is designed to evaluate how good a channel selection is based on the current state and action. The reward function should comprehensively consider factors such as communication quality, delay and data throughput to realize optimal channel selection.
d. Reinforcement learning algorithm: a reinforcement learning algorithm suitable for a continuous state space and a discrete action space is adopted to learn the optimal channel selection strategy.
Step 3.3, dynamic channel switching; dynamic channel switching is realized according to the learned channel selection strategy; when the channel state changes, data transmission is switched to the optimal channel through the multimode communication interface to ensure communication quality and performance. Through the innovative multimode communication interface design and optimal channel selection method, the invention realizes dynamic selection of and switching between different channels in the disaster relief communication system. Compared with the prior art, this provides higher communication reliability and adaptability, helping to improve the effectiveness of rescue actions.
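As an illustration of steps 3.1 to 3.3, the following sketch implements the described Markov decision process with Q-learning over a linear function approximator, which suits a continuous state space and a discrete action space. The action set, reward weights, and learning rates are illustrative assumptions, not values fixed by the patent:

```python
# Sketch of channel selection: linear Q-learning over normalized link metrics.
import numpy as np

ACTIONS = ["keep", "wifi", "lora", "lte"]  # hypothetical action set
N_FEATURES = 4  # state: [rssi, snr, delay, throughput], each normalized to [0, 1]

W = np.zeros((len(ACTIONS), N_FEATURES))  # one linear Q-head per action

def q_values(state: np.ndarray) -> np.ndarray:
    return W @ state

def reward(state_after: np.ndarray) -> float:
    rssi, snr, delay, throughput = state_after
    # Quality and throughput rewarded, delay penalized (assumed weights).
    return 0.3 * rssi + 0.3 * snr + 0.4 * throughput - 0.5 * delay

def step(state, next_state, epsilon=0.1, alpha=0.05, gamma=0.9):
    # Epsilon-greedy action on the current channel measurements.
    a = np.random.randint(len(ACTIONS)) if np.random.rand() < epsilon \
        else int(np.argmax(q_values(state)))
    # TD(0) update of the linear weights for the taken action.
    td_target = reward(next_state) + gamma * np.max(q_values(next_state))
    W[a] += alpha * (td_target - W[a] @ state) * state
    return ACTIONS[a]

s = np.array([0.8, 0.7, 0.2, 0.6])
s2 = np.array([0.6, 0.5, 0.4, 0.4])
print(step(s, s2))  # channel action chosen for this transition
```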
Step 4, after the command center data processing module receives the preprocessed data, multi-mode fusion is carried out, fusing the point cloud data and the video data.
The step 4 specifically comprises the following steps:
Step 4.1, carrying out data fusion based on a graph model; the method comprises the following specific steps:
a. Feature extraction: first, features are extracted from the point cloud data and the image data respectively, obtaining:
X_p = ReLU(MLP(P, W_p) + b_p)
where MLP denotes a multilayer perceptron, P is the data content of the p-th point cloud group, W_p and b_p are the parameters of the MLP for the p-th point cloud group, ReLU is the linear rectification function, and X_p is the resulting point cloud feature of the p-th group.
b. Feature alignment function
Feature alignment is achieved by a linear transformation; letting A be the transformation matrix, we obtain:
X'_p = A × X_p
Here A is the transformation matrix, whose goal is to map the point cloud feature X_p into the same feature space as the i-th image feature X_i, yielding the mapped point cloud feature X'_p.
When a group of point cloud features is paired with image features, a linear transformation is sought such that the transformed point cloud features and the image features are maximally consistent in variance; letting C be the covariance matrix of the point cloud features and the image features, A is obtained by solving the following optimization problem:
A = argmax(Tr(A' × C × A))
where Tr denotes the trace of a matrix, A' is the transpose of A, and argmax denotes the argument at which the function attains its maximum. A runnable sketch of this alignment, together with the fusion below, is given after step 4.1.
c. Feature fusion function
For feature fusion, the feature vectors are directly concatenated, and the final feature representation is then obtained through a fully connected layer; letting W_c and b_c be the parameters of the fully connected layer, we obtain:
X = ReLU(W_c × Concat(X'_p, X_i) + b_c)
where Concat denotes the concatenation operation, ReLU is the linear rectification function, X'_p is the mapped point cloud feature, and X_i is the image feature of the i-th image corresponding to X'_p.
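A sketch of the alignment and fusion functions of steps 4.1b and 4.1c. Note that Tr(A' × C × A) is unbounded without a constraint on A; the sketch assumes an orthonormality constraint, under which the maximizer is the matrix of leading eigenvectors of the symmetrized covariance C. The feature dimensions and the way C is formed are also assumptions:

```python
# Sketch of steps 4.1b/c: eigenvector-based alignment, then concat + FC fusion.
import numpy as np

def alignment_matrix(C: np.ndarray, out_dim: int) -> np.ndarray:
    Cs = 0.5 * (C + C.T)          # use the symmetric part of C
    _, vecs = np.linalg.eigh(Cs)  # eigenvectors, ascending eigenvalue order
    return vecs[:, -out_dim:]     # top-out_dim directions maximize Tr(A'CA)

def relu(z):
    return np.maximum(z, 0.0)

def fuse(X_p, X_i, W_c, b_c, C):
    A = alignment_matrix(C, X_i.shape[1])
    X_p_aligned = X_p @ A                               # X'_p = A x X_p (row-vector form)
    joint = np.concatenate([X_p_aligned, X_i], axis=1)  # Concat(X'_p, X_i)
    return relu(joint @ W_c.T + b_c)                    # X = ReLU(W_c Concat(...) + b_c)

rng = np.random.default_rng(0)
X_p = rng.normal(size=(5, 64))   # point cloud features (5 samples)
X_i = rng.normal(size=(5, 32))   # matching image features
C = np.cov(X_p.T)                # 64x64 feature covariance (assumed definition of C)
W_c, b_c = rng.normal(size=(128, 64)), np.zeros(128)
print(fuse(X_p, X_i, W_c, b_c, C).shape)  # (5, 128)
```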
Step 4.2, global optimization; after the spatial features are extracted, global optimization is performed using an improved iterative closest point (ICP) algorithm; the algorithm uses the extracted spatial feature information to calculate an optimal transformation matrix between the point cloud data and the video data, accurately aligning the point cloud data with the depth map. By introducing weight coefficients for feature matching, the convergence speed and fusion precision of the algorithm are improved.
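A sketch of one iteration of the improved ICP of step 4.2: a standard weighted point-to-point alignment (weighted Kabsch solution) in which each correspondence is weighted by feature-matching similarity; the inverse-feature-distance weighting is an assumption, since the patent says only that weight coefficients for feature matching are introduced:

```python
# Sketch of a feature-weighted ICP iteration.
import numpy as np
from scipy.spatial import cKDTree

def weighted_icp_step(src, dst, src_feat, dst_feat):
    # Nearest neighbors in 3D space give the correspondences.
    idx = cKDTree(dst).query(src)[1]
    # Weights from feature similarity of matched pairs (assumed scheme).
    w = 1.0 / (1.0 + np.linalg.norm(src_feat - dst_feat[idx], axis=1))
    w = w / w.sum()
    # Weighted Kabsch: optimal rotation R and translation t for the pairs.
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst[idx]).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst[idx] - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = mu_d - R @ mu_s
    return (R @ src.T).T + t, R, t  # transformed source, rotation, translation

rng = np.random.default_rng(1)
dst = rng.normal(size=(100, 3))
src = dst + 0.01 * rng.normal(size=(100, 3))  # slightly perturbed copy
feat = rng.normal(size=(100, 8))              # stand-in spatial features
aligned, R, t = weighted_icp_step(src, dst, feat, feat)
```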
Step 4.3, data visualization: voxelization is performed on the fused multi-mode data to generate an information-dense three-dimensional map; rescue team members can view the map in real time through the MR head-mounted display in order to carry out rescue in complex environments.
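A sketch of the voxelization in step 4.3, bucketing fused points into a regular grid and keeping one averaged attribute per occupied voxel; the voxel size and the attribute being averaged are illustrative assumptions:

```python
# Sketch of step 4.3: voxel grid over fused points with per-voxel mean attributes.
import numpy as np

def voxelize(points: np.ndarray, attrs: np.ndarray, voxel: float = 0.1):
    keys = np.floor(points / voxel).astype(np.int64)  # integer voxel indices
    out = {}
    for k, a in zip(map(tuple, keys), attrs):
        cnt, acc = out.get(k, (0, 0.0))
        out[k] = (cnt + 1, acc + a)
    # voxel index -> mean attribute (e.g. fused intensity or color channel)
    return {k: acc / cnt for k, (cnt, acc) in out.items()}

pts = np.random.rand(1000, 3)
inten = np.random.rand(1000)
grid = voxelize(pts, inten)
print(len(grid), "occupied voxels")
```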
Through the innovative multi-mode fusion method, the video and point cloud data can be effectively integrated in a disaster relief scene. Compared with the prior art, the fusion method has higher precision and richer information content, and is beneficial to improving the rescue action effect.
Step 5, the fused multi-mode data is identified and reconstructed efficiently and rapidly using the quantized and pruned lightweight model, generating image data meaningful to disaster relief individual soldiers.
The step 5 specifically comprises the following steps:
Step 5.1, designing a lightweight network structure; the structure adopts lightweight techniques such as grouped convolution and depthwise separable convolution to reduce the computation and parameter count of the model; meanwhile, attention mechanisms are introduced at key parts to enhance feature expression capability and improve model performance.
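A sketch of a lightweight block in the spirit of step 5.1, combining a depthwise separable convolution with a squeeze-and-excitation style channel attention gate; the channel counts and the specific attention form are illustrative assumptions:

```python
# Sketch of step 5.1: depthwise separable conv + channel attention.
import torch
import torch.nn as nn

class LightBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)  # per-channel conv
        self.pointwise = nn.Conv2d(ch, ch, 1)                        # 1x1 channel mix
        # Channel attention: global pool -> bottleneck MLP -> sigmoid gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 8, 1), nn.ReLU(),
            nn.Conv2d(ch // 8, ch, 1), nn.Sigmoid(),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.act(self.pointwise(self.depthwise(x)))
        return y * self.attn(y)  # reweight channels at key parts

x = torch.randn(1, 64, 32, 32)
print(LightBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```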
Step 5.2, fine tuning of a dynamic model: firstly, pretraining on a large amount of synthesized multi-mode data to form a basic model; then, in the actual rescue process, online transfer learning is carried out on the model according to the collected real-time data, so that the model can be quickly adapted to the current scene, and the recognition accuracy is improved.
Step 5.3, model quantization and pruning; the method comprises the following steps:
a. Network pruning: redundant neurons and connections in the model are pruned by a network pruning method based on weight sensitivity. By setting a weight sensitivity threshold, the degree of pruning can be controlled flexibly, balancing model performance against computational complexity.
b. Weight quantization: an adaptive quantization algorithm is introduced to compress the model weights; the quantization level is determined automatically according to the weight distribution, so that the quantized model retains high precision while significantly reducing storage and computation requirements.
c. Structural reorganization: during pruning and quantization, the model is adjusted dynamically. In addition to dynamic adaptation of network width and depth, a supernet with multiple forward paths is established; dynamic routing is performed according to different input samples, and convolution channels of different scales are adaptively activated according to set parameters, improving computational efficiency while maintaining model capacity. Structural reorganization reduces wasted computing resources and further improves computational efficiency.
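A sketch of steps 5.3a and 5.3b, using magnitude-based pruning as a stand-in for the patent's weight-sensitivity criterion and a uniform per-tensor quantizer whose bit width is chosen from the weight spread; the thresholds and the bit-width rule are assumptions:

```python
# Sketch of steps 5.3a/b: threshold pruning, then adaptive uniform quantization.
import numpy as np

def prune(w: np.ndarray, sensitivity: float = 0.05) -> np.ndarray:
    # Zero out connections whose |weight| falls below a sensitivity threshold
    # scaled by the tensor's largest weight.
    mask = np.abs(w) >= sensitivity * np.abs(w).max()
    return w * mask

def quantize(w: np.ndarray):
    # Adaptive choice: wider weight distributions get more bits (assumed rule).
    bits = 8 if np.std(w) > 0.1 else 4
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale, bits  # dequantize with q * scale

w = np.random.randn(256, 256).astype(np.float32)
wp = prune(w)
q, scale, bits = quantize(wp)
print(f"{(wp == 0).mean():.1%} pruned, {bits}-bit quantized")
```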
Step 5.4, a fast inference framework; the framework adopts various hardware acceleration technologies, such as neural processing units (NPUs) and graphics processing units (GPUs), to parallelize and optimize model computation, greatly reducing inference latency.
Through the innovative lightweight model design and quantization-and-pruning method, the invention realizes real-time processing of multi-mode data in the disaster relief command system. Compared with the prior art, it achieves higher computational efficiency and lower latency, helping to improve the real-time performance and effectiveness of rescue actions.
Step 6, the data processing module sends the generated image data back to the command center data transmission module; the edge equipment data transmission module receives the image data sent by the command center data transmission module and then transmits the image data to the display module; the display module presents the received image data to the individual relief soldier so as to assist the individual relief soldier to complete relief actions under the conditions of limited visual field, complex environment and unknown condition.
A computer storage medium having stored thereon a computer program which when executed by a processor implements a disaster relief command system based on real-time multimodal data as described above.
A computer device comprising a memory, a processor and a computer program stored on the memory and operable on the processor, the processor implementing a disaster relief command system based on real-time multimodal data as described above when executing the computer program.

Claims (6)

1. The disaster relief command system based on the real-time multi-mode data is characterized by comprising a command center and modularized portable edge equipment; the command center is used for real-time command and data processing of disaster relief sites and comprises a data processing module and a command center data transmission module; the modularized portable edge equipment comprises a data acquisition module, a data preprocessing module, an edge equipment data transmission module and a display module; the function of each module is as follows:
And a data acquisition module: the disaster relief system comprises a point cloud data acquisition module and a video data acquisition module, which are respectively used for acquiring 3D point cloud data and 2D image data required by a disaster relief command system;
And a data preprocessing module: comprises a task processor, which is used for cleaning and mining the data collected by the point cloud data acquisition module and the video data acquisition module and then sending the cleaned and mined data to the edge device data transmission module; the data preprocessing module is used for realizing the following steps:
Step 2.1, adopting a point cloud data cleaning method; the method comprises the following specific steps:
a. Deep learning denoising: a deep autoencoder model is adopted, consisting of an encoder and a decoder; the encoder maps the input point cloud data to a low-dimensional space, and the decoder attempts to reconstruct the original point cloud data from the low-dimensional representation; by minimizing the reconstruction error, the autoencoder learns a denoised representation; this is realized by the following optimization problem:
min_{E,D} ||X - D(E(X))||^2
wherein X is the input point cloud data, E is the encoder, and D is the decoder;
E(X): the encoder function E, which converts the input data X into a lower-dimensional representation or code;
D(E(X)): the decoder function D, which converts the encoder output E(X) back to the original data space;
X - D(E(X)): the difference between the original data X and its reconstruction D(E(X)), referred to as the reconstruction error; the original data X is reconstructed by encoding and then decoding;
||X - D(E(X))||^2: the squared norm of the reconstruction error, used as the loss function;
min_{E,D}: minimization over the pair of encoder E and decoder D that minimizes the loss function;
b. Model-based downsampling: first, a geometric model of the data is estimated using a pre-trained deep learning model, and key points are then selectively retained according to the model; the formula used is as follows:
S(X) = argmin_{X'} ||M(X) - X'||^2
wherein S(X) is the selection function, X is the input point cloud data, X' is a downsampled point cloud, {X'} is the set of candidate downsampled point clouds, argmin denotes the argument at which the function attains its minimum, M is the geometric model prediction function, and ||M(X) - X'||^2 is the loss function;
Step 2.2, using a high-efficiency video data cleaning method; the method comprises the following specific steps:
a. Background subtraction based on deep learning: a deep convolutional neural network model is used that receives video frames as input and predicts the background of the next frame; the predicted background is obtained from the following optimization problem:
min_B ||I_{t+1} - B(I_{1:t})||^2
wherein I_{1:t} denotes the first t video frames, I_{t+1} is the (t+1)-th video frame, B is the background prediction function, {B} is the set of candidate background predictors, min denotes taking the minimum of the function, and ||I_{t+1} - B(I_{1:t})||^2 is the loss function;
b. Adaptive noise cancellation: first, the noise level is estimated using a deep learning model, and the filter parameters are then adjusted dynamically according to the estimated noise level; the formula used is as follows:
F(I) = argmin_{I'} ||N(I) - I'||^2
wherein F(I) is the filter function, I is the input noisy video frame, I' is a filtered video frame, {I'} is the set of candidate filtered frames, N is the noise estimation function, argmin denotes the argument at which the function attains its minimum, and ||N(I) - I'||^2 is the loss function;
Identifying and eliminating abnormal values, noise and repeated data in the data through multidimensional analysis of the data;
step 2.3, normalizing the data, and further eliminating the influence of dimension and numerical difference in the data on model training; the method comprises the following specific steps:
a. carrying out distribution analysis on the data set, wherein the distribution analysis comprises the steps of calculating the mean value, variance, skewness and kurtosis;
b. According to the distribution characteristics, automatically selecting the most suitable normalization method;
c. applying the selected normalization method to the data set to eliminate dimension and numerical differences;
step 2.4, in the feature selection and dimension reduction stage, performing feature selection based on mutual information;
a. calculating mutual information between each feature in the data set and between the feature and the target variable;
b. according to the mutual information value, evaluating the correlation between the features and the degree of correlation between the features and the target variable;
c. key information is reserved, redundant and irrelevant features are removed, and the data dimension is reduced;
An edge device data transmission module: the system comprises an edge multi-mode communication module, a display module and a data preprocessing module, wherein the edge multi-mode communication module supports multi-mode data communication and is used for sending data processed by the data preprocessing module to a command center, receiving processed image data from the command center and sending the processed image data to the display module;
And a display module: for performing a display of image data of the command center;
And a data processing module: comprises a high-performance processor unit, which is used for importing the data cleaned and mined by the data preprocessing module of the edge equipment into a quantized and pruned lightweight model for processing, efficiently and rapidly identifying and reconstructing the fused multi-mode data, generating image data meaningful to disaster relief individual soldiers, and sending the generated image data to the command center data transmission module; the data processing module is used for realizing the following steps:
Step 4.1, carrying out data fusion based on a graph model; the method comprises the following specific steps:
a. Feature extraction: first, features are extracted from the point cloud data, obtaining:
X_p = ReLU(MLP(P, W_p) + b_p)
wherein MLP denotes a multilayer perceptron, P is the data content of the p-th point cloud group, W_p and b_p are the parameters of the MLP for the p-th point cloud group, ReLU is the linear rectification function, and X_p is the resulting point cloud feature of the p-th group;
b. Feature alignment function;
For feature alignment, this is achieved by a linear transformation; let A be the transformation matrix, obtain:
X'_p = A × X_p
wherein A in the feature alignment function is the transformation matrix, whose goal is to map the point cloud feature X_p into the same feature space as the i-th image feature X_i, yielding the mapped point cloud feature X'_p;
When a group of point cloud features and image features are paired, the transformed point cloud features and image features are maximally consistent in variance by finding a linear transformation; let C be the covariance matrix of the point cloud features and the image features, obtain A by solving the following optimization problem:
A = argmax(Tr(A' × C × A))
wherein Tr denotes the trace of a matrix, A' is the transpose of A, and argmax denotes the argument at which the function attains its maximum;
c. a feature fusion function;
For feature fusion, the feature vectors are directly concatenated, and the final feature representation is then obtained through a fully connected layer; letting W_c and b_c be the parameters of the fully connected layer, we obtain:
X = ReLU(W_c × Concat(X'_p, X_i) + b_c)
wherein Concat denotes the concatenation operation, ReLU is the linear rectification function, X'_p is the mapped point cloud feature, and X_i is the image feature of the i-th image corresponding to X'_p;
Step 4.2, global optimization; after the spatial features are extracted, global optimization is performed by adopting an improved iterative closest point algorithm; the algorithm calculates an optimal transformation matrix between the point cloud data and the video data by using the extracted spatial feature information, and accurately aligns the point cloud data with the depth map;
step 4.3, data visualization: voxel processing is carried out on the fused multi-mode data, and a three-dimensional map with dense information is generated; rescue team members can view the maps in real time through the MR head display so as to rescue in a complex environment;
Command center data transmission module: the multi-mode communication module is used for sending the image data received from the data processing module to the edge device data transmission module of the edge device.
2. The disaster relief command method based on the real-time multi-mode data adopts the disaster relief command system based on the real-time multi-mode data as claimed in claim 1, and is characterized by comprising the following steps:
step 1, an individual carries modularized portable edge equipment to enter a disaster area, and point cloud data and video data are respectively acquired through a point cloud data acquisition module and a video data acquisition module;
step 2, the data preprocessing module performs light-weight data cleaning and dimension reduction on the acquired point cloud data and video data to generate preprocessed data; the step 2 is specifically as follows:
Step 2.1, adopting a point cloud data cleaning method; the method comprises the following specific steps:
a. Deep learning denoising: a deep autoencoder model is adopted, consisting of an encoder and a decoder; the encoder maps the input point cloud data to a low-dimensional space, and the decoder attempts to reconstruct the original point cloud data from the low-dimensional representation; by minimizing the reconstruction error, the autoencoder learns a denoised representation; this is realized by the following optimization problem:
min_{E,D} ||X - D(E(X))||^2
wherein X is the input point cloud data, E is the encoder, and D is the decoder;
E(X): the encoder function E, which converts the input data X into a lower-dimensional representation or code;
D(E(X)): the decoder function D, which converts the encoder output E(X) back to the original data space;
X - D(E(X)): the difference between the original data X and its reconstruction D(E(X)), referred to as the reconstruction error; the original data X is reconstructed by encoding and then decoding;
||X - D(E(X))||^2: the squared norm of the reconstruction error, used as the loss function;
min_{E,D}: minimization over the pair of encoder E and decoder D that minimizes the loss function;
b. Model-based downsampling: first, a pre-trained deep learning model estimates a geometric model of the data, and key points are then selectively retained according to that model; the formula used is:
S(X) = argmin_{X'} ||M(X) - X'||^2;
where S(X) is the selection function, X is the input point cloud data, X' ranges over the candidate downsampled point clouds, M is the geometric model prediction function, argmin returns the argument at which the loss attains its minimum, and ||M(X) - X'||^2 is the loss function; a minimal sketch of the denoising autoencoder of sub-step a is given after this list;
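As a concrete illustration of sub-step a, here is a minimal PyTorch sketch of a point cloud denoising autoencoder trained with the loss ||X - D(E(X))||^2; the layer sizes, point count, and placeholder data are illustrative assumptions, not values taken from this patent:

```python
import torch
import torch.nn as nn

class PointCloudAE(nn.Module):
    """Denoising autoencoder for point clouds: X -> E(X) -> D(E(X))."""
    def __init__(self, n_points: int = 1024, latent_dim: int = 64):
        super().__init__()
        # Encoder E: flattened (x, y, z) coordinates -> low-dimensional code
        self.encoder = nn.Sequential(
            nn.Linear(n_points * 3, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder D: low-dimensional code -> reconstructed point cloud
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = PointCloudAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1024 * 3)            # a batch of noisy point clouds (placeholder)
optimizer.zero_grad()
loss = ((x - model(x)) ** 2).mean()     # reconstruction error ||X - D(E(X))||^2
loss.backward()
optimizer.step()
```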
Step 2.2, an efficient video data cleaning method is used; the specific steps are as follows:
a. Deep-learning background subtraction: a deep convolutional neural network model receives video frames as input and predicts the background of the next frame; the predicted background is obtained through the following optimization problem:
min_B ||I_{t+1} - B(I_{1:t})||^2;
where I_{1:t} are the first t video frames, I_{t+1} is the (t+1)-th frame, B is the background prediction function, and ||I_{t+1} - B(I_{1:t})||^2 is the loss function minimized over B;
b. Adaptive noise cancellation: a deep learning model first estimates the noise level, and the filter parameters are then adjusted dynamically according to that estimate; the formula used is:
F(I) = argmin_{I'} ||N(I) - I'||^2;
where F(I) is the filter function, I is the input noisy video frame, I' ranges over the candidate filtered frames, N is the noise estimation function, argmin returns the argument at which the loss attains its minimum, and ||N(I) - I'||^2 is the loss function;
Through multidimensional analysis of the data, outliers, noise, and duplicate data are identified and eliminated; a minimal sketch of the background predictor of sub-step a is given after this list;
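For sub-step a, the following is a hedged sketch of a small convolutional background predictor trained with the loss ||I_{t+1} - B(I_{1:t})||^2; the window length t, layer sizes, and placeholder tensors are assumptions for the example:

```python
import torch
import torch.nn as nn

class BackgroundPredictor(nn.Module):
    """B: predicts the background of frame t+1 from the previous t RGB frames."""
    def __init__(self, t: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * t, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # predicted RGB background
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3*t, H, W) -- the previous t frames stacked along channels
        return self.net(frames)

b = BackgroundPredictor(t=4)
frames = torch.randn(2, 12, 64, 64)       # placeholder clip I_{1:t}
next_frame = torch.randn(2, 3, 64, 64)    # placeholder frame I_{t+1}
loss = ((next_frame - b(frames)) ** 2).mean()   # ||I_{t+1} - B(I_{1:t})||^2
```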
Step 2.3, normalizing the data to further eliminate the influence of dimensional and numerical differences on model training; the specific steps are as follows:
a. perform a distribution analysis of the data set, computing the mean, variance, skewness, and kurtosis;
b. according to the distribution characteristics, automatically select the most suitable normalization method (a sketch of one such selection rule follows this list);
c. apply the selected normalization method to the data set to eliminate dimensional and numerical differences;
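As one possible realization of this selection rule, the sketch below computes the distribution statistics and picks among a log transform, robust scaling, and z-score normalization; the thresholds are assumptions chosen for illustration, since the patent does not specify the selection criteria:

```python
import numpy as np
from scipy import stats

def auto_normalize(x: np.ndarray) -> np.ndarray:
    """Choose a normalization method from simple distribution statistics."""
    if abs(stats.skew(x)) > 1.0:               # heavily skewed: log-transform first
        x = np.log1p(x - x.min())
    if abs(stats.kurtosis(x)) > 3.0:           # heavy tails: robust median/IQR scaling
        q75, q25 = np.percentile(x, [75, 25])
        return (x - np.median(x)) / (q75 - q25 + 1e-8)
    return (x - x.mean()) / (x.std() + 1e-8)   # otherwise plain z-score

print(auto_normalize(np.random.exponential(size=1000))[:5])
```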
Step 2.4, feature selection and dimensionality reduction, using mutual-information-based feature selection:
a. compute the mutual information between each pair of features in the data set and between each feature and the target variable;
b. based on the mutual information values, evaluate the correlation among features and the degree of relevance between each feature and the target variable;
c. retain the key information, remove redundant and irrelevant features, and reduce the data dimensionality; a minimal sketch of this selection is given after this list;
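A minimal sketch of mutual-information feature selection using scikit-learn follows; the synthetic data and the choice to keep the top 5 features are assumptions for the example:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                   # 500 samples, 20 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # target depends on features 0 and 3

mi = mutual_info_classif(X, y, random_state=0)   # MI between each feature and y
top_k = np.argsort(mi)[::-1][:5]                 # keep the 5 most informative features
X_reduced = X[:, top_k]
print("kept features:", sorted(top_k.tolist())）  # features 0 and 3 should rank high
```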
Step 3, through data interaction between the edge equipment data transmission module and the command center data transmission module, the command center data transmission module sends the preprocessed data to the data processing module of the command center;
Step 4, after the command center data processing module receives the preprocessed data, multi-mode fusion is carried out, and point cloud data and video data are fused; the step 4 specifically comprises the following steps:
Step 4.1, carrying out data fusion based on a graph model; the method comprises the following specific steps:
a. Feature extraction: firstly, extracting characteristics of point cloud data to obtain:
X_p = ReLU(MLP(P, W_p) + b_p);
where MLP denotes a multi-layer perceptron, P is the p-th group of point cloud data, W_p and b_p are the parameters of the MLP for the p-th group, ReLU is the linear rectification function, and X_p is the resulting point cloud feature of the p-th group;
b. Feature alignment function;
Feature alignment is achieved through a linear transformation; let A be the transformation matrix, giving:
X'_p = A × X_p;
where A is the transformation matrix of the feature alignment function, whose goal is to map the point cloud feature X_p into the same feature space as the i-th image feature X_i, yielding the mapped point cloud feature X'_p;
When pairing a group of point cloud features with image features, a linear transformation is sought that makes the variances of the transformed point cloud features and the image features maximally consistent; let C be the covariance matrix of the point cloud features and the image features; A is obtained by solving the following optimization problem:
A = argmax_A Tr(A^T × C × A);
where Tr denotes the trace of a matrix, A^T is the transpose of A, and argmax returns the argument at which the objective attains its maximum;
c. Feature fusion function;
For feature fusion, the feature vectors are concatenated and passed through a fully connected layer to obtain the final feature representation; let W_c and b_c be the parameters of the fully connected layer, giving:
X = ReLU(W_c × Concat(X'_p, X_i) + b_c);
where Concat denotes the concatenation operation, ReLU is the linear rectification function, X'_p is the mapped point cloud feature, and X_i is the image feature of the i-th image corresponding to X'_p; a minimal sketch of this extraction-alignment-fusion pipeline is given below;
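The following PyTorch sketch strings the three functions together: MLP feature extraction (X_p), linear alignment (X'_p = A × X_p), and concatenation plus a fully connected layer (X); all dimensions, the max-pooling over points, and the placeholder inputs are assumptions for illustration:

```python
import torch
import torch.nn as nn

class GraphFusion(nn.Module):
    """Extract, align, and fuse point cloud features with image features."""
    def __init__(self, pc_dim=3, feat_dim=64, img_dim=128, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(                 # X_p = ReLU(MLP(P, W_p) + b_p)
            nn.Linear(pc_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.align = nn.Linear(feat_dim, img_dim, bias=False)  # X'_p = A × X_p
        self.fuse = nn.Linear(img_dim * 2, out_dim)            # W_c, b_c

    def forward(self, points: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        xp = self.mlp(points).max(dim=1).values   # pool per-point features to X_p
        xp_aligned = self.align(xp)               # map into the image feature space
        fused = torch.cat([xp_aligned, img_feat], dim=-1)      # Concat(X'_p, X_i)
        return torch.relu(self.fuse(fused))       # X = ReLU(W_c × Concat + b_c)

f = GraphFusion()
out = f(torch.randn(2, 1024, 3), torch.randn(2, 128))  # 2 point clouds + 2 image features
print(out.shape)                                        # torch.Size([2, 128])
```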
Step 4.2, global optimization; after the spatial features are extracted, global optimization is performed with an improved iterative closest point (ICP) algorithm; using the extracted spatial feature information, the algorithm computes an optimal transformation matrix between the point cloud data and the video data, accurately aligning the point cloud with the depth map;
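As a stand-in for the improved ICP (whose specific modifications the claim does not detail), the sketch below runs Open3D's standard point-to-point ICP to obtain the optimal transformation matrix; the correspondence threshold, identity initialization, and random clouds are assumptions:

```python
import numpy as np
import open3d as o3d

source = o3d.geometry.PointCloud()
target = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(np.random.rand(500, 3))  # placeholder clouds
target.points = o3d.utility.Vector3dVector(np.random.rand(500, 3))

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,   # assumed threshold (metres)
    init=np.eye(4),                     # start from the identity transform
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)            # optimal 4x4 matrix aligning source to target
```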
Step 4.3, data visualization: the fused multi-mode data are voxelized to generate an information-dense three-dimensional map; rescue team members can view this map in real time through an MR head-mounted display to carry out rescue in complex environments;
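For the voxelization in step 4.3, a minimal Open3D sketch follows; the 0.1 m voxel size and the random colored cloud are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

fused = o3d.geometry.PointCloud()                 # stands in for the fused data
fused.points = o3d.utility.Vector3dVector(np.random.rand(2000, 3))
fused.colors = o3d.utility.Vector3dVector(np.random.rand(2000, 3))

voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(fused, voxel_size=0.1)
# voxel_grid is the dense 3D map that would be streamed to the MR head-mounted display
```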
Step 5, using the quantized and pruned lightweight model to efficiently and rapidly recognize and reconstruct the fused multi-mode data, generating image data meaningful to the individual relief soldier;
Step 6, the data processing module sends the generated image data back to the command center data transmission module; the edge equipment data transmission module receives the image data sent by the command center data transmission module and then forwards it to the display module; the display module presents the received image data to the individual relief soldier to assist in completing the rescue action.
3. The disaster relief command method based on real-time multi-mode data according to claim 2, wherein the step 3 is specifically:
step 3.1, monitoring the channel state in real time; firstly, collecting communication quality indexes in real time, and evaluating the current channel state, wherein the communication quality indexes comprise received signal strength and signal-to-noise ratio; secondly, monitoring performance indexes, which include network delay and data throughput, to provide basis for channel selection;
Step 3.2, adopting a channel selection algorithm; the algorithm automatically learns the optimal selection strategy under different channel states by establishing a Markov decision process model of channel selection; the specific implementation process is as follows:
a. state definition: combining the communication quality index and the performance index into a multidimensional vector serving as a state representation of the reinforcement learning algorithm;
b. Action definition: switching to a different communication mode or maintaining the current mode serves as the action of the reinforcement learning algorithm;
c. Reward function design: a reward function is designed from the current state and action to evaluate the channel selection;
d. Reinforcement learning algorithm: a reinforcement learning algorithm suited to a continuous state space and a discrete action space learns the optimal channel selection strategy (a minimal sketch of such an agent is given after step 3.3 below);
Step 3.3, dynamic channel switching; dynamic channel switching is realized according to the learned channel selection strategy; when the channel state changes, data transmission is switched to the optimal channel through the multi-mode communication interface to ensure communication quality and performance.
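A minimal sketch of such a channel-selection agent follows: a small Q-network over the continuous state vector (received signal strength, SNR, delay, throughput) with discrete channel actions, trained by one-step temporal-difference updates; the network size, reward, epsilon, and example numbers are all assumptions, and the patent's reward design is not reproduced here:

```python
import random
import torch
import torch.nn as nn

N_CHANNELS = 3   # assumed number of selectable communication modes
STATE_DIM = 4    # [received signal strength, SNR, delay, throughput]

q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_CHANNELS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.95, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy channel choice: explore, or take the highest-value channel."""
    if random.random() < epsilon:
        return random.randrange(N_CHANNELS)
    return int(q_net(state).argmax())

def td_update(s, a, r, s_next):
    """One-step temporal-difference update of the Q-network."""
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()   # bootstrapped Q target
    loss = (q_net(s)[a] - target) ** 2
    optimizer.zero_grad(); loss.backward(); optimizer.step()

s = torch.tensor([-60.0, 20.0, 0.03, 5.0])         # example state (dBm, dB, s, Mbps)
a = select_action(s)
td_update(s, a, r=1.0, s_next=torch.tensor([-58.0, 22.0, 0.02, 6.0]))
```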
4. The disaster relief command method based on real-time multi-mode data according to claim 2, wherein the step 5 is specifically:
Step 5.1, designing a lightweight network structure; the structure adopts grouped convolution and depthwise separable convolution to reduce the computation and parameter count of the model; meanwhile, attention mechanisms are introduced at key locations to enhance feature expression and improve model performance (a sketch of a depthwise separable block is given below);
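A minimal sketch of the depthwise separable building block follows; channel counts and input size are assumptions for the example:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Replaces a dense 3x3 conv with per-channel conv + 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)    # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # mix channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```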
Step 5.2, fine tuning of a dynamic model: firstly, pretraining on a large amount of synthesized multi-mode data to form a basic model; then, in the actual rescue process, online transfer learning is carried out on the model according to the collected real-time data, so that the model can be quickly adapted to the current scene, and the recognition accuracy is improved;
Step 5.3, model quantization and pruning, comprising the following steps:
a. Network pruning: redundant neurons and connections in the model are pruned using a weight-sensitivity-based network pruning method;
b. Weight quantization: an adaptive quantization algorithm is introduced to compress the model weights; the quantization levels are determined automatically from the weight distribution, so that the quantized model retains high accuracy while significantly reducing storage and computation requirements;
c. Structural reorganization: during pruning and quantization the model is adjusted dynamically; in addition to dynamic adaptation of network width and depth, a super-network with multiple forward paths is built, dynamic routing is performed for different input samples, and convolution channels of different scales are adaptively activated according to the set parameters, improving computational efficiency while preserving model capacity; a pruning-and-quantization sketch is given after this list;
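The sketch below uses standard PyTorch utilities as stand-ins for sub-steps a and b: magnitude pruning in place of the weight-sensitivity method and dynamic int8 quantization in place of the adaptive algorithm, neither of which the claim specifies in detail; the toy model and 30% sparsity are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# a. prune 30% of the smallest-magnitude weights in each linear layer
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")          # make the pruning permanent

# b. dynamic int8 quantization of the remaining weights
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)     # torch.Size([1, 10])
```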
Step 5.4, a fast inference framework; the framework employs multiple hardware acceleration techniques to parallelize and optimize model computation, greatly reducing inference latency.
5. A computer storage medium having stored thereon a computer program which when executed by a processor implements a disaster relief command system based on real-time multimodal data as claimed in claim 1.
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a disaster relief command system based on real-time multimodal data as claimed in claim 1 when executing the computer program.
CN202410182841.2A 2024-02-19 2024-02-19 Disaster relief command system and method based on real-time multi-mode data Active CN117745505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410182841.2A CN117745505B (en) 2024-02-19 2024-02-19 Disaster relief command system and method based on real-time multi-mode data


Publications (2)

Publication Number Publication Date
CN117745505A CN117745505A (en) 2024-03-22
CN117745505B true CN117745505B (en) 2024-06-07

Family

ID=90279895


Country Status (1)

Country Link
CN (1) CN117745505B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719054A (en) * 2016-01-13 2016-06-29 天津中科智能识别产业技术研究院有限公司 Disaster rescuing spot commander information sharing method and system based on mobile terminals
CN111840855A (en) * 2020-06-28 2020-10-30 深圳市恒升森林消防装备有限公司 All-round intelligent emergency rescue linkage command system
CN111985502A (en) * 2020-08-03 2020-11-24 武汉大学 Multi-mode image feature matching method with scale invariance and rotation invariance
CN114093111A (en) * 2021-12-20 2022-02-25 中国民用航空飞行学院 Forest fire rescue air-ground integrated commanding and scheduling system and method
CN115376045A (en) * 2022-08-16 2022-11-22 四川九洲视讯科技有限责任公司 Public safety command intelligent processing method based on multi-mode fusion deep learning
CN115861792A (en) * 2022-11-09 2023-03-28 武汉大学 Multi-mode remote sensing image matching method for weighted phase orientation description
CN116633956A (en) * 2022-09-07 2023-08-22 徐淑鹏 Panorama emergency command system suitable for multiparty linkage
CN116682120A (en) * 2023-05-08 2023-09-01 华中科技大学 Multilingual mosaic image text recognition method based on deep learning
CN117333409A (en) * 2023-10-07 2024-01-02 上海望繁信科技有限公司 Big data analysis method based on image
CN117496322A (en) * 2023-11-30 2024-02-02 浙江工业大学 Multi-mode 3D target detection method and device based on cloud edge cooperation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177911B (en) * 2019-12-24 2022-09-20 大连理工大学 Part surface roughness online prediction method based on SDAE-DBN algorithm


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D human pose estimation in video combining sparse representation and deep learning; Wang Weinan; Zhang Rong; Guo Lijun; Journal of Image and Graphics; 2020-03-16 (No. 03); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant