CN117572885B - Night tracking method, system and related device based on thermal infrared camera of unmanned aerial vehicle - Google Patents


Info

Publication number
CN117572885B
CN117572885B (application CN202311544566.6A)
Authority
CN
China
Prior art keywords
target
unmanned aerial
aerial vehicle
image
thermal infrared
Prior art date
Legal status
Active
Application number
CN202311544566.6A
Other languages
Chinese (zh)
Other versions
CN117572885A (en)
Inventor
谈心
郑祺
沈永林
武文奎
赵梦奎
梁立正
王强
冯静
Current Assignee
Mingfei Weiye Technology Co ltd
China University of Geosciences
Original Assignee
Mingfei Weiye Technology Co ltd
China University of Geosciences
Priority date
Filing date
Publication date
Application filed by Mingfei Weiye Technology Co ltd, China University of Geosciences filed Critical Mingfei Weiye Technology Co ltd
Priority to CN202311544566.6A priority Critical patent/CN117572885B/en
Publication of CN117572885A publication Critical patent/CN117572885A/en
Application granted granted Critical
Publication of CN117572885B publication Critical patent/CN117572885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a night tracking method, a night tracking system and a related device based on a thermal infrared camera of an unmanned aerial vehicle, wherein the method comprises the following steps: s1, determining a flight area and a monitoring area of an unmanned aerial vehicle, and setting parameters of a thermal infrared camera and the flight height of the unmanned aerial vehicle; s2, shooting an infrared video of a flight area by using a thermal infrared camera, sending the infrared video to a ground station, and performing thermal infrared processing on the infrared video to obtain a processed infrared video; s3, performing target detection and tracking on the processed infrared video to obtain the position of the target, and generating an actual motion track of the target according to the position of the target; s4, learning an actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle; s5, determining a final flight path of the unmanned aerial vehicle according to the decision model, and enabling the unmanned aerial vehicle to track a target according to the final flight path and automatically fly to a designated position after the target tracking is finished.

Description

Night tracking method, system and related device based on thermal infrared camera of unmanned aerial vehicle
Technical Field
The invention relates to the technical field of automatic control, in particular to a night tracking method, a night tracking system and a related device based on a thermal infrared camera of an unmanned aerial vehicle.
Background
Unmanned aerial vehicle tracking is a widely applied target tracking method: the mobility and flexibility of the unmanned aerial vehicle are used, together with onboard sensor equipment, to realize real-time tracking and detection of targets. However, traditional unmanned aerial vehicle tracking usually flies along a preset path or under manual control, which can meet the requirements of some simple scenes. Daytime tracking uses visible-light images, namely ordinary RGB images, which can reflect apparent characteristics of the target such as color, texture and shape, but are easily influenced by factors such as illumination, occlusion and background interference. When facing complex environments with complicated terrain, such as urban streets and mountainous areas, conventional path planning methods are also difficult to apply because of the large number of obstacles and uncertainty factors in these environments, such as buildings, trees and poles.
Therefore, finding an unmanned aerial vehicle tracking method that can automatically plan a path according to the environment of the monitoring area and that has high recognition capability is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a night tracking method, system and related device based on an unmanned aerial vehicle thermal infrared camera. An infrared video of the unmanned aerial vehicle monitoring area is acquired through the thermal infrared camera, features are extracted to construct the target motion track, and the motion track is learned by a deep reinforcement learning algorithm to generate a decision model. The unmanned aerial vehicle is therefore not limited by light and realizes all-weather, multi-angle and high-resolution monitoring; at the same time, it can plan the final flight path according to environmental factors for intelligent tracking, improving target tracking accuracy.
The technical scheme of the invention is realized as follows:
In a first aspect, the invention provides a night tracking method based on a thermal infrared camera of an unmanned aerial vehicle, comprising the following steps:
s1, determining a flight area and a monitoring area of an unmanned aerial vehicle, and setting parameters of a thermal infrared camera and the flight height of the unmanned aerial vehicle;
S2, shooting an infrared video of a flight area by using a thermal infrared camera, sending the infrared video to a ground station, and performing thermal infrared processing on the infrared video to obtain a processed infrared video;
S3, performing target detection and tracking on the processed infrared video by using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, and generating an actual motion track of the target according to the position of the target;
s4, learning an actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle;
s5, determining a final flight path of the unmanned aerial vehicle according to the decision model, and enabling the unmanned aerial vehicle to track a target according to the final flight path and automatically fly to a designated position after the target tracking is finished.
On the basis of the above technical solution, preferably, step S3 specifically includes:
S31, detecting each frame of image in the processed infrared video by using the Faster-RCNN algorithm to obtain a detection image set {A_i};
S32, extracting target features of the first detection image A_1 in the detection image set {A_i} to obtain first target features, wherein the first target features are used as the template of the SiamRPN algorithm;
S33, extracting a candidate region set in the remaining images in the detection image set {A_i} by using the SiamRPN algorithm, respectively calculating a correlation score and an offset between the candidate region set and the template, and determining the position of the target according to the correlation score and the offset;
S34, arranging the candidate region sets according to the correlation scores, and generating the actual motion track of the target according to the images corresponding to the arrangement order and the positions of the target.
On the basis of the above technical solution, preferably, step S33 specifically includes:
Extracting candidate regions in the current frame image by using the SiamRPN algorithm, and performing first preprocessing on the candidate regions to obtain first-preprocessed candidate regions; the first preprocessing includes resizing and color-space conversion of the candidate region images;
extracting features of the first-preprocessed candidate regions to obtain a second target feature, wherein the second target feature is a high-dimensional feature vector;
performing dimension adjustment and normalization processing on the second target feature, and calculating a correlation score and an offset between the normalized second target feature and the first target feature;
And determining the position of the target in the candidate region according to the relevance score and the offset.
On the basis of the above technical solution, preferably, step S4 specifically includes:
s41, determining an initial flight path according to the position of the unmanned aerial vehicle;
S42, designing a feedback function according to the initial flight path, wherein the feedback function is used for calculating the error between the initial flight path and the actual motion trail;
s43, constructing a state set according to the state information of the unmanned aerial vehicle; the state information comprises unmanned aerial vehicle positions and unmanned aerial vehicle flying heights;
s44, constructing an action set according to the control behavior of the unmanned aerial vehicle; the control behavior comprises a flight speed and a flight direction;
s45, the deep reinforcement learning algorithm learns and trains the feedback function according to the state set and the action set to generate a decision model.
On the basis of the above technical solution, preferably, step S45 specifically includes:
S451, collecting a state set, an action set, an initial flight path and a feedback function of a plurality of unmanned aerial vehicles, and generating training set data;
s452, performing second preprocessing on the training set data, and inputting the training set data after the second preprocessing to a neural network; the second preprocessing includes normalization and discretization;
s453, training the neural network by using Q-learning, and optimizing and updating parameters of the neural network;
s454, performing continuous iterative updating on the trained neural network by using the training set data to generate a decision model.
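Steps S451-S454 can be sketched as follows. This is a minimal illustration only: the patent trains a neural network with Q-learning, while here a tabular Q-function stands in for the network, and the discretized states, actions, environment dynamics, reward values and hyperparameters are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for step S453: states are discretized drone positions along
# a line, actions are heading choices; action 0 moves toward the goal,
# any other action moves back. Reward is +1 at the goal, else a small
# path-error penalty. All of this is invented for illustration.
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else -0.01
    return s2, r

for _ in range(500):                      # episodes (S451: collected data)
    s = 0
    for _ in range(50):                   # steps per episode
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning update (S453): move Q toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# The greedy policy derived from Q plays the role of the decision model (S454).
```

After training, the greedy action from the start state is the one that moves toward the goal, which is the behavior the iterative updating of S454 is meant to converge to.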
On the basis of the above technical solution, preferably, step S453 includes:
training the neural network by using Q-learning to obtain first training data;
Designing a meta learning network, inputting first training data into the meta learning network, and optimizing parameters of the meta learning network by a gradient descent method;
repeatedly and continuously updating parameters of the meta learning network, and verifying errors between the flight path and the actual motion trail of each unmanned aerial vehicle according to the parameters of the meta learning network;
And dynamically adjusting the initial flight path of each unmanned aerial vehicle according to the result of the meta-learning network parameter verification, and updating the parameters of the neural network.
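The meta-learning update described above can be sketched in a hedged form. The patent does not specify the meta-learning algorithm; a Reptile-style update (move the meta-parameters toward each task-adapted solution by gradient descent) is assumed here, with a toy scalar loss per unmanned aerial vehicle standing in for the real trajectory-error verification.

```python
# Toy per-task loss: L(theta) = (theta - target)^2, where "target" stands in
# for one drone's trajectory-error minimizer. Plain floats for clarity.
def inner_sgd(theta, target, lr=0.1, steps=5):
    # A few gradient-descent steps adapting theta to one drone's data.
    for _ in range(steps):
        theta = theta - lr * 2.0 * (theta - target)
    return theta

def reptile(theta, tasks, meta_lr=0.5, epochs=20):
    # Reptile-style meta update: repeatedly move the meta-parameters toward
    # each task-adapted solution, then reuse them as the next starting point.
    for _ in range(epochs):
        for target in tasks:
            adapted = inner_sgd(theta, target)
            theta = theta + meta_lr * (adapted - theta)
    return theta
```

With two toy tasks at 0.0 and 2.0, the meta-parameters settle between them, illustrating how repeated parameter updates across several drones yield shared parameters that adapt quickly to each one.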
On the basis of the above technical solution, preferably, the thermal infrared processing specifically includes:
converting all gray-scale images of the infrared video into color images;
carrying out non-uniformity correction processing on the color image to obtain a first image; the non-uniformity correction removes the artificially low temperatures at the edges of the color image so that each pixel of the first image can reflect the temperature of the target;
performing drift compensation processing on the first image to obtain a second image; the drift compensation removes temperature jumps in the first image so that each frame of the second image has the same temperature precision;
And performing target detection processing on the second image to obtain the processed infrared video.
In a second aspect, the present invention provides a night tracking system based on a thermal infrared camera of an unmanned aerial vehicle, which adopts the night tracking method as described above, and includes:
The image acquisition module is used for shooting infrared videos of the flight area by using the thermal infrared camera and sending the infrared videos to the ground station;
the ground control module is used for carrying out thermal infrared treatment on the infrared video to obtain a treated infrared video;
The image processing module is used for carrying out target detection and tracking on the processed infrared video by using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, and generating the actual motion track of the target according to the position of the target;
the training learning module is used for learning the actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle;
and the tracking module is used for the unmanned aerial vehicle to track the target according to the control strategy; after target tracking is finished, the unmanned aerial vehicle automatically flies to the designated position.
In a third aspect, the present invention provides a computer-readable storage medium storing computer instructions which, when executed, cause a computer to implement the night tracking method described above.
In a fourth aspect, the present invention provides an electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus; wherein,
The processor, the memory and the communication interface complete the communication with each other through the bus;
the memory stores program instructions executable by the processor that the processor invokes to implement the night tracking method as described above.
Compared with the prior art, the night tracking method has the following beneficial effects:
(1) Images in the infrared video are detected and features are extracted by using the Faster-RCNN algorithm and the SiamRPN algorithm; the position of the target is determined by calculating correlation scores and offsets between the candidate region set and the template; the candidate region set is arranged according to the correlation scores, and the actual motion track of the target is generated according to the images corresponding to the arrangement order and the position of the target, so that the unmanned aerial vehicle can monitor under night low-light conditions without being limited by light, realizing all-weather, multi-angle and high-resolution monitoring;
(2) Shooting a flight area by setting a thermal infrared camera, sending an infrared video to a ground station for processing, generating an initial flight path according to the position of a target, generating a feedback function according to an error between the initial flight path and an actual motion track, and intelligently planning a final flight path according to the environment of the flight area by establishing a decision model and performing iterative updating to avoid the influence of an obstacle on target detection, so that the unmanned aerial vehicle can automatically avoid the obstacle and adapt to a complex environment better;
(3) The method comprises the steps of collecting a state set, an action set, initial flight paths and feedback functions of a plurality of unmanned aerial vehicles, generating training set data, carrying out iterative updating by using a neural network and a meta-learning network, generating a decision model, dynamically adjusting the initial flight paths of each unmanned aerial vehicle, and improving the performance and the robustness of an unmanned aerial vehicle cluster.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a night tracking method based on a thermal infrared camera of an unmanned aerial vehicle of the present invention;
fig. 2 is a block diagram of a night tracking system based on a thermal infrared camera of an unmanned aerial vehicle of the present invention.
Detailed Description
The following description of the embodiments of the present invention will clearly and fully describe the technical aspects of the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
As shown in fig. 1, the invention provides a night tracking method based on an unmanned aerial vehicle thermal infrared camera, which comprises the following steps:
s1, determining a flight area and a monitoring area of the unmanned aerial vehicle, and setting parameters of a thermal infrared camera and the flight height of the unmanned aerial vehicle.
As will be appreciated by those skilled in the art, the boundaries and obstacle locations of the surveillance area are determined by using maps or field surveys, and the extent and shape of the surveillance area are determined according to the location, size and importance of the target to ensure safe flight of the unmanned aerial vehicle. Parameters of the thermal infrared camera include focal length, angle of view, resolution, frame rate, and the like; appropriate parameters need to be selected according to the size and distance of the target and the environmental conditions to obtain a clear thermal infrared image.
In the embodiment of the application, the flight height of the unmanned aerial vehicle and the parameters of the thermal infrared camera are set according to the monitoring requirements and the characteristics of the targets so as to obtain the optimal monitoring result, realize all-weather, multi-angle and high-resolution monitoring, and simultaneously acquire images by measuring the thermal radiation of the targets, so that the unmanned aerial vehicle has the capability of monitoring at night or under low light conditions, the detection and recognition capability of the targets are improved, and the monitoring effect is enhanced.
S2, shooting an infrared video of the flight area by using a thermal infrared camera, sending the infrared video to a ground station, and performing thermal infrared processing on the infrared video to obtain a processed infrared video.
In the embodiment of the application, the thermal infrared camera on the unmanned aerial vehicle performs aerial photography of the flight area to obtain infrared video data. By setting an initial flight path and a time interval, the flight area can be covered in full and a sufficient amount of video data is guaranteed. The acquired infrared video data is transmitted to the ground station, which performs thermal infrared processing on the received data, so that the target in the infrared video is highlighted from the background and the detection and recognition capability for the target is improved. After the thermal infrared processing, the processed infrared video is obtained; this video can intuitively display the thermal distribution of the flight area and the thermal characteristics and abnormal conditions of the target, so that the unmanned aerial vehicle can monitor at night or under low-light conditions without being limited by light, realizing all-weather monitoring. The infrared video data can be transmitted to the ground station through a high-speed wireless data transmission technology, such as a wireless network or satellite communication.
Specifically, the thermal infrared processing specifically includes:
converting all gray-scale images of the infrared video into color images;
carrying out non-uniformity correction processing on the color image to obtain a first image; the non-uniformity correction removes the artificially low temperatures at the edges of the color image so that each pixel of the first image can reflect the temperature of the target;
performing drift compensation processing on the first image to obtain a second image; the drift compensation removes temperature jumps in the first image so that each frame of the second image has the same temperature precision;
And performing target detection processing on the second image to obtain the processed infrared video.
In the embodiment of the application, the collected infrared video data is mapped to different RGB channels and converted into color images, making the infrared video more intuitive and convenient to observe and analyze. Because the photosensitive devices of the thermal infrared camera are non-uniform, the temperature responses of different pixels may differ, so non-uniformity correction is required through background correction or gray correction of the color image, allowing each pixel to accurately reflect the temperature of the target. Because the thermal infrared camera may drift in temperature during use, causing the temperature values in the image to shift, drift compensation is applied to the first image using the ambient temperature and the camera's temperature sensor, so that the temperature precision of the second image is uniform and each pixel of the second image accurately reflects the temperature of the target. Target detection processing is then performed on the second image to identify and extract the target; using computer vision techniques and machine learning algorithms for this detection improves data-processing efficiency and yields the best thermal infrared processing effect, where the detection can be based on attributes of the target such as its thermal characteristics, shape and motion.
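The three processing stages above (pseudo-color conversion, non-uniformity correction, drift compensation) can be sketched as follows. This is an illustrative approximation only: the "hot" palette, the two-point gain/offset correction and the mean-level drift compensation are assumptions, not the patent's (or any specific tool's) actual algorithms.

```python
import numpy as np

def pseudo_color(gray):
    # Map an 8-bit gray thermal frame to a simple "hot" RGB palette
    # (a stand-in for a real pseudo-color table).
    g = gray.astype(np.float32) / 255.0
    r = np.clip(3.0 * g, 0.0, 1.0)
    gg = np.clip(3.0 * g - 1.0, 0.0, 1.0)
    b = np.clip(3.0 * g - 2.0, 0.0, 1.0)
    return (np.stack([r, gg, b], axis=-1) * 255.0).astype(np.uint8)

def non_uniformity_correction(frame, gain, offset):
    # Two-point NUC: per-pixel gain/offset (from calibration) so every
    # pixel has the same temperature response.
    return frame.astype(np.float32) * gain + offset

def drift_compensation(frames, ref_level):
    # Shift each frame so its mean matches a reference level, removing
    # temperature jumps between calibration events.
    return [f - (f.mean() - ref_level) for f in frames]
```

Chaining the three functions over the video's frames gives the "processed infrared video" of step S2 in this simplified model.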
In an embodiment of the application, Thermoviewer software is used to process the unmanned aerial vehicle infrared video image and perform image preprocessing, including denoising, enhancement, target detection and other operations, so as to improve the image quality and the target detection precision. Processing the unmanned aerial vehicle infrared video image with Thermoviewer specifically includes:
1) Importing the unmanned aerial vehicle infrared video image into Thermoviewer software, and selecting a frame or an area to be processed in the unmanned aerial vehicle infrared video image. Converting the original gray image into a color image with higher contrast and visibility using a different color table or pseudo color conversion function provided by Thermoviewer software;
2) The non-uniformity correction function provided by Thermoviewer software is used for eliminating the phenomenon that the temperature of the edge of the image is low due to the movement of the unmanned aerial vehicle or the environmental change, so that each pixel in the image can accurately reflect the temperature of the target;
3) The drift compensation function provided by Thermoviewer software is used for eliminating the phenomenon of temperature jump in the image caused by a temperature calibration event in the unmanned aerial vehicle shooting process, so that each frame in the image can maintain consistent temperature precision;
4) Marking the position and the size of the target in the image automatically or manually according to the appearance characteristics or the temperature range of the target by using a target detection function provided by Thermoviewer software;
5) The processed image is saved locally or to the cloud in a different format (e.g., JPG, PNG, TIFF, CSV) or sent to other software (e.g., ArcGIS, Pix4D) for further analysis or application, using the export functionality provided by Thermoviewer software.
In the embodiment of the application, the infrared video image of the unmanned aerial vehicle is processed through Thermoviewer, so that the quality of the infrared video image and the target detection precision are improved, the temperature deviation and drift are eliminated, the reliability of target analysis and decision is improved, and the visual effect of operation is enhanced.
And S3, performing target detection and tracking on the processed infrared video by using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, and generating the actual motion track of the target according to the position of the target.
Specifically, step S3 specifically includes:
S31, detecting each frame of image in the processed infrared video by using the Faster-RCNN algorithm to obtain a detection image set {A_i}, wherein i is a non-zero natural number;
S32, extracting target features of the first detection image A_1 in the detection image set {A_i} to obtain first target features, wherein the first target features are used as the template of the SiamRPN algorithm;
S33, extracting a candidate region set in the remaining images in the detection image set {A_i} by using the SiamRPN algorithm, respectively calculating a correlation score and an offset between the candidate region set and the template, and determining the position of the target according to the correlation score and the offset;
S34, arranging the candidate region sets according to the correlation scores, and generating the actual motion track of the target according to the images corresponding to the arrangement order and the positions of the target.
In the embodiment of the application, the detection image set {A_i} is obtained by Faster-RCNN according to attributes of the target such as its thermal characteristics, shape and motion, yielding the bounding box and class information of the target, so that target objects in the infrared video are effectively detected. Features are extracted from the first detection image A_1 of the detection image set {A_i} to obtain the first target features, which serve as the template of the SiamRPN algorithm. The SiamRPN algorithm then tracks the target through the remaining images of the detection image set {A_i}: candidate region sets are extracted from the images, the correlation score and offset between each candidate region set and the template are calculated, and the position of the target is determined from the correlation score and offset. Finally, the candidate region sets are arranged according to their correlation scores, and the actual motion track of the target is generated from the images in that order and the target positions determined from them, improving the accuracy and reliability of motion-track generation, so that the unmanned aerial vehicle can achieve stable target tracking under complex backgrounds and target changes.
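The trajectory-assembly part of steps S33-S34 can be sketched as below. The detector and feature matcher are omitted; the sketch assumes each frame already has a set of scored candidates and simply keeps the best-scoring candidate per frame, emitting the motion track in frame order.

```python
def build_trajectory(frame_results):
    # frame_results: iterable of (frame_index, candidates), where each
    # candidate is (correlation_score, (x, y)). Keep the best-scoring
    # candidate per frame and emit target positions in frame order.
    trajectory = []
    for frame_index, candidates in sorted(frame_results):
        best_score, position = max(candidates, key=lambda c: c[0])
        trajectory.append((frame_index, position))
    return trajectory
```

The resulting ordered list of (frame, position) pairs is the "actual motion track" that step S4 later learns from.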
In an embodiment of the present application, step S33 specifically includes:
Extracting candidate regions in the current frame image by using the SiamRPN algorithm, and performing first preprocessing on the candidate regions to obtain first-preprocessed candidate regions; the first preprocessing includes resizing and color-space conversion of the candidate region images;
extracting features of the first-preprocessed candidate regions to obtain a second target feature, wherein the second target feature is a high-dimensional feature vector;
performing dimension adjustment and normalization processing on the second target feature, and calculating a correlation score and an offset between the normalized second target feature and the first target feature;
And determining the position of the target in the candidate region according to the relevance score and the offset.
In the embodiment of the application, after the candidate region is extracted by the SiamRPN algorithm, first preprocessing and feature extraction are performed on it to obtain the second target feature, a high-dimensional feature vector. To fit the input layer of the SiamRPN algorithm, the second target feature is dimension-adjusted; to improve its stability, the adjusted feature vector is normalized. The correlation score and offset between the normalized second target feature and the first target feature are then calculated, improving the accuracy and stability of target tracking, and the position of the target in the candidate region is determined from the correlation score and offset. The SiamRPN algorithm may use a convolutional neural network (CNN) as the feature extractor.
Optionally, histogram equalization and Gaussian filtering may be used to enhance the contrast and sharpness of the image before the second target feature is extracted; the first preprocessing may further include non-maximum suppression and Kalman filtering to eliminate duplicate and erroneous candidate regions and to smooth and correct the target position.
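The dimension adjustment, normalization and correlation-score computation can be sketched as follows. The real SiamRPN computes its score by cross-correlation over CNN feature maps; here a 1-D resampling and a cosine-style normalized dot product are assumed as simplified stand-ins.

```python
import numpy as np

def adjust_dim(feat, dim):
    # Resample a 1-D feature vector to the dimension expected by the
    # matcher (simplified stand-in for the dimension-adjustment step).
    old = np.linspace(0.0, 1.0, feat.size)
    new = np.linspace(0.0, 1.0, dim)
    return np.interp(new, old, feat)

def correlation_score(template_feat, candidate_feat):
    # L2-normalize both features, then take their dot product: a
    # cosine-style correlation score in [-1, 1].
    t = template_feat / (np.linalg.norm(template_feat) + 1e-8)
    c = candidate_feat / (np.linalg.norm(candidate_feat) + 1e-8)
    return float(t @ c)
```

A candidate identical to the template scores approximately 1, an unrelated (orthogonal) candidate approximately 0, which is the ordering used when ranking the candidate region set.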
In an embodiment of the application, a real ground-truth frame is set in the detection image A_1, and the target features within the ground-truth frame are extracted as the template. The SiamRPN algorithm extracts features from the remaining images in the detection image set {A_i} and generates initial anchor frames; the classifier and regressor of the Faster-RCNN algorithm then classify the initial anchor frames and adjust their positions by calculating each initial anchor frame's offset relative to the real ground-truth frame, yielding the adjusted anchor frames. The confidence score and boundary offset of an initial anchor frame are calculated as follows:
score_i = P_object(i)
Δ = (Δx, Δy, Δw, Δh)
reg_i = Δ
where score_i represents the confidence score of the i-th initial anchor frame, P_object(i) represents the probability that a target is present at the i-th anchor location, Δx and Δy represent the horizontal and vertical offsets of the initial anchor frame's center point, Δw and Δh represent its width and height offsets, and reg_i represents the offset of the i-th initial anchor frame relative to the real ground-truth frame.
The classification score of the classifier and the offset predicted by the regressor are calculated as follows:
(P_0, P_1, ..., P_{K-1}) = softmax(w^T Φ(x_a))
Δ* = (Δx*, Δy*, Δw*, Δh*)
(Δx*, Δy*, Δw*, Δh*) = reg(Δa*, Δa)
where P_{K-1} represents the classification probability of the (K-1)-th class, K being a non-zero natural number; softmax converts the input vector into a probability distribution; w^T represents the weight of the classifier, which can be set according to actual requirements; Φ(x_a) represents the transformation of the input feature x_a into a new feature vector; Δ* = (Δx*, Δy*, Δw*, Δh*) represents the final offset of the target position predicted by the regressor reg, which is used to predict the target-position offset and generate the predicted ground-truth frame; Δa* represents the offset between the predicted ground-truth frame and the i-th anchor frame, and Δa represents the offset between the real ground-truth frame and the i-th anchor frame.
It can be understood that the classifier calculates a classification score for each initial anchor frame, the regressor predicts the offset of the target position, and the predicted offset is applied to the initial anchor frame to obtain the adjusted anchor frame; the final target detection result is then determined by the confidence score and the adjusted anchor frame.
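As a concrete sketch of how the regressor's offsets adjust an anchor frame, the following uses the standard Faster-RCNN box parameterization (centre offsets scaled by the anchor size, width/height offsets applied in log-space); the function name and the numbers are illustrative, not taken from the patent.

```python
import numpy as np

def decode_anchors(anchors, deltas):
    """Apply predicted offsets (dx, dy, dw, dh) to anchor boxes.

    anchors: (N, 4) array of (cx, cy, w, h) anchor frames.
    deltas:  (N, 4) array of regressor outputs (dx, dy, dw, dh),
             following the standard Faster-RCNN parameterization.
    Returns the adjusted (cx, cy, w, h) boxes.
    """
    cx, cy, w, h = anchors.T
    dx, dy, dw, dh = deltas.T
    # Centre offsets are scaled by the anchor size; width/height
    # offsets are exponentiated so sizes stay positive.
    new_cx = cx + dx * w
    new_cy = cy + dy * h
    new_w = w * np.exp(dw)
    new_h = h * np.exp(dh)
    return np.stack([new_cx, new_cy, new_w, new_h], axis=1)

anchors = np.array([[50.0, 50.0, 20.0, 40.0]])
deltas = np.array([[0.1, -0.05, 0.0, 0.0]])
adjusted = decode_anchors(anchors, deltas)
# centre moves by (0.1*20, -0.05*40) = (2, -2); size is unchanged
```

In a full detector, these adjusted frames would then be ranked by their confidence scores to produce the final detection result.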
And S4, learning an actual motion track of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle.
Specifically, step S4 specifically includes:
s41, determining an initial flight path according to the position of the unmanned aerial vehicle;
S42, designing a feedback function according to the initial flight path, wherein the feedback function is used for calculating errors of the initial flight path and the actual motion trail;
s43, constructing a state set according to the state information of the unmanned aerial vehicle; the state information comprises unmanned aerial vehicle positions and unmanned aerial vehicle flying heights;
s44, constructing an action set according to the control behavior of the unmanned aerial vehicle; the control behavior comprises a flight speed and a flight direction;
s45, the deep reinforcement learning algorithm learns and trains the feedback function according to the state set and the action set to generate a decision model.
In the embodiment of the application, the initial flight path can be determined according to the position of the target, the feedback function is generated from the error between the initial flight path and the actual motion track, and the initial flight path is adjusted according to the feedback function, so that the unmanned aerial vehicle can plan its flight path more efficiently in a complex environment, avoid obstacles and reduce risk. Constructing the state set and the action set helps the deep reinforcement learning algorithm better understand the environment and the available behaviors, so that a more intelligent and highly adaptive decision model is generated; the deep reinforcement learning algorithm is then used for learning and training to produce an efficient decision model, so that the unmanned aerial vehicle can adjust its flight path according to the actual environment, adapt to various complex conditions, and improve the autonomy and intelligence of its flight.
In an embodiment of the present application, when performing night tracking with the thermal infrared camera of the unmanned aerial vehicle, assume that the current position of the target is p = (x, y) and the motion trajectory is T = (t1, t2, ..., tn), where ti = (xi, yi) represents the position of the target at time point i and n is a non-zero natural number. The feedback function f(p, T) is used to evaluate the difference between the expected position of the target and its actual position.
When the movement of the drone is M = (dx, dy), the flight path is adjusted as M = k × ∇f(p, T), where k is a constant representing the sensitivity of the flight-path adjustment and ∇f(p, T) is the gradient of the feedback function with respect to the position p.
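The gradient step above can be exercised numerically. This is a minimal sketch under stated assumptions: the patent does not give the exact form of f(p, T), so a squared distance to the target's latest position tn is assumed, and the sign is chosen so that the drone moves in the direction that reduces the error.

```python
import numpy as np

k = 0.5  # sensitivity constant, illustrative value

def feedback(p, target):
    """Assumed feedback: squared distance between drone position p
    and the target's latest position t_n."""
    return float(np.sum((p - target) ** 2))

def grad_feedback(p, target):
    # Analytic gradient of the squared distance: 2 * (p - t_n)
    return 2.0 * (p - target)

p = np.array([3.0, 4.0])     # current drone position
t_n = np.array([0.0, 0.0])   # latest target position
# Step against the gradient so the feedback error shrinks.
move = -k * grad_feedback(p, t_n)
```

Here feedback(p, t_n) is 25 and the move is (-3, -4), i.e. directly toward the target, with k scaling how aggressively the path is corrected.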
In an embodiment of the present application, step S45 specifically includes:
S451, collecting a state set, an action set, an initial flight path and a feedback function of a plurality of unmanned aerial vehicles, and generating training set data;
s452, performing second preprocessing on the training set data, and inputting the training set data after the second preprocessing to a neural network; the second preprocessing includes normalization and discretization;
s453, training the neural network by using Q-learning, and optimizing and updating parameters of the neural network;
s454, performing continuous iterative updating on the trained neural network by using the training set data to generate a decision model.
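The "second preprocessing" in step S452 can be sketched as min-max normalization followed by uniform discretization. The bin count and binning scheme below are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def second_preprocess(data, n_bins=10):
    """Normalize each feature to [0, 1], then discretize into bins.

    data: (N, D) array of continuous state/action features.
    Returns integer bin indices in [0, n_bins - 1] per feature.
    """
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    # Min-max normalization; guard against constant columns.
    span = np.where(hi > lo, hi - lo, 1.0)
    norm = (data - lo) / span
    # Uniform discretization into n_bins buckets.
    return np.minimum((norm * n_bins).astype(int), n_bins - 1)

# e.g. (position, flight height) samples from several drones
states = np.array([[0.0, 100.0], [5.0, 150.0], [10.0, 200.0]])
binned = second_preprocess(states, n_bins=10)
```

Discretizing the states this way is what makes a tabular Q-learning stage (step S453) possible before moving to a neural network approximation.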
It can be understood that when the unmanned aerial vehicle tracks along the initial flight path and encounters an obstacle, it promptly sends a feedback signal to the ground station; the ground station adjusts the flight path according to the initial path and the feedback signal and generates a feedback function, and the unmanned aerial vehicle tracks along the new flight path until no feedback signal passes between the unmanned aerial vehicle and the ground station. The state set, action set, initial flight path, feedback function and new flight path of the unmanned aerial vehicle are used as training data, and the training data of a plurality of unmanned aerial vehicles are collected to form the training set data. The training set data undergo the second preprocessing and are input into the neural network, which is trained using Q-learning to optimize its parameters; by continuously updating the action-value function Q, the unmanned aerial vehicle learns to select the optimal flight path in different states, improving its flight performance and intelligence. Continuously and iteratively updating the trained neural network with the training set data allows the network to better fit the relationship between the states and actions of the unmanned aerial vehicle, generating an efficient and accurate decision model and improving the flight safety and efficiency of the unmanned aerial vehicle.
The calculation formula of the Q value in the Q-learning is as follows:
Q(s,a) = Q(s,a) + α(r + γ max a' Q(s',a') - Q(s,a))
Where Q(s,a) represents the action-value function of taking action a in state s, α represents the learning rate, r represents the reward obtained by taking action a in the current state s, γ represents the discount factor, and max a' Q(s',a') represents the value of the highest-valued action among all possible actions in the next state s'. Through continuous iteration, the value function Q is optimized, and the parameters of the neural network are optimized in turn.
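The Q-value update can be written directly as a tiny tabular sketch; the state and action counts and the hyper-parameters below are illustrative, not from the patent.

```python
import numpy as np

# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def q_update(Q, s, a, r, s_next):
    # Temporal-difference target: immediate reward plus the
    # discounted value of the best action in the next state.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0,1] = 0 + 0.1 * (1 + 0.9*0 - 0) = 0.1
```

Repeating this update over many transitions is what drives the value function, and in turn the network that approximates it, toward the optimal flight policy.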
In the embodiment of the application, appropriate state information and control behaviors need to be selected according to the specific tasks and environment of the unmanned aerial vehicle, and the feedback function needs to be set according to the relationship between the states and the control behaviors. The feedback function is used to evaluate the difference between the initial flight path and the actual motion trail of the unmanned aerial vehicle, and the initial flight path is adjusted accordingly, so that deviations of the unmanned aerial vehicle are found and corrected in time and its safety is enhanced.
In an embodiment of the present application, step S453 specifically includes:
training the neural network by using Q-learning to obtain first training data;
Designing a meta learning network, inputting first training data into the meta learning network, and optimizing parameters of the meta learning network by a gradient descent method;
repeatedly and continuously updating parameters of the meta learning network, and verifying errors between the flight path and the actual motion trail of each unmanned aerial vehicle according to the parameters of the meta learning network;
And dynamically adjusting the initial flight path of each unmanned aerial vehicle according to the result of the meta-learning network parameter verification, and updating the parameters of the neural network.
In the embodiment of the application, the neural network is trained using Q-learning to obtain the first training data and the first-trained neural network model. The relationships between the states and actions of different unmanned aerial vehicles are learned by the neural network, and its parameters are updated through repeated iteration so that the network gradually converges to an optimal solution. Meanwhile, the error between the flight path and the actual motion track of each unmanned aerial vehicle is verified against the network parameters to evaluate the network's performance; the initial flight path of each unmanned aerial vehicle can then be dynamically adjusted according to the result of the parameter verification, data are collected again along the adjusted flight path, and the parameters of the neural network are updated. Through the optimization and dynamic adjustment of the meta-learning network, the accuracy of flight-path planning is improved, so that the unmanned aerial vehicle can better adapt to different environments and task requirements; by dynamically adjusting its initial flight path, the unmanned aerial vehicle can avoid obstacles, reduce collision risk during flight, and improve flight safety.
In an embodiment of the application, after the neural network is trained using Q-learning to obtain the first training data, a deep Q network is constructed using a convolutional neural network (CNN) or a fully connected neural network. The first training data are input into the deep network, which outputs a Q value for each action; the deep Q network is trained with a reinforcement learning algorithm to minimize the mean square error of the Q function so that it approaches the true Q value. Historical data of the initial flight path and feedback function of the unmanned aerial vehicle are stored and randomly sampled to train the deep network, and the parameters of the online network are periodically copied to a target network to stabilize the estimation of the target Q value. By training the deep Q network, a Q value is obtained for each action and the mean square error of the Q function is minimized, enabling the unmanned aerial vehicle to select the optimal flight path and improving its flight efficiency and performance.
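A minimal numpy sketch of the two stabilization ideas described here, experience replay and a periodically copied target network, using a linear Q-function in place of a real deep network; all names and hyper-parameters are illustrative assumptions.

```python
import random
from collections import deque
import numpy as np

state_dim, n_actions = 3, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(n_actions, state_dim)) * 0.01  # online "network"
W_target = W.copy()                                 # frozen target network
replay = deque(maxlen=1000)                         # experience replay buffer
alpha, gamma, copy_every = 0.01, 0.9, 50

def q_values(weights, s):
    return weights @ s

def train_step(step, batch_size=8):
    global W_target
    if len(replay) < batch_size:
        return
    # Random sampling from the stored history breaks correlation
    # between consecutive transitions.
    for s, a, r, s_next in random.sample(replay, batch_size):
        # TD target uses the frozen target network for stability.
        target = r + gamma * q_values(W_target, s_next).max()
        td_error = target - q_values(W, s)[a]
        W[a] += alpha * td_error * s          # gradient step on squared error
    if step % copy_every == 0:
        W_target = W.copy()                   # periodic parameter copy

# Store synthetic transitions and train.
for step in range(200):
    s = rng.normal(size=state_dim)
    a = int(rng.integers(n_actions))
    r = float(rng.normal())
    s_next = rng.normal(size=state_dim)
    replay.append((s, a, r, s_next))
    train_step(step)
```

Swapping the linear map for a CNN or fully connected network, as the text describes, changes only how q_values and the parameter update are computed; the replay buffer and target-copy mechanics stay the same.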
In an embodiment of the application, after the neural network is trained using Q-learning, the first training data are obtained and input into a recurrent neural network with an attention mechanism for training, which enhances the state representation of the data. The recurrent neural network may be a long short-term memory network (LSTM) or a gated recurrent unit (GRU), and the attention mechanism may be a self-attention mechanism or a multi-head attention mechanism, so that the unmanned aerial vehicle can better adapt to different flight environments.
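The self-attention mechanism mentioned here can be sketched as single-head scaled dot-product attention over a sequence of drone state vectors; the weights and dimensions below are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d) sequence of state vectors, one per time step.
    Returns an attended representation of the same shape, where each
    time step is a weighted mix of all time steps.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # (seq_len, seq_len)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                  # 5 time steps of 4-d state
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Multi-head attention runs several such heads with independent weight matrices and concatenates their outputs, letting the model weight different aspects of the flight history separately.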
S5, determining a final flight path of the unmanned aerial vehicle according to the decision model, and enabling the unmanned aerial vehicle to track a target according to the final flight path and automatically fly to a designated position after the target tracking is finished.
According to the application, a thermal infrared camera shoots the flight area and the footage undergoes thermal infrared processing to obtain an infrared video of the flight area, and the Faster-RCNN algorithm and the SiamRPN algorithm are combined for efficient tracking. An initial flight path is determined according to the actual motion track of the target, a feedback function is generated from the error between the initial flight path and the actual track, and the initial flight path is adjusted in time according to the feedback function. The deep reinforcement learning algorithm learns and trains on the feedback function, the state set and action set of the unmanned aerial vehicle, and the initial flight path to generate a decision model, and the final flight path of the unmanned aerial vehicle is determined by the decision model. This improves the night thermal infrared tracking performance of the unmanned aerial vehicle, allows it to avoid obstacles in time, and enables intelligent planning adapted to complex environments; after target tracking ends, the unmanned aerial vehicle automatically flies to a designated position, realizing autonomous flight capability and reducing the burden of manual operation.
As shown in fig. 2, the invention further provides a night tracking system based on the thermal infrared camera of an unmanned aerial vehicle, which adopts the night tracking method described above and comprises:
The image acquisition module is used for shooting infrared videos of the flight area by using the infrared camera and sending the infrared videos to the ground station;
the ground control module is used for performing thermal infrared processing on the infrared video to obtain a processed infrared video;
The image processing module is used for performing target detection and tracking on the processed infrared video using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, generating the actual motion trail of the target according to the position of the target, and determining the position and motion trail of the target;
the training learning module is used for learning the actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle;
and the tracking module is used for tracking the target by the unmanned aerial vehicle according to the control strategy, and the unmanned aerial vehicle automatically flies to the designated position after the target tracking is finished.
The thermal infrared camera is generally composed of a thermal imaging camera, an optical camera, an image processing unit, a wireless communication system and the like, wherein the thermal imaging camera is used for capturing heat of a target object in an infrared band, the optical camera is used for capturing images of the target object, and data of the two cameras are processed and fused through the image processing unit, so that image quality and recognition capability of the target object are improved.
In the embodiment of the application, the image acquisition module uses the thermal infrared camera to shoot infrared video of the flight area, ensuring that a sufficiently clear infrared image is obtained, and the ground control module performs thermal infrared processing on the infrared video so that the position and motion track of the target are accurately determined and the tracking effect is improved. The image processing module applies the Faster-RCNN and SiamRPN detection and tracking algorithms to the processed infrared video, and the position and motion track of the target are used as input for training and learning. The unmanned aerial vehicle carries out the target tracking task along the final flight path determined by the trained decision model, automatically adjusting its flight attitude and speed to keep a proper distance from the target and move along with it, which reduces the flight time and energy consumption of the unmanned aerial vehicle and improves the efficiency of task execution.
Further, the night tracking system further comprises a display module for displaying the thermal infrared video, the target position and the movement track of the unmanned aerial vehicle.
It can be understood that the image acquisition module, the ground control module and the image processing module transmit the thermal infrared video, the processed thermal infrared video, the target position and the motion trail to the display module, so that the user can learn the actual condition of the flight area in time and take corresponding measures, reducing misoperation.
The present invention also provides a computer-readable storage medium storing computer instructions that cause the computer to implement a night tracking method as described above.
The invention also provides an electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus; the processor, the memory and the communication interface complete communication with each other through the bus; the memory stores program instructions executable by the processor that the processor invokes to implement the night tracking method as described above.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. The night tracking method based on the unmanned aerial vehicle thermal infrared camera is characterized by comprising the following steps of:
s1, determining a flight area and a monitoring area of an unmanned aerial vehicle, and setting parameters of a thermal infrared camera and the flight height of the unmanned aerial vehicle;
S2, shooting an infrared video of a flight area by using a thermal infrared camera, sending the infrared video to a ground station, and performing thermal infrared processing on the infrared video to obtain a processed infrared video;
s3, performing target detection and tracking on the processed infrared video by using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, and generating an actual motion track of the target according to the position of the target;
s4, learning an actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle;
S5, determining a final flight path of the unmanned aerial vehicle according to the decision model, and enabling the unmanned aerial vehicle to track a target according to the final flight path and automatically fly to a designated position after the target tracking is finished;
The step S3 specifically comprises the following steps:
S31, detecting each frame of image in the processed infrared video by using the Faster-RCNN algorithm to obtain a detection image set {Ai};
S32, extracting target features of a first detection image A1 in the detection image set { A i }, and obtaining first target features, wherein the first target features are used as templates of a SiamRPN algorithm;
s33, extracting a candidate region set in the residual image in the detection image set { A i } by using a SiamRPN algorithm, respectively calculating a correlation score and an offset between the candidate region set and the template, and determining the position of the target according to the correlation score and the offset;
s34, arranging the candidate region sets according to the relevance scores, and generating an actual motion trail of the target according to the images corresponding to the arrangement sequence and the positions of the target;
the step S4 specifically comprises the following steps:
s41, determining an initial flight path according to the position of the unmanned aerial vehicle;
S42, designing a feedback function according to the initial flight path, wherein the feedback function is used for calculating errors of the initial flight path and the actual motion trail;
s43, constructing a state set according to the state information of the unmanned aerial vehicle; the state information comprises unmanned aerial vehicle positions and unmanned aerial vehicle flying heights;
s44, constructing an action set according to the control behavior of the unmanned aerial vehicle; the control behavior comprises a flight speed and a flight direction;
s45, the deep reinforcement learning algorithm learns and trains the feedback function according to the state set and the action set to generate a decision model.
2. The night tracking method based on the thermal infrared camera of the unmanned aerial vehicle as set forth in claim 1, wherein the step S33 specifically includes:
Extracting candidate areas in the current frame image by using SiamRPN algorithm, and performing first pretreatment on the candidate areas to obtain candidate areas after the first pretreatment; the first preprocessing includes resizing and color space conversion of the candidate region image;
extracting features of the candidate region after the first pretreatment to obtain a second target feature, wherein the second target feature is a high-dimensional feature vector;
Performing dimension adjustment and normalization processing on the second target feature, and calculating a correlation score and an offset between the normalized second target feature and the first target feature;
And determining the position of the target in the candidate region according to the relevance score and the offset.
3. The night tracking method based on the thermal infrared camera of the unmanned aerial vehicle as set forth in claim 1, wherein the step S45 specifically includes:
S451, collecting a state set, an action set, an initial flight path and a feedback function of a plurality of unmanned aerial vehicles, and generating training set data;
s452, performing second preprocessing on the training set data, and inputting the training set data after the second preprocessing to a neural network; the second preprocessing includes normalization and discretization;
s453, training the neural network by using Q-learning, and optimizing and updating parameters of the neural network;
s454, performing continuous iterative updating on the trained neural network by using the training set data to generate a decision model.
4. The night tracking method based on the thermal infrared camera of the unmanned aerial vehicle as set forth in claim 3, wherein the step S453 specifically includes:
training the neural network by using Q-learning to obtain first training data;
Designing a meta learning network, inputting first training data into the meta learning network, and optimizing parameters of the meta learning network by a gradient descent method;
repeatedly and continuously updating parameters of the meta learning network, and verifying errors between the flight path and the actual motion trail of each unmanned aerial vehicle according to the parameters of the meta learning network;
And dynamically adjusting the initial flight path of each unmanned aerial vehicle according to the result of the meta-learning network parameter verification, and updating the parameters of the neural network.
5. The night tracking method based on the thermal infrared camera of the unmanned aerial vehicle according to claim 1, wherein the thermal infrared processing specifically comprises:
All gray images of the infrared video are converted into color images;
carrying out non-uniformity correction processing on the color image to obtain a first image; the non-uniformity correction process eliminates an image with low edge temperature in the color image, and each pixel of the first image can reflect the temperature of the target;
Performing drift compensation processing on the first image to obtain a second image; the drift compensation processing eliminates the temperature jump image in the first image, and the temperature precision of each frame of the second image is the same;
And performing target detection processing on the second image to obtain the processed infrared video.
6. Night tracking system based on unmanned aerial vehicle thermal infrared camera, characterized in that it employs a night tracking method according to any of claims 1-5, comprising:
The image acquisition module is used for shooting infrared videos of the flight area by using the infrared camera and sending the infrared videos to the ground station;
the ground control module is used for carrying out thermal infrared treatment on the infrared video to obtain a treated infrared video;
The image processing module is used for carrying out target detection and tracking on the processed infrared video by using the Faster-RCNN algorithm and the SiamRPN algorithm to obtain the position of the target, and generating the actual motion trail of the target according to the position of the target;
the training learning module is used for learning the actual motion trail of the target based on a deep reinforcement learning algorithm to obtain a decision model of the unmanned aerial vehicle;
and the tracking module is used for tracking the target by the unmanned aerial vehicle according to the control strategy, and the unmanned aerial vehicle automatically flies to the designated position after the target tracking is finished.
7. A computer readable storage medium storing computer instructions that cause the computer to implement the night tracking method of any one of claims 1-5.
8. An electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus; wherein,
The processor, the memory and the communication interface complete the communication with each other through the bus;
The memory stores program instructions executable by the processor, the processor invoking the program instructions to implement the night tracking method of any of claims 1-5.
CN202311544566.6A 2023-11-20 2023-11-20 Night tracking method, system and related device based on thermal infrared camera of unmanned aerial vehicle Active CN117572885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311544566.6A CN117572885B (en) 2023-11-20 2023-11-20 Night tracking method, system and related device based on thermal infrared camera of unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN117572885A CN117572885A (en) 2024-02-20
CN117572885B true CN117572885B (en) 2024-05-31

Family

ID=89862032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311544566.6A Active CN117572885B (en) 2023-11-20 2023-11-20 Night tracking method, system and related device based on thermal infrared camera of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN117572885B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893933B (en) * 2024-03-14 2024-05-24 国网上海市电力公司 Unmanned inspection fault detection method and system for power transmission and transformation equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4937878A (en) * 1988-08-08 1990-06-26 Hughes Aircraft Company Signal processing for autonomous acquisition of objects in cluttered background
US6104429A (en) * 1993-08-10 2000-08-15 Raytheon Company Integration of TV video with IR tracker features
JP2003139834A (en) * 2001-10-31 2003-05-14 Mitsubishi Electric Corp Image correlation tracking device
CN108614572A (en) * 2018-04-28 2018-10-02 中国地质大学(武汉) A kind of target identification method for tracing, equipment and storage device based on aircraft
WO2021012484A1 (en) * 2019-07-19 2021-01-28 平安科技(深圳)有限公司 Deep learning-based target tracking method and apparatus, and computer readable storage medium
CN113156996A (en) * 2021-04-28 2021-07-23 北京理工大学 Pod control adaptive gain method for target tracking
EP3884352A1 (en) * 2018-11-21 2021-09-29 Eagle View Technologies, Inc. Navigating unmanned aircraft using pitch
CN115407803A (en) * 2022-10-31 2022-11-29 北京闪马智建科技有限公司 Target monitoring method and device based on unmanned aerial vehicle
CN116168322A (en) * 2023-01-10 2023-05-26 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle long-time tracking method and system based on multi-mode fusion
CN116758117A (en) * 2023-06-28 2023-09-15 云南大学 Target tracking method and system under visible light and infrared images
CN116859985A (en) * 2023-05-30 2023-10-10 河南科技大学 Four-rotor automatic tracking function implementation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GFSNet: Generalization-Friendly Siamese Network for Thermal Infrared Object Tracking;Runmin Chen等;《Infrared Physics & Technology》;20220531;第123卷(第7期);104-110 *
Research on Target Detection and Tracking Technology for Aerial Unmanned Aerial Vehicles Based on Infrared Features; Hua Siqi; China Master's Theses Full-text Database, Engineering Science and Technology II; 20230215 (No. 2); C031-954 *

Also Published As

Publication number Publication date
CN117572885A (en) 2024-02-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant