CN114998747B - Long flight path real-time identification method based on deep learning template matching - Google Patents

Long flight path real-time identification method based on deep learning template matching

Info

Publication number
CN114998747B
CN114998747B (application CN202210856190.1A)
Authority
CN
China
Prior art keywords
real-time
data
flight path
track
Prior art date
Legal status
Active
Application number
CN202210856190.1A
Other languages
Chinese (zh)
Other versions
CN114998747A (en)
Inventor
丘成桐
***
张朕通
刘义海
王航宇
Current Assignee
Nanjing Applied Mathematics Center
Original Assignee
Nanjing Applied Mathematics Center
Priority date
Filing date
Publication date
Application filed by Nanjing Applied Mathematics Center
Priority to CN202210856190.1A
Publication of CN114998747A
Application granted
Publication of CN114998747B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a long flight path real-time identification method based on deep learning template matching, comprising the following steps: labelling the acquired data from a global perspective to build a data set; mapping the data into track images with a dynamic sliding window; constructing a deep-learning template-matching model; mapping each track into a fixed-length embedding vector; clustering the training-set embeddings to obtain class templates; and finally extracting test-set embeddings with the network and comparing them against the class templates to judge the track class. Mapping tracks to images with a dynamic sliding window solves the problem of identifying the track start position in real time within a long track; the deep-learning template-matching model maps tracks to embedding vectors and matches them against track templates, handling the cases where a window track is incomplete or an unknown type appears in the long track. The method is therefore more robust than conventional approaches and can judge in real time which track classes occur in a long track and where.

Description

Long flight path real-time identification method based on deep learning template matching
Technical Field
The invention relates to the fields of computer vision and pattern recognition, and in particular to a long flight path real-time recognition method based on deep learning template matching.
Background
With the development of artificial intelligence, deep learning has greatly advanced image processing and is widely applied in many settings. Judging target tracks from sensor data plays an important role in the military and public-safety fields. Conventionally, target tracks are judged manually; on long-endurance real-time tasks an operator can only process a single target track, and as the number of targets and the mission duration grow, it becomes difficult to monitor all data and make decisions in real time by hand.
Among existing track methods, traditional algorithms cannot effectively handle disturbance and noise in the track. Existing neural networks that judge track information are trained on manually processed data or images: they cannot determine the track starting point autonomously, cannot handle long tracks containing multiple track types, and can only identify single, manually screened track images. Moreover, existing deep-learning methods are not robust to unknown track classes; when an unknown class appears it is misidentified as a known one, which forces the training set to cover every class. A real long track, however, may contain other complex behaviours, so recognising unknown classes, in addition to the known classes in the training set, is essential.
The twin (Siamese) neural network is a matching network. Training uses groups of samples containing both same-class and different-class examples; convolution filters automatically learn an embedding vector for each sample, and the model strengthens the vector similarity of same-class samples while weakening the similarity across classes. Finally, an embedding-extraction module produces a vector for each sample, and targets are discriminated by comparing the similarity between samples.
In summary, the invention simultaneously solves two problems: multiple track types within a real-time long track are difficult to distinguish, and existing track-identification methods cannot separate out unknown classes.
Disclosure of Invention
The invention provides a long flight path real-time identification method based on deep learning template matching, addressing two problems: a real-time long track contains multiple track types that are difficult to distinguish, and existing track-identification methods cannot separate out unknown classes.
To achieve this purpose, the invention provides the following technical scheme:
a long flight path real-time identification method based on deep learning template matching specifically comprises the following steps:
step S1: receiving and processing long track real-time data in real time to obtain a real-time track mapping image;
step S2: selecting an effective category in the long real-time flight path for marking, and training the twin neural network by using the effective category and an unknown category to obtain a trained network;
and step S3: inputting the effective categories in the training set into the trained embedded vector extraction module to obtain embedded vectors of the training set;
and step S4: using a clustering algorithm to the embedded vectors of the training set to find a class template;
step S5: mapping the real-time test data into an image, inputting the image into a trained network, and obtaining an embedded vector;
step S6: calculating the similarity of the embedded vector and a class template after calculating the embedded vector of the test data by using a similarity measurement function, and judging the track class according to the similarity;
preferably, the data in step S1 is received and determined in real time, and a long track may include tracks of multiple categories and also include undefined tracks of unknown categories.
Preferably, the method for mapping the image on the real-time track in step S1 uses a sliding window manner to correct line thickness variation and scale variation caused by different numbers and directions of the data points in the mapping process.
Preferably, the selection of the scale of the sliding window in step S1 is performed according to the type of the target, and generally includes military targets such as ships, airplanes, submarines, and the like.
Preferably, the method for obtaining the real-time track mapping image in step S1 is to comprehensively consider real-time performance and effectiveness according to the target type, and use different sampling frequencies for different targets.
Preferably, when the total amount of data is smaller than the length of the sliding window from the beginning of receiving the real-time data in step S1, mapping is performed after the sampled data null value reaches the length of the window; and after the total amount of data exceeds the length of the sliding window, inquiring data with the length of the sliding window from the current moment, sampling and mapping.
Preferably, the data received in real time in step S2 is labelled: identifiable similar tracks are grouped into one category, and unidentifiable tracks are assigned to an unknown category.
Preferably, in step S2 the training set formed from the data received in real time is expanded by offline data enhancement, followed by manual secondary screening and labelling.
Preferably, step S2 adopts a twin neural network, which is a matching network; each training group contains three samples, namely two same-class samples and one different-class sample. A group of embedding vectors is obtained through the embedded vector extraction module; similarity between samples in the group is judged by Euclidean distance, and a loss function supervises same-class samples to be as close as possible and different-class samples to be as far apart as possible, yielding a trained embedding module.
Preferably, the embedded vector extraction module of the twin network in step S2 is a convolutional neural network that extracts features of the real-time track image through multilayer convolution to obtain embedding vectors of fixed dimension.
Preferably, in step S3 the classes of the training set are fed to the embedding module to obtain the embedding vectors of the valid classes.
Preferably, in step S4 a clustering algorithm is applied to the embedding vectors of the valid classes, and the cluster centre of each class is taken as that class's template.
Preferably, in step S5 the data to be tested is input into the embedded vector extraction module to obtain the vector of the test track.
Preferably, in step S5 the test vector is compared with each class template, generally using Euclidean distance as the metric; if the distance to a template is below a threshold, the data is judged to be of that class.
Preferably, unknown classes, or tracks that do not appear in the training set, can thereby be effectively distinguished from known classes in step S5, avoiding the drawback that a deep-learning classification network cannot identify unknown classes.
Compared with the prior art, the invention has the following beneficial effects: combining a deep-learning matching network with clustering solves the inability of existing track-identification methods to separate out unknown classes; the sliding-window method solves the difficulty of distinguishing the multiple track types contained in a real-time long track; together they improve the effectiveness and robustness of identifying track classes in real time within long tracks.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, a method for identifying a long flight path in real time based on deep learning template matching specifically includes the following steps:
step S1: and receiving long track data in real time for processing, wherein each long track possibly comprises a plurality of track behaviors, and data is collected every 40 time stamps. The collected data are mapped into the image by a sliding window method, the sliding window scale is determined according to the known target category, 1000 points are selected here, and when the number does not reach 1000 points, the sliding window is kept still. When the threshold is exceeded, 1000 points are taken forward from the current time to map. And in the mapping process, all image line thickness changes and all image line scale changes are adjusted to be consistent, so that a real-time track mapping image data set with the size of 224x224 is obtained.
Step S2: dividing data received in real time into a training set and a testing set, expanding the data in an off-line data enhancement mode, expanding an image sample by rotation, translation and overturning, and needing manual secondary screening and marking because part of the data can lose original class characteristics after being enhanced.
And step S3: firstly, a deep learning matching network is constructed, and the deep learning matching network comprises an embedded vector extraction module and a comparison network. The extraction module adopts a multilayer convolution filter stacking mode, resNet18 is used as the extraction module, the training set samples are mapped into vector representation through two layers of full connection layers and a Relu activation function after the images are coded, and the dimension of the embedded vector is defined as 2. The comparison network respectively takes the embedded vectors of two similar tracks as positive samples and the embedded vectors of other types as negative samples as a group of samples to be input, the distance measurement function uses Euclidean distance, and the loss function is used for supervising the network to reduce the distance of the similar samples and expand the distance of the heterogeneous samples.
And step S4: a neural network is written by using a pyrrch framework, 128 groups of samples are written in each batch, each group of samples comprises 2 similar samples and one heterogeneous sample, a loss function is optimized by using an SGD optimizer, the learning rate is attenuated once every 10 rounds, and 100 rounds are trained in total.
Step S5: and after the training is finished, loading network weight, calculating the embedded vector of the effective class sample in the training set through an embedded vector extraction module, obtaining the embedded vector central point of the class by using a clustering algorithm on the similar embedded vector in the training set, and using the embedded vector central point as a characteristic template of the class, wherein the characteristic dimension is defined as 2.
Step S6: and obtaining an embedded vector of the test data by the data to be tested through a trained network, calculating the distance between the embedded vector of the test sample and the embedded vector of the class template through the Euclidean distance, judging the class if the distance is smaller than a threshold value, and not belonging to the class if the distance is larger than the threshold value. If not, the sample is of an unknown class.
Furthermore, the specific embodiments described in this specification may differ in naming; the above description only illustrates the structure of the invention. Minor or simple variations of the structure, features and principles of the invention fall within its scope. Those skilled in the art may make various modifications and additions to the described embodiments without departing from the scope of the invention as defined in the appended claims.

Claims (10)

1. A long flight path real-time identification method based on deep learning template matching is characterized by comprising the following steps:
step S1: receiving and processing long track real-time data in real time to obtain a real-time track mapping image;
step S2: selecting an effective category in the long real-time flight path for marking, and training the twin neural network by using the effective category and an unknown category to obtain a trained network;
and step S3: inputting the effective category in the training set into the trained embedded vector extraction module to obtain the embedded vector of the training set;
and step S4: using a clustering algorithm to the embedded vectors of the training set to find a class template;
step S5: mapping real-time test data into an image and inputting the image into a trained network to obtain an embedded vector;
step S6: and calculating the similarity of the embedded vector and the class template after calculating the embedded vector of the test data by using a similarity measurement function, and judging the track class according to the similarity.
2. The method for identifying a long track in real time based on deep learning template matching according to claim 1, wherein the data in step S1 is received and determined in real time, and a long track includes multiple types of tracks and also includes undefined unknown type tracks.
3. The method for identifying the long flight path in real time based on deep learning template matching according to claim 1, wherein the real-time flight path is mapped to an image in step S1 using a sliding window, correcting the line-thickness and scale variations caused by differing numbers and directions of data points during mapping; the sliding-window scale is selected according to the target type, including military targets such as ships, airplanes and submarines.
4. The method for identifying the long flight path in real time based on the deep learning template matching according to claim 1, wherein the method for obtaining the real-time flight path mapping image in the step S1 is to comprehensively consider real-time performance and effectiveness according to the type of the target and use different sampling frequencies for different targets.
5. The long-flight-path real-time identification method based on deep learning template matching according to claim 3, wherein in step S1, while the total amount of data received since the start is smaller than the sliding-window length, the sampled data is padded with null values up to the window length before mapping; once the total amount of data exceeds the window length, data of one window length counted back from the current moment is queried, sampled and mapped.
6. The method for identifying the long track in real time based on the deep learning template matching according to claim 1, wherein the data received in real time in the step S2 are labeled, identifiable similar tracks are classified into one type, and unidentifiable tracks are classified into unknown types.
7. The method according to claim 1, wherein in step S2 the training set formed from the data received in real time is expanded by offline data enhancement, and labelling is performed with manual secondary screening.
8. The long-flight-path real-time identification method based on deep learning template matching according to claim 1, wherein step S2 adopts a twin neural network, which is a matching network; each training group contains three samples, namely two same-class samples and one different-class sample; a group of embedding vectors is obtained through the embedded vector extraction module; similarity between samples in the group is judged by Euclidean distance, and a loss function supervises same-class samples to be as close as possible and different-class samples to be as far apart as possible, yielding a trained embedding module; the embedded vector extraction module in the twin neural network is a convolutional neural network that extracts features of the real-time track image through multilayer convolution to obtain embedding vectors of fixed dimension.
9. The method for real-time long flight path identification based on deep learning template matching as claimed in claim 1, wherein the data to be tested is input into the embedded vector extraction module in step S5, so as to obtain the vector value of the test flight path.
10. The method for real-time long-flight-path identification based on deep learning template matching according to claim 1, wherein in step S5 the vector to be tested is compared with each class template using Euclidean distance as the metric, and the data is judged to be of a class if its distance to that class's template is below a threshold; in step S5, unknown classes, or flight paths that do not appear in the training set, can thereby be effectively distinguished from known classes, avoiding the drawback that a deep-learning classification network cannot identify unknown classes.
CN202210856190.1A 2022-07-21 2022-07-21 Long flight path real-time identification method based on deep learning template matching Active CN114998747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210856190.1A CN114998747B (en) 2022-07-21 2022-07-21 Long flight path real-time identification method based on deep learning template matching


Publications (2)

Publication Number Publication Date
CN114998747A CN114998747A (en) 2022-09-02
CN114998747B true CN114998747B (en) 2022-10-18

Family

ID=83022017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210856190.1A Active CN114998747B (en) 2022-07-21 2022-07-21 Long flight path real-time identification method based on deep learning template matching

Country Status (1)

Country Link
CN (1) CN114998747B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11763832B2 (en) * 2019-05-01 2023-09-19 Synaptics Incorporated Audio enhancement through supervised latent variable representation of target speech and noise
CN113674310B (en) * 2021-05-11 2024-04-26 华南理工大学 Four-rotor unmanned aerial vehicle target tracking method based on active visual perception
CN113569465B (en) * 2021-06-25 2022-10-21 中国人民解放军战略支援部队信息工程大学 Flight path vector and target type joint estimation system and estimation method based on deep learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant