CN114998711A - Method and system for detecting aerial infrared small and weak target and computer storage medium - Google Patents

Info

Publication number
CN114998711A
Authority
CN
China
Prior art keywords
learnable
features
candidate
backbone network
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210704225.XA
Other languages
Chinese (zh)
Inventor
刘伟
王鹏
郭得福
段程鹏
张书强
宋洁
刘济铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhongkelide Infrared Technology Co ltd
Original Assignee
Xi'an Zhongkelide Infrared Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhongkelide Infrared Technology Co ltd filed Critical Xi'an Zhongkelide Infrared Technology Co ltd
Priority to CN202210704225.XA priority Critical patent/CN114998711A/en
Publication of CN114998711A publication Critical patent/CN114998711A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for detecting small, dim aerial infrared targets, and a computer storage medium. The method comprises the following steps: determining learnable candidate boxes; generating, in the target image, the learnable candidate features corresponding to the learnable candidate boxes; inputting the target image into a backbone network, wherein the backbone network adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and is used for extracting features of different resolutions from the target image and fusing the extracted features to obtain fused features; inputting the learnable candidate features and the fused features into a dynamic detection module and extracting the region features inside each learnable candidate box; and performing classification prediction on the region features to obtain a detection result. By adopting the weighted bidirectional cyclic feature pyramid network, the invention improves the feature-extraction capability; by introducing switchable hole convolution, it enlarges the receptive field while losing little image information, improving detection accuracy.

Description

Method and system for detecting aerial infrared small and weak target and computer storage medium
Technical Field
The invention relates to the technical field of target detection, in particular to a method and a system for detecting aerial infrared small and weak targets and a computer storage medium.
Background
Relying on the difference in radiation intensity between a target and its background, infrared imaging can capture clear target contour information. It is unaffected by harsh environments such as rain, snow, wind and frost, images clearly with high accuracy, can identify camouflage and resist interference, and is therefore an indispensable means of target detection under complex conditions.
In early explorations of infrared target detection, many scholars focused on methods based on filtering, the human visual system, low-rank sparse recovery and the like, realizing infrared target detection through background suppression and target enhancement. With the rapid development of deep neural networks, researchers have since been devoted to applying deep neural networks to infrared multi-scale target detection.
However, the accuracy of the current infrared image-based air weak and small target detection technology still needs to be further improved.
Disclosure of Invention
The embodiment of the invention provides a method and a system for detecting an aerial infrared small and weak target and a computer storage medium, which are used for solving the problem that the accuracy of an aerial small and weak target detection technology based on an infrared image in the prior art still needs to be further improved.
In one aspect, an embodiment of the present invention provides a method for detecting an aerial infrared small and weak target, including:
determining a learnable candidate box;
generating a learnable candidate feature corresponding to the learnable candidate frame in the target image;
inputting the target image into a backbone network, wherein the backbone network adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and the backbone network is used for extracting features of different resolutions from the target image and fusing the extracted features to obtain fused features;
inputting the learnable candidate features and the fused features into a dynamic detection module, and extracting the region features in each learnable candidate box;
and performing classification prediction on the region features to obtain a detection result.
On the other hand, an embodiment of the present invention provides an air infrared small dim target detection system, including:
a candidate frame selection module for determining a learnable candidate frame;
a candidate feature generation module for generating a learnable candidate feature corresponding to the learnable candidate frame in the target image;
the backbone network, which adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and which is used for extracting features of different resolutions from the target image and fusing the extracted features to obtain fused features;
and the dynamic detection module, which is used for extracting the region features in each learnable candidate box from the learnable candidate features and the fused features, and for performing classification prediction on the region features to obtain a detection result.
In another aspect, an embodiment of the present invention provides a computer storage medium having a plurality of computer instructions stored therein, where the computer instructions are used to make a computer execute the method described above.
The method, the system and the computer storage medium for detecting small, dim aerial infrared targets have the following advantages:
A sparse prior model eliminates the influence of dense priors on the detection result and reduces computational complexity. Meanwhile, the weighted bidirectional cyclic feature pyramid network improves the feature-extraction capability, and the switchable hole convolution that is introduced enlarges the receptive field while losing little image information, improving detection accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for detecting an aerial infrared small and weak target according to an embodiment of the present invention;
fig. 2 is a composition diagram of an aerial infrared small and weak target detection system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for detecting an aerial infrared small and weak target according to an embodiment of the present invention. The embodiment of the invention provides a method for detecting an aerial infrared small dim target, which comprises the following steps:
s100, a candidate frame capable of learning is determined.
Illustratively, a set with a small number (100) of learnable candidate boxes (sparse regions) may be selected to replace the regions of interest (ROIs) predicted by a region proposal network (RPN).
S110, generating a learnable candidate feature corresponding to the learnable candidate frame in the target image.
Illustratively, after the learnable candidate boxes have been selected, high-dimensional vectors can be randomly initialized for them and iteratively updated as the network trains, thereby generating the learnable candidate features of the target image.
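The two steps above (a small fixed set of learnable candidate boxes plus one randomly initialized high-dimensional feature vector per box) can be sketched as follows. This is a hypothetical illustration in the spirit of sparse detectors, not the patent's own code: the box parameterization (normalized cx, cy, w, h, initialized to the whole image) and the feature dimension of 256 are assumptions.

```python
import numpy as np

# A small fixed set of learnable candidate boxes replaces the dense RPN proposals.
NUM_PROPOSALS = 100   # the "smaller number (100)" of candidate boxes from the text
FEATURE_DIM = 256     # assumed hidden dimension of each candidate feature

rng = np.random.default_rng(0)

# Boxes stored as normalized (cx, cy, w, h); initialized to cover the whole
# image so that training, not a hand-tuned prior, decides where they move.
proposal_boxes = np.tile([0.5, 0.5, 1.0, 1.0], (NUM_PROPOSALS, 1))

# One high-dimensional learnable feature vector per box, randomly initialized
# and updated jointly with the network by backpropagation.
proposal_features = rng.standard_normal((NUM_PROPOSALS, FEATURE_DIM)).astype(np.float32)
```

In a real implementation both arrays would be registered as trainable parameters so that backpropagation updates them together with the network weights, which is why the initialization values matter little to the final detection result.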
And S120, inputting the target image into a backbone network, wherein the backbone network adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and the backbone network is used for extracting features of different resolutions from the target image and fusing the extracted features to obtain fused features.
Illustratively, the backbone network is also trained prior to inputting the target image into the backbone network. Training a backbone network, comprising: inputting the training image into a backbone network to obtain a feature extraction result; and updating parameters of the backbone network by adopting a back propagation algorithm according to the feature extraction result. The parameters of the backbone network can be continuously updated along with network training, so that the influence of the initialization value on the final detection result is small, and the problem of the dependence of the conventional dense detection algorithm on the prior knowledge is solved.
The backbone network mainly involves the following improvements:
(1) The backbone network fuses the extracted features by fast normalized fusion, which yields the output feature
$$O=\sum_{i}\frac{w_i}{\epsilon+\sum_{j}w_j}\,I_i$$
where $w_m$ and $w_n$ denote the learnable weight parameters of the input features $x_m$ and $x_n$, respectively. To normalize the weight parameters $w$ and guarantee $w_m \ge 0$, each weight parameter $w_m$ is adjusted by a ReLU function, and $\epsilon = 0.0001$ is set to avoid numerical instability. For example, the feature fusion at the sixth layer of the backbone network is expressed as follows:
$$P_6^{td}=\mathrm{Conv}\!\left(\frac{w_1\,P_6^{in}+w_2\,\mathrm{Resize}(P_7^{in})}{w_1+w_2+\epsilon}\right)$$
$$P_6^{out}=\mathrm{Conv}\!\left(\frac{w_1'\,P_6^{in}+w_2'\,P_6^{td}+w_3'\,\mathrm{Resize}(P_5^{out})}{w_1'+w_2'+w_3'+\epsilon}\right)$$
where $P_6^{td}$ is the intermediate feature at layer 6 of the top-down path, $P_6^{in}$ and $P_7^{in}$ are the input features at layers 6 and 7 of the top-down path, $\mathrm{Resize}(\cdot)$ resizes a feature map so that the convolution can be computed, $P_5^{out}$ and $P_6^{out}$ are the output features at layers 5 and 6 of the bottom-up path, and $w_1, w_2, w_1', w_2', w_3'$ are the learnable weight parameters of the corresponding features.
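The fast normalized fusion rule can be checked numerically. The following is a minimal NumPy sketch of $O=\sum_i \frac{w_i}{\epsilon+\sum_j w_j} I_i$; the function name and the example feature maps are invented for the demonstration.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Weighted fusion O = sum_i (w_i / (eps + sum_j w_j)) * I_i.

    ReLU keeps each learnable weight non-negative; eps matches the
    epsilon = 0.0001 in the text and avoids division by zero.
    """
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # ReLU on weights
    return sum((wi / (eps + w.sum())) * f for wi, f in zip(w, features))

# Two toy input feature maps x_m and x_n with equal weights: the fused
# output is (approximately) their mean.
x_m = np.ones((4, 4))
x_n = 3 * np.ones((4, 4))
fused = fast_normalized_fusion([x_m, x_n], weights=[1.0, 1.0])
```

Because every weight is clamped by ReLU and divided by the sum of all weights, the output behaves like a convex combination of the inputs, which keeps training stable without the cost of a softmax over the weights.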
In the embodiment of the invention, in order to improve the detection precision for small targets, the backbone network performs a standard normalization operation each time after the switchable hole convolution module convolves the extracted features.
(2) A switchable hole convolution module is added to the backbone network. The switchable hole convolution module consists of two global context modules and a switchable hole convolution component, the two global context modules being added before and after the switchable hole convolution component respectively. The hole convolution in the component is formed by inserting $r-1$ zeros between adjacent elements of an ordinary convolution kernel; equivalently, a $k_n \times k_n$ kernel is expanded to an effective size $k_n' = k_n + (k_n-1)(r-1)$, where $r$ denotes the hole (dilation) rate. Different settings of $r$ yield different receptive fields, so multi-scale receptive fields can be obtained effectively without increasing computational complexity. In the embodiment of the invention, a hole convolution component with $r=3$ and an ordinary convolution with $r=1$ are provided, and the pixel positions missed by the dilated kernel are covered by the standard convolution.
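The kernel-expansion rule $k_n + (k_n-1)(r-1)$ can be verified with a one-line helper (the function name is ours, chosen for illustration):

```python
def effective_kernel_size(k, r):
    """Effective size of a k x k kernel dilated with hole (atrous) rate r:
    r - 1 zeros are inserted between adjacent kernel elements."""
    return k + (k - 1) * (r - 1)

# An ordinary 3x3 kernel (r = 1) keeps its size; with r = 3 it covers 7x7,
# enlarging the receptive field without adding any trainable parameters.
sizes = {r: effective_kernel_size(3, r) for r in (1, 3)}
```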
The convolution operation in the switchable hole convolution component can be expressed as $y_{out}=\mathrm{Conv}(x,w,r)$, where $x$ denotes the input feature and $w$ the weight, consistent with the weight values used in the feature pyramid structure. The complete switchable hole convolution component is then expressed as follows:
$$y=S(x)\cdot\mathrm{Conv}(x,w,1)+\bigl(1-S(x)\bigr)\cdot\mathrm{Conv}(x,w+\Delta w,r)$$
where the switch function $S(x)$ comprises a 5×5 average pooling layer and a 1×1 convolution layer, and the hole rate $r$ is set to 3 by default. The two convolutions $\mathrm{Conv}(x,w,1)$ and $\mathrm{Conv}(x,w+\Delta w,r)$ share the weight $w$ through a locking mechanism, but an additional trainable weight $\Delta w$ is added for the convolution layer with hole rate $r$.
When the backbone network is trained, pre-trained weights are usually used to initialize the network parameters; when an ordinary convolution is converted into a switchable hole convolution, however, the weight of the convolution with the larger hole rate is missing. Since objects of different scales can be roughly detected by convolution layers with different $r$ values using the same weight, the missing weights are initialized with weights from the pre-trained model; that is, the weight of the ordinary convolution kernel is copied to the convolution kernel with the higher hole rate in the switchable hole convolution.
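Putting the pieces together, the switchable hole convolution $y=S(x)\cdot\mathrm{Conv}(x,w,1)+(1-S(x))\cdot\mathrm{Conv}(x,w+\Delta w,r)$ can be sketched in plain NumPy. This is an illustrative single-channel sketch only: the real switch $S(x)$ is a learned 5×5 average pooling followed by a 1×1 convolution, which we replace with a simple per-pixel sigmoid gate, and the naive convolution loop stands in for an optimized library call.

```python
import numpy as np

def conv2d(x, w, rate=1):
    """Naive 'same'-padded single-channel 2-D convolution with hole rate `rate`:
    kernel taps are sampled `rate` pixels apart, widening the receptive field."""
    k = w.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + rate * k:rate, j:j + rate * k:rate]
            out[i, j] = (patch * w).sum()
    return out

def switch(x):
    # Stand-in for the learned switch S(x); any per-pixel gate in (0, 1) works
    # for the sketch, so we use a sigmoid of the input itself.
    return 1.0 / (1.0 + np.exp(-x))

def switchable_atrous_conv(x, w, delta_w, rate=3):
    """y = S(x) * Conv(x, w, 1) + (1 - S(x)) * Conv(x, w + dw, rate)."""
    s = switch(x)
    return s * conv2d(x, w, 1) + (1 - s) * conv2d(x, w + delta_w, rate)

# Sanity check: with an identity kernel and zero delta_w, both branches
# reproduce the input, so the gated mixture returns the input unchanged.
x = np.linspace(-1.0, 1.0, 25).reshape(5, 5)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
y = switchable_atrous_conv(x, identity, np.zeros((3, 3)))
```

The shared weight `w` with an additive `delta_w` on the dilated branch mirrors the locking mechanism described above: copying `w` to the dilated kernel (with `delta_w` initialized near zero) is exactly the pre-trained-weight transfer the text calls for.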
And S130, inputting the learnable candidate feature and the fusion feature into the dynamic detection module, and extracting the regional feature in each learnable candidate frame.
Illustratively, there are multiple dynamic detection modules. Using an ROI Align operation, the dynamic detection modules respectively extract, from the learnable candidate features and the fused features, the region features (i.e., the ROI features in fig. 2) of each learnable candidate box. After the region features are extracted, the dynamic detection module further makes them interact in order to filter out invalid feature blocks and output target features; a perceptron then performs regression prediction on the target features, and finally a fully connected layer performs classification prediction on the regression results.
The number of dynamic detection modules equals the number of learnable candidate boxes: the features of each learnable candidate box are fed into an independent dynamic detection module for target localization and classification, and each dynamic detection module is configured by its specific region features. The region features correspond one-to-one to the learnable candidate boxes, i.e., $N$ learnable candidate boxes correspond to $N$ region features. In specific processing, the dynamic detection module first makes each region feature $f_i$ (of size $S \times S$) interact, filtering out invalid feature blocks and outputting the final target feature; it then performs regression prediction on the target feature with a 3-layer perceptron with a ReLU activation function and hidden dimension $C$, and finally performs classification prediction on the regression result with a fully connected layer.
In an embodiment of the invention, the interaction process is implemented by an attention mechanism over the parts that need attention once a learnable candidate box is selected. First, convolution kernel parameters are generated from the region features; then the region features are processed by the generated convolutions to obtain more discriminative features, so that blocks carrying more foreground information contribute more to the final target position and category. An iterative structure is adopted: the newly generated target regions and target features serve as the learnable candidate boxes and region features of the next iteration, improving network performance. Meanwhile, to reduce the computational complexity of the network, consecutive 1×1 convolutions after the ReLU activation function implement the interaction process.
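The interaction step above (convolution kernel parameters generated from the candidate feature, applied as consecutive 1×1 convolutions to the region feature) can be sketched as follows. This is a hypothetical single-proposal sketch in the spirit of dynamic instance interaction; the hidden width of 16 and the random parameter-generating matrix stand in for learned layers and are not values from the patent.

```python
import numpy as np

def dynamic_interaction(roi_feat, proposal_feat, hidden=16):
    """Kernel parameters for two consecutive 1x1 convolutions are *generated
    from* the candidate (proposal) feature, then applied to the S x S region
    feature, so blocks with more foreground information contribute more."""
    c = roi_feat.shape[-1]
    rng = np.random.default_rng(0)
    # Stand-in for a learned linear layer that emits the kernel parameters.
    gen = rng.standard_normal((proposal_feat.shape[-1], c * hidden + hidden * c)) * 0.01
    params = proposal_feat @ gen                 # parameters come from the proposal feature
    w1 = params[: c * hidden].reshape(c, hidden)  # first 1x1 conv (as a matmul)
    w2 = params[c * hidden:].reshape(hidden, c)   # second 1x1 conv
    h = np.maximum(roi_feat @ w1, 0.0)            # ReLU between the two convolutions
    return h @ w2                                 # (S*S, C) interacted target feature

roi = np.ones((7 * 7, 64))   # an S x S = 7 x 7 region feature with C = 64 channels
prop = np.ones(64)           # its matching learnable candidate feature
out = dynamic_interaction(roi, prop)
```

Because a 1×1 convolution over an $S \times S \times C$ feature is exactly a matrix multiplication over its flattened spatial positions, the two matmuls above are the "consecutive 1×1 convolutions" the text describes.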
And S140, carrying out classification prediction on the region characteristics to obtain a detection result.
The embodiment of the invention also provides an aerial infrared small and weak target detection system, as shown in fig. 2. The system comprises:
the candidate box selection module, used for determining a learnable candidate box;
the candidate feature generation module, used for generating a learnable candidate feature corresponding to the learnable candidate box in the target image;
the backbone network, used for extracting features of different resolutions from the target image and fusing the extracted features to obtain fused features;
and the dynamic detection module, used for extracting the region features in each learnable candidate box from the learnable candidate features and the fused features, and performing classification prediction on the region features to obtain a detection result.
Embodiments of the present invention also provide a computer storage medium, in which a plurality of computer instructions are stored, and the computer instructions are used to enable a computer to execute the above method.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An air infrared weak and small target detection method is characterized by comprising the following steps:
determining a learnable candidate box;
generating a learnable candidate feature in the target image corresponding to the learnable candidate frame;
inputting the target image into a backbone network, wherein the backbone network adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and the backbone network is used for extracting features with different resolutions in the target image and fusing the extracted features to obtain fused features;
inputting the learnable candidate feature and the fusion feature into a dynamic detection module, and extracting the regional feature in each learnable candidate frame;
and carrying out classification prediction on the region characteristics to obtain a detection result.
2. The aerial infrared small and weak target detection method of claim 1, characterized in that before inputting the target image into a backbone network, the backbone network is trained.
3. The method of claim 2, wherein the training the backbone network comprises:
inputting a training image into the backbone network to obtain a feature extraction result;
and updating the parameters of the backbone network by adopting a back propagation algorithm according to the feature extraction result.
4. The method for detecting the aerial infrared small dim target according to claim 1, characterized in that the backbone network fuses the extracted features in a fast normalized fusion manner.
5. The method according to claim 1, wherein the backbone network further performs a standard normalization operation after performing convolution processing on the extracted features by using a switchable hole convolution module each time.
6. An aerial infrared weak and small target detection system is characterized by comprising:
the candidate frame selection module is used for determining a candidate frame capable of learning;
a candidate feature generation module for generating a learnable candidate feature corresponding to the learnable candidate frame in the target image;
the backbone network adopts a weighted bidirectional cyclic feature pyramid network and comprises a switchable hole convolution module, and is used for extracting features with different resolutions in the target image and fusing the extracted features to obtain fused features;
and the dynamic detection module is used for extracting the regional characteristics in each learnable candidate frame in the learnable candidate characteristics and the fusion characteristics, and performing classification prediction on the regional characteristics to obtain a detection result.
7. The aerial infrared small target detection system of claim 6, characterized in that the switchable hole convolution module comprises a switchable hole convolution component and two global context modules, the two global context modules being respectively located at the front and the rear of the switchable hole convolution component.
8. The airborne infrared small target detection system of claim 7, wherein the switchable hole convolution component is formed by inserting a plurality of zeros into an ordinary convolution kernel.
9. The air infrared weak small target detection system according to claim 6, wherein the number of the dynamic detection modules is multiple, the multiple dynamic detection modules respectively extract the regional features in each learnable candidate frame in the learnable candidate features and the fusion features by using ROI Align operation, after the regional features are obtained by extraction, the dynamic detection modules further perform interaction on the regional features to filter out invalid feature blocks and output target features, then perform regression prediction on the target features by using a perceptron, and finally perform classification prediction on the results of the regression prediction by using a full connection layer.
10. A computer storage medium having stored therein a plurality of computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202210704225.XA 2022-06-21 2022-06-21 Method and system for detecting aerial infrared small and weak target and computer storage medium Pending CN114998711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210704225.XA CN114998711A (en) 2022-06-21 2022-06-21 Method and system for detecting aerial infrared small and weak target and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210704225.XA CN114998711A (en) 2022-06-21 2022-06-21 Method and system for detecting aerial infrared small and weak target and computer storage medium

Publications (1)

Publication Number Publication Date
CN114998711A true CN114998711A (en) 2022-09-02

Family

ID=83036443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210704225.XA Pending CN114998711A (en) 2022-06-21 2022-06-21 Method and system for detecting aerial infrared small and weak target and computer storage medium

Country Status (1)

Country Link
CN (1) CN114998711A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205655A (en) * 2022-09-15 2022-10-18 中国科学院长春光学精密机械与物理研究所 Infrared dark spot target detection system under dynamic background and detection method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination