CN112990333A - Deep learning-based weather multi-classification identification method - Google Patents
- Publication number: CN112990333A
- Application number: CN202110329160.0A
- Authority
- CN
- China
- Prior art keywords
- training
- weather
- deep learning
- network model
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing
  - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    - G06F18/211—Selection of the most significant subset of features
    - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  - G06F18/24—Classification techniques
    - G06F18/243—Classification techniques relating to the number of classes
      - G06F18/2431—Multiple classes
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks
  - G06N3/04—Architecture, e.g. interconnection topology
    - G06N3/045—Combinations of networks
    - G06N3/048—Activation functions
  - G06N3/08—Learning methods
Abstract
The invention relates to a deep learning-based weather multi-classification recognition method that classifies multiple weather conditions (cloudy, rainy, sunny, foggy, snowy, thunderstorm, and so on) from a single image. Building on a traditional convolutional neural network and a channel attention mechanism, the invention provides a convolutional neural network model with an improved channel attention mechanism. The attention mechanism adaptively recalibrates the feature channels, giving the network model better generalization ability and classification accuracy.
Description
Technical Field
The invention relates to the technical field of image recognition and machine learning, in particular to a weather multi-classification recognition method based on deep learning.
Background
Currently, accurate weather detection relies on expensive sensors and on manual observation by specialized meteorological personnel, and is therefore limited in cost and efficiency. If the nearly ubiquitous existing surveillance cameras could be exploited, weather observation and detection could be turned into a powerful and cost-effective computer vision application.
In recent years, with the development of computer technology and deep learning, image recognition technology has come into wide use. A convolutional neural network (CNN) can extract deeper information from an image and thereby improve the accuracy of image classification. In current weather classification, however, most researchers still hand-design the features used to identify the data sets, and the performance ceiling of such machine learning is bounded by the quality of the feature engineering.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a deep learning-based weather multi-classification identification method.
The purpose of the invention can be realized by the following technical scheme:
a deep learning-based weather multi-classification identification method comprises the following steps:
step 1: acquiring a data set comprising a plurality of weather categories, and processing the data set into a training set, a verification set and a test set;
step 2: training a convolutional neural network model combined with an improved channel attention mechanism based on a training set, a verification set and a test set, selecting parameters and checking the effect of the model to obtain a trained network model;
Step 3: acquiring an image to be recognized, processing it, inputting the processed image into the trained network model, and outputting the recognition result.
Further, the step 1 comprises the following sub-steps:
step 101: acquiring a data set containing a plurality of weather categories, wherein the data set contains a plurality of images corresponding to the weather categories, taking each image as training data, and taking the corresponding weather category as a training label to form a sample set;
step 102: and dividing the sample set into a training set, a verification set and a test set, and carrying out standardization processing.
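Step 102 above (splitting the sample set and standardizing it) can be sketched as follows. This is a minimal NumPy illustration; the 70/15/15 split ratios and the use of training-set statistics for standardization are assumptions, since the patent does not fix them:

```python
import numpy as np

def split_and_standardize(images, labels, ratios=(0.7, 0.15, 0.15), seed=0):
    """Shuffle the sample set, split it into training/verification/test sets,
    and standardize pixels using statistics from the training split only."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))
    images, labels = images[order], labels[order]

    n = len(images)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    splits = {
        "train": (images[:n_train], labels[:n_train]),
        "val": (images[n_train:n_train + n_val], labels[n_train:n_train + n_val]),
        "test": (images[n_train + n_val:], labels[n_train + n_val:]),
    }

    # Standardize with the training mean/std to avoid information leakage.
    mean = splits["train"][0].mean()
    std = splits["train"][0].std() + 1e-8
    return {k: ((x - mean) / std, y) for k, (x, y) in splits.items()}
```

Computing the mean and standard deviation only on the training split keeps the verification and test evaluations honest.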
Further, the step 2 comprises the following sub-steps:
step 201: aiming at a convolutional neural network model combined with an improved channel attention mechanism, adjusting different hyper-parameters, respectively training by using a training set, evaluating by using a verification set, and selecting a group of hyper-parameters with the highest evaluation result of the verification set as the hyper-parameters of the convolutional neural network model;
step 202: and training the model corresponding to the selected hyper-parameter by using a training set, and checking the effect of the model by using a test set, wherein the parameter obtained by training is the trained network model.
Further, the improved channel attention mechanism in step 2 is specifically as follows: the channel modules are grouped and stacked, global average pooling is performed on each group, and the output data are produced through a ReLU activation function.
Further, the output data correspond to the mathematical formula:

y_c = F_scale(z_c, s_c) = z_c · s_c

where y_c is the c-th channel of the output, F_scale(z_c, s_c) is the channel-wise product between the c-th feature map and the scalar s_c, s_c is the c-th attention weight, and z_c is the output of the c-th convolution.
Further, the attention weights correspond to the mathematical formula:

s = F_ex(q, W) = σ(g(q, W)) = σ(W_2 δ(W_1 q))

where δ is the ReLU function; W_1 and W_2 are the dimensionality-reduction and dimensionality-expansion layer parameters, with W_1 ∈ R^((C/r)×C) and W_2 ∈ R^(C×(C/r)); σ is the sigmoid function; C is the number of feature maps; r is the dimensionality-reduction ratio; and q is the vector of element values.
Further, the element values correspond to the mathematical formula:

q_c = F_sq(z_c) = (1/(H·W)) · Σ_{i=1..H} Σ_{j=1..W} z_c(i, j)

where q_c is the value of the c-th element, and H and W are the height and width of the Z spatial dimension.
Further, the output of the c-th convolution corresponds to the mathematical formula:

z_c = v_c * X

where v_c is the parameter of the c-th filter, X is the convolution input, and * denotes convolution.
Compared with the prior art, the invention has the following advantages:
(1) On the basis of the traditional deep-learning image recognition algorithm, a convolutional neural network model with an improved channel attention mechanism is provided. Specifically, the importance of each feature channel is acquired automatically by learning; according to that importance, useful features are enhanced and features of little use to the current task are suppressed, so that adaptive recalibration of the feature channels is achieved.
(2) The optimization and improvement of the channel attention mechanism avoid hand-designed data features and yield better generalization ability and classification accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic flow chart of a deep learning-based weather multi-classification recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network according to an embodiment of the present invention;
FIG. 3 is a schematic view of a conventional attention mechanism;
fig. 4 is a schematic diagram of an improved channel attention mechanism provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings or the orientations or positional relationships that the products of the present invention are conventionally placed in use, and are only used for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal", "vertical" and the like do not imply that the components are required to be absolutely horizontal or pendant, but rather may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Referring to fig. 1 to 4, a deep learning-based weather multi-classification recognition method includes a training stage and a recognition stage, and includes the following specific steps:
a training stage:
acquiring a data set containing a plurality of weather categories, wherein the data set contains a plurality of images corresponding to the weather categories; the images are used as training data and the corresponding weather categories as training labels to form a sample set;
dividing a sample set into a training set, a verification set and a test set, and carrying out standardization processing;
adjusting different hyper-parameters, respectively training by using a training set, evaluating by using a verification set, and selecting a group of hyper-parameters with the highest evaluation result of the verification set as the hyper-parameters of the convolutional neural network model;
training the model corresponding to the selected hyper-parameter by using a training set, and checking the effect of the model by using a test set, wherein the parameter obtained by training is the recognition model;
and (3) identification:
preprocessing and standardizing an image to be identified;
and inputting the processed image into the recognition model to obtain a recognition result.
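The hyper-parameter selection procedure described above (train once per candidate setting, evaluate on the verification set, keep the best) can be sketched generically. Here `train_fn` and `eval_fn` are hypothetical callables standing in for the CNN training routine and the verification-set accuracy evaluation:

```python
def select_hyperparameters(candidates, train_fn, eval_fn):
    """Train one model per hyper-parameter setting and return the setting
    (and model) whose verification-set evaluation is highest."""
    best = None
    for params in candidates:
        model = train_fn(params)   # fit on the training set
        score = eval_fn(model)     # accuracy on the verification set
        if best is None or score > best[0]:
            best = (score, params, model)
    return best  # (best score, best hyper-parameters, trained model)

# Toy usage: the "model" is just its params; the score peaks at lr=0.01.
candidates = [{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}]
score_table = {0.1: 0.80, 0.01: 0.95, 0.001: 0.90}
best = select_hyperparameters(candidates, lambda p: p,
                              lambda m: score_table[m["lr"]])
```

After the best setting is found, the corresponding model is retrained on the training set and its final effect checked once on the held-out test set, as the patent describes.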
Further, the convolutional neural network incorporates an improved attention mechanism: attention modules are added to the network, and the reliability of the network is improved by modeling the interdependence between the channels of its convolutional features.
Further, the improved attention mechanism groups and stacks the channel modules, applies global average pooling to each group, and produces the output through a ReLU activation function.
Compared with a traditional neural network, a convolutional neural network uses a parameter-sharing mechanism that greatly reduces the number of network parameters while still achieving good classification results. In general, a convolutional neural network comprises three kinds of structure: convolution, activation and pooling. Convolution slides a template (kernel) over the image; the products of the template weights and the gray values of the covered region are summed to compute each new value. Different templates correspond to different features, and together the templates form a feature extractor. The convolved result is passed through an activation function for nonlinear mapping, which helps express complex features. Pooling is interposed between successive convolutional layers to compress the amount of data and parameters and reduce overfitting; in effect it compresses the image, lowering the resolution and removing redundant information. In a typical construction, the convolutional neural network contains several convolutional and pooling layers, with a fully connected layer at the back end; the more layers the network has, the more complex the model. Hyper-parameters that cannot be obtained by training, such as the number of layers, the number of templates and the form of the activation function, must be chosen manually with the aid of cross-validation.
In this embodiment, the following preferred scheme can be adopted for the structure of the convolutional neural network:
as shown in fig. 2, the source image is preprocessed to be a 224 × 224 × 3 RGB image. The activation function between layers adopts a ReLU function. The image is converted into a 112 × 112 × 64 feature mapping space after one convolution, and then maximum pooling and one more convolution and improved channel attention calculation are performed, and the feature mapping space becomes 56 × 56 × 256. And then, after three layers of average value pooling, convolution and improved channel attention calculation, the feature mapping space is changed into 28 multiplied by 152, 14 multiplied by 1024 and 7 multiplied by 2048 in sequence. Finally, after one-time average pooling, the feature space becomes 1 × 1 × 2048, and then the feature space is transmitted to a full-connection layer for identification.
A conventional channel attention mechanism takes the form shown in fig. 3. The squeeze-and-excitation block is a computational unit built on a transformation of the convolution input; the mapping X → Z is implemented by:

z_c = v_c * X

where v_c is the parameter of the c-th filter, X is the convolution input, F_co denotes an ordinary convolution operation, and V = [v_1, v_2, ..., v_C] denotes the learned set of filter kernels.
The Squeeze operation compresses each feature map by global average pooling after U (the set of feature maps) is obtained, so that the C feature maps finally become a 1 × 1 × C sequence of real numbers; this operation is denoted F_sq. Formally, a statistic q ∈ R^C is generated by shrinking the spatial dimensions H × W of Z, so the c-th element is:

q_c = F_sq(z_c) = (1/(H·W)) · Σ_{i=1..H} Σ_{j=1..W} z_c(i, j)

where q_c is the value of the c-th element, and H and W are the height and width of the Z spatial dimension.
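The Squeeze operation above is simply per-channel global average pooling; a NumPy sketch for a single (H, W, C) feature tensor:

```python
import numpy as np

def squeeze(Z):
    """Global average pooling: (H, W, C) feature maps -> length-C statistic q,
    where q[c] is the mean of the c-th feature map over its H x W grid."""
    return Z.mean(axis=(0, 1))
```

Each channel is collapsed to one number, so the spatial layout is discarded and only a per-channel summary remains for the excitation step.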
The channel dependencies are captured comprehensively by the Excitation operation, which yields the attention weights:

s = F_ex(q, W) = σ(g(q, W)) = σ(W_2 δ(W_1 q))

where δ is the ReLU function; W_1 and W_2 are the dimensionality-reduction and dimensionality-expansion layer parameters, with W_1 ∈ R^((C/r)×C) and W_2 ∈ R^(C×(C/r)); σ is the sigmoid function; C is the number of feature maps; r is the dimensionality-reduction ratio; and q is the vector of element values.
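The Excitation formula can be sketched directly in NumPy as the two-layer bottleneck it describes (reduce by ratio r, ReLU, expand back to C channels, sigmoid). The random weight initialization here is for illustration only:

```python
import numpy as np

def excitation(q, W1, W2):
    """s = sigmoid(W2 @ relu(W1 @ q)): per-channel attention weights in (0, 1)."""
    relu = lambda x: np.maximum(x, 0.0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sigmoid(W2 @ relu(W1 @ q))

C, r = 8, 4
rng = np.random.default_rng(0)
W1 = rng.standard_normal((C // r, C))   # dimensionality-reduction layer
W2 = rng.standard_normal((C, C // r))   # dimensionality-expansion layer
s = excitation(rng.standard_normal(C), W1, W2)
```

Because the final nonlinearity is a sigmoid, every attention weight lies strictly between 0 and 1, which is exactly what the improved mechanism later relaxes by switching to ReLU.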
Finally, the 1 × 1 × C sequence of real numbers is combined with Z (the original feature maps) in the Scale operation, according to the following formula, to give the final output:

y_c = F_scale(z_c, s_c) = z_c · s_c

where y_c is the c-th channel of the output, F_scale(z_c, s_c) is the channel-wise product between the c-th feature map and the scalar s_c, s_c is the c-th attention weight, and z_c is the output of the c-th convolution.
The structure of the improved attention mechanism is shown in fig. 4. The difference is that the convolved Z is divided into C/K groups, each of which is globally average-pooled separately, where K is a hyper-parameter; finally the sigmoid is replaced with a ReLU activation function, and the output data is Y.
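The improved mechanism can be sketched end to end: Z is split into C/K channel groups, each group is average-pooled on its own, and a ReLU replaces the final sigmoid, so the attention weights are no longer bounded above by 1. The tiny weight shapes and the way the grouped statistic is mapped back to C channels are illustrative assumptions, since fig. 4 is not reproduced here:

```python
import numpy as np

def improved_channel_attention(Z, K, W1, W2):
    """Grouped squeeze + excitation with a ReLU output, per fig. 4 (sketch)."""
    H, W, C = Z.shape
    relu = lambda x: np.maximum(x, 0.0)
    # Squeeze each of the C/K groups of K channels separately.
    q = Z.mean(axis=(0, 1)).reshape(C // K, K).mean(axis=1)   # length C/K
    s = relu(W2 @ relu(W1 @ q))   # ReLU output instead of sigmoid
    return Z * s                  # channel-wise rescaling via broadcasting

C, K = 8, 2
rng = np.random.default_rng(1)
Z = rng.random((4, 4, C))
W1 = rng.standard_normal((2, C // K))  # reduction layer (illustrative size)
W2 = rng.standard_normal((C, 2))       # expansion layer back to C channels
Y = improved_channel_attention(Z, K, W1, W2)
```

With non-negative inputs and ReLU weights, the rescaled output stays non-negative, and channels whose weight exceeds 1 can be amplified rather than merely gated.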
The network model of this embodiment is verified on the multi-class weather data set of CeWu et al. The data are divided by weather state into six classes: cloudy, foggy, rainy, snowy, sunny and thunderstorm. After training and testing, the model achieves an accuracy of 96.2%, a clear improvement over earlier models.
To verify that the improved channel attention mechanism of this embodiment is reliable for weather classification and recognition, another weather data set is used for evaluation. This data set contains 10,000 images divided into two categories, sunny and cloudy. The accuracy obtained in testing is 99.3%.
In summary, the convolutional neural network constructed by the improved channel attention mechanism provided by the embodiment has better generalization capability and classification accuracy.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A deep learning-based weather multi-classification identification method is characterized by comprising the following steps:
step 1: acquiring a data set comprising a plurality of weather categories, and processing the data set into a training set, a verification set and a test set;
step 2: training a convolutional neural network model combined with an improved channel attention mechanism based on a training set, a verification set and a test set, selecting parameters and checking the effect of the model to obtain a trained network model;
Step 3: acquiring an image to be recognized, processing it, inputting the processed image into the trained network model, and outputting the recognition result.
2. The deep learning-based weather multi-classification recognition method as claimed in claim 1, wherein the step 1 comprises the following sub-steps:
step 101: acquiring a data set containing a plurality of weather categories, wherein the data set contains a plurality of images corresponding to the weather categories, taking each image as training data, and taking the corresponding weather category as a training label to form a sample set;
step 102: and dividing the sample set into a training set, a verification set and a test set, and carrying out standardization processing.
3. The deep learning-based weather multi-classification recognition method as claimed in claim 1, wherein the step 2 comprises the following sub-steps:
step 201: aiming at a convolutional neural network model combined with an improved channel attention mechanism, adjusting different hyper-parameters, respectively training by using a training set, evaluating by using a verification set, and selecting a group of hyper-parameters with the highest evaluation result of the verification set as the hyper-parameters of the convolutional neural network model;
step 202: and training the model corresponding to the selected hyper-parameter by using a training set, and checking the effect of the model by using a test set, wherein the parameter obtained by training is the trained network model.
4. The deep learning-based weather multi-classification recognition method as claimed in claim 1, wherein the improved channel attention mechanism in the step 2 specifically comprises: and grouping and stacking the channel modules, performing global average pooling on each group, and outputting output data by using a ReLU activation function.
5. The deep learning-based weather multi-classification recognition method as claimed in claim 4, wherein the output data correspond to the mathematical formula:

y_c = F_scale(z_c, s_c) = z_c · s_c

where y_c is the c-th channel of the output, F_scale(z_c, s_c) is the channel-wise product between the c-th feature map and the scalar s_c, s_c is the c-th attention weight, and z_c is the output of the c-th convolution.
6. The method as claimed in claim 5, wherein the attention weights are expressed by the mathematical formula:

s = F_ex(q, W) = σ(g(q, W)) = σ(W_2 δ(W_1 q))
7. The deep learning-based weather multi-classification recognition method as claimed in claim 6, wherein the element values correspond to the mathematical formula:

q_c = F_sq(z_c) = (1/(H·W)) · Σ_{i=1..H} Σ_{j=1..W} z_c(i, j)

where q_c is the value of the c-th element, and H and W are the height and width of the Z spatial dimension.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110329160.0A CN112990333A (en) | 2021-03-27 | 2021-03-27 | Deep learning-based weather multi-classification identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110329160.0A CN112990333A (en) | 2021-03-27 | 2021-03-27 | Deep learning-based weather multi-classification identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112990333A (en) | 2021-06-18
Family
ID=76333966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110329160.0A Pending CN112990333A (en) | 2021-03-27 | 2021-03-27 | Deep learning-based weather multi-classification identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112990333A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642614A (en) * | 2021-07-23 | 2021-11-12 | 西安理工大学 | Basic weather type classification method based on deep network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784298A (en) * | 2019-01-28 | 2019-05-21 | 南京航空航天大学 | A kind of outdoor on-fixed scene weather recognition methods based on deep learning |
CN110929603A (en) * | 2019-11-09 | 2020-03-27 | 北京工业大学 | Weather image identification method based on lightweight convolutional neural network |
CN111178237A (en) * | 2019-12-27 | 2020-05-19 | 上海工程技术大学 | Pavement state recognition method |
CN111178438A (en) * | 2019-12-31 | 2020-05-19 | 象辑知源(武汉)科技有限公司 | ResNet 101-based weather type identification method |
CN111476713A (en) * | 2020-03-26 | 2020-07-31 | 中南大学 | Intelligent weather image identification method and system based on multi-depth convolution neural network fusion |
CN111667489A (en) * | 2020-04-30 | 2020-09-15 | 华东师范大学 | Cancer hyperspectral image segmentation method and system based on double-branch attention deep learning |
CN112363251A (en) * | 2020-10-26 | 2021-02-12 | 上海眼控科技股份有限公司 | Weather prediction model generation method, weather prediction method and device |
- 2021-03-27: application CN202110329160.0A filed in China; published as CN112990333A, status Pending
Non-Patent Citations (2)
Title |
---|
HANG ZHANG ET AL.: "ResNeSt: Split-Attention Networks", 《ARXIV》 * |
JIE HU ET AL.: "Squeeze-and-Excitation Networks", 《ARXIV》 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210618 |