CN110929602A - Ground-based cloud image cloud type recognition method based on convolutional neural network - Google Patents

Ground-based cloud image cloud type recognition method based on convolutional neural network

Info

Publication number
CN110929602A
CN110929602A
Authority
CN
China
Prior art keywords
cloud
network
picture
cloud picture
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911090618.0A
Other languages
Chinese (zh)
Other versions
CN110929602B (en)
Inventor
贾克斌
房春瑶
刘鹏宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ge Lei Information Technology Co ltd
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201911090618.0A priority Critical patent/CN110929602B/en
Publication of CN110929602A publication Critical patent/CN110929602A/en
Application granted granted Critical
Publication of CN110929602B publication Critical patent/CN110929602B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/38 Outdoor scenes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a ground-based cloud image cloud type recognition method based on a convolutional neural network, and belongs to the technical field of image recognition. The method comprises the following steps: constructing a lightweight cloud type recognition network for ground-based cloud images; training the cloud recognition network model; acquiring a cloud image to be recognized and preprocessing it; inputting the preprocessed cloud image into the trained cloud recognition network; the network automatically recognizes the cloud image and outputs its category. The method makes full use of the advantages of convolutional neural networks in large-scale image recognition and combines three ideas: depthwise (channel-by-channel) convolution, random channel mixing, and dilated convolution. By reducing the number of parameters it effectively lowers the complexity of standard convolution and the network depth, improves the accuracy of cloud type recognition, and makes device integration and practical application feasible.

Description

Ground-based cloud image cloud type recognition method based on convolutional neural network
Technical Field
The invention relates to the technical field of image recognition, and in particular to a ground-based cloud image cloud type recognition method based on a convolutional neural network.
Background
At present, most meteorological stations still rely on manual visual inspection by weather observers to identify cloud types in ground-based cloud images, but manual identification is easily affected by subjective factors such as the observer's mood and observation experience.
In recent years, rapidly growing volumes of image data have placed new demands on the analysis and processing of image information: efficient, fast, and accurate means are needed to analyze and interpret such data so that the required information can be extracted from large amounts of imagery. The convolutional neural network (CNN) is one of the most popular deep learning methods; on large and complex data its recognition performance is far superior to that of conventional image recognition methods, so it has been widely applied in various image classification and recognition systems with remarkable results.
A convolutional neural network is a deep feedforward neural network whose core is the convolution operation. Convolution is a special linear operation, and a convolutional network is a neural network in which at least one layer replaces general matrix multiplication with convolution. A CNN has two major characteristics: first, the network contains at least one convolutional layer for feature extraction; second, the convolutional layers share weights, which reduces the complexity of the network.
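As a minimal illustration of these two characteristics, and not part of the patent itself, the short PyTorch snippet below (the 224×224 image size and the channel counts are arbitrary assumptions) shows that a convolutional layer needs only a few shared kernel weights regardless of image size, whereas a fully connected layer over the same pixels needs millions:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)                      # one RGB image (size assumed)

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)    # 16 shared 3x3 kernels
fc = nn.Linear(3 * 224 * 224, 16)                    # dense layer over the same pixels

print(sum(p.numel() for p in conv.parameters()))     # 3*3*3*16 + 16 = 448 weights
print(sum(p.numel() for p in fc.parameters()))       # 3*224*224*16 + 16 = 2,408,464 weights
print(conv(x).shape)                                  # torch.Size([1, 16, 224, 224])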
The development of convolutional neural networks has greatly advanced image classification and recognition, but challenges remain. Images of different categories differ greatly in features such as color, texture, and shape, and existing network structures such as GoogLeNet and VGG may not be able to recognize the characteristics of every image category in a targeted way, so accuracy on certain categories remains low. In addition, existing network models occupy a large amount of memory at run time, which makes practical deployment inconvenient. Therefore, taking into account the non-rigid structural characteristics of clouds, a lightweight cloud recognition network that is dedicated to cloud type recognition and easy to deploy is designed.
Disclosure of Invention
Feature extraction is the core step of image recognition and classification. However, feature extraction for ground-based cloud images generally faces the following difficulties: first, ground-based cloud images are usually acquired through a single visible-light channel and therefore contain less information than satellite cloud images; second, irrelevant content in the image, such as trees and buildings, interferes with feature extraction; finally, cloud types are numerous and highly variable, and clouds of different types often overlap in position, which increases the complexity of the image information and makes extracting features of a single cloud genus more difficult.
At present, most researchers still recognize clouds with traditional image recognition methods that extract spectral, shape, and texture features and feed them to a classifier, but because different cloud types are highly similar, the features are not sufficiently discriminative and recognition accuracy is low. A few researchers have shifted their focus to deep learning, but most existing results only fine-tune existing network structures such as VGG and GoogLeNet. Although recognition accuracy improves somewhat, such complex network designs do not extract features in a way that targets the variability of clouds and the complex, mixed information in cloud images, so their pertinence is poor and accuracy still needs to be improved. In addition, such networks have many parameters, occupy a large amount of GPU memory, and are therefore difficult to deploy.
Aiming at the problems that manual cloud image recognition is labor-intensive and easily affected by subjective factors, that traditional image recognition methods have low accuracy, and that existing network structures are difficult to deploy, the invention provides a ground-based cloud image recognition method based on a convolutional neural network. Moreover, because the network is built from repeated units with relatively few parameters, a lightweight network structure is achieved, GPU memory usage is effectively reduced, and operational deployment becomes possible.
In order to achieve the purpose, the invention adopts the following technical scheme:
a foundation cloud picture cloud shape identification method based on a convolutional neural network comprises the following steps:
step 1: constructing a light-weight cloud identification network based on a foundation cloud picture;
the foundation cloud image cloud identification network model based on the convolutional neural network comprises a convolutional layer, a maximum pooling layer, 4 sub-networks, a global pooling layer and a full-connection layer, wherein the network structure of each sub-network is similar, and the parameters of each sub-network are different.
Furthermore, convolution layers at the input end and the output end adopt 1 x 1 convolution respectively and are used for realizing dimension increasing and dimension reducing of the number of convolution kernel channels; convolution kernels with the sizes of 3 multiplied by 3 and 7 multiplied by 7 are respectively adopted in the maximum pooling layer and the global pooling layer, so that the size of a parameter matrix is reduced, the number of parameters is reduced, and the possibility of overfitting is reduced; the full connection layer is used for carrying out feature classification according to the extracted features.
Further, each sub-network comprises n (n >0) feature extraction units and 1 down-sampling unit: the characteristic extraction unit is used for extracting the image characteristics of the cloud picture to be identified; the down-sampling unit is used for reducing the dimensionality of the features on the premise of keeping effective information and avoiding overfitting to a certain extent.
Furthermore, the feature extraction unit integrates the ideas of depth separable convolution, channel random mixing and expansion convolution, and effectively reduces the operation amount on the premise of ensuring the feature extraction accuracy. The down-sampling unit down-samples the image space scale by using the average pooling layer and the maximum pooling layer, and selectively processes the key information by using the attention mechanism of the channel so as to improve the efficiency of the network. In addition, a BN layer (Batch Normalization) is added in the network structure design of the feature extraction unit and the down sampling unit so as to accelerate the convergence speed of the network model during training and effectively avoid the problems of gradient disappearance and gradient explosion.
Step 2: training a cloud recognition network model;
the specific steps of training the cloud recognition network model are as follows: acquiring a cloud picture data training set subjected to manual classification; preprocessing cloud picture data in the data set; and inputting the preprocessed cloud picture data into a cloud recognition network for training.
Further, the specific method for preprocessing the cloud images in the training data set comprises the following steps: normalizing the pictures in the training set; data enhancement (flipping, translation, color dithering, etc.) is performed on the training picture. Further, when the network model is trained, a cross entropy function is selected as a loss function, an optimizer is Adam, the iteration times are set to be 100, the batch processing amount is set to be 100, and the initial learning rate is set to be 0.01.
Step 3: acquiring a cloud image to be recognized and preprocessing it;
Further, the cloud image to be recognized is acquired and preprocessed as follows: fisheye correction is applied to the image, and the image is normalized so that its size is consistent with that of the preprocessed training-set images.
Step 4: inputting the preprocessed cloud image into the trained cloud recognition network;
Step 5: the cloud recognition network automatically recognizes the cloud image and outputs its category.
The cloud recognition network performs cloud type recognition on the input image, outputs the probability of the current cloud image belonging to each of 11 categories (the ten cloud genera plus cloud-free clear sky), and takes the category with the largest probability as the final recognition result.
Compared with the prior art, the invention has the following advantages:
1. For the characteristics of ground-based cloud images, the designed cloud recognition network uses a number of repeated feature extraction units built specifically for this task, achieving refined extraction of cloud features. It overcomes the heavy workload and subjectivity of manual recognition and the low accuracy of traditional image recognition algorithms, and effectively improves the accuracy and efficiency of cloud type recognition.
2. A sub-network that fuses a feature extraction unit with a down-sampling unit is constructed, and the final cloud recognition network is composed of several such sub-networks with different parameters. While keeping recognition accuracy high, this effectively reduces the number of network parameters and the amount of computation and improves recognition efficiency; the resulting lightweight network model is better suited to practical deployment and addresses the drawbacks of applying an off-the-shelf CNN directly.
Drawings
FIG. 1 is a schematic flow chart of the ground-based cloud image cloud type recognition method based on a convolutional neural network provided by the invention;
FIG. 2 is a schematic diagram of the structure of the feature extraction unit of the convolutional-neural-network-based cloud recognition network of the present invention;
FIG. 3 is a schematic diagram of the structure of the down-sampling unit of the convolutional-neural-network-based cloud recognition network of the present invention.
Detailed Description
The invention mainly realizes cloud type recognition in ground-based cloud images based on a convolutional neural network. The specific method adopted by the invention is described in detail below with reference to the accompanying drawings.
Specifically, the flow of the ground-based cloud image cloud type recognition method based on a convolutional neural network is shown in FIG. 1; the method includes the following steps: S1, constructing a lightweight cloud recognition network for ground-based cloud images; S2, training the cloud recognition network model; S3, acquiring the cloud image to be recognized and preprocessing it; S4, inputting the preprocessed cloud image into the trained cloud recognition network; S5, the cloud recognition network automatically recognizing the cloud image and outputting its category.
For S1: constructing a lightweight cloud recognition network for ground-based cloud images.
In the invention, the structure of the cloud recognition network is shown in Table 1; it mainly consists of a convolutional layer, a max-pooling layer, 4 sub-networks, a global pooling layer and a fully connected layer.
Convolutional layers: 1×1 convolution kernels are used for the convolutional layer at the input of the cloud recognition network (convolutional layer 1) and the convolutional layer at its output (convolutional layer 2). Convolutional layer 1 mainly increases the number of output channels without changing the width and height of the feature map, expanding the dimensionality of the data. Convolutional layer 2 mainly reduces the number of output channels without changing the width and height, realizing dimensionality reduction and reducing the number of parameters.
Max-pooling layer and global pooling layer: the input end of the cloud recognition network uses a max-pooling layer with a 3×3 window, and the output end uses a global pooling layer with a 7×7 window; they are mainly used for feature dimensionality reduction and for compressing the data and parameter volume, which reduces overfitting and at the same time improves the fault tolerance of the model.
Sub-networks (sub-networks 1-4): the 4 sub-networks share a similar structure, each consisting of feature extraction units and a down-sampling unit; they differ in the number of times the feature extraction unit is repeated.
Feature extraction unit: the structure of the feature extraction unit is shown in FIG. 2. The unit combines the ideas of depthwise separable convolution, random channel shuffling and dilated convolution. Its flow is as follows: the input feature map is first split along the channel dimension into 4 groups of channels; from left to right, the first group passes through a short residual connection; the second group is a combination of an ordinary depthwise separable convolution and a pointwise convolution; the third and fourth groups follow the same steps as the second group, except that their convolution kernels use dilated convolutions with different dilation rates (r), which provides a larger receptive field. Finally, the four groups of channels are concatenated and a random channel shuffle is applied, which exploits the representational capacity of the channels and compensates for the loss caused by the narrowed channel width. A sketch of such a unit is given after this paragraph.
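The following PyTorch sketch is one possible reading of FIG. 2, not the patented implementation: the channel counts, the dilation rates, and the exact placement of BN and activation layers are assumptions, and "random channel mixing" is realized here as the common group-wise channel shuffle.

import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Group-wise channel shuffle, a common realization of "random channel mixing".
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class FeatureExtractionUnit(nn.Module):
    # Illustrative unit: split into 4 channel groups, process, concatenate, shuffle.
    def __init__(self, channels: int, dilations=(2, 4)):      # dilation rates are assumed
        super().__init__()
        assert channels % 4 == 0
        g = channels // 4

        def branch(dilation: int) -> nn.Sequential:
            # Depthwise (possibly dilated) 3x3 convolution followed by a pointwise convolution.
            return nn.Sequential(
                nn.Conv2d(g, g, 3, padding=dilation, dilation=dilation, groups=g, bias=False),
                nn.BatchNorm2d(g),
                nn.Conv2d(g, g, 1, bias=False),
                nn.BatchNorm2d(g),
                nn.ReLU(inplace=True),
            )

        # Group 1 is left untouched (short residual chain); groups 2-4 are convolved.
        self.branch2 = branch(1)              # ordinary depthwise separable convolution
        self.branch3 = branch(dilations[0])   # dilated convolution, larger receptive field
        self.branch4 = branch(dilations[1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2, x3, x4 = x.chunk(4, dim=1)    # channel separation into 4 groups
        out = torch.cat([x1, self.branch2(x2), self.branch3(x3), self.branch4(x4)], dim=1)
        return channel_shuffle(out, groups=4) # random channel mixing


if __name__ == "__main__":
    unit = FeatureExtractionUnit(64)
    print(unit(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])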
Down-sampling unit: the structure of the down-sampling unit is shown in FIG. 3. In the down-sampling unit, a pointwise convolution (1×1 convolution) first integrates the feature information after the random channel shuffle. The result then passes through an average pooling layer and a max-pooling layer, each with a 3×3 window: the average pooling layer extracts the overall information of a local region, while the max-pooling layer extracts its contour information, so key information is extracted while the feature dimensionality is reduced and spatial down-sampling is completed. The channels of the two branches are then concatenated to increase the channel width. Finally, a channel attention mechanism is applied: global pooling and two fully connected layers (fully connected layer 1 and fully connected layer 2) extract the overall characteristics of the concatenated channels, and the result is multiplied element-wise with the original channel feature maps, realizing a per-channel weighting that lets the model know which channel feature maps are more significant; key information is thus processed selectively and network efficiency is improved. A sketch follows this paragraph.
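Similarly, the sketch below is an illustrative reading of FIG. 3 rather than the patent's implementation; the channel counts, the stride-2 pooling, and the reduction ratio inside the attention branch are assumptions.

import torch
import torch.nn as nn


class DownSamplingUnit(nn.Module):
    # Illustrative unit: pointwise conv, parallel avg/max pooling, concat, channel attention.
    def __init__(self, in_channels: int, reduction: int = 4):   # reduction ratio is assumed
        super().__init__()
        self.pointwise = nn.Sequential(                 # integrate shuffled channel information
            nn.Conv2d(in_channels, in_channels, 1, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )
        self.avg_pool = nn.AvgPool2d(3, stride=2, padding=1)   # overall local information
        self.max_pool = nn.MaxPool2d(3, stride=2, padding=1)   # local contour information
        out_channels = 2 * in_channels                   # the two branches are concatenated
        self.attention = nn.Sequential(                  # channel attention: GAP + 2 FC layers
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(out_channels, out_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels // reduction, out_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pointwise(x)
        y = torch.cat([self.avg_pool(x), self.max_pool(x)], dim=1)   # widen the channels
        w = self.attention(y).view(y.size(0), -1, 1, 1)              # per-channel weights
        return y * w                                                  # re-weight channel maps


if __name__ == "__main__":
    unit = DownSamplingUnit(64)
    print(unit(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 128, 28, 28])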
BN layers are added in the structures of both the feature extraction unit and the down-sampling unit. The BN layer ensures that a large learning rate can be chosen when training the network model, which accelerates convergence and reduces training time; at the same time it normalizes the data before each layer, which improves accuracy, strengthens the regularization strategy, and effectively avoids situations such as overfitting.
Feature extraction units with different repetition counts are combined with 1 down-sampling unit to form a sub-network module. The overall process of cloud type recognition is as follows: first, shallow information is extracted with an ordinary 3×3 convolutional layer and a pooling layer; then, deep features are extracted by the combination of the four sub-networks with different parameters; finally, feature classification is completed by the global pooling layer and the fully connected layer. A sketch of how these pieces could be assembled is given below.
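Under the same assumptions, the pieces could be assembled as follows. Because Table 1 is only available as an image, the stem width, the per-sub-network repetition counts, and the use of adaptive average pooling in place of the 7×7 global pooling window are illustrative choices; FeatureExtractionUnit and DownSamplingUnit are the classes sketched above.

import torch
import torch.nn as nn

# FeatureExtractionUnit and DownSamplingUnit are the classes sketched above.


class CloudRecognitionNet(nn.Module):
    # Illustrative assembly: stem conv + max pool, 4 sub-networks, global pool, FC classifier.
    def __init__(self, num_classes: int = 11, stem_channels: int = 32,
                 repeats=(1, 2, 3, 1)):          # per-sub-network repetition counts are assumed
        super().__init__()
        self.stem = nn.Sequential(               # shallow information: 3x3 conv + 3x3 max pool
            nn.Conv2d(3, stem_channels, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(stem_channels),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        stages, c = [], stem_channels
        for n in repeats:                        # each sub-network: n feature units + 1 down-sampler
            stages += [FeatureExtractionUnit(c) for _ in range(n)]
            stages.append(DownSamplingUnit(c))
            c *= 2                               # the down-sampler doubles the channel count
        self.subnetworks = nn.Sequential(*stages)
        self.head = nn.Sequential(               # global pooling + fully connected classifier
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(c, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.subnetworks(self.stem(x)))


if __name__ == "__main__":
    net = CloudRecognitionNet()
    print(net(torch.randn(2, 3, 214, 214)).shape)   # torch.Size([2, 11])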
For S2: and training a cloud recognition network model.
The specific steps of training the cloud recognition network model are as follows: acquire the manually labeled training set of cloud images; preprocess the cloud image data in the set; and input the preprocessed data into the cloud recognition network for training.
The cloud image data in the set are preprocessed by normalization and image augmentation. Normalization unifies the image resolution to 214×214. Image augmentation includes rotation, translation, color jittering and similar operations; its purpose is to enlarge the amount of data without changing the image characteristics, so as to achieve a better training effect and avoid overfitting.
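For illustration only, the preprocessing described here could be written with torchvision transforms; the rotation angle, translation fraction, and jitter strengths below are assumed values, since the text does not specify them.

import torchvision.transforms as T

# Training-time preprocessing: resize to 214x214, augment, convert to tensor.
train_transform = T.Compose([
    T.Resize((214, 214)),                                         # unify the resolution
    T.RandomAffine(degrees=15, translate=(0.1, 0.1)),             # rotation and translation
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # color dithering
    T.ToTensor(),
])

# At recognition time only resizing and tensor conversion are applied (after fisheye correction).
test_transform = T.Compose([T.Resize((214, 214)), T.ToTensor()])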
The preprocessed cloud image data are input into the cloud recognition network established in step S1 for training. The cross-entropy function is selected as the loss function, the optimizer is Adam, the number of iterations is set to 100, the batch size to 100, and the initial learning rate to 0.01.
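A minimal training loop matching these hyper-parameters might look as follows; the folder layout "clouds/train" is hypothetical, and train_transform and CloudRecognitionNet are carried over from the sketches above.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# "clouds/train" is a hypothetical directory with one sub-folder per cloud category.
dataset = ImageFolder("clouds/train", transform=train_transform)
loader = DataLoader(dataset, batch_size=100, shuffle=True)        # batch size 100

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CloudRecognitionNet(num_classes=11).to(device)
criterion = nn.CrossEntropyLoss()                                 # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)         # Adam, initial lr 0.01

for epoch in range(100):                                          # 100 iterations (epochs)
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")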
For S3: and acquiring a cloud picture to be identified and preprocessing the cloud picture.
The cloud image to be recognized is preprocessed by carrying out fisheye correction and normalization on it.
Because the cloud images to be recognized are all captured with a fisheye lens, a correction operation is required before processing. Fisheye correction of the cloud image mainly uses the checkerboard calibration method (also called the fisheye correction algorithm) provided with OpenCV, with the following specific steps: prepare a checkerboard; call the get_K_and_D() function to compute the fisheye intrinsic parameters (K) and distortion coefficients (D); and correct the fisheye image using the computed K and D. A sketch of this procedure follows.
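The text names a get_K_and_D() helper; the sketch below shows how such a helper is commonly built on top of OpenCV's cv2.fisheye module. The checkerboard size, the image paths, and the helper's exact signature are assumptions and not the patent's code.

import glob
import cv2
import numpy as np

CHECKERBOARD = (6, 9)   # inner corners of the printed checkerboard (assumed size)


def get_K_and_D(image_paths):
    # Estimate the fisheye intrinsic matrix K and distortion coefficients D
    # from photographs of the checkerboard.
    objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)
    objpoints, imgpoints, size = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
    K, D = np.zeros((3, 3)), np.zeros((4, 1))
    cv2.fisheye.calibrate(
        objpoints, imgpoints, size, K, D, None, None,
        cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
    return K, D


def undistort(img, K, D):
    # Correct a fisheye image using the computed K and D.
    h, w = img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)


# Hypothetical usage: calibrate once from checkerboard photos, then correct sky images.
K, D = get_K_and_D(glob.glob("checkerboard/*.jpg"))
corrected = undistort(cv2.imread("sky_image.jpg"), K, D)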
The fisheye-corrected cloud image is then normalized so that its size is consistent with that of the preprocessed training-set images.
For S4: and inputting the preprocessed cloud picture to be recognized into the trained cloud recognition network.
The cloud image to be recognized, processed in step S3, is input into the trained cloud recognition network, where feature extraction and related operations are performed.
For S5: the cloud recognition network automatically recognizes the cloud image and outputs its category.
After the cloud recognition network automatically analyzes the cloud image, it outputs the probability of the image to be recognized belonging to each of 11 categories (the ten cloud genera plus cloud-free clear sky; the probabilities sum to 1), and by comparison the image is automatically classified into the cloud type with the highest probability, completing cloud type recognition. A brief sketch of this final step is given below.
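A short sketch of this final step, assuming the trained model, device and test_transform from the sketches above and a hypothetical ordering of the 11 category names:

import torch
from PIL import Image

# Hypothetical English names for the ten cloud genera plus cloud-free clear sky.
CLASSES = ["cirrus", "cirrocumulus", "cirrostratus", "altocumulus", "altostratus",
           "nimbostratus", "stratocumulus", "stratus", "cumulus", "cumulonimbus",
           "clear sky"]

model.eval()
with torch.no_grad():
    image = Image.open("corrected_cloud.jpg").convert("RGB")       # fisheye-corrected image
    x = test_transform(image).unsqueeze(0).to(device)
    probs = torch.softmax(model(x), dim=1).squeeze(0)              # 11 probabilities, sum to 1
    best = int(probs.argmax())
    print(f"predicted cloud type: {CLASSES[best]} (p = {probs[best]:.3f})")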
TABLE 1 Cloud recognition network architecture (the table is reproduced only as an image, Figure BDA0002266742690000071, in the original publication)
The above embodiments merely illustrate the technical solution of the present invention and are not restrictive. Those skilled in the art will understand that the above embodiments do not limit the present invention in any way, and that all similar technical solutions obtained by means of equivalent replacement or equivalent transformation fall within the protection scope of the present invention.

Claims (5)

1. A ground-based cloud image cloud type recognition method based on a convolutional neural network, characterized by comprising the following steps:
Step 1: constructing a lightweight cloud type recognition network for ground-based cloud images;
the convolutional-neural-network-based cloud recognition network model for ground-based cloud images comprises a convolutional layer, a max-pooling layer, 4 sub-networks, a global pooling layer and a fully connected layer, wherein the 4 sub-networks share a similar structure but use different parameters;
Step 2: training the cloud recognition network model;
the specific steps of training the cloud recognition network model are as follows: acquiring a manually labeled training set of cloud images; preprocessing the cloud image data in the set; and inputting the preprocessed data into the cloud recognition network for training;
Step 3: acquiring a cloud image to be recognized and preprocessing it;
the preprocessing comprises: applying fisheye correction to the cloud image to be recognized, and normalizing it so that its size is consistent with that of the preprocessed training-set images;
Step 4: inputting the preprocessed cloud image into the trained cloud recognition network;
Step 5: the cloud recognition network automatically recognizing the cloud image and outputting its category;
wherein the cloud recognition network performs cloud type recognition on the input image, outputs the probability of the current cloud image belonging to each of 11 categories (the ten cloud genera plus cloud-free clear sky), and takes the category with the largest probability as the final recognition result.
2. The ground-based cloud image cloud type recognition method based on a convolutional neural network according to claim 1, wherein the convolutional layers at the input end and the output end each use 1×1 convolutions, respectively realizing expansion and reduction of the number of channels; the max-pooling layer and the global pooling layer use 3×3 and 7×7 kernels respectively, which shrinks the parameter matrices, reduces the number of parameters and lowers the risk of overfitting; and the fully connected layer classifies the extracted features.
3. The ground-based cloud image cloud type recognition method based on a convolutional neural network according to claim 1, wherein each sub-network comprises n feature extraction units and 1 down-sampling unit, n > 0: the feature extraction unit extracts image features from the cloud image to be recognized; the down-sampling unit reduces the dimensionality of the features while retaining the effective information, which also helps to avoid overfitting.
4. The ground-based cloud image cloud type recognition method based on a convolutional neural network according to claim 1, wherein the feature extraction unit combines the ideas of depthwise separable convolution, random channel shuffling and dilated convolution, effectively reducing the amount of computation while preserving feature extraction accuracy; the down-sampling unit down-samples the spatial scale of the image with an average pooling layer and a max-pooling layer, and uses a channel attention mechanism to process key information selectively so as to improve network efficiency; and BN layers are added to the structures of the feature extraction unit and the down-sampling unit to accelerate convergence of the network model during training and to effectively avoid vanishing and exploding gradients.
5. The ground-based cloud image cloud type recognition method based on a convolutional neural network according to claim 1, wherein the specific method for preprocessing the cloud images in the training data set is as follows: normalizing the images in the training set and applying data augmentation to them; and wherein, when training the network model, the cross-entropy function is selected as the loss function, the optimizer is Adam, the number of iterations is set to 100, the batch size is set to 100 and the initial learning rate is set to 0.01.
CN201911090618.0A 2019-11-09 2019-11-09 Ground-based cloud image cloud recognition method based on convolutional neural network Active CN110929602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911090618.0A CN110929602B (en) 2019-11-09 2019-11-09 Ground-based cloud image cloud recognition method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911090618.0A CN110929602B (en) 2019-11-09 2019-11-09 Ground-based cloud image cloud recognition method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110929602A true CN110929602A (en) 2020-03-27
CN110929602B CN110929602B (en) 2023-08-22

Family

ID=69852580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911090618.0A Active CN110929602B (en) 2019-11-09 2019-11-09 Ground-based cloud image cloud recognition method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110929602B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190095730A1 (en) * 2017-09-25 2019-03-28 Beijing University Of Posts And Telecommunications End-To-End Lightweight Method And Apparatus For License Plate Recognition
CN108846334A (en) * 2018-05-30 2018-11-20 安徽大学 Cloud category automatic identification method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JUNYU LU ET AL.: "P_Segnet and NP_Segnet: New Neural Network Architectures for Cloud Recognition of Remote Sensing Images", IEEE Access, vol. 7
张弛; 刘钧; 李旭光; 张淇海; 杨毕宣; 杨俊: "Cloud type recognition method based on visible-light and infrared image information fusion" (基于可见光——红外图像信息融合的云状识别方法), Journal of Meteorology and Environment (气象与环境学报), no. 01
林封笑; 陈华杰; 姚勤炜; 张杰豪: "Fast object detection algorithm based on hybrid-structure convolutional neural network" (基于混合结构卷积神经网络的目标快速检测算法), Computer Engineering (计算机工程), no. 12

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113539238A (en) * 2020-03-31 2021-10-22 中国科学院声学研究所 End-to-end language identification and classification method based on void convolutional neural network
CN113539238B (en) * 2020-03-31 2023-12-08 中国科学院声学研究所 End-to-end language identification and classification method based on cavity convolutional neural network
CN111507399A (en) * 2020-04-16 2020-08-07 上海眼控科技股份有限公司 Cloud recognition and model training method, device, terminal and medium based on deep learning
CN111626255A (en) * 2020-06-02 2020-09-04 中国人民解放军国防科技大学 Foundation cloud graph database construction method for convolutional neural network training
CN111695640A (en) * 2020-06-18 2020-09-22 南京信息职业技术学院 Foundation cloud picture recognition model training method and foundation cloud picture recognition method
CN111695640B (en) * 2020-06-18 2024-04-09 南京信息职业技术学院 Foundation cloud picture identification model training method and foundation cloud picture identification method
CN112258431A (en) * 2020-09-27 2021-01-22 成都东方天呈智能科技有限公司 Image classification model based on mixed depth separable expansion convolution and classification method thereof
CN112434554A (en) * 2020-10-16 2021-03-02 中科院成都信息技术股份有限公司 Heterogeneous reduction-based cloud image identification method and system
CN112434554B (en) * 2020-10-16 2023-08-04 中科院成都信息技术股份有限公司 Cloud image recognition method and system based on heterogeneous reduction
CN112348058A (en) * 2020-10-20 2021-02-09 华东交通大学 Satellite cloud picture classification method based on CNN-LSTM network and computer readable storage medium
CN112418087A (en) * 2020-11-23 2021-02-26 中山大学 Underwater video fish identification method based on neural network
CN112418087B (en) * 2020-11-23 2023-06-09 中山大学 Underwater video fish identification method based on neural network
CN112508255A (en) * 2020-12-01 2021-03-16 北京科技大学 Photovoltaic output ultra-short-term prediction method and system based on multi-source heterogeneous data
CN112508255B (en) * 2020-12-01 2021-09-07 北京科技大学 Photovoltaic output ultra-short-term prediction method and system based on multi-source heterogeneous data
CN112801270B (en) * 2021-01-21 2023-12-12 中国人民解放军国防科技大学 Automatic U-shaped network slot identification method integrating depth convolution and attention mechanism
CN112766176A (en) * 2021-01-21 2021-05-07 深圳市安软科技股份有限公司 Training method of lightweight convolutional neural network and face attribute recognition method
CN112766176B (en) * 2021-01-21 2023-12-01 深圳市安软科技股份有限公司 Training method of lightweight convolutional neural network and face attribute recognition method
CN112801270A (en) * 2021-01-21 2021-05-14 中国人民解放军国防科技大学 Automatic U-shaped network slot identification method integrating depth convolution and attention mechanism
CN112884031A (en) * 2021-02-04 2021-06-01 南京信息工程大学 Foundation cloud picture cloud form automatic identification method based on convolutional neural network
CN113192084A (en) * 2021-05-07 2021-07-30 中国公路工程咨询集团有限公司 Machine vision-based highway slope micro-displacement deformation monitoring method
CN113469344A (en) * 2021-07-23 2021-10-01 成都数联云算科技有限公司 Deep convolutional neural network model improvement method, system, device and medium
CN113469344B (en) * 2021-07-23 2024-04-16 成都数联云算科技有限公司 Method, system, device and medium for improving deep convolutional neural network model
CN113627376A (en) * 2021-08-18 2021-11-09 北京工业大学 Facial expression recognition method based on multi-scale dense connection depth separable network
CN113627376B (en) * 2021-08-18 2024-02-09 北京工业大学 Facial expression recognition method based on multi-scale dense connection depth separable network
CN114067153A (en) * 2021-11-02 2022-02-18 暨南大学 Image classification method and system based on parallel double-attention light-weight residual error network
CN114565854A (en) * 2022-04-29 2022-05-31 河北冀云气象技术服务有限责任公司 Intelligent image cloud identification system and method
CN116363510B (en) * 2023-03-20 2023-10-24 中国气象局人工影响天气中心 Method and device for identifying ice crystals and cloud drops in artificial rain or snow process, computer equipment and storage medium
CN116363510A (en) * 2023-03-20 2023-06-30 中国气象局人工影响天气中心 Ice crystal, cloud droplet identification method and device in artificial precipitation (snow) process, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110929602B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN110929602A (en) Foundation cloud picture cloud shape identification method based on convolutional neural network
CN110929603B (en) Weather image recognition method based on lightweight convolutional neural network
CN112766199B (en) Hyperspectral image classification method based on self-adaptive multi-scale feature extraction model
CN111832546B (en) Lightweight natural scene text recognition method
CN113239954B (en) Attention mechanism-based image semantic segmentation feature fusion method
CN109840560B (en) Image classification method based on clustering in capsule network
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN106295613A (en) A kind of unmanned plane target localization method and system
CN111640116B (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN111401380B (en) RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization
CN110991349B (en) Lightweight vehicle attribute identification method based on metric learning
CN110334584B (en) Gesture recognition method based on regional full convolution network
CN113642445B (en) Hyperspectral image classification method based on full convolution neural network
CN112347970A (en) Remote sensing image ground object identification method based on graph convolution neural network
CN113705641A (en) Hyperspectral image classification method based on rich context network
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN114973011A (en) High-resolution remote sensing image building extraction method based on deep learning
CN110598746A (en) Adaptive scene classification method based on ODE solver
CN111709443B (en) Calligraphy character style classification method based on rotation invariant convolution neural network
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
CN112036454A (en) Image classification method based on multi-core dense connection network
CN111401156A (en) Image identification method based on Gabor convolution neural network
CN114330516A (en) Small sample logo image classification based on multi-graph guided neural network model
CN110956201B (en) Convolutional neural network-based image distortion type classification method
CN111695450B (en) Face rapid identification method based on IMobaileNet

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230724

Address after: 100012 817, Floor 8, No. 101, Floor 3 to 8, Building 17, Rongchuang Road, Chaoyang District, Beijing

Applicant after: Beijing Ge Lei Information Technology Co.,Ltd.

Address before: 100124 No. 100 Chaoyang District Ping Tian Park, Beijing

Applicant before: Beijing University of Technology

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant