CN109740686A - Deep learning multi-label image classification method based on region pooling and feature fusion - Google Patents

Deep learning multi-label image classification method based on region pooling and feature fusion

Info

Publication number
CN109740686A
CN109740686A (application CN201910019115.8A)
Authority
CN
China
Prior art keywords
feature
interest
feature fusion
region pooling
candidate region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910019115.8A
Other languages
Chinese (zh)
Inventor
孙远
李宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201910019115.8A priority Critical patent/CN109740686A/en
Publication of CN109740686A publication Critical patent/CN109740686A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a deep learning multi-label image classification method based on region pooling and feature fusion. The steps are: 1) extract candidate regions from the image to be processed, generating candidate regions of different scales; 2) pass the candidate regions through the convolutional (pre-classifier) part of a pre-trained convolutional neural network, generating multi-channel feature maps of interest at different scales; 3) apply region pooling to the feature maps of interest at different scales, so that feature maps of different sizes become the same size; 4) fuse the multiple same-sized feature maps of interest, generating the final feature vector; 5) feed the fused feature vector into a classifier and predict the probability that each object is present in the image. The invention is based on convolutional neural networks with region pooling and feature fusion added; it provides both a way to perform feature fusion inside a convolutional neural network and a high-accuracy multi-label image classification method, helping researchers and engineering practitioners improve image classification in real applications.

Description

A deep learning multi-label image classification method based on region pooling and feature fusion
Technical field
The invention belongs to the field of computer image processing and relates to a deep learning multi-label image classification method based on region pooling and feature fusion.
Background art
Multi-label image classification is a basic task of image processing; the goal is to identify which classes of objects an image contains. Single-label image classification and multi-label image classification are two different tasks: single-label classification is a special case of multi-label classification in which the image contains only one class of object. In recent years, thanks to the rapid development of deep learning, image classification has made considerable progress, and classification based on convolutional neural networks has even reached human-comparable accuracy. Multi-label image classification, however, must recognize multiple objects in one image, so the task is more complex and results still leave room for improvement. Yet in practical applications, multi-label image classification has many use cases. Current research and applications of the multi-label task face the following problems:
On the one hand, multi-label classification must identify objects of multiple classes; the scenes are often relatively complex, and objects sometimes overlap one another, which increases the difficulty of recognition.
On the other hand, multi-label classification assumes, to some extent, a certain association between labels. A deep learning model can capture label-to-label associations and converge during training; in real scenes, however, the association between labels may be weak.
Furthermore, image classification requires labeled data to train the model. In practical applications, compared with single-label classification, every picture in a multi-label task must be annotated with multiple labels, which increases annotation cost.
These problems make multi-label image classification hard for users to apply; the problem is often converted into multiple single-label classification tasks instead, which weakens the usability of multi-label classification in practical applications.
Summary of the invention
The technical problem solved by the invention is, in view of the deficiencies of the prior art, to provide a deep learning multi-label image classification method based on region pooling and feature fusion that is clear in structure and easy to understand while still achieving high-accuracy multi-label image classification.
The technical solution provided by the present invention is as follows:
A deep learning multi-label image classification method based on region pooling and feature fusion, comprising the following steps:
Step 1): extract candidate regions from the image to be processed. A candidate-region extraction algorithm extracts several candidate regions of different sizes, P ∈ N × W_l × H_l (l = 1, 2, …, N), from the picture for subsequent steps, where N is the number of candidate regions and W_l and H_l are the width and height of candidate region l; each candidate region has its own scale.
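Step 1) can be pictured with a toy stand-in for EdgeBoxes or Selective Search (neither real algorithm is reproduced here): a multi-scale sliding window that emits N boxes (x, y, W_l, H_l), matching the shape of P ∈ N × W_l × H_l. All names and sizes below are illustrative assumptions, not part of the patent.

```python
def toy_proposals(img_h, img_w, scales=(32, 64, 128), stride=64):
    """Hypothetical stand-in for EdgeBoxes/Selective Search: enumerate
    square windows at several scales instead of scoring edge evidence."""
    boxes = []
    for s in scales:
        for y in range(0, img_h - s + 1, stride):
            for x in range(0, img_w - s + 1, stride):
                boxes.append((x, y, s, s))  # (x, y, W_l, H_l); here W_l == H_l == s
    return boxes

boxes = toy_proposals(256, 256)  # N candidate regions of mixed scales
N = len(boxes)
```

The real proposal algorithms score edge or segmentation evidence rather than enumerating a grid, and typically return on the order of a few hundred regions per image.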
Step 2): feed the candidate regions P ∈ N × W_l × H_l (l = 1, 2, …, N) obtained in step 1) into the convolutional (pre-classifier) part of a pre-trained convolutional neural network, generating multi-channel feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) at different scales, where N is the number of feature maps of interest, C is the number of channels, and w_l and h_l are the width and height of a single feature channel. A pre-trained convolutional neural network can be used to extract features from the candidate regions, producing feature maps of interest with highly abstract features.
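Purely to show the shape flow of step 2), a single shared, randomly initialized convolution can stand in for the convolutional part of a pre-trained network such as VGG: a W_l × H_l region becomes a C × w_l × h_l feature map whose spatial size depends on the region size. The weights here are random stand-ins, not pre-trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 3, 3))  # shared random kernels (stand-in for pre-trained weights)

def tiny_conv(region, kernels=K, stride=2):
    """One strided valid convolution standing in for the convolutional
    (pre-classifier) part of a pre-trained network."""
    C, k, _ = kernels.shape
    H, W = region.shape
    h = (H - k) // stride + 1
    w = (W - k) // stride + 1
    out = np.empty((C, h, w))
    for c in range(C):
        for i in range(h):
            for j in range(w):
                patch = region[i*stride:i*stride+k, j*stride:j*stride+k]
                out[c, i, j] = (patch * kernels[c]).sum()
    return out

# two candidate regions of different scale -> feature maps with different w_l x h_l
regions = [rng.standard_normal((17, 17)), rng.standard_normal((33, 33))]
feats = [tiny_conv(r) for r in regions]   # each F_l has shape C x w_l x h_l
```

Note that, as the patent states, convolution shrinks the spatial size while extracting features, and different region scales yield different w_l × h_l.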
Step 3): apply region pooling to the feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) obtained in step 2), producing feature maps of interest of identical scale. w_l × h_l is the width and height of each feature map of interest; the sizes of all feature maps of interest are normalized to w′ × h′, so the normalized feature maps of interest are F ∈ N × C × w′ × h′.
Step 4): fuse the several normalized feature maps of interest F ∈ N × C × w′ × h′ obtained in step 3), generating a highly abstract feature vector V ∈ T, where T is the number of object classes.
Step 5): feed the feature vector obtained in step 4) into a classifier and predict the label vector p ∈ T. Each position p_i of the prediction vector p gives the probability that object i is present. Given a threshold θ, object i is predicted present if p_i exceeds θ.
The candidate-region extraction algorithm mentioned in step 1) includes but is not limited to SelectiveSearch, EdgeBoxes, and other candidate-region extraction algorithms.
The method of extracting feature maps of interest with a convolutional neural network mentioned in step 2) includes but is not limited to Alexnet, VGG, Resnet, and other convolutional neural networks.
Step 3) comprises the following steps:
Step 3.1): divide each w_l × h_l feature map of interest into w′ × h′ sub-feature maps by cutting horizontally and vertically; the size of each sub-feature map is (w_l / w′) × (h_l / h′).
Step 3.2): apply max pooling to each sub-feature map, extracting the maximum-value feature point. The maxima of the sub-feature maps are reassembled into a new feature map of size w′ × h′. After sub-feature-map division, max pooling, and feature recombination for every feature map of interest, all feature maps of interest are normalized to w′ × h′.
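Steps 3.1)–3.2) amount to adaptive max pooling onto a fixed w′ × h′ grid. A minimal NumPy sketch (the rounding of bin edges with `linspace` is an implementation choice, not specified by the patent):

```python
import numpy as np

def region_max_pool(fmap, out_h, out_w):
    """Max-pool a C x h_l x w_l feature map onto a fixed C x out_h x out_w
    grid: cut rows/columns into roughly equal bins and take the max of each
    cell -- the region-pooling step of the method."""
    C, h, w = fmap.shape
    ys = np.linspace(0, h, out_h + 1).astype(int)   # row bin edges
    xs = np.linspace(0, w, out_w + 1).astype(int)   # column bin edges
    out = np.empty((C, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            cell = fmap[:, ys[i]:ys[i+1], xs[j]:xs[j+1]]
            out[:, i, j] = cell.max(axis=(1, 2))
    return out

rng = np.random.default_rng(1)
maps = [rng.standard_normal((4, s, s)) for s in (13, 29)]   # maps of different scales
pooled = [region_max_pool(m, 8, 8) for m in maps]           # all become 4 x 8 x 8
```

Because the bins tile the whole map, the global maximum of each input map survives into the pooled output.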
Step 4) comprises the following steps:
Step 4.1): feed the normalized feature maps of interest F ∈ N × C × w′ × h′ obtained in step 3) into several fully connected layers for feature extraction, generating feature vectors of interest. First apply one convolution, with kernel size w′ × h′, to the normalized feature maps of interest, producing Fc ∈ N × C′; then pass Fc through several fully connected layers, producing Fs ∈ N × T, where N is the number of feature vectors of interest and T is the dimension of each vector.
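Because the convolution kernel in step 4.1) spans the whole w′ × h′ map, it is just a linear projection of each flattened C × w′ × h′ map. A toy sketch with random, untrained stand-in weights (all sizes hypothetical; the patent's embodiment uses 200 regions, 256 channels, 40 × 40 maps, 20 classes):

```python
import numpy as np

rng = np.random.default_rng(2)
N, C, w_, h_, C2, T = 6, 4, 8, 8, 16, 5            # toy sizes

F = rng.standard_normal((N, C, w_, h_))            # normalized feature maps of interest
W_conv = rng.standard_normal((C2, C, w_, h_))      # one conv whose kernel spans the whole map
Fc = np.einsum('nchw,dchw->nd', F, W_conv)         # Fc in N x C'
W_fc = rng.standard_normal((C2, T))
Fs = np.maximum(Fc, 0.0) @ W_fc                    # ReLU + one fully connected layer -> Fs in N x T
```

The ReLU between the two linear maps is an assumption; the patent only specifies "several fully connected layers".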
Step 4.2): fuse the N feature vectors Fs ∈ N × T obtained in step 4.1), generating the feature vector V ∈ T. Writing Fs = (t_1, t_2, …, t_T) with t_i ∈ N × 1 and V = (v_1, v_2, …, v_T), then v_i = max(t_i).
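Step 4.2)'s fusion is simply an element-wise maximum over the N per-region score vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
Fs = rng.standard_normal((6, 5))   # N = 6 per-region score vectors, T = 5 classes
V = Fs.max(axis=0)                 # v_i = max over the regions' i-th entries -> V in T
```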
The prediction vector mentioned in step 5) is obtained by activating the feature vector, p = δ(V), where p_i = δ(V_i). The activation method includes but is not limited to sigmoid, tanh, softmax, and other activation functions.
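With a sigmoid activation and threshold θ = 0.5, step 5) reduces to a few lines (the score values here are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V = np.array([2.0, -1.5, 0.1, -3.0, 4.2])   # hypothetical fused scores for T = 5 classes
p = sigmoid(V)                               # p_i = probability that object i is present
present = p > 0.5                            # threshold theta = 0.5
```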
Beneficial effects:
The invention provides a deep learning multi-label image classification method based on region pooling and feature fusion that has a clear structure yet maintains high multi-label classification accuracy. The method focuses on a framework; the model leaves certain choices open, and users can select a model of appropriate size for their own data. A user needs only basic knowledge of deep learning and convolutional neural networks to understand the structure of the method and implement it. The invention is highly extensible: users can define data labels according to their own needs, complete training, and apply the method in actual production.
Description of the drawings
Fig. 1 is a flow chart of the method of the invention;
Fig. 2 is the overall architecture diagram of the invention;
Fig. 3 is a schematic diagram of candidate-region extraction;
Fig. 4 is a schematic diagram of feature fusion;
Fig. 5 is a schematic diagram of the multi-label classification results.
Detailed description of the embodiments
To make the purpose, design ideas, and advantages of the present invention clearer, the invention is described in further detail below with reference to a specific example and the accompanying drawings.
The present invention provides a deep learning multi-label image classification method based on region pooling and feature fusion. As shown in Fig. 1, it includes five key steps: extract candidate regions from the picture, generating candidate regions of different scales; feed the candidate regions into a pre-trained convolutional neural network, generating feature maps of interest; region-pool the feature maps of interest, generating feature maps of identical scale; fuse the multiple feature maps of interest, generating a feature vector; and feed the feature vector into a classifier, predicting the object classes contained in the picture.
The key steps of the method of the invention are described in detail below:
Step 1: extract candidate regions from the image to be processed. As shown in Fig. 2, pictures generally use a resolution within 1000 px. The EdgeBoxes algorithm extracts several candidate regions of different sizes, P ∈ N × W_l × H_l (l = 1, 2, …, N), from the picture; after EdgeBoxes extraction, N is generally about 200. W_l and H_l depend on the size and shape of the objects in the image.
Step 2: feed the candidate regions P ∈ N × W_l × H_l (l = 1, 2, …, N) obtained in step 1 into a pre-trained VGG, generating multi-channel feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) at different scales, where N is the number of feature maps of interest, generally about 200; C, the number of channels, is 256; and w_l and h_l are the width and height of a single feature channel, determined by the size of the candidate region. Convolution reduces the size of the image while extracting features.
Step 3: apply region pooling to the feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) obtained in step 2, producing feature maps of identical scale. As shown in Fig. 4, w_l × h_l is the width and height of each feature map of interest; all feature maps are normalized to w′ × h′, here 40 × 40. Each w_l × h_l feature map of interest is divided into 40 × 40 sub-feature maps by cutting horizontally and vertically, so the size of each sub-feature map is (w_l / 40) × (h_l / 40). Apply max pooling to each sub-feature map, extracting the maximum-value feature point; the maxima of the sub-feature maps are reassembled into a new feature map of size 40 × 40. After sub-feature-map division, max pooling, and feature recombination for every feature map of interest, the normalized feature maps are F ∈ 200 × 256 × 40 × 40.
Step 4: feed the normalized feature maps of interest F ∈ 200 × 256 × 40 × 40 obtained in step 3 into several fully connected layers for feature extraction, generating feature vectors of interest. First apply one convolution, with kernel size 40 × 40, to the normalized feature maps, producing Fc ∈ 200 × 256; then pass Fc through several fully connected layers, producing Fs ∈ 200 × 20 — with 20 object classes, the output dimension of the fully connected layers is 20. Fuse the feature vectors Fs ∈ 200 × 20 by max pooling over the 200 regions, generating the feature vector V ∈ 20.
Step 5: from the feature vector V ∈ 20 generated in step 4, activate each position and predict the final result p ∈ 20. The activation is p = sigmoid(V) with threshold 0.5; if the predicted value at a position exceeds 0.5, the image contains that class. The result is shown in Fig. 5.

Claims (7)

1. A deep learning multi-label image classification method based on region pooling and feature fusion, characterized by comprising the following steps:
Step 1): extract candidate regions from the image to be processed; a candidate-box algorithm extracts several candidate regions of different sizes, P ∈ N × W_l × H_l (l = 1, 2, …, N), from the picture for subsequent steps, where N is the number of candidate regions, W_l and H_l are the width and height of candidate region l, and each candidate region has its own scale;
Step 2): feed the candidate regions P ∈ N × W_l × H_l (l = 1, 2, …, N) obtained in step 1) into the convolutional (pre-classifier) part of a pre-trained convolutional neural network, generating multi-channel feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) at different scales, where N is the number of feature maps of interest, C is the number of channels, and w_l and h_l are the width and height of a single feature channel; a pre-trained convolutional neural network can be used to extract features from the candidate regions, producing feature maps of interest with highly abstract features;
Step 3): apply region pooling to the feature maps of interest F ∈ N × C × w_l × h_l (l = 1, 2, …, N) obtained in step 2), producing feature maps of interest of identical scale; w_l × h_l is the width and height of each feature map of interest, and the sizes of all feature maps of interest are normalized to w′ × h′, so the normalized feature maps of interest are F ∈ N × C × w′ × h′;
Step 4): fuse the several normalized feature maps of interest F ∈ N × C × w′ × h′ obtained in step 3), generating a highly abstract feature vector V ∈ T, where T is the number of object classes;
Step 5): feed the feature vector obtained in step 4) into a classifier and predict the label vector p ∈ T; each position p_i of the prediction vector p gives the probability that object i is present; given a threshold θ, object i is predicted present if p_i exceeds θ.
2. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized in that the candidate-region extraction algorithm mentioned in step 1) includes but is not limited to SelectiveSearch, EdgeBoxes, and other candidate-region extraction algorithms.
3. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized in that the method of extracting feature maps of interest with a convolutional neural network mentioned in step 2) includes but is not limited to Alexnet, VGG, Resnet, and other convolutional neural networks.
4. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized in that step 3) comprises the following steps:
Step 3.1): divide each w_l × h_l feature map of interest into w′ × h′ sub-feature maps by cutting horizontally and vertically; the size of each sub-feature map is (w_l / w′) × (h_l / h′);
Step 3.2): apply max pooling to each sub-feature map, extracting the maximum-value feature point; the maxima of the sub-feature maps are reassembled into a new feature map of size w′ × h′; after sub-feature-map division, max pooling, and feature recombination for every feature map of interest, all feature maps of interest are normalized to w′ × h′.
5. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized in that step 4) comprises the following steps:
Step 4.1): feed the normalized feature maps of interest F ∈ N × C × w′ × h′ obtained in step 3) into several fully connected layers for feature extraction, generating feature vectors of interest; first apply one convolution, with kernel size w′ × h′, to the normalized feature maps of interest, producing Fc ∈ N × C′; then pass Fc through several fully connected layers, producing Fs ∈ N × T, where N is the number of feature vectors of interest and T is the dimension of each vector;
Step 4.2): fuse the N feature vectors Fs ∈ N × T obtained in step 4.1), generating the feature vector V ∈ T; writing Fs = (t_1, t_2, …, t_T) with t_i ∈ N × 1 and V = (v_1, v_2, …, v_T), then v_i = max(t_i).
6. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized in that in step 5), the prediction vector is obtained by activating the feature vector, p = δ(V), where p_i = δ(V_i); the activation method includes but is not limited to sigmoid, tanh, softmax, and other activation functions.
7. The deep learning multi-label image classification method based on region pooling and feature fusion according to claim 1, characterized by comprising a model-training process; deep learning is divided into a training process and a prediction process; the training error equation is:
E = ‖l − p‖₂
where l and p denote the label vector and the predicted vector respectively. The error includes but is not limited to Euclidean distance. Since the convolutional neural network mentioned in step 2) extracts features with a pre-trained network, only the fully connected part mentioned in step 4) is trained.
CN201910019115.8A 2019-01-09 2019-01-09 Deep learning multi-label image classification method based on region pooling and feature fusion Pending CN109740686A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910019115.8A CN109740686A (en) 2019-01-09 2019-01-09 Deep learning multi-label image classification method based on region pooling and feature fusion


Publications (1)

Publication Number Publication Date
CN109740686A true CN109740686A (en) 2019-05-10

Family

ID=66364065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910019115.8A Pending CN109740686A (en) Deep learning multi-label image classification method based on region pooling and feature fusion

Country Status (1)

Country Link
CN (1) CN109740686A (en)


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101373A (en) * 2019-06-18 2020-12-18 富士通株式会社 Object detection method and device based on deep learning network and electronic equipment
JP7501125B2 (en) 2019-06-18 2024-06-18 富士通株式会社 Method, device and electronic device for object detection based on deep learning network
CN110427970A (en) * 2019-07-05 2019-11-08 平安科技(深圳)有限公司 Image classification method, device, computer equipment and storage medium
CN110427970B (en) * 2019-07-05 2023-08-01 平安科技(深圳)有限公司 Image classification method, apparatus, computer device and storage medium
CN110348537A (en) * 2019-07-18 2019-10-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US11481574B2 (en) 2019-07-18 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Image processing method and device, and storage medium
CN110378297B (en) * 2019-07-23 2022-02-11 河北师范大学 Remote sensing image target detection method and device based on deep learning and storage medium
CN110378297A (en) * 2019-07-23 2019-10-25 河北师范大学 A kind of Remote Sensing Target detection method based on deep learning
CN110825904A (en) * 2019-10-24 2020-02-21 腾讯科技(深圳)有限公司 Image matching method and device, electronic equipment and storage medium
CN111383429A (en) * 2020-03-04 2020-07-07 西安咏圣达电子科技有限公司 Method, system, device and storage medium for detecting dress of workers in construction site
CN112071421A (en) * 2020-09-01 2020-12-11 深圳高性能医疗器械国家研究院有限公司 Deep learning estimation method and application thereof
CN112418261B (en) * 2020-09-17 2022-05-03 电子科技大学 Human body image multi-attribute classification method based on prior prototype attention mechanism
CN112418261A (en) * 2020-09-17 2021-02-26 电子科技大学 Human body image multi-attribute classification method based on prior prototype attention mechanism
CN112308115B (en) * 2020-09-25 2023-05-26 安徽工业大学 Multi-label image deep learning classification method and equipment
CN112308115A (en) * 2020-09-25 2021-02-02 安徽工业大学 Multi-label image deep learning classification method and equipment
CN112419292A (en) * 2020-11-30 2021-02-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN112419292B (en) * 2020-11-30 2024-03-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN113609323A (en) * 2021-07-20 2021-11-05 上海德衡数据科技有限公司 Image dimension reduction method and system based on neural network
CN113609323B (en) * 2021-07-20 2024-04-23 上海德衡数据科技有限公司 Image dimension reduction method and system based on neural network
CN113705361A (en) * 2021-08-03 2021-11-26 北京百度网讯科技有限公司 Method and device for detecting model in living body and electronic equipment
CN114202805A (en) * 2021-11-24 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190510