CN109978069A - Method for reducing overfitting of the ResNeXt model in image classification - Google Patents

Method for reducing overfitting of the ResNeXt model in image classification

Info

Publication number
CN109978069A
CN109978069A (application number CN201910263146.8A)
Authority
CN
China
Prior art keywords
resnext
feature map
network
cropout
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910263146.8A
Other languages
Chinese (zh)
Other versions
CN109978069B (en)
Inventor
路通
侯文博
王文海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910263146.8A priority Critical patent/CN109978069B/en
Publication of CN109978069A publication Critical patent/CN109978069A/en
Application granted granted Critical
Publication of CN109978069B publication Critical patent/CN109978069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for reducing overfitting of the ResNeXt model in image classification, comprising the following steps: Step 1, preprocessing the training images in a public dataset; Step 2, building a network model based on the ResNeXt network and modifying the ResNeXt network with the Cropout method; Step 3, training the modified ResNeXt network with stochastic gradient descent to obtain a trained network model; Step 4, inputting a given image to be classified and classifying it with the network model trained in Step 3 to obtain the final result.

Description

Method for reducing overfitting of the ResNeXt model in image classification
Technical field
The present invention relates to the field of deep learning, and in particular to a method for reducing the overfitting phenomenon of the ResNeXt model in image classification.
Background technique
Deep neural networks have played a major role in multimedia research fields such as image classification in recent years. However, a common problem is how to make the training of deep networks more stable. To address this problem and further improve network performance, various rules are commonly designed to constrain the network; the most common techniques are Batch Normalization (BN) and Dropout (random deactivation, a method for optimizing deep artificial neural networks that randomly zeroes a fraction of hidden-layer weights or outputs during training, reducing the co-dependence between nodes, regularizing the network, and lowering its structural risk). Even so, overfitting remains a problem for deep networks and can severely degrade a model's generalization ability. Moreover, in practical multimedia applications, the massive data required to train deep networks is hard to obtain and expensive to annotate manually, which makes overfitting even more serious.
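For orientation, both regularizers mentioned above are standard layers in modern deep-learning frameworks. The following minimal PyTorch sketch shows where Batch Normalization and Dropout typically sit in a convolutional block; the channel count and dropout probability are arbitrary illustration values, not taken from the invention.

import torch
import torch.nn as nn

# Illustrative convolutional block: Batch Normalization normalizes activations over the
# mini-batch, while Dropout randomly zeroes a fraction of activations during training.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
)

x = torch.randn(8, 3, 32, 32)   # a dummy batch of 32x32 RGB images
y = block(x)                    # Dropout is active in train() mode and disabled in eval() mode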
Summary of the invention
To address the overfitting problem that still exists in image classification in the prior art, the present invention proposes, on the basis of the ResNeXt network model, a new method for reducing overfitting in image classification tasks, called Cropout (Cropout is the name the present invention gives to this method; it has an English name only).
In particular, the invention discloses a method for reducing overfitting of the ResNeXt model in image classification, comprising the following steps:
Step 1: preprocess the training images in a public dataset;
Step 2: build a network model based on the ResNeXt network, and modify the ResNeXt network;
Step 3: train the modified ResNeXt network with stochastic gradient descent to obtain a trained network model;
Step 4: input a given image to be classified, classify it with the network model trained in Step 3, and obtain the final classification result.
Step 1 comprises: applying common data augmentation operations to the training images in the public dataset, such as random cropping, horizontal flipping, and random scaling. Specifically, the training images are first randomly scaled by factors of 0.8, 0.9, 1.1, or 1.2, then randomly flipped horizontally or randomly rotated by -30°, -15°, 15°, or 30°, and finally samples of size 32 × 32 are randomly cropped from the training images as the final training images.
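As an illustration only, this preprocessing could be assembled with torchvision as in the sketch below; the discrete scale factors, rotation angles and 32 × 32 crop size follow the description above, while the use of torchvision and the particular transform classes are assumptions rather than part of the invention.

import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

class RandomDiscreteScale:
    """Rescale the image by one of a fixed set of factors (0.8, 0.9, 1.1, 1.2)."""
    def __init__(self, factors=(0.8, 0.9, 1.1, 1.2)):
        self.factors = factors
    def __call__(self, img):
        s = random.choice(self.factors)
        w, h = img.size                      # PIL image size is (width, height)
        return TF.resize(img, (int(h * s), int(w * s)))

class RandomDiscreteRotation:
    """Rotate the image by one of a fixed set of angles (-30, -15, 15, 30 degrees)."""
    def __init__(self, angles=(-30, -15, 15, 30)):
        self.angles = angles
    def __call__(self, img):
        return TF.rotate(img, random.choice(self.angles))

train_transform = T.Compose([
    RandomDiscreteScale(),
    T.RandomChoice([T.RandomHorizontalFlip(p=1.0), RandomDiscreteRotation()]),
    T.RandomCrop(32, pad_if_needed=True),    # crop a 32x32 sample; pad if scaling shrank the image
    T.ToTensor(),
])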
Step 2 comprises the following steps:
Step 2-1: following the method in the document Aggregated Residual Transformations for Deep Neural Networks, extract features from the training images using the conventional part of a ResNeXt network with cardinality G, obtaining G transformation paths after the grouped convolution. The feature map on a transformation path is denoted x, with size H × W, where H and W denote the height and width of the feature map.
Step 2-2: the Cropout method binds a random cropping operation to each transformation path. Specifically, the feature map x is padded with k zero elements along its edges, extending it from the original H × W to a feature map y of size (H + k) × (W + k); a feature map x′ of size H × W is then randomly cropped from the padded feature map y. Defining the operation that pads the feature map x with k zero elements and then performs a random crop as P_k, the random cropping transformation on the feature map x can be expressed by the following formula:
x′ = P_k(x),
where x′ is the feature map after the random cropping transformation.
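A minimal Python sketch of the P_k operation on a batched feature map follows, assuming the (H + k) × (W + k) padded size stated above; how the k padded zeros are distributed between the two sides of each dimension is not specified in the text and is an assumption here.

import torch
import torch.nn.functional as F

def random_crop_pk(x, k):
    """Sketch of P_k: zero-pad the H x W feature map to (H+k) x (W+k), then randomly
    crop an H x W window back out.  x has shape (N, C, H, W)."""
    n, c, h, w = x.shape
    pad_l, pad_t = k // 2, k // 2
    pad_r, pad_b = k - pad_l, k - pad_t                  # split of the k zeros is an assumption
    y = F.pad(x, (pad_l, pad_r, pad_t, pad_b))           # now (H+k) x (W+k)
    top = torch.randint(0, k + 1, (1,)).item()           # crop offset in [0, k]
    left = torch.randint(0, k + 1, (1,)).item()
    return y[:, :, top:top + h, left:left + w]           # x' = P_k(x), same H x W size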
The Cropout method builds on the aggregated transformation of the ResNeXt network (generally realized in the form of grouped convolution, i.e. the grouped convolution in Step 2-1). The original aggregated transformation of the ResNeXt network is expressed by the following formula:
y = Σ_{i=1}^{G} T_i(x),
where T_i is a convolution function that maps the feature map x into a low-dimensional embedding space, Σ denotes the concatenation operation, G is the number of transformation paths of ResNeXt, i indexes the i-th transformation path, and y is the feature map after the aggregated transformation.
Since all transformation paths share an identical topological structure, the Cropout method proposed by the present invention slightly breaks this homogeneous form of the aggregated transformation. The aggregated transformation modified by the Cropout method can then be expressed as:
ŷ = Σ_{i=1}^{G} T_i(P_k^(i)(x)),
where P_k^(i) denotes the random cropping operation bound to the i-th transformation path and ŷ is the new feature map after the Cropout-modified aggregated transformation.
In the Cropout method, the random cropping operation bound to each transformation path is constructed only once, at network initialization; thereafter this binding remains unchanged throughout training and testing.
Step 2-3: the G feature maps x′ on the transformation paths modified by the method of the invention are joined by a concatenation operation into a new feature map, which serves as the input to the next layer of the ResNeXt network.
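Putting Steps 2-2 and 2-3 together, the sketch below draws one crop offset per transformation path at construction time, keeps it fixed, applies each path's crop and transformation, and concatenates the G results. The per-path 1 × 1 convolutions merely stand in for the transformations T_i, and the channel widths, G and k are placeholder values, not part of the invention.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CropoutAggregation(nn.Module):
    """Sketch of the Cropout-modified aggregated transformation (Steps 2-2 and 2-3)."""
    def __init__(self, channels=256, paths_g=8, k=2):
        super().__init__()
        self.k = k
        # fixed (top, left) crop offsets bound to each path at network initialization
        self.offsets = [(torch.randint(0, k + 1, (1,)).item(),
                         torch.randint(0, k + 1, (1,)).item()) for _ in range(paths_g)]
        # placeholder T_i: each path maps the input to a narrow embedding
        self.paths = nn.ModuleList(
            nn.Conv2d(channels, channels // paths_g, kernel_size=1) for _ in range(paths_g))

    def forward(self, x):
        n, c, h, w = x.shape
        y = F.pad(x, (0, self.k, 0, self.k))          # (H+k) x (W+k); split of zeros is an assumption
        outs = []
        for (top, left), t_i in zip(self.offsets, self.paths):
            x_i = y[:, :, top:top + h, left:left + w]  # this path's fixed P_k crop, back to H x W
            outs.append(t_i(x_i))
        return torch.cat(outs, dim=1)                  # concatenation over the G paths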
Compared with the prior art, the proposed method has the following advantages:
It effectively reduces overfitting of the ResNeXt network in image classification tasks;
It is very easy to implement without changing the size or depth of the original network.
Detailed description of the invention
The present invention is further illustrated below with reference to the accompanying drawings and the detailed embodiments; the above and other advantages of the invention will become more apparent.
Fig. 1 is the overall architecture diagram of the present invention;
Fig. 2a shows the design of the ResNeXt bottleneck unit without grouped convolution;
Fig. 2b shows the design of the ResNeXt bottleneck unit with grouped convolution;
Fig. 3 shows sample images from the public dataset CIFAR-10.
Specific embodiment
Embodiment 1
With reference to the accompanying drawings and embodiments, the present invention is further described below, taking the public datasets CIFAR-10 and CIFAR-100 as examples.
The dataset CIFAR-10 consists of 60000 32×32 color images in 10 classes, with 6000 images per class; the whole dataset contains 50000 training images and 10000 test images. The dataset CIFAR-100 contains color images in 100 classes, with 600 images per class, split into 50000 training images and 10000 test images. Sample images from the CIFAR-10 dataset are shown in Fig. 3.
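Both datasets can be obtained through torchvision, as in the sketch below; train_transform refers to the augmentation pipeline sketched under Step 1 of the summary, the test split uses a plain tensor conversion, and the data path and flags are illustrative assumptions.

import torchvision
import torchvision.transforms as T

test_transform = T.ToTensor()
cifar10_train = torchvision.datasets.CIFAR10("./data", train=True, download=True,
                                             transform=train_transform)
cifar10_test = torchvision.datasets.CIFAR10("./data", train=False, download=True,
                                            transform=test_transform)
cifar100_train = torchvision.datasets.CIFAR100("./data", train=True, download=True,
                                               transform=train_transform)
cifar100_test = torchvision.datasets.CIFAR100("./data", train=False, download=True,
                                              transform=test_transform)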
Step 1: the 50000 training images in each of the two public datasets CIFAR-10 and CIFAR-100 are preprocessed, including common data augmentation operations such as random cropping, horizontal flipping, and random scaling. Specifically, the training images are first randomly scaled by factors of 0.8, 0.9, 1.1, or 1.2, then randomly flipped horizontally or randomly rotated by -30°, -15°, 15°, or 30°, and finally samples of size 32 × 32 are randomly cropped from the training images as the final training images.
Step 2: a network model is built. The PyTorch implementation of the ResNeXt network at https://github.com/prlz77/ResNeXt.pytorch is used as the instance model; the model is a ResNeXt-29 network with cardinality 8 and width 64, written as "ResNeXt-29, 8 × 64d", and the Cropout modification of the present invention is applied to this network, specifically including the following steps:
First, following the method in the document Aggregated Residual Transformations for Deep Neural Networks, the conventional part of the ResNeXt-29, 8 × 64d network is used to extract features from the training images, yielding 8 transformation paths after the grouped convolution; the feature map on a transformation path is x, with size H × W.
Then, a random cropping operation is bound to each transformation path. Specifically, the feature map x is padded with k zero elements along its edges, extending it to a feature map y of size (H + k) × (W + k);
finally, a feature map x′ of size H × W is randomly cropped from the padded feature map y.
The present invention defines this random cropping operation with a maximum zero-padding amount of k as P_k, so the random cropping transformation on the feature map x can be expressed by the following formula:
x′ = P_k(x),
where x′ is the feature map after the random cropping transformation.
The design of Cropout is mainly based on the aggregated transformation of ResNeXt (generally realized in the form of grouped convolution), which can be expressed by the following formula:
y = Σ_{i=1}^{G} T_i(x),
where, in the present invention, T_i is a convolution function that maps the feature map x into a low-dimensional embedding space, Σ denotes the concatenation operation, G is the number of transformation paths of ResNeXt, i indexes the i-th transformation path, and y is the feature map after the aggregated transformation.
Since all transformation paths share an identical topological structure, the Cropout method proposed by the present invention slightly breaks this homogeneous form of the aggregated transformation. The aggregated transformation modified by the Cropout method can then be expressed as:
ŷ = Σ_{i=1}^{G} T_i(P_k^(i)(x)),
where P_k^(i) denotes the random cropping operation bound to the i-th transformation path and ŷ is the new feature map after the Cropout-modified aggregated transformation.
Fig. 1 illustrates the concept of Cropout. In the design of the invention, the cropping operations are randomly determined at the network initialization stage, and the binding between each cropping operation and its transformation path is fixed and unchanged after the network is initialized. Therefore, the network structure at training time and at test time is identical.
The details of the modified model are shown in Table 1, in which a hyperparameter P = {p0, p1, p2} is designed for Cropout. Repeated validation showed that with the Cropout hyperparameter set to P = {1, 1, 1} the model performs best on the CIFAR-10 image classification task, and with the hyperparameter set to P = {0, 1, 0} it performs best on the CIFAR-100 image classification task.
Table 1
Fig. 2a and Fig. 2b detail the design of the ResNeXt bottleneck modified by the Cropout method. Because the ResNeXt network uses a bottleneck design, the Cropout method is realized on each transformation path. As shown in Fig. 2a, for the feature map of the preceding convolution layer split across the 8 paths (cardinality 8), the random cropping takes place in each stage after the convolution layer with 1 × 1 kernels and before the convolution layer with 3 × 3 kernels; after the 3 × 3 convolution layer, the feature maps on the 8 transformation paths are joined by the concatenation operation (the "concatenate" operation in the figure) into a new feature map that serves as the input to the next layer of ResNeXt. The structure shown in Fig. 2b is more efficient than that of Fig. 2a because it uses grouped convolution, and it is almost identical to Fig. 2a except that the order of the 3 × 3 convolution and the Cropout operation differs; therefore the structure of Fig. 2b is used in practice.
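To make the placement concrete, the following sketch shows a bottleneck block in the grouped-convolution form of Fig. 2b, with the fixed per-path crops applied after the 3 × 3 grouped convolution. The channel widths, cardinality, crop amount k and the way the padded zeros are split are illustrative assumptions; the actual implementation is the modified ResNeXt-29, 8 × 64d model described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CropoutBottleneck(nn.Module):
    """Sketch of a ResNeXt bottleneck in the grouped-convolution form of Fig. 2b.
    mid_ch must be divisible by groups; all widths are placeholders."""
    def __init__(self, in_ch=256, mid_ch=512, out_ch=256, groups=8, k=2):
        super().__init__()
        self.groups, self.k = groups, k
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_ch)
        self.conv3 = nn.Conv2d(mid_ch, mid_ch, 3, padding=1, groups=groups, bias=False)
        self.bn3 = nn.BatchNorm2d(mid_ch)
        self.conv2 = nn.Conv2d(mid_ch, out_ch, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # one fixed (top, left) crop offset per transformation path, drawn at initialization
        self.offsets = [(torch.randint(0, k + 1, (1,)).item(),
                         torch.randint(0, k + 1, (1,)).item()) for _ in range(groups)]

    def cropout(self, x):
        n, c, h, w = x.shape
        y = F.pad(x, (0, self.k, 0, self.k))             # (H+k) x (W+k); split of zeros is an assumption
        chunks = torch.chunk(y, self.groups, dim=1)      # one chunk of channels per path
        crops = [ch[:, :, t:t + h, l:l + w] for ch, (t, l) in zip(chunks, self.offsets)]
        return torch.cat(crops, dim=1)                   # concatenate the paths back together

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))        # 1x1 convolution
        out = torch.relu(self.bn3(self.conv3(out)))      # 3x3 grouped convolution (8 groups)
        out = self.cropout(out)                          # Cropout after the 3x3 conv, per Fig. 2b
        out = self.bn2(self.conv2(out))                  # 1x1 convolution back to out_ch
        return torch.relu(out + x) if out.shape == x.shape else torch.relu(out)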
Step 3: the network model is trained. Using stochastic gradient descent, the ResNeXt-29, 8 × 64d model modified in Step 2 is trained under supervision, with the augmented images of the two datasets from Step 1 as training data, yielding one trained model per dataset, denoted R1 and R2 respectively. Typical training parameter settings are listed in Table 2 below:
Table 2
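A minimal training sketch under the setup described in Step 3 follows; since the concrete values of Table 2 are not reproduced here, the learning rate, momentum, weight decay, batch size and epoch count below are placeholders only, and the model argument is assumed to be the Cropout-modified ResNeXt-29, 8 × 64d network.

import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=300, device="cuda"):
    """Supervised training with stochastic gradient descent (placeholder hyperparameters)."""
    loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    criterion = torch.nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# e.g. R1 = train(model_cifar10, cifar10_train); R2 = train(model_cifar100, cifar100_train)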
Step 4: image classification. For a given image to be classified, i.e. any one of the 10000 test images of CIFAR-10 or CIFAR-100, the network model trained in Step 3 on the corresponding dataset (R1 or R2) is used to obtain the final classification result. After all test images of the two datasets have been classified, the classification accuracy on each dataset is computed, yielding two results:
(1) with the Cropout parameter set to P = {1, 1, 1}, the classification error rate on CIFAR-10 is 3.38%, 0.27% lower than the error rate of the model without the Cropout modification;
(2) with the Cropout parameter set to P = {0, 1, 0}, the classification error rate on CIFAR-100 is 16.89%, 0.88% lower than the error rate of the model without the Cropout modification.
These results push the error rate down further in a regime where classification error rates are already very low, demonstrating that the method of the invention indeed reduces overfitting of ResNeXt in image classification tasks.
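For reference, the error rates above are the fraction of misclassified test images. A minimal evaluation sketch is given below; the model and dataset names follow the earlier sketches and are assumptions, not part of the invention.

import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def error_rate(model, test_set, device="cuda"):
    """Classify every test image and return the classification error rate in percent."""
    loader = DataLoader(test_set, batch_size=256)
    model.to(device).eval()
    wrong, total = 0, 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        wrong += (preds != labels).sum().item()
        total += labels.numel()
    return 100.0 * wrong / total

# e.g. error_rate(R1, cifar10_test), error_rate(R2, cifar100_test)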
The present invention provides a method for reducing overfitting of the ResNeXt model in image classification. There are many specific methods and approaches for implementing this technical solution, and the above is only a preferred embodiment of the invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Each component not specified in this embodiment can be realized with the available prior art.

Claims (3)

1. A method for reducing overfitting of the ResNeXt model in image classification, characterized by comprising the following steps:
Step 1: preprocess the training images in a public dataset;
Step 2: build a network model based on the ResNeXt network, and modify the ResNeXt network using the Cropout method;
Step 3: train the modified ResNeXt network with stochastic gradient descent to obtain a trained network model;
Step 4: input a given image to be classified, classify it with the network model trained in Step 3, and obtain the final classification result.
2. The method according to claim 1, characterized in that Step 1 comprises: applying data augmentation operations, including random cropping, horizontal flipping, and random scaling, to the training images in the public dataset.
3. The method according to claim 2, characterized in that Step 2 comprises the following steps:
Step 2-1: extract features from the training images using the conventional part of a ResNeXt network with cardinality G, obtaining G transformation paths after the grouped convolution; the feature map on a transformation path is denoted x, with size H × W, where H and W denote the height and width of the feature map;
Step 2-2: the Cropout method binds a random cropping operation to each transformation path, specifically: the feature map x is padded with k zero elements along its edges, extending it from the original H × W to a feature map y of size (H + k) × (W + k); a feature map x′ of size H × W is randomly cropped from the padded feature map y; the operation of padding the feature map x with k zero elements and then performing a random crop is defined as P_k, so the random cropping transformation on the feature map x is expressed by the following formula:
x′ = P_k(x),
where x′ is the feature map after the random cropping transformation;
the Cropout method includes the aggregated transformation based on the ResNeXt network, and the original aggregated transformation of the ResNeXt network is expressed by the following formula:
y = Σ_{i=1}^{G} T_i(x),
where T_i is a convolution function that maps the feature map x into a low-dimensional embedding space, Σ denotes the concatenation operation, G is the number of transformation paths of ResNeXt, i indexes the i-th transformation path, and y is the feature map after the aggregated transformation;
the aggregated transformation modified by the Cropout method is then expressed as:
ŷ = Σ_{i=1}^{G} T_i(P_k^(i)(x)),
where P_k^(i) denotes the random cropping operation bound to the i-th transformation path and ŷ is the new feature map after the Cropout-modified aggregated transformation;
Step 2-3: the G feature maps x′ on the transformation paths modified by the Cropout method are joined by a concatenation operation into a new feature map, which serves as the input to the next layer of the ResNeXt network.
CN201910263146.8A 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification Active CN109978069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910263146.8A CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910263146.8A CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Publications (2)

Publication Number Publication Date
CN109978069A true CN109978069A (en) 2019-07-05
CN109978069B CN109978069B (en) 2020-10-09

Family

ID=67082485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910263146.8A Active CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Country Status (1)

Country Link
CN (1) CN109978069B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7734058B1 (en) * 2005-08-24 2010-06-08 Qurio Holding, Inc. Identifying, generating, and storing cropping information for multiple crops of a digital image
US9311523B1 (en) * 2015-07-29 2016-04-12 Stradvision Korea, Inc. Method and apparatus for supporting object recognition
CN106157307A (en) * 2016-06-27 2016-11-23 浙江工商大学 A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF
CN106778701A (en) * 2017-01-20 2017-05-31 福州大学 A kind of fruits and vegetables image-recognizing method of the convolutional neural networks of addition Dropout
CN107563495A (en) * 2017-08-04 2018-01-09 深圳互连科技有限公司 Embedded low-power consumption convolutional neural networks method
CN108510004A (en) * 2018-04-04 2018-09-07 深圳大学 A kind of cell sorting method and system based on depth residual error network
CN108629288A (en) * 2018-04-09 2018-10-09 华中科技大学 A kind of gesture identification model training method, gesture identification method and system
CN109063719A (en) * 2018-04-23 2018-12-21 湖北工业大学 A kind of image classification method of co-ordinative construction similitude and category information
CN109087375A (en) * 2018-06-22 2018-12-25 华东师范大学 Image cavity fill method based on deep learning
CN108985386A (en) * 2018-08-07 2018-12-11 北京旷视科技有限公司 Obtain method, image processing method and the corresponding intrument of image processing model
CN109472352A (en) * 2018-11-29 2019-03-15 湘潭大学 A kind of deep neural network model method of cutting out based on characteristic pattern statistical nature

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHUNLEI ZHANG,KAZUHITO KOISHIDA: "END-TO-END TEXT-INDEPENDENT SPEAKER VERIFICATION WITH FLEXIBILITY IN UTTERANCE DURATION", 《2017 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU)》 *
KENSHO HARA, HIROKATSU KATAOKA, YUTAKA SATOH: "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
RYO TAKAHASHI, TAKASHI MATSUBARA: "Data Augmentation using Random Image Cropping and Patching for Deep CNNs", 《JOURNAL OF LATEX CLASS FILES》 *
SAINING XIE,ROSS GIRSHICK,PIOTR DOLLAR,ZHUOWEN TU,KAIMING HE: "Aggregated Residual Transformations for Deep Neural Networks", 《ARXIV:1611.05431V2 [CS.CV]》 *
杨念聪, 任琼, 张成喆, 周子煜: "基于卷积神经网络的图像特征识别研究" (Research on Image Feature Recognition Based on Convolutional Neural Networks), 《信息与电脑》(China Computer & Communication) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110348537A (en) * 2019-07-18 2019-10-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
TWI749423B (en) * 2019-07-18 2021-12-11 大陸商北京市商湯科技開發有限公司 Image processing method and device, electronic equipment and computer readable storage medium
US11481574B2 (en) 2019-07-18 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Image processing method and device, and storage medium
CN110522440A (en) * 2019-08-12 2019-12-03 广州视源电子科技股份有限公司 Electrocardiosignal recognition device based on grouping convolution neural network

Also Published As

Publication number Publication date
CN109978069B (en) 2020-10-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant