CN109886391A - Neural network compression method based on spatial forward and backward diagonal convolution - Google Patents

Neural network compression method based on spatial forward and backward diagonal convolution Download PDF

Info

Publication number
CN109886391A
CN109886391A CN201910089080.5A CN201910089080A
Authority
CN
China
Prior art keywords
convolution
diagonal
positive
feature map
convolution operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910089080.5A
Other languages
Chinese (zh)
Other versions
CN109886391B (en)
Inventor
张萌
沈旭照
李国庆
李建军
刘文昭
郭晟昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910089080.5A priority Critical patent/CN109886391B/en
Publication of CN109886391A publication Critical patent/CN109886391A/en
Application granted granted Critical
Publication of CN109886391B publication Critical patent/CN109886391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a neural network compression method based on spatial forward and backward diagonal convolution. Convolutional neural networks are at the core of current computer vision and digital image processing solutions, but their computational complexity and parameter count remain limiting factors in many application scenarios. To improve the computational efficiency of convolutional neural networks and reduce their parameter count, the invention spatially replaces a pair of consecutive traditional square convolution kernels with a forward and a backward diagonal convolution kernel: a forward diagonal convolution is applied first, and after batch normalization and nonlinear activation, a backward diagonal convolution follows. While retaining an effective local receptive field, this further reduces the computational complexity of the network and speeds up propagation through it; the diagonal convolutions also have a certain regularizing effect, improving the robustness of the network and reducing model overfitting. The overall improvement after compression is significant.

Description

Neural network compression method based on spatial forward and backward diagonal convolution
Technical field
The present invention relates to neural network pruning and convolution decomposition techniques, and belongs to the field of digital image processing.
Background art
In recent years deep learning has achieved remarkable results on high-level abstract cognitive problems, and convolutional neural networks are among its most important tools. Their weight sharing makes the network more similar to a biological neural network, reducing the number of weights and the scale of the model. Convolutional neural networks adapt well to translation, scaling, rotation, and other deformations of images and are widely used in fields such as image recognition and object detection: Microsoft built handwritten Arabic and Chinese recognition systems with convolutional neural networks, and Google uses them to identify faces and license plates in Street View images.
Convolutional neural networks originally contain two kinds of structure: convolutional layers and pooling layers. The convolution units in a convolutional layer act as feature detectors. Correlations between image pixels are local; much as the human eye perceives small image patches separately through the optic nerve, each neuron need not perceive the whole image. Each convolution kernel therefore produces its own output, and after these outputs are synthesized, features of the whole image are obtained. Weight sharing is a defining property of convolutional neural networks: low-level image features are universal and position-independent. An edge, for example, can be extracted with the same feature extractor whether it lies in the upper or the lower region of the image, so one local feature of an image needs only one convolution kernel. For the first layers, which mainly extract low-level features, weight sharing further reduces the parameter count. While the convolutional layers identify features, pooling merges fine features: nearby positions often carry subtle correlations between each other, and pooling aggregates these fine features together. It is precisely this distinctive structure that makes convolutional neural networks so widely used in image recognition.
Convolutional neural networks reduce the parameter count and the scale of the network while largely preserving image features, but their computational complexity and parameter count remain limiting factors in many application scenarios. On mobile devices in particular, a single forward pass consumes substantial computing resources and takes considerable time, which hinders deployment in applications with strict real-time requirements.
Yann LeCun et al., in "Optimal Brain Damage" (Advances in Neural Information Processing Systems (NIPS), 1990: 598-605), noted that a neural network has many parameters, but some of them contribute little to the final output and appear redundant; neurons should be ranked by their contribution to the final output and the low-contribution ones discarded, making the model run faster with fewer parameters. Han Song et al., in "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding" (International Conference on Learning Representations (ICLR), 2016), use pruning: among all connections, those whose weight is below a certain threshold are removed and the network is then retrained; this method reduced the parameters of the AlexNet and VGG-16 models by 9 and 13 times respectively. Both methods, however, require continual iteration, alternating repeatedly between model pruning and network training.
Szegedy C. et al., in "Rethinking the Inception Architecture for Computer Vision" (Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2818-2826), decompose the traditional spatial convolution into asymmetric convolutions, replacing any n×n convolution with a 1×n convolution followed by an n×1 convolution. The parameter count drops significantly, but this decomposition does not work well in the first few layers of a network; its effect is clear only in the middle layers.
Simonyan K. et al. proposed the VGG model in "Very Deep Convolutional Networks for Large-Scale Image Recognition" (arXiv preprint arXiv:1409.1556, 2014), showing for the first time that several stacked 3×3 convolution kernels can replace a convolutional layer with a larger kernel, which leads to the idea of the equivalent receptive field: two consecutive 3×3 convolutions, for instance, have an equivalent receptive field of 5×5 (a one-line check follows). On the one hand this reduces parameters; on the other it is equivalent to performing more nonlinear mappings, strengthening the fitting capability of the network. Many other methods accelerate and compress models, such as reduced numerical precision, global average pooling, optimized activation and cost functions, and hardware acceleration. To deploy models on mobile applications with demanding real-time processing requirements, convolutional neural networks need more efficient pruning algorithms.
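As a quick check of the equivalent receptive field claim, a short derivation added here for clarity, assuming stride 1 and no dilation: the receptive field of $L$ stacked convolutions with kernel sizes $k_l$ and strides $s_l$ is

$$r_L = 1 + \sum_{l=1}^{L}\left[(k_l - 1)\prod_{i=1}^{l-1} s_i\right],$$

so two stacked $3\times3$, stride-1 convolutions give $r_2 = 1 + (3-1) + (3-1) = 5$, i.e. the quoted $5\times5$ receptive field.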
Summary of the invention
Purpose of the invention: existing convolutional neural networks reduce the parameter count and scale of the network while largely preserving image features, yet their computational complexity and parameter count remain limiting factors in many application scenarios; on mobile devices in particular, a single forward pass consumes substantial computing resources and takes considerable time, hindering deployment in applications with strict real-time requirements. The object of the present invention is to solve this problem.
Technical solution: to solve the above technical problem, the present invention provides the following technical scheme:
A neural network compression method based on spatial forward and backward diagonal convolution, comprising the following steps:
(1) zero-pad the input feature map of the convolutional neural network;
(2) apply a forward diagonal convolution to the zero-padded feature map;
(3) apply batch normalization to the output feature map of the forward diagonal convolution, then apply a nonlinear activation function; the size of the processed feature map is unchanged;
(4) zero-pad the feature map obtained in step (3) and apply a backward diagonal convolution.
Further, in step (2), compared with a traditional square convolution, the forward diagonal convolution kernel has parameters only on its main diagonal; the remaining entries are all 0.
Further, in step (4), compared with a traditional square convolution, the backward diagonal convolution kernel has parameters only on its anti-diagonal; the remaining entries are all 0 (see the kernel-mask sketch below).
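For illustration only, and not as part of the claimed method, a minimal NumPy sketch of the two kernel shapes, assuming a kernel size N and arbitrary initial weight values:

```python
import numpy as np

N = 3  # kernel size, chosen here only for the demo

# N free parameters per kernel; the random initial values are an assumption.
w_fwd = np.random.randn(N).astype(np.float32)
w_bwd = np.random.randn(N).astype(np.float32)

# Forward diagonal kernel: parameters on the main diagonal, zeros elsewhere.
forward_kernel = np.eye(N, dtype=np.float32) * w_fwd

# Backward diagonal kernel: parameters on the anti-diagonal, zeros elsewhere.
backward_kernel = np.fliplr(np.eye(N, dtype=np.float32) * w_bwd)

print(forward_kernel)   # nonzero only at positions (i, i)
print(backward_kernel)  # nonzero only at positions (i, N - 1 - i)
```

Each diagonal kernel thus carries N parameters instead of the N·N of a square kernel.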
Further, in steps (2) and (4), a pair of consecutive traditional square convolutions is spatially replaced by a pair of consecutive forward and backward diagonal convolutions.
Further, in steps (1) and (4), the amount of zero padding is determined by the size N of the diagonal convolution kernel: the total number of padded rows and the total number of padded columns both equal N-1. When N-1 is odd, pad (N-2)/2 columns and rows on the left and top of the feature map, and N/2 columns and rows on the right and bottom; when N-1 is even, pad (N-1)/2 rows and columns on each of the four sides (a sketch of this rule follows).
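A compact sketch of the padding rule just stated; the helper name diagonal_padding is ours, not the patent's:

```python
def diagonal_padding(n: int):
    """Zero padding (before, after) per axis for a diagonal kernel of size n.

    The total padding per axis is n - 1. When n - 1 is odd, the left/top get
    (n - 2) // 2 and the right/bottom get n // 2; when n - 1 is even, every
    side gets (n - 1) // 2, exactly as stated above.
    """
    if (n - 1) % 2 == 1:                   # e.g. n = 2
        return (n - 2) // 2, n // 2
    return (n - 1) // 2, (n - 1) // 2      # e.g. n = 3

# diagonal_padding(2) -> (0, 1) and diagonal_padding(3) -> (1, 1),
# matching the N = 2 and N = 3 worked examples below.
```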
Advantageous effects: compared with the prior art, the present invention proposes a neural network based on spatial forward and backward diagonal convolution, which spatially replaces a pair of consecutive traditional square convolutions with a pair of consecutive forward and backward diagonal convolutions. With an equivalent local receptive field, fewer parameters are used, the computational efficiency of the convolutional neural network is improved, and propagation through the network is accelerated; the added intermediate nonlinear processing step also provides a certain regularization effect and reduces model overfitting. Experimental results show that the pruning improvement over existing convolutional neural networks is significant.
Brief description of the drawings
Fig. 1 is a schematic diagram of spatial diagonal convolution;
Fig. 2 is a schematic diagram of the forward diagonal convolution with spatial diagonal kernel size 2;
Fig. 3 is a schematic diagram of the backward diagonal convolution with spatial diagonal kernel size 2;
Fig. 4 is a schematic diagram of the equivalent receptive field of the forward and backward diagonal convolutions with kernel size 2;
Fig. 5 is a schematic diagram of the forward diagonal convolution with spatial diagonal kernel size 3;
Fig. 6 is a schematic diagram of the backward diagonal convolution with spatial diagonal kernel size 3;
Fig. 7 is a schematic diagram of the equivalent receptive field of the forward and backward diagonal convolutions with kernel size 3;
Fig. 8 is the convolutional neural network structure used in the experiments of the present invention.
Specific embodiment
A neural network compression method based on spatial forward and backward diagonal convolution comprises the following steps:
(1) zero-pad the input feature map of the convolutional neural network;
(2) apply a forward diagonal convolution to the zero-padded feature map;
(3) apply batch normalization to the output feature map of the forward diagonal convolution, then apply a nonlinear activation function; the size of the processed feature map is unchanged;
(4) zero-pad the feature map obtained in step (3) and apply a backward diagonal convolution.
The spatial forward and backward diagonal convolution operation is shown in Fig. 1. The zero-padded input feature map first undergoes the forward diagonal convolution and then the intermediate processing. During intermediate processing, batch normalization is applied first: each input pixel x_i has the batch mean μ subtracted and is divided by the standard deviation to give the standardized value

x̂_i = (x_i - μ) / √(σ² + ε),

which is then scaled and shifted to give the batch-normalized value

y_i = γ · x̂_i + β,

where the mean μ and variance σ² are computed over a batch of size m, ε is a small fixed constant, and γ and β are learned parameters. Nonlinear activation follows, here using the simplest rectified linear unit, ReLU: for inputs greater than or equal to 0 the output equals the input, and for inputs less than 0 the output equals 0. After zero padding again, the backward diagonal convolution is applied (a single-channel sketch of these steps follows). For the zero-padding operations in steps (1) and (4), the amount of padding is determined by the diagonal kernel size N: the total numbers of padded rows and columns both equal N-1. When N-1 is odd, pad (N-2)/2 columns and rows on the left and top of the feature map, and N/2 columns and rows on the right and bottom; when N-1 is even, pad (N-1)/2 rows and columns on each side.
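A single-channel NumPy sketch of steps (1) through (4), added here for illustration; it collapses the per-batch statistics of the normalization above to one feature map, which is a simplification of this demo:

```python
import numpy as np

def diag_conv(x, kernel):
    """Plain 'valid' 2-D cross-correlation of a feature map x with a kernel."""
    n = kernel.shape[0]
    h, w = x.shape
    out = np.empty((h - n + 1, w - n + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + n, j:j + n] * kernel)
    return out

def forward_backward_diag(x, w_fwd, w_bwd, gamma=1.0, beta=0.0, eps=1e-5):
    """Steps (1)-(4) on one channel: pad, forward diagonal convolution,
    batch normalization and ReLU, pad again, backward diagonal convolution."""
    n = len(w_fwd)
    if (n - 1) % 2 == 1:                   # padding rule from the description
        a, b = (n - 2) // 2, n // 2
    else:
        a = b = (n - 1) // 2
    fwd_k = np.eye(n) * np.asarray(w_fwd)              # main diagonal
    bwd_k = np.fliplr(np.eye(n) * np.asarray(w_bwd))   # anti-diagonal

    x = np.pad(x, ((a, b), (a, b)))                            # step (1)
    y = diag_conv(x, fwd_k)                                    # step (2)
    y = gamma * (y - y.mean()) / np.sqrt(y.var() + eps) + beta # step (3): BN
    y = np.maximum(y, 0.0)                                     # step (3): ReLU
    y = np.pad(y, ((a, b), (a, b)))                            # step (4): pad
    return diag_conv(y, bwd_k)                                 # step (4): conv

# e.g. forward_backward_diag(np.random.rand(7, 7), [1.0, 1.0], [1.0, 1.0])
# returns a 7x7 map, matching the size bookkeeping in the examples below.
```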
Figs. 2 and 3 show the forward and backward diagonal convolutions for diagonal kernel size N=2; for ease of demonstration the parameters of both the forward and the backward diagonal kernels are set to 1. In the left panel of Fig. 2 the input feature map is 7×7. With kernel size N=2, the total numbers of padded rows and columns should be N-1=1; since 1 is odd, the left and top of the feature map get (N-2)/2=0 columns and rows, and the right and bottom get N/2=1 column and row, so the zero-padded feature map is 8×8. Under the forward diagonal convolution, shown in the right panel of Fig. 2, the first output in the top-left corner of the output feature map equals a11 and a22 each multiplied by its weight and summed; with all weights set to 1, this first output is a11+a22. The forward diagonal kernel then slides to the right, giving a12+a23 as the second output, and continues sliding in order, row by row, until the last entry a77 of the output feature map has been computed. After the forward diagonal convolution and the intermediate processing, the output feature map is 7×7.
As shown in the left panel of Fig. 3, the output feature map of the forward diagonal convolution is zero-padded according to the diagonal kernel size of 2, giving an 8×8 map. Under the backward diagonal convolution, shown in the right panel of Fig. 3, the first value of the output feature map equals a12+a23 and a21+a32 each multiplied by its weight and summed; with all weights set to 1, the first output of the output feature map is a12+a23+a21+a32. The backward diagonal kernel then slides to the right, row by row, producing the corresponding entries of the output feature map in turn; the output feature map is 7×7 (a numeric check of this walk-through follows).
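To check the N=2 walk-through numerically, this small demo (our addition) encodes each input entry as (row)(column), i.e. a11 = 11, and skips the intermediate batch-normalization/ReLU stage so that the symbolic sums stay visible:

```python
import numpy as np

def diag_conv(x, kernel):
    n = kernel.shape[0]
    h, w = x.shape
    return np.array([[np.sum(x[i:i + n, j:j + n] * kernel)
                      for j in range(w - n + 1)] for i in range(h - n + 1)])

# a[i, j] = (i + 1) * 10 + (j + 1), so the entry called a11 is 11, a22 is 22, ...
a = np.array([[(i + 1) * 10 + (j + 1) for j in range(7)] for i in range(7)], float)

k_fwd = np.eye(2)             # all-ones forward diagonal kernel, as in Fig. 2
k_bwd = np.fliplr(np.eye(2))  # all-ones backward diagonal kernel, as in Fig. 3

y = diag_conv(np.pad(a, ((0, 1), (0, 1))), k_fwd)  # N = 2: pad right and bottom
print(y[0, 0])    # 33.0 = a11 + a22, the first forward output
print(y[-1, -1])  # 77.0 = a77 (its diagonal partner falls in the zero padding)

z = diag_conv(np.pad(y, ((0, 1), (0, 1))), k_bwd)
print(z[0, 0])    # 88.0 = a12 + a23 + a21 + a32, the first backward output
```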
As shown in the right panel of Fig. 3, for the specially marked value a45+a56+a54+a65 in the output feature map of the forward and backward diagonal convolutions, the equivalent receptive field on the initial input feature map is the specially marked region in Fig. 4: a diamond-shaped receptive field. A pair of traditional square convolution kernels covering this diamond would need 2×N×N=8 parameters, whereas the forward and backward diagonal kernels need only 2×N=4. While an effective receptive field is preserved, the parameter count is reduced by 2×N×(N-1)=4, which substantially increases the sparsity of the network and has a regularizing effect on it.
Figs. 5 and 6 show the forward and backward diagonal convolutions for diagonal kernel size N=3; for ease of demonstration the parameters of both kernels are again set to 1. In the left panel of Fig. 5 the input feature map is 7×7. With kernel size N=3, the total numbers of padded rows and columns should be N-1=2; since 2 is even, the left and top of the feature map get (N-1)/2=1 column and row, as do the right and bottom, so the zero-padded feature map is 9×9. Under the forward diagonal convolution, shown in the right panel of Fig. 5, the first output in the top-left corner equals 0, a11, and a22 each multiplied by its weight and summed; with all weights set to 1, this first output is a11+a22. The kernel then slides to the right, giving a12+a23 as the second output, and continues row by row until the last entry a66+a77 of the output feature map has been computed. After the forward diagonal convolution and the intermediate processing, the output feature map is 7×7.
As shown in the upper panel of Fig. 6, the output feature map of the forward diagonal convolution is zero-padded according to the backward diagonal kernel size of 3, giving a 9×9 map. Under the backward diagonal convolution, shown in the lower panel of Fig. 6, the first value of the output feature map equals 0, a11+a22, and 0 each multiplied by its weight and summed; with all weights set to 1, the first output of the output feature map is a11+a22. The backward diagonal kernel then slides to the right, row by row, producing the corresponding entries of the output feature map in turn; the output feature map is 7×7.
As shown in the lower panel of Fig. 6, for the specially marked value a24+a35+a46+a33+a44+a55+a42+a53+a64 in the output feature map of the forward and backward diagonal convolutions, the equivalent receptive field on the initial input feature map is the specially marked region in Fig. 7, a roughly diamond-shaped receptive field. A pair of traditional square kernels covering this diamond would need 2×N×N=18 parameters, whereas the forward and backward diagonal kernels need only 2×N=6. While an effective receptive field is preserved, the parameter count is reduced by 2×N×(N-1)=12; the network parameter count is further reduced, the computational efficiency of the model is greatly improved, and the pruning effect is evident (the general count is summarized below).
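Summarizing both worked examples in one line, a restatement of the counts above for general kernel size $N$:

$$\underbrace{2N^2}_{\text{pair of square kernels}} - \underbrace{2N}_{\text{pair of diagonal kernels}} = 2N(N-1), \qquad N=2:\; 8-4=4, \qquad N=3:\; 18-6=12.$$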
To verify the present invention, comparative experiments use the convolutional neural network structures shown in Fig. 8. Fig. 8(a) is the VGG-19 network proposed by Simonyan K. et al. It contains five convolution modules: in the first and second modules each module has two consecutive convolutions, with 64 and 128 channels respectively; in the third through fifth modules each module has four consecutive convolutions, with 256, 512, and 512 channels respectively. Every convolution is followed by a batch normalization layer and a nonlinear activation layer, not drawn in the figure, and between convolution modules there is a max-pooling layer. The fifth convolution module is followed by three fully connected layers. Training and testing use the 32×32 three-channel color image datasets CIFAR-10 and CIFAR-100, with 10 and 100 classes respectively, so the output channel count of the last fully connected layer is 10 for CIFAR-10 and 100 for CIFAR-100; finally a normalized exponential function (softmax) layer completes the classification.
Fig. 8(b) is the network improved by the present invention on the basis of Fig. 8(a). As shown by the dashed boxes, from the third convolution module onward the original square kernels are replaced by spatial forward and backward diagonal kernels, and both the forward and the backward diagonal convolutions are each followed by a batch normalization layer and a nonlinear activation layer. The networks are built, trained, and tested with TensorFlow; Table 1 compares the models after 200 epochs of training under the same hyperparameters (a hedged sketch of a masked-kernel TensorFlow layer follows the table discussion).
Table 1. Network model comparison (200 epochs, same hyperparameters)

| Model | CIFAR-10 test accuracy | CIFAR-100 test accuracy | Parameters (CIFAR-10) |
| VGG-19 | 92.83% | 69.92% | 45.23M |
| Improved (this invention) | 92.89% | 69.78% | 32.05M |
As Table 1 shows, under the same hyperparameters and after 200 epochs of training, the test accuracy of the VGG-19 network on CIFAR-10 is 92.83%, versus 92.89% for the network improved by the present invention; on CIFAR-100 the VGG-19 test accuracy is 69.92%, versus 69.78% for the improved network, so the test accuracies on the two datasets differ little. The parameter counts under the different datasets differ slightly because the output channel count of the last fully connected layer differs, but the overall difference is small. Table 1 gives the parameter comparison under CIFAR-10: the original network has 45.23M parameters, while the improved network has only 32.05M, a reduction of 13.18M, or 29.13%. With the test accuracy preserved, the computational complexity of the network is greatly reduced and its computational efficiency improved.
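The patent does not publish its TensorFlow code, so the following Keras-style layer is only a sketch of how the diagonal convolutions could be realized with a masked kernel; the class name DiagonalConv2D and all defaults are our assumptions:

```python
import tensorflow as tf

class DiagonalConv2D(tf.keras.layers.Layer):
    """2-D convolution whose kernel is constrained to a main or anti-diagonal."""

    def __init__(self, filters, kernel_size, backward=False, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.n = kernel_size
        self.backward = backward

    def build(self, input_shape):
        in_ch = int(input_shape[-1])
        # Only n free parameters per (input channel, filter) pair.
        self.w = self.add_weight(
            name="w", shape=(self.n, 1, in_ch, self.filters),
            initializer="glorot_uniform", trainable=True)
        mask = tf.eye(self.n)
        if self.backward:
            mask = tf.reverse(mask, axis=[1])  # move parameters to the anti-diagonal
        self.mask = tf.reshape(mask, (self.n, self.n, 1, 1))

    def call(self, x):
        kernel = self.mask * self.w            # spread the n weights along the diagonal
        if (self.n - 1) % 2 == 1:              # padding rule from the description
            before, after = (self.n - 2) // 2, self.n // 2
        else:
            before = after = (self.n - 1) // 2
        x = tf.pad(x, [[0, 0], [before, after], [before, after], [0, 0]])
        return tf.nn.conv2d(x, kernel, strides=1, padding="VALID")

# One compressed block per steps (1)-(4): forward diagonal conv, BN, ReLU,
# then backward diagonal conv (itself followed by BN and ReLU in Fig. 8(b)).
def diagonal_block(x, filters, n):
    x = DiagonalConv2D(filters, n, backward=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    return DiagonalConv2D(filters, n, backward=True)(x)
```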

Claims (5)

1. A neural network compression method based on spatial forward and backward diagonal convolution, characterized by comprising the following steps:
(1) zero-pad the input feature map of the convolutional neural network;
(2) apply a forward diagonal convolution to the zero-padded feature map;
(3) apply batch normalization to the output feature map of the forward diagonal convolution, then apply a nonlinear activation function, the size of the processed feature map being unchanged;
(4) zero-pad the feature map obtained in step (3) and apply a backward diagonal convolution.
2. The neural network compression method based on spatial forward and backward diagonal convolution of claim 1, characterized in that in step (2), compared with a traditional square convolution, the forward diagonal convolution kernel has parameters only on its main diagonal, the remaining entries all being 0.
3. The neural network compression method based on spatial forward and backward diagonal convolution of claim 1, characterized in that in step (4), compared with a traditional square convolution, the backward diagonal convolution kernel has parameters only on its anti-diagonal, the remaining entries all being 0.
4. The neural network compression method based on spatial forward and backward diagonal convolution of claim 1, characterized in that in steps (2) and (4), a pair of consecutive traditional square convolutions is spatially replaced by a pair of consecutive forward and backward diagonal convolutions.
5. The neural network compression method based on spatial forward and backward diagonal convolution of claim 1, characterized in that in steps (1) and (4), the amount of zero padding is determined by the size N of the diagonal convolution kernel, the total numbers of padded rows and columns both equaling N-1; when N-1 is odd, (N-2)/2 columns and rows are padded on the left and top of the feature map, and N/2 columns and rows on the right and bottom; when N-1 is even, (N-1)/2 rows and columns are padded on each of the four sides.
CN201910089080.5A 2019-01-30 2019-01-30 Neural network compression method based on space forward and backward diagonal convolution Active CN109886391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910089080.5A CN109886391B (en) 2019-01-30 2019-01-30 Neural network compression method based on space forward and backward diagonal convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910089080.5A CN109886391B (en) 2019-01-30 2019-01-30 Neural network compression method based on space forward and backward diagonal convolution

Publications (2)

Publication Number Publication Date
CN109886391A true CN109886391A (en) 2019-06-14
CN109886391B CN109886391B (en) 2023-04-28

Family

ID=66927371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910089080.5A Active CN109886391B (en) 2019-01-30 2019-01-30 Neural network compression method based on space forward and backward diagonal convolution

Country Status (1)

Country Link
CN (1) CN109886391B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782001A (en) * 2019-09-11 2020-02-11 东南大学 Improved method for using shared convolution kernel based on group convolution neural network
CN112101547A (en) * 2020-09-14 2020-12-18 中国科学院上海微系统与信息技术研究所 Pruning method and device for network model, electronic equipment and storage medium
CN112288829A (en) * 2020-11-03 2021-01-29 中山大学 Compression method and device for image restoration convolutional neural network
CN112766392A (en) * 2021-01-26 2021-05-07 杭州师范大学 Image classification method of deep learning network based on parallel asymmetric hole convolution
WO2021120036A1 (en) * 2019-12-18 2021-06-24 华为技术有限公司 Data processing apparatus and data processing method
CN113283351A (en) * 2021-05-31 2021-08-20 深圳神目信息技术有限公司 Video plagiarism detection method using CNN to optimize similarity matrix

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639932A (en) * 2008-07-28 2010-02-03 汉王科技股份有限公司 Method and system for enhancing digital image resolution
CN106127297A (en) * 2016-06-02 2016-11-16 中国科学院自动化研究所 Acceleration and compression method for deep convolutional neural networks based on tensor decomposition
WO2018073975A1 (en) * 2016-10-21 2018-04-26 Nec Corporation Improved sparse convolution neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639932A (en) * 2008-07-28 2010-02-03 汉王科技股份有限公司 Method and system for enhancing digital image resolution
CN106127297A (en) * 2016-06-02 2016-11-16 中国科学院自动化研究所 Acceleration and compression method for deep convolutional neural networks based on tensor decomposition
WO2018073975A1 (en) * 2016-10-21 2018-04-26 Nec Corporation Improved sparse convolution neural network

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782001A (en) * 2019-09-11 2020-02-11 东南大学 Improved method for using shared convolution kernel based on group convolution neural network
CN110782001B (en) * 2019-09-11 2024-04-09 东南大学 Improved method for using shared convolution kernel based on group convolution neural network
WO2021120036A1 (en) * 2019-12-18 2021-06-24 华为技术有限公司 Data processing apparatus and data processing method
CN112101547A (en) * 2020-09-14 2020-12-18 中国科学院上海微系统与信息技术研究所 Pruning method and device for network model, electronic equipment and storage medium
CN112101547B (en) * 2020-09-14 2024-04-16 中国科学院上海微系统与信息技术研究所 Pruning method and device for network model, electronic equipment and storage medium
CN112288829A (en) * 2020-11-03 2021-01-29 中山大学 Compression method and device for image restoration convolutional neural network
CN112766392A (en) * 2021-01-26 2021-05-07 杭州师范大学 Image classification method of deep learning network based on parallel asymmetric hole convolution
CN112766392B (en) * 2021-01-26 2023-10-24 杭州师范大学 Image classification method of deep learning network based on parallel asymmetric hole convolution
CN113283351A (en) * 2021-05-31 2021-08-20 深圳神目信息技术有限公司 Video plagiarism detection method using CNN to optimize similarity matrix
CN113283351B (en) * 2021-05-31 2024-02-06 深圳神目信息技术有限公司 Video plagiarism detection method using CNN optimization similarity matrix

Also Published As

Publication number Publication date
CN109886391B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN109886391A (en) A kind of neural network compression method based on the positive and negative diagonal convolution in space
CN107122826B (en) Processing method and system and storage medium for convolutional neural networks
CN108288035A (en) The human motion recognition method of multichannel image Fusion Features based on deep learning
CN108717568A (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
CN106408039A (en) Off-line handwritten Chinese character recognition method carrying out data expansion based on deformation method
CN110222760B (en) Quick image processing method based on winograd algorithm
CN107844795A (en) Convolutional neural network feature extraction method based on principal component analysis
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN109376787B (en) Manifold learning network and computer vision image set classification method based on manifold learning network
CN113221694B (en) Action recognition method
CN111259880A (en) Electric power operation ticket character recognition method based on convolutional neural network
CN113505719B (en) Gait recognition model compression system and method based on local-integral combined knowledge distillation algorithm
CN110991444A (en) Complex scene-oriented license plate recognition method and device
CN109753996A (en) Hyperspectral image classification method based on D light quantisation depth network
CN108491863A (en) Color image processing method based on Non-negative Matrix Factorization and convolutional neural networks
CN113011243A (en) Facial expression analysis method based on capsule network
CN105095857A (en) Face data enhancement method based on key point disturbance technology
CN108090409A (en) Face identification method, device and storage medium
CN114882278A (en) Tire pattern classification method and device based on attention mechanism and transfer learning
KR100956747B1 (en) Computer Architecture Combining Neural Network and Parallel Processor, and Processing Method Using It
CN112966672B (en) Gesture recognition method under complex background
CN109886160A (en) It is a kind of it is non-limiting under the conditions of face identification method
CN113822825A (en) Optical building target three-dimensional reconstruction method based on 3D-R2N2
CN105718858B (en) A kind of pedestrian recognition method based on positive and negative broad sense maximum pond
CN108090504A (en) Object identification method based on multichannel dictionary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant