CN108985316A - Capsule network image classification and identification method for improving reconstruction network - Google Patents
- Publication number
- CN108985316A (application CN201810509412.6A)
- Authority
- CN
- China
- Prior art keywords
- network
- capsule
- reconstructed
- vector
- loss
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present invention discloses a capsule network image classification and recognition method with an improved reconstruction network: S1, construct a capsule network; S2, input an image training set to the capsule network, which completes image classification recognition calibration after training and learning; S3, input an image to be classified to the capsule network, where the component of the working network's output vector v_j with the largest magnitude gives the recognition result; S4, the capsule network outputs the recognition result of the image to be classified. The reconstruction network of the capsule network is built from deconvolution operations. Advantages: a new reconstruction network structure is proposed in which deconvolution restores a vector to an image, and the error between the restored image and the original image is used to adjust the network parameters; this reduces the number of calculation parameters and frees up more running memory on the hardware.
Description
Technical field
The present invention relates to the application of capsule networks to image classification, and in particular to a capsule network image classification and recognition method with an improved reconstruction network.
Background art
In recent years, convolutional neural networks (CNNs) have developed rapidly in image recognition, object detection, semantic segmentation, and related areas. A CNN is usually composed of convolutional layers, activation layers, pooling layers, and fully connected layers. The pooling layer, an important component of CNNs, typically performs max pooling or average pooling; it reduces the size of the input feature map and the computational cost of the model, but it also discards location information.
To address the loss of location information in the pooling layers of CNNs, Hinton proposed the capsule network (CapsNet) in 2017. A capsule network uses vectors as its inputs and outputs and updates its parameters with a dynamic routing mechanism, so it can preserve location information and extract more accurate features than a conventional CNN; it is a promising replacement for current CNN architectures.
However, existing capsule network designs still have drawbacks on image processing tasks: the parameter count is large, the model occupies a lot of memory, and the amount of data the hardware can process at one time is small.
Summary of the invention
To address the large parameter count of existing capsule networks, the present invention proposes a new reconstruction network structure that restores a vector to an image through deconvolution and adjusts the network parameters based on the error between the restored image and the original image. It thereby provides a capsule network image classification and recognition method with an improved reconstruction network that reduces the number of calculation parameters and frees up more running memory on the hardware.
To achieve the above objectives, the specific technical solution adopted by the present invention is as follows.
A capsule network image classification and recognition method with an improved reconstruction network:
S1: Construct a capsule network. The capsule network comprises a working network and a proofreading network; the working network takes an image as input and outputs the recognition result for that image, and the proofreading network adjusts the working network's parameters during training.
The working network comprises a convolutional structure and a fully connected structure, the convolutional output of the convolutional structure connecting to the input of the fully connected structure. The convolutional structure consists of a convolutional layer followed by a PrimaryCaps layer; the fully connected structure is a network structure that performs, in order, weight calculation, dynamic routing adjustment, and an activation function operation.
The proofreading network comprises a margin loss operation structure and a reconstruction network structure in parallel. The loss input of the margin loss operation structure connects to the output of the fully connected structure; the reconstruction input of the reconstruction network structure connects both to the output of the fully connected structure and to the vector layer of the input image. The loss output of the margin loss operation structure and the reconstruction output of the reconstruction network structure connect respectively to the loss-function inputs of the Loss layer, and the loss-function output of the Loss layer connects to an optimization-function computation layer.
The reconstruction network structure comprises, connected in order, a Reshape layer, a deconvolution structure, a Flatten layer, and a sum-of-squared-error (SSE) computation layer. The inputs of the SSE computation layer connect to the Flatten layer and to the vector layer of the input image; the output of the SSE computation layer connects to the loss-function input of the Loss layer.
S2: Input an image training set to the capsule network; after training and learning, the capsule network completes image classification recognition calibration.
S3: Input an image to be classified to the capsule network; the component of the working network's output vector v_j with the largest magnitude gives the recognition result.
S4: The capsule network outputs the recognition result of the image to be classified.
The reconstruction network of existing capsule networks is several fully connected layers, i.e., it only performs transformations on vectors, which requires a large amount of computation. With the design above, the vector parameters are converted to image parameters by the deconvolution structure, so the parameter count drops while the image processing accuracy actually achieved stays essentially unchanged, leaving the capsule network's hardware with more spare memory.
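The reconstruction branch described above, Reshape to deconvolution decoder to Flatten to SSE comparison, can be sketched in a few lines of numpy. The decoder here is a hypothetical stand-in (nearest-neighbour upsampling from 4×4 to 28×28) so the sketch stays self-contained; the patent's actual decoder is the learned deconvolution structure detailed below.

```python
import numpy as np

def stub_decoder(fmap):
    """Placeholder for the deconvolution structure: 7x nearest-neighbour
    upsampling takes the 4x4 feature map to the 28x28 output size."""
    return np.kron(fmap, np.ones((7, 7)))

def reconstruction_sse(capsule_vec, input_image, decoder=stub_decoder):
    """Reshape a length-16 capsule vector to 4x4, decode it to an image,
    flatten it, and return the sum of squared errors vs. the input image."""
    fmap = capsule_vec.reshape(4, 4)      # Reshape layer
    recon = decoder(fmap)                 # deconvolution structure (stub here)
    recon_vec = recon.reshape(-1)         # Flatten layer
    diff = recon_vec - input_image.reshape(-1)
    return float(np.sum(diff ** 2))       # SSE computation layer

v = np.zeros(16)            # an (all-zero) output capsule vector
img = np.ones((28, 28))     # a dummy 28x28 input image
sse = reconstruction_sse(v, img)
```

Here the SSE is 784 because every reconstructed pixel is 0 while every input pixel is 1; during training this error term is added to the margin loss and driven toward zero.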
As a further refinement, the detailed training procedure of step S2 is as follows:
S2.1: The images in the training set are input to the working network in sequence, and the working network computes output vectors v_j.
S2.2: The output vector v_j with the largest magnitude is input to the margin loss operation structure, which computes the deviation.
S2.3: The same output vector v_j is also input to the reconstruction network structure and converted to a feature map by the Reshape layer.
S2.4: The feature map is deconvolved by the deconvolution structure to obtain a reconstructed image.
S2.5: The reconstructed image is converted to a reconstruction vector by the Flatten layer.
S2.6: The SSE computation layer computes the squared-error vector between the reconstruction vector and the input image vector.
S2.7: The squared-error vector and the deviation from step S2.2 are input to the Loss layer to obtain the working network's loss.
S2.8: The loss is processed by the optimization-function computation layer and fed back to the working network.
S2.9: The working network's layer parameters are adjusted in reverse order, from back to front, until the recognition accuracy no longer changes, completing the training of the capsule network.
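Steps S2.2, S2.6 and S2.7 to S2.9 can be illustrated with a toy, one-parameter stand-in for the working network: the margin deviation and a down-weighted reconstruction error are summed into one loss, and a finite-difference gradient step plays the role of the optimizer feedback. All names, the scalar model, and the weighting factor 0.0005 are illustrative assumptions, not the patent's actual network.

```python
def total_loss(w, x, m_plus=0.9, m_minus=0.1, lam=0.5, alpha=0.0005):
    """Margin deviation plus down-weighted reconstruction error (step S2.7).
    The single scalar v = w * x stands in for the output capsule vector."""
    v = w * x
    length = abs(v)
    t = 1.0  # the true class is present in this toy example
    margin = (t * max(0.0, m_plus - length) ** 2
              + lam * (1.0 - t) * max(0.0, length - m_minus) ** 2)
    recon_error = (v - x) ** 2          # stand-in for the SSE of step S2.6
    return margin + alpha * recon_error

# one finite-difference parameter update (stand-in for steps S2.8/S2.9)
w, x, lr, eps = 0.2, 1.0, 0.5, 1e-6
grad = (total_loss(w + eps, x) - total_loss(w - eps, x)) / (2 * eps)
w_new = w - lr * grad
loss_before, loss_after = total_loss(w, x), total_loss(w_new, x)
```

A single step already drives the capsule length past the m⁺ bound, so the margin term vanishes and the loss drops sharply; the real network repeats such updates across all layers until accuracy stops changing (S2.9).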
The working network outputs multiple vectors v_j; the one with the largest magnitude is taken as the image classification result. Before training, this result carries some error, and training steers that error toward zero until an accurate recognition result is obtained. In the design above, the margin loss operation structure computes the deviation between the recognition result and the ground truth, while the reconstruction network structure restores the recognition result to an image and computes the squared error between that image and the input image. Both the deviation and the squared error are errors produced in the working network's computation, and only when both approach zero does the working network identify images accurately. Their sum is therefore fed back into the working network, which, after repeated training and learning, achieves the goal of accurate image recognition.
As a further refinement, the input of the margin loss operation structure is the working network's output vector v_j, and its output is Σ_j L_j, where L_j is calculated as:

L_j = T_j · max(0, m⁺ − ‖v_j‖)² + λ · (1 − T_j) · max(0, ‖v_j‖ − m⁻)²

where T_j indicates the true class of the input image, m⁺ is the upper bound on ‖v_j‖, m⁻ is the lower bound on ‖v_j‖, and λ is an adjustment coefficient.
The Loss layer adds the output Σ_j L_j of the margin loss operation structure to the reconstruction error of the reconstruction network structure to obtain the loss.
In this way, the deviation between the true class of the input image and the recognition result is obtained.
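The margin loss above translates directly to numpy. The bound values m⁺ = 0.9, m⁻ = 0.1 and λ = 0.5 are the defaults commonly used for capsule networks (Sabour et al., 2017); the patent does not fix them, so treat them as assumed.

```python
import numpy as np

def margin_loss(v_norms, t_onehot, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Sum over classes j of
    T_j*max(0, m+ - ||v_j||)^2 + lam*(1 - T_j)*max(0, ||v_j|| - m-)^2."""
    present = t_onehot * np.maximum(0.0, m_plus - v_norms) ** 2
    absent = lam * (1.0 - t_onehot) * np.maximum(0.0, v_norms - m_minus) ** 2
    return float(np.sum(present + absent))

# two classes; class 0 is the true class, with ||v_0|| = 0.8 and ||v_1|| = 0.3
norms = np.array([0.8, 0.3])
t = np.array([1.0, 0.0])
loss = margin_loss(norms, t)
```

For these values the present-class term contributes (0.9 − 0.8)² = 0.01 and the absent-class term 0.5 · (0.3 − 0.1)² = 0.02, so the deviation is 0.03.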
As a further refinement, the deconvolution structure of the reconstruction network consists of, connected in order, one deconvolution layer, one convolutional layer, and two further deconvolution layers.
Alternating convolution with deconvolution reduces computational distortion and prevents the deconvolution structure's own error from growing large and mis-adjusting the working network's parameters.
As a further refinement, each deconvolution layer uses a 4×4 kernel with stride 2, and the convolutional layer uses a 2×2 kernel with stride 1.
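With a padding of 1 on the 4×4 stride-2 deconvolutions and no padding on the 2×2 stride-1 convolution (the padding is an assumption; the patent does not state it), these kernel settings reproduce the 4 → 8 → 7 → 14 → 28 size chain used in the parameter comparison later in the document. The standard size formulas are out = (in − 1)·s − 2p + k for a transposed convolution and out = (in − k + 2p)/s + 1 for a convolution.

```python
def deconv_out(n, k=4, s=2, p=1):
    """Output size of a transposed convolution (deconvolution) layer."""
    return (n - 1) * s - 2 * p + k

def conv_out(n, k=2, s=1, p=0):
    """Output size of an ordinary convolution layer."""
    return (n - k + 2 * p) // s + 1

sizes = [4]                          # Reshape layer output: 4x4
sizes.append(deconv_out(sizes[-1]))  # deconv 4x4, stride 2 -> 8
sizes.append(conv_out(sizes[-1]))    # conv   2x2, stride 1 -> 7
sizes.append(deconv_out(sizes[-1]))  # deconv 4x4, stride 2 -> 14
sizes.append(deconv_out(sizes[-1]))  # deconv 4x4, stride 2 -> 28
```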
As a further refinement, the weight calculation of the fully connected structure is:

û_{j|i} = W_ij · u_i

and the dynamic routing adjustment is:

c_ij = exp(b_ij) / Σ_k exp(b_ik),  s_j = Σ_i c_ij · û_{j|i}

where u_i is the input vector of the fully connected structure, v_j is the output vector, û_{j|i} is the weighted vector, W_ij is the weight parameter, b_ij is the dynamic routing parameter, c_ij is the adjustment parameter, k is the number of dynamic routing parameters, and s_j is the intermediate vector after dynamic routing adjustment.
With this design, the fully connected structure itself has dynamic routing regulation capability: the output vector v_j is dynamically fed back into the routing parameter b_ij, thereby regulating the calculation error.
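A minimal numpy sketch of the routing step, assuming the standard dynamic-routing algorithm of Sabour et al. (softmax coupling coefficients, agreement update of b_ij) and the standard squashing nonlinearity as the activation; the patent's own activation function may differ.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Standard capsule squashing: shrinks s_j to length < 1 (assumed form)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iters=3):
    """u_hat[i, j]: prediction vector W_ij @ u_i, shape (num_in, num_out, dim)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                 # routing logits b_ij
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # c_ij (softmax)
        s = np.einsum('ij,ijd->jd', c, u_hat)       # s_j = sum_i c_ij * u_hat_ij
        v = squash(s)                               # v_j = squash(s_j)
        b = b + np.einsum('ijd,jd->ij', u_hat, v)   # agreement feeds back into b_ij
    return v, c

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 16))   # 6 input capsules, 2 output capsules, dim 16
v, c = dynamic_routing(u_hat)
```

The agreement update is exactly the "output vector v_j is dynamically fed back into b_ij" behaviour described above: predictions that agree with v_j receive larger coupling coefficients on the next iteration.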
As a further refinement, the activation function operation maps the intermediate vector s_j obtained after dynamic routing adjustment to the output vector v_j. This activation function is new; adding it to the capsule network's computation significantly improves image classification recognition accuracy.
As a further refinement, the activation function operation maps the intermediate vector s_j after dynamic routing adjustment to the output vector v_j.
The optimization function of the optimization-function computation layer is the Adam function.
Beneficial effects of the present invention: a new reconstruction network structure is proposed in which deconvolution restores a vector to an image and the error between the restored image and the original image is used to adjust the network parameters, reducing the number of calculation parameters and freeing up more running memory on the hardware; and a new activation function is proposed that significantly improves image classification recognition accuracy.
Description of the drawings
Fig. 1 is a flow diagram of the invention;
Fig. 2 is a structural schematic of the capsule network;
Fig. 3 is a schematic of the capsule network's training and learning steps;
Fig. 4 is a schematic of the capsule network structure of Embodiment 1;
Fig. 5 is a schematic of the reconstruction network structure of Embodiment 1;
Fig. 6 is the training and test analysis of Embodiment 1;
Fig. 7 is a schematic of the reconstruction network structure of a conventional capsule network;
Fig. 8 is the training and test analysis of the conventional capsule network;
Fig. 9 compares the test performance of Embodiment 1 with that of the conventional capsule network;
Fig. 10 is the training and test analysis of the capsule network after substituting the new activation function;
Fig. 11 compares the capsule network's test performance before and after replacing the activation function.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, a capsule network image classification and recognition method with an improved reconstruction network:
S1: Construct a capsule network.
S2: Input an image training set to the capsule network; after training and learning, the capsule network completes image classification recognition calibration.
S3: Input an image to be classified to the capsule network; the component of the working network's output vector v_j with the largest magnitude gives the recognition result.
S4: The capsule network outputs the recognition result of the image to be classified.
As shown in Fig. 2, the capsule network comprises a working network and a proofreading network. The working network takes an image as input and outputs the recognition result for that image; the proofreading network adjusts the working network's parameters during training.
The working network comprises a convolutional structure and a fully connected structure, the convolutional output of the convolutional structure connecting to the input of the fully connected structure. The convolutional structure consists of a convolutional layer followed by a PrimaryCaps layer; the fully connected structure performs, in order, weight calculation, dynamic routing adjustment, and an activation function operation.
The proofreading network comprises a margin loss operation structure and a reconstruction network structure in parallel. The loss input of the margin loss operation structure connects to the output of the fully connected structure; the reconstruction input of the reconstruction network structure connects both to the output of the fully connected structure and to the vector layer of the input image. The loss output of the margin loss operation structure and the reconstruction output of the reconstruction network structure connect respectively to the loss-function inputs of the Loss layer, whose loss-function output connects to an optimization-function computation layer.
Preferably, the reconstruction network structure of this embodiment comprises, connected in order, a Reshape layer, a deconvolution structure, a Flatten layer, and a sum-of-squared-error (SSE) computation layer. The inputs of the SSE computation layer connect to the Flatten layer and to the vector layer of the input image, and its output connects to the loss-function input of the Loss layer, as shown in Figs. 4 and 5.
As shown in Fig. 4, the weight calculation of the fully connected structure is preferably:

û_{j|i} = W_ij · u_i

and the dynamic routing adjustment is:

c_ij = exp(b_ij) / Σ_k exp(b_ik),  s_j = Σ_i c_ij · û_{j|i}

where u_i is the input vector of the fully connected structure, v_j is the output vector, û_{j|i} is the weighted vector, W_ij is the weight parameter, b_ij is the dynamic routing parameter, c_ij is the adjustment parameter, k is the number of dynamic routing parameters, and s_j is the intermediate vector after dynamic routing adjustment.
The activation function operation of this embodiment maps the intermediate vector s_j obtained after dynamic routing adjustment to the output vector v_j.
Preferably, the input of the margin loss operation structure is the working network's output vector v_j and its output is Σ_j L_j, where L_j is calculated as:

L_j = T_j · max(0, m⁺ − ‖v_j‖)² + λ · (1 − T_j) · max(0, ‖v_j‖ − m⁻)²

where T_j indicates the true class of the input image, m⁺ is the upper bound on ‖v_j‖, m⁻ is the lower bound on ‖v_j‖, and λ is an adjustment coefficient.
The Loss layer adds the margin loss output Σ_j L_j to the reconstruction error of the reconstruction network structure to obtain the loss.
Preferably, the deconvolution structure of the reconstruction network of this embodiment consists of, connected in order, one deconvolution layer, one convolutional layer, and two further deconvolution layers.
Preferably, each deconvolution layer uses a 4×4 kernel with stride 2, and the convolutional layer uses a 2×2 kernel with stride 1.
The optimization function of the optimization-function computation layer is preferably the Adam function.
As shown in Fig. 3, the detailed training procedure of step S2 is as follows:
S2.1: The images in the training set are input to the working network in sequence, and the working network computes output vectors v_j.
S2.2: The output vector v_j with the largest magnitude is input to the margin loss operation structure, which computes the deviation.
S2.3: The same output vector v_j is also input to the reconstruction network structure and converted to a feature map by the Reshape layer.
S2.4: The feature map is deconvolved by the deconvolution structure to obtain a reconstructed image.
S2.5: The reconstructed image is converted to a reconstruction vector by the Flatten layer.
S2.6: The SSE computation layer computes the squared-error vector between the reconstruction vector and the input image vector.
S2.7: The squared-error vector and the deviation from step S2.2 are input to the Loss layer to obtain the working network's loss.
S2.8: The loss is processed by the optimization-function computation layer and fed back to the working network.
S2.9: The working network's layer parameters are adjusted in reverse order, from back to front, until the recognition accuracy no longer changes, completing the training of the capsule network.
The images recognized in this embodiment are lung slices: 2,526 images containing malignant lung nodules and 3,967 images containing no lung nodules or only benign nodules, for a total of 6,691 images.
70% of the data set is used as the training set and 30% as the test set. Training set: 2,927 images with no nodules or benign nodules, and 1,756 images with malignant nodules. Test set: 1,238 images with no nodules or benign nodules, and 770 images with malignant nodules.
Fig. 6 is the training and test data analysis using the reconstruction network structure of the present invention. Fig. 7 is the reconstruction network structure of the existing capsule network, i.e., one using fully connected operations, and Fig. 8 is its training and test data analysis. Fig. 9 compares the test performance of the two after training; the difference between them is minimal.
However, the following comparison shows that the new reconstruction network structure has far fewer parameters.
Reconstruction network structure of the invention:
Input: 2 vectors of length 16;
Reshape: one vector of length 16 is reshaped into a 4×4 feature map: no parameters;
the 4×4 feature map is deconvolved (kernel (4, 4), stride 2) into 64 feature maps of size 8×8: 1×4×4×64 = 1,024 parameters;
the 64 8×8 feature maps are convolved (kernel 2×2) into 64 7×7 feature maps: 64×2×2×64 = 16,384 parameters;
the 64 7×7 feature maps are deconvolved (kernel 4×4) into 32 14×14 feature maps: 64×4×4×32 = 32,768 parameters;
the 32 14×14 feature maps are deconvolved (kernel 4×4) into one 28×28 feature map: 32×4×4×1 = 512 parameters;
the 28×28 feature map is flattened into a vector of length 784: no parameters.
So the total parameter count of the new reconstruction network is: 1,024 + 16,384 + 32,768 + 512 = 50,688.
Reconstruction network structure using fully connected operations:
Input: 2 vectors of length 16;
fully connected layer 1: vector of length 16 to vector of length 512: 16×512 = 8,192 parameters;
fully connected layer 2: vector of length 512 to vector of length 1024: 512×1024 = 524,288 parameters;
fully connected layer 3: vector of length 1024 to vector of length 784: 1024×784 = 802,816 parameters.
So the total parameter count of the original reconstruction network is: 8,192 + 524,288 + 802,816 = 1,335,296.
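The two totals can be checked mechanically; under the counts listed above, the deconvolution-based reconstruction network uses roughly 26 times fewer parameters than the fully connected one.

```python
# parameters of the deconvolution-based reconstruction network (weights only)
deconv_params = [
    1 * 4 * 4 * 64,    # deconv: 1 -> 64 channels, 4x4 kernel
    64 * 2 * 2 * 64,   # conv:   64 -> 64 channels, 2x2 kernel
    64 * 4 * 4 * 32,   # deconv: 64 -> 32 channels, 4x4 kernel
    32 * 4 * 4 * 1,    # deconv: 32 -> 1 channel,   4x4 kernel
]
# parameters of the fully connected reconstruction network
fc_params = [16 * 512, 512 * 1024, 1024 * 784]

total_deconv = sum(deconv_params)   # 50688
total_fc = sum(fc_params)           # 1335296
ratio = total_fc / total_deconv
```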
Building on the above scheme, Embodiment 2 also devises a new activation function for the activation operation, which maps the intermediate vector s_j after dynamic routing adjustment to the output vector v_j.
Fig. 10 shows the training and test data analysis of the capsule network after substituting the new activation function, and Fig. 11 compares the test performance before and after the replacement; the new activation function clearly improves the capsule network's recognition accuracy by a large margin.
Claims (9)
1. A capsule network image classification and recognition method with an improved reconstruction network, characterized in that:
S1: a capsule network is constructed, the capsule network comprising a working network and a proofreading network, the working network being used to input an image and output the recognition result for that image, and the proofreading network being used to adjust the working network's parameters during training;
the working network comprises a convolutional structure and a fully connected structure, the convolutional output of the convolutional structure connecting to the input of the fully connected structure; the convolutional structure consists of a convolutional layer followed by a PrimaryCaps layer, and the fully connected structure is a network structure that performs, in order, weight calculation, dynamic routing adjustment, and an activation function operation;
the proofreading network comprises a margin loss operation structure and a reconstruction network structure in parallel; the loss input of the margin loss operation structure connects to the output of the fully connected structure; the reconstruction input of the reconstruction network structure connects to the output of the fully connected structure and to the vector layer of the input image; the loss output of the margin loss operation structure and the reconstruction output of the reconstruction network structure connect respectively to the loss-function inputs of the Loss layer, and the loss-function output of the Loss layer connects to an optimization-function computation layer;
the reconstruction network structure comprises, connected in order, a Reshape layer, a deconvolution structure, a Flatten layer, and a sum-of-squared-error (SSE) computation layer; the inputs of the SSE computation layer connect to the Flatten layer and to the vector layer of the input image, and the output of the SSE computation layer connects to the loss-function input of the Loss layer;
S2: an image training set is input to the capsule network, which completes image classification recognition calibration after training and learning;
S3: an image to be classified is input to the capsule network, and the component of the working network's output vector v_j with the largest magnitude is the recognition result;
S4: the capsule network outputs the recognition result of the image to be classified.
2. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the detailed training procedure of step S2 is as follows:
S2.1: the images in the training set are input to the working network in sequence, and the working network computes output vectors v_j;
S2.2: the output vector v_j with the largest magnitude is input to the margin loss operation structure, which computes the deviation;
S2.3: the same output vector v_j is also input to the reconstruction network structure and converted to a feature map by the Reshape layer;
S2.4: the feature map is deconvolved by the deconvolution structure to obtain a reconstructed image;
S2.5: the reconstructed image is converted to a reconstruction vector by the Flatten layer;
S2.6: the SSE computation layer computes the squared-error vector between the reconstruction vector and the input image vector;
S2.7: the squared-error vector and the deviation from step S2.2 are input to the Loss layer to obtain the working network's loss;
S2.8: the loss is processed by the optimization-function computation layer and fed back to the working network;
S2.9: the working network's layer parameters are adjusted in reverse order, from back to front, until the recognition accuracy no longer changes, completing the training of the capsule network.
3. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1 or 2, characterized in that the input of the margin loss operation structure is the working network's output vector v_j and its output is Σ_j L_j, where L_j is calculated as:
L_j = T_j · max(0, m⁺ − ‖v_j‖)² + λ · (1 − T_j) · max(0, ‖v_j‖ − m⁻)²
where T_j indicates the true class of the input image, m⁺ is the upper bound on ‖v_j‖, m⁻ is the lower bound on ‖v_j‖, and λ is an adjustment coefficient;
the Loss layer adds the output Σ_j L_j of the margin loss operation structure to the reconstruction error of the reconstruction network structure to obtain the loss.
4. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the deconvolution structure of the reconstruction network structure consists of, connected in order, one deconvolution layer, one convolutional layer, and two further deconvolution layers.
5. The capsule network image classification and recognition method with an improved reconstruction network according to claim 4, characterized in that each deconvolution layer uses a 4×4 kernel with stride 2, and the convolutional layer uses a 2×2 kernel with stride 1.
6. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the weight calculation of the fully connected structure is:
û_{j|i} = W_ij · u_i
and the dynamic routing adjustment is:
c_ij = exp(b_ij) / Σ_k exp(b_ik),  s_j = Σ_i c_ij · û_{j|i}
where u_i is the input vector of the fully connected structure, v_j is the output vector, û_{j|i} is the weighted vector, W_ij is the weight parameter, b_ij is the dynamic routing parameter, c_ij is the adjustment parameter, k is the number of dynamic routing parameters, and s_j is the intermediate vector after dynamic routing adjustment.
7. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the activation function operation maps the intermediate vector s_j obtained after dynamic routing adjustment to the output vector v_j.
8. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the activation function operation maps the intermediate vector s_j after dynamic routing adjustment to the output vector v_j.
9. The capsule network image classification and recognition method with an improved reconstruction network according to claim 1, characterized in that the optimization function of the optimization-function computation layer is the Adam function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810509412.6A CN108985316B (en) | 2018-05-24 | 2018-05-24 | Capsule network image classification and identification method for improving reconstruction network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108985316A true CN108985316A (en) | 2018-12-11 |
CN108985316B CN108985316B (en) | 2022-03-01 |
Family
ID=64542630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810509412.6A Active CN108985316B (en) | 2018-05-24 | 2018-05-24 | Capsule network image classification and identification method for improving reconstruction network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108985316B (en) |
2018-05-24 CN CN201810509412.6A patent/CN108985316B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140030786A1 (en) * | 2005-10-31 | 2014-01-30 | The Regents Of The University Of Michigan | Compositions and methods for treating and diagnosing cancer |
CN107301640A (en) * | 2017-06-19 | 2017-10-27 | 太原理工大学 | A kind of method that target detection based on convolutional neural networks realizes small pulmonary nodules detection |
CN107527318A (en) * | 2017-07-17 | 2017-12-29 | 复旦大学 | A kind of hair style replacing options based on generation confrontation type network model |
Non-Patent Citations (2)
Title |
---|
RODNEY LALONDE et al., "Capsules for Object Segmentation", arXiv:1804.04241v1 [stat.ML] *
XIANLI ZOU et al., "Fast Convergent Capsule Network with Applications in MNIST", ISNN 2018: Advances in Neural Networks – ISNN 2018 *
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727197B (en) * | 2019-01-03 | 2023-03-14 | 云南大学 | Medical image super-resolution reconstruction method |
CN109727197A (en) * | 2019-01-03 | 2019-05-07 | 云南大学 | Medical image super-resolution reconstruction method |
CN109801305A (en) * | 2019-01-17 | 2019-05-24 | 西安电子科技大学 | SAR image change detection method based on deep capsule network |
CN109801305B (en) * | 2019-01-17 | 2021-04-06 | 西安电子科技大学 | SAR image change detection method based on deep capsule network |
CN109840560B (en) * | 2019-01-25 | 2023-07-04 | 西安电子科技大学 | Image classification method based on clustering in capsule network |
CN109840560A (en) * | 2019-01-25 | 2019-06-04 | 西安电子科技大学 | Image classification method based on clustering in capsule network |
CN110032925A (en) * | 2019-02-22 | 2019-07-19 | 广西师范大学 | Gesture image segmentation and recognition method and algorithm based on improved capsule network |
CN110059730A (en) * | 2019-03-27 | 2019-07-26 | 天津大学 | Thyroid nodule ultrasound image classification method based on capsule network |
CN110059741A (en) * | 2019-04-15 | 2019-07-26 | 西安电子科技大学 | Image recognition method based on semantic capsule fusion network |
CN110059741B (en) * | 2019-04-15 | 2022-12-02 | 西安电子科技大学 | Image recognition method based on semantic capsule fusion network |
CN110009097A (en) * | 2019-04-17 | 2019-07-12 | 电子科技大学 | The image classification method of capsule residual error neural network, capsule residual error neural network |
CN110163489B (en) * | 2019-04-28 | 2023-07-14 | 湖南师范大学 | Method for evaluating rehabilitation exercise effect |
CN110163489A (en) * | 2019-04-28 | 2019-08-23 | 湖南师范大学 | Drug rehabilitation exercise effect assessment method |
CN110110668A (en) * | 2019-05-08 | 2019-08-09 | 湘潭大学 | Gait recognition method based on feedback-weight convolutional neural network and capsule neural network |
CN110084320A (en) * | 2019-05-08 | 2019-08-02 | 广东工业大学 | Papillary thyroid carcinoma ultrasound image recognition method, device, system and medium |
CN110414317A (en) * | 2019-06-12 | 2019-11-05 | 四川大学 | Full-automatic leukocyte classification counting method based on capsule network |
CN110414317B (en) * | 2019-06-12 | 2021-10-08 | 四川大学 | Full-automatic leukocyte classification counting method based on capsule network |
CN110399899A (en) * | 2019-06-21 | 2019-11-01 | 武汉大学 | Cervical OCT image classification method based on capsule network |
CN110399899B (en) * | 2019-06-21 | 2021-05-04 | 武汉大学 | Cervical OCT image classification method based on capsule network |
CN110288555B (en) * | 2019-07-02 | 2022-08-02 | 桂林电子科技大学 | Low-illumination enhancement method based on improved capsule network |
CN110288555A (en) * | 2019-07-02 | 2019-09-27 | 桂林电子科技大学 | Low-illumination enhancement method based on improved capsule network |
CN110502970A (en) * | 2019-07-03 | 2019-11-26 | 平安科技(深圳)有限公司 | Cell image recognition method, system, computer equipment and readable storage medium |
CN110309811A (en) * | 2019-07-10 | 2019-10-08 | 哈尔滨理工大学 | Hyperspectral image classification method based on capsule network |
CN112308089A (en) * | 2019-07-29 | 2021-02-02 | 西南科技大学 | Attention mechanism-based capsule network multi-feature extraction method |
CN110458852A (en) * | 2019-08-13 | 2019-11-15 | 四川大学 | Lung parenchyma segmentation method, apparatus, device and storage medium based on capsule network |
CN110599457B (en) * | 2019-08-14 | 2022-12-16 | 广东工业大学 | Citrus huanglongbing classification method based on BD capsule network |
CN110599457A (en) * | 2019-08-14 | 2019-12-20 | 广东工业大学 | Citrus huanglongbing classification method based on BD capsule network |
CN111046916A (en) * | 2019-11-20 | 2020-04-21 | 上海电机学院 | Motor fault diagnosis method and system based on dilated convolution capsule network |
CN111241958A (en) * | 2020-01-06 | 2020-06-05 | 电子科技大学 | Video image recognition method based on residual-capsule network |
CN111291712A (en) * | 2020-02-25 | 2020-06-16 | 河南理工大学 | Forest fire recognition method and device based on interpolation CN and capsule network |
CN111292322A (en) * | 2020-03-19 | 2020-06-16 | 中国科学院深圳先进技术研究院 | Medical image processing method, device, equipment and storage medium |
CN111292322B (en) * | 2020-03-19 | 2024-03-01 | 中国科学院深圳先进技术研究院 | Medical image processing method, device, equipment and storage medium |
CN111612030A (en) * | 2020-03-30 | 2020-09-01 | 华电电力科学研究院有限公司 | Wind turbine generator blade surface fault identification and classification method based on deep learning |
CN111460818B (en) * | 2020-03-31 | 2023-06-30 | 中国测绘科学研究院 | Webpage text classification method based on enhanced capsule network and storage medium |
CN111460818A (en) * | 2020-03-31 | 2020-07-28 | 中国测绘科学研究院 | Web page text classification method based on enhanced capsule network and storage medium |
CN111461063B (en) * | 2020-04-24 | 2022-05-17 | 武汉大学 | Behavior identification method based on graph convolution and capsule neural network |
CN111461063A (en) * | 2020-04-24 | 2020-07-28 | 武汉大学 | Behavior identification method based on graph convolution and capsule neural network |
CN111626361B (en) * | 2020-05-28 | 2023-08-11 | 辽宁大学 | Bearing sub-health identification method for improving capsule network optimization hierarchical convolution |
CN111626361A (en) * | 2020-05-28 | 2020-09-04 | 辽宁大学 | Bearing sub-health identification method for improving capsule network optimization hierarchical convolution |
CN111931882A (en) * | 2020-07-20 | 2020-11-13 | 五邑大学 | Automatic goods checkout method, system and storage medium |
CN112364920A (en) * | 2020-11-12 | 2021-02-12 | 西安电子科技大学 | Thyroid cancer pathological image classification method based on deep learning |
CN112364920B (en) * | 2020-11-12 | 2023-05-23 | 西安电子科技大学 | Thyroid cancer pathological image classification method based on deep learning |
CN112528165A (en) * | 2020-12-16 | 2021-03-19 | 中国计量大学 | Session social recommendation method based on dynamic routing graph network |
CN112766340A (en) * | 2021-01-11 | 2021-05-07 | 中山大学 | Depth capsule network image classification method and system based on adaptive spatial mode |
CN112766340B (en) * | 2021-01-11 | 2024-06-04 | 中山大学 | Depth capsule network image classification method and system based on self-adaptive spatial mode |
CN113870241A (en) * | 2021-10-12 | 2021-12-31 | 北京信息科技大学 | Tablet defect identification method and device based on capsule neural network |
CN114338093A (en) * | 2021-12-09 | 2022-04-12 | 上海大学 | Method for transmitting multi-channel secret information through capsule network |
CN114338093B (en) * | 2021-12-09 | 2023-10-20 | 上海大学 | Method for transmitting multi-channel secret information through capsule network |
Also Published As
Publication number | Publication date |
---|---|
CN108985316B (en) | 2022-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108985316A (en) | Capsule network image classification and recognition method with improved reconstruction network | |
US11403838B2 (en) | Image processing method, apparatus, equipment, and storage medium to obtain target image features | |
CN107145908B (en) | Small target detection method based on R-FCN | |
CN107316066B (en) | Image classification method and system based on multi-channel convolutional neural network | |
CN106548208B (en) | Fast and intelligent stylization method for photographic images | |
CN106297774B (en) | Distributed parallel training method and system for neural network acoustic models | |
WO2022011681A1 (en) | Method for fusing knowledge graph based on iterative completion | |
CN104361328B (en) | Facial image normalization method based on adaptive multi-column deep model | |
CN105654117B (en) | Spatial-spectral joint classification method for hyperspectral images based on SAE deep network | |
CN106062786A (en) | Computing system for training neural networks | |
CN106023154B (en) | Multi-temporal SAR image change detection based on dual-channel convolutional neural network | |
CN109215028A (en) | Multi-objective optimization image quality assessment method based on convolutional neural network | |
CN109063742A (en) | Butterfly recognition network construction method, device, computer equipment and storage medium | |
CN110288030A (en) | Image recognition method, device and equipment based on lightweight network model | |
CN110119447A (en) | Autoencoder neural network processing method, apparatus, computer equipment and storage medium | |
CN108446711A (en) | Software defect prediction method based on transfer learning | |
CN109784474A (en) | Deep learning model compression method, apparatus, storage medium and terminal device | |
CN109165743A (en) | Semi-supervised network representation learning algorithm based on deep compression autoencoder | |
CN109840560A (en) | Image classification method based on clustering in capsule network | |
CN105469376A (en) | Method and device for determining picture similarity | |
CN110119805B (en) | Convolutional neural network algorithm based on echo state network classification | |
CN108960404A (en) | Image-based people counting method and equipment | |
CN110390107A (en) | Context relationship detection method, device and computer equipment based on artificial intelligence | |
WO2020168796A1 (en) | Data augmentation method based on high-dimensional spatial sampling | |
CN108491925A (en) | Generalization method of deep learning features based on latent variable model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||