CN114548197A - Clustering method based on self-discipline learning SDL model - Google Patents

Clustering method based on self-discipline learning SDL model

Info

Publication number
CN114548197A
Authority
CN
China
Prior art keywords
clustering
function
model
probability
sdl
Prior art date
Legal status
Pending
Application number
CN202011396635.XA
Other languages
Chinese (zh)
Inventor
顾泽苍 (Gu Zecang)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011396635.XA
Publication of CN114548197A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a clustering method based on the self-discipline learning SDL model in the field of information processing, characterized in that: the feature vectors to be clustered are measured on the scale of probability-space distance; the clustering result of each class follows the maximum probability scale of its probability space; and the best clustering solution is obtained under the joint effects of function mapping and function Gaussian distribution. The method is further characterized in that: SDL-model clustering fuses a function-mapping model with a function Gaussian-distribution model, can simulate the function-mapping model of deep learning, and can realize the high-precision image-recognition capability of deep learning together with the high generalization capability of the function Gaussian-distribution model. The method has no black-box problem, needs no large-scale hardware support and no labeling of big data, and requires only the training of small data; it therefore has high performance and low introduction cost and is convenient for mass popularization.

Description

Clustering method based on autonomous learning SDL model
[ technical field ]
The invention belongs to the field of artificial intelligence and relates to a clustering method based on the autonomous-learning SDL model.
[ background of the invention ]
The "deep learning" proposed by the Hinton team at the University of Toronto, Canada (non-patent document 1) achieved excellent performance on the IMAGENET image-classification test data set and attracted worldwide attention, setting off the present climax of artificial intelligence. Many researchers have since devoted themselves to controlling autonomous vehicles with "deep learning" models; a typical technique is "Learning to Drive in a Day" (non-patent document 2).
In September 2017, Geoffrey Hinton, the inventor of "deep learning", declared in an interview with the Axios website: "My view is throw it all away (back-propagation) and start again." This shattered Hinton's dream of the Boltzmann machine: the black-box problem of "deep learning" is unsolvable, so deep learning is not suited to widespread popularization and will eventually come to an end.
Therefore, a new-generation artificial-intelligence model is needed to replace deep learning, ideally a small-data, probabilistic, highly iterative machine-learning model without the black-box problem. However, the Capsule theory proposed by Hinton (non-patent document 3) has not produced the expected effect.
After deep learning was repudiated by its own inventor, the algorithm school has grown; one result is the publication of "A construction method for a novel artificial-intelligence neural network" (CN108510052A), a new-generation artificial-intelligence autonomous-learning SDL model.
[ non-patent document 1 ]
A. Krizhevsky, I. Sutskever, G. E. Hinton: "ImageNet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems 25, pp. 1097-1105 (2012).
[ non-patent document 2 ]
A. Kendall, J. Hawke, et al.: "Learning to Drive in a Day", arXiv:1807.00412v2 [cs.LG], 11 Sep. 2018.
[ non-patent document 3 ]
S. Sabour, N. Frosst, G. E. Hinton: "Dynamic Routing Between Capsules", arXiv:1710.09829v2 [cs.CV], 7 Nov. 2017.
[ patent document 1 ]
(CN108510052A)
The deep-learning model described above (non-patent document 1) would require exhaustive enumeration to obtain the global optimum when solving for a data set, which in such a huge combination space is an NP-complete (NPC) problem. Stochastic gradient descent (SGD) can obtain only local optimal solutions, and the global optimal solution is hard to reach. Moreover, the local optima found by SGD are random in their application effect on deep learning, and there is no guarantee that any given SGD solution is the one with the best application effect. Because the global optimum is unattainable, the local optimal solution of SGD is very unstable: as long as the data fluctuate a little, a distinctly different solution is obtained, and this is the cause of the black-box problem.
Moreover, exhaustive calculation consumes enormous hardware overhead; processing efficiency is extremely low and hardware cost is very high, hence "large models solving small tasks". In practical application, one deep-learning algorithm engineer must be paired with on the order of one hundred labeling workers, which makes the intelligence largely manual, because deep learning is a function-mapping model; the application cost is therefore very high. Further, deep learning is limited by its application scenarios: it can be applied only to image recognition and speech recognition, and cannot be applied to industrial control, the control of autonomous vehicles, and the like.
The above (non-patent document 2) adopts a model-free deep reinforcement-learning method and uses Deep Deterministic Policy Gradients (DDPG) to solve the lane-following task. Faced with the complex control of an autonomous vehicle, this method easily falls into the NPC problem of control and is difficult to apply in practical engineering.
In the Capsule theory of the above (non-patent document 3), weights are increased using information from effective nodes and decreased using information from ineffective nodes; as an iterative method, the result is computed by a formulated procedure.
The above (patent document 1) introduces the Gaussian process, one of the most significant theories in mathematics, and is a probabilistic model, a small-data model, and a highly iterative model. With only a small amount of data it can obtain the application effect of the infinite data set corresponding to a function mapping; the system scale can be expanded without limit, the computational complexity is close to linear, and it can be applied in any field. It does not, however, have the feature of the deep-learning function-mapping model of being able to enlarge the interval between feature vectors.
[ summary of the invention ]
The first object of the present invention is: to simulate the function-mapping model of deep learning through mathematical calculation by algorithm, so that the optimal solution of data training need not be obtained through combinatorial search, thereby improving the processing efficiency of the system, reducing the hardware overhead, and eliminating the black-box problem.
The second object of the present invention is: to provide a method that combines deep learning simulated by mathematical calculation with the SDL Gaussian-distribution model, i.e., the probability distribution of a function with the mapping of a function. Its advantage is that both characteristics can be exerted, constructing the strongest artificial-intelligence model at present and promoting the deep popularization of artificial intelligence.
In order to achieve at least one of the above objects, the present invention proposes the following technical solutions.
A clustering method based on an autonomous-learning SDL model, having at least the following features:
(1) the feature vectors to be clustered are measured on the scale of probability-space distance;
(2) the clustering result of each class follows the maximum probability scale of its probability space;
(3) the best clustering solution is obtained under the joint effects of function mapping and function Gaussian distribution.
Moreover, the clustering algorithm of the SDL model is the fusion of a function-mapping model and a Gaussian-distribution model; it performs optimal clustering of the feature vectors through probability-scale self-organization and probability-space distance; it directly gives the clustering of each probability space of the feature values as the result; and it is a clustering algorithm that obtains the optimal solution between the function-mapping characteristic and the Gaussian-distribution characteristic.
Moreover, in the clustering algorithm of the SDL model: the mapping function comprises at least one of a linear function, a nonlinear function, a random function, and a mixture of several mapping functions; or the clustering algorithm of the SDL model.
Moreover, in the clustering algorithm of the SDL model: the mapping function is not limited to the classical linear function, the classical nonlinear function, and the classical random function; in particular, the mapping function is constructed comprehensively according to the characteristics of the solutions found by the SGD of deep learning, taking into account the effect of deep learning on improving pattern-recognition accuracy and combining methods of manual intervention; the mapping function comprises at least one of components in mathematical-operation form, components with membership functions, components constructed by rules, and clustering components of the SDL model, or a mixture of several components.
Moreover, in the clustering algorithm of the SDL model: the probability space of the maximum probability is obtained by the probability-scale self-organizing algorithm.
A clustering method based on an autonomous-learning SDL model, characterized by comprising the following steps:
(1) performing probability-scale self-organizing iteration on all data according to the Euclidean distance between feature vectors, to obtain the maximum probability values and maximum probability scales of two maximum-probability Gaussian distributions (maximum probability spaces);
(2) taking the two obtained maximum probability values as centers and the probability-space distance as the scale, letting all not-yet-clustered data compete between the two probability spaces, and taking the data within the two maximum probability scales as the final two clustering results;
(3) repeating processes (1) to (2) until all the data are clustered.
Moreover, the clustering algorithm of the SDL model is the fusion of a function-mapping model and a Gaussian-distribution model; it performs optimal clustering of the feature vectors through probability-scale self-organization and probability-space distance; it directly gives the clustering of each probability space of the feature values as the result; and it is a clustering algorithm that obtains the optimal solution between the function-mapping characteristic and the Gaussian-distribution characteristic.
Moreover, in the clustering algorithm of the SDL model: the mapping function comprises at least one of a linear function, a nonlinear function, a random function, and a mixture of several mapping functions; or the clustering algorithm of the SDL model.
Moreover, in the clustering algorithm of the SDL model: the mapping function is not limited to the classical linear function, the classical nonlinear function, and the classical random function; in particular, the mapping function is constructed comprehensively according to the characteristics of the solutions found by the SGD of deep learning, taking into account the effect of deep learning on improving pattern-recognition accuracy and combining methods of manual intervention; the mapping function comprises at least one of components in mathematical-operation form, components with membership functions, components constructed by rules, and clustering components of the SDL model, or a mixture of several components.
Moreover, in the clustering algorithm of the SDL model: the probability space of the maximum probability is obtained by the probability-scale self-organizing algorithm.
The invention provides a construction method based on algorithm-simulated deep learning, whose implementation effects are as follows: SDL-model clustering fuses a function-mapping model with a function Gaussian-distribution model, can simulate the function-mapping model of deep learning, and can realize the high-precision image-recognition capability of deep learning together with the high generalization capability of the function Gaussian-distribution model. The method has no black-box problem, needs no large-scale hardware support and no labeling of big data, and requires only the training of small data; it therefore has high performance and low introduction cost and is convenient for mass popularization.
Drawings
FIG. 1 is a schematic diagram of a minimum-scale neural network configuration
FIG. 2 is an example of the relationship between all SGD solutions obtained for one input and their application effects
FIG. 3 shows a gray-scale conversion image-processing method
FIG. 4 shows an image-processing method that emphasizes frame information
FIG. 5 shows another image-processing method that emphasizes frame information
FIG. 6 is a schematic diagram of simulated deep learning based on the SDL model
FIG. 7 is another structural diagram of simulated deep learning based on the SDL model
FIG. 8 is a schematic diagram of various forms of mapping functions
FIG. 9 is a schematic diagram of two overlapping Gaussian distributions
FIG. 10 shows training data of one image class of Image_NET
FIG. 11 is a flow chart of the autonomous machine-learning SDL clustering method
FIG. 12 is a schematic diagram of depth separable convolution
FIG. 13 is a schematic diagram of depth separable convolution when more features need to be extracted
Description of the symbols
I1, I2, I3, I4: input information
T1, T2, T3, ..., T16: weight values
O1, O2, O3, O4: output information
(601): input layer
(602): Gaussian layer
(603): function-mapping layer
(604): data-set layer
(610): image information
(611): machine learning constructed with the SDL model
(612): machine learning constructed with the SDL model
(613): output data of the Gaussian distribution
(701): input layer
(702): Gaussian layer
(703): function-mapping layer
(704): data-set layer
(710): image information
(711): machine learning constructed with the SDL model
(712): machine learning constructed with the SDL model
(713): output data of the Gaussian distribution
Gζ: one Gaussian distribution
Gξ: another Gaussian distribution
ω: overlap of the two Gaussian distributions
φmaxζ: maximum value of Gaussian distribution Gζ
φmaxξ: maximum value of Gaussian distribution Gξ
mmaxζ: maximum probability scale of Gaussian distribution Gζ
mmaxξ: maximum probability scale of Gaussian distribution Gξ
σζ: compression value of the maximum probability scale of Gaussian distribution Gζ
σξ: compression value of the maximum probability scale of Gaussian distribution Gξ
(step symbols, given in the original only as images):
initialization step
two-division step
data-exchange step
classification-end judgment step
probability-space obtaining step
Detailed Description
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings; the embodiments are illustrative rather than restrictive.
First, some new definitions, new concepts, and new formulas are introduced.
[ probabilistic Scale self-organization ]
Set up a probability space:
[ equation 1 ]
(The original formula is given only as an image; it defines an initial probability space G(0) over an initial region.)
There is an initial region whose central value φ(0), together with the variance of the Gaussian distribution calculated from that central value, gives the initial maximum probability scale m(0). With φ(0) as the central value and the probability scale m(0) as the reference, the following iteration is performed:
[ equation 2 ]
(The original formulas are given only as images; at each step the central value φ(k+1) and the probability scale m(k+1) are recomputed over the data falling within the current probability scale.)
After n iterations, a maximum probability value φ(n) close to that of the population (parent body) of the probability space, a maximum probability scale m(n), and a maximum probability space G(n) are obtained. This constitutes probability-scale self-organization.
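Since Equations 1-2 survive only as images, the following is a minimal sketch of one plausible reading of the iteration (recompute the mean and standard deviation over the data lying within k probability scales of the current center); the constant k, the convergence tolerance, and the toy data are assumptions:

```python
import numpy as np

def probability_scale_self_organization(data, k=2.0, n_iter=20, tol=1e-6):
    """Iterate toward the maximum probability value / maximum probability
    scale of a one-dimensional data set (assumed reading of Equations 1-2)."""
    phi = float(np.mean(data))   # initial central value phi(0)
    m = float(np.std(data))      # initial probability scale m(0)
    region = data
    for _ in range(n_iter):
        inside = data[np.abs(data - phi) <= k * m]  # data within the scale
        if len(inside) < 2:
            break
        phi_new, m_new = float(np.mean(inside)), float(np.std(inside))
        converged = abs(phi_new - phi) < tol and abs(m_new - m) < tol
        phi, m, region = phi_new, m_new, inside
        if converged:
            break
    return phi, m, region        # phi(n), m(n), maximum probability space G(n)

# Toy usage: a Gaussian population contaminated with uniform noise; the
# iteration migrates to the region of maximum probability.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(5.0, 0.5, 900), rng.uniform(0, 20, 100)])
phi, m, G = probability_scale_self_organization(data)
print(round(phi, 2), round(m, 2), len(G))  # approximately 5.0, 0.5
```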
[ migration of SDL model ]
The SDL model described above will, through several iterations, necessarily migrate to and converge within the region of maximum probability, regardless of the initial region.
[ probabilistic space ]
Probability Space as used herein: following the measure-theoretic foundation of probability theory laid by the Soviet mathematician Andrey Kolmogorov, a probability space is a measurable space whose total measure is "1". From this theory, Theorem 1 can be derived: "There is only one Gaussian distribution in a probability space; hence there are infinitely many probability spaces in Euclidean space."
[ probabilistic spatial distance ]
The measure from a point of Euclidean space to a probability space, or the measure between one probability space and another.
[ method for calculating a probabilistic spatial distance ]
Let the feature values of a feature vector V be vi (i = 1, 2, ..., β), with maximum probability value φmaxVi and maximum probability scale mmaxVi of its probability space; let another feature vector W have feature values wi, with maximum probability value φmaxWi and maximum probability scale mmaxWi of its probability space; and let a feature vector in Euclidean space have feature values ui. The distance G(V, W) between Euclidean space and probability space can then be unified as follows.
[ equation 3 ]
(The original formulas are given only as images; they define the probability-space distance G(V, W) in terms of the feature values, the maximum probability values, and the maximum probability scales above.)
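Equation 3 itself is not recoverable from the images, so the sketch below assumes one natural form: each coordinate difference between maximum probability values is normalized by the corresponding maximum probability scales, so that distance is measured in units of probability scale rather than in raw Euclidean units. All names and the normalization rule are assumptions:

```python
import numpy as np

def probability_space_distance(phi_v, m_v, phi_w, m_w):
    """Assumed form of the probability-space distance G(V, W): a
    scale-normalized Euclidean distance between two maximum-probability
    descriptions (phi = maximum probability value per feature value,
    m = maximum probability scale per feature value)."""
    phi_v, m_v = np.asarray(phi_v, float), np.asarray(m_v, float)
    phi_w, m_w = np.asarray(phi_w, float), np.asarray(m_w, float)
    denom = np.where(m_v + m_w > 0, m_v + m_w, 1e-12)  # guard zero scales
    return float(np.sqrt(np.sum(((phi_v - phi_w) / denom) ** 2)))

def point_to_space_distance(u, phi, m):
    """Distance from a point u of Euclidean space to a probability space
    (phi, m): the special case in which one side has zero scale."""
    u = np.asarray(u, float)
    return probability_space_distance(u, np.zeros_like(u), phi, m)
```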
The following provides a method of opening the deep-learning black box.
It is well known from combinatorics that a combination of more than 40 elements is an NP-complete (NPC) problem that a Turing machine cannot resolve by enumeration. Based on this knowledge, we construct a neural network of minimum scale, small enough to be evaluated exhaustively so that the global optimal solution can be obtained.
Fig. 1 is a schematic diagram of a minimum-scale neural network configuration.
As shown in fig. 1; i is1,I2,I3,I4Is input information, T1,T2,T3,...,T16Is a weight, i.e. the data set of the combined result, O1,O2,O3,O4Is the output information. According to the principle of neural network, then
[ EQUATION 4 ]
Figure BSA0000226776690000093
Figure BSA0000226776690000094
Figure BSA0000226776690000095
Figure BSA0000226776690000096
Order to
Figure BSA0000226776690000097
Then:
Figure BSA0000226776690000098
Figure BSA0000226776690000099
Figure BSA00002267766900000910
Figure BSA00002267766900000911
As shown in Equation 4, this is a system of linear equations, which has a global optimal solution when the input information equals the output information. When the global optimal solution is obtained, the system is stable, and no black-box problem exists.
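A minimal sketch of this check (the sample inputs and the coarse weight grid are hypothetical): with the target output equal to the input, the weights of the 4 × 4 single-layer network solve a linear system exactly, and an exhaustive search over a discretized grid for one output node recovers the same global optimum:

```python
import itertools
import numpy as np

# Four training samples of 4-dimensional input information I (rows); the
# target output equals the input (O = I), as in the text.
I = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 2.],
              [2., 1., 0., 1.],
              [1., 2., 1., 0.]])
O = I.copy()

# Global optimum by linear algebra: O = I @ T is a linear system in T, and
# with four independent samples T is determined exactly (here T = identity).
T_global = np.linalg.solve(I, O)
print(np.allclose(I @ T_global, O))   # True: stable system, no black box

# The same optimum found exhaustively for output node O1 over a coarse
# weight grid (5**4 = 625 combinations).
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = min(itertools.product(grid, repeat=4),
           key=lambda w: float(np.sum((I @ np.array(w) - O[:, 0]) ** 2)))
print(best)   # (1.0, 0.0, 0.0, 0.0), matching the first column of T_global
```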
The unique global optimal solution was found by the exhaustive method, verifying the correctness of the system above; at the same time, following the principle of stochastic gradient descent (SGD), all SGD solutions were also obtained exhaustively. It turns out that, even for this simple neural network, the number of local optimal solutions of SGD is random and depends on the input information: from a few hundred at the least up to more than twenty thousand. For occasional inputs an SGD solution may keep advancing toward the global optimal solution until it is reached, but this is highly accidental; in most cases the SGD method can hardly climb across so many hills of local optimal solutions, simply because the SGD solutions are too many. Therefore, the expectation that the SGD method can reach the global optimal solution has no scientific basis and is a mistaken theory.
The purpose of opening the deep-learning black box is to uncover the secret of deep learning's good application effect in fields such as image recognition and image classification. For each of various kinds of input information, we find the application-effect value corresponding to every SGD local optimal solution and examine the relationship between the SGD solutions and the application effect.
FIG. 2 is an example of the relationship between all the SGD solutions obtained for a certain input and their application effects.
As shown in FIG. 2, from the first SGD solution obtained to the last, the 5,187th, the application effect is random and varies by several times. The SGD method therefore guarantees neither that a global optimal solution will be obtained nor that a given SGD solution is the one with the best application effect for deep learning; in this sense it rests on a false proposition.
After the black box of the neural network is opened, the mechanism of deep learning can be recognized thoroughly from a large amount of data: the function mapping that the neural network implements on the input data through its combinations can enlarge the interval between different feature vectors by hundreds or even thousands of times, or more. This mapping is a random function mapping: a slight difference in the input information can be mapped to distinctly different data-set results, which, by the theory of the Gaussian distribution, greatly reduces the probability that data of different classes are misidentified. This benefits image classification, since the accuracy of image recognition improves. The outstanding application results of deep learning are determined not by the form of the neural-network structure or of the weight generation, but by the form of the function mapping. Because the mapping acts on each independent datum, even a small difference between feature vectors yields a correctly mapped and correctly matched result; this is the root of how deep learning obtains an accuracy exceeding traditional recognition accuracy.
To improve the accuracy of image recognition, the present method simulates deep learning: to highlight the individual features of an image, the image can be filtered through various templates, or the image to be recognized can be processed directly by image-processing algorithms. A processing method for highlighting individual image features is described below, taking a gray-value adjustment method as the example. Relative to the various methods disclosed in the past, the application of such methods within the SDL-model-based simulation of deep learning proposed by the present invention is novel and belongs to the scope of the invention.
FIG. 3 is a gray-scale conversion image-processing method.
FIG. 3(a) shows the original gray values of an arbitrary 3 × 3 pixel region of the original image. In FIG. 3(b), the maximum gray value among the 3 × 3 original gray values is exchanged with the central gray value. In FIG. 3(c), the minimum gray value is exchanged with the central gray value. In FIG. 3(d), the maximum probability value of the 3 × 3 original gray values is calculated by the SDL model and exchanged with the central gray value.
In the methods above, the maximum probability value may also be obtained by probability-scale self-organization over the diagonal pixels of the 3 × 3 original gray values of FIG. 3(a), over the pixels of the central cross, or over the maximum and minimum gray values among the 3 × 3 original gray values.
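A minimal sketch of the FIG. 3(b) to (d) variants; replacing the center pixel is taken as one reading of "exchange", and the window median stands in for the SDL maximum probability value over nine samples, both being assumptions:

```python
import numpy as np

def swap_center_3x3(image, mode="max"):
    """Replace the center pixel of every 3x3 window by the window's
    maximum gray value (FIG. 3(b)), minimum (FIG. 3(c)), or an SDL-style
    maximum probability value (FIG. 3(d), approximated here by the
    median of the nine gray values)."""
    img = np.asarray(image, float)
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            if mode == "max":
                out[y, x] = win.max()
            elif mode == "min":
                out[y, x] = win.min()
            else:                      # "sdl"
                out[y, x] = float(np.median(win))
    return out
```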
Fig. 4 is an image processing method in which frame information is emphasized.
As shown in FIG. 4(a), the image is differentiated in the x direction and the y direction, and the gray value of each original pixel is then replaced by the result of multiplying, pixel by pixel, with the constants in the left and right 3 × 3 templates of FIG. 4(a). Likewise, as shown in FIG. 4(b), the image is differentiated in the x direction and the y direction, and each original gray value is replaced by the result of multiplying with the constants in the left and right 3 × 3 templates of FIG. 4(b).
Fig. 5 is another image processing method of emphasizing the frame information.
As in FIG. 4, multiplying the x-direction derivative result by the template of FIG. 5(a) yields the processing effect of a vertical-frame filter; multiplying the y-direction derivative result by the template of FIG. 5(b) yields the processing effect of a horizontal-frame filter.
When an image is recognized, one image can thus be converted into several images, and a Gaussian distribution can be formed for each feature vector; this improves the recognition rate and can also improve image quality.
In particular, the number of feature vectors can be increased during image recognition, improving the image-recognition rate. Notably, the output of the convolutional neural networks frequently used in deep learning can be input directly to the nodes of the input layer of the SDL model as a new group of feature values; this increases the number of feature vectors, enlarges the intervals between the feature vectors of different image classes, enlarges the scale of the data set, and finally improves the precision of image classification and the accuracy of image recognition.
The main convolution kernels of deep learning are as follows:
1. Gaussian convolution kernel
[ equation 5 ]
(standard 3 × 3 form; the original formula is an image)
(1/16) ×
| 1 2 1 |
| 2 4 2 |
| 1 2 1 |
The kernel is applied to the pixels of the cells of each RGB color image; the processing results are accumulated and then averaged, and the kernel may slide by one, two, or three pixels at a time.
2. Roberts edge detection
[ equation 6 ]
| 1  0 |      | 0  1 |
| 0 -1 |  or  | -1 0 |
3. Prewitt edge detection
[ equation 7 ]
| -1 0 1 |      | -1 -1 -1 |
| -1 0 1 |  or  |  0  0  0 |
| -1 0 1 |      |  1  1  1 |
4. Sobel edge detection
[ equation 8 ]
| -1 0 1 |      | -1 -2 -1 |
| -2 0 2 |  or  |  0  0  0 |
| -1 0 1 |      |  1  2  1 |
5. Scharr edge detection
[ equation 9 ]
|  -3 0  3 |      | -3 -10 -3 |
| -10 0 10 |  or  |  0   0  0 |
|  -3 0  3 |      |  3  10  3 |
6. Laplacian operator
[ equation 10 ]
| 0  1 0 |
| 1 -4 1 |
| 0  1 0 |
7. Kirsch directional operator
[ equation 11 ]
(one of the eight masks; the others are its rotations in 45-degree steps)
|  5  5  5 |
| -3  0 -3 |
| -3 -3 -3 |
The difference values in the 8 directions are calculated, the maximum value is taken as the final output edge strength, and the corresponding direction is the edge direction.
8. Relief filter
[ equation 12 ]
(standard emboss form; the original formula is an image)
| -1 -1  0 |
| -1  0  1 |
|  0  1  1 |
It filters small-area noise in the image.
9. Edge reinforcement
[ equation 13 ]
(standard sharpening form; the original formula is an image)
|  0 -1  0 |
| -1  5 -1 |
|  0 -1  0 |
10. Average filtering
[ equation 14 ]
(1/9) ×
| 1 1 1 |
| 1 1 1 |
| 1 1 1 |
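Before item 11, here is a minimal sketch of applying the kernels of items 1 to 10 above (plain NumPy cross-correlation; the kernel values are the standard forms reconstructed above, not transcriptions of the patent's images):

```python
import numpy as np

KERNELS = {
    "gaussian": np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0,
    "sobel_x":  np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "laplace":  np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
    "average":  np.ones((3, 3)) / 9.0,
}

def convolve2d(img, kernel, stride=1):
    """Valid cross-correlation with a square kernel; stride 1, 2 or 3
    corresponds to sliding the kernel by one, two or three pixels."""
    img = np.asarray(img, float)
    k = kernel.shape[0]
    h = (img.shape[0] - k) // stride + 1
    w = (img.shape[1] - k) // stride + 1
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = img[y * stride:y * stride + k, x * stride:x * stride + k]
            out[y, x] = float(np.sum(win * kernel))
    return out

def kirsch_strength(img):
    """Kirsch directional operator: responses of the eight rotated masks,
    with the maximum taken as the output edge strength (item 7 above)."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]   # base mask read along the ring
    responses = []
    for s in range(8):
        m = np.zeros((3, 3))
        for i, (r, c) in enumerate(ring):
            m[r, c] = vals[(i - s) % 8]
        responses.append(convolve2d(img, m))
    return np.max(responses, axis=0)
```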
11. Depth separable convolution
FIG. 12 is a schematic diagram of a depth separable convolution.
As shown in FIG. 12, depth separable convolution can be used in the SDL model just as in neural networks: a spatial (depthwise) convolution is performed with the channels kept separate, followed by a pointwise convolution across channels. Taking an RGB input image of 12 × 12 × 3 as an example, ordinary convolution convolves the 3 channels simultaneously with one kernel; that is, the 3 channels output one number after one convolution. Depth separable convolution instead proceeds in two steps: three convolution kernels convolve the three channels separately, so one convolution outputs 3 numbers; the three output numbers are then passed through a 1 × 1 × 3 convolution kernel (pointwise kernel) to obtain one number. Depth separable convolution is thus realized by two convolutions.
In the first step, the three channels are convolved separately, and the attributes of the three channels are output.
In the second step, the three channel outputs are convolved again with a 1 × 1 × 3 kernel to obtain one feature-value datum; the feature values at this point number 64, i.e., 8 × 8 × 1.
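A minimal sketch of the two steps on this example (the 5 × 5 spatial kernels are an assumption, chosen because they produce the 8 × 8 × 1 output mentioned in the text; the random kernels are placeholders for trained ones):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((12, 12, 3))   # 12 x 12 RGB input

# Step 1: depthwise part, one 5 x 5 kernel per channel, channels separate.
depthwise = rng.random((5, 5, 3))
step1 = np.empty((8, 8, 3))       # 12 - 5 + 1 = 8 positions per axis
for c in range(3):
    for y in range(8):
        for x in range(8):
            step1[y, x, c] = np.sum(image[y:y+5, x:x+5, c] * depthwise[:, :, c])

# Step 2: pointwise part, a 1 x 1 x 3 kernel mixes the three channel
# outputs into one number per position: 8 x 8 x 1 = 64 feature values.
pointwise = rng.random(3)
step2 = step1 @ pointwise
print(step2.shape)                # (8, 8)
```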
FIG. 13 is a schematic diagram of a depth separable convolution when more features need to be extracted.
As shown in FIG. 13, when more features need to be extracted, more 1 × 1 × 3 convolution kernels are designed (e.g., an 8 × 8 × 256 output cube can be drawn as 256 slices of 8 × 8 × 1, since they are not a unit, representing 256 attributes). In the SDL model, when more features need to be extracted, the processing method for depth separable convolution is to input the result output by the convolutional neural network directly to the corresponding nodes of the input layer of the SDL model.
Deep learning attracted worldwide attention through its excellent performance in Image_NET image classification in 2012. To prove that deep learning simulated by algorithm can reach the same capability as ordinary deep learning, Image_NET image classification is taken as the example: the SDL function Gaussian-distribution model is introduced, and a function-mapping model that simulates deep learning by algorithm is applied, forming a stronger new-generation artificial-intelligence model.
Fig. 6 is a schematic diagram of a simulation deep learning based on the SDL model.
As shown in FIG. 6, the input layer (601) is mainly responsible for receiving the image information (610) through an SDL model (611) on each of its nodes; the Gaussian layer (602) is mainly responsible for obtaining the Gaussian distribution (613) of the feature values of the images through machine-learning training (612) over repeated inputs of same-class images; and the function-mapping layer (603) is mainly responsible for mapping the Gaussian-distribution result (613) obtained by the Gaussian layer to the data set (604) through algorithm-simulated deep learning.
Here, the image information (610) may be divided into small regions of η × δ pixels; for each small region the maximum probability value of the region is found by the SDL model and input to the corresponding node of the input layer. The maximum probability value of each small region constitutes one feature value, and the feature values of the maximum probability values of all the small regions of the image constitute the feature vector of the image.
In the SDL model, when more features need to be extracted, all the small regions of the image are processed with convolution algorithms as in deep learning, and the processing results are input to the corresponding nodes of the input layer; for depth separable convolution, the output of the convolutional neural network can be used directly as the input of the SDL model and input directly to the corresponding nodes of the input layer.
In the following, mathematical formulas are used to express the principle of image classification and image recognition based on algorithm-simulated deep learning.
The training data input to the nodes of the input layer (601) consist of α images, each formed of β feature values: same-class training-set images obtained under different conditions, or different classes of images whose data are mixed from different training sets (for example, same-class training images from Image_NET, or different classes mixed together, whose feature-vector intervals are separated through training by feature-vector classification with probability distance and maximum probability scale), hereinafter referred to simply as training images, expressed as follows:
[ equation 15 ]
(notation reconstructed; the original formula is an image)
Vi = (vi1, vi2, ..., viβ),  i = 1, 2, ..., α
Through the training of Equation (15), from each group of feature values, γ feature-vector groups each consisting of β maximum-probability feature values can be obtained by Equations (1)-(2):
[ equation 16 ]
Φi = (φmax i1, φmax i2, ..., φmax iβ),  i = 1, 2, ..., γ
Here γ ≤ α, and the vector of maximum probability scales can likewise be obtained by Equations (1)-(2):
[ equation 17 ]
Mi = (mmax i1, mmax i2, ..., mmax iβ),  i = 1, 2, ..., γ
According to the definition of the probability space, from each pair of elements φmax ij and mmax ij a set of maximum probability spaces of the feature vectors can be constructed:
[ equation 18 ]
S = { smax ij },  i = 1, 2, ..., γ;  j = 1, 2, ..., β
The φmax ij and mmax ij obtained from Equations (1)-(2) are the constants from which each probability space smax ij (i = 1, 2, ..., γ; j = 1, 2, ..., β) is calculated. Among the γ probability spaces there are same-class images and different-class images, but the Gaussian-distribution intervals between the feature vectors of different-class images must be separated. The difference between deep learning and the novel SDL model that simulates it by algorithm is that deep learning only maps data into data sets, whereas the novel SDL model can separate the Gaussian-distribution intervals of different image classes and map the Gaussian distributions themselves into the data set, so that small-data training attains the effect of big data.
To improve recognition accuracy, it is always desirable that the probability-space distance between the maximum-probability feature values Φζ and Φξ of images of different classes be as large as possible; this problem can be solved by function mapping. Setting a mapping function f(·), the following inequality may be satisfied:
[ equation 19 ]
(form reconstructed; the original formula is an image)
G( f(Φζ), f(Φξ) ) > G( Φζ, Φξ )
Fig. 7 is a schematic diagram of another simulation deep learning based on the SDL model.
As shown in FIG. 7, the input layer (701) is mainly responsible for receiving the image feature information (710) through an SDL model (711) at each of its nodes; the function-mapping layer (703) is mainly responsible for mapping the feature information of the image output by the input layer to the data-set layer (704); and the Gaussian layer (702) is mainly responsible for obtaining the maximum-probability Gaussian distribution of the feature values of the images through machine-learning training (712) on the data sets of same-class training images (Equation 15) held by the data-set layer (704).
When images of different classes are input, the maximum-probability Gaussian distributions (Equation 18) of the feature values of the different classes are obtained through machine-learning training (712). In this case, if the Gaussian distributions of two different image classes overlap, the maximum probability scale values of the two Gaussian distributions are compressed; finally, the maximum probability value and the maximum probability scale value that can represent the compressed Gaussian distribution (713) are obtained and sent to each node of the Gaussian layer (702) as output values.
Here, the image information (710) may be divided into small regions of η × δ pixels; each small region obtains the maximum probability value of the region by the SDL model and inputs it to the corresponding node of the input layer. The maximum probability value of each small region constitutes one feature value, and the feature values of the maximum probability values of all the small regions of the image constitute the feature vector of the image.
Feature extraction can also be performed directly on the whole image, with the result sent to the nodes of the input layer. Alternatively, feature extraction can be performed in each small region of the image through the convolution algorithms commonly used in deep learning (Equations 5 to 14), and the extracted features of each small region serve as a group of feature values that, together with the feature values described above, form a new feature vector.
FIG. 8 is a schematic diagram of various forms of mapping functions.
The mapping function f may be a linear function, as shown in FIG. 8(a): (801) is a feature vector composed of individual feature values, and (802) is the mapped result; the distance interval of the feature vector (801) can be enlarged arbitrarily by the mapping function. That is, imitating the function mapping of deep learning, the interval between feature vectors is amplified after the input information is mapped to the data-set layer.
The mapping function f may also be a nonlinear function, as shown in FIG. 8(b): (803) is a feature vector composed of individual feature values, and (804) is the mapped result; the feature vector (803) can be mapped at will into a complex nonlinear result. The feature vectors mapped to the data-set layer produce nonlinear effects imitating the excitation (activation) function of deep learning, and serve the corresponding nonlinear data classification.
The mapping function f may also be a random function, as shown in FIG. 8(c): (805) is the feature vector formed from individual feature values, and (806) is the mapped result; the feature vector (805) can be mapped arbitrarily into the result of a complex random function, namely a random permutation of the individual feature values in the feature vector, imitating the random relationship between the SGD solutions and the input information.
The mapping function f may also be a composite function of at least two of the three functions above, as shown in FIG. 8(d): (807) is the feature vector composed of individual feature values, and (808) is the mapped result; the feature vector (807) can be mapped at will into a complex function-mapping result combining random and nonlinear effects, which is likewise a characteristic of the function-mapping results of deep learning.
The mapping function f is, moreover, not limited to the classical linear, nonlinear, and random functions. In particular, a mapping function can be constructed comprehensively in combination with methods of manual intervention, according to the characteristics of the solutions found by the SGD of deep learning, and taking into account the effect of deep learning on improving pattern-recognition accuracy. The mapping function has components in mathematical-operation form, membership-function components, components constructed by rules, and so on, and can satisfy a comprehensive function-mapping model.
Combined with the mechanism of the Gaussian distribution: the maximum-probability Gaussian distribution of the feature values, derived by the SDL model (612) or (712) from the training data of the recognized object and output from each node of the Gaussian layer (602) or (702), together with all data within the maximum-probability-scale range of its maximum probability value, should, according to the probability-space-distance formula (Equation 3), be mapped to the same data in the same data space.
The Gaussian-distribution result of the feature values output from each node of the Gaussian layer (602) or (702), obtained from the training data of the object recognized by the SDL model (612) or (712), may be regarded as infinite function-mapping data and passed directly through the mapping function f to the data-set layer. In judging the recognition result, the probability-space distance is computed from the result of the amplified feature-value Gaussian distribution, and the image corresponding to the data set at the closest distance is taken as the recognition result.
Fig. 9 is a schematic illustration of two overlapping gaussian distributions.
As shown in FIG. 9, two Gaussian distributions Gζ and Gξ are obtained for two images of different classes, with overlapping part ω; the Gaussian distribution Gζ is represented by its maximum value φmaxζ and maximum probability scale value mmaxζ, and Gξ by its maximum value φmaxξ and maximum probability scale value mmaxξ. The traditional method obtains the maximum value φmaxζ and the maximum probability scale value mmaxζ of Gζ through probability-scale self-organization, and then computes the minimum probability-space distance between the feature vector of the sample data (Equation 18) and the Gaussian distribution Gζ of the training data (Equation 15) to decide the recognition result of the image. In this case, the distance between the feature vectors of images of different classes must be as large as possible, which demands technical effort on the quality of feature extraction or the number of feature values and is limited in reality.
The function-mapping model, by contrast, need not consider maximizing the distance between the feature vectors of different image classes; it suffices that the mapped feature-vector data of each image class exist independently. For this purpose, some processing of the intervals of the feature vectors of different image classes is required.
As shown in FIG. 9, the Gaussian-distribution results of the two image classes have a coincident region ω, which means recognition errors are possible: if the data within the maximum-probability-scale range were mapped onto the data set as they are, different image classes could erroneously be classified into one class.
The invention therefore proposes training images of different classes together. When the Gaussian distributions of different-class images coincide as shown in FIG. 9, the maximum probability scale values mmaxζ and mmaxξ of the two Gaussian distributions are compressed by the amounts σζ and σξ, yielding the compressed maximum probability scale values m'maxζ and m'maxξ. When simulating deep learning by algorithm, the Gaussian distribution serves as the mapping data: specifically, the maximum probability value of each Gaussian distribution and the compressed maximum probability scales m'maxζ and m'maxξ are taken as the output of the mapping data, and all data within the compressed maximum probability scales are regarded as the same numerical value. That is, in the maximum-probability interval of the Gaussian distribution Gζ, Φmaxζ - m'maxζ ≤ SPζ ≤ Φmaxζ + m'maxζ, since this is the region of maximum probability, Equation (3) allows all values there to be treated as the same mapped value regardless of the value of the Gaussian distribution. Likewise, in the maximum-probability interval of Gξ, Φmaxξ - m'maxξ ≤ SPξ ≤ Φmaxξ + m'maxξ, the treatment is the same as above.
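A minimal sketch of this compression for one feature dimension; the concrete rule (shrink both scales proportionally until the two intervals just stop overlapping) is an assumed reading of σζ and σξ:

```python
def compress_scales(phi_a, m_a, phi_b, m_b):
    """Compress two maximum probability scales until the intervals
    [phi - m', phi + m'] of the two Gaussian distributions no longer
    overlap (assumed reading; equal proportional compression is used)."""
    gap = abs(phi_a - phi_b)
    if m_a + m_b <= gap:              # already separated: nothing to do
        return m_a, m_b
    shrink = gap / (m_a + m_b)        # proportional factor < 1
    return m_a * shrink, m_b * shrink

# Two overlapping one-dimensional Gaussian descriptions:
m_a2, m_b2 = compress_scales(phi_a=5.0, m_a=2.0, phi_b=8.0, m_b=2.0)
print(m_a2, m_b2)  # 1.5 1.5 -> intervals [3.5, 6.5] and [6.5, 9.5] just touch
```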
If deep learning is simulated completely, training uses the labeled data of big data; function mapping can then be performed without the Gaussian distribution, and the data output directly from each node of the input layer (601) pass directly through the mapping function f, which maps the feature vectors to the large data space (604).
To improve the generalization capability of machine learning, various kinds of feature vectors that help enlarge the distance between the feature vectors of different image classes can be used to distinguish those classes. Feature extraction of an image may be performed with the templates of FIGS. 3 to 5, by extracting feature vectors of the image with convolution kernels (Equations 5 to 14, FIGS. 12 and 13), or by a combination of several feature-extraction methods. Finally, the processing results are input to the nodes of the input layer (601, 701) as new feature values.
The method of simulating deep learning by algorithm proposed by the present invention is described below specifically for the problem of Image_NET image classification.
FIG. 10 shows the training data of one Image_NET image class.
As shown in FIG. 10, the training data are training data of goldfish images. To achieve a higher-precision image-classification effect, the object image is first cut out of the background by a manual method; the object images in FIG. 10 are goldfish, so the goldfish images are cut out by hand. This is also the process of telling the machine, through human intervention, what the object image is.
Next, the feature vector of the cut-out image is obtained. Various feature vectors that reflect the features of the object image and distinguish it from other images can be generated using the gray-scale information of R, G, B, or of a and b in Lab, or of colors in other color spaces, such as the maximum probability value, the maximum probability scale, the maximum gray value, and the minimum gray value, or using the texture information obtained by differentiating each color, and so on.
As shown in FIG. 10, even though the object images are all goldfish, different goldfish exist. It is therefore necessary to classify the different goldfish into probability spaces of maximum probability by autonomous machine-learning SDL clustering, to mix in the data sets of other classes, and to handle the separation of the intervals of the feature vectors of different classes, so that the Gaussian distribution of each feature value can be mapped directly to a data set or output directly.
The algorithm for autonomous machine-learning SDL clustering is as follows. (The algorithm listing appears in the original only as images; its steps are spelled out as STEP 1 to STEP 6 below, and a code sketch follows the step list.)
Traditional K-Means clustering uses Euclidean distance as its scale, so it cannot classify probability spaces; the number of classes to be formed must be specified manually in advance; the best maximum-probability self-discipline machine-learning SDL clustering result cannot be obtained; and the mapping of the objective function cannot be considered at the same time as the maximum-probability Gaussian distribution of the objective function.
Fig. 11 is a flow chart of an autonomous machine learning SDL model clustering algorithm.
As shown in FIG. 11, the clustering method needs no combinatorial data training and has no black-box problem; it autonomously obtains the optimal clustering result under the joint effects of function mapping and Gaussian distribution. For different-class feature vectors at small intervals, it can exercise the function-mapping characteristic of the objective function and still obtain the recognition result accurately. At the same time, for Image_NET image data of the class shown in FIG. 10, single images with large differences in color and texture, the clustering algorithm can obtain the optimal fusion result of the function-mapping model and the Gaussian-distribution model from the extraction results of the given feature vectors and the given training data.
As shown in FIG. 11, the specific self-discipline machine-learning SDL clustering steps are as follows:
STEP 1, initialization step: set up a database Du of data not yet clustered and a database Dc of data already clustered (the database symbols appear in the original only as images; Du and Dc are placeholder names), and initially put the feature-vector data of all training data participating in the clustering into Du.
STEP 2, probability-scale self-organization step: for all data in Du, perform probability-scale self-organizing iteration according to the Euclidean distance between feature vectors, obtaining the constants of a Gaussian distribution representing the maximum probability (a maximum probability space), i.e., the maximum probability value φmax (expected value) and the maximum probability scale mmax (variance). Put the data rejected during the iteration back into Du. Then apply probability-scale self-organizing iteration to Du once more to obtain another maximum probability value and maximum probability scale (variance); the data rejected in this iteration are likewise put back into Du.
STEP 3, produce-two-classes step: since the feature vectors are high-dimensional data, and clustering by Euclidean-space distance alone falls into local optimal solutions, the following processing is performed. With each maximum probability value φmax as a center and the corresponding maximum probability scale mmax representing one of the two newly derived probability spaces, the probability-space distance is calculated for all data in Du; the data compete between the two probability spaces, and the data within each of the two maximum probability scales mmax are stored in Dc as the final two clustering results.
STEP 4, probability-scale modification step: the two newly generated Gaussian probability spaces, together with the probability spaces of the data of the different training sets in Dc, are processed by the maximum-probability-scale compression of FIG. 9, and the pairs of compressed probability-distribution data are stored in Dc as the result of the function-mapping data set. In this way the high recognition accuracy of function mapping is exerted to the maximum, while the maximum generalization capability of the Gaussian distribution is retained to the maximum.
STEP 5, clustering-completion judgment step: judge whether all vector data have obtained clustering results; if yes (Y), go to the next, clustering-end step; if no (N), return to the STEP 2 probability-scale self-organization step.
STEP 6, clustering-end step.
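A minimal end-to-end sketch of STEP 1 to STEP 6. The databases Du/Dc, the normalized competition rule, the constant k, and the self-organization subroutine are all assumptions filling in what the patent's figures show only as images; STEP 4's scale compression is noted but omitted for brevity:

```python
import numpy as np

def self_organize(data, init, k=2.0, n_iter=20):
    """Probability-scale self-organization on feature vectors: iterate the
    center and scale over the points within k scales of the current center
    (assumed reading of Equations 1-2)."""
    center = np.asarray(init, float)
    scale = np.linalg.norm(data - center, axis=1).std() + 1e-9
    for _ in range(n_iter):
        inside = data[np.linalg.norm(data - center, axis=1) <= k * scale]
        if len(inside) < 2:
            break
        center = inside.mean(axis=0)
        scale = np.linalg.norm(inside - center, axis=1).std() + 1e-9
    return center, scale

def sdl_cluster(X, k=2.0, min_size=3):
    """Each round carves two maximum probability spaces out of the
    unclustered database Du and keeps the data inside their maximum
    probability scales as two final clusters (STEP 3); the leftovers
    return to Du for the next round (STEP 5)."""
    Du = np.asarray(X, float)                 # STEP 1: unclustered database
    Dc = []                                   # clustered database
    while len(Du) > 2 * min_size:             # STEP 5: repeat until done
        c1, s1 = self_organize(Du, init=Du[0], k=k)   # STEP 2, first space
        far = Du[np.linalg.norm(Du - c1, axis=1) > k * s1]
        if len(far) < min_size:               # only one space remains
            break
        init2 = far[np.argmax(np.linalg.norm(far - c1, axis=1))]
        c2, s2 = self_organize(far, init=init2, k=k)  # STEP 2, second space
        # STEP 3: all data compete between the two probability spaces on
        # scale-normalized (probability-space) distance.
        d1 = np.linalg.norm(Du - c1, axis=1) / s1
        d2 = np.linalg.norm(Du - c2, axis=1) / s2
        in1 = (d1 <= d2) & (d1 <= k)          # within first probability scale
        in2 = (d2 < d1) & (d2 <= k)           # within second probability scale
        if not in1.any() or not in2.any():
            break                             # no progress; stop splitting
        Dc.append(Du[in1]); Dc.append(Du[in2])
        # (STEP 4, compressing overlapping scales against Dc, is omitted.)
        Du = Du[~(in1 | in2)]                 # leftovers stay unclustered
    if len(Du):
        Dc.append(Du)                         # STEP 6: clustering end
    return Dc

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print([len(c) for c in sdl_cluster(X)])  # the largest clusters track the blobs
```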
Either of the two schemes may be used: in the method of FIG. 6, Gaussian-distribution values are first calculated by the SDL model for the feature values input to each node of the input layer, and the maximum probability scales of the probability spaces where Gaussian distributions of different classes overlap are then compressed and mapped to the data set; in the method of FIG. 7, the feature values input to each node of the input layer are mapped directly to the data-set layer, the Gaussian-distribution values at each node of the data-set layer are calculated by the SDL model, the maximum probability scales of the overlapping probability spaces of different classes are compressed, and finally the maximum probability value and the maximum probability scale value are used as the output values of the SDL model.
The mapping mechanism of the objective function of deep learning focuses on expanding the space of the mapped data: the training values of big data obtained through complex neural-network combinations can be recognized correctly even when the distance between the feature vectors of images of different classifications is small. But since every recognized object is mapped into the data set, the generalization capability is poor, and all the states of the object image must be labeled as big data before practical application is possible.
The mechanism of the Gaussian distribution is to expand the distance between the feature vectors of different image classes as much as possible so as to improve the recognition accuracy of images; through training on small data it can have very strong generalization capability, but expanding the feature-vector distance by improving the extraction quality of the feature values is of limited effect. The present invention combines the two aspects: because the quality of the feature vectors cannot guarantee a sufficiently large distance between the feature vectors of different image classes, images of different classes may fall in the same probability space; the interval between the feature vectors of different images is therefore pulled apart by finely compressing the maximum probability scale values of the Gaussian distributions, and the compressed maximum-probability Gaussian-distribution results are mapped to the big-data space layer, i.e., the output layer. In this way the mapping effect of the objective function is obtained, together with the small-data probability-distribution characteristic of the objective function.
Each maximum-probability probability space generated by the classification result is mapped, by the mapping function of the function mapping layer (603), into the data set (604).
In order to classify images with higher precision, contour information of the target image can be exploited for the ImageNET training and test data sets: image contours are extracted, derivatives are taken in 8 directions, the positions of the derivative values of maximum density in the different directions are connected along the extracted contour, and the Gaussian distribution of the direction and length of these links is used as a structural feature vector. This yields a more precise classification of the images and prevents background noise from degrading classification accuracy.
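The description of this structural feature is brief, so the sketch below is a hypothetical reading: finite differences stand in for "derivation from 8 directions", and the strongest responses per direction are linked so that the (direction, length) statistics of the links can be fed to the Gaussian modeling. All names are introduced here for illustration:

```python
import numpy as np

# Offsets (dy, dx) for the 8 compass directions.
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def directional_derivatives(img):
    """Finite differences of a grayscale image in 8 directions, standing in
    for the 'derivation from 8 directions' described in the text."""
    img = img.astype(float)
    return np.stack([img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                     for dy, dx in DIRS])

def structural_links(img, top=32):
    """Hypothetical structural feature: per direction, link the positions of
    the strongest derivative responses; each link contributes its direction
    index, length and angle, whose Gaussian statistics form the feature."""
    d = np.abs(directional_derivatives(img))
    links = []
    for k in range(8):
        idx = np.argsort(d[k].ravel())[-top:]          # densest responses
        ys, xs = np.unravel_index(idx, d[k].shape)
        pts = sorted(zip(ys.tolist(), xs.tolist()))    # scan order
        for (y0, x0), (y1, x1) in zip(pts, pts[1:]):
            links.append((k, np.hypot(y1 - y0, x1 - x0),
                          np.arctan2(y1 - y0, x1 - x0)))
    return links  # feed the (length, angle) Gaussian statistics to the SDL model
```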

Claims (10)

1. A clustering method based on a self-discipline learning SDL model, having at least the following features:
(1) the clustering of feature vectors is measured by probability-space distance;
(2) the clustering result of each class is determined by the maximum probability scale of its probability space;
(3) the optimal clustering solution is obtained between the effects of function mapping and Gaussian distribution.
2. The clustering method based on the self-discipline learning SDL model as claimed in claim 1, characterized in that the clustering algorithm of the SDL model is a fusion of a function mapping model and a Gaussian distribution model; the feature vectors are optimally clustered through probability scale self-organization and probability-space distance; the clustering result of each probability space of the characteristic values is given directly; and the algorithm obtains an optimal solution between the function-mapping characteristic and the Gaussian-distribution characteristic.
3. The clustering method based on the self-discipline learning SDL model as claimed in claim 1, characterized in that, in the clustering algorithm of the SDL model, the mapping function comprises at least one of a linear function, a nonlinear function and a random function, a mixture of several mapping functions, or the clustering algorithm of the SDL model itself.
4. The clustering method based on the self-discipline learning SDL model as claimed in claim 1, characterized in that, in the clustering algorithm of the SDL model, the mapping function is not limited to the classical linear, nonlinear and random functions; in particular, the mapping function may be constructed comprehensively, according to the characteristics of the solution found by the SGD of deep learning, taking into account the effect of deep learning in improving the accuracy of pattern recognition and combining methods of manual intervention; the mapping function comprises at least one of components in mathematical-operation form, components with membership functions, regularly constructed components and clustering components of the SDL model, or a mixture of several of these.
5. The clustering method based on the self-discipline learning SDL model as claimed in claim 1, wherein: the SDL model clustering algorithm refers to: the probability space of the maximum probability of the mapping function is obtained by a probability scale self-organizing algorithm.
6. A clustering method based on a self-discipline learning SDL model, characterized by comprising the following steps:
(1) performing probability scale self-organizing iteration on all data according to the Euclidean distances between the feature vectors, to obtain the maximum probability values and the maximum probability scales of the two maximum-probability Gaussian distributions;
(2) with the two obtained maximum probability values as centers, letting all not-yet-clustered data contend between the two probability spaces, with probability-space distance as the measure, and taking the data falling within the two maximum probability scales as the two final clustering results of this round;
(3) repeating steps (1) and (2) until all data are clustered.
7. The clustering method based on the self-discipline learning SDL model as claimed in claim 6, characterized in that the clustering algorithm of the SDL model is a fusion of a function mapping model and a Gaussian distribution model; the feature vectors are optimally clustered through probability scale self-organization and probability-space distance; the clustering result of each probability space of the characteristic values is given directly; and the algorithm obtains an optimal solution between the function-mapping characteristic and the Gaussian-distribution characteristic.
8. The clustering method based on the self-discipline learning SDL model as claimed in claim 6, characterized in that, in the clustering algorithm of the SDL model, the mapping function may be a linear function, a nonlinear function or a random function, or a mixture of one or more of these.
9. The clustering method based on the self-discipline learning SDL model as claimed in claim 6, characterized in that, in the clustering algorithm of the SDL model, the mapping function is not limited to the classical linear, nonlinear and random functions; in particular, the mapping function may be constructed comprehensively according to the characteristics of the solution found by the SGD of deep learning, taking into account the effect of deep learning in improving the accuracy of pattern recognition and combining manual intervention techniques; the mapping function comprises at least one of components in mathematical-operation form, components with membership functions and regularly constructed components, or a mixture of several of these.
10. The clustering method based on the self-discipline learning SDL model as claimed in claim 6, characterized in that, in the clustering algorithm of the SDL model, the probability space of the maximum probability is obtained by the clustering of the self-discipline learning SDL model.
CN202011396635.XA 2020-11-26 2020-11-26 Clustering method based on self-discipline learning SDL model Pending CN114548197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011396635.XA CN114548197A (en) 2020-11-26 2020-11-26 Clustering method based on self-discipline learning SDL model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011396635.XA CN114548197A (en) 2020-11-26 2020-11-26 Clustering method based on self-discipline learning SDL model

Publications (1)

Publication Number Publication Date
CN114548197A 2022-05-27

Family

ID=81667782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011396635.XA Pending CN114548197A (en) 2020-11-26 2020-11-26 Clustering method based on self-discipline learning SDL model

Country Status (1)

Country Link
CN (1) CN114548197A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI799356B (en) * 2022-10-28 2023-04-11 國立臺中科技大學 Art learning system and method using deep learning


Similar Documents

Publication Publication Date Title
CN110020682B (en) Attention mechanism relation comparison network model method based on small sample learning
CN105975931B (en) A kind of convolutional neural networks face identification method based on multiple dimensioned pond
CN113657349B (en) Human behavior recognition method based on multi-scale space-time diagram convolutional neural network
CN104268593B (en) The face identification method of many rarefaction representations under a kind of Small Sample Size
CN106446942A (en) Crop disease identification method based on incremental learning
CN107766850A (en) Based on the face identification method for combining face character information
CN112862792B (en) Wheat powdery mildew spore segmentation method for small sample image dataset
Mai et al. Multiple kernel approach to semi-supervised fuzzy clustering algorithm for land-cover classification
CN103914705B (en) Hyperspectral image classification and wave band selection method based on multi-target immune cloning
CN106845528A (en) A kind of image classification algorithms based on K means Yu deep learning
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN106709528A (en) Method and device of vehicle reidentification based on multiple objective function deep learning
CN111524140B (en) Medical image semantic segmentation method based on CNN and random forest method
CN108268890A (en) A kind of hyperspectral image classification method
CN115966010A (en) Expression recognition method based on attention and multi-scale feature fusion
Bhimavarapu et al. Analysis and characterization of plant diseases using transfer learning
Tan et al. Rapid fine-grained classification of butterflies based on FCM-KM and mask R-CNN fusion
Twum et al. Textural Analysis for Medicinal Plants Identification Using Log Gabor Filters
US20220164648A1 (en) Clustering method based on self-discipline learning sdl model
CN106709869A (en) Dimensionally reduction method based on deep Pearson embedment
CN114548197A (en) Clustering method based on self-discipline learning SDL model
CN109934281B (en) Unsupervised training method of two-class network
Niepceron et al. Brain tumor detection using selective search and pulse-coupled neural network feature extraction
Lin et al. Looking from shallow to deep: Hierarchical complementary networks for large scale pest identification
CN115588487A (en) Medical image data set making method based on federal learning and generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination