CN111738367A - Part classification method based on image recognition - Google Patents
- Publication number
- CN111738367A (application CN202010825217.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- neural network
- processing module
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a part classification method based on image recognition, belonging to the field of image processing, which comprises the following steps: collecting part images, preprocessing the collected part images, and using the preprocessed part images as a training set; constructing a part recognition neural network and initializing its network parameters to obtain a primary part recognition neural network; constructing a loss function and, taking its minimization as the objective, training the primary part recognition neural network on the training set until the loss function falls below a set threshold a, obtaining a trained part recognition neural network; and collecting an image to be recognized, preprocessing it, and inputting the preprocessed image into the trained part recognition neural network to obtain a part classification result. By realizing part classification, the invention can assist maintenance personnel or field workers in maintaining equipment, avoiding the time- and labor-consuming identification of parts during maintenance.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a part classification method based on image recognition.
Background
The image classification task mainly comprises three links — preprocessing, feature extraction and classification — each of which has an important influence on the classification result. With the rapid development of computer software, hardware and internet technology, the amount of multimedia data is growing at an incredible speed, and in many industries more and more information is expressed in the form of images, which brings huge challenges to every link of the image classification task. Industrial manufacturing involves a wide variety of parts, among them many similar parts of different types; distinguishing the parts for maintenance takes a long time, manual identification sometimes produces errors and therefore losses, and when the maintenance worker is not on site, other workers cannot replace the parts on their own.
Disclosure of Invention
Aiming at the above defects in the prior art, the part classification method based on image recognition provided by the invention solves the problem that manual part identification in the prior art is time-consuming and labor-consuming.
To achieve the purpose of the invention, the following technical scheme is adopted: a part classification method based on image recognition, comprising the following steps (a brief code sketch follows the listed steps):
S1, collecting part images of K classes of parts, with N images collected for each class;
s2, preprocessing the collected part images, and taking the preprocessed part images as a training set;
s3, constructing a part recognition neural network, and initializing network parameters of the part recognition neural network to obtain a primary part recognition neural network;
s4, constructing a loss function, taking the minimum loss function as a target, and training the primary part recognition neural network through a training set until the loss function is smaller than a set training threshold value a to obtain a trained part recognition neural network;
and S5, collecting the image to be recognized, preprocessing the image to be recognized, and inputting the preprocessed image to be recognized into the trained part recognition neural network to obtain a part classification result.
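For concreteness, the overall flow of steps S1 to S5 can be sketched as follows. This is a minimal Python/PyTorch illustration under stated assumptions, not the patented implementation: the model, the data loader, the plain cross-entropy stand-in for the patent's loss function, and the default threshold value are all hypothetical.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_part_classifier(model: nn.Module, loader: DataLoader,
                          a: float = 1e-3, lr: float = 0.01,
                          max_epochs: int = 100) -> nn.Module:
    """Steps S3-S4: train the initialized network until the loss < a."""
    loss_fn = nn.CrossEntropyLoss()   # stand-in for the patent's loss function
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for images, labels in loader:   # preprocessed training set (S1-S2)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < a:   # S4: stop once the loss < threshold a
            return model
    return model

def classify(model: nn.Module, image: torch.Tensor) -> int:
    """Step S5: classify one preprocessed image to be recognized."""
    with torch.no_grad():
        return int(model(image.unsqueeze(0)).argmax(dim=1))
```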
Further, the specific method for preprocessing the part images in step S2 is as follows:
A1, sequentially performing Gaussian filtering, mean filtering, minimum mean-square-error filtering and Gabor filtering on the part image to obtain a first-level processed part image;
A2, graying the first-level processed part image to obtain a second-level processed part image;
A3, obtaining the gradient of the pixel points in the second-level processed part image and, according to the gradient, performing a gray-level representation of the second-level processed part image to obtain a third-level processed part image;
A4, performing contour coordinate reconstruction on the third-level processed part image to obtain a fourth-level processed part image;
A5, extracting the contour region in the fourth-level processed part image to obtain the preprocessed part image.
Further, the specific steps of step A3 are as follows:
A31, computing in turn the gradient grad(x, y) of each pixel point of the second-level processed part image f(x, y) as:

grad(x, y) = sqrt([f(x + 1, y) − f(x, y)]² + [f(x, y + 1) − f(x, y)]²)

where x denotes the abscissa of the pixel point, y denotes the ordinate of the pixel point, x = 0, 1, …, X, y = 0, 1, …, Y, X denotes the maximum abscissa, and Y denotes the maximum ordinate;
A32, setting a gray threshold T and, according to the gray threshold T and the gradient grad(x, y) of each pixel point, performing the gray-level representation gray(x, y) of the second-level processed part image to obtain the third-level processed part image, where the gray level gray(x, y) is:

gray(x, y) = M if grad(x, y) ≥ T, and gray(x, y) = N if grad(x, y) < T

where M denotes a pixel point located on the contour and N denotes a pixel point not on the contour line.
Further, the specific steps of step A4 are as follows:
A41, randomly finding, in the third-level processed part image, a pixel point whose gray level is M, and recording it as the starting pixel point;
A42, taking the starting pixel point as the center, extracting the pixel points whose gray level is M from among all of its neighboring pixel points;
A43, selecting, from the pixel points with gray level M obtained in step A42, the pixel point with the maximum gradient and, taking that pixel point as the center, extracting the pixel points with gray level M from among all of its neighboring pixel points;
A44, continuing by analogy with the method of step A43 to obtain all contour pixel points in the third-level processed part image, thereby obtaining the fourth-level processed part image.
Further, the specific step of extracting the contour region in the fourth-level processed part image in step A5 to obtain the preprocessed part image is: extracting a square region containing all contour pixel points of the fourth-level processed part image and resizing the square region to 224 × 224 to obtain the preprocessed part image.
Further, the specific structure of the part recognition neural network in step S3 comprises, connected in sequence: an input layer, a first convolution layer, a first max pooling layer, a first normalization layer, a second convolution layer, a third convolution layer, a second normalization layer, a second max pooling layer, a first image processing module, a second image processing module, a third max pooling layer, a third image processing module, a fourth image processing module, a fifth image processing module, a sixth image processing module, a seventh image processing module, a fourth max pooling layer, an eighth image processing module, a ninth image processing module, a first average pooling layer, a first fully connected layer, a first Softmax activation layer and an output layer.
Furthermore, the first through ninth image processing modules have the same structure and each comprise a fourth convolution layer, a fifth convolution layer, a sixth convolution layer and a fifth max pooling layer; the input ends of the fourth convolution layer, the fifth convolution layer, the sixth convolution layer and the fifth max pooling layer jointly form the input end of the image processing module; the output end of the fourth convolution layer is connected with the input end of the aggregation layer; the fifth convolution layer is connected with the input end of the aggregation layer through the seventh convolution layer; the sixth convolution layer is connected with the input end of the aggregation layer through the eighth convolution layer; and the output end of the fifth max pooling layer is connected with the input end of the aggregation layer through the ninth convolution layer. The output end of the aggregation layer is the output end of the image processing module, and the aggregation layer concatenates its inputs along the output-channel dimension.
Further, the output end of the third image processing module is connected with a first auxiliary classification module, and the output end of the sixth image processing module is connected with a second auxiliary classification module; the first auxiliary classification module and the second auxiliary classification module have the same structure and each comprise, connected in sequence, a second average pooling layer, a tenth convolution layer, a second fully connected layer, a third fully connected layer, a second Softmax activation layer and an auxiliary classification output layer.
Further, the loss function L in step S4 is specifically:
L = −(1/(N·K)) Σₙ Σₖ p(n, k) · log a(n, k) + λ₁ · R(W) + λ₂ · R(W′)

where n = 1, 2, …, N, N denotes the total number of samples of each class, k = 1, 2, …, K, K denotes the number of sample classes, a(n, k) denotes the activation-function value for class k of the output computed by the part recognition neural network for the n-th sample, p(n, k) denotes the probability that the n-th sample belongs to class k, λ₁ denotes the first loss calculation parameter value, λ₂ denotes the second loss calculation parameter value, R(·) denotes regularization, W denotes the network parameters of the first part recognition neural network, and W′ denotes the network parameters of the second part recognition neural network;

h(n) = φ(x(n); W, b)

where h(n) denotes the abstract features of the input signal obtained when, under the part recognition neural network parameters W and b, the input sample is x(n); b denotes the network parameters of the third part recognition neural network; and ŷ(n) denotes the label obtained for the input feature h(n) when the part recognition neural network parameters are W′;

W(k) ← W(k) − η · ΔW(k), W′(k) ← W′(k) − η · ΔW′(k), b(k) ← b(k) − η · Δb(k)

where W(k) denotes the network parameters of the first part recognition neural network when trained using class-k samples, W′(k) denotes the network parameters of the second part recognition neural network when trained using class-k samples, b(k) denotes the network parameters of the third part recognition neural network when trained using class-k samples, ΔW(k), ΔW′(k) and Δb(k) each denote a differential term, and η denotes the network-update learning rate.
Further, the step S5 of inputting the preprocessed image to be recognized into the trained part recognition neural network to obtain the part classification result includes the specific steps of:
b1, inputting the preprocessed image to be recognized into the trained part recognition neural network;
B2, obtaining the classification result of the output layer, the classification result of the first auxiliary classification module and the classification result of the second auxiliary classification module;
B3, setting weight values for the output layer, the first auxiliary classification module and the second auxiliary classification module, respectively;
B4, for each class, summing the weights of identical classification results among the three outputs, and taking the classification result with the maximum total weight as the part classification result.
The invention has the following beneficial effects:
(1) The invention provides the image processing module, which increases the depth and width of the network, improves the performance of the deep neural network, accelerates the training process, and ensures the accuracy of the network in later stages.
(2) The invention provides a part classification method based on image recognition that performs auxiliary classification by arranging several classifiers, realizing accurate part classification.
(3) The part recognition neural network of the invention avoids the problem of vanishing gradients despite its greater width and depth.
(4) The method is simple and fast; by realizing part classification it can assist maintenance personnel or field workers in maintaining equipment, so that part identification during maintenance saves time and labor.
Drawings
FIG. 1 is a flow chart of a part classification method based on image recognition according to the present invention;
FIG. 2 is a schematic diagram of a part recognition neural network according to the present invention;
FIG. 3 is a schematic diagram of an image processing module according to the present invention;
FIG. 4 is a schematic diagram of an auxiliary classification module according to the present invention.
Detailed Description
The following embodiments are provided to facilitate the understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; for those skilled in the art, various changes that fall within the spirit and scope of the invention as defined by the appended claims are obvious, and everything produced using the inventive concept is protected.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a part classification method based on image recognition includes the following steps:
S1, collecting part images of K classes of parts, with N images collected for each class;
s2, preprocessing the collected part images, and taking the preprocessed part images as a training set;
s3, constructing a part recognition neural network, and initializing network parameters of the part recognition neural network to obtain a primary part recognition neural network;
s4, constructing a loss function, taking the minimum loss function as a target, and training the primary part recognition neural network through a training set until the loss function is smaller than a set training threshold value a to obtain a trained part recognition neural network;
and S5, collecting the image to be recognized, preprocessing the image to be recognized, and inputting the preprocessed image to be recognized into the trained part recognition neural network to obtain a part classification result.
The specific method for preprocessing the part images in step S2 is as follows (steps A1 and A2 are sketched in code after the list):
A1, sequentially performing Gaussian filtering, mean filtering, minimum mean-square-error filtering and Gabor filtering on the part image to obtain a first-level processed part image;
A2, graying the first-level processed part image to obtain a second-level processed part image;
A3, obtaining the gradient of the pixel points in the second-level processed part image and, according to the gradient, performing a gray-level representation of the second-level processed part image to obtain a third-level processed part image;
A4, performing contour coordinate reconstruction on the third-level processed part image to obtain a fourth-level processed part image;
A5, extracting the contour region in the fourth-level processed part image to obtain the preprocessed part image.
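A minimal sketch of steps A1 and A2 in Python, assuming OpenCV and SciPy are available. The kernel sizes and Gabor parameters are not specified in the source and are chosen here only for illustration, and the minimum mean-square-error filtering is approximated by SciPy's Wiener filter, the classical MMSE estimator.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess_stage_one(img_bgr: np.ndarray) -> np.ndarray:
    """Steps A1-A2 (sketch): filter chain, then grayscale."""
    x = cv2.GaussianBlur(img_bgr, (5, 5), 0)   # Gaussian filtering
    x = cv2.blur(x, (5, 5))                    # mean filtering
    # Minimum mean-square-error filtering, approximated by a
    # per-channel Wiener filter (an assumption).
    x = x.astype(np.float64)
    x = np.stack([wiener(x[..., c], mysize=5) for c in range(x.shape[-1])],
                 axis=-1)
    # Gabor filtering with one example kernel; orientation and scale
    # are not given in the source, so these parameters are assumptions.
    gabor = cv2.getGaborKernel((9, 9), sigma=2.0, theta=0.0,
                               lambd=8.0, gamma=0.5)
    x = cv2.filter2D(x.astype(np.float32), -1, gabor)
    # Step A2: gray the filtered (first-level processed) image.
    return cv2.cvtColor(np.clip(x, 0, 255).astype(np.uint8),
                        cv2.COLOR_BGR2GRAY)
```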
the specific steps of the step A3 are as follows:
a31, sequentially solving the gradient of each pixel point in the secondary processing part image function f (x, y)Comprises the following steps:
wherein X represents an abscissa of the pixel point, Y represents an ordinate of the pixel point, X =0,1,., X, Y =0,1,.., Y, X represents a maximum abscissa, Y represents a maximum ordinate,,;
a32, setting a gray threshold T, and determining the gradient of each pixel point according to the gray threshold TPerforming gray scale on the image of the secondary processing partRepresenting to obtain a three-level processing part image; the gray scaleComprises the following steps:
wherein M represents a pixel point located on the contour, and N represents a pixel point on the non-contour line.
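Steps A31 and A32 can be sketched with NumPy as follows. The replicate padding at the image border and the concrete gray values used for M and N (255 and 0) are assumptions, since the source leaves them abstract.

```python
import numpy as np

def gradient_threshold(gray: np.ndarray, T: float,
                       M: int = 255, N: int = 0) -> np.ndarray:
    """Steps A31-A32 (sketch): discrete gradient, then two-level image."""
    f = gray.astype(np.float64)
    # Forward differences; the last row/column is replicated so the
    # gradient is also defined at x = X and y = Y (an assumption).
    dx = np.diff(f, axis=1, append=f[:, -1:])   # f(x+1, y) - f(x, y)
    dy = np.diff(f, axis=0, append=f[-1:, :])   # f(x, y+1) - f(x, y)
    grad = np.sqrt(dx ** 2 + dy ** 2)           # gradient magnitude
    # A32: pixels whose gradient reaches the threshold T are marked as
    # contour pixels (gray value M), all others as non-contour (N).
    return np.where(grad >= T, M, N).astype(np.uint8)
```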
The specific steps of step A4 are as follows (a code sketch follows the list):
A41, randomly finding, in the third-level processed part image, a pixel point whose gray level is M, and recording it as the starting pixel point;
A42, taking the starting pixel point as the center, extracting the pixel points whose gray level is M from among all of its neighboring pixel points;
A43, selecting, from the pixel points with gray level M obtained in step A42, the pixel point with the maximum gradient and, taking that pixel point as the center, extracting the pixel points with gray level M from among all of its neighboring pixel points;
A44, continuing by analogy with the method of step A43 to obtain all contour pixel points in the third-level processed part image, thereby obtaining the fourth-level processed part image.
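One reading of steps A41 to A44 is a greedy walk over 8-connected neighbors; a sketch follows. The neighborhood definition, the visited-set bookkeeping, and the termination rule are assumptions not fixed by the source, which also picks the starting pixel at random (the sketch takes the first one found).

```python
import numpy as np

def trace_contour(levels: np.ndarray, grad: np.ndarray,
                  M: int = 255) -> list[tuple[int, int]]:
    """Steps A41-A44 (sketch): greedy walk over pixels with gray value M."""
    ys, xs = np.nonzero(levels == M)
    if len(ys) == 0:
        return []
    current = (int(ys[0]), int(xs[0]))    # A41: starting M-valued pixel
    contour, visited = [current], {current}
    while True:
        y, x = current
        # A42/A43: among the 8 neighbours, keep unvisited M-valued pixels.
        candidates = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if not (dy == 0 and dx == 0)]
        candidates = [(py, px) for py, px in candidates
                      if 0 <= py < levels.shape[0]
                      and 0 <= px < levels.shape[1]
                      and levels[py, px] == M and (py, px) not in visited]
        if not candidates:
            break                          # A44: walk terminates
        # A43: continue from the neighbour with the maximum gradient.
        current = max(candidates, key=lambda p: grad[p])
        contour.append(current)
        visited.add(current)
    return contour
```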
The specific step of extracting the contour region in the fourth-level processed part image in step A5 to obtain the preprocessed part image is: extracting a square region containing all contour pixel points of the fourth-level processed part image and resizing the square region to 224 × 224 to obtain the preprocessed part image.
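Step A5 can be sketched as a square crop around the contour pixels followed by a resize; the centering and border-clamping policy used here is an assumption.

```python
import cv2
import numpy as np

def crop_contour_region(img: np.ndarray, contour_mask: np.ndarray,
                        size: int = 224) -> np.ndarray:
    """Step A5 (sketch): square crop around all contour pixels, resized."""
    ys, xs = np.nonzero(contour_mask)     # assumes at least one contour pixel
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    # Expand the bounding box to a square containing all contour pixels.
    side = max(y1 - y0, x1 - x0) + 1
    cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
    half = side // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    crop = img[y0:y0 + side, x0:x0 + side]
    return cv2.resize(crop, (size, size))  # "resize the square region to 224 x 224"
```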
As shown in fig. 2, the specific structure of the part recognition neural network in step S3 comprises, connected in sequence: an input layer, a first convolution layer, a first max pooling layer, a first normalization layer, a second convolution layer, a third convolution layer, a second normalization layer (LocalRespNorm), a second max pooling layer, a first image processing module, a second image processing module, a third max pooling layer, a third image processing module, a fourth image processing module, a fifth image processing module, a sixth image processing module, a seventh image processing module, a fourth max pooling layer, an eighth image processing module, a ninth image processing module, a first average pooling layer, a first fully connected layer, a first Softmax activation layer and an output layer.
As shown in fig. 3, the first through ninth image processing modules have the same structure and each comprise a fourth convolution layer, a fifth convolution layer, a sixth convolution layer and a fifth max pooling layer; the input ends of the fourth convolution layer, the fifth convolution layer, the sixth convolution layer and the fifth max pooling layer jointly form the input end of the image processing module; the output end of the fourth convolution layer is connected with the input end of the aggregation layer (DepthConcat); the fifth convolution layer is connected with the input end of the aggregation layer through the seventh convolution layer; the sixth convolution layer is connected with the input end of the aggregation layer through the eighth convolution layer; and the output end of the fifth max pooling layer is connected with the input end of the aggregation layer through the ninth convolution layer. The output end of the aggregation layer is the output end of the image processing module, and the aggregation layer concatenates its inputs along the output-channel dimension.
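This module matches the Inception module of the GoogLeNet architecture cited among the non-patent references ("Going deeper with convolutions"); a PyTorch sketch follows. The kernel sizes of the seventh and eighth convolution layers (3×3 and 5×5) and the per-branch channel counts are assumptions patterned on that paper.

```python
import torch
from torch import nn

class ImageProcessingModule(nn.Module):
    """Four parallel branches aggregated by depth concatenation (DepthConcat)."""
    def __init__(self, c_in: int, c1: int, c3r: int, c3: int,
                 c5r: int, c5: int, cp: int):
        super().__init__()
        # Fourth convolution layer (1x1), connected directly to DepthConcat.
        self.b1 = nn.Conv2d(c_in, c1, kernel_size=1)
        # Fifth convolution layer (1x1) -> seventh convolution layer (3x3 assumed).
        self.b2 = nn.Sequential(nn.Conv2d(c_in, c3r, kernel_size=1),
                                nn.Conv2d(c3r, c3, kernel_size=3, padding=1))
        # Sixth convolution layer (1x1) -> eighth convolution layer (5x5 assumed).
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c5r, kernel_size=1),
                                nn.Conv2d(c5r, c5, kernel_size=5, padding=2))
        # Fifth max pooling layer -> ninth convolution layer (1x1).
        self.b4 = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # DepthConcat: aggregate the branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```

For instance, ImageProcessingModule(192, 64, 96, 128, 16, 32, 32) reproduces the channel split of the first Inception block of the cited GoogLeNet paper.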
The output end of the third image processing module is further connected with the first auxiliary classification module, and the output end of the sixth image processing module is further connected with the second auxiliary classification module.
As shown in fig. 4, the first auxiliary classification module and the second auxiliary classification module have the same structure, each comprising, connected in sequence, a second average pooling layer, a tenth convolution layer, a second fully connected layer, a third fully connected layer, a second Softmax activation layer and an auxiliary classification output layer.
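A PyTorch sketch of the auxiliary classification module follows; the pooling size and the width values (128 and 1024) are assumptions patterned on GoogLeNet's auxiliary heads, since the source does not state them.

```python
import torch
from torch import nn

class AuxiliaryClassifier(nn.Module):
    """Auxiliary classification module (sketch): pool, 1x1 conv, two FC
    layers, Softmax output."""
    def __init__(self, c_in: int, num_classes: int):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=5, stride=3)  # second average pooling
        self.conv = nn.Conv2d(c_in, 128, kernel_size=1)    # tenth convolution layer
        self.fc1 = nn.LazyLinear(1024)                     # second fully connected layer
        self.fc2 = nn.Linear(1024, num_classes)            # third fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(self.pool(x))
        x = torch.flatten(x, start_dim=1)
        x = torch.relu(self.fc1(x))
        # Second Softmax activation layer -> auxiliary classification output.
        return torch.softmax(self.fc2(x), dim=1)
```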
In this embodiment, the outputs of the first max pooling layer, the second max pooling layer and each convolution layer are passed through a ReLU activation before being transmitted to the next layer.
The loss function L in step S4 is specifically:
L = −(1/(N·K)) Σₙ Σₖ p(n, k) · log a(n, k) + λ₁ · R(W) + λ₂ · R(W′)

where n = 1, 2, …, N, N denotes the total number of samples of each class, k = 1, 2, …, K, K denotes the number of sample classes, a(n, k) denotes the activation-function value for class k of the output computed by the part recognition neural network for the n-th sample, p(n, k) denotes the probability that the n-th sample belongs to class k, λ₁ denotes the first loss calculation parameter value, λ₂ denotes the second loss calculation parameter value, R(·) denotes regularization, W denotes the network parameters of the first part recognition neural network, and W′ denotes the network parameters of the second part recognition neural network;

h(n) = φ(x(n); W, b)

where h(n) denotes the abstract features of the input signal obtained when, under the part recognition neural network parameters W and b, the input sample is x(n); b denotes the network parameters of the third part recognition neural network; and ŷ(n) denotes the label obtained for the input feature h(n) when the part recognition neural network parameters are W′;

W(k) ← W(k) − η · ΔW(k), W′(k) ← W′(k) − η · ΔW′(k), b(k) ← b(k) − η · Δb(k)

where W(k) denotes the network parameters of the first part recognition neural network when trained using class-k samples, W′(k) denotes the network parameters of the second part recognition neural network when trained using class-k samples, b(k) denotes the network parameters of the third part recognition neural network when trained using class-k samples, ΔW(k), ΔW′(k) and Δb(k) each denote a differential term, and η denotes the network-update learning rate.
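A sketch of the cross-entropy-plus-regularization reading of the loss given above follows. Treating R(·) as an L2 norm and collapsing the parameters of the first and second networks into a single model are assumptions.

```python
import torch
from torch import nn

def part_loss(outputs: torch.Tensor, targets: torch.Tensor,
              model: nn.Module, lam1: float = 1e-4,
              lam2: float = 1e-4) -> torch.Tensor:
    """Step S4 loss (sketch): cross-entropy between Softmax activations
    a(n, k) and class probabilities p(n, k), plus weighted regularization."""
    # Cross-entropy term: -(1/N) sum_n sum_k p(n, k) * log a(n, k);
    # `outputs` are post-Softmax probabilities, `targets` one-hot rows.
    log_a = torch.log(outputs.clamp_min(1e-12))
    ce = -(targets * log_a).sum(dim=1).mean()
    # Regularization; one L2 term stands in for both lambda-weighted
    # terms of the patent (an assumption).
    reg = sum((p ** 2).sum() for p in model.parameters())
    return ce + (lam1 + lam2) * reg
```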
In step S5, the specific steps of inputting the preprocessed image to be recognized into the trained part recognition neural network to obtain a part classification result are as follows (sketched in code after the list):
B1, inputting the preprocessed image to be recognized into the trained part recognition neural network;
B2, obtaining the classification result of the output layer, the classification result of the first auxiliary classification module and the classification result of the second auxiliary classification module;
B3, setting weight values for the output layer, the first auxiliary classification module and the second auxiliary classification module, respectively;
B4, for each class, summing the weights of identical classification results among the three outputs, and taking the classification result with the maximum total weight as the part classification result.
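Steps B2 to B4 amount to weighted voting over the three classifier outputs; a sketch follows, with the weight values chosen arbitrarily since the source leaves them unspecified.

```python
import numpy as np

def weighted_vote(main_probs: np.ndarray, aux1_probs: np.ndarray,
                  aux2_probs: np.ndarray,
                  w: tuple[float, float, float] = (1.0, 0.3, 0.3)) -> int:
    """Steps B2-B4 (sketch): weight the three classification results,
    sum weights per class, return the class with the maximum total."""
    classes = [int(main_probs.argmax()), int(aux1_probs.argmax()),
               int(aux2_probs.argmax())]           # B2: three results
    totals: dict[int, float] = {}
    for cls, weight in zip(classes, w):            # B3: assigned weights
        totals[cls] = totals.get(cls, 0.0) + weight
    return max(totals, key=totals.get)             # B4: maximum summed weight
```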
Claims (10)
1. A part classification method based on image recognition is characterized by comprising the following steps:
S1, collecting part images of K classes of parts, with N images collected for each class;
s2, preprocessing the collected part images, and taking the preprocessed part images as a training set;
s3, constructing a part recognition neural network, and initializing network parameters of the part recognition neural network to obtain a primary part recognition neural network;
s4, constructing a loss function, taking the minimum loss function as a target, and training the primary part recognition neural network through a training set until the loss function is smaller than a set training threshold value a to obtain a trained part recognition neural network;
and S5, collecting the image to be recognized, preprocessing the image to be recognized, and inputting the preprocessed image to be recognized into the trained part recognition neural network to obtain a part classification result.
2. The part classification method based on image recognition according to claim 1, wherein the specific method for preprocessing the part images in step S2 is as follows:
A1, sequentially performing Gaussian filtering, mean filtering, minimum mean-square-error filtering and Gabor filtering on the part image to obtain a first-level processed part image;
A2, graying the first-level processed part image to obtain a second-level processed part image;
A3, obtaining the gradient of the pixel points in the second-level processed part image and, according to the gradient, performing a gray-level representation of the second-level processed part image to obtain a third-level processed part image;
A4, performing contour coordinate reconstruction on the third-level processed part image to obtain a fourth-level processed part image;
and A5, extracting the contour region in the fourth-level processed part image to obtain the preprocessed part image.
3. The part classification method based on image recognition according to claim 2, wherein the specific steps of step A3 are as follows:
A31, computing in turn the gradient grad(x, y) of each pixel point of the second-level processed part image f(x, y) as:

grad(x, y) = sqrt([f(x + 1, y) − f(x, y)]² + [f(x, y + 1) − f(x, y)]²)

where x denotes the abscissa of the pixel point, y denotes the ordinate of the pixel point, x = 0, 1, …, X, y = 0, 1, …, Y, X denotes the maximum abscissa, and Y denotes the maximum ordinate;
A32, setting a gray threshold T and, according to the gray threshold T and the gradient grad(x, y) of each pixel point, performing the gray-level representation gray(x, y) of the second-level processed part image to obtain the third-level processed part image, where the gray level gray(x, y) is:

gray(x, y) = M if grad(x, y) ≥ T, and gray(x, y) = N if grad(x, y) < T

where M denotes a pixel point located on the contour and N denotes a pixel point not on the contour line.
4. The part classification method based on image recognition according to claim 2, wherein the specific steps of step A4 are as follows:
A41, randomly finding, in the third-level processed part image, a pixel point whose gray level is M, and recording it as the starting pixel point;
A42, taking the starting pixel point as the center, extracting the pixel points whose gray level is M from among all of its neighboring pixel points;
A43, selecting, from the pixel points with gray level M obtained in step A42, the pixel point with the maximum gradient and, taking that pixel point as the center, extracting the pixel points with gray level M from among all of its neighboring pixel points;
and A44, continuing by analogy with the method of step A43 to obtain all contour pixel points in the third-level processed part image, thereby obtaining the fourth-level processed part image.
5. The part classification method based on image recognition according to claim 4, wherein the specific step of extracting the contour region in the fourth-level processed part image in step A5 to obtain the preprocessed part image is: extracting a square region containing all contour pixel points of the fourth-level processed part image and resizing the square region to 224 × 224 to obtain the preprocessed part image.
6. The part classification method based on image recognition according to claim 1, wherein the specific structure of the part recognition neural network in step S3 comprises, connected in sequence: an input layer, a first convolution layer, a first max pooling layer, a first normalization layer, a second convolution layer, a third convolution layer, a second normalization layer, a second max pooling layer, a first image processing module, a second image processing module, a third max pooling layer, a third image processing module, a fourth image processing module, a fifth image processing module, a sixth image processing module, a seventh image processing module, a fourth max pooling layer, an eighth image processing module, a ninth image processing module, a first average pooling layer, a first fully connected layer, a first Softmax activation layer and an output layer.
7. The part classification method based on image recognition according to claim 6, wherein the first through ninth image processing modules have the same structure and each comprise a fourth convolution layer, a fifth convolution layer, a sixth convolution layer and a fifth max pooling layer; the input ends of the fourth convolution layer, the fifth convolution layer, the sixth convolution layer and the fifth max pooling layer jointly form the input end of the image processing module; the output end of the fourth convolution layer is connected with the input end of the aggregation layer; the fifth convolution layer is connected with the input end of the aggregation layer through the seventh convolution layer; the sixth convolution layer is connected with the input end of the aggregation layer through the eighth convolution layer; and the output end of the fifth max pooling layer is connected with the input end of the aggregation layer through the ninth convolution layer; the output end of the aggregation layer is the output end of the image processing module, and the aggregation layer concatenates its inputs along the output-channel dimension.
8. The part classification method based on image recognition according to claim 6, wherein the output end of the third image processing module is further connected with a first auxiliary classification module and the output end of the sixth image processing module is further connected with a second auxiliary classification module; the first auxiliary classification module and the second auxiliary classification module have the same structure and each comprise, connected in sequence, a second average pooling layer, a tenth convolution layer, a second fully connected layer, a third fully connected layer, a second Softmax activation layer and an auxiliary classification output layer.
9. The method for classifying parts based on image recognition according to claim 1, wherein the loss function L in step S4 is specifically:
L = −(1/(N·K)) Σₙ Σₖ p(n, k) · log a(n, k) + λ₁ · R(W) + λ₂ · R(W′)

where n = 1, 2, …, N, N denotes the total number of samples of each class, k = 1, 2, …, K, K denotes the number of sample classes, a(n, k) denotes the activation-function value for class k of the output computed by the part recognition neural network for the n-th sample, p(n, k) denotes the probability that the n-th sample belongs to class k, λ₁ denotes the first loss calculation parameter value, λ₂ denotes the second loss calculation parameter value, R(·) denotes regularization, W denotes the network parameters of the first part recognition neural network, and W′ denotes the network parameters of the second part recognition neural network;

h(n) = φ(x(n); W, b)

where h(n) denotes the abstract features of the input signal obtained when, under the part recognition neural network parameters W and b, the input sample is x(n); b denotes the network parameters of the third part recognition neural network; and ŷ(n) denotes the label obtained for the input feature h(n) when the part recognition neural network parameters are W′;

W(k) ← W(k) − η · ΔW(k), W′(k) ← W′(k) − η · ΔW′(k), b(k) ← b(k) − η · Δb(k)

where W(k) denotes the network parameters of the first part recognition neural network when trained using class-k samples, W′(k) denotes the network parameters of the second part recognition neural network when trained using class-k samples, b(k) denotes the network parameters of the third part recognition neural network when trained using class-k samples, ΔW(k), ΔW′(k) and Δb(k) each denote a differential term, and η denotes the network-update learning rate.
10. The method for classifying parts based on image recognition according to claim 8, wherein the step S5 of inputting the pre-processed image to be recognized into the trained part recognition neural network to obtain the part classification result comprises the specific steps of:
b1, inputting the preprocessed image to be recognized into the trained part recognition neural network;
B2, obtaining the classification result of the output layer, the classification result of the first auxiliary classification module and the classification result of the second auxiliary classification module;
B3, setting weight values for the output layer, the first auxiliary classification module and the second auxiliary classification module, respectively;
and B4, for each class, summing the weights of identical classification results among the three outputs, and taking the classification result with the maximum total weight as the part classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010825217.1A CN111738367B (en) | 2020-08-17 | 2020-08-17 | Part classification method based on image recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010825217.1A CN111738367B (en) | 2020-08-17 | 2020-08-17 | Part classification method based on image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111738367A true CN111738367A (en) | 2020-10-02 |
CN111738367B CN111738367B (en) | 2020-11-13 |
Family
ID=72658509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010825217.1A Active CN111738367B (en) | 2020-08-17 | 2020-08-17 | Part classification method based on image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111738367B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112990132A (en) * | 2021-04-27 | 2021-06-18 | 成都中轨轨道设备有限公司 | Positioning and identifying method for track number plate |
CN114435795A (en) * | 2022-02-25 | 2022-05-06 | 湘南学院 | Garbage classification system |
CN114581439A (en) * | 2022-04-29 | 2022-06-03 | 天津七一二通信广播股份有限公司 | Method and system for quickly and automatically counting bulk parts |
WO2023106243A1 (en) * | 2021-12-06 | 2023-06-15 | ダイキン工業株式会社 | Part identification method and identification device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106650721A (en) * | 2016-12-28 | 2017-05-10 | 吴晓军 | Industrial character identification method based on convolution neural network |
CN107886073A (en) * | 2017-11-10 | 2018-04-06 | 重庆邮电大学 | A kind of more attribute recognition approaches of fine granularity vehicle based on convolutional neural networks |
CN108109137A (en) * | 2017-12-13 | 2018-06-01 | 重庆越畅汽车科技有限公司 | The Machine Vision Inspecting System and method of vehicle part |
CN109598306A (en) * | 2018-12-06 | 2019-04-09 | 西安电子科技大学 | Hyperspectral image classification method based on SRCM and convolutional neural networks |
CN110633738A (en) * | 2019-08-30 | 2019-12-31 | 杭州电子科技大学 | Rapid classification method for industrial part images |
CN110796253A (en) * | 2019-11-01 | 2020-02-14 | 中国联合网络通信集团有限公司 | Training method and device for generating countermeasure network |
CN110852265A (en) * | 2019-11-11 | 2020-02-28 | 天津津航技术物理研究所 | Rapid target detection and positioning method applied to industrial production line |
CN111079748A (en) * | 2019-12-12 | 2020-04-28 | 哈尔滨市科佳通用机电股份有限公司 | Method for detecting oil throwing fault of rolling bearing of railway wagon |
CN111460894A (en) * | 2020-03-03 | 2020-07-28 | 温州大学 | Intelligent car logo detection method based on convolutional neural network |
CN111540203A (en) * | 2020-04-30 | 2020-08-14 | 东华大学 | Method for adjusting green light passing time based on fast-RCNN |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106650721A (en) * | 2016-12-28 | 2017-05-10 | 吴晓军 | Industrial character identification method based on convolution neural network |
CN107886073A (en) * | 2017-11-10 | 2018-04-06 | 重庆邮电大学 | A kind of more attribute recognition approaches of fine granularity vehicle based on convolutional neural networks |
CN108109137A (en) * | 2017-12-13 | 2018-06-01 | 重庆越畅汽车科技有限公司 | The Machine Vision Inspecting System and method of vehicle part |
CN109598306A (en) * | 2018-12-06 | 2019-04-09 | 西安电子科技大学 | Hyperspectral image classification method based on SRCM and convolutional neural networks |
CN110633738A (en) * | 2019-08-30 | 2019-12-31 | 杭州电子科技大学 | Rapid classification method for industrial part images |
CN110796253A (en) * | 2019-11-01 | 2020-02-14 | 中国联合网络通信集团有限公司 | Training method and device for generating countermeasure network |
CN110852265A (en) * | 2019-11-11 | 2020-02-28 | 天津津航技术物理研究所 | Rapid target detection and positioning method applied to industrial production line |
CN111079748A (en) * | 2019-12-12 | 2020-04-28 | 哈尔滨市科佳通用机电股份有限公司 | Method for detecting oil throwing fault of rolling bearing of railway wagon |
CN111460894A (en) * | 2020-03-03 | 2020-07-28 | 温州大学 | Intelligent car logo detection method based on convolutional neural network |
CN111540203A (en) * | 2020-04-30 | 2020-08-14 | 东华大学 | Method for adjusting green light passing time based on fast-RCNN |
Non-Patent Citations (5)
Title |
---|
CHRISTIAN SZEGEDY et al.: "Going deeper with convolutions", arXiv:1409.4842v1 [cs.CV] *
HONGYANG LI et al.: "CNN for saliency detection with low-level feature integration", Neurocomputing *
ILYA SUTSKEVER et al.: "On the importance of initialization and momentum in deep learning", ICML '13: Proceedings of the 30th International Conference on Machine Learning *
YANG Bei: "Research on Key Technologies of Mechanical Part Classification Based on Deep Learning", China Masters' Theses Full-text Database, Engineering Science and Technology II *
ZHAO Peng: "Application of Convolutional Neural Networks in Defect Classification and Detection of Non-woven Fabric", Packaging Engineering *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112990132A (en) * | 2021-04-27 | 2021-06-18 | 成都中轨轨道设备有限公司 | Positioning and identifying method for track number plate |
CN112990132B (en) * | 2021-04-27 | 2023-01-03 | 成都中轨轨道设备有限公司 | Positioning and identifying method for track number plate |
WO2023106243A1 (en) * | 2021-12-06 | 2023-06-15 | ダイキン工業株式会社 | Part identification method and identification device |
CN114435795A (en) * | 2022-02-25 | 2022-05-06 | 湘南学院 | Garbage classification system |
CN114581439A (en) * | 2022-04-29 | 2022-06-03 | 天津七一二通信广播股份有限公司 | Method and system for quickly and automatically counting bulk parts |
Also Published As
Publication number | Publication date |
---|---|
CN111738367B (en) | 2020-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111738367B (en) | Part classification method based on image recognition | |
CN104408449B (en) | Intelligent mobile terminal scene literal processing method | |
CN104598885B (en) | The detection of word label and localization method in street view image | |
CN111709935B (en) | Real-time coal gangue positioning and identifying method for ground moving belt | |
CN112634243B (en) | Image classification and recognition system based on deep learning under strong interference factors | |
CN110929713B (en) | Steel seal character recognition method based on BP neural network | |
CN111027443B (en) | Bill text detection method based on multitask deep learning | |
CN112307919B (en) | Improved YOLOv 3-based digital information area identification method in document image | |
CN105117740B (en) | Font identification method and apparatus | |
CN113920516B (en) | Calligraphy character skeleton matching method and system based on twin neural network | |
Lv et al. | Few-shot learning combine attention mechanism-based defect detection in bar surface | |
CN109086772A (en) | A kind of recognition methods and system distorting adhesion character picture validation code | |
CN111652846B (en) | Semiconductor defect identification method based on characteristic pyramid convolution neural network | |
CN114359199A (en) | Fish counting method, device, equipment and medium based on deep learning | |
CN112597904A (en) | Method for identifying and classifying blast furnace charge level images | |
CN111340032A (en) | Character recognition method based on application scene in financial field | |
CN115272225A (en) | Strip steel surface defect detection method and system based on countermeasure learning network | |
CN108932471B (en) | Vehicle detection method | |
CN114187247A (en) | Ampoule bottle printing character defect detection method based on image registration | |
CN111932639B (en) | Detection method of unbalanced defect sample based on convolutional neural network | |
CN115830514B (en) | Whole river reach surface flow velocity calculation method and system suitable for curved river channel | |
CN109829511B (en) | Texture classification-based method for detecting cloud layer area in downward-looking infrared image | |
CN105844299A (en) | Image classification method based on bag of words | |
CN113516193B (en) | Image processing-based red date defect identification and classification method and device | |
CN113610831B (en) | Wood defect detection method based on computer image technology and transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |