CN111007068B - Yellow cultivated diamond grade classification method based on deep learning - Google Patents

Yellow cultivated diamond grade classification method based on deep learning

Info

Publication number
CN111007068B
CN111007068B (application CN201911149112.2A)
Authority
CN
China
Prior art keywords
image
diamond
yellow
classification
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911149112.2A
Other languages
Chinese (zh)
Other versions
CN111007068A (en)
Inventor
杨建新
兰小平
闫蕾
王波
杨一铭
冯亚东
姚志强
刘文军
王伟平
宋培卿
程辉
郭世峰
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Central Of China North Industries Group Corp
Original Assignee
Information Central Of China North Industries Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Central Of China North Industries Group Corp filed Critical Information Central Of China North Industries Group Corp
Priority to CN201911149112.2A priority Critical patent/CN111007068B/en
Publication of CN111007068A publication Critical patent/CN111007068A/en
Application granted granted Critical
Publication of CN111007068B publication Critical patent/CN111007068B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/87: Investigating jewels
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of yellow cultivated diamond grade classification, and particularly relates to a yellow cultivated diamond grade classification method based on deep learning. Compared with the prior art, the yellow cultivated diamond grade classification method based on deep learning and multi-model decision-making provided by the invention achieves online image acquisition and accurate classification of yellow diamonds, with an efficiency far higher than that of manual visual sorting. The invention is the first application of machine vision, image processing, deep learning and multi-model decision-making technologies to the automatic sorting of yellow cultivated diamonds.

Description

Yellow cultivated diamond grade classification method based on deep learning
Technical Field
The invention belongs to the technical field of yellow cultivated diamond grade classification, and particularly relates to a yellow cultivated diamond grade classification method based on deep learning.
Background
Difficult machining problems involving hard materials arise in fields such as aerospace, national defense and military industry, photovoltaics and electronic information, and at present they cannot be solved without diamond, a superhard material. Superhard materials and products, represented by diamond, are known as the "hardest and sharpest industrial teeth". China is a major producer of synthetic diamond. According to statistics, by 2017 China's synthetic diamond output was about 25 billion carats, more than 90% of global output, and China has held this dominant market position for 15 consecutive years.
Yellow grown diamond (hereinafter referred to as yellow diamond) is one kind of synthetic diamond and is widely used for industrial tools, drills, and the like. Yellow diamond is produced by pressing carbon raw-material powder into diamond under high temperature and high pressure, with a metal catalyst added during synthesis, so many yellow diamonds contain metal impurities. These impurities affect the grade of the yellow diamonds, and grading is directly related to product quality and economic benefit. At present, industrial yellow diamonds in China are mainly sorted manually: a sorter observes, with the naked eye under a magnifier, the external appearance and internal impurity characteristics of each diamond particle of 1.2-1.4 mm diameter and assigns a grade from experience. This manual approach is labor-intensive, subjective and inefficient. Therefore, how to realize automatic and intelligent yellow diamond grade classification in industry is a technical problem that urgently needs to be solved.
At present, deep learning techniques based on machine vision are widely applied in practical engineering, and in particular are widely practiced and popularized in the classification and quality inspection of industrial products.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is as follows: how to provide a yellow diamond grade classification method based on machine vision and deep learning that overcomes the high labor consumption, low accuracy and low efficiency of the prior art.
(II) technical scheme
In order to solve the technical problem, the invention provides a yellow cultivated diamond grade classification method based on deep learning, which comprises the following steps:
step S1: collecting yellow diamond original sample images: illuminating by a coaxial light source, and acquiring an original sample image of the yellow diamond by using a CCD industrial camera and an image acquisition card;
step S2: yellow diamond original sample image classification: according to different grades of yellow diamonds, dividing the yellow diamond original sample image into a first-grade sample, a second-grade sample and a third-grade sample;
the step S2 includes the following sub-steps:
Step S201: in order to avoid inaccurate sample classification caused by visual observation errors and personal experience, the yellow diamond original sample images are classified separately according to the International Diamond Council diamond classification standard and the Chinese technical supervision authority diamond classification standard GB/T 16554-2003, yielding two training sample sets denoted U_a and U_b respectively;
Step S202: to improve the accuracy of yellow diamond grade classification, the yellow diamond samples whose classifications agree are screened out of U_a and U_b to form a reinforced training sample set, denoted U_c;
Step S3: preprocessing the yellow diamond original sample images: the original sample images in U_a, U_b and U_c are processed by graying, filter denoising, binarization, erosion and dilation, and minimum bounding rectangle detection to obtain yellow diamond images of uniform size 299 x 299;
the step S3 includes the following sub-steps:
Step S301: graying: perform graying processing on the images from step S2;
Step S302: filter denoising: smooth the high-frequency noise in the image obtained in step S301 with a low-pass filter, reducing abrupt changes in the image;
Step S303: binarization: set pixel values not greater than 90 in the image from step S302 to 0 (black) and the remaining pixel values to 255 (white);
Step S304: erosion and dilation: to eliminate interference from white spots in the image of step S303 with detection of the yellow diamond region contour, apply morphological erosion and dilation to the image 4 times each;
Step S305: minimum bounding rectangle detection: detect the outer contours of all connected regions in the image from step S304 with a contour detection algorithm, and compute the minimum bounding rectangle of every outer contour with a minimum bounding rectangle algorithm;
Step S306: filter the minimum bounding rectangles from step S305 by area and aspect ratio to finally obtain a unique yellow diamond image region, crop this region out, and scale it to the fixed size 299 x 299;
Step S4: model training: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into an Inception V3 network for training, and after sufficient iteration the prediction models M_a, M_b and M_c are obtained respectively;
The step S4 includes the following sub-steps:
Step S401: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into the Inception V3 network; the samples first pass through 3 convolutional layers, each containing 1 or more convolution kernels, and the nonlinear excitation of the activation function ReLU applied to the convolution outputs enhances the feature expression capability of the network; the expression of the convolutional-layer activation function ReLU is:
f(x) = max(0, x)
where x is the output after the 3 convolutional layers;
After convolutional feature extraction, the size and number of channels of the image change greatly; this change depends on the convolution kernel size, the padding mode and the stride. The size and number of channels of the image output by a convolutional layer are calculated as follows:
O = (I - K + 2P)/S + 1
N = D
where O is the size of the output image after the convolutional layer; I is the size of the input image; K is the size of the convolution kernel; P is the padding amount used in the convolution; S is the stride; N is the number of channels of the output image; D is the number of convolution kernels;
Step S402: apply pooling (downsampling) to the output image of step S401, which reduces parameters while filtering redundant information out of the feature map, helping to avoid model overfitting; the size of the output image after the pooling layer is calculated as follows:
O' = (I - P_s)/S + 1
where O' is the size of the output image after the pooling layer and P_s is the size of the pooling window;
unlike the convolutional layer, the pooling layer does not change the number of image channels;
Step S403: the feature information of the output image of step S402 is simplified by 3 module groups. Each module group contains several modules of similar structure, and each module combines, in parallel, simple convolutional layers, structurally simplified complex convolutional layers and pooling layers. Feature abstraction and transformation at different levels selectively retain high-order features of different orders; the module groups simplify the spatial structure and convert spatial information into abstract high-order feature information, enriching the expressive capability of the network to the greatest extent while continuously shrinking the output tensor of each layer and reducing the amount of computation. The size of the output image after the module groups is 8 x 8 x 2048;
Step S404: the image output by step S403 is converted to 1 x 1 x 2048 by a global average pooling layer, then passed through a dropout layer, a convolutional layer and a flattening step that removes the spatial dimensions; finally the model's prediction for each category is mapped to a probability value by a normalized exponential function (softmax), whose expression is:
p_i = e^(z_i) / Σ_{j=1..n} e^(z_j)
where n is the total number of grades, z_i is the value predicted by the model for the i-th category, and p_i is the probability, after conversion, that the model's prediction belongs to the i-th category;
Step S405: the category with the maximum predicted probability is compared with the actual category of the sample, the error loss over the samples of the same training batch is computed, the relevant training hyper-parameters are set, and optimization is carried out with minimum error loss as the objective; after repeated iterative optimization and freezing of the model data, the sample sets U_a, U_b and U_c finally yield the corresponding models M_a, M_b and M_c;
Step S5: the yellow diamond image to be classified is preprocessed according to step S3 and input into model M_a; through feature extraction and category matching, the model's classification result p_a1, p_a2, p_a3 is obtained, where p_a1 is the probability with which model M_a judges the yellow diamond to be a first-grade product, and so on; similarly, inputting the image to be classified into models M_b and M_c gives the classification results p_b1, p_b2, p_b3, p_c1, p_c2, p_c3;
And:
p_a1 + p_a2 + p_a3 = 1,  p_b1 + p_b2 + p_b3 = 1,  p_c1 + p_c2 + p_c3 = 1
Step S6: to ensure the robustness of the yellow diamond classification result, the classification results of the three models are integrated; the grade function of the yellow diamond classification is then Y = max(P_1, P_2, P_3), where:
[Equation rendered only as an image in the original: the definitions of P_1, P_2 and P_3 in terms of the per-model probabilities p_a1, ..., p_c3.]
Wherein, in step S201, U_a and U_b each comprise three types of samples, first-grade, second-grade and third-grade, classified according to their respective classification standards.
Wherein, in step S202, U_c comprises three types of samples: first-grade, second-grade and third-grade.
Wherein, the set of first-grade samples in U_c is the overlap of the first-grade sets of U_a and U_b.
Wherein, the set of second-grade samples in U_c is the overlap of the second-grade sets of U_a and U_b.
Wherein, the set of third-grade samples in U_c is the overlap of the third-grade sets of U_a and U_b.
Wherein, the low-pass filter in step S302 is a 3 × 3 kernel low-pass filter.
In step S401, after convolutional feature extraction, the size of the output image is 147 x 147 x 64.
In step S402, after a single pooling layer followed by further feature extraction through two convolutional layers and another pooling layer, the image size is 35 x 35 x 192.
In step S404, n is 3, i is 1, …, n.
(III) advantageous effects
Compared with the prior art, the yellow cultivated diamond grade classification method based on deep learning and multi-model decision-making provided by the invention achieves online image acquisition and accurate classification of yellow diamonds, with an efficiency far higher than that of manual visual sorting. The invention is the first application of machine vision, image processing, deep learning and multi-model decision-making technologies to the automatic sorting of yellow cultivated diamonds.
Drawings
Fig. 1 is a diagram of an inclusion V3 network structure according to an embodiment of the present invention.
Fig. 2 is an original view of a sample of yellow diamond collected according to an embodiment of the present invention, with a size of 600 x 500.
Fig. 3 is an image after binarization processing according to an embodiment of the present invention.
Fig. 4 is an image after erosion and dilation processing according to an embodiment of the present invention.
Fig. 5 is a 299 x 299 image of a yellow diamond sample after image preprocessing according to an embodiment of the present invention.
Fig. 6 is a graph of the total loss during the training process according to an embodiment of the present invention.
Fig. 7 is a statistical chart of classification results provided in the embodiment of the present invention.
FIG. 8 is a chart of the classification accuracy provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
In order to solve the problems in the prior art, the invention provides a yellow cultivated diamond grade classification method based on deep learning, as shown in fig. 1, the method comprises the following steps:
step S1: collecting yellow diamond original sample images: illuminating by a coaxial light source, and acquiring an original sample image of the yellow diamond by using a CCD industrial camera and an image acquisition card;
step S2: yellow diamond original sample image classification: according to different grades of yellow diamonds, dividing the yellow diamond original sample image into a first-grade sample, a second-grade sample and a third-grade sample;
the step S2 includes the following sub-steps:
Step S201: in order to avoid inaccurate sample classification caused by visual observation errors and personal experience, the yellow diamond original sample images are classified separately according to the International Diamond Council diamond classification standard and the Chinese technical supervision authority diamond classification standard GB/T 16554-2003, yielding two training sample sets denoted U_a and U_b respectively;
Step S202: to improve the accuracy of yellow diamond grade classification, the yellow diamond samples whose classifications agree are screened out of U_a and U_b to form a reinforced training sample set, denoted U_c; a sketch of this screening is given below;
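A minimal sketch of this screening step, assuming each sample has an identifier and a grade label under each of the two standards; the data structures and names here are hypothetical illustrations, not part of the patent:

```python
# Hypothetical illustration of step S202: keep only samples whose grade
# labels under the two standards (U_a and U_b) agree.
def build_reinforced_set(labels_a: dict, labels_b: dict) -> dict:
    """labels_a / labels_b map sample id -> grade (1, 2 or 3)."""
    u_c = {}
    for sample_id, grade_a in labels_a.items():
        grade_b = labels_b.get(sample_id)
        if grade_b is not None and grade_a == grade_b:
            u_c[sample_id] = grade_a  # consistent grade goes into U_c
    return u_c

# Example: samples 1 and 3 agree, sample 2 does not.
u_a = {1: 1, 2: 2, 3: 3}
u_b = {1: 1, 2: 3, 3: 3}
print(build_reinforced_set(u_a, u_b))  # {1: 1, 3: 3}
```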
Step S3: preprocessing the yellow diamond original sample images: the original sample images in U_a, U_b and U_c are processed by graying, filter denoising, binarization, erosion and dilation, and minimum bounding rectangle detection to obtain yellow diamond images of uniform size 299 x 299;
the step S3 includes the following sub-steps:
Step S301: graying: perform graying processing on the images from step S2;
Step S302: filter denoising: smooth the high-frequency noise in the image obtained in step S301 with a low-pass filter, reducing abrupt changes in the image;
Step S303: binarization: set pixel values not greater than 90 in the image from step S302 to 0 (black) and the remaining pixel values to 255 (white);
Step S304: erosion and dilation: to eliminate interference from white spots in the image of step S303 with detection of the yellow diamond region contour, apply morphological erosion and dilation to the image 4 times each;
Step S305: minimum bounding rectangle detection: detect the outer contours of all connected regions in the image from step S304 with a contour detection algorithm, and compute the minimum bounding rectangle of every outer contour with a minimum bounding rectangle algorithm;
Step S306: filter the minimum bounding rectangles from step S305 by area and aspect ratio to finally obtain a unique yellow diamond image region, crop this region out, and scale it to the fixed size 299 x 299;
Step S4: model training: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into an Inception V3 network for training, and after sufficient iteration the prediction models M_a, M_b and M_c are obtained respectively;
The step S4 includes the following sub-steps:
Step S401: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into the Inception V3 network, whose structure is shown in fig. 1; the samples first pass through 3 convolutional layers, each containing 1 or more convolution kernels, and the nonlinear excitation of the activation function ReLU applied to the convolution outputs enhances the feature expression capability of the network; the expression of the convolutional-layer activation function ReLU is:
f(x) = max(0, x)
where x is the output after the 3 convolutional layers;
After convolutional feature extraction, the size and number of channels of the image change greatly; this change depends on factors such as the convolution kernel size, the padding mode and the stride. In this embodiment, the size of the output image is 147 x 147 x 64. The size and number of channels of the image output by a convolutional layer are calculated as follows:
O = (I - K + 2P)/S + 1
N = D
where O is the size of the output image after the convolutional layer; I is the size of the input image; K is the size of the convolution kernel; P is the padding amount used in the convolution; S is the stride; N is the number of channels of the output image; D is the number of convolution kernels;
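As an illustration of these formulas (not code from the patent), a small helper reproduces the 147 x 147 x 64 size quoted for this embodiment, assuming the first three Inception V3 convolutions are 3x3 kernels with strides 2, 1, 1, paddings 0, 0, 1 and 32, 32, 64 kernels; this layer layout is the standard Inception V3 stem and is an assumption here:

```python
def conv_out(i: int, k: int, p: int, s: int) -> int:
    """Output size O = (I - K + 2P)/S + 1 for one convolutional layer."""
    return (i - k + 2 * p) // s + 1

# Assumed layout of the first three convolutions:
# (kernel K, padding P, stride S, number of kernels D).
# The channel count of the output simply equals D.
size = 299
for k, p, s, d in [(3, 0, 2, 32), (3, 0, 1, 32), (3, 1, 1, 64)]:
    size = conv_out(size, k, p, s)
    print(size, "x", size, "x", d)
# The last line printed is 147 x 147 x 64, matching the size quoted above.
```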
Step S402: apply pooling (downsampling) to the output image of step S401, which reduces parameters while filtering redundant information out of the feature map, helping to avoid model overfitting; the size of the output image after the pooling layer is calculated as follows:
O' = (I - P_s)/S + 1
where O' is the size of the output image after the pooling layer and P_s is the size of the pooling window;
unlike the convolutional layer, the pooling layer does not change the number of image channels; in this embodiment, the image size after a single pooling layer, followed by further feature extraction through two convolutional layers and another pooling layer, is 35 x 35 x 192;
Step S403: the feature information of the output image of step S402 is simplified by 3 module groups. Each module group contains several modules of similar structure, and each module combines, in parallel, simple convolutional layers, structurally simplified complex convolutional layers and pooling layers. Feature abstraction and transformation at different levels selectively retain high-order features of different orders; the module groups simplify the spatial structure and convert spatial information into abstract high-order feature information, enriching the expressive capability of the network to the greatest extent while continuously shrinking the output tensor of each layer and reducing the amount of computation. The size of the output image after the module groups is 8 x 8 x 2048;
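The parallel-branch modules described in step S403 can be sketched roughly as follows with Keras; this is an illustrative Inception-style block under assumed branch widths, not the exact module definition used by the patent:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_style_module(x, filters: int = 64):
    """Illustrative parallel combination of a simple convolution, a factorized
    ("structurally simplified") convolution and a pooling branch, whose
    outputs are concatenated along the channel axis."""
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    # Factorized 3x3 convolution: a 1x3 followed by a 3x1 convolution.
    b2 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(b2)
    b2 = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(b2)
    # Pooling branch followed by a 1x1 convolution.
    b3 = layers.AveragePooling2D(3, strides=1, padding="same")(x)
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(b3)
    return layers.Concatenate()([b1, b2, b3])

inputs = tf.keras.Input(shape=(35, 35, 192))          # size quoted after step S402
outputs = inception_style_module(inputs)
print(tf.keras.Model(inputs, outputs).output_shape)   # (None, 35, 35, 192): 3 branches x 64 filters
```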
Step S404: the image output by step S403 is converted to 1 x 1 x 2048 by a global average pooling layer, then passed through a dropout layer, a convolutional layer and a flattening step that removes the spatial dimensions; finally the model's prediction for each category is mapped to a probability value by a normalized exponential function (softmax), whose expression is:
p_i = e^(z_i) / Σ_{j=1..n} e^(z_j)
where n is the total number of grades (in the embodiment of the invention, n = 3 and i = 1, ..., n), z_i is the value predicted by the model for the i-th category, and p_i is the probability, after conversion, that the model's prediction belongs to the i-th category;
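For illustration, the softmax mapping for n = 3 grades can be computed as follows; this is a numerically stable sketch with hypothetical scores, not code from the patent:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Map raw model scores z_i to probabilities p_i that sum to 1."""
    z = z - z.max()           # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 0.5, -1.0])   # hypothetical z_1, z_2, z_3 for one image
probs = softmax(scores)
print(probs, probs.sum())              # roughly [0.79 0.18 0.04] and 1.0
```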
Step S405: the category with the maximum predicted probability is compared with the actual category of the sample, the error loss over the samples of the same training batch is computed, the relevant training hyper-parameters are set, and optimization is carried out with minimum error loss as the objective; after repeated iterative optimization and freezing of the model data, the sample sets U_a, U_b and U_c finally yield the corresponding models M_a, M_b and M_c;
Step S5: the yellow diamond image to be classified is preprocessed according to step S3 and input into model M_a; through feature extraction and category matching, the model's classification result p_a1, p_a2, p_a3 is obtained, where p_a1 is the probability with which model M_a judges the yellow diamond to be a first-grade product, and so on; similarly, inputting the image to be classified into models M_b and M_c gives the classification results p_b1, p_b2, p_b3, p_c1, p_c2, p_c3;
And:
p_a1 + p_a2 + p_a3 = 1,  p_b1 + p_b2 + p_b3 = 1,  p_c1 + p_c2 + p_c3 = 1
Step S6: to ensure the robustness of the yellow diamond classification result, the classification results of the three models are integrated; the grade function of the yellow diamond classification is then Y = max(P_1, P_2, P_3), where:
[Equation rendered only as an image in the original: the definitions of P_1, P_2 and P_3 in terms of the per-model probabilities p_a1, ..., p_c3.]
Wherein, in step S201, U_a and U_b each comprise three types of samples, first-grade, second-grade and third-grade, classified according to their respective classification standards.
Wherein, in step S202, U_c comprises three types of samples: first-grade, second-grade and third-grade.
Wherein, the set of first-grade samples in U_c is the overlap of the first-grade sets of U_a and U_b.
Wherein, the set of second-grade samples in U_c is the overlap of the second-grade sets of U_a and U_b.
Wherein, the set of third-grade samples in U_c is the overlap of the third-grade sets of U_a and U_b.
Wherein, the low-pass filter in step S302 is a 3 × 3 kernel low-pass filter.
In step S401, after convolutional feature extraction, the size of the output image is 147 x 147 x 64.
In step S402, after a single pooling layer followed by further feature extraction through two convolutional layers and another pooling layer, the image size is 35 x 35 x 192.
In step S404, n is 3, i is 1, …, n.
Example 1
In this embodiment, the yellow cultivated diamond grade classification method based on deep learning and multi-model decision includes the steps of:
Step S1: collecting original sample images: illuminate with a coaxial light source and collect yellow diamond images using a CCD industrial camera and an image acquisition card; to ensure image quality, the invention uses a customized white panel light source for supplementary lighting, with the camera exposure time set to 150 μs and the frame rate to 32 fps. A raw sample is shown in fig. 2;
step S2: sample image classification: according to different grades of yellow diamonds, dividing yellow diamond samples into three types of samples, namely a first-grade sample, a second-grade sample and a third-grade sample;
the step S2 includes the following sub-steps:
Step S201: in order to avoid inaccurate sample classification caused by visual inspection errors and personal experience, the yellow diamond original samples are classified separately according to the International Diamond Council diamond classification standard and the Chinese technical supervision authority diamond classification standard (GB/T 16554-2003), yielding two training sample sets denoted U_a and U_b respectively;
Step S202: to improve the accuracy of yellow diamond grade classification, the yellow diamond samples whose classifications agree are screened out of U_a and U_b to form a reinforced training sample set, denoted U_c;
Step S3: sample image preprocessing: the sample images in U_a, U_b and U_c are processed with graphics and image processing methods such as graying, filter denoising, binarization, erosion and dilation, and minimum bounding rectangle detection, to obtain yellow diamond images of uniform size 299 x 299;
the step S3 includes the following sub-steps:
Step S301: graying: perform graying processing on the images from step S2;
Step S302: filter denoising: smooth the high-frequency noise in the image obtained in step S301 with a low-pass filter (3 x 3 kernel), reducing abrupt changes in the image;
Step S303: binarization: set pixel values not greater than 90 in the image from step S302 to 0 (black) and the remaining pixel values to 255 (white); the image after binarization in this embodiment is shown in fig. 3;
Step S304: erosion and dilation: to eliminate interference from white spots in the image of step S303 with detection of the yellow diamond region contour, apply morphological erosion and dilation to the image 4 times each; the image after erosion and dilation is shown in fig. 4;
Step S305: minimum bounding rectangle detection: find the outer contours of all connected regions in the image with a contour detection algorithm, and compute the minimum bounding rectangle of every outer contour with a minimum bounding rectangle algorithm;
Step S306: filter the minimum bounding rectangles from step S305 by area, aspect ratio and similar criteria to finally obtain a unique yellow diamond image region, crop this region out, and scale it to the fixed size 299 x 299, as shown in fig. 5; a sketch of this pipeline is given below;
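A minimal OpenCV sketch of the preprocessing pipeline of steps S301-S306, under the parameters given in this embodiment (3 x 3 low-pass kernel, threshold 90, 4 erosion/dilation iterations, 299 x 299 output); the mean filter, the axis-aligned rectangle simplification, the area and aspect-ratio limits and the file name are assumptions made only for illustration:

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    img = cv2.imread(path)                                         # original sample image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                   # S301: graying
    smooth = cv2.blur(gray, (3, 3))                                # S302: 3x3 mean filter as a low-pass filter
    _, binary = cv2.threshold(smooth, 90, 255, cv2.THRESH_BINARY)  # S303: <= 90 -> 0, else 255
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=4)               # S304: erosion, 4 times
    binary = cv2.dilate(binary, kernel, iterations=4)              # S304: dilation, 4 times
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)        # S305: outer contours (OpenCV 4 API)
    best = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)   # axis-aligned simplification of the minimum rectangle
        aspect = max(w, h) / max(1, min(w, h))
        # S306: hypothetical area / aspect-ratio limits used only for illustration.
        if w * h > 1000 and aspect < 2.0:
            best = (x, y, w, h)
    if best is None:
        raise ValueError("no diamond region found")
    x, y, w, h = best
    crop = img[y:y + h, x:x + w]
    return cv2.resize(crop, (299, 299))                            # S306: scale to 299 x 299

# diamond = preprocess("sample.bmp")   # hypothetical file name
```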
Step S4: model training: the preprocessed sample sets U_a, U_b and U_c are input into an Inception V3 network for training; the optimal hyper-parameter configuration, obtained through repeated iterative training tests, is: at most 100000 training steps, a batch size of 32, a fixed learning rate of 0.001, a dropout ratio of 0.8, and an L2 regularization hyper-parameter of 0.00004 for all model parameters; the prediction models M_a, M_b and M_c are finally obtained. The variation of the loss value during training in this embodiment is shown in fig. 6;
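A hedged sketch of how such a training run could be set up with Keras. The directory layout, the optimizer choice, reading the dropout ratio 0.8 as a keep probability (drop rate 0.2), and attaching the L2 term only to the classification head are all assumptions for illustration, not the patent's implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_model(num_classes: int = 3) -> tf.keras.Model:
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg")
    x = layers.Dropout(0.2)(base.output)   # dropout ratio 0.8 read as keep probability (assumption)
    out = layers.Dense(num_classes, activation="softmax",
                       kernel_regularizer=regularizers.l2(0.00004))(x)
    return tf.keras.Model(base.input, out)

# Hypothetical directory of preprocessed 299x299 images, one sub-folder per grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "U_a", image_size=(299, 299), batch_size=32, label_mode="categorical")

model = build_model()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# Roughly 100000 steps at batch size 32; the epoch count depends on the dataset size.
model.fit(train_ds, epochs=10)
model.save("M_a.keras")
```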
Step S5: the yellow diamond image to be classified is preprocessed according to step S3 and input into model M_a; through feature extraction and category matching, the model's classification result p_a1, p_a2, p_a3 is obtained, where p_a1 is the probability with which model M_a judges the yellow diamond to be a first-grade product, and so on; similarly, inputting the image to be classified into models M_b and M_c gives the classification results p_b1, p_b2, p_b3, p_c1, p_c2, p_c3;
And:
p_a1 + p_a2 + p_a3 = 1,  p_b1 + p_b2 + p_b3 = 1,  p_c1 + p_c2 + p_c3 = 1
Step S6: to ensure the robustness of the yellow diamond classification result, the classification results of the three models are integrated. The grade function of the yellow diamond classification is then Y = max(P_1, P_2, P_3), where:
[Equation rendered only as an image in the original: the definitions of P_1, P_2 and P_3 in terms of the per-model probabilities p_a1, ..., p_c3.]
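Since the exact definition of P_1, P_2 and P_3 is given only in the image above, the sketch below assumes a simple average of the three models' probabilities before taking the maximum; this averaging rule and the example numbers are assumptions, not necessarily the patent's exact fusion formula:

```python
import numpy as np

def fuse_and_grade(p_a, p_b, p_c):
    """p_a, p_b, p_c are the 3-element probability vectors from M_a, M_b, M_c.
    Assumed fusion: P_j is the average of the three models' probabilities for
    grade j, and the predicted grade follows Y = max(P_1, P_2, P_3)."""
    p = (np.asarray(p_a) + np.asarray(p_b) + np.asarray(p_c)) / 3.0
    return int(np.argmax(p)) + 1, p   # grade in {1, 2, 3} and the fused vector P

# Hypothetical per-model outputs for one diamond image:
grade, fused = fuse_and_grade([0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.3, 0.2])
print(grade, fused)   # 1, [0.6 0.267 0.133]
```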
1500 yellow diamond samples were randomly collected, 500 each of first-grade, second-grade and third-grade, and classified with the model of the invention. The classification statistics are shown in figs. 7 and 8, and the results show that 1393 yellow diamonds were classified correctly, a classification accuracy of 92.8%, where the first-grade accuracy is 92.7%, the second-grade accuracy is 90.9%, the third-grade accuracy is 95.1%, and the sorting time for a single sample is 500 ms. Compared with the existing manual naked-eye sorting method, the method sorts faster and more precisely, greatly reduces the labor intensity of workers while improving sorting efficiency, and has good prospects for industrial application.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A yellow cultivated diamond grade classification method based on deep learning, characterized by comprising the following steps:
step S1: collecting yellow diamond original sample images: illuminating by a coaxial light source, and acquiring an original sample image of the yellow diamond by using a CCD industrial camera and an image acquisition card;
step S2: yellow diamond original sample image classification: according to different grades of yellow diamonds, dividing the yellow diamond original sample image into a first-grade sample, a second-grade sample and a third-grade sample;
the step S2 includes the following sub-steps:
step S201: in order to avoid inaccurate sample classification caused by visual observation errors and personal experience, the yellow diamond original sample images are classified separately according to the International Diamond Council diamond classification standard and the Chinese technical supervision authority diamond classification standard GB/T 16554-2003, yielding two training sample sets denoted U_a and U_b respectively;
step S202: to improve the accuracy of yellow diamond grade classification, the yellow diamond samples whose classifications agree are screened out of U_a and U_b to form a reinforced training sample set, denoted U_c;
step S3: preprocessing the yellow diamond original sample images: the original sample images in U_a, U_b and U_c are processed by graying, filter denoising, binarization, erosion and dilation, and minimum bounding rectangle detection to obtain yellow diamond images of uniform size 299 x 299;
the step S3 includes the following sub-steps:
step S301: graying: perform graying processing on the images from step S2;
step S302: filter denoising: smooth the high-frequency noise in the image obtained in step S301 with a low-pass filter, reducing abrupt changes in the image;
step S303: binarization: set pixel values not greater than 90 in the image from step S302 to 0 (black) and the remaining pixel values to 255 (white);
step S304: erosion and dilation: to eliminate interference from white spots in the image of step S303 with detection of the yellow diamond region contour, apply morphological erosion and dilation to the image 4 times each;
step S305: minimum bounding rectangle detection: detect the outer contours of all connected regions in the image from step S304 with a contour detection algorithm, and compute the minimum bounding rectangle of every outer contour with a minimum bounding rectangle algorithm;
step S306: filter the minimum bounding rectangles from step S305 by area and aspect ratio to finally obtain a unique yellow diamond image region, crop this region out, and scale it to the fixed size 299 x 299;
step S4: model training: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into an Inception V3 network for training, and after sufficient iteration the prediction models M_a, M_b and M_c are obtained respectively;
The step S4 includes the following sub-steps:
step S401: the sample sets U_a, U_b and U_c preprocessed in step S3 are input into the Inception V3 network; the samples first pass through 3 convolutional layers, each containing 1 or more convolution kernels, and the nonlinear excitation of the activation function ReLU applied to the convolution outputs enhances the feature expression capability of the network; the expression of the convolutional-layer activation function ReLU is:
f(x) = max(0, x)
where x is the output after the 3 convolutional layers;
after convolutional feature extraction, the size and number of channels of the image change greatly; this change depends on the convolution kernel size, the padding mode and the stride; the size and number of channels of the image output by a convolutional layer are calculated as follows:
O = (I - K + 2P)/S + 1
N = D
where O is the size of the output image after the convolutional layer; I is the size of the input image; K is the size of the convolution kernel; P is the padding amount used in the convolution; S is the stride; N is the number of channels of the output image; D is the number of convolution kernels;
step S402: apply pooling (downsampling) to the output image of step S401, which reduces parameters while filtering redundant information out of the feature map, helping to avoid model overfitting; the size of the output image after the pooling layer is calculated as follows:
O' = (I - P_s)/S + 1
where O' is the size of the output image after the pooling layer and P_s is the size of the pooling window;
unlike the convolutional layer, the pooling layer does not change the number of image channels;
step S403: the feature information of the output image of step S402 is simplified by 3 module groups; each module group contains several modules of similar structure, and each module combines, in parallel, simple convolutional layers, structurally simplified complex convolutional layers and pooling layers; feature abstraction and transformation at different levels selectively retain high-order features of different orders; the module groups simplify the spatial structure and convert spatial information into abstract high-order feature information, enriching the expressive capability of the network to the greatest extent while continuously shrinking the output tensor of each layer and reducing the amount of computation; the size of the output image after the module groups is 8 x 8 x 2048;
step S404: the image output by step S403 is converted to 1 x 1 x 2048 by a global average pooling layer, then passed through a dropout layer, a convolutional layer and a flattening step that removes the spatial dimensions; finally the model's prediction for each category is mapped to a probability value by a normalized exponential function (softmax), whose expression is:
p_i = e^(z_i) / Σ_{j=1..n} e^(z_j)
where n is the total number of grades, z_i is the value predicted by the model for the i-th category, and p_i is the probability, after conversion, that the model's prediction belongs to the i-th category;
step S405: the category with the maximum predicted probability is compared with the actual category of the sample, the error loss over the samples of the same training batch is computed, the relevant training hyper-parameters are set, and optimization is carried out with minimum error loss as the objective; after repeated iterative optimization and freezing of the model data, the sample sets U_a, U_b and U_c finally yield the corresponding models M_a, M_b and M_c;
step S5: the yellow diamond image to be classified is preprocessed according to step S3 and input into model M_a; through feature extraction and category matching, the model's classification result p_a1, p_a2, p_a3 is obtained, where p_a1 is the probability with which model M_a judges the yellow diamond to be a first-grade product, and so on; similarly, inputting the image to be classified into models M_b and M_c gives the classification results p_b1, p_b2, p_b3, p_c1, p_c2, p_c3;
And:
p_a1 + p_a2 + p_a3 = 1,  p_b1 + p_b2 + p_b3 = 1,  p_c1 + p_c2 + p_c3 = 1
step S6: to ensure the robustness of the yellow diamond classification result, the classification results of the three models are integrated; the grade function of the yellow diamond classification is then Y = max(P_1, P_2, P_3), where:
[Equation rendered only as an image in the original: the definitions of P_1, P_2 and P_3 in terms of the per-model probabilities p_a1, ..., p_c3.]
2. The yellow cultivated diamond grade classification method based on deep learning according to claim 1, wherein in step S201, U_a and U_b each comprise three types of samples, first-grade, second-grade and third-grade, classified according to their respective classification standards.
3. The yellow cultivated diamond grade classification method based on deep learning according to claim 2, wherein in step S202, U_c comprises three types of samples: first-grade, second-grade and third-grade.
4. The yellow cultivated diamond grade classification method based on deep learning according to claim 3, wherein the set of first-grade samples in U_c is the overlap of the first-grade sets of U_a and U_b.
5. The yellow cultivated diamond grade classification method based on deep learning according to claim 3, wherein the set of second-grade samples in U_c is the overlap of the second-grade sets of U_a and U_b.
6. The yellow cultivated diamond grade classification method based on deep learning according to claim 3, wherein the set of third-grade samples in U_c is the overlap of the third-grade sets of U_a and U_b.
7. The yellow cultivated diamond grade classification method based on deep learning according to claim 1, wherein the low-pass filter in step S302 is a low-pass filter with a 3 x 3 kernel.
8. The yellow cultivated diamond grade classification method based on deep learning according to claim 1, wherein in step S401, after convolutional feature extraction, the size of the output image is 147 x 147 x 64.
9. The yellow cultivated diamond grade classification method based on deep learning according to claim 1, wherein in step S402, after a single pooling layer followed by further feature extraction through two convolutional layers and another pooling layer, the image size is 35 x 35 x 192.
10. The yellow cultivated diamond grade classification method based on deep learning according to claim 1, wherein in step S404, n = 3 and i = 1, ..., n.
CN201911149112.2A 2019-11-21 2019-11-21 Yellow cultivated diamond grade classification method based on deep learning Active CN111007068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911149112.2A CN111007068B (en) 2019-11-21 2019-11-21 Yellow cultivated diamond grade classification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911149112.2A CN111007068B (en) 2019-11-21 2019-11-21 Yellow cultivated diamond grade classification method based on deep learning

Publications (2)

Publication Number Publication Date
CN111007068A CN111007068A (en) 2020-04-14
CN111007068B true CN111007068B (en) 2022-05-13

Family

ID=70112726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911149112.2A Active CN111007068B (en) 2019-11-21 2019-11-21 Yellow cultivated diamond grade classification method based on deep learning

Country Status (1)

Country Link
CN (1) CN111007068B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465835B (en) * 2020-11-26 2022-07-08 深圳市对庄科技有限公司 Method for jadeite image segmentation and model training method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6256607B1 (en) * 1998-09-08 2001-07-03 Sri International Method and apparatus for automatic recognition using features encoded with product-space vector quantization
CN102324046A (en) * 2011-09-01 2012-01-18 西安电子科技大学 Four-classifier cooperative training method combining active learning
CN107909103A (en) * 2017-11-13 2018-04-13 武汉地质资源环境工业技术研究院有限公司 A kind of diamond 4C standards automatic grading method, equipment and storage device
CN109582963A (en) * 2018-11-29 2019-04-05 福建南威软件有限公司 A kind of archives automatic classification method based on extreme learning machine
CN110223266A (en) * 2019-03-08 2019-09-10 湖南工业大学 A kind of Railway wheelset tread damage method for diagnosing faults based on depth convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680120B (en) * 2013-12-02 2018-10-19 华为技术有限公司 A kind of generation method and device of the strong classifier of Face datection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6256607B1 (en) * 1998-09-08 2001-07-03 Sri International Method and apparatus for automatic recognition using features encoded with product-space vector quantization
CN102324046A (en) * 2011-09-01 2012-01-18 西安电子科技大学 Four-classifier cooperative training method combining active learning
CN107909103A (en) * 2017-11-13 2018-04-13 武汉地质资源环境工业技术研究院有限公司 A kind of diamond 4C standards automatic grading method, equipment and storage device
CN109582963A (en) * 2018-11-29 2019-04-05 福建南威软件有限公司 A kind of archives automatic classification method based on extreme learning machine
CN110223266A (en) * 2019-03-08 2019-09-10 湖南工业大学 A kind of Railway wheelset tread damage method for diagnosing faults based on depth convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Diamond Color Grading Based on Machine Vision; Zhiguo Ren et al.; 2009 IEEE 12th International Conference on Computer Vision Workshops; 2010-05-03; entire document *
Inclusion extraction from diamond clarity images based on the analysis of diamond optical properties; Wenjing Wang et al.; Optics Express; 2019-09-16; vol. 27, no. 19; entire document *
Small-sample bark image recognition method based on convolutional neural networks (基于卷积神经网络的小样本树皮图像识别方法); 刘嘉政; 西北林学院学报 (Journal of Northwest Forestry University); 2019-08-31, no. 4; entire document *
Research on a breast mass diagnosis algorithm based on a visual cognition model (基于视觉认知模型的乳腺肿块诊断算法研究); 王红玉; 万方数据库 (Wanfang Database); 2019-01-18; entire document *
Research on quality inspection technology for diamond produced for abrasives and abrasive tools (磨料磨具用金刚石生产品质检测技术研究); 张留振; 万方数据库 (Wanfang Database); 2019-08-06; entire document *

Also Published As

Publication number Publication date
CN111007068A (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN112819802B (en) Method for supervising and predicting blast furnace condition abnormality based on tuyere information deep learning
CN108596880A (en) Weld defect feature extraction based on image procossing and welding quality analysis method
CN110070008A (en) Bridge disease identification method adopting unmanned aerial vehicle image
CN108647722B (en) Zinc ore grade soft measurement method based on process size characteristics
CN106340016A (en) DNA quantitative analysis method based on cell microscope image
CN109215015A (en) A kind of online visible detection method of silk cocoon based on convolutional neural networks
CN117253024B (en) Industrial salt quality inspection control method and system based on machine vision
CN111458269A (en) Artificial intelligent identification method for peripheral blood lymph micronucleus cell image
CN111007068B (en) Yellow cultivated diamond grade classification method based on deep learning
CN113610035A (en) Rice tillering stage weed segmentation and identification method based on improved coding and decoding network
CN114820471A (en) Visual inspection method for surface defects of intelligent manufacturing microscopic structure
CN110516648B (en) Ramie plant number identification method based on unmanned aerial vehicle remote sensing and pattern identification
CN115272225A (en) Strip steel surface defect detection method and system based on countermeasure learning network
Sidnal et al. Grading and quality testing of food grains using neural network
CN114037671A (en) Microscopic hyperspectral leukocyte detection method based on improved fast RCNN
CN113724339B (en) Color space feature-based color separation method for tiles with few samples
CN113084193A (en) In-situ quality comprehensive evaluation method for selective laser melting technology
CN111126435B (en) Deep learning-based yellow cultivation diamond grade classification system
Zhang et al. Design of tire damage image recognition system based on deep learning
LU501790B1 (en) A multi-source, multi-temporal and large-scale automatic remote sensing interpretation model based on surface ecological features
CN102494987A (en) Automatic category rating method for microscopic particles in nodular cast iron
CN113210264B (en) Tobacco sundry removing method and device
CN114863277A (en) Machine vision-based method for rapidly detecting irregular particle group overrun particles
CN114240822A (en) Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion
CN109034172B (en) Product appearance defect detection method based on fuzzy relaxation constraint multi-core learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant