CN110232445B - Cultural relic authenticity identification method based on knowledge distillation - Google Patents
- Publication number
- CN110232445B (Application CN201910526264.3A)
- Authority
- CN
- China
- Prior art keywords
- yolov3
- network
- tiny
- cultural relic
- authenticity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Abstract
The invention relates to the field of cultural relic authenticity identification, and in particular to a cultural relic authenticity identification method based on knowledge distillation, which mainly comprises the following steps: step 1: before the cultural relics are exhibited, collecting fingerprint area images and making a data set; step 2: configuring a YOLOV3 network as the teacher network and a YOLOV3-Tiny network as the student network; step 3: training YOLOV3; step 4: training YOLOV3-Tiny based on knowledge distillation; step 5: after the cultural relics are recovered, collecting the fingerprint area images again to make a test set; step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny. The method uses YOLOV3, which is accurate but slow, as the teacher network and YOLOV3-Tiny, which is less accurate but fast, as the student network, and performs knowledge distillation in which softened targets supervise the student network's learning. The accuracy of YOLOV3-Tiny is greatly improved while its original fast detection speed is retained, which reduces hardware resource occupation during cultural relic identification, improves detection efficiency, and saves identification cost.
Description
Technical Field
The invention relates to the field of cultural relic authenticity identification, in particular to a cultural relic authenticity identification method based on knowledge distillation.
Background
China has a long history, and its vast numbers of cultural relics are a national treasure. To promote this historical culture, cultural relic collection institutions across the country regularly hold exhibition activities. After an exhibition ends, the relics must be authenticated to ensure they have not been damaged or replaced. At present, authentication is mainly performed manually, relying on experts' personal experience and knowledge, sometimes assisted by high-tech detection equipment. Manual authentication is time-consuming, requires considerable manpower and material resources, and yields strongly subjective results.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a cultural relic authenticity identification method based on knowledge distillation, which has the advantages of high detection speed and high accuracy.
The technical scheme for solving the problems is as follows: the cultural relic authenticity identification method based on knowledge distillation is characterized by comprising the following steps of:
step 1: before the cultural relics are displayed, fingerprint area images are collected, and a data set is made;
step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network;
step 3: training YOLOV3;
step 4: training YOLOV3-Tiny based on knowledge distillation;
step 5: after the cultural relics are recovered, collecting the fingerprint area images again to make a test set;
step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny.
Further, step 1 specifically includes: before the cultural relics are exhibited, selecting an area on each relic as a fingerprint area; acquiring RGB images of the fingerprint area from multiple angles with a high-precision camera under different illumination conditions; marking the fingerprint area in the images with an annotation tool to make a data set; and randomly selecting a part of the images and their annotation files as the training set, with the remainder serving as the verification set.
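The random split in step 1 can be sketched as follows. The directory layout, file extensions, and the split ratio are illustrative assumptions, since the patent only specifies that "a part" of the images is selected at random:

```python
import random
from pathlib import Path

def split_dataset(image_dir, label_dir, train_frac=0.9, seed=0):
    """Randomly split image/annotation pairs into training and verification sets.

    The 90/10 default ratio and the one-XML-per-image layout are assumptions
    for illustration; the embodiment uses 2300 of 2500 images for training.
    """
    images = sorted(Path(image_dir).glob("*.jpg"))
    rng = random.Random(seed)
    rng.shuffle(images)
    n_train = int(len(images) * train_frac)
    pair = lambda img: (img, Path(label_dir) / (img.stem + ".xml"))
    train = [pair(img) for img in images[:n_train]]
    val = [pair(img) for img in images[n_train:]]
    return train, val
```

A fixed seed keeps the split reproducible across training runs, so the verification metrics logged after each epoch remain comparable.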
Further, in step 2, the yolo layer with prediction scale 52 × 52 is deleted from YOLOV3, and only the 13 × 13 and 26 × 26 predictions are retained; for the training set obtained in step 1, 6 anchors are computed with the K-Means algorithm and replace the original anchors of YOLOV3 and YOLOV3-Tiny.
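The anchor computation above can be sketched as plain K-Means over ground-truth box widths and heights. This simplification uses Euclidean distance, whereas the YOLO reference implementation clusters with a 1 − IoU distance, so the resulting anchors are only an approximation:

```python
import numpy as np

def kmeans_anchors(wh, k=6, iters=50, seed=0):
    """Cluster ground-truth (width, height) pairs into k anchor boxes.

    wh: float array of shape (N, 2). Euclidean K-Means is an assumption;
    darknet's calc_anchors uses 1 - IoU as the distance metric.
    """
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]  # copy via fancy indexing
    for _ in range(iters):
        # distance from every box to every center, shape (N, k)
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = wh[assign == j].mean(axis=0)
    # darknet lists anchors from smallest to largest area
    return centers[np.argsort(centers.prod(axis=1))]
```

The six sorted (w, h) pairs would then replace the `anchors=` values in both cfg files, three per yolo layer.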
Further, step 3 specifically includes: training the YOLOV3 network model on the data set obtained in step 1 and saving the weight file.
Further, in step 4, the trained YOLOV3 network is used as the teacher network and the YOLOV3-Tiny network as the student network; the two networks in turn perform forward propagation on the input image, producing outputs of scale 13 × 13 × c and 26 × 26 × c, denoted out_t (teacher) and out_s (student); the error of YOLOV3-Tiny is calculated according to formulas (1) to (3):
LOSS = α·T²·loss_soft + (1 - α)·loss_hard (1)

loss_soft = β₁·KL(softmax(out_t/T), softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

In the above formulas, loss_soft is the soft target error and loss_hard is the hard target error (i.e., the original error of the YOLOV3 network); α is the coefficient that balances loss_soft and loss_hard, and T is the distillation temperature. Target denotes the original annotation of the data set, i.e., the hard target; softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively. To balance the orders of magnitude of loss_soft and loss_hard, the coefficient β₁ is introduced. In equation (2), the position, confidence, and class predictions of the student and teacher networks are softened, and their relative entropy is taken as the soft target error.
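Under the simplifying assumption that the yolo-layer outputs are treated as flat logit vectors (the real outputs mix position, confidence, and class terms, which the patent softens jointly), the loss described by formulas (1) to (3) can be sketched numerically as:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(out_s, out_t, target, T=4.0, alpha=0.6, beta1=0.0003):
    """LOSS = alpha*T^2*loss_soft + (1-alpha)*loss_hard.

    out_s, out_t: student and teacher logits (assumed 1-D here);
    target: one-hot hard label. T, alpha, beta1 default to the
    embodiment's values. Treating the outputs as a single softmax
    vector is an illustrative assumption.
    """
    p_t = softmax(out_t, T)
    p_s = softmax(out_s, T)
    loss_soft = beta1 * np.sum(p_t * np.log(p_t / p_s))        # relative entropy KL(p_t || p_s)
    loss_hard = -np.sum(target * np.log(softmax(out_s)))       # cross-entropy with hard target
    return alpha * T**2 * loss_soft + (1 - alpha) * loss_hard
```

When student and teacher agree exactly, the soft term vanishes and only the weighted hard-target cross-entropy remains, which is the behavior the temperature/weighting scheme is designed to produce.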
Further, step 5 specifically includes: after the cultural relics are recovered, acquiring a number of images of the fingerprint areas again to serve as the test set.
Further, step 6 specifically includes: running inference with the trained YOLOV3-Tiny on the test set obtained in step 5 to obtain confidence values for the fingerprint regions; averaging the confidence values of each fingerprint region; if the average is greater than a set threshold, the fingerprint region is judged unchanged; and if no fingerprint region has changed, the cultural relic is judged genuine.
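The decision rule of step 6 can be sketched as follows; the dict-of-lists input format and the helper name are illustrative assumptions:

```python
def authenticate(confidences_per_region, threshold=0.95):
    """Judge a relic genuine iff every fingerprint region's mean confidence
    exceeds the threshold (0.95 is the embodiment's value).

    confidences_per_region: dict mapping region id -> list of per-image
    confidence values produced by the student network's inference pass.
    Returns (genuine, per-region means).
    """
    means = {r: sum(v) / len(v) for r, v in confidences_per_region.items()}
    genuine = all(m > threshold for m in means.values())
    return genuine, means
```

A single low-confidence region is enough to flag the relic, which matches the patent's all-regions-unchanged criterion.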
The invention has the advantages that:
according to the cultural relic authenticity identification method based on knowledge distillation, the YOLOV3 which is good in accuracy and low in speed is used as a teacher network, the YOLOV3-Tiny which is poor in accuracy and high in speed is used as a student network, knowledge distillation is carried out, the softened target is used for supervising student network learning, under the condition that the original high detection speed of YOLOV3-Tiny is kept, the accuracy is greatly improved, the hardware resource occupation in the cultural relic identification process is reduced, the detection efficiency is improved, and the identification cost is saved.
Drawings
FIG. 1 is a schematic flow chart of the cultural relic authenticity identification method based on knowledge distillation of the invention;
FIG. 2 is a modified YOLOV3 network structure;
FIG. 3 is a YOLOV3-Tiny network structure;
FIG. 4 is a schematic diagram of the knowledge distillation.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the invention; the detailed description is merely representative of selected embodiments and is not intended to limit the claimed scope. All other embodiments obtained by a person skilled in the art without inventive effort based on these embodiments fall within the scope of the present invention.
Referring to fig. 1, a cultural relic authenticity identification method based on knowledge distillation comprises the following steps:
step 1: before the cultural relics are displayed, fingerprint area images are collected, and a data set is made;
step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network;
step 3: training YOLOV3 on the training set;
step 4: training YOLOV3-Tiny based on knowledge distillation;
step 5: after the cultural relics are recovered, collecting the fingerprint area images again to make a test set;
step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny;
step 1: before the cultural relics are shown, fingerprint area images are collected and a data set is made.
Further, step 1 specifically includes: before the cultural relics are exhibited, selecting an area on each relic as a fingerprint area; acquiring RGB images of the fingerprint area from multiple angles with a high-precision camera under different illumination conditions; marking the fingerprint area in the images with an annotation tool to make a data set; and randomly selecting a part of the images and their annotation files as the training set, with the remainder serving as the verification set.
Further, in step 2, the yolo layer with prediction scale 52 × 52 is deleted from YOLOV3, only the 13 × 13 and 26 × 26 predictions are retained, and the pruned network model is shown in fig. 2. With m classes, the yolo-layer convolution-kernel channel count of the modified YOLOV3 network is (m+1+4) × 3, denoted c, and the modified yolo-layer output sizes are 13 × 13 × c and 26 × 26 × c. For the training set obtained in step 1, 6 anchors are computed with the K-Means algorithm and replace the original anchors of YOLOV3 and YOLOV3-Tiny. The YOLOV3-Tiny network model is shown in fig. 3.
Further, in step 3, the initial hyperparameters of the YOLOV3 network are set, including the maximum iteration number epochmax and the batch size batch; the network is trained on the training set, and after each epoch the precision, recall, and mAP of YOLOV3 on the verification set are computed and stored in the training log. After each training run (on reaching epochmax), the coefficients of the position, confidence, and classification error functions are adjusted according to the training log; the training result with the highest precision, recall, and mAP is finally obtained and the weight file is saved.
Further, in step 4, the trained YOLOV3 network is used as the teacher network and the YOLOV3-Tiny network as the student network. As shown in fig. 4, the two networks in turn perform forward propagation on the input image, producing outputs of scale 13 × 13 × c and 26 × 26 × c, denoted out_t (teacher) and out_s (student). Backward propagation and weight updates are performed only for the YOLOV3-Tiny network; the YOLOV3 network does not update its weights and only performs forward inference. The back-propagated error comprises two components, as shown in equations (1) to (3).
LOSS = α·T²·loss_soft + (1 - α)·loss_hard (1)

loss_soft = β₁·KL(softmax(out_t/T), softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

In the above formulas, loss_soft is the soft target error and loss_hard is the hard target error (i.e., the original error of the YOLOV3 network); α is the coefficient that balances loss_soft and loss_hard, and T is the distillation temperature. Target denotes the original annotation of the data set, i.e., the hard target; softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively. To balance the orders of magnitude of loss_soft and loss_hard, the coefficient β₁ is introduced. In equation (2), the position, confidence, and class predictions of the student and teacher networks are softened, and their relative entropy is taken as the soft target error.
Setting the maximum iteration times and the batch processing number, training on a training set, calculating precision ratio, recall ratio and mAP of YOLOV3-Tiny on a verification set after each epoch is finished, and storing the precision ratio, the recall ratio and the mAP in a training log. After each training is finished, the hyperparameter of YOLOV3-Tiny is adjusted according to the training log, the training result with higher precision ratio, recall ratio and mAP is finally obtained, and the weight file is saved.
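The distillation training loop described above can be illustrated on a toy linear softmax student with a frozen teacher. This is a numerical sketch of the gradient flow only, not the actual YOLOV3-Tiny training: β₁ is set to 1 for brevity, and analytic gradients stand in for a deep-learning framework's autograd.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T - (z / T).max(axis=-1, keepdims=True)  # stable softmax at temperature T
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_step(W, x, out_t, target, T=4.0, alpha=0.6, lr=0.1):
    """One gradient step on LOSS = alpha*T^2*KL + (1-alpha)*CE for a toy
    linear student z_s = W @ x; the teacher logits out_t stay frozen,
    mirroring the patent's forward-only teacher."""
    z_s = W @ x
    # d(T^2 * KL(p_t || p_s))/dz_s = T * (softmax(z_s/T) - softmax(out_t/T))
    g_soft = T * (softmax(z_s, T) - softmax(out_t, T))
    # d(CE)/dz_s = softmax(z_s) - target, for a one-hot target
    g_hard = softmax(z_s) - target
    grad = (alpha * g_soft + (1 - alpha) * g_hard)[:, None] * x[None, :]
    return W - lr * grad

def kd_loss(W, x, out_t, target, T=4.0, alpha=0.6):
    """Evaluate the combined loss for monitoring, beta1 omitted (set to 1)."""
    z_s = W @ x
    p_t, p_s = softmax(out_t, T), softmax(z_s, T)
    soft = float(np.sum(p_t * np.log(p_t / p_s)))
    hard = float(-np.sum(target * np.log(softmax(z_s))))
    return alpha * T**2 * soft + (1 - alpha) * hard
```

Repeating `distill_step` drives the combined loss down while the teacher never changes, which is exactly the asymmetry the patent prescribes between the two networks.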
Further, in step 5, after the cultural relics are recovered, a number of images are collected again at each of the m fingerprint areas to form a test set for identifying the authenticity of the cultural relics.
Further, step 6 uses the YOLOV3-Tiny network trained in step 4 to run inference on the test set, obtaining a confidence value for each image's fingerprint region. The confidence values of fingerprint regions at the same position are collected and averaged; if all m averages exceed the set threshold, the cultural relic is judged genuine.
Example:
In this embodiment, 5 fingerprint areas are selected before the cultural relics are exhibited, each 5 mm × 5 mm in size. Under different illumination conditions, 500 RGB images are collected from multiple angles for each of the 5 fingerprint areas using an EOS 7D Mark II camera with an MP-E 65mm f/2.8 1-5X lens, at a resolution of 5472 × 3648 pixels, giving 2500 images in total. The fingerprint area in each image is marked with the annotation tool labelImg, producing an XML file for each image. 2300 images and their annotation files are randomly selected as the training set, and the rest serve as the verification set.
Step 2: configure the YOLOV3 network as the teacher network and the YOLOV3-Tiny network as the student network; modify the network models and parameters.
In this embodiment, a deep-learning development environment is configured: the CPU is an Intel Core i7-9700K, the GPU an NVIDIA GeForce RTX 2080, the operating system Ubuntu 16.04 LTS with CUDA 10.0, and the deep-learning framework PyTorch.
Modify the YOLOV3 network structure, specifically: 1) delete the yolo layer with scale 52 × 52, along with the corresponding convolutional, upsampling, and route layers in the cfg configuration file; 2) change the number of anchors to 6: set num to 6 in the cfg file, recompute 6 anchors on the training set with the K-Means algorithm, and replace the original values; 3) change the number of classes to 5: set classes to 5 in the cfg file, so the yolo-layer channel count is (5+1+4) × 3 = 30 and the modified yolo-layer sizes are 13 × 13 × 30 and 26 × 26 × 30.
Modify the YOLOV3-Tiny network parameters, specifically: 1) replace the original anchor values with the 6 computed anchors; 2) change the number of classes to 5: set classes to 5 in the cfg file, so the yolo-layer channel count is (5+1+4) × 3 = 30 and the modified yolo-layer sizes are 13 × 13 × 30 and 26 × 26 × 30. Then build the YOLOV3 and YOLOV3-Tiny network models.
Step 3: train the YOLOV3 network and save the weight file.
This embodiment trains the YOLOV3 network model. Set epochmax to 250 and batch to 8; after each epoch, compute precision, recall, and mAP on the verification set and store them in the training log. After each training run, adjust the coefficients of the localization, confidence, and classification error functions according to the training log; after several runs, obtain the better-performing network model and save its weight file.
Step 4: train YOLOV3-Tiny based on knowledge distillation.
In this embodiment, the YOLOV3 network loads its weight file, and the YOLOV3-Tiny network is trained from scratch. The two networks in turn perform forward propagation on the input image, producing outputs of scale 13 × 13 × 30 and 26 × 26 × 30, denoted out_t (teacher) and out_s (student). The YOLOV3-Tiny network error is then calculated according to formulas (1) to (3), with the parameters set as T = 4, α = 0.6, β₁ = 0.0003.
Set the maximum iteration number and batch size. During training, only the YOLOV3-Tiny network's error and gradients are computed and its weights updated; the YOLOV3 network only performs forward inference and computes no error or gradient. After each epoch, the precision, recall, and mAP of YOLOV3-Tiny on the verification set are computed and stored in the training log. After each training run, the YOLOV3-Tiny hyperparameters are adjusted according to the log; after several runs, the better-performing network model is obtained and its weight file saved.
Step 5: after the cultural relics are recovered, collect the fingerprint area images again.
In this embodiment, after the cultural relics are withdrawn, 100 images of each of the 5 fingerprint areas are collected again with the same camera at a resolution of 5472 × 3648 pixels, forming a test set of 500 images.
Step 6: identifying the authenticity of the cultural relic by using the trained YOLOV 3-Tiny;
The trained YOLOV3-Tiny network runs inference on the test set, and each image yields a confidence value for its fingerprint region. The 100 confidence values of each fingerprint area are averaged; if the average exceeds the set threshold of 0.95, the fingerprint area is judged unchanged, and if none of the 5 fingerprint areas has changed, the cultural relic is judged genuine.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations made by using the contents of the specification and the drawings, or applied directly or indirectly to other related systems, are included in the scope of the present invention.
Claims (7)
1. A cultural relic authenticity identification method based on knowledge distillation is characterized by comprising the following steps:
step 1: before the cultural relics are displayed, fingerprint area images are collected, and a data set is made;
step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network;
and step 3: training YOLOV 3;
and 4, step 4: training YOLOV3-Tiny based on knowledge distillation;
and 5: after the cultural relics are recovered, the fingerprint area images are collected again to manufacture a test set;
step 6: and (3) identifying the authenticity of the cultural relic by using the trained YOLOV 3-Tiny.
2. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 1, which is characterized in that:
the step 1 specifically comprises the following steps: before the cultural relics are shown, an area is selected on the cultural relics as a fingerprint area, RGB images of the fingerprint area are collected from multiple angles by a camera under different illumination conditions, the fingerprint area in the images is marked by a marking tool, a data set is manufactured, a part of images and marking files thereof are randomly selected as a training set, and the rest are used as a verification set.
3. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 2, which is characterized in that:
in the step 2, the yolo layer with prediction scale 52 × 52 is deleted from YOLOV3, and only the 13 × 13 and 26 × 26 predictions are retained; the number of classes of the modified YOLOV3 network is m, and the yolo-layer convolution-kernel channel count is (m+1+4) × 3, denoted c; for the training set obtained in step 1, 6 anchors are computed with the K-Means algorithm and replace the original anchors of YOLOV3 and YOLOV3-Tiny.
4. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 3, wherein the method comprises the following steps:
the step 3 specifically comprises: training the YOLOV3 network model on the data set obtained in step 1 and storing the weight file.
5. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 4, wherein the method comprises the following steps:
in the step 4, the trained YOLOV3 network is used as the teacher network, and the YOLOV3-Tiny network is used as the student network; the two networks in turn perform forward propagation on the input image, producing outputs of scale 13 × 13 × c and 26 × 26 × c, denoted out_t and out_s; the error of YOLOV3-Tiny is calculated according to formulas (1) to (3):
LOSS = α·T²·loss_soft + (1 - α)·loss_hard (1)

loss_soft = β₁·KL(softmax(out_t/T), softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

in the above formulas, loss_soft is the soft target error and loss_hard is the hard target error; α is the coefficient that balances loss_soft and loss_hard, and T is the distillation temperature; Target denotes the original annotation of the data set, i.e., the hard target; softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively; to balance the orders of magnitude of loss_soft and loss_hard, the coefficient β₁ is introduced; in formula (2), the position, confidence, and class predictions of the student and teacher networks are softened, and their relative entropy is taken as the soft target error.
6. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 5, wherein the method comprises the following steps:
the step 5 specifically comprises: after the cultural relics are recovered, acquiring a number of images of the fingerprint areas again to serve as the test set.
7. The method for authenticating the authenticity of the cultural relic based on the knowledge distillation as claimed in claim 6, wherein the method comprises the following steps:
the step 6 specifically comprises: running inference with the trained YOLOV3-Tiny on the test set obtained in step 5 to obtain confidence values for the fingerprint regions; averaging the confidence values of each fingerprint region; if the average is greater than a set threshold, judging the fingerprint region unchanged; and if no fingerprint region has changed, judging the cultural relic genuine.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910526264.3A CN110232445B (en) | 2019-06-18 | 2019-06-18 | Cultural relic authenticity identification method based on knowledge distillation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910526264.3A CN110232445B (en) | 2019-06-18 | 2019-06-18 | Cultural relic authenticity identification method based on knowledge distillation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110232445A CN110232445A (en) | 2019-09-13 |
CN110232445B true CN110232445B (en) | 2021-03-26 |
Family
ID=67859621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910526264.3A Active CN110232445B (en) | 2019-06-18 | 2019-06-18 | Cultural relic authenticity identification method based on knowledge distillation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110232445B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200764B (en) * | 2020-09-02 | 2022-05-03 | 重庆邮电大学 | Photovoltaic power station hot spot detection and positioning method based on thermal infrared image |
CN112348167B (en) * | 2020-10-20 | 2022-10-11 | 华东交通大学 | Knowledge distillation-based ore sorting method and computer-readable storage medium |
CN112308130B (en) * | 2020-10-29 | 2021-10-15 | 成都千嘉科技有限公司 | Deployment method of deep learning network of Internet of things |
CN113158969A (en) * | 2021-05-10 | 2021-07-23 | 上海畅选科技合伙企业(有限合伙) | Apple appearance defect identification system and method |
CN115705688A (en) * | 2021-08-10 | 2023-02-17 | 万维数码智能有限公司 | Ancient and modern artwork identification method and system based on artificial intelligence |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108287833A (en) * | 2017-01-09 | 2018-07-17 | 北京艺鉴通科技有限公司 | It is a kind of for the art work identification to scheme to search drawing method |
CN109003098A (en) * | 2018-05-24 | 2018-12-14 | 孝昌天空电子商务有限公司 | Agricultural-product supply-chain traceability system based on Internet of Things and block chain |
CN109523282A (en) * | 2018-12-02 | 2019-03-26 | 程昔恩 | A method of constructing believable article Internet of Things |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102930251B (en) * | 2012-10-26 | 2016-09-21 | 北京炎黄拍卖有限公司 | Bidimensional collectibles data acquisition and the apparatus and method of examination |
KR20170035362A (en) * | 2015-08-31 | 2017-03-31 | (주)늘푸른광고산업 | Cultural properties guide system using wireless terminal and guide plate for cultural properties guide system using wireless terminal |
US10706336B2 (en) * | 2017-03-17 | 2020-07-07 | Nec Corporation | Recognition in unlabeled videos with domain adversarial learning and knowledge distillation |
- 2019-06-18: Application CN201910526264.3A filed; granted as patent CN110232445B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108287833A (en) * | 2017-01-09 | 2018-07-17 | 北京艺鉴通科技有限公司 | It is a kind of for the art work identification to scheme to search drawing method |
CN109003098A (en) * | 2018-05-24 | 2018-12-14 | 孝昌天空电子商务有限公司 | Agricultural-product supply-chain traceability system based on Internet of Things and block chain |
CN109523282A (en) * | 2018-12-02 | 2019-03-26 | 程昔恩 | A method of constructing believable article Internet of Things |
Non-Patent Citations (2)
- Machine Learning for the Developing World; Maria De-Arteaga et al.; ACM Transactions on Management Information Systems; 2018-08-31; pp. 1-14
- Application of intelligent algorithms in the authentication of ancient ceramic cultural relics (智能算法在古陶瓷文物鉴定中的应用); Wu Xudong et al. (吴旭东等); 《技术创新》 (Technology Innovation); 2017-12-25; pp. 49-50
Also Published As
Publication number | Publication date |
---|---|
CN110232445A (en) | 2019-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110232445B (en) | Cultural relic authenticity identification method based on knowledge distillation | |
CN114092832B (en) | High-resolution remote sensing image classification method based on parallel hybrid convolutional network | |
CN109948522B (en) | X-ray hand bone maturity interpretation method based on deep neural network | |
CN108446711A (en) | A kind of Software Defects Predict Methods based on transfer learning | |
CN107004141A (en) | To the efficient mark of large sample group | |
CN107392252A (en) | Computer deep learning characteristics of image and the method for quantifying perceptibility | |
CN110956196B (en) | Automatic recognition method for window wall ratio of urban building | |
CN113792667A (en) | Method and device for automatically classifying properties of buildings in villages and towns based on three-dimensional remote sensing image | |
CN111104850B (en) | Remote sensing image building automatic extraction method and system based on residual error network | |
CN111985325A (en) | Aerial small target rapid identification method in extra-high voltage environment evaluation | |
CN111652835A (en) | Method for detecting insulator loss of power transmission line based on deep learning and clustering | |
CN115493532B (en) | Measuring system, method and medium for measuring area of element to be measured on surface of plate | |
CN112418632A (en) | Ecological restoration key area identification method and system | |
CN115494007A (en) | Random forest based high-precision rapid detection method and device for soil organic matters | |
CN113255181A (en) | Heat transfer inverse problem identification method and device based on deep learning | |
CN103778306B (en) | A kind of sensors location method based on EI and successive Method | |
CN117933095B (en) | Earth surface emissivity real-time inversion and assimilation method based on machine learning | |
CN115860214A (en) | Early warning method and device for PM2.5 emission concentration | |
CN115272826A (en) | Image identification method, device and system based on convolutional neural network | |
CN116343157A (en) | Deep learning extraction method for road surface cracks | |
CN111782978B (en) | Method and device for processing interest point data, electronic equipment and readable medium | |
CN112614570A (en) | Sample set labeling method, pathological image classification method and classification model construction method and device | |
CN111104965A (en) | Vehicle target identification method and device | |
CN115936264A (en) | Single-day engineering quantity calculation method, staged engineering quantity prediction method and prediction device | |
CN105469116A (en) | Picture recognition and data extension method for infants based on man-machine interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |