CN114782411A - Hazardous article identification method and identification system based on neural network - Google Patents


Info

Publication number
CN114782411A
CN114782411A (application CN202210570202.4A)
Authority
CN
China
Prior art keywords
network model
cascade
image set
trained
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210570202.4A
Other languages
Chinese (zh)
Inventor
姚鸿勋
陶胤旭
段风志
韩国权
黄海峰
王兆林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Taiji Computer Corp Ltd
Original Assignee
Harbin Institute of Technology
Taiji Computer Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology, Taiji Computer Corp Ltd filed Critical Harbin Institute of Technology
Priority to CN202210570202.4A priority Critical patent/CN114782411A/en
Publication of CN114782411A publication Critical patent/CN114782411A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a neural-network-based hazardous article identification method and identification system, in particular to a method and system for identifying hazardous articles in security-inspection images based on a Cascade R-CNN two-stage neural network. It aims to solve the problems that X-ray security scan images are abstract, article identification accuracy is low, long-term operators are prone to visual fatigue and missed judgments, and existing data sets cannot be fully utilized. The method comprises: collecting a number of scan images containing hazardous articles and dividing them into a labeled image set and an unlabeled image set; establishing a network model and training it on the labeled image set to obtain an initial network model; feeding the unlabeled image set into the initial network model to obtain a set of pseudo-label files; training the initial network model on the labeled image set together with the pseudo-labeled image set to obtain a trained network model; and collecting an X-ray security scan image to be identified and feeding it into the trained network model to obtain bounding-box annotation information for the hazardous articles. The invention belongs to the field of computer image processing.

Description

Hazardous article identification method and identification system based on neural network
Technical Field
The invention relates to a dangerous goods identification method and system, in particular to a method and system for identifying dangerous goods in security-inspection images based on a Cascade R-CNN two-stage neural network, and belongs to the field of computer image processing.
Background
In recent years, with economic and social progress, people travel more frequently, and security inspection is required when taking subways, trains, coaches, airplanes and the like. The purpose of security inspection is to eliminate all dangerous goods that threaten passengers' personal safety and to guarantee travel safety; if the adequacy of security inspection measures is not effectively guaranteed, passengers' lives are threatened and property losses occur. Effective identification of violence- and terror-related dangerous goods in X-ray security inspection images is therefore very important.
Most existing security checks use an X-ray security inspection machine. Because its scan images are relatively abstract, operators who work for long periods are prone to visual fatigue, missed judgments and similar problems, so improving the accuracy and performance of image processing and analysis is an urgent problem. With the rapid development of computer vision, applications and research on security problems are drawing more and more attention from scholars. However, most existing techniques are applied to natural images such as photographs and remote-sensing images; their performance on X-ray security inspection scan images is unsatisfactory, so the existing data sets cannot be fully utilized.
Disclosure of Invention
The invention provides a neural-network-based hazardous article identification method and system, aiming to solve the problems that X-ray security scan images are relatively abstract, article identification accuracy is low, long-term operators are prone to visual fatigue and missed judgments, and data sets cannot be fully utilized.
The technical scheme adopted by the invention is as follows:
a dangerous goods identification method based on a neural network comprises the following steps:
s1, collecting a plurality of X-ray security check scanograms containing dangerous goods, selecting a part of the collected X-ray security check scanograms to perform frame labeling to serve as a labeled image set, and using unselected X-ray security check scanograms as unlabeled image sets;
s2, establishing a network model, inputting the labeled image set in the S1 into the network model for training until loss converges, and obtaining a trained initial network model;
s3, inputting the unlabeled image set in the S1 into the trained initial network model obtained in the S2 for recognition, and obtaining a pseudo label file set corresponding to the unlabeled image set;
s4, inputting the labeled image set in S1 and the unlabeled image set with the pseudo-label files obtained in S3 into the trained initial network model obtained in S2 for training, to obtain a trained network model;
and S5, collecting an X-ray security check scanogram to be recognized, and inputting the scanogram into the trained network model obtained in S4 to obtain frame annotation information of the dangerous goods.
Preferably, the dangerous goods in S1 comprise violence- and terror-related articles and flammable and explosive articles.
Preferably, the bounding-box annotation information in S5 includes the position coordinates of the hazardous article in the X-ray security scan image, the type of the hazardous article, and the type confidence.
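As a concrete illustration, one bounding-box annotation record of this kind might be represented as below; this is a hedged sketch, and the field and class names (`x_min`, `label`, "knife") are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    """Hypothetical record for one bounding-box annotation: position
    coordinates of the hazardous article in the X-ray security scan image,
    the article type, and the type confidence."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str         # e.g. "knife", "lighter" (example class names)
    confidence: float  # type confidence in [0, 1]

ann = BoxAnnotation(34.0, 52.0, 118.0, 96.0, "knife", 0.93)
```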
Preferably, the network model is established in S2, and the labeled image set in S1 is input into the network model for training until loss converges, so as to obtain a trained initial network model, which includes the following specific processes:
s21, establishing a network model, wherein the network model comprises a ResNeXt-50_ Cascade R-CNN network model and a Res2Net _ Cascade R-CNN network model;
s22, inputting the labeling image set in the S1 into a ResNeXt-50_ Cascade R-CNN network model for training until loss converges to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
inputting the labeled image set in the S1 into a Res2Net _ Cascade R-CNN network model for training until loss converges to obtain a trained Res2Net _ Cascade R-CNN network model;
and S23, performing weighted fusion on the output result of the trained ResNeXt-50_ Cascade R-CNN network model obtained in the S22 and the output result of the trained Res2Net _ Cascade R-CNN network model to obtain a trained initial network model.
Preferably, the ResNeXt-50_ Cascade R-CNN network model in S21 sequentially comprises ResNeXt-50 and Cascade R-CNN.
Preferably, in the step S22, the labeled image set in the step S1 is input into a ResNeXt-50_ Cascade R-CNN network model for training until loss converges, and a trained ResNeXt-50_ Cascade R-CNN network model is obtained; inputting the labeled image set in the S1 into a Res2Net _ Cascade R-CNN network model for training until loss converges to obtain a trained Res2Net _ Cascade R-CNN network model, wherein the specific process comprises the following steps:
s221, preprocessing the labeled image set in S1 to obtain a preprocessed labeled image set;
s222, inputting the preprocessed labeled image set in the S221 into a ResNeXt-50_ Cascade R-CNN network model for training to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
and inputting the preprocessed labeled image set in the S221 into a Res2Net _ Cascade R-CNN network model for training to obtain a trained Res2Net _ Cascade R-CNN network model.
Preferably, the method for preprocessing the labeled image set of S1 in S221 is an online data augmentation method.
Preferably, the online data augmentation method is one or more of the Mixup, Mosaic, AutoAugment and GridMask online data augmentation methods.
Preferably, the method used in S23 for weighted fusion of the output of the trained ResNeXt-50_Cascade R-CNN network model obtained in S22 and the output of the trained Res2Net_Cascade R-CNN network model is the WBF (weighted boxes fusion) model fusion method.
A dangerous goods identification system based on a neural network is used for executing a dangerous goods identification method based on the neural network.
Beneficial effects:
according to the method, a fusion network model comprising a ResNeXt-50_ Cascade R-CNN network model and a Res2Net _ Cascade R-CNN network model is constructed, and the ResNeXt-50_ Cascade R-CNN network model and the Res2Net _ Cascade R-CNN network model are fused by using an wbf model fusion method, so that identification and detection of dangerous goods in an X-ray security inspection scanning image are realized. The ResNeXt-50_ Cascade R-CNN network model sequentially comprises ResNeXt-50 and Cascade R-CNN, and the addition of the Res2Net _ Cascade R-CNN network model is equivalent to the addition of an assistant for the ResNeXt-50_ Cascade R-CNN network model, so that the accuracy of dangerous goods identification and detection in an X-ray security scan is improved.
When an X-ray security scan image is to be examined, it is fed into the fused network model, which passes it to both the ResNeXt-50_Cascade R-CNN network model and the Res2Net_Cascade R-CNN network model; each outputs its own result. The two results are then weighted and fused, and the fused result is the output of the network, namely the position coordinates of each dangerous article in the X-ray security scan image, its type, and the type confidence. This reduces missed and erroneous judgments of dangerous goods by security inspectors and avoids the visual fatigue and lapses of attention caused by long working hours. It also makes full use of the part of the data set that cannot be labeled given limited manpower and material resources.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a Cascade R-CNN;
Detailed Description
The first embodiment is as follows: the present embodiment is described with reference to fig. 1 to fig. 2, and the method for identifying a hazardous material based on a neural network according to the present embodiment includes the following steps:
s1, collecting a plurality of X-ray security check scanograms containing dangerous goods, selecting a part of the collected X-ray security check scanograms to perform frame labeling to serve as a labeled image set, and using unselected X-ray security check scanograms as unlabeled image sets;
A large number of X-ray security scan images are collected, each required to contain dangerous goods. An X-ray security scan image is the image produced when articles pass through an X-ray security inspection machine. The dangerous goods include violence- and terror-related articles, such as controlled knives and guns, and flammable and explosive articles, such as alcohol, lighters and fireworks. Regions containing dangerous goods are cropped by manual localization and used as the data set for subsequent model training. A part of the X-ray security scan images collected in S1 is selected, and the dangerous goods in them are manually annotated with bounding boxes to provide labeled data for the training process; the remaining X-ray security scan images are used for semi-supervised training. A bounding-box label comprises the position coordinates of the dangerous article in the X-ray security scan image, the type of the dangerous article, the type confidence, and so on. The selected part may be, for example, 1/2, 2/3 or 3/4 of all collected X-ray security scan images.
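A minimal sketch of this split, assuming scans are referenced by id; the 2/3 labeled fraction is one of the ratios the description mentions, and the function name and seed are illustrative assumptions.

```python
import random

def split_scans(scan_ids, labeled_fraction=2 / 3, seed=0):
    """Split the collected X-ray scan ids into a part that will be
    manually box-labeled and a remainder left unlabeled for the
    semi-supervised stage. The seed is only for reproducibility."""
    ids = list(scan_ids)
    random.Random(seed).shuffle(ids)
    # round() avoids floating-point truncation (e.g. 300 * 2/3 -> 200)
    n_labeled = round(len(ids) * labeled_fraction)
    return ids[:n_labeled], ids[n_labeled:]

labeled_ids, unlabeled_ids = split_scans(range(300))
```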
S2, establishing a network model, inputting the labeled image set in S1 into the network model for training until loss is converged to obtain a trained initial network model, and the specific process is as follows:
the step is to use the labeled image set to realize the first training of the network model, so that the model has a rough form.
S21, establishing a network model, wherein the network model comprises a ResNeXt-50_ Cascade R-CNN network model and a Res2Net _ Cascade R-CNN network model;
the ResNeXt-50_ Cascade R-CNN network model sequentially comprises ResNeXt-50 and Cascade R-CNN.
The Cascade R-CNN network model uses ResNeXt-50 as its backbone. First, an ImageNet-1k pre-trained ResNeXt-50 model is obtained to build the convolutional deep-learning network; then this network is combined with the conventional Cascade R-CNN network model, with the output of ResNeXt-50 serving as the input of Cascade R-CNN, yielding the initial ResNeXt-50_Cascade R-CNN network model. Adding the Res2Net_Cascade R-CNN network model in effect gives the ResNeXt-50_Cascade R-CNN network model an auxiliary detector, thereby improving the accuracy of dangerous goods identification and detection in X-ray security scan images.
The structure of ResNeXt-50 is given in a table (reproduced only as figures in the original document), where params denotes the number of model parameters and FLOPs denotes the computational cost.
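Cascade R-CNN, as commonly described, trains a sequence of detection heads against increasing IoU thresholds (typically 0.5, 0.6, 0.7), so each later head sees progressively better-localized boxes. The toy sketch below only illustrates that progressively stricter matching; it is not the patent's implementation, and the thresholds and box values are assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cascade_filter(proposals, gt_box, thresholds=(0.5, 0.6, 0.7)):
    """Apply each stage's IoU threshold in turn, keeping only proposals
    that still match the ground-truth box at the stricter criterion."""
    kept = proposals
    for t in thresholds:
        kept = [p for p in kept if iou(p, gt_box) >= t]
    return kept

gt = (0, 0, 10, 10)
proposals = [(0, 0, 10, 10), (0, 0, 10, 8), (2, 2, 12, 12), (0, 0, 5, 5)]
survivors = cascade_filter(proposals, gt)
```

Only the two well-localized proposals (IoU 1.0 and 0.8) survive the final 0.7 stage; the looser ones are filtered out earlier.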
S22, inputting the labeling image set in the S1 into a ResNeXt-50_ Cascade R-CNN network model for training until loss converges to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
inputting the labeled image set in the S1 into a Res2Net _ Cascade R-CNN network model for training until loss converges to obtain a trained Res2Net _ Cascade R-CNN network model, wherein the specific process comprises the following steps:
s221, preprocessing the labeled image set in S1 to obtain a preprocessed labeled image set;
First, the labeled image set from S1 is input into the ResNeXt-50_Cascade R-CNN network model and the Res2Net_Cascade R-CNN network model respectively; each model then preprocesses the labeled image set to obtain a corresponding preprocessed labeled image set. The preprocessing method is online data augmentation, using one or more of the Mixup, Mosaic, AutoAugment and GridMask online data augmentation methods. During preprocessing the two models automatically select suitable augmentation methods, each online augmentation method being selected with probability 50%. The AutoAugment method includes operations such as horizontal flipping, vertical flipping, scaling and RGB jittering of the image.
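A hedged sketch of the two pieces described above: independent 50% selection of augmentation methods, and a minimal Mixup on images stored as nested lists of pixel values. Real Mixup/Mosaic/AutoAugment/GridMask implementations operate on image tensors; the function names here are illustrative.

```python
import random

def mixup(img_a, img_b, alpha=0.5):
    """Minimal Mixup: a convex combination of two training images
    (the labels would be mixed with the same weight alpha)."""
    return [[alpha * x + (1 - alpha) * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def pick_augmentations(names, p=0.5, seed=None):
    """Select each online augmentation method independently with
    probability p (50% per the description above)."""
    rng = random.Random(seed)
    return [n for n in names if rng.random() < p]

mixed = mixup([[0.0, 1.0]], [[1.0, 0.0]])
chosen = pick_augmentations(["Mixup", "Mosaic", "AutoAugment", "GridMask"])
```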
S222, inputting the preprocessed labeled image set in the S221 into a ResNeXt-50_ Cascade R-CNN network model for training to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
inputting the preprocessed labeled image set in the S221 into a Res2Net _ Cascade R-CNN network model for training to obtain a trained Res2Net _ Cascade R-CNN network model;
and taking the preprocessed labeled image set as input to be trained in the ResNeXt-50_ Cascade R-CNN network model and the Res2Net _ Cascade R-CNN network model respectively to obtain the trained ResNeXt-50_ Cascade R-CNN network model and the trained Res2Net _ Cascade R-CNN network model respectively.
And S23, performing weighted fusion on the output result of the trained ResNeXt-50_ Cascade R-CNN network model obtained in the S22 and the output result of the trained Res2Net _ Cascade R-CNN network model to obtain a trained initial network model.
The fusion method is the WBF (weighted boxes fusion) model fusion method.
After the ResNeXt-50_Cascade R-CNN network model and the Res2Net_Cascade R-CNN network model are fused, the ResNeXt-50_Cascade R-CNN network model gains an auxiliary detector. When an X-ray security scan image is predicted, the Res2Net_Cascade R-CNN network model also produces an output; the output of the ResNeXt-50_Cascade R-CNN network model and the output of the Res2Net_Cascade R-CNN network model are then weighted and fused into a new result, making the dangerous goods information more precise and improving the accuracy of dangerous goods identification in X-ray security scan images.
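A minimal pure-Python sketch of the weighted-fusion idea behind WBF: boxes from the two detectors that overlap strongly are merged into one box whose coordinates are the score-weighted average. The IoU threshold, greedy clustering, and mean-score rule here are simplifying assumptions, not the exact WBF algorithm from the literature.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def weighted_boxes_fusion(boxes, scores, iou_thr=0.55):
    """Greedy sketch: assign each box (highest score first) to the first
    fused box it overlaps by > iou_thr, then recompute that fused box as
    the score-weighted average of its members; otherwise start a new one."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    clusters, fused = [], []  # member indices / [x1, y1, x2, y2, score]
    for i in order:
        for c, f in zip(clusters, fused):
            if iou(boxes[i], f[:4]) > iou_thr:
                c.append(i)
                total = sum(scores[j] for j in c)
                for k in range(4):
                    f[k] = sum(scores[j] * boxes[j][k] for j in c) / total
                f[4] = total / len(c)  # fused score: cluster mean
                break
        else:
            clusters.append([i])
            fused.append(list(boxes[i]) + [scores[i]])
    return fused

# e.g. one box from each model for the same hazardous article:
out = weighted_boxes_fusion([(0, 0, 10, 10), (2, 2, 10, 10)], [0.75, 0.25])
```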
S3, inputting the unlabeled image set in the S1 into the trained initial network model obtained in the S2 for recognition, and obtaining a pseudo label file set corresponding to the unlabeled image set;
The unlabeled image set from S1 is input into the trained initial network model obtained in S2 for identification and prediction, producing the corresponding pseudo-label files; pseudo-label samples with higher confidence are selected, and all selected pseudo-label files are finally gathered into a pseudo-label file set. This step labels the unlabeled image set automatically, without manual annotation.
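The confidence-based selection can be sketched as a simple filter. The 0.9 cut-off is an assumed value; the description only requires keeping the "higher confidence" samples.

```python
def select_pseudo_labels(predictions, min_confidence=0.9):
    """Keep only predictions whose confidence clears the threshold;
    the survivors become pseudo labels for the unlabeled images."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

predictions = [
    {"box": (10, 10, 60, 40), "label": "knife", "confidence": 0.95},
    {"box": (5, 5, 20, 20), "label": "lighter", "confidence": 0.41},
]
pseudo_labels = select_pseudo_labels(predictions)
```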
S4, inputting the annotated image set in the S1 and the unlabeled image set with the pseudo label file obtained in the S3 into the trained initial network model obtained in the S2 for training to obtain a trained network model;
In this step, the trained initial network model obtained in S2 is trained a second time, which further increases the accuracy of the model.
And S5, collecting an X-ray security check scanogram to be identified, and inputting the scanogram into the trained network model obtained in S4 to obtain frame annotation information of the dangerous goods.
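The steps S1 to S5 above can be tied together in an end-to-end skeleton. This is a hedged control-flow sketch only: `DummyDetector` is a stand-in, not the Cascade R-CNN implementation, and all names and return values are illustrative assumptions.

```python
class DummyDetector:
    """Stand-in for the fused network model; train/predict are
    placeholders so the control flow of S1-S5 can be shown end to end."""
    def train(self, images, annotations):
        # S2 / S4: in the real system, train until the loss converges.
        self.trained_on = len(images)

    def predict(self, image):
        # Returns hypothetical bounding-box annotation information.
        return [{"box": (0, 0, 1, 1), "label": "knife", "confidence": 0.95}]

def identify_hazardous_articles(labeled, annotations, unlabeled, query):
    model = DummyDetector()
    model.train(labeled, annotations)                       # S2: initial model
    pseudo = [model.predict(im) for im in unlabeled]        # S3: pseudo labels
    model.train(labeled + unlabeled, annotations + pseudo)  # S4: retrain
    return model.predict(query)                             # S5: identify

result = identify_hazardous_articles(
    labeled=["scan1", "scan2"], annotations=[["gt1"], ["gt2"]],
    unlabeled=["scan3"], query="scan4")
```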
The second embodiment is as follows: this embodiment, described with reference to FIGS. 1 and 2, is a neural-network-based dangerous goods identification system for executing the neural-network-based dangerous goods identification method of the first embodiment.

Claims (10)

1. A dangerous goods identification method based on a neural network is characterized in that: it comprises the following steps:
s1, collecting a plurality of X-ray safety check scanning pictures containing dangerous goods, selecting a part of the collected X-ray safety check scanning pictures for frame labeling to be used as a labeled image set, and using unselected X-ray safety check scanning pictures as unlabeled image sets;
s2, establishing a network model, inputting the labeled image set in the S1 into the network model for training until loss converges, and obtaining a trained initial network model;
s3, inputting the unlabeled image set in the S1 into the trained initial network model obtained in the S2 for recognition, and obtaining a pseudo label file set corresponding to the unlabeled image set;
s4, inputting the annotated image set in the S1 and the unlabeled image set with the pseudo label file obtained in the S3 into the trained initial network model obtained in the S2 for training to obtain a trained network model;
and S5, collecting an X-ray security check scanogram to be identified, and inputting the scanogram into the trained network model obtained in S4 to obtain frame annotation information of the dangerous goods.
2. The method for identifying dangerous goods based on the neural network as claimed in claim 1, wherein: the dangerous goods in S1 comprise violence- and terror-related articles and flammable and explosive articles.
3. The method for identifying dangerous goods based on the neural network as claimed in claim 2, wherein: and the box marking information in the S5 comprises the position coordinates of the dangerous goods in the X-ray security inspection scanning image, the types of the dangerous goods and the type confidence coefficient.
4. A method for identifying dangerous goods based on neural network as claimed in claim 3, wherein: and establishing a network model in the step S2, inputting the labeled image set in the step S1 into the network model for training until loss converges, and obtaining a trained initial network model, wherein the specific process comprises the following steps:
s21, establishing a network model, wherein the network model comprises a ResNeXt-50_ Cascade R-CNN network model and a Res2Net _ Cascade R-CNN network model;
s22, inputting the labeling image set in the S1 into a ResNeXt-50_ Cascade R-CNN network model for training until loss converges to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
inputting the labeled image set in the S1 into a Res2Net _ Cascade R-CNN network model for training until loss converges to obtain a trained Res2Net _ Cascade R-CNN network model;
and S23, performing weighted fusion on the output result of the trained ResNeXt-50_ Cascade R-CNN network model obtained in the S22 and the output result of the trained Res2Net _ Cascade R-CNN network model to obtain a trained initial network model.
5. The method for identifying dangerous goods based on neural network as claimed in claim 4, wherein: the ResNeXt-50_ Cascade R-CNN network model in the S21 sequentially comprises ResNeXt-50 and Cascade R-CNN.
6. The method for identifying dangerous goods based on neural network as claimed in claim 5, wherein: inputting the labeled image set in the S1 into a ResNeXt-50_ Cascade R-CNN network model for training in the S22 until loss converges to obtain a trained ResNeXt-50_ Cascade R-CNN network model; inputting the labeled image set in the S1 into a Res2Net _ Cascade R-CNN network model for training until loss converges to obtain a trained Res2Net _ Cascade R-CNN network model, wherein the specific process comprises the following steps:
s221, preprocessing the labeled image set in S1 to obtain a preprocessed labeled image set;
s222, inputting the preprocessed labeled image set in the S221 into a ResNeXt-50_ Cascade R-CNN network model for training to obtain a trained ResNeXt-50_ Cascade R-CNN network model;
and inputting the preprocessed labeled image set in the S221 into the Res2Net _ Cascade R-CNN network model for training to obtain the trained Res2Net _ Cascade R-CNN network model.
7. The method for identifying dangerous goods based on neural network as claimed in claim 6, wherein: the method for preprocessing the labeled image set of S1 in S221 is an online data augmentation method.
8. The method for identifying dangerous goods based on the neural network as claimed in claim 7, wherein: the online data augmentation method is one or more of the Mixup, Mosaic, AutoAugment and GridMask online data augmentation methods.
9. The method for identifying dangerous goods based on neural network as claimed in claim 8, wherein: in S23, a WBF model fusion method is used for performing weighted fusion on the output result of the trained ResNeXt-50_Cascade R-CNN network model obtained in S22 and the output result of the trained Res2Net_Cascade R-CNN network model.
10. A dangerous goods identification system based on a neural network is characterized in that: the system is used for executing any one of the dangerous goods identification method based on the neural network in the claims 1 to 9.
CN202210570202.4A 2022-05-24 2022-05-24 Hazardous article identification method and identification system based on neural network Pending CN114782411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210570202.4A CN114782411A (en) 2022-05-24 2022-05-24 Hazardous article identification method and identification system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210570202.4A CN114782411A (en) 2022-05-24 2022-05-24 Hazardous article identification method and identification system based on neural network

Publications (1)

Publication Number Publication Date
CN114782411A true CN114782411A (en) 2022-07-22

Family

ID=82409218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210570202.4A Pending CN114782411A (en) 2022-05-24 2022-05-24 Hazardous article identification method and identification system based on neural network

Country Status (1)

Country Link
CN (1) CN114782411A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115526874A (en) * 2022-10-08 2022-12-27 哈尔滨市科佳通用机电股份有限公司 Round pin of brake adjuster control rod and round pin split pin loss detection method


Similar Documents

Publication Publication Date Title
US9594984B2 (en) Business discovery from imagery
CN108038424B (en) Visual automatic detection method suitable for high-altitude operation
CN110569843A (en) Intelligent detection and identification method for mine target
CN110972499A (en) Labeling system of neural network
CN112949457A (en) Maintenance method, device and system based on augmented reality technology
CN110689018A (en) Intelligent marking system and processing method thereof
CN114782411A (en) Hazardous article identification method and identification system based on neural network
CN114972316A (en) Battery case end surface defect real-time detection method based on improved YOLOv5
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN114662605A (en) Flame detection method based on improved YOLOv5 model
Alayed et al. Real-Time Inspection of Fire Safety Equipment using Computer Vision and Deep Learning
CN115131826B (en) Article detection and identification method, and network model training method and device
Peng et al. [Retracted] Helmet Wearing Recognition of Construction Workers Using Convolutional Neural Network
CN116030050A (en) On-line detection and segmentation method for surface defects of fan based on unmanned aerial vehicle and deep learning
CN112633286B (en) Intelligent security inspection system based on similarity rate and recognition probability of dangerous goods
CN117150003A (en) Work order analysis method and device
CN114005054A (en) AI intelligence system of grading
Jiang et al. YOLO Based Thermal Screening Using Artificial Intelligence (Al) for Instinctive Human Facial Detection
CN113420711A (en) Service industry worker service behavior recognition algorithm and system
Chatrasi et al. Pedestrian and object detection using image processing by yolov3 and yolov2
Chowdhury et al. Towards Tabular Data Extraction From Richly-Structured Documents Using Supervised and Weakly-Supervised Learning
CN109886360A (en) A kind of certificate photo Classification and Identification based on deep learning and detection method without a hat on and system
CN116894978B (en) Online examination anti-cheating system integrating facial emotion and behavior multi-characteristics
Chen Development of image recognition system for steel defects detection
Wang et al. Deep Neural Network Based Automatic Litter Detection in Desert Areas Using Unmanned Aerial Vehicle Imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination