CN111815616A - Method for detecting dangerous goods in X-ray security inspection image based on deep learning - Google Patents

Method for detecting dangerous goods in X-ray security inspection image based on deep learning

Info

Publication number
CN111815616A
CN111815616A (application CN202010706378.9A)
Authority
CN
China
Prior art keywords
dangerous goods
security inspection
image
deep learning
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010706378.9A
Other languages
Chinese (zh)
Inventor
李钢 (Li Gang)
张玲 (Zhang Ling)
杨子固 (Yang Zigu)
贺婧 (He Jing)
刘剑超 (Liu Jianchao)
郝中良 (Hao Zhongliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Castel Network Technology Co ltd
Taiyuan University of Technology
Original Assignee
Shanxi Castel Network Technology Co ltd
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Castel Network Technology Co ltd, Taiyuan University of Technology filed Critical Shanxi Castel Network Technology Co ltd
Priority to CN202010706378.9A
Publication of CN111815616A
Legal status: Pending

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/045 — Neural networks; combinations of networks
    • G06T 7/13 — Segmentation; edge detection
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10116 — Image acquisition modality: X-ray image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20221 — Special algorithmic details: image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting dangerous goods in X-ray security inspection images based on deep learning. It belongs to the technical field of security inspection dangerous goods detection and aims to improve security inspection efficiency and quality. The technical scheme is as follows: construct a fusion model of DenseNet and YOLOv3; process the X-ray security inspection image or video to be detected, load it into the fusion model, and load the weights obtained by pre-training; use a convolutional neural network as a feature extractor to propagate the loaded image forward and extract its features; from these features, calculate the confidence that dangerous goods are present and their position information; remove redundant prediction objects with a non-maximum suppression algorithm and output the prediction results. The invention can automatically identify dangerous goods and raise early warnings, improving security inspection efficiency, security inspection quality, and the overall technical level of security inspection.

Description

Method for detecting dangerous goods in X-ray security inspection image based on deep learning
Technical Field
The invention belongs to the technical field of security inspection dangerous goods detection, and particularly relates to a deep-learning-based method for detecting dangerous goods in X-ray security inspection images.
Background
In the field of public safety, at airports, subway stations, railway stations, bus stations and other public venues, security inspectors mainly inspect passengers' luggage with X-ray security inspection machines, which effectively improve the efficiency and accuracy of security inspection and are therefore very widely used. Dangerous articles such as explosives, chemical agents, guns and knives brought into stations and airports are usually hidden in luggage interlayers or inside clothing; they are well concealed and hard to find by manual inspection, which is also slow. During inspection, luggage enters the X-ray inspection channel, the X-rays are attenuated differently as they penetrate different materials in the luggage, and the security inspection machine renders the received signals as images of different colors on a screen; these images are then analyzed manually to identify dangerous goods and raise an early warning.
In practice, venues that require security inspection, such as airports, subways and railway stations, have dense crowds and heavy passenger and freight flow; to avoid congestion and delays, security inspection in public places must meet high requirements for both real-time performance and accuracy. Current security inspection, however, relies mainly on an inspector actively observing the X-ray images, so the inspector's ability to recognize dangerous goods directly determines the quality and efficiency of the work. This requires training full-time staff, and because the inspection task is monotonous and tedious, fatigue easily leads to missed and false detections, leaving a considerable safety hazard.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a deep-learning-based method for detecting dangerous goods in X-ray security inspection images. It aims to improve security inspection efficiency and quality by applying image recognition and target detection, trained on large amounts of data, to the automatic segmentation and identification of X-ray security inspection images.
In order to achieve the above object, the present invention is achieved by the following technical solutions.
A dangerous goods detection method in an X-ray security inspection image based on deep learning comprises the following specific steps:
step S1: constructing a Yolov3 model with PyTorch, in which the feature extractor adopts Darknet and additionally fuses the dense connections of DenseNet, yielding a deep learning model that fuses DenseNet and Yolov3; pre-training this model to obtain weights;
step S2: loading the weight obtained by pre-training, processing the X-ray security inspection video or image to be detected and then loading the processed X-ray security inspection video or image into a model;
step S3: carrying out forward propagation on the loaded image by using a convolutional neural network as a feature extractor, and extracting the features of the loaded image;
step S4: according to the extracted features, calculating the confidence degree of the dangerous goods and the position information of the dangerous goods;
step S5: removing redundant prediction objects with a non-maximum suppression algorithm, outputting the prediction results, and labeling each image that contains dangerous goods with the category and position information of the dangerous goods.
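The overall flow of steps S1 to S5 can be sketched as below. This is a minimal illustrative skeleton, not the patent's implementation: the function names, the stand-in `toy_model`, the class name "knife" and the thresholds are all assumptions made for the example.

```python
import numpy as np

def detect_dangerous_goods(image, model, conf_threshold=0.3):
    """Skeleton of steps S1-S5: preprocess, run the model, filter by
    confidence. `model` is any callable mapping a preprocessed image to
    (boxes, scores, labels); non-maximum suppression (step S5) would
    further prune overlapping boxes after the confidence filter."""
    x = image.astype(np.float32) / 255.0          # S2: simple preprocessing
    boxes, scores, labels = model(x)              # S3/S4: features -> predictions
    keep = scores >= conf_threshold               # drop low-confidence predictions
    return boxes[keep], scores[keep], labels[keep]

def toy_model(x):
    # stand-in for the DenseNet + Yolov3 fusion model: fixed fake predictions
    boxes = np.array([[10, 10, 50, 50], [200, 80, 260, 140]])
    scores = np.array([0.9, 0.1])
    labels = np.array(["knife", "bottle"])
    return boxes, scores, labels

frame = np.zeros((416, 416, 3), dtype=np.uint8)   # one decoded video frame
boxes, scores, labels = detect_dangerous_goods(frame, toy_model)
```

With the toy model above, only the 0.9-confidence "knife" box survives the 0.3 confidence filter.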
Further, the step S2 specifically includes:
step S21: for the loaded video, extracting video frames, converting the video frames into images and loading the images;
step S22: carrying out scale transformation on the loaded image, and zooming the image into a fixed size;
step S23: converting the processed image into a digital signal for fusion model calculation, and inputting it into the fusion model.
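Steps S21 to S23 amount to frame extraction, fixed-size rescaling, and conversion to a normalized tensor. A minimal sketch, assuming a 416×416 input size and nearest-neighbour resampling to stay dependency-free (a real pipeline would typically use bilinear resizing, e.g. `cv2.resize`):

```python
import numpy as np

def preprocess(image, size=416):
    """Scale an HxWx3 uint8 image to a fixed size and convert it to a
    normalized CHW float array, as in steps S22-S23. The 416 input size
    and nearest-neighbour resampling are illustrative assumptions."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size          # S22: fixed-size rescale
    cols = np.arange(size) * w // size
    resized = image[rows[:, None], cols[None, :]]
    tensor = resized.astype(np.float32) / 255.0  # S23: to a "digital signal"
    return tensor.transpose(2, 0, 1)             # HWC -> CHW for the model

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # a decoded frame (S21)
x = preprocess(frame)
```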
further, the convolutional neural network is a Darknet convolutional neural network, and step S3 specifically includes:
step S31: continuously performing convolution and down-sampling on the loaded image by using the Darknet convolution neural network fused with dense connection as a feature extractor to finish forward propagation and extract the original features of the input image under different scales;
step S32: utilizing the FPN network to fuse, through lateral connections and upsampling, the original features extracted in step S31 at different scales, obtaining three fusion features of different scales and sizes.
Convolutional neural networks offer high fault tolerance, strong self-learning ability, weight sharing, and automatic feature extraction, which gives them great advantages in image recognition and target detection.
Preferably, the Darknet convolutional neural network connects the original input of a layer directly to the output of the layer:
y=F(x)+x
where y is the output of the layer, F is the functional mapping of the layer, and x is the input of the layer.
The connection mode reduces the loss of the original information in the transmission process, and relieves the gradient disappearance problem in the deep neural network to a certain extent. And simultaneously, the idea of dense connection in the DenseNet is further fused, namely, all layers in the network are directly connected:
x_l = H_l([x_0, x_1, ..., x_{l-1}])
wherein x_l is the output of the l-th layer, H_l is the convolution function of the l-th layer, and [x_0, x_1, ..., x_{l-1}] is the concatenation of the outputs of all layers preceding the l-th layer. This mode of connection between layers reduces the loss of the original information during transmission and helps transmit information to the greatest extent, so that shallow feature information can be fully utilized and feature reuse is strengthened.
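The two connection patterns above can be sketched in PyTorch (the framework the document builds its model with). The channel counts, growth rate, and layer counts below are illustrative assumptions, not the patent's exact architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Darknet-style residual unit: y = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.f(x) + x  # identity shortcut preserves the original information

class DenseBlock(nn.Module):
    """DenseNet-style block: layer l sees the concatenation [x0, ..., x_{l-1}]."""
    def __init__(self, in_channels, growth, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.LeakyReLU(0.1),
            ))
            ch += growth  # each layer's input grows by `growth` channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```

The residual block adds its input back (preserving information additively), while the dense block concatenates every earlier output (reusing shallow features directly).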
In step S32, an FPN network is introduced: feature maps from different levels are fused through lateral connections and upsampling, yielding three fusion features of different scales and sizes. The fused features carry more semantic information, which helps recognize objects in the image, while also retaining richer local image detail, which helps localize the objects; together this strengthens the convolutional network's ability to express the image information.
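A hedged sketch of such FPN-style fusion in PyTorch, using lateral 1×1 convolutions, 2× upsampling, and channel concatenation. The channel counts and the choice of concatenation (as in Yolov3) rather than element-wise addition are assumptions for illustration, not taken from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFusion(nn.Module):
    """Fuse three backbone feature maps (strides 32/16/8) into three
    fusion features of different scales, as in step S32. Channel counts
    are illustrative, not the patent's exact configuration."""
    def __init__(self, c3=256, c4=512, c5=1024):
        super().__init__()
        self.lat5 = nn.Conv2d(c5, 256, 1)
        self.lat4 = nn.Conv2d(c4 + 256, 256, 1)
        self.lat3 = nn.Conv2d(c3 + 256, 256, 1)

    def forward(self, f3, f4, f5):
        p5 = self.lat5(f5)                                    # deepest, most semantic level
        u5 = F.interpolate(p5, scale_factor=2, mode="nearest")
        p4 = self.lat4(torch.cat([f4, u5], dim=1))            # semantics + mid-level detail
        u4 = F.interpolate(p4, scale_factor=2, mode="nearest")
        p3 = self.lat3(torch.cat([f3, u4], dim=1))            # finest localization detail
        return p3, p4, p5  # three scales: fine, medium, coarse
```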
Further, the step S4 includes:
step S41: uniformly dividing the extracted features into grids with the same size, wherein each grid is a vector representing the feature of the grid region and is responsible for detecting an object falling in the grid region;
step S42: extracting a feature vector of each grid, calculating the confidence coefficient of each dangerous article contained in the grid, setting a threshold value to be 0.3, and filtering the prediction with the confidence coefficient smaller than the threshold value, namely setting the confidence coefficient of the prediction to be zero;
step S43: distributing three anchor frames with different shapes for each grid to fit dangerous goods with different shapes and sizes falling in the grid;
step S44: extracting the feature vector of each grid, calculating the offset of the object in the grid relative to the distributed anchor frame, further calculating the position of the object, and finally obtaining a group of bounding box lists.
Further, the step S5 includes:
step S51: and sorting according to the confidence scores.
Step S52: selecting a prediction bounding box with the highest confidence coefficient, adding the prediction bounding box into a final output list, and deleting the prediction bounding box from the bounding box list;
step S53: calculating the areas of all the bounding boxes;
step S54: calculating the intersection ratio of the bounding box with the highest confidence degree and other candidate boxes IoU:
IoU = area(box_a ∩ box_b) / area(box_a ∪ box_b)
wherein box_a is the bounding box with the highest confidence selected in step S54, and box_b is any other prediction bounding box;
step S55: setting a threshold and deleting IoU bounding boxes that are larger than the threshold;
step S56: repeating the above process until the bounding box list is empty;
step S57: and labeling the predicted type and position information of the dangerous goods for the image containing the dangerous goods, and if the dangerous goods are not detected, determining that the image is a normal security inspection image without the dangerous goods.
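Steps S51 to S57 can be sketched as a standard non-maximum suppression routine. This is an illustrative implementation assuming boxes in (x1, y1, x2, y2) form; the function names and box format are assumptions, not the patent's code:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes (step S54)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])   # step S53: box areas
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Non-maximum suppression as in steps S51-S56: repeatedly keep the
    highest-confidence box and delete candidates whose IoU with it
    exceeds the threshold. Returns indices of the kept boxes."""
    order = list(np.argsort(scores)[::-1])   # S51: sort by confidence score
    keep = []
    while order:                             # S56: repeat until the list is empty
        best = order.pop(0)                  # S52: highest-confidence box
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= threshold]  # S55: delete overlaps
    return keep
```

For example, two heavily overlapping boxes with scores 0.9 and 0.8 collapse to the 0.9 box, while a distant third box is kept.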
Compared with the prior art, the invention has the following beneficial effects.
The invention is mainly intended for places such as airports that require security inspection, where dangerous goods are currently detected manually at great labor cost. The invention can automatically identify dangerous goods and raise early warnings, saving manpower and financial resources. It is also of great significance for raising the technical level of security inspection, preventing and stopping violent terrorist incidents, improving the public's sense of safety, and building and maintaining national security. The research can likewise be applied to transportation hubs and venues for important activities, such as railway stations and subway stations, and has great economic and social value.
Drawings
Fig. 1 is a diagram of a dangerous goods detection model in an X-ray security inspection image based on deep learning according to an embodiment of the present invention.
FIG. 2 is a diagram of an example of dangerous goods detection in an X-ray security image according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail with reference to the embodiments and the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. The technical solution of the present invention is described in detail below with reference to the embodiments and the drawings, but the scope of protection is not limited thereto.
As shown in fig. 1-2, a method for detecting dangerous goods in an X-ray security inspection image based on deep learning specifically includes the following steps:
in the security check process, the X-ray security check machine can continuously generate a scanning video in the scanning process, so that the invention can support the detection of the video and the detection of the corresponding picture.
Step S1: and loading the X-ray security inspection image or video to be detected. The method specifically comprises the following steps:
step S11: and for the loaded video, extracting a video frame, converting the video frame into an image, and loading.
Step S12: and carrying out certain scale transformation on the loaded image, and scaling the loaded image into a fixed size.
Step S13: converting the processed image into a digital signal for model calculation and inputting it into the deep learning model that fuses DenseNet and Yolov3.
Step S2: and realizing the definition and loading of the model, and loading the weight obtained by pre-training.
Yolov3 is a recently proposed one-stage target detection model. While keeping a fast detection speed, it improves classification accuracy through a better classification algorithm and backbone feature extraction network, and it compensates for weak small-object detection by refining the grid division and predicting at multiple scales, achieving higher detection precision. It therefore meets both the real-time and the accuracy requirements of X-ray security inspection dangerous goods detection.
Step S3: and (3) carrying out forward propagation on the loaded image by using the convolutional neural network as a feature extractor, and extracting the features of the loaded image. The method specifically comprises the following steps:
step S31: and continuously performing convolution and down-sampling on the loaded image by using the Darknet convolution neural network fused with the dense connection as a feature extractor to finish forward propagation and extract the original features of the input image under different scales.
Step S32: and (4) utilizing the FPN network to obtain three fusion features with different scales and sizes by transversely connecting and upsampling the original features extracted in the step S31 under different scales.
Step S41: and uniformly dividing the extracted features into grids with the same size, wherein each grid is a vector representing the regional features of the grid. Each mesh is responsible for detecting objects falling within it;
step S42: extracting a feature vector for each grid, calculating the confidence that the grid contains each class of dangerous article, setting the threshold to 0.7, and filtering out predictions whose confidence is below the threshold;
step S43: for each grid, three anchor frames of different shapes are assigned for fitting the hazardous materials of different shapes and sizes falling within the grid.
Step S44: extracting the feature vector of each grid, calculating the offset of the object in the grid relative to the assigned anchor box, and from it the position of the object (b_x, b_y, b_w, b_h):
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
Wherein (b_x, b_y) are the center coordinates of the predicted object's bounding box in the image and (b_w, b_h) are its width and height; (t_x, t_y) are the predicted offsets of the box center, and (t_w, t_h) the predicted scale offsets relative to the anchor box's width and height; (c_x, c_y) are the coordinates of the top-left corner of the grid cell containing the prediction, and (p_w, p_h) are the width and height of the assigned anchor box.
Finally, a set of bounding box lists is obtained.
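The decoding equations of step S44 can be sketched as follows. Coordinates here are in grid units, and the function name and argument layout are illustrative assumptions:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode one predicted offset (tx, ty, tw, th) into a box center and
    size (bx, by, bw, bh), following the equations of step S44. (cx, cy)
    is the top-left corner of the grid cell; (pw, ph) is the assigned
    anchor's width and height."""
    bx = sigmoid(tx) + cx        # sigmoid keeps the center inside its grid cell
    by = sigmoid(ty) + cy
    bw = pw * np.exp(tw)         # anchor size scaled by a log-space offset
    bh = ph * np.exp(th)
    return bx, by, bw, bh
```

With zero offsets the box sits at the center of its cell with exactly the anchor's size, which is why the anchors act as shape priors.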
Step S5: outputting a prediction result, and labeling the type and position information of the dangerous goods contained in the image containing the dangerous goods; the method specifically comprises the following steps:
step S51: and removing redundant prediction objects by using a non-maximization suppression NMS algorithm.
The idea of the non-maximum suppression (NMS) algorithm is to search for local maxima and suppress non-maximal values. For example, when detecting a dangerous article, a large number of mutually overlapping prediction boxes may be generated at the same target position; non-maximum suppression then selects the optimal object bounding box and eliminates the redundant ones.
Step S52: and sorting according to the confidence scores.
Step S53: and selecting the prediction bounding box with the highest confidence coefficient, adding the prediction bounding box into the final output list, and deleting the prediction bounding box from the bounding box list.
Step S54: the area of all bounding boxes is calculated.
Step S55: calculating the intersection ratio of the bounding box with the highest confidence degree and other candidate boxes IoU:
IoU = area(box_a ∩ box_b) / area(box_a ∪ box_b)
Wherein box_a is the bounding box with the highest confidence selected in step S55, and box_b is any other prediction bounding box.
Step S56: setting the threshold to 0.5 and deleting the bounding boxes whose IoU exceeds this threshold.
Step S57: the above process is repeated until the bounding box list is empty.
Step S58: and labeling the predicted dangerous goods category and position information for the image containing the dangerous goods. If no dangerous goods are detected, the image is a normal security inspection image without dangerous goods.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A dangerous goods detection method in an X-ray security inspection image based on deep learning is characterized by comprising the following specific steps:
step S1: constructing a deep learning model fusing DenseNet and Yolov3, and pre-training the model to obtain a weight;
step S2: loading the weight obtained by pre-training, processing the X-ray security inspection video or image to be detected and then loading the processed X-ray security inspection video or image into a model;
step S3: carrying out forward propagation on the loaded image by using a convolutional neural network as a feature extractor, and extracting the features of the loaded image;
step S4: according to the extracted features, calculating the confidence degree of the dangerous goods and the position information of the dangerous goods;
step S5: removing redundant prediction objects with a non-maximum suppression algorithm, outputting the prediction results, and labeling each image that contains dangerous goods with the category and position information of the dangerous goods.
2. The method for detecting dangerous goods in X-ray security inspection images based on deep learning of claim 1, characterized in that a Yolov3 model is built with PyTorch, its feature extractor adopts Darknet, and the dense connections of DenseNet are fused in at the same time, so that shallow feature information can be fully utilized and feature reuse is strengthened.
3. The method for detecting dangerous goods in X-ray security inspection images based on deep learning of claim 1, wherein the step S2 specifically comprises:
step S21: for the loaded video, extracting video frames, converting the video frames into images and loading the images;
step S22: carrying out scale transformation on the loaded image, and zooming the image into a fixed size;
step S23: for the processed image, it is converted into a digital signal for fusion model calculation, and input into the fusion model.
4. The method for detecting dangerous goods in X-ray security inspection images based on deep learning of claim 1, wherein the convolutional neural network is a Darknet convolutional neural network, and the step S3 specifically comprises:
step S31: continuously performing convolution and down-sampling on the loaded image by using the Darknet convolution neural network fused with dense connection as a feature extractor to finish forward propagation and extract the original features of the input image under different scales;
step S32: utilizing the FPN network to fuse, through lateral connections and upsampling, the original features extracted in step S31 at different scales, obtaining three fusion features of different scales and sizes.
5. The method for detecting dangerous goods in X-ray security inspection image based on deep learning of claim 3, wherein Darknet convolutional neural network directly connects the original input of a certain layer to the output of the layer:
y=F(x)+x
where y is the output of the layer, F is the functional mapping of the layer, and x is the input of the layer.
6. The method for detecting dangerous goods in X-ray security inspection images based on deep learning of claim 1, wherein the step S4 comprises:
step S41: uniformly dividing the extracted features into grids with the same size, wherein each grid is a vector representing the feature of the grid region and is responsible for detecting an object falling in the grid region;
step S42: extracting a feature vector of each grid, calculating the confidence coefficient of each dangerous article contained in the grid, setting a threshold value to be 0.3, and filtering the prediction with the confidence coefficient smaller than the threshold value, namely setting the confidence coefficient of the prediction to be zero;
step S43: distributing three anchor frames with different shapes for each grid to fit dangerous goods with different shapes and sizes falling in the grid;
step S44: extracting the feature vector of each grid, calculating the offset of the object in the grid relative to the distributed anchor frame, further calculating the position of the object, and finally obtaining a group of bounding box lists.
7. The method for detecting dangerous goods in X-ray security inspection images based on deep learning of claim 1, wherein the step S5 comprises:
step S51: sorting according to the confidence score;
step S52: selecting a prediction bounding box with the highest confidence coefficient, adding the prediction bounding box into a final output list, and deleting the prediction bounding box from the bounding box list;
step S53: calculating the areas of all the bounding boxes;
step S54: calculating the intersection ratio of the bounding box with the highest confidence degree and other candidate boxes IoU:
IoU = area(box_a ∩ box_b) / area(box_a ∪ box_b)
wherein box_a is the bounding box with the highest confidence selected in step S54, and box_b is any other prediction bounding box;
step S55: setting a threshold and deleting IoU bounding boxes that are larger than the threshold;
step S56: repeating the above process until the bounding box list is empty;
step S57: and labeling the predicted type and position information of the dangerous goods for the image containing the dangerous goods, and if the dangerous goods are not detected, determining that the image is a normal security inspection image without the dangerous goods.
CN202010706378.9A 2020-07-21 2020-07-21 Method for detecting dangerous goods in X-ray security inspection image based on deep learning Pending CN111815616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010706378.9A CN111815616A (en) 2020-07-21 2020-07-21 Method for detecting dangerous goods in X-ray security inspection image based on deep learning


Publications (1)

Publication Number Publication Date
CN111815616A 2020-10-23

Family

ID=72861515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010706378.9A Pending CN111815616A (en) 2020-07-21 2020-07-21 Method for detecting dangerous goods in X-ray security inspection image based on deep learning

Country Status (1)

Country Link
CN (1) CN111815616A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633286A (en) * 2020-12-25 2021-04-09 北京航星机器制造有限公司 Intelligent security inspection system based on similarity rate and recognition probability of dangerous goods
CN112884755A (en) * 2021-03-11 2021-06-01 北京理工大学 Method and device for detecting contraband
CN113159110A (en) * 2021-03-05 2021-07-23 安徽启新明智科技有限公司 X-ray-based liquid intelligent detection method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109858569A (en) * 2019-03-07 2019-06-07 中国科学院自动化研究所 Multi-tag object detecting method, system, device based on target detection network
CN110018524A (en) * 2019-01-28 2019-07-16 同济大学 A kind of X-ray safety check contraband recognition methods of view-based access control model-attribute
CN111126447A (en) * 2019-11-29 2020-05-08 中国船舶重工集团公司第七一三研究所 Intelligent passenger security check luggage image automatic identification method
CN111126238A (en) * 2019-12-19 2020-05-08 华南理工大学 X-ray security inspection system and method based on convolutional neural network


Non-Patent Citations (1)

Title
杨露菁 等 (Yang Lujing et al.): "智能图像处理及应用" (Intelligent Image Processing and Applications), 31 March 2019, Beijing: China Railway Press, pages 232-233 *


Similar Documents

Publication Publication Date Title
CN111815616A (en) Method for detecting dangerous goods in X-ray security inspection image based on deep learning
CN110532889B (en) Track foreign matter detection method based on rotor unmanned aerial vehicle and YOLOv3
CN111222478A (en) Construction site safety protection detection method and system
CN112200225B (en) Steel rail damage B display image identification method based on deep convolution neural network
CN105260749B (en) Real-time target detection method based on direction gradient binary pattern and soft cascade SVM
CN102982313B (en) The method of Smoke Detection
CN112633149B (en) Domain-adaptive foggy-day image target detection method and device
CN111178182A (en) Real-time detection method for garbage loss behavior
CN116485709A (en) Bridge concrete crack detection method based on YOLOv5 improved algorithm
CN113469050A (en) Flame detection method based on image subdivision classification
CN113743260B (en) Pedestrian tracking method under condition of dense pedestrian flow of subway platform
CN113553916B (en) Orbit dangerous area obstacle detection method based on convolutional neural network
CN110309765B (en) High-efficiency detection method for video moving target
CN114022837A (en) Station left article detection method and device, electronic equipment and storage medium
CN114399734A (en) Forest fire early warning method based on visual information
CN112149533A (en) Target detection method based on improved SSD model
Zheng et al. A review of remote sensing image object detection algorithms based on deep learning
CN114708566A (en) Improved YOLOv 4-based automatic driving target detection method
CN115601682A (en) Method and device for detecting foreign matters of underground belt conveyor
CN115457304A (en) Luggage damage analysis method and system based on target detection
CN111462090A (en) Multi-scale image target detection method
CN113988222A (en) Forest fire detection and identification method based on fast-RCNN
CN110992324B (en) Intelligent dangerous goods detection method and system based on X-ray image
CN116206155A (en) Waste steel classification and identification method based on YOLOv5 network
CN115719475A (en) Three-stage trackside equipment fault automatic detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination