WO2020187077A1 - Security inspection system and method based on a deep neural network - Google Patents
Security inspection system and method based on a deep neural network
- Publication number
- WO2020187077A1 (PCT/CN2020/078425)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- module
- neural network
- image
- training
- model
- Prior art date
Classifications
- G01V5/00 — Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; learning methods
- G06T7/11 — Image analysis; region-based segmentation
- G06T7/90 — Image analysis; determination of colour characteristics
Definitions
- the invention belongs to the technical field of security inspection, and specifically relates to a security inspection system and method based on a deep neural network.
- X-ray security inspection machines play an important role in the safety inspection of dangerous goods and the safety of transportation vehicles.
- the traditional X-ray security inspection machine requires staff to examine the X-ray baggage image carefully to determine whether it contains dangerous goods.
- such devices offer little automation, manual inspection is costly, and misjudgments can occur, posing a serious threat to safe travel and potentially causing major accidents.
- the patent application "An automatic identification device for contraband security inspection" (CN 201710233696.6) converts the image from the RGB color space to the HSV color space and makes three copies, one per color, for identification. After optimizing image quality, the recognized patterns of the three colors are matched in parallel against pre-stored contraband templates of the corresponding colors using SURF feature matching; if the matching rate exceeds 55%, the luggage is considered to contain contraband.
- SURF feature matching compares the number of SURF descriptors in the X-ray image against pre-stored contraband image templates. It can only identify objects of the same style and color, has low detection accuracy on similar items (for example, a toy pistol shaped like a real gun), generalizes poorly, and yields unclear category labels. It has some capability to detect rotated, stretched, or deformed objects in luggage, but it is difficult for it to accurately detect and distinguish objects in messily stacked or overlapping luggage.
- existing dangerous goods detection adopts image processing technology that mainly segments objects by color and then extracts and analyzes image features.
- such methods handle poorly an object made of different materials: for example, the tip of a pair of scissors appears blue while the handle usually appears orange, so color segmentation captures only local features of the object, resulting in low detection accuracy and unclear object categories.
- detection accuracy for rotated, stretched, or deformed objects in luggage is low, and because luggage is stacked messily, overlapping objects are difficult to detect accurately.
- an intelligent security inspection system integrating deep learning algorithms will greatly raise the degree of automation of security inspection devices, improve the accuracy of dangerous goods identification, effectively reduce the workload of security inspection staff, improve the throughput of security check channels, reduce congestion, and safeguard travel safety to the greatest extent.
- a new X-ray intelligent security inspection device and method based on color segmentation and a multi-plane deep neural network is proposed to solve the problem of detecting and identifying items carried in daily luggage and parcels.
- a deep neural network detection model is established and trained with big data on the features of common objects, so that the detector can recognize and classify rotated, stretched, and deformed objects.
- a security inspection system based on a deep neural network of the present invention includes an X-ray imaging module, a detection model training and learning module, an object recognition module, and a security management module.
- the output of the X-ray imaging module is connected to the input of the object recognition module, the object recognition module is bidirectionally connected to the detection model training and learning module, and the output of the object recognition module is connected to the input of the security management module;
- the X-ray imaging module is used to obtain an X-ray image video sequence of the item, convert it to a digital picture by analog-to-digital conversion, and pass the digital picture to the object recognition module;
- the detection model training and learning module is used to perform image training to obtain a learning model and pass the learning model to the object recognition module;
- the object recognition module is used to load the learning model from the detection model training module, classify and locate items, and transmit the type and coordinate information of detected objects to the security management module;
- the security management module is used to route items to different conveying channels according to the object type and coordinate information output by the object recognition module;
- the object transmission module includes an object entry channel, a dangerous goods output channel, and a non-dangerous goods output channel.
- the security management module includes an information management module, a warning module, a baggage control module, and a display module. The information management module receives the object classification and location information sent by the object recognition module and determines from it whether the object is a dangerous item; the alarm module raises alarms; the baggage control module conveys the baggage to the different channels of the object transmission module; and the display module displays X-ray pictures and detection results.
- a security inspection method based on a deep neural network first trains an image learning model on pictures; during item detection, an X-ray image video sequence of the item to be inspected is acquired and converted to a digital picture by analog-to-digital conversion; the image learning model is then loaded to identify the types and coordinates of the items in the digital picture, and the items are routed to different conveying channels according to their types and coordinates.
- Step 1: Use the X-ray emission device to image the item by transmission, obtaining an X-ray image video sequence, which undergoes analog-to-digital conversion to yield a digital picture;
- Step 2: Load the image learning model and use it to identify and locate items in the digital picture. The image learning model is obtained through training as follows: first, build the object training model from the convolutional, pooling, and fully connected layers of a convolutional neural network; then classify the previously acquired X-ray pictures by security inspection object category and annotate each object's category and coordinate information, where the coordinate information comprises the object center coordinates x, y and the target frame length w and width h; then set the parameters of the training model, including the learning rate, batch size, and learning strategy; then feed the annotated pictures into the built convolutional neural network and train on them to obtain the image learning model; finally, verify the image learning model: if the expected effect is achieved, save the model to the model learning library; if not, adjust the parameters of the convolutional neural network and continue training until the expected effect is achieved.
- Step 3: Route the items to different conveying channels according to their types and coordinates.
- in step 2, the pictures used to train the learning model are pictures taken at different angles and positions.
- sending the marked pictures into the convolutional neural network, and training the marked pictures with the built convolutional neural network includes the following steps:
- n_in is the dimension of the last dimension of the tensor.
- Xk represents the k-th input matrix.
- Wk represents the k-th sub-convolution kernel matrix of the convolution kernel.
- s(i,j) is the value of the element at the corresponding position of the output matrix for convolution kernel W, and b is the bias term;
- the loss function adopts the focal loss function;
- the bbox with the highest confidence is selected as the detection result and output.
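The focal loss named above can be written out directly. The sketch below is a minimal scalar implementation, using α_t = 0.25 and γ = 2 as given later in the claims; it is an illustration of the standard focal loss formula FL(p_t) = -α_t (1 - p_t)^γ log(p_t), not the patent's exact training code.

```python
import math

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p_t is the predicted probability of the true class; alpha_t balances
    positive and negative samples, and gamma (the focusing parameter,
    gamma >= 0) down-weights well-classified examples so training
    concentrates on hard ones.
    """
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

A well-classified detection (p_t = 0.9) contributes orders of magnitude less loss than a hard one (p_t = 0.1), which is the point of the modulation factor (1 - p_t)^γ.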
- the present invention has at least the following beneficial technical effects:
- Flammable and explosive items mainly include kerosene, liquefied petroleum gas, solid alcohol, compressed gases, firecrackers, and fireworks; firearms and ammunition items mainly include imitation guns, steel-ball guns, stun guns, gun-shaped lighters, bullets, blank cartridges, and magazines; explosives mainly include flake TNT, plastic explosives, fuses, detonators, and timed detonating devices; controlled knives mainly include daggers, switchblades, triangular knives, and lock knives; dangerous goods mainly include scissors, axes, kitchen knives, and slingshots; police equipment mainly includes electric batons, nunchakus, handcuffs, and smoke bombs;
- Daily luggage items mainly include: bottled water, bottled wine, liquid alcohol, glass glue, etc.
- object classification is clear, which effectively assists staff in security inspections; object categories can be added according to actual needs to further improve safety.
- Figure 1 is a block diagram of an X-ray intelligent security inspection system based on a deep neural network
- Figure 2 is a flowchart of the object recognition detection module and the safety management module
- Figure 3 is a flowchart of the model training module
- Figure 4 is a flowchart of the color segmentation algorithm
- Figure 5 is the original X-ray image
- Figure 6a is the R plane view
- Figure 6b is the G plane view
- Figure 6c is the B plane view
- Figure 7a is the H plane view
- Figure 7b is the S plane view
- Figure 7c is the V plane view
- Figure 8a is the mixture plane view
- Figure 8b is the organic matter plane view
- Figure 8c is the inorganic matter plane view
- Figure 8d is the other-colors plane view.
- a security inspection system based on a deep neural network includes an object transmission module, an X-ray imaging module, a detection model training and learning module, an object recognition module, and a security management module.
- the working process is as follows: the object transmission module conveys the luggage into the detection range of the X-ray imaging module; the X-ray imaging module emits X-rays that penetrate the luggage, producing an X-ray image video sequence, which is converted to a digital picture by analog-to-digital conversion; the security inspection item learning model is loaded and a convolutional neural network classifies and locates the objects; the recognized categories and locations are sent to the security management module, which determines the channel to which the luggage flows.
- the object transmission module is mainly used to convey luggage items during the security check;
- the X-ray imaging module mainly uses X-rays generated by the X-ray emission tube to penetrate the luggage in the channel, obtaining an X-ray image video sequence that is then converted to a digital picture by analog-to-digital conversion;
- the detection model training and learning module collects and annotates object pictures, feeds them to the convolutional neural network for learning, obtains the learned object detection model, and passes the trained model to the object recognition module;
- the object recognition module is used to load the X-ray image learning model of the detection model training module, use the built-in object detection algorithm for object recognition and positioning, and transmit the detected object type and coordinate information to the security management module;
- the alarm module and the baggage control module in the safety management module are used to determine whether an alarm is needed and to transmit the items to the dangerous goods channel according to the object type and coordinate information output by the object recognition module.
- the security management module includes an information management module, a warning module, a baggage control module and a display module.
- the information management module is used to receive the object classification and location information sent by the object recognition module, and to determine whether the detected object is a dangerous article according to the received object classification and location information
- the alarm module is used to alarm
- the baggage control module is used to convey the baggage to the different channels of the object transmission module
- the display module is used to display X-ray pictures and detected result pictures during the working process of the security inspection machine.
- This system can be deployed as a new type of intelligent security inspection system, or its object recognition module, model training module, and security management module can be retrofitted onto an existing security inspection system to upgrade it intelligently.
- the intelligent security inspection method mainly includes four parts: detection area extraction, image plane processing, detector learning and training, and intelligent detection of dangerous goods.
- the detection area extraction process is as follows: segment the area to be detected according to the characteristics of the X-ray background, and discard a large number of white candidate detection areas directly, avoiding subsequent time-consuming recognition operations and improving the speed of item detection.
- image plane preprocessing mainly converts the image from the RGB model to the HSV model; the HSV model comprises the three color planes H, S, and V; the image is then divided by the hue plane H into four colors: orange, green, blue, and other colors;
- the picture input to the convolutional neural network is represented in the RGB color model, composed of the three color planes R, G, and B; the present invention adds the H, S, and V color planes obtained in the preprocessing stage, plus the orange, green, blue, and other-color planes generated by color segmentation, for a total of 10 color planes.
- the intelligent security inspection system uses a large number of X-ray object pictures taken at different angles and positions; the collected X-ray images are classified and annotated with object type and coordinates, split 8:2 into a training picture set and a test picture set, and the .xml annotation format required by the algorithm (including object category, size, and coordinate position in the X-ray image) is generated from the original pictures.
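The .xml annotation described above can be generated programmatically. The patent does not specify the exact tag layout, so the sketch below assumes the common Pascal-VOC-style structure (filename, size, object name, bounding box); all tag names and the `make_annotation` helper are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def make_annotation(filename, width, height, objects):
    """Build a Pascal-VOC-style .xml annotation for one X-ray picture.

    `objects` is a list of (category, xmin, ymin, xmax, ymax) tuples.
    Returns the annotation as an XML string.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin),
                         ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

# One annotated X-ray picture with a single labeled object (hypothetical names).
xml_text = make_annotation("bag_0001.png", 416, 416,
                           [("knife", 120, 80, 200, 160)])
```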
- the baggage items are transferred from the transmission module to the X-ray imaging module.
- the X-ray emitter images the baggage by transmission to obtain an X-ray image video sequence.
- the X image video sequence undergoes analog-to-digital conversion to obtain a digital picture.
- Load the X-ray learning model to detect the types and coordinates of objects, which are transmitted to the security management module through the communication interface.
- according to the alarm strategy and confidence threshold set by the security management module, the system decides whether to alarm and whether to route the item to the dangerous goods channel.
- the confidence threshold can be set according to the needs of the security inspection; the default is 70%. If the detection confidence of an object exceeds the threshold, an alarm is raised.
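The threshold-based alarm decision above is a simple filter. The sketch below assumes detections arrive as (category, confidence) pairs, a format chosen here for illustration; the 0.70 default follows the 70% threshold stated in the description.

```python
def should_alarm(detections, threshold=0.70):
    """Return the detections whose confidence exceeds the alarm threshold.

    `detections` is a list of (category, confidence) pairs. An empty
    result means no alarm is raised and the item stays on the normal channel.
    """
    return [d for d in detections if d[1] > threshold]
```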
- the object detection module can also be updated on top of the original security inspection system and delivered to the security inspection machine through the network communication interface; the X-ray learning model outputs the type and coordinate information of objects to the existing security inspection screen through the communication interface; an alarm threshold is set, and if dangerous goods are detected the object conveyor is suspended.
- the detector learning training includes the following steps:
- Step 1 Use the convolutional layer, pooling layer and fully connected layer of the convolutional neural network to build an object training model.
- Step 2 Classify the pictures in the X-ray picture library according to the security check object category, and manually mark the category and coordinate information of the object in the picture.
- Step 3 Set the parameters of the training model.
- the parameters include the learning rate, batch size, learning strategy, etc.
- Step 4. Send the marked pictures into the convolutional neural network.
- Step 5 Use the built convolutional neural network to train the labeled pictures to obtain a learning model.
- Step 6 Verify the learning model. If the expected effect is achieved, save the learning model to the model learning library; if the expected effect is not achieved, adjust the parameters of the convolutional neural network and continue training until the learning model achieves the expected effect.
- mAP (Mean Average Precision) is used as the metric to verify the learning model.
- Step 4 includes the following steps:
- Step 4.1 Detection area extraction
- the area to be detected is segmented according to the characteristics of the X-ray background: the large number of candidate regions with blank (white) background are discarded directly and the colored regions are taken as detection areas, which avoids time-consuming recognition later and improves the speed of item detection.
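Discarding the near-white X-ray background can be sketched as a per-pixel mask. The patent states only that white regions are dropped and colored regions are kept; the `detection_mask` helper name and the 245 whiteness threshold are assumptions for illustration.

```python
def detection_mask(pixels, white_thresh=245):
    """Mark colored pixels as candidate detection area.

    In an X-ray image the empty background is near-white, so a pixel whose
    r, g, and b values are all at or above `white_thresh` is treated as
    background (False); everything else is kept (True).
    `pixels` is a nested list of (r, g, b) tuples with values in 0..255.
    """
    return [[not all(c >= white_thresh for c in px) for px in row]
            for row in pixels]
```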
- Step 4.2 Image plane processing
- the input HSV image is divided into an organic orange channel, an inorganic blue channel, a mixture green channel, and an other-colors channel according to the value ranges of hue, saturation, and brightness.
- when H is in 20°–60°, S in 0.4–1.0, and V in 0.4–1.0, the pixel belongs to the organic orange channel; when H is in 100°–140°, S in 0.4–1.0, and V in 0.4–1.0, to the mixture green channel; when H is in 220°–260°, S in 0.4–1.0, and V in 0.4–1.0, to the inorganic blue channel; when H is outside the orange, green, and blue ranges while S is in 0.4–1.0 and V is in 0.4–1.0, to the other-colors channel.
- Step 4.3: Take the r, g, and b channels of the picture and store them in the first three channels of the picture to be input to the convolutional neural network; then convert the RGB image to the HSV color model and store the extracted h, s, and v channels in the three channels after the RGB channels; then, using the HSV value ranges of hue, saturation, and brightness, segment the image into four colors (organic orange, inorganic blue, mixture green, and other colors) stored in the last four channels; finally, input the resulting 10-channel training image into the convolutional neural network.
- the detection model training and learning module segments the image based on the color features of the X-ray image and synthesizes a multi-plane detection image integrating R, G, B, H, S, V, and material information, which improves the accuracy of object detection.
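The 10-channel composition of step 4.3 can be sketched per pixel: three RGB channels, three HSV channels, and four one-hot color-segmentation channels. The function below is an assumption-laden illustration (the patent works on whole images, not single pixels); note that `colorsys` returns hue in [0, 1], which is scaled to degrees before applying the step 4.2 thresholds.

```python
import colorsys

def ten_channel_pixel(r, g, b):
    """Compose the 10 per-pixel channels of step 4.3.

    r, g, b are floats in [0, 1]. Returns [r, g, b, h, s, v] followed by
    the four segmentation channels in the order given in the description:
    organic orange, inorganic blue, mixture green, other colors.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    deg = h * 360.0  # hue in degrees for the step 4.2 ranges
    in_range = 0.4 <= s <= 1.0 and 0.4 <= v <= 1.0
    orange = float(in_range and 20 <= deg <= 60)
    green = float(in_range and 100 <= deg <= 140)
    blue = float(in_range and 220 <= deg <= 260)
    other = float(in_range and not (orange or green or blue))
    return [r, g, b, h, s, v, orange, blue, green, other]
```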
- Step 5 includes the following steps:
- Step 5.1: Use convolution operations to extract features from the area to be detected.
- the input image size is 416*416 with 10 channels, and convolution is performed with 3*3 and 1*1 convolutional layers.
- the feature after convolution is expressed as s(i,j) = Σ_{k=1}^{n_in} (X_k ∗ W_k)(i,j) + b, where:
- n_in is the dimension of the last dimension of the input tensor;
- X_k is the k-th input matrix;
- W_k is the k-th sub-convolution kernel matrix of the convolution kernel;
- s(i,j) is the value of the element at the corresponding position of the output matrix for convolution kernel W; and
- b is the bias term.
- Step 5.2: Pooling adopts the max pooling method: the maximum value of each 2*2 pooling area is selected as the feature value, with a stride of 1.
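The per-channel convolution sum and the 2*2 max pooling described above can be sketched in plain Python. This is a minimal illustration of s(i,j) = Σ_k (X_k ∗ W_k)(i,j) + b (in the cross-correlation form CNN frameworks use) and of max pooling with stride 1, not the patent's actual 416*416 network.

```python
def conv2d(inputs, kernels, bias):
    """Multi-channel valid convolution: s(i,j) = sum_k (X_k * W_k)(i,j) + b.

    `inputs` is a list of n_in 2-D channel matrices X_k, `kernels` the
    matching list of sub-kernel matrices W_k, and `bias` the scalar b.
    """
    kh, kw = len(kernels[0]), len(kernels[0][0])
    oh = len(inputs[0]) - kh + 1
    ow = len(inputs[0][0]) - kw + 1
    out = [[bias] * ow for _ in range(oh)]
    for xk, wk in zip(inputs, kernels):
        for i in range(oh):
            for j in range(ow):
                out[i][j] += sum(xk[i + u][j + v] * wk[u][v]
                                 for u in range(kh) for v in range(kw))
    return out

def max_pool(x, size=2):
    """Max pooling with stride 1: the maximum of each size*size window."""
    return [[max(x[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(len(x[0]) - size + 1)]
            for i in range(len(x) - size + 1)]
```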
- Step 5.3 Use Softmax to classify each Bbox
- Step 5.4: The loss function adopts the focal loss function.
- Step 5.5: The local maximum method is adopted: the bbox (the rectangular area containing the object) with the highest confidence is selected as the detection result output, i.e., the position information of the detected object.
- the Bbox information contains 5 data values, namely x, y, w, h, and confidence.
- x, y refer to the coordinates of the center position of the bounding box of the object predicted by the current grid.
- w, h refers to the width and height of the bounding box of the object predicted by the current grid
- confidence refers to the confidence of the predicted object.
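Selecting the highest-confidence bbox from step 5.5 is a straightforward maximum over the five-value boxes described above. The dict representation and the `best_bbox` helper name are choices made here for readability, not part of the patent.

```python
def best_bbox(bboxes):
    """Select the bbox with the highest confidence as the detection output.

    Each bbox carries the five values described above: x, y (center of the
    predicted bounding box), w, h (its width and height), and confidence.
    """
    return max(bboxes, key=lambda bb: bb["confidence"])
```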
- Figure 5 is the X-ray image after grayscale conversion;
- Figure 6a is the R plane view of the X-ray image; Figure 6b is the G plane view; Figure 6c is the B plane view;
- Figure 7a is the H plane view of the X-ray image; Figure 7b is the S plane view; Figure 7c is the V plane view;
- Figure 8a is the mixture plane view of the X-ray image; Figure 8b is the organic matter plane view; Figure 8c is the inorganic matter plane view; Figure 8d is the other-colors plane view.
- the security inspection system includes the following steps:
- First, set the alarm thresholds for the different object types.
- Step 1: Convey the luggage items on the conveyor belt of the object transmission module.
- Step 2: The X-ray imaging module emits X-rays and images the luggage by transmission to obtain an X-ray image video sequence.
- Step 3: Obtain the digital picture of the luggage item after analog-to-digital conversion.
- Step 4: Load the X-ray image learning model of the model training module.
- Step 5: Use the learning model to classify and locate objects.
- Step 6: Output the object type and coordinate information in the picture and send it to the security management module.
- Step 7: The alarm module of the security management module decides whether to give an alarm, and the baggage control module determines the channel to which the baggage flows.
- the technology of the present invention mainly uses deep learning for object recognition and positioning, and organically combines the geometric features and texture features of the object with the color of the object in the X-ray image in the process of object feature learning. Using a large number of image data from different angles and different positions for learning can not only detect and recognize blurred, rotated, and deformed images, but also update the trained model to a series of security inspection machines in real time.
- the administrator can set the object type threshold, category and coordinates according to the degree of danger of the object.
- the intelligent security inspection system based on deep learning mainly improves the manual identification of dangerous goods in the traditional security inspection system into a process that relies on deep learning to assist the security inspection personnel, greatly reducing labor costs and making the security inspection system more intelligent.
- the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- High Energy & Nuclear Physics (AREA)
- General Life Sciences & Earth Sciences (AREA)
- Geophysics (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Claims (10)
- 一种基于深度神经网络的安检***,其特征在于,包括X光成像模块、检测模型训练学习模块、物体识别模块和安全管理模块,所述X光成像模块的输出端和物体识别模块的输入端连接,物体识别模块和检测模型训练学习模块双向连接,物体识别模块的输出端和安全管理模块的输入端连接;所述X光成像模块用于得到物品的X图像视频序列,然后经过模数转换得到数字图片,并将得到的数字图片传递至物体识别模块;所述检测模型训练学习模块用于进行图片训练,得到学习模型,并将学习模型传递至物体识别模块;所述物体识别模块用于加载检测模型训练模块中的学习模型,并对物品进行分类与定位,将检测识别出的物体的种类和坐标信息传送到安全管理模块;所述安全管理模块中用于根据物体识别模块识输出的物体种类和坐标信息将物品输送至不同的物品运送通道中。
- 根据权利要求1所述的一种基于深度神经网络的安检***,其特征在于,还包括物体传输模块,物体传输模块包括物品进入通道、危险品输出通道和非危险品输出通道。
- 根据权利要求1所述的一种基于深度神经网络的安检***,其特征在于,安全管理模块包括信息管理模块、警示模块、行李控制模块和显示模块;其中,信息管理模块用于接收物体识别模块发送的物体分类与位置信息,并根据接收到的物体分类与位置信息判别物体是否为危险物品;报警模块用于报警;行李控制模块用于将行李输送至物体传输模块中的不同通道中,显示模块用于显示X射线图片和检测结果。
- 一种基于深度神经网络的安检方法,其特征在于,首先利用图片训练出图像学习模型,在物品检测时,采集待检测物品的X图像视频序列,X图像视频序列经过模数转换得到数字 图片;然后加载图像学习模型,利用图像学习模型识别数字图片中物品的种类和坐标;然后根据物品的种类和坐标将物品按照种类划分至不同的输送通道。
- 根据权利要求4所述的一种基于深度神经网络的安检方法,其特征在于,包括以下步骤:步骤1、利用X射线发射装置透过物品后的成像,得到X图像视频序列,X图像视频序列经过模数转换得到数字图片;步骤2、加载图像学习模型,通过图像学习模型来对数字图片进行物品的识别与定位;所述图像学习模型通过训练得到,具体训练方法为:首先采用卷积神经网络的卷积层,池化层和全连接层搭建物体训练模型;然后将前期获得的X射线图片按安检物体类别分类,并标注出物体的类别和坐标信息,其中坐标信息包括物体中心点的坐标x,y和目标框的长w和宽h;然后设置训练模型的参数,包括学习率,批处理尺度和学习策略;然后将标注好的图片送入卷积神经网络中,用搭建好的卷积神经网络对标注过的图片进行训练,得到图像学习模型;然后验证图像学习模型,若达到预期效果,则将图像学习模型保存到模型学习库;若未达到预期效果,则调整卷积神经网络的参数,继续训练,直到图像学习模型达到预期效果;步骤3、根据物品的种类和坐标将物品按照种类划分至不同的输送通道。
- 根据权利要求5所述的一种基于深度神经网络的安检方法,其特征在于,步骤2中,用于训练学习模型的图片采用不同角度、位置的图片。
- 根据权利要求5所述的一种基于深度神经网络的安检方法,其特征在于,步骤2中,将标注好的图片送入卷积神经网络中,用搭建好的卷积神经网络对标注过的图片进行训练,包括以下步骤:S1、根据X射线背景特点将数字图片分割出待检测区域,将有颜色的区域为检测区域,所述数字图片为rgb图像;S2、将数字图片的r通道、g通道和b通道取出来存放到将要输入卷积神经网络的图片前三位通道;再将rgb图像模型转换为hsv颜色模型,提取出hsv模型的h通道、s通道和v通道,并存放到输入图片的rgb后面三个通道;通过hsv颜色模型的色调H,纯度S以及明亮度V的值,将hsv模型分为有机物橙色,无机物蓝色,混合物绿色,及其他颜色分割成4个颜色通道,存放至输入图片的后四位通道,将10个通道的训练图片输入至卷积神经网络中;S3、利用卷积运算对待检测区域进行特征提取,卷积后的特征表示为:其中,n_in是张量的最后一维的维数,Xk代表第k个输入矩阵,Wk代表卷积核的第k个子卷积核矩阵,s(i,j)即卷积核W对应的输出矩阵的对应位置元素的值,b是偏执量;S4、进行池化;S5、使用Softmax对每个目标框进行分类,得到分类之后的bbox;S6、损失函数采用focalloss损失函数,FL(pt)=-α t(1-pt) γlog(pt)γ为focusing parameter,γ>=0,1-pt称为调制系数,α t用于调节正样本和负样本的比例,前景类别使用α t时,对应的背景类别使用1-α,pt是不同类别的分类概率;S7、选取置信度最高的bbox作为检测结果输出。
- The deep-neural-network-based security inspection method according to claim 7, characterized in that, in S2, when H is in 20°-60°, S in 0.4-1.0, and V in 0.4-1.0, the pixel belongs to the organic orange channel; when H is in 100°-140°, S in 0.4-1.0, and V in 0.4-1.0, to the mixture green channel; when H is in 220°-260°, S in 0.4-1.0, and V in 0.4-1.0, to the inorganic blue channel; and when H lies outside the ranges of the orange, green, and blue channels while S is in 0.4-1.0 and V in 0.4-1.0, to the other-colors channel.
- The deep-neural-network-based security inspection method according to claim 7, characterized in that, in the pooling step S4, max pooling is used.
- The deep-neural-network-based security inspection method according to claim 7, characterized in that, in the focal loss of S6, α = 0.25 and γ = 2.
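The annotation format of Step 2 in claim 5 stores each target box as a center point (x, y) plus a length w and width h. As a minimal sketch, assuming labels arrive as corner-style boxes (the corner input format and function name are illustrative, not from the patent):

```python
def to_center_format(x1, y1, x2, y2):
    """Convert a corner-style box (x1, y1, x2, y2) into the claim-5
    annotation format: center point (x, y), length w, width h."""
    w = x2 - x1
    h = y2 - y1
    return (x1 + w / 2.0, y1 + h / 2.0, w, h)
```

In practice the center coordinates and box sizes are usually also normalized by the image dimensions before training, but the claim does not specify that step.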
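The 10-channel input of S2 in claim 7, with the HSV thresholds of claim 8, can be sketched as below. This is a minimal illustration, not the patent's implementation: the function names are hypothetical, and treating pixels whose S or V fall below 0.4 as belonging to no color channel (background) is an inference from claim 8's ranges.

```python
import colorsys
import numpy as np

def hsv_color_channel(h_deg, s, v):
    """Map an HSV pixel to one of the four color channels of claim 8:
    0 organic orange, 1 mixture green, 2 inorganic blue, 3 other colors.
    Returns None (assumed background) when S or V is below 0.4."""
    if not (0.4 <= s <= 1.0 and 0.4 <= v <= 1.0):
        return None
    if 20 <= h_deg <= 60:
        return 0
    if 100 <= h_deg <= 140:
        return 1
    if 220 <= h_deg <= 260:
        return 2
    return 3

def build_ten_channel(rgb):
    """rgb: HxWx3 float array in [0, 1]. Returns an HxWx10 tensor:
    channels 0-2 = RGB, 3-5 = HSV, 6-9 = one-hot color-channel masks."""
    height, width, _ = rgb.shape
    out = np.zeros((height, width, 10), dtype=np.float32)
    out[..., :3] = rgb
    for i in range(height):
        for j in range(width):
            r, g, b = rgb[i, j]
            h, s, v = colorsys.rgb_to_hsv(r, g, b)  # h, s, v in [0, 1]
            out[i, j, 3:6] = (h, s, v)
            c = hsv_color_channel(h * 360.0, s, v)
            if c is not None:
                out[i, j, 6 + c] = 1.0
    return out
```

For example, an orange pixel such as (1.0, 0.6, 0.2) has H = 30° and lands in the organic-orange mask channel.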
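The convolved feature of S3 sums a per-channel correlation over the n_in input matrices X_k and adds the bias b. A minimal numpy sketch, assuming stride 1 and no padding ("valid" output size), neither of which the claim specifies:

```python
import numpy as np

def conv_feature(X, W, b):
    """s(i, j) = sum_k (X_k * W_k)(i, j) + b  for a single output map.
    X: (n_in, H, W) input channels; W: (n_in, kh, kw) sub-kernel
    matrices of one convolution kernel; b: scalar bias."""
    n_in, height, width = X.shape
    _, kh, kw = W.shape
    out = np.zeros((height - kh + 1, width - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product over all channels of the kh x kw window.
            out[i, j] = np.sum(X[:, i:i + kh, j:j + kw] * W) + b
    return out
```

With all-ones inputs this makes the per-element arithmetic easy to check: two channels of a 2x2 kernel contribute 8, plus the bias.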
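The focal loss of S6, taking the α = 0.25 and γ = 2 of claim 10 as defaults, can be sketched for a single prediction as:

```python
import math

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), where p_t is the
    predicted probability of the true class. (1 - p_t)**gamma is the
    modulating factor that down-weights easy examples; alpha_t balances
    foreground (alpha) against background (1 - alpha) samples."""
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 2, a well-classified sample (p_t = 0.9) contributes far less loss than an ambiguous one (p_t = 0.5), which is the point of the modulating factor.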
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910218654.4A CN109946746A (zh) | 2019-03-21 | 2019-03-21 | Security inspection system and method based on a deep neural network |
CN201910218654.4 | 2019-03-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020187077A1 true WO2020187077A1 (zh) | 2020-09-24 |
Family
ID=67010557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/078425 WO2020187077A1 (zh) | 2020-03-09 | Security inspection system and method based on a deep neural network |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109946746A (zh) |
WO (1) | WO2020187077A1 (zh) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112364903A (zh) * | 2020-10-30 | 2021-02-12 | 盛视科技股份有限公司 | X-ray-machine-based article analysis and multi-dimensional image association method and system |
CN112444889A (zh) * | 2020-11-13 | 2021-03-05 | 北京航星机器制造有限公司 | Rapid security-inspection luggage remote centralized interpretation system and method |
CN112560944A (zh) * | 2020-12-14 | 2021-03-26 | 广东电网有限责任公司珠海供电局 | Image-recognition-based charging pile fire detection method |
CN112991256A (zh) * | 2020-12-11 | 2021-06-18 | 中国石油天然气股份有限公司 | Oil-pipe material counting method based on machine vision and deep learning |
CN113435543A (zh) * | 2021-07-22 | 2021-09-24 | 北京博睿视科技有限责任公司 | Visible-light and X-ray image matching method and device based on conveyor-belt marks |
CN113537213A (zh) * | 2021-07-14 | 2021-10-22 | 安徽炬视科技有限公司 | Smoke and open-flame detection algorithm based on variable convolution kernels |
CN113762023A (zh) * | 2021-02-18 | 2021-12-07 | 北京京东振世信息技术有限公司 | Object recognition method and device based on article association relations |
CN113759433A (zh) * | 2021-08-12 | 2021-12-07 | 浙江啄云智能科技有限公司 | Contraband screening method, device and security inspection equipment |
CN115731213A (zh) * | 2022-11-29 | 2023-03-03 | 北京声迅电子股份有限公司 | Sharp-instrument detection method based on X-ray images |
CN116610078A (zh) * | 2023-05-19 | 2023-08-18 | 广东海力储存设备股份有限公司 | Automated storage control method and system for stereoscopic warehouses, electronic device and storage medium |
CN117197787A (zh) * | 2023-08-09 | 2023-12-08 | 海南大学 | Intelligent security inspection method, device, equipment and medium based on improved YOLOv5 |
CN117409199A (zh) * | 2023-10-19 | 2024-01-16 | 中南大学 | Growth-oriented intelligent security inspection system and method based on cloud big data technology |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109946746A (zh) * | 2019-03-21 | 2019-06-28 | 长安大学 | Security inspection system and method based on a deep neural network |
CN110286415B (zh) * | 2019-07-12 | 2021-03-16 | 广东工业大学 | Security-inspection contraband detection method, device, equipment and computer-readable storage medium |
CN110794466A (zh) * | 2019-07-16 | 2020-02-14 | 中云智慧(北京)科技有限公司 | X-ray machine image acquisition auxiliary device and processing method |
CN110488368B (zh) * | 2019-07-26 | 2021-07-30 | 熵基科技股份有限公司 | Contraband identification method and device based on a dual-energy X-ray security inspection machine |
CN110533045B (zh) * | 2019-07-31 | 2023-01-17 | 中国民航大学 | Semantic segmentation method for contraband in luggage X-ray images incorporating an attention mechanism |
CN110751329B (zh) * | 2019-10-17 | 2022-12-13 | 中国民用航空总局第二研究所 | Control method and device for airport security-inspection channels, electronic device and storage medium |
CN111046908A (zh) * | 2019-11-05 | 2020-04-21 | 杭州电子科技大学 | Real-time monitoring model for emulsion-explosive packaging faults based on a convolutional neural network |
CN111062252B (zh) * | 2019-11-15 | 2023-11-10 | 浙江大华技术股份有限公司 | Real-time dangerous-goods semantic segmentation method, device and storage device |
CN111091150A (zh) * | 2019-12-12 | 2020-05-01 | 哈尔滨市科佳通用机电股份有限公司 | Method for detecting fractures in cross-bar cover plates of railway freight cars |
CN111126238B (zh) * | 2019-12-19 | 2023-06-20 | 华南理工大学 | X-ray security inspection system and method based on a convolutional neural network |
CN111167733A (zh) * | 2020-01-14 | 2020-05-19 | 东莞理工学院 | Automatic visual defect detection system |
CN111429410B (zh) * | 2020-03-13 | 2023-09-01 | 杭州电子科技大学 | Deep-learning-based system and method for discriminating materials in object X-ray images |
US11989890B2 (en) * | 2020-03-31 | 2024-05-21 | Hcl Technologies Limited | Method and system for generating and labelling reference images |
CN112144150A (zh) * | 2020-10-16 | 2020-12-29 | 北京经纬纺机新技术有限公司 | Distributed foreign-fiber sorting system applying deep-learning image processing |
CN112764120A (zh) * | 2020-12-29 | 2021-05-07 | 深圳市创艺龙电子科技有限公司 | Metal-detection imaging system and scanning equipment having the same |
CN113159110A (zh) * | 2021-03-05 | 2021-07-23 | 安徽启新明智科技有限公司 | X-ray-based intelligent liquid detection method |
CN113792826B (zh) * | 2021-11-17 | 2022-02-18 | 湖南苏科智能科技有限公司 | Dual-view correlated security inspection method and system based on neural networks and multi-source data |
CN114332543B (zh) * | 2022-01-10 | 2023-02-14 | 成都智元汇信息技术股份有限公司 | Multi-template security-inspection image recognition method, equipment and medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003065077A2 (en) * | 2002-01-30 | 2003-08-07 | Rutgers, The State University | Combinatorial contraband detection using energy dispersive x-ray diffraction |
CN106599967A (zh) * | 2016-12-08 | 2017-04-26 | 同方威视技术股份有限公司 | Tag for locating security-inspection articles and method for locating security-inspection articles |
CN106872498A (zh) * | 2017-04-11 | 2017-06-20 | 西安培华学院 | Automatic contraband identification device for security inspection |
CN107145898A (zh) * | 2017-04-14 | 2017-09-08 | 北京航星机器制造有限公司 | Radiographic image classification method based on a neural network |
CN107607562A (zh) * | 2017-09-11 | 2018-01-19 | 北京匠数科技有限公司 | Contraband identification device and method, and X-ray luggage security inspection system |
CN107871122A (zh) * | 2017-11-14 | 2018-04-03 | 深圳码隆科技有限公司 | Security inspection detection method, device, system and electronic equipment |
CN108198227A (zh) * | 2018-03-16 | 2018-06-22 | 济南飞象信息科技有限公司 | Intelligent contraband identification method based on X-ray security-machine images |
CN108303747A (zh) * | 2017-01-12 | 2018-07-20 | 清华大学 | Inspection device and detection method |
CN109446888A (zh) * | 2018-09-10 | 2019-03-08 | 唯思科技(北京)有限公司 | Elongated-article detection method based on a convolutional neural network |
CN109946746A (zh) * | 2019-03-21 | 2019-06-28 | 长安大学 | Security inspection system and method based on a deep neural network |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105116463A (zh) * | 2015-09-22 | 2015-12-02 | 同方威视技术股份有限公司 | Security inspection channel and security inspection device |
CN205656321U (zh) * | 2016-05-30 | 2016-10-19 | 公安部第一研究所 | X-ray detection device for identifying dangerous liquids in parcels |
CN206192923U (zh) * | 2016-10-27 | 2017-05-24 | 中云智慧(北京)科技有限公司 | Cloud-computing-based X-ray contraband detection system |
CN108226196A (zh) * | 2018-02-22 | 2018-06-29 | 青岛智慧云谷智能科技有限公司 | Intelligent X-ray machine detection system and detection method |
CN108802840B (zh) * | 2018-05-31 | 2020-01-24 | 北京迈格斯智能科技有限公司 | Method and device for automatic object recognition based on artificial-intelligence deep learning |
CN109086679A (zh) * | 2018-07-10 | 2018-12-25 | 西安恒帆电子科技有限公司 | Foreign-object detection method for a millimeter-wave radar security scanner |
CN109191389A (zh) * | 2018-07-31 | 2019-01-11 | 浙江杭钢健康产业投资管理有限公司 | Adaptive local enhancement method for X-ray images |
CN109187598A (zh) * | 2018-10-09 | 2019-01-11 | 青海奥越电子科技有限公司 | Contraband detection system and method based on digital image processing |
CN109447071A (zh) * | 2018-11-01 | 2019-03-08 | 博微太赫兹信息科技有限公司 | Millimeter-wave imaging dangerous-goods detection method based on FPGA and deep learning |
CN109472309A (zh) * | 2018-11-12 | 2019-03-15 | 南京烽火星空通信发展有限公司 | Object detection method for X-ray security-machine images |
-
2019
- 2019-03-21 CN CN201910218654.4A patent/CN109946746A/zh active Pending
-
2020
- 2020-03-09 WO PCT/CN2020/078425 patent/WO2020187077A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109946746A (zh) | 2019-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020187077A1 (zh) | Security inspection system and method based on a deep neural network | |
CN106127204B (zh) | Multi-direction water-meter reading-area detection algorithm using a fully convolutional neural network | |
CN108776779B (zh) | SAR sequence image target recognition method based on a convolutional recurrent network | |
CN108198227A (zh) | Intelligent contraband identification method based on X-ray security-machine images | |
CN111145177A (zh) | Image sample generation method, specific-scene target detection method and system | |
CN112712093B (zh) | Security-inspection image recognition method and device, electronic equipment and storage medium | |
CN110533051B (zh) | Automatic contraband detection method for X-ray security images based on a convolutional neural network | |
CN105809121A (zh) | Multi-feature collaborative traffic-sign detection and recognition method | |
CN108647700A (zh) | Multi-task vehicle-part recognition model, method and system based on deep learning | |
CN106297142A (zh) | UAV mountain-fire exploration control method and system | |
CN103218831A (zh) | Video moving-target classification and recognition method based on contour constraints | |
CN107563433A (zh) | Infrared small-target detection method based on a convolutional neural network | |
CN111368690A (zh) | Deep-learning-based ship detection method and system for video images affected by waves | |
CN110488368A (zh) | Contraband identification method and device based on a dual-energy X-ray security inspection machine | |
WO2023087653A1 (zh) | Dual-view correlated security inspection method and system based on neural networks and multi-source data | |
CN113963222A (zh) | High-resolution remote-sensing image change detection method based on a multi-strategy combination | |
CN109949229A (zh) | Collaborative target detection method across multiple platforms and multiple views | |
CN109101926A (zh) | Aerial target detection method based on a convolutional neural network | |
CN108364037A (zh) | Method, system and equipment for recognizing handwritten Chinese characters | |
CN112258490A (zh) | Intelligent damage detection method for low-emissivity coatings based on optical and infrared image fusion | |
CN109409409A (zh) | Real-time traffic-sign detection method based on HOG+CNN | |
Saha et al. | Unsupervised multiple-change detection in VHR optical images using deep features | |
Chen et al. | Research on the process of small sample non-ferrous metal recognition and separation based on deep learning | |
CN110992324B (zh) | Intelligent dangerous-goods detection method and system based on X-ray images | |
CN106169086B (zh) | High-resolution optical image damaged-road extraction method aided by navigation data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20772845 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20772845 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/05/2022) |
|