WO2020187077A1 - Security inspection system and method based on a deep neural network - Google Patents


Info

Publication number
WO2020187077A1
WO2020187077A1 · PCT/CN2020/078425 · CN2020078425W
Authority
WO
WIPO (PCT)
Prior art keywords
module
neural network
image
training
model
Prior art date
Application number
PCT/CN2020/078425
Other languages
English (en)
French (fr)
Inventor
屈立成
李萌萌
李坤伦
吕娇
赵明
王海飞
屈艺华
Original Assignee
长安大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长安大学 filed Critical 长安大学
Publication of WO2020187077A1 publication Critical patent/WO2020187077A1/zh

Classifications

    • G01V 5/00 — Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G06N 3/04 — Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; Learning methods
    • G06T 7/11 — Image analysis; Region-based segmentation
    • G06T 7/90 — Image analysis; Determination of colour characteristics

Definitions

  • the invention belongs to the technical field of security inspection, and specifically relates to a security inspection system and method based on a deep neural network.
  • X-ray security inspection machines play an important role in the safety inspection of dangerous goods and the safety of transportation vehicles.
  • the traditional X-ray security inspection machine requires the staff to carefully check the X-ray luggage image to determine whether it contains dangerous goods.
  • the device has a low degree of intelligence and the cost of manual inspection is high; at the same time, misjudgments and missed detections may occur, posing a great threat to people's safe travel and even causing major accidents.
  • the "automatic identification device for contraband security inspection" published in patent application CN 201710233696.6 converts the image from the RGB color space to the HSV color space and makes three copies, which are identified in three colors. After the image quality is optimized, the patterns recognized in the three colors are processed in parallel against pre-stored contraband templates of the corresponding colors and matched using SURF features; if the matching rate is above 55%, the luggage is considered to contain contraband.
  • SURF feature matching mainly matches the X-ray image against the number of SURF descriptors in the pre-stored contraband image templates. It can only identify objects of the same style and color; the detection accuracy for similar-looking items is low (a toy pistol, for example, has the same shape as a real gun), the generalization ability is poor, and the category classification is unclear. It has some capability to detect rotating, stretching and deforming objects in luggage, but it is difficult to accurately detect and distinguish luggage that is stacked in disorder or objects that overlap.
  • the existing dangerous goods detection technology adopts image processing, which mainly segments objects by color and then extracts and analyzes the features of the image objects.
  • the same object made of different materials cannot be handled well; for example, the tip of a pair of scissors appears blue while the handle is usually orange, so segmentation obtains only local features of the object, resulting in low accuracy and unclear object categories. The detection accuracy for objects that rotate, stretch or deform inside luggage is low, and because luggage is stacked in disorder, overlapping objects are difficult to detect accurately.
  • an intelligent security inspection system integrating deep learning algorithms will greatly improve the degree of intelligence of security inspection devices, improve the accuracy of dangerous goods identification, effectively reduce the pressure on security inspection staff, improve the throughput of security check channels, reduce congestion, and ensure people's traffic and travel safety to the greatest extent.
  • a new X-ray intelligent security inspection device and method based on color segmentation and a multi-plane deep neural network is proposed, solving the detection and identification of items carried in daily luggage and parcels.
  • a deep neural network detection model is established, and big data is used for feature training and learning of common objects, so that the detector can recognize and classify rotating, stretching and deforming objects.
  • a security inspection system based on a deep neural network of the present invention includes an X-ray imaging module, a detection model training and learning module, an object recognition module, and a security management module.
  • the output terminal of the X-ray imaging module is connected to the input terminal of the object recognition module, the object recognition module and the detection model training and learning module are bidirectionally connected, and the output terminal of the object recognition module is connected to the input terminal of the security management module;
  • the X-ray imaging module is used to obtain the X-ray image video sequence of the items, obtain digital pictures through analog-to-digital conversion, and transfer the digital pictures to the object recognition module;
  • the detection model training learning module is used for image training to obtain a learning model, and the learning model is transferred to the object recognition module;
  • the object recognition module is used to load the learning model from the detection model training module, classify and locate items, and transmit the type and coordinate information of detected objects to the security management module;
  • the security management module is used to route items to different conveying channels according to the object type and coordinate information output by the object recognition module.
  • the object transmission module includes an object entry channel, a dangerous goods output channel, and a non-dangerous goods output channel.
  • the safety management module includes an information management module, a warning module, a baggage control module, and a display module; among them, the information management module is used to receive object classification and location information sent by the object recognition module, and based on the received object classification and location information Determine whether the object is a dangerous item; the alarm module is used to alarm; the baggage control module is used to transport the baggage to different channels in the object transmission module, and the display module is used to display X-ray pictures and detection results.
  • a security inspection method based on a deep neural network first uses pictures to train an image learning model.
  • during detection, the image learning model is used to identify the types and coordinates of the items in the digital picture; the items are then divided among different conveying channels according to their types and coordinates.
  • Step 1 Use an X-ray emission device to image the items it penetrates, obtaining an X-ray image video sequence, which undergoes analog-to-digital conversion to yield a digital picture;
  • Step 2 Load the image learning model, and use the image learning model to identify and locate digital pictures; the image learning model is obtained through training.
  • the specific training method is as follows: first, build the object training model using the convolutional layers, pooling layers and fully connected layers of a convolutional neural network; then classify the previously obtained X-ray images by security inspection object category and annotate the category and coordinate information of each object, where the coordinate information includes the coordinates x, y of the object center point and the length w and width h of the target frame; then set the parameters of the training model, including the learning rate, batch size and learning strategy; then feed the labeled pictures into the convolutional neural network and train them with the built network to obtain the image learning model; finally, verify the image learning model: if the expected effect is achieved, save it to the model learning library; if not, adjust the parameters of the convolutional neural network and continue training until the expected effect is achieved.
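The train / verify / adjust cycle above can be sketched as a small driver loop. Everything here (the function names, the parameter dictionary, the use of a target mAP as the "expected effect") is illustrative scaffolding, not code from the patent:

```python
def train_until_target(train_epoch, evaluate, target_map, adjust, max_rounds=10):
    """Sketch of the patent's train / verify / adjust cycle: keep training
    until the validated model reaches the expected effect (here a target mAP);
    otherwise adjust the network parameters and continue. All four callbacks
    stand in for a real training pipeline."""
    params = {"learning_rate": 1e-3, "batch_size": 64}   # illustrative values
    for _ in range(max_rounds):
        model = train_epoch(params)
        if evaluate(model) >= target_map:
            return model                                  # save to the model library
        params = adjust(params)                           # e.g. lower the learning rate
    return None                                           # did not reach the target
```

In a real pipeline `train_epoch` would run the convolutional network over the labeled pictures and `evaluate` would score the held-out test set.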
  • Step 3 Divide the items into different conveying channels according to the types and coordinates of the items.
  • in step 2, the pictures used for training the learning model are taken from different angles and positions.
  • sending the marked pictures into the convolutional neural network, and training the marked pictures with the built convolutional neural network includes the following steps:
  • n_in is the dimension of the last dimension of the tensor; X_k represents the k-th input matrix; W_k represents the k-th sub-convolution-kernel matrix of the convolution kernel; s(i, j) is the value of the element at the corresponding position of the output matrix for convolution kernel W; and b is the bias term;
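A minimal numpy rendering of the per-plane convolution these definitions describe, s(i, j) = Σ_k (X_k * W_k)(i, j) + b. The function name, the no-padding/stride-1 choice and the loop structure are assumptions for illustration:

```python
import numpy as np

def conv2d_multichannel(X, W, b):
    """Sum of per-plane 2-D cross-correlations plus a bias term:
    s(i, j) = sum over k of (X_k * W_k)(i, j) + b,
    computed with no padding and a stride of 1."""
    n_in, H, Wd = X.shape           # n_in input planes of size H x Wd
    _, kh, kw = W.shape             # one kh x kw sub-kernel per input plane
    out_h, out_w = H - kh + 1, Wd - kw + 1
    s = np.full((out_h, out_w), float(b))   # start from the bias b
    for k in range(n_in):           # accumulate the contribution of each plane
        for i in range(out_h):
            for j in range(out_w):
                s[i, j] += np.sum(X[k, i:i + kh, j:j + kw] * W[k])
    return s
```

Production code would of course use a vectorized or framework-provided convolution; the loops are kept explicit here to mirror the formula term by term.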
  • the loss function adopts the focal loss function
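As a sketch, the focal loss for a single prediction follows directly from the formula FL(p_t) = -α_t(1 - p_t)^γ log(p_t); the default parameter values use the α = 0.25, γ = 2 stated later in the document:

```python
import math

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal loss for one prediction: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    p_t is the predicted probability of the true class; the modulating factor
    (1 - p_t)**gamma down-weights easy, well-classified examples."""
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 2, a well-classified example (p_t = 0.9) contributes far less to the loss than a hard one (p_t = 0.1), which is the point of using focal loss on the heavily imbalanced background/foreground boxes.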
  • the bbox with the highest confidence is selected as the detection result and output.
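Selecting the highest-confidence bbox is a one-liner; the tuple layout (x, y, w, h, confidence) follows the Bbox description given later in the document, and the function name is illustrative:

```python
def pick_detection(bboxes):
    """Select the bbox with the highest confidence as the detection result.
    Each bbox is a tuple (x, y, w, h, confidence)."""
    return max(bboxes, key=lambda box: box[4]) if bboxes else None
```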
  • the present invention has at least the following beneficial technical effects:
  • Flammable and explosive objects mainly include: kerosene, liquefied petroleum gas, solid alcohol, compressed gas, firecrackers, display shells, fireworks, etc.; gun and ammunition objects mainly include: imitation guns, steel-ball guns, stun guns, gun-shaped lighters, bullets, blank cartridges, magazines, etc.; explosives mainly include: flake TNT, plastic explosives, fuses, detonating tubes, timed firecracker devices, etc.; controlled knives mainly include: daggers, switchblades, triangular knives, lock knives, etc.; dangerous goods mainly include: scissors, axes, kitchen knives, slingshots, etc.; police equipment mainly includes: stun batons, nunchaku, handcuffs, smoke grenades, etc.;
  • Daily luggage items mainly include: bottled water, bottled wine, liquid alcohol, glass glue, etc.
  • the classification of objects is clear, which effectively assists staff in safety inspections and can add object categories according to the actual situation to improve safety.
  • Figure 1 is a module diagram of the X-ray intelligent security inspection system based on a deep neural network
  • Figure 2 is a flowchart of the object recognition and detection module and the safety management module
  • Figure 3 is a flowchart of the model training module
  • Figure 4 is a flowchart of the color segmentation algorithm
  • Figure 5 is the original X-ray image
  • Figure 6a is the R plane image; Figure 6b is the G plane image; Figure 6c is the B plane image
  • Figure 7a is the H plane image; Figure 7b is the S plane image; Figure 7c is the V plane image
  • Figure 8a is the mixture plane image; Figure 8b is the organic-matter plane image; Figure 8c is the inorganic-matter plane image; Figure 8d is the other-colors plane image.
  • a security inspection system based on a deep neural network includes an object transmission module, an X-ray imaging module, a detection model training and learning module, an object recognition module, and a security management module.
  • the working process is that the object transfer module transfers the luggage items into the detection range of the X-ray imaging machine module.
  • the X-ray imaging machine module emits X-rays, which pass through the luggage and form X-ray images, yielding the X-ray image video sequence;
  • this sequence is converted into a digital image, the security check item learning model is loaded, and the convolutional neural network is used to classify and locate the objects;
  • the recognized categories and positions are output and sent to the security management module, which determines the path into which the luggage flows.
  • the object transfer module is mainly used to transfer luggage items during the security check;
  • the X-ray imaging module mainly uses the X-rays generated by the X-ray emission tube to penetrate the luggage items in the channel to obtain the X image video sequence, and then obtain the digital picture through the analog-to-digital conversion;
  • the detection model training and learning module is used to collect and annotate object pictures, then feed them into the convolutional neural network for learning, finally obtaining the learned object detection model, and pass the trained model to the object recognition module;
  • the object recognition module is used to load the X-ray image learning model of the detection model training module, use the built-in object detection algorithm for object recognition and positioning, and transmit the detected object type and coordinate information to the security management module;
  • the alarm module and the baggage control module in the safety management module are used to determine whether an alarm is needed and to transmit the items to the dangerous goods channel according to the object type and coordinate information output by the object recognition module.
  • the security management module includes an information management module, a warning module, a baggage control module and a display module.
  • the information management module is used to receive the object classification and location information sent by the object recognition module, and to determine whether the detected object is a dangerous article according to the received object classification and location information
  • the alarm module is used to alarm
  • the baggage control module is used to convey the baggage to different channels in the object transmission module
  • the display module is used to display X-ray pictures and detected result pictures during the working process of the security inspection machine.
  • This system can be used as a new type of intelligent security inspection system, and can also update the object recognition module, model training module, and safety management module to the existing security inspection system, and intelligently upgrade the existing security inspection system.
  • the intelligent security inspection method mainly includes four parts: detection area extraction, image plane processing, detector learning and training, and intelligent detection of dangerous goods.
  • the detection area extraction process is as follows: segment the area to be detected according to the characteristics of the X-ray background, and discard a large number of white candidate detection areas directly, avoiding subsequent time-consuming recognition operations and improving the speed of item detection.
  • Image plane processing: the preprocessing mainly converts the image from the RGB model to the HSV model.
  • the HSV model includes three color planes: H, S and V.
  • the image is then further segmented by the hue H into four planes: orange, green, blue and other colors;
  • the picture input to the convolutional neural network is represented by the RGB color model, which is composed of three color planes: R, G, and B.
  • the present invention adds the H, S, and V color planes obtained in the preprocessing stage, as well as the orange, green, blue and other colors generated after color segmentation, a total of 10 color planes.
  • the intelligent security inspection system uses a large number of X-ray object pictures taken from different angles and positions; the collected X-ray images are classified and labeled with the type and coordinates of each object, split in an 8:2 ratio into a learning picture set and a test picture set, and the .xml annotation format required by the algorithm (including the object category, size and coordinate position in the X-ray image) is generated from the original pictures.
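The 8:2 split of annotated pictures into a learning set and a test set might look like the sketch below; the shuffle and the fixed seed are illustrative choices for reproducibility, not something the patent specifies:

```python
import random

def split_dataset(annotated_images, train_ratio=0.8, seed=42):
    """Shuffle the annotated X-ray images and split them in an 8:2 ratio
    into a learning (training) set and a test set."""
    images = list(annotated_images)
    random.Random(seed).shuffle(images)   # deterministic shuffle for reproducibility
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]
```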
  • the baggage items are transferred from the transmission module to the X-ray imaging module.
  • the X-ray transmitter passes through the imaging of the baggage to obtain an X image video sequence.
  • the X image video sequence undergoes analog-to-digital conversion to obtain a digital picture.
  • the X-ray learning model is loaded, and the type and coordinates of detected objects are transmitted to the safety management module through the communication interface.
  • according to the alarm strategy and confidence threshold set by the safety management module, it is determined whether the system alarms and whether the item is sent to the dangerous goods channel.
  • the confidence threshold can be set according to security inspection needs; with the threshold set to 70%, an alarm is raised if the object detection confidence exceeds it.
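A hedged sketch of that alarm decision, using per-category thresholds with the 70% example default described above; the category names and the dictionary interface are illustrative, not from the patent:

```python
def should_alarm(detections, thresholds, default=0.70):
    """Return the detections that trigger an alarm: an item triggers when its
    confidence exceeds the threshold configured for its category.
    Each detection is (category, confidence); categories absent from
    `thresholds` fall back to the default threshold."""
    return [(cat, conf) for cat, conf in detections
            if conf > thresholds.get(cat, default)]
```

A non-empty return value would correspond to raising the alarm and routing the bag to the dangerous-goods channel.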
  • the object detection module is updated on the basis of the original security inspection system and transmitted to the security inspection machine through the network communication interface.
  • the X-ray learning model outputs the type and coordinate information of objects to the existing security inspection screen through the communication interface; alarm thresholds for dangerous goods are set, and the object conveyor is paused if dangerous goods are detected.
  • the detector learning training includes the following steps:
  • Step 1 Use the convolutional layer, pooling layer and fully connected layer of the convolutional neural network to build an object training model.
  • Step 2 Classify the pictures in the X-ray picture library according to the security check object category, and manually mark the category and coordinate information of the object in the picture.
  • Step 3 Set the parameters of the training model.
  • the parameters include the learning rate, batch size, learning strategy, etc.
  • Step 4. Send the marked pictures into the convolutional neural network.
  • Step 5 Use the built convolutional neural network to train the labeled pictures to obtain a learning model.
  • Step 6 Verify the learning model. If the expected effect is achieved, save the learning model to the model learning library; if the expected effect is not achieved, adjust the parameters of the convolutional neural network and continue training until the learning model achieves the expected effect.
  • the trained model is evaluated using mAP (mean average precision).
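For context, the average precision of a single class can be computed from confidence-ranked detections as below; mAP is then the mean of these per-class values. This is a generic textbook formulation, not code from the patent:

```python
def average_precision(ranked_hits, n_positives):
    """Average precision for one class. Detections are sorted by descending
    confidence; ranked_hits[i] is True when the i-th detection matches a
    ground-truth box. AP averages the precision at each true-positive rank
    over the total number of ground-truth positives."""
    precisions, tp = [], 0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)   # precision at this recall point
    return sum(precisions) / n_positives if n_positives else 0.0
```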
  • Step 4 includes the following steps:
  • Step 4.1 Detection area extraction
  • the area to be detected is segmented according to the characteristics of the X-ray background, and a large number of candidate detection areas with blank backgrounds are directly discarded, and the colored area is the detection area, which avoids subsequent time-consuming identification operations and improves the speed of item detection.
  • Step 4.2 Image plane processing
  • the input HSV image is divided into an organic orange channel, an inorganic blue channel, a mixture green channel and an other-colors channel according to the value ranges of hue, saturation and brightness.
  • when H is between 20° and 60°, S between 0.4 and 1.0 and V between 0.4 and 1.0, the pixel belongs to the organic orange channel; when H is between 100° and 140° with S 0.4-1.0 and V 0.4-1.0, it belongs to the mixture green channel; when H is between 220° and 260° with S 0.4-1.0 and V 0.4-1.0, it belongs to the inorganic blue channel; and when H lies outside the orange, green and blue ranges while S is 0.4-1.0 and V is 0.4-1.0, it belongs to the other-colors channel.
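The four value-range rules above translate directly into a small classifier. Treating pixels whose S or V falls below 0.4 as belonging to no channel is an assumption, since the patent does not say how such pixels are handled:

```python
def classify_hsv(h, s, v):
    """Map an HSV pixel to one of the patent's four color channels.
    H is in degrees; S and V are in [0, 1]."""
    if not (0.4 <= s <= 1.0 and 0.4 <= v <= 1.0):
        return None                    # outside every stated S/V range (assumption)
    if 20 <= h <= 60:
        return "organic_orange"
    if 100 <= h <= 140:
        return "mixture_green"
    if 220 <= h <= 260:
        return "inorganic_blue"
    return "other"
```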
  • Step 4.3 Take the R, G and B channels of the picture and store them in the first three channels of the picture to be input to the convolutional neural network; then convert the RGB image model to the HSV color model, extract the H, S and V channels of the HSV model and store them in the three channels after RGB; using the different value ranges of HSV hue, saturation and brightness, segment the image into four colors (organic orange, inorganic blue, mixture green and other colors) stored in the last four channels, and input the resulting 10-channel training image into the convolutional neural network.
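A sketch of assembling the 10-channel input this step describes, using Python's `colorsys` for the RGB-to-HSV conversion. Encoding the four segmentation planes as binary masks is an assumption about the representation, which the patent leaves open:

```python
import colorsys
import numpy as np

def build_ten_plane_input(rgb):
    """Assemble the 10-plane training image: R, G, B, then H, S, V, then four
    color-segmentation masks (orange / green / blue / other). `rgb` is an
    (H, W, 3) float array in [0, 1]; the mask thresholds follow the patent's
    hue ranges with S and V both required to be at least 0.4."""
    h_, w_, _ = rgb.shape
    planes = np.zeros((h_, w_, 10), dtype=float)
    planes[..., :3] = rgb
    for i in range(h_):
        for j in range(w_):
            h, s, v = colorsys.rgb_to_hsv(*rgb[i, j])
            planes[i, j, 3:6] = (h, s, v)
            if s >= 0.4 and v >= 0.4:
                deg = h * 360.0                 # colorsys hue is in [0, 1)
                if 20 <= deg <= 60:
                    planes[i, j, 6] = 1.0       # organic (orange)
                elif 100 <= deg <= 140:
                    planes[i, j, 7] = 1.0       # mixture (green)
                elif 220 <= deg <= 260:
                    planes[i, j, 8] = 1.0       # inorganic (blue)
                else:
                    planes[i, j, 9] = 1.0       # other colors
    return planes
```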
  • the detection model training and learning module segments the image based on the color features of the X-ray image and synthesizes a multi-plane detection image integrating R, G, B, H, S, V and material information, which improves the accuracy of object detection.
  • Step 5 includes the following steps:
  • Step 5.1 Use convolution operation to perform feature extraction on the area to be detected
  • the input image size is 416*416 with 10 channels, and the convolution operation is performed using 3*3 and 1*1 convolutional layers; the feature after convolution is expressed as s(i, j) = Σ_{k=1}^{n_in} (X_k * W_k)(i, j) + b.
  • n_in is the dimension of the last dimension of the tensor; X_k represents the k-th input matrix; W_k represents the k-th sub-convolution-kernel matrix of the convolution kernel; s(i, j) is the value of the element at the corresponding position of the output matrix for convolution kernel W; and b represents the bias term.
  • the maximum pooling method is adopted, that is, the maximum value of each 2*2 pooling area is selected as the feature value, with a stride of 1.
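The 2*2 max pooling can be sketched as below; with the stride of 1 stated above, adjacent windows overlap (the function name is illustrative):

```python
import numpy as np

def max_pool_2x2(x, stride=1):
    """2x2 max pooling over a 2-D feature map: each output element is the
    maximum of a 2x2 window; with stride 1 the windows overlap."""
    h, w = x.shape
    out_h = (h - 2) // stride + 1
    out_w = (w - 2) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + 2, j * stride:j * stride + 2].max()
    return out
```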
  • Step 5.3 Use Softmax to classify each Bbox
  • Step 5.4 The loss function adopts the focal loss function
  • Step 5.5 The local maximum method is adopted, that is, the bbox (the rectangular area containing the object) with the highest confidence is selected and output as the detection result, i.e., the position information of the detected object.
  • the Bbox information contains 5 data values: x, y, w, h and confidence.
  • x, y are the coordinates of the center of the object bounding box predicted by the current grid;
  • w, h are the width and height of that bounding box;
  • confidence is the confidence of the predicted object.
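Since x, y give a center point and w, h a size, converting to corner coordinates (often needed for drawing boxes or computing overlaps) is a small helper; it is illustrative, not from the patent:

```python
def bbox_to_corners(x, y, w, h):
    """Convert a center-based bbox (x, y = center, w, h = size) to corner
    coordinates (x_min, y_min, x_max, y_max)."""
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)
```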
  • Figure 5 is the X-ray image after grayscale conversion;
  • Figure 6a is the R plane image of the X-ray image; Figure 6b is the G plane image; Figure 6c is the B plane image;
  • Figure 7a is the H plane image; Figure 7b is the S plane image; Figure 7c is the V plane image;
  • Figure 8a is the mixture plane image; Figure 8b is the organic-matter plane image; Figure 8c is the inorganic-matter plane image; Figure 8d is the other-colors plane image.
  • the security inspection system includes the following steps:
  • First, set the alarm thresholds for the different object types.
  • Step 1 Transfer the luggage items through the conveyor belt of the object transfer module.
  • Step 2 The X-ray imaging module emits X-rays and images the luggage they pass through to obtain the X-ray image video sequence.
  • Step 3 Get the digital picture of the luggage item after analog-digital conversion.
  • Step 4 Load the X-ray image learning model of the model training module.
  • Step 5 Use the learning model to classify and locate objects.
  • Step 6 Output the object type and coordinate information in the picture and send it to the safety management module.
  • Step 7 The alarm module of the security management module decides whether to give an alarm and the baggage control module determines the channel of baggage flow.
  • the technology of the present invention mainly uses deep learning for object recognition and positioning, and during feature learning it organically combines the geometric and texture features of the object with its color in the X-ray image. Learning from a large number of images taken from different angles and positions, it can detect and recognize blurred, rotated and deformed objects, and the trained model can be updated to a series of security inspection machines in real time.
  • the administrator can set the object type threshold, category and coordinates according to the degree of danger of the object.
  • the intelligent security inspection system based on deep learning mainly improves the manual identification of dangerous goods in the traditional security inspection system into a process that relies on deep learning to assist the security inspection personnel, greatly reducing labor costs and making the security inspection system more intelligent.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.


Abstract

A security inspection system and method based on a neural network. The system comprises an X-ray imaging module, a detection model training and learning module, an object recognition module and a safety management module. The output terminal of the X-ray imaging module is connected to the input terminal of the object recognition module; the object recognition module and the detection model training and learning module are bidirectionally connected; and the output terminal of the object recognition module is connected to the input terminal of the safety management module. Multi-plane detection images are segmented and synthesized on the basis of the color features of X-ray images, a deep neural network detection model is established, and big data is used for feature training and learning of common objects, so that the detector can recognize and classify rotated, stretched and deformed objects.

Description

Security inspection system and method based on a deep neural network
Technical Field
The invention belongs to the technical field of security inspection, and specifically relates to a security inspection system and method based on a deep neural network.
Background Art
With rapid economic development, high-speed rail, aircraft and other modes of transport have become indispensable to people's daily travel; however, passengers who, intentionally or unintentionally, carry dangerous goods on board have become the greatest threat to transport safety. X-ray security inspection machines play an important role in detecting dangerous goods and safeguarding the operation of transport vehicles. Traditional X-ray machines, however, require staff to carefully scrutinize the X-ray baggage images to judge whether dangerous items are present; the equipment has a low degree of intelligence, manual inspection is costly, and misjudgments or missed detections may occur, posing a great threat to people's safe travel and even causing major accidents.
The patent application "An automatic identification device for contraband security inspection" (CN 201710233696.6) converts the image from the RGB color space to the HSV color space and makes three copies, which are identified in three colors. After the image quality is optimized, the patterns recognized in the three colors are processed in parallel against pre-stored contraband templates of the corresponding colors and matched using SURF features; a matching rate above 55% is taken to mean the baggage contains contraband.
SURF feature matching mainly compares the number of SURF descriptors in the X-ray image with those in pre-stored contraband templates. It can only recognize objects of the same style and color; detection accuracy for similar-looking items is low (a toy pistol, for example, has the same shape as a real gun), generalization is poor, and category classification is unclear. It has some ability to detect rotated, stretched and deformed objects in baggage, but baggage stacked in disorder or objects that overlap are difficult to detect and distinguish accurately.
The patent application "A rapid automatic detection and alarm device and method for dangerous goods in baggage for X-ray security inspection machines" (CN 201610748757.8) first denoises the image with Gaussian filtering, then uses nonlinear enhancement for image enhancement and dangerous-goods image segmentation, and finally extracts and analyzes features of suspected dangerous-goods images. If dangerous goods are found, they are circled, the image data is transmitted to a computer through a network port and displayed on an LCD, and an audible alarm and flashing LED alert can be configured.
Existing dangerous-goods detection technology uses image processing: it segments objects mainly by color and then extracts and analyzes the features of the image objects. During segmentation, the same object made of different materials is not handled well; for example, the tip of a pair of scissors appears blue while the handle is usually orange, so segmentation captures only local features of the object, leading to low accuracy and unclear object categories. Detection accuracy for rotated, stretched and deformed objects in baggage is low, and because baggage is stacked in disorder, overlapping objects are difficult to detect accurately.
With the rapid development of artificial intelligence, an intelligent security inspection machine system incorporating deep learning algorithms will greatly increase the degree of intelligence of inspection equipment, raise the accuracy of dangerous-goods recognition, effectively relieve the pressure on inspection staff, greatly improve the throughput of inspection channels, reduce congestion, and safeguard people's transport and travel safety to the greatest extent.
Summary of the Invention
Aiming at the problems of existing X-ray baggage inspection technology (low accuracy of object detection and localization, unclear dangerous-goods categories, and a low degree of intelligence in the inspection process), an X-ray intelligent security inspection device and method based on color segmentation and a multi-plane deep neural network is proposed, solving the detection and identification of items carried in everyday baggage and parcels. Multi-plane detection images are segmented and synthesized from the color features of X-ray images, a deep neural network detection model is established, and big data is used for feature training and learning of common objects, so that the detector can recognize and classify rotated, stretched and deformed objects. In particular, baggage that is stacked in disorder or entangled and overlapping is examined and screened in detail, and its color, shape and texture features are learned in depth, so as to achieve accurate recognition and classification, raise the accuracy of dangerous-goods recognition, increase the intelligence of the X-ray inspection process and the throughput of inspection channels, reduce congestion, lighten the workload of inspection staff, and safeguard people's transport and travel safety to the greatest extent.
To achieve the above objective, the security inspection system based on a deep neural network of the present invention comprises an X-ray imaging module, a detection model training and learning module, an object recognition module and a safety management module. The output terminal of the X-ray imaging module is connected to the input terminal of the object recognition module; the object recognition module and the detection model training and learning module are bidirectionally connected; and the output terminal of the object recognition module is connected to the input terminal of the safety management module. The X-ray imaging module obtains the X-ray image video sequence of the items, converts it into digital pictures through analog-to-digital conversion, and passes the digital pictures to the object recognition module. The detection model training and learning module performs picture training to obtain a learning model and passes the learning model to the object recognition module. The object recognition module loads the learning model from the detection model training module, classifies and locates items, and transmits the category and coordinate information of detected objects to the safety management module. The safety management module routes items to different conveying channels according to the object category and coordinate information output by the object recognition module.
Further, the system also includes an object transmission module, which comprises an item entry channel, a dangerous-goods output channel and a non-dangerous-goods output channel.
Further, the safety management module includes an information management module, a warning module, a baggage control module and a display module. The information management module receives the object classification and position information sent by the object recognition module and judges from it whether an object is a dangerous item; the alarm module raises alarms; the baggage control module conveys baggage to the different channels of the object transmission module; and the display module shows the X-ray pictures and detection results.
A security inspection method based on a deep neural network first trains an image learning model from pictures. During item detection, the X-ray image video sequence of the items to be inspected is captured and converted into digital pictures through analog-to-digital conversion; the image learning model is then loaded and used to identify the categories and coordinates of the items in the digital pictures; finally, the items are routed to different conveying channels according to their categories and coordinates.
Specifically, the method includes the following steps:
Step 1: Use an X-ray emission device to image the items it penetrates, obtaining an X-ray image video sequence, which is converted into digital pictures through analog-to-digital conversion.
Step 2: Load the image learning model and use it to recognize and locate items in the digital pictures. The image learning model is obtained by training, as follows: first, build the object training model from the convolutional, pooling and fully connected layers of a convolutional neural network; then classify the previously acquired X-ray pictures by security-inspection object category and annotate each object's category and coordinate information, where the coordinates comprise the object center point (x, y) and the length w and width h of the target frame; next, set the parameters of the training model, including the learning rate, batch size and learning strategy; then feed the annotated pictures into the convolutional neural network and train them to obtain the image learning model; finally, verify the image learning model: if the expected effect is achieved, save it to the model learning library; if not, adjust the parameters of the convolutional neural network and continue training until the expected effect is achieved.
Step 3: Route the items to different conveying channels according to their categories and coordinates.
Further, in Step 2, the pictures used to train the learning model are taken from different angles and positions.
Further, feeding the annotated pictures into the convolutional neural network and training them with the built network includes the following steps:
S1: Segment the region to be detected from the digital picture according to the characteristics of the X-ray background, taking the colored regions as the detection regions; the digital picture is an RGB image.
S2: Take the R, G and B channels of the digital picture and store them in the first three channels of the picture to be fed into the convolutional neural network; then convert the RGB image model to the HSV color model, extract the H, S and V channels of the HSV model and store them in the three channels following RGB; using the values of hue H, saturation S and brightness V of the HSV model, segment the image into four color channels (organic orange, inorganic blue, mixture green and other colors) and store them in the last four channels, giving a 10-channel training picture that is input into the convolutional neural network.
S3: Extract features from the region to be detected by convolution; the feature after convolution is expressed as:
s(i, j) = Σ_{k=1}^{n_in} (X_k * W_k)(i, j) + b
where n_in is the dimension of the last dimension of the tensor, X_k represents the k-th input matrix, W_k represents the k-th sub-convolution-kernel matrix of the convolution kernel, s(i, j) is the value of the element at the corresponding position of the output matrix for convolution kernel W, and b is the bias term;
S4: Perform pooling;
S5: Classify each target frame with Softmax to obtain the classified bboxes;
S6: Use the focal loss as the loss function:
FL(p_t) = -α_t (1 - p_t)^γ log(p_t)
where γ is the focusing parameter (γ ≥ 0), (1 - p_t)^γ is called the modulating factor, α_t adjusts the ratio of positive to negative samples (when the foreground class uses α_t, the corresponding background class uses 1 - α_t), and p_t is the classification probability of each class;
S7: Select the bbox with the highest confidence as the detection result and output it.
Further, in S2: when H is between 20° and 60°, S between 0.4 and 1.0 and V between 0.4 and 1.0, the pixel belongs to the organic orange channel; when H is 100°-140° with S 0.4-1.0 and V 0.4-1.0, it belongs to the mixture green channel; when H is 220°-260° with S 0.4-1.0 and V 0.4-1.0, it belongs to the inorganic blue channel; and when H lies outside the orange, green and blue ranges while S is 0.4-1.0 and V is 0.4-1.0, it belongs to the other-colors channel.
Further, the pooling in S4 uses the max-pooling method.
Further, the focal loss of S6 uses α = 0.25 and γ = 2.
Compared with the prior art, the present invention has at least the following beneficial technical effects:
(1) Multi-plane detection images are segmented and synthesized from the color features of X-ray images, improving the accuracy of item recognition.
(2) In the object detection algorithm, pictures of different objects are classified; by building the model, tuning its parameters and learning features with the convolutional neural network, the categories of objects in baggage can be classified clearly.
(3) In X-ray images, daggers, pistols and knives all show blue features; gasoline, lighters and ammunition show green features; and alcohol and gasoline show orange features, so the color features of dangerous goods are mainly blue, green and orange. Organically combining the geometric, texture and color features of objects greatly reduces the interference of toy pistols and pendants in dangerous-goods detection and improves the accuracy of object detection and localization.
(4) The X-ray image learning process uses a large number (at least 500) of pictures taken from different angles and positions, so blurred, rotated and deformed objects can also be recognized accurately, effectively raising detection accuracy.
(5) Clear object classification: detection covers seven main categories. Flammable and explosive objects mainly include kerosene, liquefied petroleum gas, solid alcohol, compressed gas, firecrackers, display shells and fireworks; guns and ammunition mainly include imitation guns, steel-ball guns, stun guns, gun-shaped lighters, bullets, blank cartridges and magazines; explosives mainly include flake TNT, plastic explosives, fuses, detonating tubes and timed firecracker devices; controlled knives mainly include daggers, switchblades, triangular knives and lock knives; dangerous items mainly include scissors, axes, kitchen knives and slingshots; police equipment mainly includes stun batons, nunchaku, handcuffs and smoke grenades.
Everyday baggage items mainly include bottled water, bottled wine, liquid alcohol and glass glue. With clear object classification, staff are effectively assisted in safety screening, and object categories can be added according to the actual situation to improve safety.
(6) For security inspection systems under different scenario requirements, alarm thresholds for dangerous-goods categories can be set, and a suitable object detection model can be selected from the model training library and transmitted to the object detection module, giving a degree of real-time linkage.
Brief Description of the Drawings
Figure 1 is a module diagram of the X-ray intelligent security inspection system based on a deep neural network;
Figure 2 is a flowchart of the object recognition and detection module and the safety management module;
Figure 3 is a flowchart of the model training module;
Figure 4 is a flowchart of the color segmentation algorithm;
Figure 5 is the original X-ray image;
Figure 6a is the R plane image;
Figure 6b is the G plane image;
Figure 6c is the B plane image;
Figure 7a is the H plane image;
Figure 7b is the S plane image;
Figure 7c is the V plane image;
Figure 8a is the mixture plane image;
Figure 8b is the organic-matter plane image;
Figure 8c is the inorganic-matter plane image;
Figure 8d is the other-colors plane image.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the drawings and specific embodiments.
参照图1,一种基于深度神经网络的安检***包括物体传输模块、X光成像模块、检测模型训练学习模块、物体识别模块和安全管理模块。其工作流程为:物体传输模块将行李物品传入X光成像模块的检测范围内;X光成像模块发出X射线,透过行李后的X射线成像得到X图像视频序列,经过模数转换得到数字图像;加载安检物品学习模型,利用卷积神经网络进行物体分类与定位;输出图片识别的类别和位置并发送到安全管理模块,由安全管理模块决定行李所流向的通道。
其中,物体传输模块主要是传送安检时的行李物品;
X光成像模块主要是利用X光发射管产生的X射线穿透通道中的行李物品得到X图像视频序列,然后经过模数转换得到数字图片;
检测模型训练学习模块用于采集并标注物体图片,然后送入卷积神经网络学习,最终得到训练好的物体检测模型,并将训练好的模型传递至物体识别模块;
物体识别模块用于加载检测模型训练模块的X射线图片学习模型,利用内置的物体检测算法进行物体识别与定位,将检测识别出的物体种类和坐标信息传送到安全管理模块;
安全管理模块中的报警模块和行李控制模块用于根据物体识别模块输出的物体种类和坐标信息,确定是否需要报警及是否将物品传送到危险品通道。安全管理模块包括信息管理模块、警示模块、行李控制模块和显示模块。
其中,信息管理模块用于接收物体识别模块发送的物体分类与位置信息,并根据接收到的物体分类与位置信息判别被检测物体是否为危险物品,报警模块用于报警,行李控制模块用于将行李输送至物体传输模块中的不同通道中,显示模块用于显示安检机工作过程中的X射线图片和检测出来的结果图片。
本***可以作为新型智能安检***,也可将物体识别模块,模型训练模块,安全管理模块更新到现有的安检***中,对现有的安检***进行智能升级改造。
智能安检方法主要包含检测区域提取、图像平面处理、检测器学习训练和危险物品智能检测4个部分。
检测区域提取过程为:根据X射线背景特点分割出待检测区域,将大量白色的候选检测区域直接丢弃,避免了后续耗时的识别操作,提高了物品检测的速度。
图片平面处理:预处理主要是将图片由RGB颜色模型转换到HSV颜色模型,HSV模型包括H、S和V三个颜色平面;再通过色调H将图片分割出橙色、绿色、蓝色和其他颜色四个平面;
通常输入卷积神经网络的图片由RGB颜色模型表示,由R、G、B三个颜色平面组成。本发明中除了这3个颜色平面外,增加了预处理阶段获得的H、S、V颜色平面,以及经过色彩分割后生成的橙色、绿色、蓝色和其他颜色平面,共10个颜色平面,经过平面数据融合处理后,输入卷积神经网络进行目标识别。
检测器学习训练:智能安检***采用大量不同角度、位置的X射线物体图片,对采集获得的X射线图像进行分类与标注,标注物体的种类及坐标,并且按8:2的比例将其分为学习图片集与测试图片集,根据采集获得的X射线图像的原始图片生成算法所需的.xml标注格式(包括物体类别、大小及其在X射线图像中的坐标位置等)。
搭建学习模型并调整合适的参数,通过卷积神经网络对物体的几何特征、纹理特征和颜色特征进行学习,保存学习模型,然后将训练好的X射线物体学习模型通过网络通信接口传送到物体识别模块。行李物品由传输模块传送到X光成像模块,X射线发射装置透过行李后成像,得到X图像视频序列;X图像视频序列经过模数转换得到数字图片;加载X射线学习模型,将检测到的物体种类和坐标通过通信接口传到安全管理模块;根据安全管理模块设置的报警策略和置信度阈值判定***是否报警及是否传送到危险品通道。置信度阈值可根据安检需要自行设置,例如阈值设为70%,若物体检测置信度大于阈值则报警。
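置信度阈值的报警判定逻辑可概括为如下示意代码(危险品类别集合与类别名均为举例性假设):

```python
def should_alarm(detections, threshold=0.70):
    """detections: [(类别, 置信度), ...] 的检测结果列表;
    任一危险品类别的置信度大于阈值即触发报警。"""
    dangerous = {"knife", "gun", "explosive"}  # 示例危险品类别集合
    return any(cls in dangerous and conf > threshold
               for cls, conf in detections)
```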
在原有安检***的基础上更新物体检测模块,将训练好的学习模型通过网络通信接口传到安检机,采用X射线学习模型将物体的种类及坐标信息通过通信接口输出到已有的安检屏幕中;设置危险品的报警阈值,若检测到危险品则暂停物体传送带。
参照图3,检测器学习训练包括以下步骤:
步骤1、采用卷积神经网络的卷积层,池化层和全连接层搭建物体训练模型。
步骤2、将X射线图片库中的图片按安检物体类别分类,并人工标注出图片中物体的类别和坐标信息。
步骤3、设置训练模型的参数,参数包括学习率,批处理尺度,学习策略等。
步骤4、将标注好的图片送入卷积神经网络中。
步骤5、采用搭建好的卷积神经网络对标注好的图片进行训练,得到学习模型。
步骤6、学习模型验证:若达到预期效果,则将学习模型保存到模型学习库;若未达到预期效果,则调整卷积神经网络的参数,继续训练,直到学习模型达到预期效果。以物体检测的mAP(mean Average Precision)作为评价指标,若mAP达到80%,则认为达到预期效果。
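步骤6的"训练—验证—调参"循环可以示意如下(train_one_round与evaluate_map为假设的占位函数,调参策略以减半学习率为例,非本发明限定的实现):

```python
def train_until_target(train_one_round, evaluate_map,
                       target_map=0.80, max_rounds=10, lr=1e-3):
    """反复训练并验证:mAP达到目标(如80%)即视为达到预期效果;
    否则调整参数(此处示意为减半学习率)后继续训练。
    返回(是否达标, 最终使用的学习率)。"""
    for _ in range(max_rounds):
        model = train_one_round(lr)          # 用当前参数训练一轮
        if evaluate_map(model) >= target_map:
            return True, lr                  # 达标,可保存到模型学习库
        lr *= 0.5                            # 未达标:调整参数继续训练
    return False, lr
```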
步骤4包括以下步骤:
步骤4.1检测区域提取
根据X射线背景特点分割出待检测区域,将大量空白背景的候选检测区域直接丢弃,有颜色的区域作为检测区域,避免了后续耗时的识别操作,提高了物品检测的速度。
步骤4.2图像平面处理
将输入的HSV图像根据色调、纯度、明亮度的取值范围分为有机物橙色通道、无机物蓝色通道、混合物绿色通道及其他颜色通道。
当H的值在20°~60°,S的值在0.4~1.0,V的值在0.4~1.0时,为有机物橙色通道;当H的值100°~140°,S的值0.4~1.0,V的值0.4~1.0时,为混合物绿色通道;当H的值220°~260°,S的值0.4~1.0,V的值0.4~1.0时,为无机物蓝色通道;当H的值不在所述橙色通道、绿色通道和蓝色通道范围内时,S的值0.4~1.0,V的值0.4~1.0,为其他颜色通道。
步骤4.3将图片的r,g,b通道取出来存放到将要输入卷积神经网络的图片前三位通道,再将rgb图像模型转换为hsv颜色模型,提取出hsv模型的h,s,v三个通道存放到输入图片的rgb后面三个通道,通过hsv色调,纯度,明亮度的不同取值范围,将图片分为有机物橙色,无机物蓝色,混合物绿色,及其他颜色分割成4个颜色通道,存放至输入图片的后四位通道,将10个通道的训练图片输入至卷积神经网络中。
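上述10通道输入的拼装过程可用如下代码示意(依赖numpy与标准库colorsys,属示意性实现,通道排列顺序为笔者假设):

```python
import colorsys
import numpy as np

def build_10_channels(rgb):
    """rgb: HxWx3浮点数组,取值0~1。返回HxWx10数组:
    通道0~2为R/G/B,通道3~5为H/S/V(H归一化到0~1),
    通道6~9依次为橙(有机物)/绿(混合物)/蓝(无机物)/其他四个颜色平面掩码。"""
    h, w, _ = rgb.shape
    hsv = np.zeros_like(rgb)
    for i in range(h):
        for j in range(w):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    hue = hsv[..., 0] * 360.0
    s, v = hsv[..., 1], hsv[..., 2]
    valid = (s >= 0.4) & (s <= 1.0) & (v >= 0.4) & (v <= 1.0)
    orange = valid & (hue >= 20) & (hue <= 60)     # 有机物橙色平面
    green = valid & (hue >= 100) & (hue <= 140)    # 混合物绿色平面
    blue = valid & (hue >= 220) & (hue <= 260)     # 无机物蓝色平面
    other = valid & ~(orange | green | blue)       # 其他颜色平面
    return np.dstack([rgb, hsv, orange, green, blue, other]).astype(np.float32)
```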
检测模型训练学习模块基于X射线图像的颜色特征分割并合成多平面检测图像,提高了检测物品识别的准确率。
步骤5包括以下步骤:
步骤5.1.利用卷积运算对待检测区域进行特征提取
输入图片大小为416*416,通道数为10,使用3*3和1*1的卷积层进行卷积运算。卷积后的特征表示为:
s(i,j) = Σ_{k=1..n_in} (X_k ∗ W_k)(i,j) + b
其中,n_in是张量的最后一维的维数(即输入通道数),X_k代表第k个输入矩阵,W_k代表卷积核的第k个子卷积核矩阵,s(i,j)即卷积核W对应的输出矩阵在(i,j)位置的元素值,b表示偏置量。
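上式的多通道卷积运算可用numpy直接验证(示意实现:无填充、步长为1,按深度学习惯例采用互相关形式):

```python
import numpy as np

def conv2d_multichannel(X, W, b):
    """X: (n_in, H, W)输入张量;W: (n_in, k, k)卷积核;b: 偏置标量。
    s(i,j) = sum_k (X_k * W_k)(i,j) + b,即各输入通道二维卷积之和加偏置。"""
    n_in, H, Wd = X.shape
    _, k, _ = W.shape
    out = np.zeros((H - k + 1, Wd - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # 对所有通道的k*k窗口与卷积核逐元素相乘求和,再加偏置
            out[i, j] = np.sum(X[:, i:i + k, j:j + k] * W) + b
    return out
```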
步骤5.2.池化:
采用最大池化的方法,即对2*2的池化区域选取最大值作为特征值,核步长为1。
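文中所述2*2池化区域、步长为1的最大池化可示意为(基于numpy的示意实现):

```python
import numpy as np

def max_pool_2x2_stride1(x):
    """对二维特征图做2*2窗口、步长1的最大池化,
    每个窗口取最大值作为特征值,输出尺寸为(H-1)×(W-1)。"""
    H, W = x.shape
    out = np.zeros((H - 1, W - 1))
    for i in range(H - 1):
        for j in range(W - 1):
            out[i, j] = x[i:i + 2, j:j + 2].max()
    return out
```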
步骤5.3.使用Softmax对每个Bbox进行分类;
步骤5.4.损失函数采用focal loss损失函数:
FL(p_t) = -α_t·(1-p_t)^γ·log(p_t)
γ称作focusing parameter,γ≥0;(1-p_t)^γ称为调制系数;α_t用于调节正样本(positive)和负样本(negative)的比例,前景类别使用α时,对应的背景类别使用1-α;p_t为不同类别的分类概率。当α=0.25,γ=2时效果最好。
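focal loss的数值计算可示意如下(α=0.25、γ=2为文中给出的取值):

```python
import math

def focal_loss(pt, alpha_t=0.25, gamma=2.0):
    """FL(pt) = -alpha_t * (1 - pt)**gamma * log(pt)。
    (1-pt)**gamma为调制系数:pt接近1的易分样本损失被大幅降权,
    使训练集中在难分样本上。"""
    return -alpha_t * (1.0 - pt) ** gamma * math.log(pt)
```

例如pt=0.9的易分样本,其损失远小于pt=0.5的难分样本,体现调制系数的降权作用。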
步骤5.5.采用局部最大值的方法,即选取置信度最高的bbox(包含物体的矩形区域)作为检测结果输出,即得到被检测物体的位置信息。其中,bbox信息包含5个数据值,分别是x、y、w、h和confidence:x、y是指当前格子预测得到的物体bounding box的中心位置坐标;w、h是指当前格子预测得到的物体bounding box的宽度和高度;confidence是指预测物体的置信度。
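步骤5.5选取最高置信度bbox的过程可示意为(bbox以(x, y, w, h, confidence)五元组表示):

```python
def select_best_bbox(bboxes):
    """bboxes: [(x, y, w, h, confidence), ...];
    返回置信度最高的bbox作为检测结果,空列表返回None。"""
    if not bboxes:
        return None
    return max(bboxes, key=lambda box: box[4])
```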
为便于理解,给出了图片示意图,其中图5为灰度化后的X射线图;图6a为X射线图的R平面图;图6b为G平面图;图6c为X射线图的B平面图;图7a为X射线图的H平面图;图7b为X射线图的S平面图;图7c为X射线图的V平面图;图8a为X射线图的混合物平面图;图8b为X射线图的有机物平面图;图8c为X射线图的无机物平面图;图8d为X射线图的其他平面图。
参照图2,安检***工作包括以下步骤:
第一步:设置不同物体的报警阈值。
第二步:通过物体传输模块的传送带传入行李物品。
第三步:X光成像模块发出X射线,透过行李的X射线成像得到X图像视频序列。
第四步:经过模数转换得到行李物品的数字图片。
第五步:加载模型训练模块的X射线图像学习模型。
第六步:利用学习模型对物体进行分类与定位。
第七步:输出图片中的物体种类及坐标信息并发送到安全管理模块。
第八步:由安全管理模块的报警模块决定是否报警,由行李控制模块决定行李流向的通道。
本发明的技术主要是采用深度学习进行物体识别和定位,在物体特征学习过程中将物体的几何特征、纹理特征与X射线图像中的物体颜色有机结合。采用大量不同角度和不同位置的图片数据进行学习,不仅可以很好地检测识别模糊、旋转、变形的图像,而且可以实时将训练好的模型更新到一系列安检机中。在管理模块中,管理员可以根据物体的危险程度设定物体种类阈值、类别及坐标。基于深度学习的智能安检***主要将传统安检***中人工判别危险品的流程,改进为依靠深度学习来辅助安检人员工作的流程,大大地减少人工成本,使得安检***更加智能化。
在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
以上内容仅为说明本发明的技术思想,不能以此限定本发明的保护范围,凡是按照本发明提出的技术思想,在技术方案基础上所做的任何改动,均落入本发明权利要求书的保护范围之内。

Claims (10)

  1. 一种基于深度神经网络的安检***,其特征在于,包括X光成像模块、检测模型训练学习模块、物体识别模块和安全管理模块,所述X光成像模块的输出端和物体识别模块的输入端连接,物体识别模块和检测模型训练学习模块双向连接,物体识别模块的输出端和安全管理模块的输入端连接;
    所述X光成像模块用于得到物品的X图像视频序列,然后经过模数转换得到数字图片,并将得到的数字图片传递至物体识别模块;
    所述检测模型训练学习模块用于进行图片训练,得到学习模型,并将学习模型传递至物体识别模块;
    所述物体识别模块用于加载检测模型训练模块中的学习模型,并对物品进行分类与定位,将检测识别出的物体的种类和坐标信息传送到安全管理模块;
    所述安全管理模块用于根据物体识别模块输出的物体种类和坐标信息将物品输送至不同的物品运送通道中。
  2. 根据权利要求1所述的一种基于深度神经网络的安检***,其特征在于,还包括物体传输模块,物体传输模块包括物品进入通道、危险品输出通道和非危险品输出通道。
  3. 根据权利要求1所述的一种基于深度神经网络的安检***,其特征在于,安全管理模块包括信息管理模块、警示模块、行李控制模块和显示模块;其中,信息管理模块用于接收物体识别模块发送的物体分类与位置信息,并根据接收到的物体分类与位置信息判别物体是否为危险物品;报警模块用于报警;行李控制模块用于将行李输送至物体传输模块中的不同通道中,显示模块用于显示X射线图片和检测结果。
  4. 一种基于深度神经网络的安检方法,其特征在于,首先利用图片训练出图像学习模型,在物品检测时,采集待检测物品的X图像视频序列,X图像视频序列经过模数转换得到数字图片;然后加载图像学习模型,利用图像学习模型识别数字图片中物品的种类和坐标;然后根据物品的种类和坐标将物品按照种类划分至不同的输送通道。
  5. 根据权利要求4所述的一种基于深度神经网络的安检方法,其特征在于,包括以下步骤:
    步骤1、利用X射线发射装置透过物品后的成像,得到X图像视频序列,X图像视频序列经过模数转换得到数字图片;
    步骤2、加载图像学习模型,通过图像学习模型来对数字图片进行物品的识别与定位;所述图像学习模型通过训练得到,具体训练方法为:首先采用卷积神经网络的卷积层,池化层和全连接层搭建物体训练模型;然后将前期获得的X射线图片按安检物体类别分类,并标注出物体的类别和坐标信息,其中坐标信息包括物体中心点的坐标x,y和目标框的长w和宽h;然后设置训练模型的参数,包括学习率,批处理尺度和学习策略;然后将标注好的图片送入卷积神经网络中,用搭建好的卷积神经网络对标注过的图片进行训练,得到图像学习模型;然后验证图像学习模型,若达到预期效果,则将图像学习模型保存到模型学习库;若未达到预期效果,则调整卷积神经网络的参数,继续训练,直到图像学习模型达到预期效果;
    步骤3、根据物品的种类和坐标将物品按照种类划分至不同的输送通道。
  6. 根据权利要求5所述的一种基于深度神经网络的安检方法,其特征在于,步骤2中,用于训练学习模型的图片采用不同角度、位置的图片。
  7. 根据权利要求5所述的一种基于深度神经网络的安检方法,其特征在于,步骤2中,将标注好的图片送入卷积神经网络中,用搭建好的卷积神经网络对标注过的图片进行训练,包括以下步骤:
    S1、根据X射线背景特点将数字图片分割出待检测区域,将有颜色的区域作为检测区域,所述数字图片为rgb图像;
    S2、将数字图片的r通道、g通道和b通道取出来,存放到将要输入卷积神经网络的图片的前三位通道;再将rgb图像模型转换为hsv颜色模型,提取出hsv模型的h通道、s通道和v通道,存放到输入图片rgb后面的三个通道;通过hsv颜色模型的色调H、纯度S以及明亮度V的值,将图像分割为有机物橙色、无机物蓝色、混合物绿色及其他颜色4个颜色通道,存放至输入图片的后四位通道;将10个通道的训练图片输入至卷积神经网络中;
    S3、利用卷积运算对待检测区域进行特征提取,卷积后的特征表示为:
    s(i,j) = Σ_{k=1..n_in} (X_k ∗ W_k)(i,j) + b
    其中,n_in是张量的最后一维的维数,X_k代表第k个输入矩阵,W_k代表卷积核的第k个子卷积核矩阵,s(i,j)即卷积核W对应的输出矩阵在(i,j)位置的元素值,b是偏置量;
    S4、进行池化;
    S5、使用Softmax对每个目标框进行分类,得到分类之后的bbox;
    S6、损失函数采用focal loss损失函数:
    FL(p_t) = -α_t·(1-p_t)^γ·log(p_t)
    γ为focusing parameter,γ≥0;(1-p_t)^γ称为调制系数;α_t用于调节正样本和负样本的比例,前景类别使用α时,对应的背景类别使用1-α;p_t是不同类别的分类概率;
    S7、选取置信度最高的bbox作为检测结果输出。
  8. 根据权利要求7所述的一种基于深度神经网络的安检方法,其特征在于,S2中,当H的值在20°~60°,S的值在0.4~1.0,V的值在0.4~1.0时,为有机物橙色通道;当H的值100°~140°,S的值0.4~1.0,V的值0.4~1.0时,为混合物绿色通道;当H的值220°~260°,S的值0.4~1.0,V的值0.4~1.0时,为无机物蓝色通道;当H的值不在所述橙色通道、绿色通道和蓝色通道范围内时,S的值0.4~1.0,V的值0.4~1.0,为其他颜色通道。
  9. 根据权利要求7所述的一种基于深度神经网络的安检方法,其特征在于,S4中,采用最大池化的方法进行池化。
  10. 根据权利要求7所述的一种基于深度神经网络的安检方法,其特征在于,S6中,α=0.25,γ=2。
PCT/CN2020/078425 2019-03-21 2020-03-09 一种基于深度神经网络的安检***及方法 WO2020187077A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910218654.4A CN109946746A (zh) 2019-03-21 2019-03-21 一种基于深度神经网络的安检***及方法
CN201910218654.4 2019-03-21

Publications (1)

Publication Number Publication Date
WO2020187077A1 true WO2020187077A1 (zh) 2020-09-24

Family

ID=67010557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078425 WO2020187077A1 (zh) 2019-03-21 2020-03-09 一种基于深度神经网络的安检***及方法

Country Status (2)

Country Link
CN (1) CN109946746A (zh)
WO (1) WO2020187077A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364903A (zh) * 2020-10-30 2021-02-12 盛视科技股份有限公司 基于x光机的物品分析及多维图像关联方法与***
CN112444889A (zh) * 2020-11-13 2021-03-05 北京航星机器制造有限公司 一种快速安检行包远程集中判读***及方法
CN112560944A (zh) * 2020-12-14 2021-03-26 广东电网有限责任公司珠海供电局 一种基于图像识别的充电桩起火检测方法
CN112991256A (zh) * 2020-12-11 2021-06-18 中国石油天然气股份有限公司 一种基于机器视觉和深度学习的油管物资统计方法
CN113435543A (zh) * 2021-07-22 2021-09-24 北京博睿视科技有限责任公司 一种基于传送带标识的可见光和x光图像匹配方法及装置
CN113537213A (zh) * 2021-07-14 2021-10-22 安徽炬视科技有限公司 一种基于可变卷积核的烟雾明火检测算法
CN113762023A (zh) * 2021-02-18 2021-12-07 北京京东振世信息技术有限公司 基于物品关联关系的对象识别的方法和装置
CN113759433A (zh) * 2021-08-12 2021-12-07 浙江啄云智能科技有限公司 一种违禁品筛选方法、装置和安检设备
CN115731213A (zh) * 2022-11-29 2023-03-03 北京声迅电子股份有限公司 一种基于x光图像的利器检测方法
CN116610078A (zh) * 2023-05-19 2023-08-18 广东海力储存设备股份有限公司 立体仓自动化储存控制方法、***、电子设备及存储介质
CN117197787A (zh) * 2023-08-09 2023-12-08 海南大学 基于改进YOLOv5的智能安检方法、装置、设备及介质
CN117409199A (zh) * 2023-10-19 2024-01-16 中南大学 一种基于云端大数据技术的成长型智慧安检***及方法

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109946746A (zh) * 2019-03-21 2019-06-28 长安大学 一种基于深度神经网络的安检***及方法
CN110286415B (zh) * 2019-07-12 2021-03-16 广东工业大学 安检违禁品检测方法、装置、设备及计算机可读存储介质
CN110794466A (zh) * 2019-07-16 2020-02-14 中云智慧(北京)科技有限公司 一种x光机图片采集辅助装置和处理方法
CN110488368B (zh) * 2019-07-26 2021-07-30 熵基科技股份有限公司 一种基于双能x光安检机的违禁品识别方法及装置
CN110533045B (zh) * 2019-07-31 2023-01-17 中国民航大学 一种结合注意力机制的行李x光违禁品图像语义分割方法
CN110751329B (zh) * 2019-10-17 2022-12-13 中国民用航空总局第二研究所 一种机场安检通道的控制方法、装置、电子设备及存储介质
CN111046908A (zh) * 2019-11-05 2020-04-21 杭州电子科技大学 基于卷积神经网络的乳化***包装故障实时监测模型
CN111062252B (zh) * 2019-11-15 2023-11-10 浙江大华技术股份有限公司 一种实时危险物品语义分割方法、装置及存储装置
CN111091150A (zh) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 铁路货车交叉杆盖板断裂检测方法
CN111126238B (zh) * 2019-12-19 2023-06-20 华南理工大学 一种基于卷积神经网络的x光安检***及方法
CN111167733A (zh) * 2020-01-14 2020-05-19 东莞理工学院 一种视觉缺陷自动检测***
CN111429410B (zh) * 2020-03-13 2023-09-01 杭州电子科技大学 一种基于深度学习的物体x射线图像材质判别***及方法
US11989890B2 (en) * 2020-03-31 2024-05-21 Hcl Technologies Limited Method and system for generating and labelling reference images
CN112144150A (zh) * 2020-10-16 2020-12-29 北京经纬纺机新技术有限公司 一种应用深度学习图像处理的分布式异性纤维分检***
CN112764120A (zh) * 2020-12-29 2021-05-07 深圳市创艺龙电子科技有限公司 金属探测成像***和具有其的扫描设备
CN113159110A (zh) * 2021-03-05 2021-07-23 安徽启新明智科技有限公司 一种基于x射线液体智能检测方法
CN113792826B (zh) * 2021-11-17 2022-02-18 湖南苏科智能科技有限公司 基于神经网络和多源数据的双视角关联安检方法及***
CN114332543B (zh) * 2022-01-10 2023-02-14 成都智元汇信息技术股份有限公司 一种多模板的安检图像识别方法、设备及介质

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003065077A2 (en) * 2002-01-30 2003-08-07 Rutgers, The State University Combinatorial contraband detection using energy dispersive x-ray diffraction
CN106599967A (zh) * 2016-12-08 2017-04-26 同方威视技术股份有限公司 安检物品定位的标签和安检物品定位的方法
CN106872498A (zh) * 2017-04-11 2017-06-20 西安培华学院 一种违禁品安检自动识别装置
CN107145898A (zh) * 2017-04-14 2017-09-08 北京航星机器制造有限公司 一种基于神经网络的射线图像分类方法
CN107607562A (zh) * 2017-09-11 2018-01-19 北京匠数科技有限公司 一种违禁物品识别设备及方法、x光行李安检***
CN107871122A (zh) * 2017-11-14 2018-04-03 深圳码隆科技有限公司 安检检测方法、装置、***及电子设备
CN108198227A (zh) * 2018-03-16 2018-06-22 济南飞象信息科技有限公司 基于x光安检机图像的违禁品智能识别方法
CN108303747A (zh) * 2017-01-12 2018-07-20 清华大学 检查设备和检测***的方法
CN109446888A (zh) * 2018-09-10 2019-03-08 唯思科技(北京)有限公司 一种基于卷积神经网络的细长类物品检测方法
CN109946746A (zh) * 2019-03-21 2019-06-28 长安大学 一种基于深度神经网络的安检***及方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105116463A (zh) * 2015-09-22 2015-12-02 同方威视技术股份有限公司 安检通道以及安检装置
CN205656321U (zh) * 2016-05-30 2016-10-19 公安部第一研究所 一种用于识别包裹中危险液体的x射线检测装置
CN206192923U (zh) * 2016-10-27 2017-05-24 中云智慧(北京)科技有限公司 一种基于云计算的x射线违禁品检测***
CN108226196A (zh) * 2018-02-22 2018-06-29 青岛智慧云谷智能科技有限公司 一种智能x光机探测***及探测方法
CN108802840B (zh) * 2018-05-31 2020-01-24 北京迈格斯智能科技有限公司 基于人工智能深度学习的自动识别物体的方法及其装置
CN109086679A (zh) * 2018-07-10 2018-12-25 西安恒帆电子科技有限公司 一种毫米波雷达安检仪异物检测方法
CN109191389A (zh) * 2018-07-31 2019-01-11 浙江杭钢健康产业投资管理有限公司 一种x光图像自适应局部增强方法
CN109187598A (zh) * 2018-10-09 2019-01-11 青海奥越电子科技有限公司 基于数字图像处理的违禁物品检测***及方法
CN109447071A (zh) * 2018-11-01 2019-03-08 博微太赫兹信息科技有限公司 一种基于fpga和深度学习的毫米波成像危险物品检测方法
CN109472309A (zh) * 2018-11-12 2019-03-15 南京烽火星空通信发展有限公司 一种x光安检机图片物体检测方法



Also Published As

Publication number Publication date
CN109946746A (zh) 2019-06-28

Similar Documents

Publication Publication Date Title
WO2020187077A1 (zh) 一种基于深度神经网络的安检***及方法
CN106127204B (zh) 一种全卷积神经网络的多方向水表读数区域检测算法
CN108776779B (zh) 基于卷积循环网络的sar序列图像目标识别方法
CN108198227A (zh) 基于x光安检机图像的违禁品智能识别方法
CN111145177A (zh) 图像样本生成方法、特定场景目标检测方法及其***
CN112712093B (zh) 安检图像识别方法、装置、电子设备及存储介质
CN110533051B (zh) 基于卷积神经网络的x光安检图像中违禁品自动检测方法
CN105809121A (zh) 多特征协同的交通标志检测与识别方法
CN108647700A (zh) 基于深度学习的多任务车辆部件识别模型、方法和***
CN106297142A (zh) 一种无人机山火勘探控制方法及***
CN103218831A (zh) 一种基于轮廓约束的视频运动目标分类识别方法
CN107563433A (zh) 一种基于卷积神经网络的红外小目标检测方法
CN111368690A (zh) 基于深度学习的海浪影响下视频图像船只检测方法及***
CN110488368A (zh) 一种基于双能x光安检机的违禁品识别方法及装置
WO2023087653A1 (zh) 基于神经网络和多源数据的双视角关联安检方法及***
CN113963222A (zh) 一种基于多策略组合的高分辨率遥感影像变化检测方法
CN109949229A (zh) 一种多平台多视角下的目标协同检测方法
CN109101926A (zh) 基于卷积神经网络的空中目标检测方法
CN108364037A (zh) 识别手写汉字的方法、***及设备
CN112258490A (zh) 基于光学和红外图像融合的低发射率涂层智能探损方法
CN109409409A (zh) 基于hog+cnn的交通标志的实时检测方法
Saha et al. Unsupervised multiple-change detection in VHR optical images using deep features
Chen et al. Research on the process of small sample non-ferrous metal recognition and separation based on deep learning
CN110992324B (zh) 一种基于x射线图像的智能危险品检测方法及***
CN106169086B (zh) 导航数据辅助下的高分辨率光学影像损毁道路提取方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20772845

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20772845

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/05/2022)
