CN115081469A - Article category identification method, device and equipment based on X-ray security inspection equipment - Google Patents

Article category identification method, device and equipment based on X-ray security inspection equipment

Info

Publication number
CN115081469A
Authority
CN
China
Prior art keywords
energy
image
ray image
target sample
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110277577.7A
Other languages
Chinese (zh)
Inventor
吴昌建
张迪
陈鹏
石仕伟
张玉全
曹海潮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110277577.7A
Publication of CN115081469A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/04 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/06 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
    • G01N23/083 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption the radiation being X-rays
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00 Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20 Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22 Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Toxicology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The embodiment of the invention provides an article category identification method, device and equipment based on X-ray security inspection equipment. In a first aspect, a pre-trained recognition model is used to identify the categories of the articles in an X-ray image to be identified, so that the categories of articles scanned by the X-ray security inspection equipment are identified automatically. In a second aspect, the sample X-ray image is transformed to obtain a color image, the color image is labeled to obtain labeling information, and the labeling information is used as supervision when training the recognition model. Because the color image has a better visual effect than the sample X-ray image, the labeling information obtained by labeling the color image is more accurate than labeling the sample X-ray image directly, and the recognition model trained with this labeling information has higher accuracy.

Description

Article category identification method, device and equipment based on X-ray security inspection equipment
Technical Field
The invention relates to the technical field of computer vision, in particular to an article type identification method, device and equipment based on X-ray security check equipment.
Background
In some scenarios, it is necessary to identify the articles that a person is carrying. For example, in a security check scene, X-ray security inspection devices are usually deployed; the categories of the articles carried by people are identified with these devices, and it can then be judged from the article categories whether a person is carrying articles that are not allowed.
Some related identification schemes generally work as follows: the X-ray security inspection equipment emits X-rays to irradiate the article to be identified, and because different articles absorb X-rays to different degrees, the articles appear in different colors on the display screen of the equipment. For example, food and plastic may appear orange, books and ceramics may appear green, and metals may appear blue. A security inspector then determines the category of each article empirically from the colors displayed on the screen.
However, in the above schemes a security inspector has to identify article categories manually and by experience, which is labor-intensive, so a method capable of automatically identifying article categories needs to be provided.
Disclosure of Invention
The embodiment of the invention aims to provide an article type identification method, device and equipment based on X-ray security check equipment, so as to realize automatic identification of article types. The specific technical scheme is as follows:
in order to achieve the above object, an embodiment of the present invention provides an article class identification method based on an X-ray security inspection apparatus, including:
acquiring an X-ray image to be identified;
inputting the X-ray image to be recognized into a pre-trained recognition model to obtain the class information of the articles contained in the X-ray image to be recognized and output by the recognition model;
the identification model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by transforming the sample X-ray image as supervision.
Optionally, the X-ray image to be identified includes: a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified that is obtained by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified;
the sample X-ray image includes: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by dual-energy resolving the sample high-energy X-ray image and the sample low-energy X-ray image.
Optionally, the color image is obtained by the following steps:
acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image;
carrying out dual-energy resolution on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a target sample atomic number image;
carrying out gray level fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray level fusion image;
colorizing the gray level fusion image according to the target sample atomic number image to obtain a color image.
Optionally, the performing gray scale fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray scale fusion image includes:
determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image;
and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
Optionally, the colorizing the grayscale fusion image according to the target sample atomic number image to obtain a color image includes:
determining each item component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image;
respectively determining the color corresponding to each article component;
and determining a region of each article component mapped to the gray-scale fusion image, and coloring the region by using the color corresponding to the article component to obtain a color image.
Optionally, the coloring the region by using the color corresponding to the component of the article to obtain a color image includes:
determining the shade of the color corresponding to the article component according to the gray value of the region;
and coloring the region according to the shade of the color to obtain a color image.
Optionally, the acquiring the X-ray image to be identified includes:
irradiating the article by using high-energy X rays to obtain a high-energy X ray image to be identified;
irradiating the article by using low-energy X rays to obtain a low-energy X ray image to be identified;
and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
In order to achieve the above object, an embodiment of the present invention further provides an article type identification apparatus based on an X-ray security inspection device, including:
the first acquisition module is used for acquiring an X-ray image to be identified;
the identification module is used for inputting the X-ray image to be identified into a pre-trained identification model to obtain the category information of the articles contained in the X-ray image to be identified and output by the identification model; the identification model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by transforming the sample X-ray image as supervision.
Optionally, the X-ray image to be identified includes: a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified that is obtained by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified;
the sample X-ray image includes: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by dual-energy resolving the sample high-energy X-ray image and the sample low-energy X-ray image.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image;
the dual-energy resolution module is used for performing dual-energy resolution on the high-energy X-ray image of the target sample and the low-energy X-ray image of the target sample to obtain an atomic number image of the target sample;
the fusion module is used for carrying out gray fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray fusion image;
and the colorizing module is used for colorizing the gray level fusion image according to the target sample atomic number image to obtain a color image.
Optionally, the fusion module is specifically configured to:
determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image;
and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
Optionally, the colorization module includes:
the first determining submodule is used for determining each item component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image;
the second determining sub-module is used for respectively determining the color corresponding to each article component;
and the coloring sub-module is used for determining the region of each article component mapped to the gray-scale fusion image, and coloring the region by using the color corresponding to the article component to obtain a color image.
Optionally, the coloring sub-module is specifically configured to:
determining the shade of the color corresponding to the article component according to the gray value of the region;
and coloring the region according to the shade of the color to obtain a color image.
Optionally, the first acquisition module is specifically configured to:
irradiating the article by using high-energy X rays to obtain a high-energy X ray image to be identified;
irradiating the article by using low-energy X rays to obtain a low-energy X ray image to be identified;
and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, including a processor and a memory;
a memory for storing a computer program;
and the processor is used for realizing any article type identification method based on the X-ray security check equipment when executing the program stored in the memory.
By applying the embodiment of the invention, an X-ray image to be identified is acquired; the X-ray image to be identified is input into a pre-trained recognition model, and the category information, output by the recognition model, of the articles contained in the X-ray image to be identified is obtained; the recognition model is obtained by training a neural network with a preset structure, taking a sample X-ray image as training data and taking the labeling information of a color image obtained by transforming the sample X-ray image as supervision. In a first aspect, the pre-trained recognition model identifies the article categories in the X-ray image to be identified, so that article categories are identified automatically. In a second aspect, the sample X-ray image is transformed to obtain a color image, the color image is labeled to obtain labeling information, and the labeling information is used as supervision during training. Because the color image has a better visual effect than the sample X-ray image, the labeling information obtained by labeling the color image is more accurate than labeling the sample X-ray image directly, and the recognition model trained with this labeling information has higher accuracy.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of an article category identification method based on an X-ray security inspection apparatus according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a neural network according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a training process of a neural network according to an embodiment of the present invention;
fig. 4 is a second flowchart of the method for identifying an article type based on an X-ray security inspection apparatus according to the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an article category identification device based on an X-ray security inspection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
Embodiments of the present invention provide an article category identification method, apparatus and device based on X-ray security inspection equipment. The method and apparatus may be applied to various electronic devices, such as the X-ray security inspection equipment itself or other devices connected to it; no specific limitation is made here. The article category identification method based on the X-ray security inspection equipment is described in detail first.
Fig. 1 is a first flowchart of an article category identification method based on an X-ray security inspection apparatus according to an embodiment of the present invention, including:
s101: and acquiring an X-ray image to be identified.
For example, the embodiment of the invention can be applied to security inspection scenes, such as various security inspection scenes of train stations, airports, convention and exhibition centers, and the like. The X-ray image to be identified may be an image acquired by an X-ray security device. These X-ray security apparatuses can be classified into security apparatuses that emit high-energy X-rays and security apparatuses that emit low-energy X-rays. Wherein, the energy range of the X-ray can be: 0.1keV (kilo electron volts) to 200keV, the high energy level and the low energy level of the X-ray are relative, and the energy of the high energy level X-ray is higher than that of the low energy level X-ray, for example, the energy range of the high energy level X-ray may be 100keV to 200keV, the energy range of the low energy level X-ray may be 0.1keV to 10keV, and the specific energy range of the high energy level X-ray and the energy range of the low energy level X-ray are not limited. The X-ray image to be identified obtained in S101 may be a high-energy X-ray image obtained by irradiating the article with high-energy X-rays, or may also be a low-energy X-ray image obtained by irradiating the article with low-energy X-rays, which is not limited specifically.
In one embodiment, the X-ray image to be identified may be a multi-channel image, which may include: the method comprises the steps of obtaining a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified by carrying out double-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
For example, the article may be irradiated with high-energy X-rays to obtain a high-energy X-ray image to be identified; irradiating the article by using low-energy X rays to obtain a low-energy X ray image to be identified; and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
Among these, the high-energy X-ray image can be understood as: an image expressing the degree of absorption of high-level X-rays by the irradiated article, or may also be understood as: an image expressing the attenuation coefficient of the irradiated article under high-level X-rays. The low energy X-ray image can be understood as: an image expressing the degree of absorption of low-level X-rays by the irradiated article, or may also be understood as: an image expressing the attenuation coefficient of the irradiated article under low-level X-rays.
For example, a dual-energy resolution table may be pre-established; the table records the correspondence between the pixel values of pixels in the high-energy X-ray image, the pixel values of pixels in the low-energy X-ray image, and atomic numbers. The high-energy X-ray image to be identified and the low-energy X-ray image to be identified are obtained by irradiating the same article, so their pixels correspond one-to-one. Take an arbitrary pixel A1 in the high-energy X-ray image to be identified and let A2 be the corresponding pixel in the low-energy X-ray image to be identified; the pixel value of A1 and the pixel value of A2 are determined, and the atomic number corresponding to the pair (A1, A2) is then found by looking it up in the dual-energy resolution table.
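The table lookup described above can be sketched as follows. This is a minimal illustration in Python/NumPy, assuming the dual-energy resolution table is stored as a 256 × 256 array indexed by 8-bit high-energy and low-energy pixel values; the table layout, the function name and the dummy values are assumptions for illustration, not the patent's calibration data.

```python
import numpy as np

def resolve_atomic_number(high_img: np.ndarray,
                          low_img: np.ndarray,
                          dual_energy_table: np.ndarray) -> np.ndarray:
    """Look up an atomic number for every matched pixel pair.

    high_img, low_img : uint8 grayscale images of the same scene, aligned so
                        that pixels correspond one-to-one.
    dual_energy_table : 256 x 256 array; entry [h, l] holds the atomic number
                        calibrated for high-energy value h and low-energy
                        value l (illustrative layout, not real calibration).
    """
    assert high_img.shape == low_img.shape
    # Vectorized lookup: each pixel pair (A1, A2) indexes the table directly.
    return dual_energy_table[high_img.astype(np.intp), low_img.astype(np.intp)]

# Usage with dummy data: a flat table filled with iron's atomic number (26).
table = np.full((256, 256), 26, dtype=np.uint8)
high = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
low = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
z_image = resolve_atomic_number(high, low, table)   # 224 x 224 atomic numbers
```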
In this embodiment, obtaining the atomic number image by performing dual-energy resolution on the high-energy X-ray image and the low-energy X-ray image reduces the information loss caused by multiple image conversions, compared with first converting the X-ray image into a color image and then deriving the atomic number image from the color image.
S102: inputting the X-ray image to be recognized into a pre-trained recognition model to obtain the class information of the articles contained in the X-ray image to be recognized and output by the recognition model; the recognition model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by converting the sample X-ray image as supervision.
The color image may be an RGB (Red Green Blue) image, and the specific form of the color image is not limited.
In one embodiment, the X-ray image to be identified is a multi-channel image that includes a high-energy X-ray image to be identified, a low-energy X-ray image to be identified and an atomic number image to be identified. Compared with a single-channel image (any one of the three), the multi-channel image carries richer information, so using the recognition model to identify the multi-channel image gives higher accuracy.
It is understood that the type of input data used when training the recognition model is consistent with the type used during recognition. In this embodiment, the sample image used during training is therefore also a multi-channel image, which may include: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by performing dual-energy resolution on the sample high-energy X-ray image and the sample low-energy X-ray image.
The neural network to be trained and the recognition model obtained after training have the same structure, and the training process can be understood as a process of iteratively adjusting parameters in the neural network. The specific structure of the neural network and the recognition model is not limited.
For example, referring to fig. 2, the neural network may include a convolutional neural network, and a classification network and a regression network each connected to it. The input image is processed by the convolutional neural network, and the feature map it outputs is fed into both the classification network and the regression network: the classification network performs article classification and outputs the category information of the articles, while the regression network performs coordinate regression and outputs the position information of the articles.
For example, the classification network may be a ResNet (Residual Network) and the regression network may be a linear regression network; the specific classification network and regression network are not limited. During training, the loss function of the classification network may be a cross-entropy loss function, and the loss function of the regression network may be MSELoss (mean-squared error loss). When the value of the loss function stabilizes as the number of training iterations increases, the neural network can be judged to have finished training. The specific loss functions of the classification network and the regression network are not limited.
In one case, the structure of the convolutional neural network may be as shown in Table 1 below. Assuming that the resolution of the input image is 224 × 224, the process of obtaining the feature map with the convolutional neural network may be as follows: the 224 × 224 image is input into the convolutional neural network; after one convolution layer the output resolution is 32 × 224 × 224; after one max-pooling layer it is 32 × 112 × 112; the data then passes in turn through one convolution layer, one max-pooling layer, three convolution layers, one max-pooling layer, five convolution layers, one max-pooling layer and seven convolution layers, giving an output resolution of 1024 × 7 × 7. The Filters, Size/Stride and output resolution of each layer are listed in Table 1.
Assuming that the number of categories of articles to be identified is 5 and the number of preset boxes is 9, the number of filters of the last convolution layer in Table 1 may be 9 × (5 + 1 + 4) = 90. Here the 1 means that one dimension of data expresses foreground/background classification, for example 1 may indicate that a pixel belongs to the foreground and 0 that it belongs to the background; the 4 means that four dimensions of data express the position of a preset box, for example the length and width of the box together with the abscissa and ordinate of a specified point of the box (such as its center point, upper-left corner or lower-right corner); and the 5 is the number of categories of articles to be detected. The preset box may be a rectangular box set in advance according to the articles to be identified, or it may have another shape such as a circle or an ellipse; its specific shape is not limited. Finally, the convolution layer with 90 filters yields a feature map with resolution 90 × 7 × 7.
TABLE 1
[Table 1 is provided as drawings in the original publication; it lists, for each layer of the convolutional neural network, its Filters, Size/Stride and output resolution.]
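As a rough illustration of the Fig. 2 architecture, the following PyTorch sketch builds a convolutional backbone feeding a classification branch and a regression branch. It is a simplified stand-in, not the exact layer list of Table 1: the backbone layers, kernel sizes and the pairing of outputs with the loss functions named above are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Recognizer(nn.Module):
    """Sketch of the Fig. 2 layout: a convolutional backbone whose feature
    map feeds a classification branch and a regression branch. The backbone
    below stands in for Table 1 and is not its exact layer list."""

    def __init__(self, num_classes: int = 5, num_boxes: int = 9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 1024, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),                      # -> N x 1024 x 7 x 7
        )
        # Classification branch: per preset box, 5 class scores plus 1
        # foreground/background score. Regression branch: 4 box coordinates.
        self.cls_head = nn.Conv2d(1024, num_boxes * (num_classes + 1), 1)
        self.reg_head = nn.Conv2d(1024, num_boxes * 4, 1)

    def forward(self, x):               # x: N x 3 x 224 x 224 (high, low, Z)
        feat = self.backbone(x)
        return self.cls_head(feat), self.reg_head(feat)

# Loss functions named in the text (pairing with the heads is illustrative):
classification_loss = nn.CrossEntropyLoss()
regression_loss = nn.MSELoss()
```

With a 3-channel 224 × 224 input (the high-energy, low-energy and atomic number images stacked), the two heads output 9 × (5 + 1) = 54 classification channels and 9 × 4 = 36 regression channels on a 7 × 7 grid, which together match the 90 filters mentioned above.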
In the embodiment of the invention, when the neural network is trained, the labeling information of the color image obtained by transforming the sample X-ray image is taken as supervision. The color image can be acquired in several ways, which are illustrated below:
In one embodiment, the sample X-ray image is a multi-channel image including a sample high-energy X-ray image, a sample low-energy X-ray image and a sample atomic number image. In this embodiment, a sample high-energy X-ray image and a sample low-energy X-ray image acquired for the same scene may be obtained as the target sample high-energy X-ray image and the target sample low-energy X-ray image; dual-energy resolution is performed on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain the target sample atomic number image; gray-level fusion is performed on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray-level fusion image; and the gray-level fusion image is colorized according to the target sample atomic number image to obtain the color image.
In one case, colorizing the grayscale fusion image according to the target sample atomic number image to obtain a color image may include: determining each item component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image; respectively determining the color corresponding to each article component; and determining a region of each article component mapped to the gray-scale fusion image, and coloring the region by using the color corresponding to the article component to obtain a color image.
For example, the pixel value of each pixel in the atomic number image may be the atomic number of the atom corresponding to the pixel. Thus, each article component contained in the sample atomic number image can be determined according to the atomic number in the sample atomic number image, for example, if the atomic number corresponding to a certain pixel point in the sample atomic number image is 26, then it can be determined that the sample atomic number image contains iron. For each item component, a region of the item component mapped to the grayscale fusion image may be determined based on the region of the item component in the atomic number image. The region can be colored with the color corresponding to the iron to obtain a color image.
In one embodiment, the color corresponding to the composition of the article can be used directly to color the region to obtain a color image.
For example, if the component of the article is metal, the color corresponding to the metal may be determined, the region in the grayscale fusion image mapped by the metal may be determined, and the region may be colored by the color corresponding to the metal, so as to obtain a color image. For example, if the color corresponding to the metal is blue, the region of the metal mapped to the grayscale fusion image may be filled with blue to obtain a color image.
Alternatively, in another embodiment, coloring the region with the color corresponding to the article component to obtain a color image may include: determining the shade of that color according to the gray value of the region, and coloring the region according to the shade of the color to obtain a color image.
For example, the thicker the article, the less the X-rays penetrate it, which corresponds to a lower gray value in the gray-level fusion image; the thinner the article, the more the X-rays penetrate it, which corresponds to a higher gray value. In other words, the smaller the gray value in the gray-level fusion image, the thicker the article, and the thicker the article, the darker the color used when coloring it in the gray-level fusion image. With this embodiment, the obtained color image expresses the thickness of an article through the shade of its color, giving a better visual effect.
Therefore, with this embodiment the color image distinguishes articles of different materials by different colors and expresses the thickness of an article by the shade of its color, so the visual effect is better.
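A minimal sketch of this colorization step is given below in Python/NumPy. The atomic-number thresholds, the component names and the base colors (orange for organics, green for light inorganics, blue for metals, following common X-ray display conventions) are assumptions for illustration; the patent does not fix these values.

```python
import numpy as np

# Illustrative base colors per item component (RGB); assumed values.
BASE_COLORS = {
    "organic":   np.array([255, 165, 0], dtype=np.float32),   # orange
    "inorganic": np.array([0, 200, 0], dtype=np.float32),     # green
    "metal":     np.array([0, 0, 255], dtype=np.float32),     # blue
}

def component_of(z_image: np.ndarray) -> np.ndarray:
    """Map each pixel's atomic number to an item component.
    The thresholds are assumed; a real system would use calibrated ranges."""
    comp = np.full(z_image.shape, "inorganic", dtype=object)
    comp[z_image < 10] = "organic"
    comp[z_image >= 18] = "metal"
    return comp

def colorize(gray_fused: np.ndarray, z_image: np.ndarray) -> np.ndarray:
    """Color the gray-level fusion image: the hue comes from the component,
    the shade from the gray value (lower gray value = thicker item = darker)."""
    comp = component_of(z_image)
    shade = gray_fused.astype(np.float32) / 255.0        # 0 = dark, 1 = light
    color = np.zeros((*gray_fused.shape, 3), dtype=np.float32)
    for name, rgb in BASE_COLORS.items():
        mask = comp == name
        color[mask] = rgb * shade[mask][:, None]
    return color.astype(np.uint8)
```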
In other embodiments, the sample X-ray image is a single-channel image, such as a sample high-energy X-ray image or a sample low-energy X-ray image. In these embodiments, the corresponding color may be determined according to the degree of X-ray penetration expressed in the sample high-energy X-ray image or the sample low-energy X-ray image.
It will be appreciated that the degree of penetration of X-rays through different articles is different, and that the type of article can be determined from the degree of penetration, and the corresponding color determined based on the type of article.
In the above embodiment, a neural network with a preset structure is trained to obtain a recognition model by using a multi-channel image as training data and using labeled information of a color image as supervision, and the following describes the embodiment with reference to fig. 3:
s301: acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image; and carrying out dual-energy resolution on the high-energy X-ray image and the low-energy X-ray image of the target sample to obtain an atomic number image of the target sample.
The target sample high-energy X-ray image and the target sample low-energy X-ray image are images obtained by irradiating the same article, and pixel points in the two images can be in one-to-one correspondence.
For example, a dual-energy resolution table may be pre-established; the table records the correspondence between the gray values of pixels in the high-energy X-ray image, the gray values of pixels in the low-energy X-ray image, and atomic numbers. Take an arbitrary pixel A1 in the target sample high-energy X-ray image and let A2 be the corresponding pixel in the target sample low-energy X-ray image; the gray value of A1 and the gray value of A2 are determined, and the atomic number corresponding to the pair (A1, A2) is then found by looking it up in the dual-energy resolution table.
S302: and carrying out gray level fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray level fusion image.
In one embodiment, S302 may include: determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image; and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
For example, the matching pixel pair may include a pixel in the high-energy X-ray image of the target sample and a pixel in the low-energy X-ray image of the target sample; or, the matching pixel point pairs may include two pixel points in the high-energy X-ray image of the target sample and two pixel points in the low-energy X-ray image of the target sample, and the number of pixel points included in the specific matching pixel point pairs is not limited.
Taking the case in which a matching pixel pair includes one pixel in the target sample high-energy X-ray image and one pixel in the target sample low-energy X-ray image as an example: for each pixel in the target sample high-energy X-ray image, the pixel that matches it in the target sample low-energy X-ray image is found, and the two pixels are determined as a matching pixel pair; the gray values of the pixels in each matching pixel pair are then fused to obtain the gray-level fusion image.
The gray values of the pixels in each matching pixel pair can be fused in various ways; for example, the mean of the gray values can be calculated, or the gray values can be fused with weights. Weighted fusion can be understood as assigning a weight to each pixel in the matching pixel pair and computing a weighted combination of their gray values according to the assigned weights.
For example, when each matching pixel pair includes two pixels, their gray values can be weighted and fused according to the following formula:
fused gray value = gray value A × weight 1 + gray value B × weight 2
where the fused gray value is the gray value of the pixel in the gray-level fusion image, gray value A is the gray value of the pixel in the target sample high-energy X-ray image, weight 1 is the weight assigned to that pixel, gray value B is the gray value of the corresponding pixel in the target sample low-energy X-ray image, and weight 2 is the weight assigned to that pixel.
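A minimal sketch of this weighted gray-level fusion in Python/NumPy is shown below; the equal default weights and the 8-bit value range are assumptions, since the patent leaves the weights as a design choice.

```python
import numpy as np

def fuse_gray(high_img: np.ndarray, low_img: np.ndarray,
              weight_high: float = 0.5, weight_low: float = 0.5) -> np.ndarray:
    """Weighted gray-level fusion of matched pixel pairs, following the
    formula above. Equal weights reduce to the mean of the two gray values."""
    assert high_img.shape == low_img.shape   # pixels correspond one-to-one
    fused = (weight_high * high_img.astype(np.float32)
             + weight_low * low_img.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```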
If the X-ray irradiated article is thick, the high-energy X-ray image of the article has a higher sharpness of the outline of the article than the low-energy X-ray image of the article; if the object being X-rayed is thin, the low energy X-ray image of the object will have a higher sharpness of the outline of the object than the high energy X-ray image of the object. In the embodiment, the high-energy X-ray image and the low-energy X-ray image are subjected to gray level fusion, so that the outlines of the articles with different thicknesses can be clearly presented in the fused image, and therefore, the areas of different articles can be more clearly divided in the subsequent process of marking the images.
S303: and colorizing the gray level fusion image according to the sample atomic number image to obtain a color image.
In one case, S303 may include: determining each item component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image; respectively determining the color corresponding to each article component; and determining a region mapped to the gray-scale fusion image by the article component for each article component, and coloring the region by using the color corresponding to the article component to obtain a color image.
For example, the pixel value of each pixel in the atomic number image may be the atomic number of the atom corresponding to the pixel. Thus, each article component contained in the sample atomic number image can be determined according to the atomic number in the sample atomic number image, for example, if the atomic number corresponding to a certain pixel point in the sample atomic number image is 26, then it can be determined that the sample atomic number image contains iron. For each item component, a region of the item component mapped to the grayscale fusion image may be determined based on the region of the item component in the atomic number image. The region can be colored with the color corresponding to the iron to obtain a color image.
In one embodiment, the color corresponding to the composition of the article can be used directly to color the region to obtain a color image.
For example, if the component of the article is metal, the color corresponding to the metal may be determined, the region in the grayscale fusion image mapped by the metal may be determined, and the region may be colored by the color corresponding to the metal, so as to obtain a color image. For example, if the color corresponding to the metal is blue, the region of the metal mapped to the grayscale fusion image may be filled with blue to obtain a color image.
Alternatively, in another embodiment, coloring the region with a color corresponding to the composition of the article to obtain a color image may include: determining the shade degree of the color corresponding to the component of the article according to the gray value of the area; and coloring the area according to the shade degree of the color to obtain a color image.
For example, the thicker the article, the less the X-rays penetrate it, which corresponds to a lower gray value in the gray-level fusion image; the thinner the article, the more the X-rays penetrate it, which corresponds to a higher gray value. In other words, the smaller the gray value in the gray-level fusion image, the thicker the article, and the thicker the article, the darker the color used when coloring it in the gray-level fusion image. With this embodiment, the obtained color image expresses the thickness of an article through the shade of its color, giving a better visual effect.
Therefore, with this embodiment the color image distinguishes articles of different materials by different colors and expresses the thickness of an article by the shade of its color, so the visual effect is better.
S304: and acquiring the labeling information of the color image.
For example, the position and the category information of the article in the color image may be manually labeled on the color image to obtain the labeling information. Or, the position and the category information of the article in the color image may also be labeled by using an image labeling algorithm to obtain labeling information, and the embodiment of the present invention does not limit the specific image labeling algorithm.
Since the color image is obtained by image transformation of the target sample high-energy X-ray image, the target sample low-energy X-ray image and the target sample atomic number image, the position of the object in the color image corresponds to the position of the object in the target sample high-energy X-ray image, the target sample low-energy X-ray image and the target sample atomic number image one-to-one. Therefore, the labeling information of the color image can be applied to a high-energy X-ray image of the target sample, a low-energy X-ray image of the target sample, and an atomic number image of the target sample.
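Because of this one-to-one positional correspondence, the labeling information obtained on the color image can be attached unchanged to the multi-channel sample. A minimal sketch of assembling such a training sample is shown below; the field names, image shapes and the example annotation are hypothetical and only illustrate the data layout.

```python
import numpy as np

def build_training_sample(high_img, low_img, z_img, color_image_labels):
    """Assemble one multi-channel training sample.

    color_image_labels holds (category, box) pairs annotated on the color
    image; because the color image was derived from these three images, the
    pixel positions correspond one-to-one and the boxes are reused unchanged.
    """
    x = np.stack([high_img, low_img, z_img], axis=0)        # 3 x H x W input
    y = [{"category": c, "box": b} for c, b in color_image_labels]
    return x, y

# Usage with dummy data and a hypothetical annotation.
h = np.zeros((224, 224), dtype=np.uint8)
l = np.zeros((224, 224), dtype=np.uint8)
z = np.zeros((224, 224), dtype=np.uint8)
labels = [("knife", (10, 20, 80, 60))]   # (category, box) from the color image
x, y = build_training_sample(h, l, z, labels)
```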
S305: and training the neural network with a preset structure by taking the high-energy X-ray image of the target sample, the low-energy X-ray image of the target sample and the atomic number image of the target sample as training data and taking the marking information of the color image as supervision to obtain the recognition model.
By applying the embodiment of the invention, in the first aspect, the pre-trained identification model is utilized to identify the article type in the X-ray image to be identified, so that the automatic identification of the article type irradiated by the X-ray security inspection equipment is realized.
In the second aspect, the sample X-ray image is transformed to obtain a color image, the color image is labeled to obtain labeling information, and the labeling information is used as supervision when training the recognition model. Because the color image has a better visual effect than the sample X-ray image, the labeling information obtained by labeling the color image is more accurate than labeling the sample X-ray image directly, and the recognition model trained with this labeling information has higher accuracy.
In a third aspect, in one embodiment, the X-ray image to be identified is a multi-channel image that includes a high-energy X-ray image to be identified, a low-energy X-ray image to be identified and an atomic number image to be identified. Compared with a single-channel image (any one of the three), the multi-channel image carries richer information, so using the recognition model to identify the multi-channel image gives higher accuracy.
In a fourth aspect, in one embodiment, obtaining the atomic number image by performing dual-energy resolution on the high-energy X-ray image and the low-energy X-ray image reduces the information loss caused by multiple image conversions, compared with first converting the X-ray image into a color image and then deriving the atomic number image from the color image.
In a fifth aspect, in the above embodiment, a multi-channel image may be converted to obtain a color image, and a single-channel image may be converted to obtain a color image, that is, in this scheme, a high-energy X-ray image and a low-energy X-ray image may be simultaneously acquired by a dual-energy X-ray security inspection apparatus, or only one X-ray image may be acquired by a single-energy X-ray security inspection apparatus, and the type of the X-ray security inspection apparatus is not limited.
Fig. 4 is a second flowchart of the method for identifying an item category based on an X-ray security inspection apparatus according to the embodiment of the present invention, where the method includes:
s401: acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image; and carrying out double-energy resolution on the high-energy X-ray image and the low-energy X-ray image of the target sample to obtain an atomic number image of the target sample.
The target sample high-energy X-ray image and the target sample low-energy X-ray image are images obtained by irradiating the same article, and pixel points in the two images can be in one-to-one correspondence.
For example, a dual-energy resolution table may be pre-established; the table records the correspondence between the gray values of pixels in the high-energy X-ray image, the gray values of pixels in the low-energy X-ray image, and atomic numbers. Take an arbitrary pixel A1 in the target sample high-energy X-ray image and let A2 be the corresponding pixel in the target sample low-energy X-ray image; the gray value of A1 and the gray value of A2 are determined, and the atomic number corresponding to the pair (A1, A2) is then found by looking it up in the dual-energy resolution table.
S402: determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image; and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
For example, the matching pixel pair may include a pixel in the high-energy X-ray image of the target sample and a pixel in the low-energy X-ray image of the target sample; or, the matching pixel point pairs may include two pixel points in the high-energy X-ray image of the target sample and two pixel points in the low-energy X-ray image of the target sample, and the number of pixel points included in the specific matching pixel point pairs is not limited.
Taking the case in which a matching pixel pair includes one pixel in the target sample high-energy X-ray image and one pixel in the target sample low-energy X-ray image as an example: for each pixel in the target sample high-energy X-ray image, the pixel that matches it in the target sample low-energy X-ray image is found, and the two pixels are determined as a matching pixel pair; the gray values of the pixels in each matching pixel pair are then fused to obtain the gray-level fusion image.
The gray values of the pixels in each matching pixel pair can be fused in various ways; for example, the mean of the gray values can be calculated, or the gray values can be fused with weights. Weighted fusion can be understood as assigning a weight to each pixel in the matching pixel pair and computing a weighted combination of their gray values according to the assigned weights.
For example, when each matching pixel pair includes two pixels, their gray values can be weighted and fused according to the following formula:
fused gray value = gray value A × weight 1 + gray value B × weight 2
where the fused gray value is the gray value of the pixel in the gray-level fusion image, gray value A is the gray value of the pixel in the target sample high-energy X-ray image, weight 1 is the weight assigned to that pixel, gray value B is the gray value of the corresponding pixel in the target sample low-energy X-ray image, and weight 2 is the weight assigned to that pixel.
If the X-ray irradiated article is thick, the high-energy X-ray image of the article has a higher sharpness of the outline of the article than the low-energy X-ray image of the article; if the object being X-rayed is thin, the low energy X-ray image of the object will have a higher sharpness of the outline of the object than the high energy X-ray image of the object. In the embodiment, the high-energy X-ray image and the low-energy X-ray image are subjected to gray level fusion, so that the outlines of the articles with different thicknesses can be clearly presented in the fused image, and therefore, the areas of different articles can be more clearly divided in the subsequent process of marking the images.
S403: determining each article component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image; the color corresponding to each article component is determined separately.
For example, the pixel value of each pixel in the atomic number image may be the atomic number of the atom corresponding to the pixel. Thus, each article component contained in the target sample atomic number image can be determined according to the atomic number in the target sample atomic number image, and if the atomic number corresponding to a certain pixel point in the target sample atomic number image is 26, then it can be determined that iron is contained in the target sample atomic number image, and the color corresponding to the iron can be determined.
S404: determining a region of each article component mapped to the gray level fusion image, and determining the shade degree of the corresponding color of the article component according to the gray level value of the region; and coloring the area according to the shade degree of the color to obtain a color image.
For example, the thicker the article, the less the X-rays penetrate it, which corresponds to a lower gray value in the gray-level fusion image; the thinner the article, the more the X-rays penetrate it, which corresponds to a higher gray value. In other words, the smaller the gray value in the gray-level fusion image, the thicker the article, and the thicker the article, the darker the color used when coloring it in the gray-level fusion image. With this embodiment, the obtained color image expresses the thickness of an article through the shade of its color, giving a better visual effect.
S405: and acquiring the labeling information of the color image.
For example, the position and the category information of the article in the color image may be manually labeled on the color image to obtain the labeling information. Or, the position and the category information of the article in the color image may also be labeled by using an image labeling algorithm to obtain labeling information, and the embodiment of the present invention does not limit the specific image labeling algorithm.
Since the color image is obtained by image-converting the target sample high-energy X-ray image, the target sample low-energy X-ray image, and the target sample atomic number image, the position of the object in the color image corresponds to the position of the object in the target sample high-energy X-ray image, the target sample low-energy X-ray image, and the target sample atomic number image one-to-one. Thus, the labeling information of the color image can be used for a high-energy X-ray image of the target sample, a low-energy X-ray image of the target sample, and an atomic number image of the target sample.
In some related schemes, the position and type information of the object in the sample high-energy X-ray image, the sample low-energy X-ray image and the sample atomic number image are labeled directly, but in these schemes, the labeling accuracy of the position and type information of the object in the sample high-energy X-ray image, the sample low-energy X-ray image and the sample atomic number image is low. In the embodiment, the high-energy X-ray image of the target sample, the low-energy X-ray image of the target sample and the atomic number image of the target sample are subjected to image transformation to obtain a color image, different articles in the color image are distinguished by using different colors in the color image, and then the position and the category information of the articles in the color image are labeled, so that the labeling accuracy is improved.
S406: and training the neural network with a preset structure by taking the high-energy X-ray image of the target sample, the low-energy X-ray image of the target sample and the atomic number image of the target sample as training data and taking the marking information of the color image as supervision to obtain the recognition model.
S407: irradiating the article with high-energy X-rays to obtain a high-energy X-ray image to be identified; irradiating the article with low-energy X-rays to obtain a low-energy X-ray image to be identified; and performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified to obtain an atomic number image to be identified.
Here, the high-energy X-ray image can be understood as an image expressing the degree to which the irradiated article absorbs high-energy X-rays, or equivalently, an image expressing the attenuation coefficient of the irradiated article under high-energy X-rays. The low-energy X-ray image can be understood as an image expressing the degree to which the irradiated article absorbs low-energy X-rays, or equivalently, an image expressing the attenuation coefficient of the irradiated article under low-energy X-rays.
For example, a dual-energy resolution table may be established in advance; the table records the correspondence between pairs of pixel values (one from the high-energy X-ray image, one from the low-energy X-ray image) and atomic numbers. The high-energy X-ray image to be identified and the low-energy X-ray image to be identified are obtained by irradiating the same article, so their pixels correspond one-to-one. For any pixel a1 in the high-energy X-ray image to be identified, let a2 be the corresponding pixel in the low-energy X-ray image to be identified; the pixel values of a1 and a2 are determined, and the atomic number corresponding to the pair (a1, a2) is then obtained by looking it up in the dual-energy resolution table. A minimal lookup sketch follows.
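This sketch assumes the pre-built table is a 256 × 256 array indexed by (high-energy pixel value, low-energy pixel value); how the table itself is calibrated is not described in this section, so the zero-filled placeholder stands in for real calibration data.

```python
import numpy as np

# Placeholder dual-energy resolution table: entry [h, l] is the atomic number assigned
# to a pixel whose high-energy value is h and whose low-energy value is l.
dual_energy_table = np.zeros((256, 256), dtype=np.uint8)

def resolve_atomic_numbers(high_img: np.ndarray, low_img: np.ndarray) -> np.ndarray:
    """Per-pixel dual-energy resolution; both inputs are uint8 images of the same shape,
    and their pixels correspond one-to-one."""
    return dual_energy_table[high_img, low_img]
```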
S408: and inputting the high-energy X-ray image to be identified, the low-energy X-ray image to be identified, and the atomic number image to be identified into the identification model to obtain the category information, output by the identification model, of the articles contained in these images. A usage sketch continuing the training example above is given below.
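This continues the hypothetical SimpleClassifier from the S406 sketch; the three to-be-identified images are assumed to already be loaded as H × W tensors, and the returned index is only meaningful for whatever label set the model was trained on.

```python
import torch

# Inference with the hypothetical SimpleClassifier from the training sketch above.
model.eval()
with torch.no_grad():
    x = torch.stack([high_to_identify, low_to_identify, atomic_to_identify], dim=0)  # 3 x H x W
    logits = model(x.unsqueeze(0).float())      # add batch dimension: 1 x 3 x H x W
    category = logits.argmax(dim=1).item()      # index of the predicted article category
```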
By applying this embodiment of the invention, in a first aspect, the article categories in the X-ray image to be identified are recognized with the pre-trained recognition model, so that article categories are identified automatically.
In a second aspect, the sample X-ray image is transformed into a color image, the color image is labeled to obtain labeling information, and the labeling information is used as supervision to train the recognition model. Since the visual effect of the color image is better than that of the sample X-ray image, the labeling information obtained by labeling the color image is more accurate than that obtained by labeling the sample X-ray image directly, and the recognition model trained with this labeling information is therefore more accurate.
In a third aspect, the X-ray image to be identified is a multi-channel image comprising the high-energy X-ray image to be identified, the low-energy X-ray image to be identified, and the atomic number image to be identified. Compared with a single-channel image (any one of these three images), the multi-channel image carries richer information, so using the identification model on the multi-channel image yields higher accuracy.
In a fourth aspect, compared with first converting the X-ray images into a color image and then deriving the atomic number image from that color image, obtaining the atomic number image directly by dual-energy resolution of the high-energy and low-energy X-ray images reduces the information loss caused by multiple image conversions.
Corresponding to the above method embodiment, the present invention further provides an article category identification apparatus based on an X-ray security inspection device, as shown in fig. 5, comprising:
a first obtaining module 501, configured to obtain an X-ray image to be identified;
the identification module 502 is configured to input the X-ray image to be identified to a pre-trained identification model, so as to obtain category information of an article included in the X-ray image to be identified, which is output by the identification model; the identification model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by transforming the sample X-ray image as supervision.
In one embodiment, the X-ray image to be identified includes: a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified obtained by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified;
the sample X-ray image includes: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by performing dual-energy resolution on the sample high-energy X-ray image and the sample low-energy X-ray image.
In one embodiment, the apparatus further comprises: a second acquisition module, a dual-energy resolution module, a fusion module, a colorization module (not shown), wherein,
the second acquisition module is used for acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image;
the dual-energy resolution module is used for performing dual-energy resolution on the high-energy X-ray image of the target sample and the low-energy X-ray image of the target sample to obtain an atomic number image of the target sample;
the fusion module is used for carrying out gray fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray fusion image;
and the colorizing module is used for colorizing the gray level fusion image according to the target sample atomic number image to obtain a color image.
In one embodiment, the fusion module is specifically configured to:
determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image;
and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
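As one possible reading of this fusion step, the sketch below averages each matched pixel pair with a fixed weight; the embodiment does not fix a particular fusion rule, so the weight and the simple per-pixel averaging are assumptions.

```python
import numpy as np

def fuse_gray(high_img: np.ndarray, low_img: np.ndarray, w_high: float = 0.5) -> np.ndarray:
    """Fuse matched pixel pairs from the registered high- and low-energy images.

    Because both images are acquired for the same scene, the pixel at (i, j) in one
    image is matched with the pixel at (i, j) in the other.
    """
    fused = w_high * high_img.astype(np.float32) + (1.0 - w_high) * low_img.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```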
In one embodiment, the colorization module includes: a first determination sub-module, a second determination sub-module, a coloring sub-module (not shown), wherein,
the first determining submodule is used for determining each article component contained in the target sample atomic number image according to the atomic numbers in the target sample atomic number image;
the second determining sub-module is used for respectively determining the color corresponding to each article component;
and the coloring sub-module is used for determining the region of each article component mapped to the gray-scale fusion image, and coloring the region by using the color corresponding to the article component to obtain a color image.
In one embodiment, the coloring sub-module is specifically configured to:
determining the shade degree of the color corresponding to the component of the article according to the gray value of the area;
and coloring the region according to the depth of the color to obtain a color image.
In one embodiment, the first obtaining module is specifically configured to:
irradiating the article by using high-energy X-rays to obtain a high-energy X-ray image to be identified;
irradiating the article by using low-energy X-rays to obtain a low-energy X-ray image to be identified;
and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
By applying the embodiment of the invention, an X-ray image to be identified is acquired and input into a pre-trained recognition model, which outputs the category information of the articles contained in the X-ray image to be identified. The recognition model is obtained by training a neural network with a preset structure, taking a sample X-ray image as training data and taking the labeling information of a color image obtained by transforming the sample X-ray image as supervision. In a first aspect, the article categories in the X-ray image to be identified are recognized with the pre-trained recognition model, so that article categories are identified automatically. In a second aspect, the sample X-ray image is transformed into a color image, the color image is labeled to obtain labeling information, and the labeling information is used as supervision to train the recognition model; since the visual effect of the color image is better than that of the sample X-ray image, the labeling information obtained by labeling the color image is more accurate than that obtained by labeling the sample X-ray image directly, and the recognition model trained with this labeling information is therefore more accurate.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601 and a memory 602,
a memory 602 for storing a computer program;
the processor 601 is configured to implement any one of the above-described article type identification methods based on the X-ray security inspection apparatus when executing the program stored in the memory 602.
For example, the electronic device may be an X-ray security apparatus, or may also be a data processing apparatus connected to the X-ray security apparatus, and the specific type of the electronic device is not limited.
The Memory mentioned in the above electronic device may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned article class identification methods based on an X-ray security inspection apparatus.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions, which when run on a computer, causes the computer to execute any one of the above-mentioned article class identification methods based on an X-ray security inspection apparatus.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, apparatus embodiments, device embodiments, computer-readable storage medium embodiments, and computer program product embodiments are described for simplicity as they are substantially similar to method embodiments, where relevant, reference may be made to some descriptions of method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An article category identification method based on X-ray security inspection equipment is characterized by comprising the following steps:
acquiring an X-ray image to be identified;
inputting the X-ray image to be recognized into a pre-trained recognition model to obtain the class information of the articles contained in the X-ray image to be recognized and output by the recognition model;
the identification model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by transforming the sample X-ray image as supervision.
2. The method according to claim 1, characterized in that the X-ray image to be identified comprises: a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified obtained by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified;
the sample X-ray image comprises: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by performing dual-energy resolution on the sample high-energy X-ray image and the sample low-energy X-ray image.
3. The method of claim 2, wherein the color image is obtained by:
acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image;
carrying out dual-energy resolution on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a target sample atomic number image;
carrying out gray level fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray level fusion image;
colorizing the gray level fusion image according to the target sample atomic number image to obtain a color image.
4. The method of claim 3, wherein said gray-scale fusing said target sample high-energy X-ray image and said target sample low-energy X-ray image to obtain a gray-scale fused image comprises:
determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image;
and fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image.
5. The method of claim 3, wherein colorizing the grayscale fusion image according to the target sample atomic number image to obtain a color image comprises:
determining each item component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image;
respectively determining the color corresponding to each article component;
and determining a region of each article component mapped to the gray-scale fusion image, and coloring the region by using the color corresponding to the article component to obtain a color image.
6. The method of claim 5, wherein said coloring said region with a color corresponding to the composition of the article to produce a color image comprises:
determining the shade degree of the color corresponding to the component of the article according to the gray value of the area;
and coloring the region according to the depth of the color to obtain a color image.
7. The method of claim 2, wherein the acquiring an X-ray image to be identified comprises:
irradiating the article by using high-energy X-rays to obtain a high-energy X-ray image to be identified;
irradiating the article by using low-energy X-rays to obtain a low-energy X-ray image to be identified;
and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
8. An article category identification device based on X-ray security check equipment is characterized by comprising:
the first acquisition module is used for acquiring an X-ray image to be identified;
the identification module is used for inputting the X-ray image to be identified into a pre-trained identification model to obtain the category information of the articles contained in the X-ray image to be identified and output by the identification model; the identification model is obtained by training a neural network with a preset structure by taking a sample X-ray image as training data and taking marking information of a color image obtained by transforming the sample X-ray image as supervision.
9. The apparatus according to claim 8, wherein the X-ray image to be identified comprises: a high-energy X-ray image to be identified, a low-energy X-ray image to be identified, and an atomic number image to be identified obtained by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified;
the sample X-ray image includes: a sample high-energy X-ray image, a sample low-energy X-ray image, and a sample atomic number image obtained by performing dual-energy resolution on the sample high-energy X-ray image and the sample low-energy X-ray image;
the device further comprises:
the second acquisition module is used for acquiring a sample high-energy X-ray image and a sample low-energy X-ray image which are acquired aiming at the same scene and used as a target sample high-energy X-ray image and a target sample low-energy X-ray image;
the dual-energy resolution module is used for performing dual-energy resolution on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a target sample atomic number image;
the fusion module is used for carrying out gray fusion on the target sample high-energy X-ray image and the target sample low-energy X-ray image to obtain a gray fusion image;
the colorizing module is used for colorizing the gray level fusion image according to the target sample atomic number image to obtain a color image;
the fusion module is specifically configured to:
determining matching pixel point pairs in the target sample high-energy X-ray image and the target sample low-energy X-ray image, wherein the matching pixel point pairs comprise pixel points in the target sample high-energy X-ray image and pixel points in the target sample low-energy X-ray image;
fusing the gray values of the pixels in each pair of matched pixels to obtain a gray fused image;
the colorization module comprises:
the first determining submodule is used for determining each article component contained in the target sample atomic number image according to the atomic number in the target sample atomic number image;
the second determining submodule is used for respectively determining the color corresponding to each article component;
the coloring sub-module is used for determining the region of each article component mapped to the gray level fusion image and coloring the region by using the color corresponding to the article component to obtain a color image;
the coloring submodule is specifically used for:
determining the shade degree of the color corresponding to the component of the article according to the gray value of the area;
coloring the region according to the depth of the color to obtain a color image;
the first acquisition module is specifically configured to:
irradiating the article by using high-energy X-rays to obtain a high-energy X-ray image to be identified;
irradiating the article by using low-energy X-rays to obtain a low-energy X-ray image to be identified;
and obtaining an atomic number image to be identified by performing dual-energy resolution on the high-energy X-ray image to be identified and the low-energy X-ray image to be identified.
10. An electronic device comprising a processor and a memory;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
CN202110277577.7A 2021-03-15 2021-03-15 Article category identification method, device and equipment based on X-ray security inspection equipment Pending CN115081469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277577.7A CN115081469A (en) 2021-03-15 2021-03-15 Article category identification method, device and equipment based on X-ray security inspection equipment

Publications (1)

Publication Number Publication Date
CN115081469A true CN115081469A (en) 2022-09-20

Family

ID=83241666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277577.7A Pending CN115081469A (en) 2021-03-15 2021-03-15 Article category identification method, device and equipment based on X-ray security inspection equipment

Country Status (1)

Country Link
CN (1) CN115081469A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116401587A (en) * 2023-06-08 2023-07-07 乐山师范学院 Object category identification method based on X-rays
CN116401587B (en) * 2023-06-08 2023-08-18 乐山师范学院 Object category identification method based on X-rays
CN117347396A (en) * 2023-08-18 2024-01-05 北京声迅电子股份有限公司 XGBoost model-based substance type identification method
CN117347396B (en) * 2023-08-18 2024-05-03 北京声迅电子股份有限公司 Material type identification method based on XGBoost model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination