CN113160341B - Method, system and equipment for generating X-ray image containing target object - Google Patents

Method, system and equipment for generating X-ray image containing target object

Info

Publication number
CN113160341B
CN113160341B
Authority
CN
China
Prior art keywords
target object
channel
image
pixel value
value
Prior art date
Legal status
Active
Application number
CN202110458638.XA
Other languages
Chinese (zh)
Other versions
CN113160341A (en)
Inventor
张树武
刘杰
郑阳
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202110458638.XA
Publication of CN113160341A
Application granted
Publication of CN113160341B

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention belongs to the technical field of data enhancement, and particularly relates to a method, a system and equipment for generating an X-ray image containing a target object, aiming at solving the problem of insufficient real X-ray image data. The method comprises the following steps: acquiring a real image of a target object and an X-ray image without the target object; inputting the real image of the target object into a generative adversarial network model to obtain a synthetic image of the target object; preprocessing the synthetic image of the target object to obtain an intermediate image of the target object; and fusing the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain an X-ray generated image containing the target object. The invention increases the diversity of the data set, provides a data basis for the generalization of subsequent detection models, and effectively alleviates the problem of insufficient X-ray image data.

Description

Method, system and equipment for generating X-ray image containing target object
Technical Field
The invention belongs to the technical field of data enhancement, and particularly relates to a method, a system and equipment for generating an X-ray image containing a target object.
Background
An X-ray image is produced by a security-inspection X-ray machine in the following way: the inspected package is carried by a conveyor belt into the inspection tunnel; when the package enters the tunnel, a ray source is triggered to emit an X-ray beam, which penetrates the packed objects and falls onto a detector; the detector converts the received X-rays into an electrical signal, which is sent to a control computer for further processing; after complex computation and imaging processing, a high-quality X-ray image is obtained. Security staff must judge the articles in the packages in real time from a large number of such X-ray images. In order to speed up security inspection, reduce passenger waiting time, and save a large amount of manpower and material resources, it is necessary to upgrade the monitoring system and make it more intelligent.
The currently popular target detection and identification approach is the deep learning method in artificial intelligence, which takes X-ray image data containing contraband as training data and feeds it into a detection and identification model for training. In order to obtain a detection and identification model with good generalization performance, a large number of X-ray images containing contraband such as controlled knives and firearms is needed, so that contraband of different sizes, angles, brands and categories can be detected.
However, real X-ray image data containing contraband is scarce and cannot meet the requirements of deep-learning-based methods. The present invention therefore provides a method, a system and equipment for generating an X-ray image containing a target object.
Disclosure of Invention
In order to solve the above problem in the prior art, namely the problem of insufficient real X-ray image data, the present invention provides a method, a system and a device for generating an X-ray image containing a target object.
In a first aspect of the present invention, a method for generating an X-ray image containing a target object is provided, the method comprising:
acquiring a real image of a target object and an X-ray image without the target object;
inputting the real image of the target object into a generative adversarial network model to obtain a synthetic image of the target object;
preprocessing the target object composite image and the target object real image to obtain a target object intermediate image;
and fusing the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain an X-ray generation image with the target object.
Optionally, the acquiring the real image of the target item includes:
sending a first request to a target website of a target article material;
receiving a webpage source code returned by a target article material website;
analyzing the webpage source code to obtain a link of each image in the webpage source code;
sending a second request to the link for each of the images;
receiving a real image of the target item returned by the link for each of the images.
Optionally, the preprocessing the target article composite image and the target article real image to obtain a target article intermediate image includes:
and performing one or more operations of rotation, cutting and affine transformation on the target object composite image and the target object real image to obtain a target object intermediate image.
Optionally, the step of blending the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain an X-ray generated image containing the target object includes:
extracting an effective area of the intermediate image of the target object, wherein the effective area is an area where the target object is located;
determining the size range of the effective area in an X-ray image fusion area without the target object, wherein the fusion area is an area where the baggage is located in the X-ray image without the target object;
determining pixel values of RGB three channels of a target object in the intermediate image of the target object in a fusion area, wherein the pixel values of the RGB three channels comprise an R channel pixel value, a G channel pixel value and a B channel pixel value;
and adjusting the effective area according to the determined size range and the pixel values of the RGB three channels to be fused into the fusion area, so as to obtain an X-ray generated image containing the target object.
Optionally, the determining the size range of the effective area in the X-ray image fusion area without the target object comprises:
determining the category of the target object in the effective area, and the length value and the width value of the fusion area;
searching a preset proportion of the target object in the X-ray real image according to the category of the target object;
and determining the size range of the effective region according to the preset proportion, the length value and the width value of the fusion region, wherein the size range comprises a length threshold and a width threshold of the effective region.
Optionally, the determining the B-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
determining the category of the target object in the effective area;
searching a preset B channel value range of the target object in the X-ray real image according to the category of the target object;
and determining the B channel pixel value of the target object in the fusion area according to the preset B channel value range.
Optionally, the determining the R-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
extracting the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region;
determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the R channel of the fusion region;
determining a first product by multiplying the proportion assigned to the X-ray image by the pixel value of the X-ray image in the R channel of the fusion region;
determining a second product by multiplying the proportion assigned to the target object by the pixel value of the target object in the R channel of the effective region;
and determining the sum of the first product and the second product as the pixel value of the target object in the R channel of the fusion region.
Optionally, the determining the G-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
extracting the pixel value of the X-ray image in the G channel of the fusion region and the pixel value of the target object in the G channel of the effective region;
determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the G channel of the fusion region;
determining a third product by multiplying the proportion assigned to the X-ray image by the pixel value of the X-ray image in the G channel of the fusion region;
determining a fourth product by multiplying the proportion assigned to the target object by the pixel value of the target object in the G channel of the effective region;
and determining the sum of the third product and the fourth product as the pixel value of the target object in the G channel of the fusion region.
In a second aspect of the present invention, an X-ray image generating system including a target object is provided, the system comprising:
the acquisition unit is used for acquiring a real image of the target object and an X-ray image without the target object;
the synthesis unit is used for inputting the real image of the target object into the generative adversarial network model to obtain a synthetic image of the target object;
the preprocessing unit is used for preprocessing the target object composite image to obtain a target object intermediate image;
and the generating unit is used for fusing the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain an X-ray generated image containing the target object.
In a third aspect of the present invention, an apparatus is provided, which includes:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein:
the memory stores instructions executable by the processor for implementing the method for generating an X-ray image containing a target object according to any one of the first aspect.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, which stores computer instructions for being executed by the computer to implement the method for generating an X-ray image containing a target object according to the first aspect.
The invention has the beneficial effects that: the generative adversarial network model is used to synthesize data from real images of the target object acquired from the network, producing synthetic images of the target object that resemble real data and increasing the diversity of the data set; the synthetic images and the real images of the target object are then preprocessed, further increasing the data volume. This provides a data basis for the generalization of subsequent detection models and effectively alleviates the problem of insufficient X-ray image data.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is a schematic diagram of a method for generating an X-ray image including a target object according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of a portion of a target item downloaded from a network in one embodiment of the present application;
FIG. 3 is a schematic diagram of a method for generating an X-ray image containing a target object according to yet another embodiment of the present application;
FIG. 4 is a schematic illustration of an X-ray real image containing a target item in one embodiment of the present application;
FIG. 5 is a schematic diagram of a process for fusing an intermediate image of a target object with an X-ray image without the target object according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an X-ray image generation system including a target article according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
The invention aims to generate an X-ray image containing a target object which is similar to real data under the condition of less real data, thereby effectively making up the defect of less data.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In a first aspect, the present invention provides a method for generating an X-ray image containing a target object, as shown in fig. 1, the method comprising the steps of:
step S101: and acquiring a real image of the target object and an X-ray image without the target object.
In the embodiment of the present application, the target object is an article that laws and regulations prohibit individuals from carrying and that threatens the safety of people's lives and property during security inspection, for example weapons, ammunition, explosive items (such as explosives, detonators and blasting fuses), highly toxic items (such as sodium cyanide and potassium cyanide), narcotics (such as opium, heroin and morphine), radioactive items, knives and the like.
Optionally, the acquiring the real image of the target object includes:
and sending a first request to a target website of the target article material.
In this step, a web crawler is used to obtain real images of the target object. First, the addresses of professional material websites for the target object are determined; for example, 5 websites may serve as material websites, and either the best one of the 5 can be selected as the target material website, or several of the 5 can be used simultaneously, with the first request sent to each in turn and the subsequent steps performed for each.
In one example, the first request may be sent with Python's requests library, packaging the target web address of the contraband material website.
And receiving the webpage source code returned by the target article material website.
And analyzing the webpage source code to obtain the link of each image in the webpage source code.
In this step, the structure of the web page source code may be analyzed with XPath or BeautifulSoup, so as to parse out the link of each image.
A second request is sent to the link for each of the images.
And receiving the real image of the target object returned by the link of each image, and saving the real image of the target object in a local storage in an image file format.
Furthermore, the saved image files can be screened, and images that do not meet the requirements can be deleted, either manually or automatically.
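A minimal sketch of this crawling procedure is given below. It assumes a hypothetical material website URL and uses the requests and BeautifulSoup libraries mentioned above; the real site address, image selectors and screening criteria are not specified by the patent and would differ in practice.

```python
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Hypothetical target material website; the actual address is not given in the patent.
TARGET_URL = "https://example.com/knife-and-gun-materials"
SAVE_DIR = "target_item_images"

def crawl_target_images(url: str, save_dir: str) -> None:
    os.makedirs(save_dir, exist_ok=True)

    # First request: fetch the page source of the material website.
    page = requests.get(url, timeout=10)
    page.raise_for_status()

    # Parse the page source and collect the link of every image.
    soup = BeautifulSoup(page.text, "html.parser")
    links = [urljoin(url, img["src"]) for img in soup.find_all("img") if img.get("src")]

    # Second request: download each image and save it locally for later screening.
    for i, link in enumerate(links):
        resp = requests.get(link, timeout=10)
        if resp.ok:
            with open(os.path.join(save_dir, f"item_{i:05d}.jpg"), "wb") as f:
                f.write(resp.content)

if __name__ == "__main__":
    crawl_target_images(TARGET_URL, SAVE_DIR)
```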
In one example, firearms and knives are used as the target objects to be detected, and images of firearms and knives of different sizes, shapes and categories can be acquired through the web crawler method; as shown in fig. 2, these include images of pistols, submachine guns, long guns, daggers, kitchen knives, long knives and the like.
In addition, the X-ray images without the target object can be obtained from relevant professional websites, or from X-ray images without the target object saved by an X-ray machine during inspection.
Step S102: and inputting the real image of the target object into a generative adversarial network model to obtain a synthetic image of the target object.
In this step, the generative adversarial network model is trained in advance on a large amount of real image data of the target object, and the real image data used for training is obtained by the web crawler method described above.
The generative adversarial network model synthesizes new images based on the real image data of the target object, yielding synthetic images of the target object that resemble the real images. The number of synthetic images can be far greater than the number of real images, so that a large amount of target object image data is obtained and the data requirement is met.
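The patent does not specify the architecture of the generative adversarial network; the sketch below is only one minimal, DCGAN-style possibility in PyTorch showing how a trained generator could be used to synthesize additional target-item images. The Generator layout, latent size and checkpoint path "generator.pt" are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator: latent vector -> 64x64 RGB image."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Load a generator trained beforehand on the crawled real images of the target item.
# "generator.pt" is a placeholder checkpoint path.
generator = Generator()
generator.load_state_dict(torch.load("generator.pt", map_location="cpu"))
generator.eval()

with torch.no_grad():
    z = torch.randn(16, 100, 1, 1)   # 16 random latent vectors
    synthetic = generator(z)         # 16 synthetic target-item images, values in [-1, 1]
```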
Step S103: and preprocessing the target object composite image and the target object real image to obtain a target object intermediate image.
Optionally, the preprocessing the target article composite image and the target article real image to obtain a target article intermediate image includes:
and carrying out one or more operations of rotation, cutting and affine transformation on the target object synthetic image and the target object real image to obtain a target object intermediate image.
This preprocessing further expands the data, yielding a number of intermediate images of the target object greater than the number of synthetic images.
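As a sketch of this preprocessing step, the snippet below applies a random rotation, crop and affine transformation to one image with OpenCV and NumPy; the parameter ranges are illustrative assumptions rather than values fixed by the method.

```python
import cv2
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    h, w = image.shape[:2]

    # Random rotation about the image centre.
    angle = rng.uniform(-30, 30)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(image, rot, (w, h))

    # Random crop keeping at least 80% of each side.
    ch, cw = int(h * rng.uniform(0.8, 1.0)), int(w * rng.uniform(0.8, 1.0))
    y0, x0 = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    out = out[y0:y0 + ch, x0:x0 + cw]

    # Small random affine transformation defined by three point pairs.
    src = np.float32([[0, 0], [cw - 1, 0], [0, ch - 1]])
    jitter = rng.uniform(-0.05, 0.05, src.shape) * np.array([cw, ch])
    dst = (src + jitter).astype(np.float32)
    out = cv2.warpAffine(out, cv2.getAffineTransform(src, dst), (cw, ch))
    return out

# Each synthetic or real target-item image can be expanded into several intermediate images:
# intermediates = [augment(img, np.random.default_rng(i)) for i in range(5)]
```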
Step S104: and fusing the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain an X-ray generation image with the target object.
Optionally, as shown in fig. 3, the step of blending the intermediate image of the target object into the X-ray image without the target object according to a preset rule to obtain the X-ray generated image with the target object includes the following steps:
step S301: and extracting an effective area of the intermediate image of the target object, wherein the effective area is an area where the target object is located.
In one example, the effective area is also referred to as the foreground area. The effective area of the image may be extracted with an image-processing extraction algorithm, for example using OpenCV functions.
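One possible way to extract the effective (foreground) area with OpenCV is sketched below; it assumes the downloaded target-item image has a light, roughly uniform background, which may not hold for every image.

```python
import cv2
import numpy as np

def extract_effective_area(item_img: np.ndarray):
    """Return the binary foreground mask and the bounding box (x, y, w, h) of the target item."""
    gray = cv2.cvtColor(item_img, cv2.COLOR_BGR2GRAY)

    # Otsu threshold; THRESH_BINARY_INV assumes a light background and a darker object.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Keep the largest connected contour as the effective area.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    clean_mask = np.zeros_like(mask)
    cv2.drawContours(clean_mask, [largest], -1, 255, thickness=cv2.FILLED)

    x, y, w, h = cv2.boundingRect(largest)
    return clean_mask, (x, y, w, h)
```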
Step S302: determining the size range of the effective area in the X-ray image fusion area without the target object. The fusion area is the area where the baggage is located in the X-ray image without the target object.
Optionally, the determining the size range of the effective area in the X-ray image fusion area without the target object comprises:
and determining the category of the target object in the effective area, and the length value and the width value of the fusion area.
And searching the preset proportion of the target object in the X-ray real image according to the category of the target object.
And determining the size range of the effective area according to the preset proportion, the length value and the width value of the fusion area, wherein the size range comprises a length threshold and a width threshold of the effective area.
In one example, with firearms and knives as the target objects to be detected, suppose a short knife has a preset proportion of 20-25% in real X-ray images and the fusion area is 25 cm long and 15 cm wide; according to the preset proportion, the length of the short knife is determined to be in the range 5-6.25 cm and its width in the range 3-3.75 cm. The size of the effective area can be adjusted according to this size range, and the position information of the target item is stored.
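This size calculation can be written down directly. The sketch below assumes a simple per-category lookup table of preset proportions; only the "short_knife" entry matches the example above, and the other entries are placeholder values.

```python
# Preset proportion of each category in real X-ray images (obtained from the calibration step).
# Only "short_knife" matches the worked example; the other values are placeholders.
PRESET_RATIO = {"short_knife": (0.20, 0.25), "pistol": (0.15, 0.25), "kitchen_knife": (0.20, 0.30)}

def size_range(category: str, fusion_len_cm: float, fusion_wid_cm: float):
    lo, hi = PRESET_RATIO[category]
    return (fusion_len_cm * lo, fusion_len_cm * hi), (fusion_wid_cm * lo, fusion_wid_cm * hi)

# For a short knife in a 25 cm x 15 cm fusion region:
# length range (5.0, 6.25) cm, width range (3.0, 3.75) cm.
print(size_range("short_knife", 25.0, 15.0))
```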
The preset proportion of the target object in the real X-ray image can be obtained by calibration in advance, and the calibration process is as follows:
in the first step, the large category of the target object to be detected is determined.
And secondly, refining the large category of the target object.
For example, the large categories of tools and firearms are respectively subdivided into a plurality of small categories such as pistols, submachine guns, long guns, daggers, kitchen knives, long knives, and the like.
Because different categories of target objects occupy different proportions in X-ray images, and different individuals of the same category also differ to some extent, the categories of the target objects are distinguished in detail so that the generated X-ray images better match real images; different parameters are set for different categories during fusion, making the generated X-ray images more realistic.
And thirdly, acquiring an X-ray real image of the target object containing the category according to the defined category.
In one example, the SIXray image dataset published by a team from the University of Chinese Academy of Sciences is consistent with the pseudo-color images scanned by current security-inspection X-ray machines, and the composition of its images matches the actual demand, so real X-ray images containing the target object can be obtained from the SIXray image dataset.
And fourthly, analyzing the X-ray real image, and calculating the length and width of each type of target object in the X-ray real image, the proportion of each type of target object in the X-ray real image, the storage range of each type of target object in the X-ray real image, the value information of each type of target object in the RGB three channels and other basic attributes.
And fifthly, summarizing the calculated basic attributes to obtain the corresponding preset proportion, RGB value information and the like of the target object of each category in the X-ray real image.
In one example, as shown in fig. 4, the basic attributes of categories such as pistols, submachine guns, long guns, daggers, kitchen knives and long knives are summarized; in most cases the length and width of pistols, daggers and kitchen knives are no more than 0.3 times the length and width of the luggage, and the color presented is mainly blue.
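As a sketch of this calibration step, the snippet below computes, per category, the size ratio relative to the baggage region and a B-channel value range from annotated real X-ray images. The annotation format (category plus item and baggage bounding boxes) is an assumption made for illustration.

```python
import cv2
import numpy as np
from collections import defaultdict

def calibrate(samples):
    """samples: list of dicts with keys
       'image_path', 'category', 'item_box' (x, y, w, h), 'bag_box' (x, y, w, h)."""
    stats = defaultdict(lambda: {"ratios": [], "blue": []})
    for s in samples:
        img = cv2.imread(s["image_path"])            # OpenCV loads images as BGR
        ix, iy, iw, ih = s["item_box"]
        bx, by, bw, bh = s["bag_box"]
        rec = stats[s["category"]]
        rec["ratios"].append(max(iw / bw, ih / bh))  # item size relative to the baggage
        rec["blue"].extend(img[iy:iy + ih, ix:ix + iw, 0].ravel())  # B-channel values

    summary = {}
    for cat, rec in stats.items():
        blue = np.asarray(rec["blue"])
        summary[cat] = {
            "ratio_range": (float(np.min(rec["ratios"])), float(np.max(rec["ratios"]))),
            "blue_range": (float(np.percentile(blue, 5)), float(np.percentile(blue, 95))),
        }
    return summary
```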
Step S303: and determining pixel values of the target object in the intermediate image of the target object in three RGB channels of the fusion area, wherein the pixel values of the three RGB channels comprise an R channel pixel value, a G channel pixel value and a B channel pixel value.
In the following, taking firearms and knives as the example target objects, summarizing the real X-ray images shows that firearms and knives mainly appear blue, with varying shades, in X-ray images. Therefore, the pixel value of the B channel is fixed within a range, as follows:
optionally, the determining the B-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
determining the category of the target object in the effective area;
searching a preset B channel value range of the target object in the X-ray real image according to the category of the target object; this range of values can be obtained from the calibration process described above.
And determining the B channel pixel value of the target object in the fusion area according to the preset B channel value range. As formula 1:
blue1_value ≤ blue_value ≤ blue2_value    formula (1)
wherein blue_value is the fused B-channel pixel value, and blue1_value and blue2_value are the bounds of the preset B-channel value range of the target object in the real X-ray image. Typically, the value lies between 160 and 210.
In order to better embody the fusion effect between the X-ray image without the target object and the intermediate image of the target object, optionally, the determining the R-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
extracting the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region;
determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the R channel of the fusion region;
determining a first product by multiplying the proportion assigned to the X-ray image by the pixel value of the X-ray image in the R channel of the fusion region;
determining a second product by multiplying the proportion assigned to the target object by the pixel value of the target object in the R channel of the effective region;
and determining the sum of the first product and the second product as the pixel value of the target object in the R channel of the fusion region, as in formula 2:
Red_value = thred1 × Xray_redValue + thred2 × object_redValue    formula (2)
wherein Red_value is the R-channel pixel value of the target object in the fusion region; Xray_redValue is the pixel value of the X-ray image in the R channel of the fusion region; object_redValue is the pixel value of the target object in the R channel of the effective region; thred1 is the proportion assigned to the pixel value of the X-ray image in the R channel of the fusion region; thred2 is the proportion assigned to the pixel value of the target object in the R channel of the fusion region; thred1 × Xray_redValue is the first product and thred2 × object_redValue is the second product.
Optionally, the determining the G-channel pixel value of the target item in the target item intermediate image in the fusion region includes:
extracting the pixel value of the X-ray image in the G channel of the fusion region and the pixel value of the target object in the G channel of the effective region;
determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the G channel of the fusion region;
determining a third product by multiplying the proportion assigned to the X-ray image by the pixel value of the X-ray image in the G channel of the fusion region;
determining a fourth product by multiplying the proportion assigned to the target object by the pixel value of the target object in the G channel of the effective region;
and determining the sum of the third product and the fourth product as the pixel value of the target object in the G channel of the fusion region, as in formula 3:
Green_value = thred3 × Xray_greenValue + thred4 × object_greenValue    formula (3)
wherein Green_value is the G-channel pixel value of the target object in the fusion region; Xray_greenValue is the pixel value of the X-ray image in the G channel of the fusion region; object_greenValue is the pixel value of the target object in the G channel of the effective region; thred3 is the proportion assigned to the pixel value of the X-ray image in the G channel of the fusion region; thred4 is the proportion assigned to the pixel value of the target object in the G channel of the fusion region; thred3 × Xray_greenValue is the third product and thred4 × object_greenValue is the fourth product.
The proportion values in formula 2 and formula 3 can be adjusted according to the actual fusion effect, so that the fusion looks better.
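Putting formulas 1 to 3 together, the fusion of one target-item intermediate image into the baggage region might look like the sketch below. The weight values thred1-thred4 and the B-channel range are example settings within the ranges stated above, not values fixed by the patent, and clipping the item's B values into the preset range is one possible reading of the B-channel rule.

```python
import numpy as np

def fuse_item(xray_bgr: np.ndarray, item_bgr: np.ndarray, item_mask: np.ndarray,
              top_left, blue_range=(160, 210), thred1=0.4, thred2=0.6,
              thred3=0.4, thred4=0.6) -> np.ndarray:
    """Blend the effective area of a target item into the baggage (fusion) region.
    xray_bgr: X-ray image without the target item (BGR).
    item_bgr: resized target-item intermediate image (BGR), same size as item_mask.
    item_mask: uint8 mask of the effective area (255 = target item).
    top_left: (x, y) position of the item in the X-ray image, inside the fusion region."""
    out = xray_bgr.copy()
    h, w = item_bgr.shape[:2]
    x, y = top_left
    roi = out[y:y + h, x:x + w]        # view into the output image
    m = item_mask > 0

    # Formula 2: R channel as a weighted sum of background and item values.
    roi[..., 2][m] = (thred1 * roi[..., 2][m] + thred2 * item_bgr[..., 2][m]).astype(np.uint8)
    # Formula 3: G channel, same weighted combination.
    roi[..., 1][m] = (thred3 * roi[..., 1][m] + thred4 * item_bgr[..., 1][m]).astype(np.uint8)
    # Formula 1: keep the B channel within the preset per-category range.
    roi[..., 0][m] = np.clip(item_bgr[..., 0][m], blue_range[0], blue_range[1])

    return out
```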
It should be noted that the above example only addresses firearm and knife target objects. If the target object is of another type, the color presented in the X-ray image may not be blue but, for example, green; in that case the pixel value of the G channel may be determined by a preset value range, and the pixel values of the R channel and the B channel may be determined by setting proportions, which is not limited herein.
Step S304: and adjusting the effective area according to the determined size range and the pixel values of the RGB three channels to be fused into the fusion area, so as to obtain an X-ray generated image containing the target object.
As shown in fig. 5, partial results of blending knife and firearm target items into X-ray images are shown. (a) Images of controlled knives, firearms and the like acquired from the network; (b) the white area is the region of the knife and firearm images to be inserted into the X-ray image, i.e. the effective area; (c) the result of fusing the knife and firearm images with the X-ray image.
Based on the same concept, in a second aspect of the present invention, an X-ray image generating system including a target object is provided, as shown in fig. 6, the system comprising:
an acquiring unit 601, configured to acquire a real image of a target object and an X-ray image without the target object;
a synthesizing unit 602, configured to input the real image of the target item into a generative adversarial network model to obtain a synthetic image of the target item;
a preprocessing unit 603, configured to preprocess the target article composite image to obtain a target article intermediate image;
a generating unit 604, configured to blend the target object intermediate image into the X-ray image without the target object according to a preset rule, so as to obtain an X-ray generated image with the target object.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the X-ray image generating system including the target object provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An apparatus of a third embodiment of the invention comprises:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein:
the memory stores instructions executable by the processor for performing the method of generating an X-ray image containing a target item according to any one of the first aspect.
A computer-readable storage medium according to a fourth embodiment of the present invention stores computer instructions for being executed by the computer to implement the method for generating an X-ray image containing a target object according to any one of the first aspect.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Reference is now made to FIG. 7, which is a block diagram illustrating a computer system of a server configured to implement embodiments of the present methods, systems, and apparatus. The server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 701. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is apparent to those skilled in the art that the scope of the present invention is not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (6)

1. A method of generating an X-ray image containing a target object, the method comprising:
acquiring a real image of a target object and an X-ray image without the target object;
inputting the real image of the target object into a generative adversarial network model to obtain a synthetic image of the target object;
preprocessing the target object composite image and the target object real image to obtain a target object intermediate image;
fusing the target object intermediate image into the X-ray image without the target object according to a preset rule to obtain an X-ray generated image with the target object, which specifically comprises the following steps:
extracting an effective area of the intermediate image of the target object, wherein the effective area is an area where the target object is located;
determining the size range of the effective area in an X-ray image fusion area without the target object, wherein the fusion area is an area where the baggage is located in the X-ray image without the target object;
determining pixel values of three RGB channels of the target object in the intermediate image of the target object in the fusion area, wherein the pixel values of the three RGB channels comprise an R channel pixel value, a G channel pixel value and a B channel pixel value;
adjusting the effective area according to the determined size range and the pixel values of the RGB three channels to be fused into the fusion area to obtain an X-ray generated image containing a target object, wherein if the target object is a firearm and a cutter, the method specifically comprises the following steps: searching a preset B channel value range of the target object in the X-ray real image according to the category of the target object; determining the B channel pixel value of the target object in the fusion area according to the preset B channel value range:
blue1_value ≤ blue_value ≤ blue2_value
wherein blue_value is the fused B-channel pixel value, and blue1_value and blue2_value are the bounds of the preset B-channel value range of the target object in the real X-ray image;
determining the R-channel pixel value of the target item in the target item intermediate image in the fusion region comprises: extracting the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region; determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the R channel of the fusion region; determining a first product according to the proportion assigned to the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the X-ray image in the R channel of the fusion region; determining a second product according to the proportion assigned to the pixel value of the target object in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region; determining the sum of the first product and the second product as the pixel value of the target object in the R channel of the fusion region:
Red_value = thred1 × Xray_redValue + thred2 × object_redValue
wherein Red_value is the R-channel pixel value of the target object in the fusion region; Xray_redValue is the pixel value of the X-ray image in the R channel of the fusion region; object_redValue is the pixel value of the target object in the R channel of the effective region; thred1 is the proportion assigned to the pixel value of the X-ray image in the R channel of the fusion region; thred2 is the proportion assigned to the pixel value of the target object in the R channel of the fusion region; thred1 × Xray_redValue is the first product and thred2 × object_redValue is the second product;
determining the G-channel pixel value of the target item in the target item intermediate image in the fusion area comprises:
extracting pixel values of the X-ray image in the G channel of the fusion area and pixel values of the target object in the G channel of the effective area;
determining the proportion of the pixel value of the X-ray image in the G channel of the fusion area and the proportion of the pixel value of the target object;
determining a third product according to the proportion of the pixel value of the X-ray image in the G channel of the fusion area to the pixel value of the X-ray image in the G channel of the fusion area;
determining a fourth product according to the proportion assigned to the pixel value of the target object in the G channel of the fusion area and the pixel value of the target object in the G channel of the effective area;
determining the sum of the third product and the fourth product as the pixel value of the target item in the G channel of the fusion area:
Green_value = thred3 × Xray_greenValue + thred4 × object_greenValue
wherein Green_value is the G-channel pixel value of the target object in the fusion region; Xray_greenValue is the pixel value of the X-ray image in the G channel of the fusion region; object_greenValue is the pixel value of the target object in the G channel of the effective region; thred3 is the proportion assigned to the pixel value of the X-ray image in the G channel of the fusion region; thred4 is the proportion assigned to the pixel value of the target object in the G channel of the fusion region; thred3 × Xray_greenValue is the third product and thred4 × object_greenValue is the fourth product;
if the target object is not a firearm or a cutter, the pixel value of one of the G channel or the R channel can be determined through a preset value range, and the pixel values of the other channels can be determined in a mode of setting proportion.
2. The method of claim 1, wherein the obtaining a true image of the target item comprises:
sending a first request to a target website of a target article material;
receiving a webpage source code returned by a target article material website;
analyzing the webpage source code to obtain a link of each image in the webpage source code;
sending a second request to the link for each of the images;
receiving a real image of the target item returned by the link for each of the images.
3. The method according to claim 1, wherein the preprocessing the target item composite image and the target item real image to obtain a target item intermediate image comprises:
and performing one or more operations of rotation, cutting and affine transformation on the target object composite image and the target object real image to obtain a target object intermediate image.
4. The method of claim 1, wherein determining the size range of the active area in the X-ray image fusion area without the target object comprises:
determining the category of the target object in the effective area, and the length value and the width value of the fusion area;
searching a preset proportion of the target object in the X-ray real image according to the category of the target object;
and determining the size range of the effective area according to the preset proportion, the length value and the width value of the fusion area, wherein the size range comprises a length threshold and a width threshold of the effective area.
5. An X-ray image generation system containing a target item, the system comprising:
the acquisition unit is used for acquiring a real image of the target object and an X-ray image without the target object;
the synthesizing unit is used for inputting the real image of the target object into a generative adversarial network model to obtain a synthetic image of the target object;
the preprocessing unit is used for preprocessing the target object composite image to obtain a target object intermediate image;
the generating unit is configured to blend the target object intermediate image into the X-ray image without the target object according to a preset rule to obtain an X-ray generated image with the target object, and specifically includes:
extracting an effective area of the intermediate image of the target object, wherein the effective area is an area where the target object is located;
determining the size range of the effective area in an X-ray image fusion area without the target object, wherein the fusion area is an area where the baggage is located in the X-ray image without the target object;
determining pixel values of three RGB channels of the target object in the intermediate image of the target object in the fusion area, wherein the pixel values of the three RGB channels comprise an R channel pixel value, a G channel pixel value and a B channel pixel value;
adjusting the effective area according to the determined size range and the pixel values of the RGB three channels to be fused into the fusion area to obtain an X-ray generated image containing a target object, wherein if the target object is a firearm and a cutter, the method specifically comprises the following steps: searching a preset B channel value range of the target object in the X-ray real image according to the category of the target object; determining the B channel pixel value of the target object in the fusion area according to the preset B channel value range:
blue1_value ≤ blue_value ≤ blue2_value
wherein blue_value is the fused B-channel pixel value, and blue1_value and blue2_value are the bounds of the preset B-channel value range of the target object in the real X-ray image;
determining the R-channel pixel value of the target item in the target item intermediate image in the fusion region comprises: extracting the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region; determining the proportion assigned to the pixel value of the X-ray image and the proportion assigned to the pixel value of the target object in the R channel of the fusion region; determining a first product according to the proportion assigned to the pixel value of the X-ray image in the R channel of the fusion region and the pixel value of the X-ray image in the R channel of the fusion region; determining a second product according to the proportion assigned to the pixel value of the target object in the R channel of the fusion region and the pixel value of the target object in the R channel of the effective region; determining the sum of the first product and the second product as the pixel value of the target item in the R channel of the fusion region:
Red_value = thred1 × Xray_redValue + thred2 × object_redValue
wherein Red_value is the R-channel pixel value of the target object in the fusion region; Xray_redValue is the pixel value of the X-ray image in the R channel of the fusion region; object_redValue is the pixel value of the target object in the R channel of the effective region; thred1 is the proportion assigned to the pixel value of the X-ray image in the R channel of the fusion region; thred2 is the proportion assigned to the pixel value of the target object in the R channel of the fusion region; thred1 × Xray_redValue is the first product and thred2 × object_redValue is the second product;
determining the G-channel pixel value of the target item in the target item intermediate image in the fusion area comprises:
extracting pixel values of the X-ray image in the G channel of the fusion area and pixel values of the target object in the G channel of the effective area;
determining the proportion of the pixel value of the X-ray image in the G channel of the fusion area and the proportion of the pixel value of the target object;
determining a third product according to the proportion of the pixel value of the X-ray image in the G channel of the fusion area to the pixel value of the X-ray image in the G channel of the fusion area;
determining a fourth product according to the proportion assigned to the pixel value of the target object in the G channel of the fusion area and the pixel value of the target object in the G channel of the effective area;
determining the sum of the third product and the fourth product as the pixel value of the target object in the G channel of the fusion area:
Green_value = thred3 × Xray_greenValue + thred4 × object_greenValue
wherein Green_value is the G-channel pixel value of the target object in the fusion region; Xray_greenValue is the pixel value of the X-ray image in the G channel of the fusion region; object_greenValue is the pixel value of the target object in the G channel of the effective region; thred3 is the proportion assigned to the pixel value of the X-ray image in the G channel of the fusion region; thred4 is the proportion assigned to the pixel value of the target object in the G channel of the fusion region; thred3 × Xray_greenValue is the third product and thred4 × object_greenValue is the fourth product;
if the target object is not a firearm or a cutter, the pixel value of one of the G channel or the R channel can be determined through a preset value range, and the pixel values of the other channels can be determined in a mode of setting proportion.
6. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein:
the memory stores instructions executable by the processor for performing the method of generating an X-ray image containing a target item of any one of claims 1 to 4.
CN202110458638.XA 2021-04-27 2021-04-27 Method, system and equipment for generating X-ray image containing target object Active CN113160341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110458638.XA CN113160341B (en) 2021-04-27 2021-04-27 Method, system and equipment for generating X-ray image containing target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110458638.XA CN113160341B (en) 2021-04-27 2021-04-27 Method, system and equipment for generating X-ray image containing target object

Publications (2)

Publication Number Publication Date
CN113160341A CN113160341A (en) 2021-07-23
CN113160341B (en) 2022-11-25

Family

ID=76871186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110458638.XA Active CN113160341B (en) 2021-04-27 2021-04-27 Method, system and equipment for generating X-ray image containing target object

Country Status (1)

Country Link
CN (1) CN113160341B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506283A (en) * 2021-07-26 2021-10-15 浙江大华技术股份有限公司 Image processing method and device, storage medium and electronic device
CN114998277B (en) * 2022-06-16 2024-05-17 吉林大学 Grabbing point identification method and device, electronic equipment and computer storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902643A (en) * 2019-03-07 2019-06-18 浙江啄云智能科技有限公司 Intelligent safety inspection method, device, system and its electronic equipment based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10825219B2 (en) * 2018-03-22 2020-11-03 Northeastern University Segmentation guided image generation with adversarial networks
US11030486B2 (en) * 2018-04-20 2021-06-08 XNOR.ai, Inc. Image classification through label progression
CN110378432B (en) * 2019-07-24 2022-04-12 阿里巴巴(中国)有限公司 Picture generation method, device, medium and electronic equipment
CN110765976B (en) * 2019-11-01 2021-02-09 重庆紫光华山智安科技有限公司 Generation method of human face characteristic points, training method of data network and related device
CN111242905B (en) * 2020-01-06 2021-03-26 科大讯飞(苏州)科技有限公司 Method and equipment for generating X-ray sample image and storage device
CN111832745B (en) * 2020-06-12 2023-08-01 北京百度网讯科技有限公司 Data augmentation method and device and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902643A (en) * 2019-03-07 2019-06-18 浙江啄云智能科技有限公司 Intelligent safety inspection method, device, system and its electronic equipment based on deep learning

Also Published As

Publication number Publication date
CN113160341A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN113160341B (en) Method, system and equipment for generating X-ray image containing target object
US20230162342A1 (en) Image sample generating method and system, and target detection method
US20220189111A1 (en) Method for the automatic material classification and texture simulation for 3d models
US20200218944A1 (en) Method, Apparatus, and Electronic Device for Processing Point Cloud Data, and Computer Readable Storage Medium
US7702183B1 (en) Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery
WO2020173021A1 (en) Artificial intelligence-based forbidden object identification method, apparatus and device, and storage medium
US10436932B2 (en) Inspection systems for quarantine and methods thereof
US9552521B2 (en) Human body security inspection method and system
CN110543857A (en) Contraband identification method, device and system based on image analysis and storage medium
US9305755B2 (en) Mass analysis data processing method and mass analysis data processing apparatus
KR101102415B1 (en) Editing system and editing method for the security space in the photoreconnaissance inage
CN109978892B (en) Intelligent security inspection method based on terahertz imaging
CN106251310B (en) A kind of multispectral remote sensing geochemical anomalies studying method
CN107633433B (en) Advertisement auditing method and device
CN104036003B (en) search result integration method and device
CA3149539A1 (en) Probabilistic image analysis
CN116363366A (en) Transmission line mountain fire monitoring method and device based on semantic segmentation and storage medium
CN116310927A (en) Multi-source data analysis fire monitoring and identifying method and system based on deep learning
US20160300098A1 (en) Data processing method and system
JP6897804B2 (en) Display processing equipment, imaging mass spectrometry system and display processing method
US12007341B2 (en) X-ray baggage and parcel inspection system with efficient third-party image processing
CN112068097B (en) Radar remote sensing data labeling method and device, electronic equipment and storage medium
US11864551B1 (en) Aerial wildlife survey and wounded game animal tracking process for completing accurate aerial wildlife surveys by machine learning and AI-supported filtering of non-relevant information to count and report on specific targeted aspects of wildlife
CN111859052B (en) Grading display method and system for field investigation result
CN114648706B (en) Forest tree species identification method, device and equipment based on satellite remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant