CN116342396A - CT image enhancement method and device - Google Patents


Info

Publication number
CN116342396A
CN116342396A (application CN202111584983.4A)
Authority
CN
China
Prior art keywords
image
projection
neural network
network model
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111584983.4A
Other languages
Chinese (zh)
Inventor
何竞择
宋诗宇
张文杰
徐圆飞
李保磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hangxing Machinery Manufacturing Co Ltd
Original Assignee
Beijing Hangxing Machinery Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hangxing Machinery Manufacturing Co Ltd filed Critical Beijing Hangxing Machinery Manufacturing Co Ltd
Priority to CN202111584983.4A
Publication of CN116342396A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/80: Geometric correction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10081: Computed x-ray tomography [CT]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention relates to a CT image enhancement method and device in the technical field of X-ray detection, intended to solve the problem of low CT image resolution. The method comprises the following steps: acquiring a CT projection image of a detection target; performing primary image enhancement on the CT projection image through a first neural network model to eliminate deformation and distortion of the CT projection image; and performing secondary image enhancement on the CT image after primary image enhancement through a second neural network model to improve the resolution of the CT image. The technical scheme provided by the application can greatly improve the resolution of CT images, enabling CT to be used independently for item detection during security inspection.

Description

CT image enhancement method and device
Technical Field
The present invention relates to the field of X-ray detection technologies, and in particular, to a method and apparatus for enhancing CT images.
Background
In daily security inspection, the X-ray article security inspection machine is currently the most widely applied security inspection equipment, used in public security, traffic, government, large-scale activity sites and other occasions. In more demanding settings such as civil aviation and customs, security inspection CT equipment has been rapidly popularized. Security CT can obtain not only the three-dimensional form of the detected object but also the electron density and effective atomic number of each point, so it can accurately distinguish material species. Compared with common article security machines, security CT therefore has a very prominent advantage for detecting drugs, explosives and biological tissues.
In practical use of security inspection CT, the package-throughput requirement on security inspection equipment makes the spatial resolution of the obtained CT images clearly lower than that of images from an article security inspection machine, which adversely affects subsequent manual image judgment and intelligent identification.
Disclosure of Invention
In view of the above analysis, the present invention aims to provide a method and apparatus for enhancing CT images, which improves the spatial resolution of CT images.
The aim of the invention is mainly realized by the following technical scheme:
in a first aspect, an embodiment of the present application provides a CT image enhancement method, including:
acquiring a CT projection image of a detection target;
performing primary image enhancement on the CT projection image through a first neural network model so as to eliminate deformation and distortion of the CT projection image;
and performing secondary image enhancement on the CT image subjected to primary image enhancement through a second neural network model so as to improve the resolution of the CT image.
Further, the performing primary image enhancement on the CT projection image through the first neural network model to eliminate deformation and distortion of the CT projection image includes:
the first neural network model takes the CT projection image as input and takes the repair image as output;
the repair image is an image shot when the shooting direction of the CT detector is fixed, and the fixed shooting direction is the same as the projection direction of the CT projection image.
Further, the training samples of the first neural network model include: a projection image of a plurality of projection directions corresponding to the same detection object and a repair image corresponding to each projection image.
Further, in the training process of the first neural network model, the method for acquiring the repair image includes:
determining an initial placement mode of a detection object used in training;
determining a projection direction of the CT projection image;
taking the initial placement mode as a reference, and determining the shooting direction of the repair image and/or the placement mode of the detection object according to the projection direction;
and obtaining a repair image corresponding to the projection direction through the CT detector according to the shooting direction of the repair image and/or the placement mode of the detection object.
Further, the performing secondary image enhancement on the CT image after the primary image enhancement through the second neural network model, to improve the resolution of the CT image, includes:
the second neural network model takes the CT image after the primary image enhancement as input and takes a high-resolution image as output;
the resolution of the high-resolution image is based on the resolution of a high-resolution detector, which is a detector in the security field with higher resolution than a CT detector.
Further, the acquiring a CT projection image of the detection target includes:
determining a projection direction of the CT projection image;
and acquiring the CT projection image according to the projection direction.
Further, the method further includes: determining an equivalent metal atomic number of the metal region according to the CT image, wherein the equivalent metal atomic number is used for representing the average atomic number of the pixels of the metal region in the CT image;
determining a third neural network model according to the equivalent metal atomic number, wherein the third neural network model takes an image containing the artifact of the equivalent metal atomic number as an input and takes an image not containing the artifact of the equivalent metal atomic number as an output;
and removing metal artifacts in the CT image through the third neural network model.
In a second aspect, embodiments of the present application provide a CT image enhancement apparatus, including: the device comprises an acquisition module, a first image enhancement module and a second image enhancement module;
the acquisition module is used for acquiring CT projection images of the detection targets;
the first image enhancement module is used for carrying out primary image enhancement on the CT projection image through a first neural network model, and eliminating deformation and distortion of the CT projection image;
the second image enhancement module is used for carrying out secondary image enhancement on the CT image after primary image enhancement through a second neural network model, and improving the resolution ratio of the CT image.
Further, the first neural network model takes the CT projection image as input and takes a repair image as output;
the repair image is an image shot when the shooting direction of the CT detector is fixed, and the fixed shooting direction is the same as the projection direction of the CT projection image.
Further, the second neural network model takes the CT image after the primary image enhancement as input and takes a high-resolution image as output;
the resolution of the high resolution image is based on the resolution of a high resolution detector, which is an X-ray detector of higher resolution than a CT detector in the art.
The technical scheme provided by the embodiment of the invention has at least one of the following technical effects:
1. The deformation and distortion of the CT projection image are repaired through the first neural network model, laying a foundation for obtaining a high-quality CT image, and the resolution of the CT projection image is improved through the second neural network model, improving CT image quality and the object-identification capability of the CT detector, so that CT can be used independently to detect objects during security inspection.
2. Metal artifacts in the twice-enhanced CT image are eliminated through the third neural network model, further improving CT image quality, so that CT can be used independently to detect metal objects and objects containing metal during security inspection.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a flowchart of a CT image enhancement method according to an embodiment of the present invention;
FIG. 2 is a schematic view of the detection directions of a package passing through a security inspection machine according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a CT image enhancement apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
DR (digital radiography) technology is well established and can clearly capture X-ray transmission images of the interior of a package at a set angle. DR's most outstanding advantages are high resolution and clear, detailed images, and it is widely applied as the current mainstay of auxiliary security inspection. However, because of occlusion, limited shooting angles and similar factors, it is difficult for DR alone to accomplish effective identification of drugs, explosives and biological tissues.
CT can accurately obtain the equivalent atomic number and electron density of each position in space through reconstruction of high- and low-energy spectrum images, and thus efficiently completes material identification. CT images discriminate well among drugs, explosives and biological tissues, but their resolution is not high. Particularly in security inspection equipment, under the influence of the helical pitch and the limited scan time, the resolution of CT images is even lower and their projection images may even be distorted.
Specifically, in a security inspection scene there is a certain requirement on the package-throughput speed of the CT, so a CT image acquisition scheme with a larger pitch must be adopted. As a result, the acquired CT sinogram has low resolution, the reprojection image from CT reconstruction has low resolution and partial distortion, and image recognition on the reprojection image is difficult.
In addition, the CT detector is a surrounding detector that must sample many angles in a short time, so it is designed as a single-row detector with smaller crystals and a shorter exposure time, and its definition is naturally lower; the DR detector, with its fixed angle, larger crystals and more ample exposure time, naturally produces clearer images. The definition of the detector used for CT therefore differs greatly from that of the DR detector.
Based on these characteristics of the CT and DR detectors, the prior art uses both detectors in security inspection equipment. In a real scene, however, the DR detector's position is fixed and it can only capture images at fixed angles. If the images at the same angle obtained by the DR and CT detectors fail to reflect the real properties of the detected object, only manual inspection remains, which greatly reduces security inspection efficiency.
In order to solve the above technical problems, an embodiment of the present invention provides a CT image enhancement method, including the following steps:
and step 1, acquiring a CT projection image of the detection target.
In the prior art, when the CT detector and the DR detector coexist, the DR detector is fixedly arranged, so that the projection direction corresponding to the slice image in the prior art defaults to the shooting direction corresponding to the DR detector. However, in the embodiment of the present application, the security inspection device has only the CT detector, so the projection direction of the CT projection image needs to be determined first. And then acquiring a CT projection image according to the projection direction. The projection direction of the CT projection image may be set in advance or may be temporarily specified. For example, in order to facilitate detection, a projection direction is preset, and when a CT projection image obtained in the preset direction cannot reflect the detection target attribute, a detection person may manually set a new projection direction so as to observe the detection target from a different direction.
And 2, performing primary image enhancement on the CT projection image through the first neural network model, and eliminating deformation and distortion of the CT projection image.
And 3, performing secondary image enhancement on the CT image subjected to primary image enhancement through a second neural network model, and improving the resolution of the CT image.
In this embodiment of the application, the article is detected by using the CT detector alone, and based on the characteristics of the CT detector, when the article is detected by using the CT detector alone, the following two problems need to be solved:
1. image deformation and distortion caused by large-pitch rotation CT reconstruction re-projection;
2. CT detector imaging quality is poor.
Step 2 addresses problem 1 and step 3 addresses problem 2. Deformation and distortion cannot be eliminated by improving resolution, but eliminating deformation and distortion can partially improve resolution; therefore deformation and distortion are eliminated first, and the resolution is improved afterwards.
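The ordering argument above can be sketched as a minimal two-stage pipeline. Both model functions are hypothetical stand-ins (an identity map and nearest-neighbour upsampling), not the patent's trained networks; only the composition order is taken from the text.

```python
import numpy as np

def repair_model(proj):
    # hypothetical stand-in for the first neural network:
    # removes deformation/distortion (modeled here as an identity map)
    return proj

def sr_model(img, scale=2):
    # hypothetical stand-in for the second neural network:
    # raises resolution (modeled here as nearest-neighbour upsampling)
    return np.kron(img, np.ones((scale, scale)))

def enhance(ct_projection):
    # order matters: repair distortion first, then raise resolution,
    # since upsampling cannot undo geometric distortion
    return sr_model(repair_model(ct_projection))

out = enhance(np.zeros((4, 4)))
print(out.shape)  # (8, 8)
```

Swapping the two stages would upsample a still-distorted image, which is exactly the failure mode the text rules out.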
Specifically, for problem 1, the root cause of deformation and distortion in CT projection images is that the CT detector rotates at a certain pitch to obtain the CT sinogram. Therefore, in this embodiment of the application, the first neural network takes the CT projection image as input and the repair image as output, i.e., the image the CT detector would capture from a fixed shooting direction identical to the projection direction of the CT projection image. In this way, the deformation and distortion in the CT projection image are repaired as far as possible.
Projections of different detection objects in different directions deform and distort to different extents; some deformations are easy to repair and some are not. If training uses only a single direction, accuracy differs markedly across types of detection objects, which ultimately lowers the accuracy of the whole model. For example, if, at the horizontal viewing angle, the repair-image accuracy for detection objects A, B and C is 60%, 80% and 90% respectively, then applying the model to a real scene would leave object A largely undetected.
In order to solve the above problems, for the same object, CT projection images in a plurality of projection directions and repair images corresponding to the respective CT projection images are collected, and all the collected projection directions and repair images are used as training samples.
Specifically, as shown in fig. 2, the package-passing direction is set as the z-direction, the horizontal viewing angle as the x-direction and the vertical viewing angle as the y-direction. In the prior art, DR detectors photograph the package from both the horizontal and vertical viewing angles. Therefore, in this embodiment of the application, the projection directions default to the horizontal and vertical viewing angles, and the repair images are shot from those same angles.
Preferably, other projection angles may also be provided based on the default projection direction, such as 30 ° -60 ° offset from the y-direction, 30 ° -60 ° offset from the x-direction, 30 ° -60 ° offset from the z-direction, and combinations of the above offset angles.
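The candidate projection directions can be enumerated as a sketch. The concrete offset values below are assumptions; the patent only specifies 30°-60° ranges about each axis and "combinations of the above offset angles".

```python
from itertools import product

# hypothetical sampling of the 30°-60° ranges named above;
# 0 means no offset about that axis
offsets = [0, 30, 45, 60]  # degrees

# every combination of x/y/z offsets except the all-zero default view
extra_directions = [c for c in product(offsets, repeat=3) if any(c)]
print(len(extra_directions))  # 63
```

Each tuple is one additional training projection direction beyond the default horizontal and vertical views.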
For other projection angles, the position of the CT detector in the security inspection channel is required to be adjusted when corresponding repair images are acquired, or the placement mode of the detection object is adjusted, or both are adjusted, so that the content displayed in the projection images is the same as the content observed when the CT detector is at a fixed angle. The specific process of adjustment is as follows:
determining an initial placement mode of a detection object used in training; determining a projection direction; and determining the shooting direction of the restored image and/or the placement mode of the detected object according to the projection direction by taking the initial placement mode as a reference.
After obtaining enough training samples in the above manner, for example 10000 training samples, the first neural network model is trained. In this embodiment of the application, the first neural network model is a generative adversarial network (GAN) model comprising two parts: one part takes the CT projection image as input and the repair image as output, and is called the generator; the other part takes the repair image produced by the generator and an undeformed, undistorted image as input, and outputs a score for the repair image, and is called the discriminator. The score characterizes the degree of deformation and distortion of the repair image. For example, for the same detection object, if image A is undeformed and image B is a repair image, the discriminator compares the image features of A and B to judge whether B is deformed: the more deformation or distortion features it finds, the lower the score, and conversely the higher.
During training, the discriminator is trained first, so that it can distinguish the repair image from the undeformed, undistorted image according to rule A, and its parameters are then fixed, so that it can only judge according to rule A. Thereafter the generator is trained so that it produces a repair image satisfying rule A. When the score output by the discriminator reaches a preset value, the repair image satisfies rule A, and the generator's parameters are fixed so that it can only generate repair images according to rule A. The discriminator is then trained again, so that it can distinguish the repair image from the undeformed, undistorted image according to rules A and B, and its parameters are fixed so that it can only judge according to rules A and B, and so on, iterating repeatedly until the discriminator can generate no new rules.
Specifically, 10000 reconstructed reprojection images X1, X2, ..., X10000 (the default projection image is the initial state of the repair image) and the corresponding 10000 fixed-angle CT images Y1, Y2, ..., Y10000 (images without deformation and distortion) are collected as sample data:
a. Training the discriminator D1: the data Y1, Y2, ..., Y10000 and X1, X2, ..., X10000 are input into D1, and D1 is adjusted by gradient descent to maximize its total score $V_{\max D1}$; the general formula of the total score of D1 is:

$$V_{D1} = \frac{1}{m}\sum_{i=1}^{m}\left[\log D1(Y_i) + \log\left(1 - D1(X_i)\right)\right]$$

where m represents the total number of samples.
b. Retraining generator G1: the data X1, X2, ..., X10000 are input into G1 to obtain X1', X2', ..., X10000'.
c. Keeping the parameters of D1 unchanged, the discriminator D1 takes X1', X2', ..., X10000' and the data Y1, Y2, ..., Y10000 as inputs and outputs the total score of G1; the general formula of the total score of G1 is:

$$V_{G1} = \frac{1}{m}\sum_{i=1}^{m}\log D1(X_i')$$

d. G1 is adjusted by gradient descent to maximize its total score $V_{\max G1}$, so that $V_{G1}$ reaches a preset value.
e. Keeping the parameters of G1 unchanged, D2 is trained with X1', X2', ..., X10000' and the data Y1, Y2, ..., Y10000 as inputs, and D2 is adjusted by gradient descent to maximize its total score $V_{\max D2}$; the general formula of the total score of D2 is:

$$V_{D2} = \frac{1}{m}\sum_{i=1}^{m}\left[\log D2(Y_i) + \log\left(1 - D2(X_i')\right)\right]$$
f. Steps a-e are executed repeatedly to obtain the N-th generation generator GN and discriminator DN; when the difference between $V_{DN}$ and $V_{D(N-1)}$ does not exceed a preset range, the current flow ends.
In use, data are input into GN; the corrected image obtained from GN participates in the subsequent calculation.
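The score functions and the step-f stopping rule can be sketched numerically. The patent publishes its score formulas only as images, so the standard GAN score forms used here are assumptions; `v_dn_per_round` stands in for the per-generation discriminator scores.

```python
import math

def v_discriminator(d_real, d_fake):
    # V_D = (1/m) * sum[log D(Y_i) + log(1 - D(X_i'))], as in steps a and e
    m = len(d_real)
    return sum(math.log(r) + math.log(1.0 - f)
               for r, f in zip(d_real, d_fake)) / m

def v_generator(d_fake):
    # V_G = (1/m) * sum log D(X_i'); the non-saturating form is an assumption
    return sum(math.log(f) for f in d_fake) / len(d_fake)

def stop_generation(v_dn_per_round, eps=1e-3):
    # step f: stop at generation N once |V_DN - V_D(N-1)| <= eps
    prev = None
    for n, v in enumerate(v_dn_per_round, start=1):
        if prev is not None and abs(v - prev) <= eps:
            return n
        prev = v
    return len(v_dn_per_round)

# a discriminator that cannot separate real from repaired images scores 0.5
print(round(v_discriminator([0.5, 0.5], [0.5, 0.5]), 4))  # -1.3863
print(stop_generation([-0.9, -1.1, -1.25, -1.2504]))      # 4
```

The equilibrium value -2·log 2 for an undecided discriminator matches the point at which, per step f, no new rules can be generated.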
In this embodiment of the application, when the first neural network model is actually used and a plurality of CT projection images of the same detection target are input, the image with the best enhancement effect among the primary enhancement results for the individual projection directions is selected as the input of the second neural network model.
In the embodiment of the application, after the image distortion and deformation are eliminated, the second neural network model takes a CT image after primary image enhancement as an input and takes a high-resolution image as an output. The resolution of the high resolution image is based on the resolution of a high resolution detector, which is a detector in the security field with a higher resolution than the CT detector. That is, the present application refers not only to the resolution of the DR detector but also to other detectors in the art, such as terahertz detectors, when improving the resolution of the CT image.
During training, the input of the second neural network is an image photographed when the photographing direction of the CT detector is fixed, and the output of the second neural network is a high-resolution image photographed by the high-resolution detector in the same photographing direction. Because the second neural network model is mainly used for improving the resolution and does not involve changing the shape of the image, when the second neural network model is trained, a plurality of projection angles are not required to be set, and only the same shooting angle of the input CT image and the output high-resolution image is required to be ensured.
The second neural network model is obtained by training a convolutional autoencoder.
The autoencoder comprises an encoder and a decoder, both trained using convolutional neural network algorithms.
The encoder comprises an input layer, convolution layers, a pooling layer and an output layer:
the input layer: the input data are the acquired CT projection images F1(a11, a12, a13, ...) of the plurality of projection directions;
the convolution layers: 4 convolution layers with 3×3 kernels and stride 2, used to extract features from the input-layer data;
the pooling layer: average pooling with stride 2;
the output layer: the results T1(w1, w2, w3, ...), ..., TN(w1, w2, w3, ...) output after feature extraction by the convolution and pooling layers, which provide the input to the decoder in the next step.
The decoder comprises an input layer, an unpooling layer, deconvolution layers and an output layer:
the input layer: the input data are the output results of the encoder;
the unpooling layer: inverse average pooling with stride 2;
the deconvolution layers: 4 deconvolution layers with 3×3 kernels and stride 2;
the output layer: the result G'1(a11, a12, a13, ...) output after feature restoration by the unpooling and deconvolution layers.
The loss function used in training is:

$$L = \frac{1}{m}\sum_{i=1}^{m}\left\lVert G'_i - G_i\right\rVert^2$$

where $G'_i$ is the image vector reconstructed by the encoder-decoder, and $G_i$ is the repair image corresponding to the projection image.
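The encoder's spatial bookkeeping and the reconstruction loss can be sketched as follows. The padding value of 1 is an assumption, since the patent fixes only the kernel size (3×3) and stride (2); the squared-error loss form is likewise the reconstruction assumed here.

```python
import numpy as np

def conv_out(size, kernel=3, stride=2, pad=1):
    # standard convolution output-size formula; padding of 1 is assumed
    return (size + 2 * pad - kernel) // stride + 1

def encoder_spatial_size(h):
    # four 3x3 stride-2 convolutions followed by stride-2 average pooling
    for _ in range(4):
        h = conv_out(h)
    return h // 2

def reconstruction_loss(g_prime, g):
    # L = (1/m) * sum ||G'_i - G_i||^2
    return float(np.mean([np.sum((a - b) ** 2) for a, b in zip(g_prime, g)]))

print(encoder_spatial_size(256))  # 8
imgs = [np.ones((4, 4))]
print(reconstruction_loss(imgs, imgs))  # 0.0
```

A 256-pixel side thus shrinks by a factor of 32 through the encoder; the decoder's four stride-2 deconvolutions plus unpooling restore it.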
When the detection target is metal, metal artifacts exist in the CT image, and in order to ensure the image processing quality, the metal artifacts in the CT image need to be removed. The following problems exist in removing metal artifacts:
First, the artifact areas of different metals differ, and so does the difficulty of accurately determining the artifact boundaries for image processing; in general, the larger the artifact area, the easier the boundary is to determine. Second, metal artifacts arise at the boundary between a metal region and other regions when the metal region attenuates the X-rays more strongly than the other regions. Different metals attenuate X-rays to different degrees, so the resulting artifacts differ slightly in image characteristics such as brightness and boundary smoothness, and these factors affect subsequent image processing to different extents. Finally, the metal parts of cases and tools are usually alloys or gold-plated, so the metal artifact is determined by multiple elements together, and the influence of different substances on the artifact needs to be distinguished from the chemical standpoint.
In view of the above technical problems, after step 3, a method for removing metal artifacts in CT images by using a neural network model is provided in an embodiment of the present application, including:
and S1, determining the equivalent metal atomic number of the metal region according to the CT image.
In the embodiment of the application, the equivalent atomic number of the metal is used for representing the atomic number average value of each pixel of the metal area in the CT image. Specifically, the atomic numbers of the pixels in the CT image are calculated respectively, the metal areas in the CT image are determined according to the atomic numbers, and then the average atomic number of each pixel in each metal area is determined to be the equivalent metal atomic number.
The equivalent metal atomic number is either a fixed value or lies within a fixed variation range. In particular, in the field of alloys, the composition of an alloy generally varies within a range; for example, part of the chemical composition of an ultra-high-strength ferritic steel is: 2-15% Ni, 2-10% Mn, 1-6% Al, 1.5-4% Cu and 8-12% Cr. In that case the equivalent metal atomic number lies within a fixed variation range. If the partial chemical composition is fixed, for example 2% Ni, 10% Mn, 6% Al, 4% Cu and 12% Cr, the equivalent metal atomic number is a fixed value.
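The per-region averaging in step S1 can be sketched as below. The threshold separating metal from non-metal pixels is a hypothetical choice (Z ≥ 13, aluminium and up); the patent only states that metal regions are determined from the per-pixel atomic numbers.

```python
import numpy as np

def equivalent_metal_atomic_number(z_map, metal_threshold=13.0):
    # mean atomic number over pixels classified as metal;
    # the Z >= 13 cutoff is a hypothetical classification choice
    metal = z_map >= metal_threshold
    if not metal.any():
        return None
    return float(z_map[metal].mean())

# per-pixel effective atomic numbers: organic background (~7.4) with
# an iron/copper region (Z = 26, 29)
z = np.array([[7.4, 7.4, 26.0],
              [7.4, 26.0, 29.0]])
print(equivalent_metal_atomic_number(z))  # 27.0
```

For an alloy whose composition varies within a range, repeated measurements would scatter this value inside a corresponding fixed range, matching the two cases described above.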
And S2, determining a third neural network model according to the equivalent metal atomic number.
In the embodiment of the present application, the third neural network model takes an image containing an artifact of an equivalent metal atomic number as an input and takes an image not containing an artifact of an equivalent metal atomic number as an output. The third neural network model is generally classified into copper-iron-nickel-chromium and alloys, titanium and titanium alloys, and platinum based on the equivalent metal atomic number.
In this embodiment of the application, step 3 yields a high-definition CT image, so when metal artifacts are removed, the artifact boundary and the equivalent metal atomic number are determined more accurately, improving the artifact removal effect. The third neural network model may be a convolutional autoencoder or a variational autoencoder (VAE).
The training method of the third neural network model comprises the following steps:
acquiring an artifact-free CT slice image according to the preset metal atomic number or metal atomic number range, wherein the CT slice image contains at least one metal region corresponding to the preset atomic number or atomic number range;
adding metal artifacts to the CT slice image according to the preset atomic number;
and taking the artifact-free CT slice image and the CT slice image with added metal artifacts as training samples of the corresponding neural network model.
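The pair-construction steps above can be sketched as follows. The artifact synthesis itself is purely illustrative (random streak noise scaled by the atomic number); the patent does not specify how artifacts are rendered, only that corrupted/clean pairs form the training samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_metal_artifact(clean_slice, z_eff, strength=0.05):
    # hypothetical artifact synthesis: noise amplitude scales with the
    # equivalent atomic number (heavier metals attenuate X-rays more)
    streaks = rng.standard_normal(clean_slice.shape) * strength * z_eff
    return clean_slice + streaks

def make_training_pair(clean_slice, z_eff):
    # (input containing the artifact, artifact-free target)
    return add_metal_artifact(clean_slice, z_eff), clean_slice

x, y = make_training_pair(np.zeros((8, 8)), z_eff=26.0)
print(x.shape == y.shape)  # True
```

One such pair per preset atomic number (or range) yields the per-metal training sets that the class-specific third neural network models are trained on.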
The embodiment of the application provides a CT image enhancement device based on DR images, which comprises: an acquisition module 301, a first image enhancement module 302 and a second image enhancement module 303;
the acquisition module 301 is configured to acquire a CT projection image of a detection target;
the first image enhancement module 302 is configured to perform primary image enhancement on the CT projection image through the first neural network model, eliminating deformation and distortion of the CT projection image;
the second image enhancement module 303 is configured to perform secondary image enhancement on the CT image after the primary image enhancement through the second neural network model, and improve the resolution of the CT image.
In the embodiment of the application, the first neural network model takes a CT projection image as an input and takes a repair image as an output; the repair image is an image shot when the shooting direction of the CT detector is fixed, and the fixed shooting direction is the same as the projection direction of the CT projection image.
In the embodiment of the application, the second neural network model takes a CT image after primary image enhancement as input and takes a high-resolution image as output; the resolution of the high resolution image is based on the resolution of a high resolution detector, which is an X-ray detector in the art having a higher resolution than the CT detector.
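A minimal structural sketch of the three-module device follows. The trained networks are represented by plain callables; the `acquire`, `fix_distortion`, and `super_resolve` stand-ins are illustrative assumptions, not the patent's actual models.

```python
import numpy as np

class CTEnhancementDevice:
    """Structural sketch: acquisition module (301), first image enhancement
    module (302), second image enhancement module (303)."""

    def __init__(self, acquire, fix_distortion, super_resolve):
        self.acquire = acquire                # module 301: obtain CT projection
        self.fix_distortion = fix_distortion  # module 302: first neural network
        self.super_resolve = super_resolve    # module 303: second neural network

    def enhance(self, target):
        projection = self.acquire(target)            # CT projection image
        repaired = self.fix_distortion(projection)   # primary enhancement
        return self.super_resolve(repaired)          # secondary enhancement

# Illustrative stand-ins only (assumptions, not the trained models):
device = CTEnhancementDevice(
    acquire=lambda target: np.zeros((64, 64)),                # fake scanner
    fix_distortion=lambda img: img,                           # identity placeholder
    super_resolve=lambda img: np.kron(img, np.ones((2, 2))),  # naive 2x upsample
)
enhanced = device.enhance(target="package")
print(enhanced.shape)  # (128, 128)
```

The pipeline order mirrors the device description: acquisition, then distortion repair, then resolution enhancement.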
To illustrate the feasibility of the above scheme, the following examples are given:
example 1
The embodiment of the application obtains the first neural network model and the second neural network model using the training methods described above. Data for 1000 packages were then collected and four groups of experiments were performed:
First group: identifying the 1000 packages using DR images and a common dangerous-goods recognition model;
In this application embodiment, usable dangerous-goods recognition models include YOLOv5, R-CNN, and SSD.
Second group: identifying the 1000 packages using untreated CT projection images and a common dangerous-goods recognition model;
Third group: identifying the 1000 packages using untreated CT projection images and the first neural network model of the application;
Fourth group: identifying the 1000 packages using untreated CT projection images, the first neural network model, and the second neural network model of the application.
The positive detection rate was used to evaluate the recognition results of the four groups in the examples of the present application, as shown in Table 1:
table 1 positive check rate of each group
First group of Second group of Third group of Fourth group
Positive rate of detection 98.1741% 13.6815% 78.2381% 97.8362%
As can be seen from Table 1, the first neural network model eliminates deformation and distortion of the image, raising the positive detection rate by more than five times (from 13.68% in the second group to 78.24% in the third). The third group's rate is still lower than that of the first and fourth groups because the resolution remains low. With both the first and second neural network models applied (the technical scheme of the application), the positive detection rate rises above 97%, essentially equal to that of the first group. The technical scheme of the application therefore solves problem 1 and problem 2, and the CT detector can be used independently for security inspection.
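For illustration, the positive detection rate can be computed as the fraction of correctly identified packages; the exact definition used in the patent is not stated, so this formulation is an assumption.

```python
def positive_detection_rate(predictions, ground_truth):
    """Fraction of packages whose predicted label matches the ground truth.
    Assumed definition: correct identifications / total packages."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(predictions)

# Third-group-style result: 782 of 1000 packages correctly identified
rate = positive_detection_rate([1] * 782 + [0] * 218, [1] * 1000)
print(f"{rate:.2%}")  # 78.20%
```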
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (10)

1. A method of CT image enhancement, comprising:
acquiring a CT projection image of a detection target;
performing primary image enhancement on the CT projection image through a first neural network model so as to eliminate deformation and distortion of the CT projection image;
and performing secondary image enhancement on the CT image subjected to primary image enhancement through a second neural network model so as to improve the resolution of the CT image.
2. The method of claim 1, wherein
the performing image enhancement on the CT projection image once through the first neural network model to eliminate deformation and distortion of the CT projection image includes:
the first neural network model takes the CT projection image as input and takes the repair image as output;
the repair image is an image shot when the shooting direction of the CT detector is fixed, and the fixed shooting direction is the same as the projection direction of the CT projection image.
3. The method of claim 2, wherein
the training sample of the first neural network model comprises: a projection image of a plurality of projection directions corresponding to the same detection object and a repair image corresponding to each projection image.
4. The method of claim 3, wherein
in the training process of the first neural network model, the method for acquiring the repair image comprises the following steps:
determining an initial placement mode of a detection object used in training;
determining a projection direction of the CT projection image;
taking the initial placement mode as a reference, and determining the shooting direction of the repair image and/or the placement mode of the detection object according to the projection direction;
and obtaining a repair image corresponding to the projection direction through the CT detector according to the shooting direction of the repair image and/or the placement mode of the detection object.
5. The method of claim 1, wherein
the second image enhancement is performed on the CT image after the first image enhancement through the second neural network model, so as to improve the resolution of the CT image, and the method comprises the following steps:
the second neural network model takes the CT image after the primary image enhancement as input and takes a high-resolution image as output;
the resolution of the high-resolution image is based on the resolution of a high-resolution detector, which is a detector in the security field with higher resolution than a CT detector.
6. The method of claim 1, wherein
the acquiring the CT projection image of the detection target comprises the following steps:
determining a projection direction of the CT projection image;
and acquiring the CT projection image according to the projection direction.
7. The method according to any one of claims 1 to 6, wherein the detection target is a metal, the method further comprising:
determining equivalent metal atomic numbers of the metal areas according to the CT images, wherein the equivalent metal atomic numbers are used for representing atomic number average values of pixels of the metal areas in the CT images;
determining a third neural network model according to the equivalent metal atomic number, wherein the third neural network model takes an image containing the artifact of the equivalent metal atomic number as an input and takes an image not containing the artifact of the equivalent metal atomic number as an output;
and removing metal artifacts in the CT image through the third neural network model.
8. A CT image enhancement apparatus, comprising: the device comprises an acquisition module, a first image enhancement module and a second image enhancement module;
the acquisition module is used for acquiring CT projection images of the detection targets;
the first image enhancement module is used for carrying out primary image enhancement on the CT projection image through a first neural network model, and eliminating deformation and distortion of the CT projection image;
the second image enhancement module is used for carrying out secondary image enhancement on the CT image after primary image enhancement through a second neural network model, and improving the resolution ratio of the CT image.
9. The apparatus of claim 8, wherein
the first neural network model takes the CT projection image as input and takes the repair image as output;
the repair image is an image shot when the shooting direction of the CT detector is fixed, and the fixed shooting direction is the same as the projection direction of the CT projection image.
10. The apparatus of claim 9, wherein
the second neural network model takes the CT image after the primary image enhancement as input and takes a high-resolution image as output;
the resolution of the high resolution image is based on the resolution of a high resolution detector, which is an X-ray detector of higher resolution than a CT detector in the art.
CN202111584983.4A 2021-12-22 2021-12-22 CT image enhancement method and device Pending CN116342396A (en)

Publication: CN116342396A, published 2023-06-27.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination