WO2019128797A1 - Image processing method and apparatus, and computer-readable storage medium - Google Patents

Image processing method and apparatus, and computer-readable storage medium

Info

Publication number
WO2019128797A1
WO2019128797A1 · PCT/CN2018/122038 · CN2018122038W
Authority
WO
WIPO (PCT)
Prior art keywords
image
undersampled
original image
detectors
original
Prior art date
Application number
PCT/CN2018/122038
Other languages
English (en)
French (fr)
Inventor
王琪
刘必成
徐光明
Original Assignee
同方威视技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 同方威视技术股份有限公司
Publication of WO2019128797A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/04Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image

Definitions

  • the present disclosure relates to an image processing method, apparatus, and computer readable storage medium.
  • X-ray imaging technology is the most basic and widely used technology in the field of contraband inspection. It is also the most widely used inspection technology for containers or vehicles.
  • an image processing method, comprising: acquiring a first undersampled image to be processed; and reconstructing the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.
  • the method further comprises: downsampling the second original image to obtain the second undersampled image; and training the machine learning model with the second undersampled image and the second original image as training samples to obtain the mapping relationship.
  • downsampling the second original image to obtain the second undersampled image comprises: downsampling the second original image to obtain a second downsampled image; and upsampling the second downsampled image to obtain a second upsampled image of the same size as the second original image, the second upsampled image being used as the second undersampled image (see the sketch below).
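  • The patent itself gives no code; the following is a minimal Python sketch of this downsample-then-upsample preparation step, assuming OpenCV (cv2) is available and using an illustrative 2× undersampling factor along the scan direction. The function name and factor are assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def make_undersampled(original: np.ndarray, factor: int = 2) -> np.ndarray:
    """Simulate a second undersampled image from a normally sampled original image:
    shrink along the scan (column) axis, then upsample back to the original size so
    the training pair (undersampled input, original target) match spatially."""
    h, w = original.shape[:2]
    downsampled = cv2.resize(original, (w // factor, h), interpolation=cv2.INTER_AREA)
    return cv2.resize(downsampled, (w, h), interpolation=cv2.INTER_CUBIC)
```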
  • reconstructing the first undersampled image into a corresponding first original image according to the mapping relationship between undersampled images and original images comprises: upsampling the first undersampled image to obtain a first upsampled image of the same size as the first original image, the first upsampled image being used as a third undersampled image; and reconstructing the third undersampled image into a corresponding third original image according to the mapping relationship, the third original image being used as the first original image.
  • training the machine learning model with the second undersampled image and the second original image as training samples comprises: dividing the second undersampled image into a plurality of undersampled image blocks; dividing the second original image into a plurality of original image blocks, one original image block corresponding to one undersampled image block, each original image block having the same size as the corresponding undersampled image block; and training the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
  • training the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples comprises: processing each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; adding the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimizing the machine learning model according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
  • downsampling the second original image comprises: augmenting the second original image to obtain at least one augmented image; and downsampling the second original image and the augmented image to obtain a plurality of the second undersampled images (an illustrative augmentation sketch follows).
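  • Augmentation is described only in general terms (flipping, rotation, brightness adjustment, scaling); the following Python sketch shows one possible combination of such operations. All parameter choices here are illustrative assumptions.

```python
import numpy as np

def augment(original: np.ndarray, angles=(90, 270), scales=(0.8, 1.2)) -> list:
    """Illustrative augmentation: flips, a brightness change (assuming an 8-bit image),
    rotations, and a crude nearest-neighbour rescaling. These are examples, not the
    patent's fixed choices."""
    augmented = [np.fliplr(original), np.flipud(original), np.clip(original * 1.1, 0, 255)]
    for a in angles:
        augmented.append(np.rot90(original, k=a // 90))
    h, w = original.shape[:2]
    for s in scales:
        rows = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
        cols = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
        augmented.append(original[np.ix_(rows, cols)])
    return augmented
```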
  • the first undersampled image is obtained as follows: when a first detected object moves in a first direction, rays emitted by a first emitter penetrate a cross section of the first detected object along a second direction and are then received by a first set of detectors disposed opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction and the first set of detectors includes one or more rows of first detectors.
  • the second original image is obtained as follows: when a second detected object moves in a third direction, rays emitted by a second emitter penetrate a cross section of the second detected object along a fourth direction and are then received by a second set of detectors disposed opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction and the second set of detectors includes one or more rows of second detectors.
  • the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction; the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
  • the first set of detectors comprises M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors comprises M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2; wherein 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
  • an image processing apparatus, comprising: an acquisition module configured to acquire a first undersampled image to be processed; and a reconstruction module configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.
  • the apparatus further includes: a downsampling module configured to downsample the second original image to obtain the second undersampled image; and a training module configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain the mapping relationship.
  • a downsampling module configured to downsample the second original image to obtain the second undersampled image
  • a training module configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain the mapping relationship.
  • the downsampling module is configured to downsample the second original image to obtain a second downsampled image; the apparatus further includes a second upsampling module configured to upsample the second downsampled image to obtain a second upsampled image of the same size as the second original image, the second upsampled image being used as the second undersampled image.
  • the apparatus further includes: a first upsampling module, configured to upsample the first undersampled image to obtain a first upsampled image having the same size as the first original image
  • the first upsampled image is used as a third undersampled image
  • the reconstruction module is configured to reconstruct the third undersampled image into a corresponding third original image according to the mapping relationship, and to use the third original image as the first original image.
  • the training module is configured to divide the second undersampled image into a plurality of undersampled image blocks; divide the second original image into a plurality of original image blocks, one original image block corresponding to one undersampled image block, each original image block having the same size as the corresponding undersampled image block; and train the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
  • the training module is configured to process each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; add the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimize the machine learning model according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
  • the downsampling module is configured to augment the second original image to obtain at least one augmented image, and to downsample the second original image and the augmented image to obtain a plurality of the second undersampled images.
  • the first undersampled image is obtained as follows: when a first detected object moves in a first direction, rays emitted by a first emitter penetrate a cross section of the first detected object along a second direction and are then received by a first set of detectors disposed opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction and the first set of detectors includes one or more rows of first detectors.
  • the second original image is obtained as follows: when a second detected object moves in a third direction, rays emitted by a second emitter penetrate a cross section of the second detected object along a fourth direction and are then received by a second set of detectors disposed opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction and the second set of detectors includes one or more rows of second detectors.
  • the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction; the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
  • the first set of detectors comprises M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors comprises M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2; wherein 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
  • an image processing apparatus, comprising: a memory; and a processor coupled to the memory, the processor being configured to perform the method described in any one of the above embodiments based on instructions stored in the memory.
  • a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement the method described in any one of the above embodiments.
  • Figure 1 is a schematic illustration of a scene for generating an undersampled image
  • FIG. 2 is a flow diagram of an image processing method in accordance with some embodiments of the present disclosure.
  • FIG. 3 is a schematic flow chart of an image processing method according to further embodiments of the present disclosure.
  • FIG. 4 is a flow diagram of training a machine learning model in accordance with some embodiments of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to further embodiments of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to further embodiments of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present disclosure.
  • a particular component when it is described that a particular component is located between the first component and the second component, there may be intervening components between the particular component and the first component or the second component, or there may be no intervening components.
  • when it is described that a particular component is connected to other components, that particular component can be directly connected to the other components without intervening components, or it can be indirectly connected to the other components with intervening components.
  • an X-ray source (such as an electron linac) alternately produces X-rays of two different energies at high frequencies, referred to as high-energy X-rays and low-energy X-rays, respectively.
  • the two X-rays alternately pass through the slit collimator to form a fan-shaped X-ray beam, respectively.
  • the detectors located on the other side of the detected object sequentially receive and generate image data.
  • the fan-shaped X-ray beam sequentially scans a series of cross-sections of the object to be detected, thereby forming a high-energy and low-energy X-ray transmission image of the entire object to be detected.
  • the velocity v of the detected object and the frequency f at which the X-ray source emits X-rays should satisfy the following formula:
  • N is the number of rows of detectors
  • pitch is the distance between detectors in adjacent rows
  • D is the distance from the X-ray source to the object being detected
  • T is the distance from the object to be detected to the detector.
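  • The formula itself is embedded as an image in the original filing and does not survive in this text. From the variables defined above and the usual fan-beam coverage argument, a plausible (not verbatim) reconstruction is given below: the object advance per pulse, v/f, must not exceed the detector coverage N·pitch projected from the detector plane (at distance D+T from the source) back onto the object plane (at distance D).

```latex
% Plausible reconstruction of the sampling constraint; an assumption, not the patent's exact formula.
v \le f \cdot N \cdot \mathrm{pitch} \cdot \frac{D}{D + T}
```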
  • FIG. 1 is a schematic illustration of a scene in which an undersampled image is produced.
  • time t1, time t2, time t3, and time t4 are the X-ray emission times; between adjacent emission times (for example, time t1 and time t2), a part of the detected object 101 is not scanned by the X-rays, resulting in an undersampled image.
  • An undersampled image not only loses information about the object being inspected, but also does not match the actual shape of the item in the object being detected.
  • for example, the detected object has round wheels, but the wheels in the resulting undersampled image are elliptical.
  • the quick-inspection system is one type of dual-energy X-ray inspection system.
  • during quick inspection, the quick-inspection system does not move and the detected object (such as a vehicle) passes directly through the passage of the quick-inspection system, so the throughput of detected objects is high.
  • because the detected object moves quickly, the X-ray source must have a very high X-ray emission frequency to avoid producing an undersampled image.
  • constrained by hardware, the X-ray emission frequency of the X-ray source cannot be increased without limit, which limits the moving speed of the detected object and reduces the throughput of detected objects.
  • FIG. 2 is a schematic flow chart of an image processing method according to some embodiments of the present disclosure.
  • a first undersampled image to be processed is acquired.
  • the first undersampled image may be obtained as follows: when the first detected object moves in the first direction, the radiation emitted by the first emitter penetrates the first detected object along the second direction After the cross section, it is received by a first set of detectors disposed opposite the first emitter to generate a first undersampled image.
  • the first direction and the second direction may be substantially perpendicular.
  • the first set of detectors can include a row or a plurality of rows of first detectors arranged in a first direction.
  • the first detected object may include, but is not limited to, a container, or a vehicle carrying a container or other items.
  • the rays may be, for example, X-rays, visible rays, infrared rays, ultraviolet rays, or the like.
  • the moving first detected object may be illuminated by single-energy X-ray or dual-energy X-ray to obtain a first undersampled image.
  • however, the present disclosure is not limited thereto; the first undersampled image may also be obtained by other means.
  • the first undersampled image is reconstructed into a corresponding first original image based on a mapping relationship between the undersampled image and the normally sampled original image.
  • the mapping relationship is obtained by training the machine learning model with the second undersampled image and its corresponding normally sampled second original image as training samples.
  • the second original image may be obtained according to the following manner: when the second detected object moves in the third direction, the radiation emitted by the second emitter penetrates the second detected object along the fourth direction After the cross section, it is received by a second set of detectors disposed opposite the second emitter to generate a second original image.
  • the second undersampled image can be obtained by downsampling the second original image.
  • the third direction and the fourth direction may be substantially perpendicular.
  • the second set of detectors may include one or more rows of second detectors arranged in a third direction.
  • the first set of detectors may include M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors may include M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2. Here, 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
  • a machine learning model such as a dictionary learning model, a BP (Back Propagation) neural network model, or a convolutional neural network model may be trained on one or more second undersampled images and the corresponding one or more second original images to obtain the mapping relationship between undersampled images and normally sampled original images.
  • after training, the machine learning model may reconstruct any input undersampled image according to the learned mapping relationship and output the normally sampled original image corresponding to that undersampled image. Therefore, after the first undersampled image is input into the machine learning model, the trained model may reconstruct the first undersampled image according to the learned mapping relationship and output the first original image corresponding to the first undersampled image.
  • the first undersampled image to be processed can be reconstructed into a corresponding first original image by using a mapping relationship between the trained undersampled image and the normally sampled original image.
  • the first original image obtained according to the method of the above embodiment is more accurate than the conventional interpolation method.
  • FIG. 3 is a schematic flow chart of an image processing method according to further embodiments of the present disclosure.
  • the normally sampled second original image is downsampled to obtain a second undersampled image.
  • the normally sampled second original image can be downsampled to obtain a second downsampled image.
  • the second downsampled image is directly used as the second undersampled image.
  • alternatively, the normally sampled second original image may be downsampled to obtain a second downsampled image, and the second downsampled image may then be upsampled to obtain a second upsampled image of the same size as the second original image.
  • in that case, the second upsampled image is used as the second undersampled image.
  • although the size of the second upsampled image is the same as that of the second original image, the second upsampled image is obtained by downsampling the second original image and then upsampling it; in this sense, the second upsampled image is actually an undersampled image corresponding to the second original image, so it can be used as the second undersampled image.
  • the second original image may first be augmented to obtain at least one augmented image; the second original image and the augmented image are then downsampled to obtain a plurality of second downsampled images, thereby increasing the number of training samples.
  • at least one augmentation operation, such as flipping, rotation, brightness adjustment, or scaling, may be performed on the second original image to obtain the at least one augmented image.
  • the second original image can be rotated by a preset angle, such as 90 degrees, 270 degrees, and the like.
  • the second original image may be scaled by an algorithm such as bicubic interpolation; for example, the second downsampled image may be 0.6, 0.7, 0.8, 0.9, or 1.2 times the size of the second original image, and so on.
  • the size of the second downsampled image in the third direction may be smaller than the size of the second original image in the third direction.
  • the size of the second downsampled image in the fourth direction may be equal to the size of the second original image in the fourth direction.
  • the size of the second downsampled image in the fourth direction may also be smaller than the size of the second original image in the fourth direction.
  • assume the size of the second original image is m (rows) pixels × kn (columns) pixels, where k is the number of rows of second detectors.
  • when k is 1, for example, the even-numbered columns of pixels in the second original image may be deleted and the odd-numbered columns retained, thereby obtaining the second downsampled image; alternatively, the odd-numbered columns may be deleted and the even-numbered columns retained.
  • when k is greater than 1, for example, k columns of pixels in the second original image may be deleted every other k columns, thereby obtaining the second downsampled image (see the column-deletion sketch below).
  • as an example, where n is an even number, the size of the second downsampled image may be m pixels × n/2 pixels.
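  • A minimal Python/NumPy sketch of this column-wise downsampling follows; the function name, the keep_even flag, and the example sizes are illustrative assumptions.

```python
import numpy as np

def drop_alternate_columns(original: np.ndarray, k: int = 1, keep_even: bool = False) -> np.ndarray:
    """Column-wise downsampling of an m x kn image: for k = 1 keep every other column;
    for k > 1 keep alternating groups of k columns, mimicking data from fewer detector rows."""
    width = original.shape[1]
    keep = [c for c in range(width) if (c // k) % 2 == (1 if keep_even else 0)]
    return original[:, keep]

# Example: a 4 x 8 image with k = 1 is reduced to 4 x 4 columns.
image = np.arange(32).reshape(4, 8)
print(drop_alternate_columns(image).shape)  # (4, 4)
```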
  • the machine learning model is trained with the second undersampled image and the second original image as training samples to obtain a mapping relationship between the undersampled image and the normally sampled original image.
  • a first undersampled image to be processed is acquired.
  • the size of the first undersampled image in the first direction may be smaller than the size of the first original image in the first direction.
  • the first undersampled image is reconstructed into a corresponding first original image according to a mapping relationship between the undersampled image and the normally sampled original image.
  • the first undersampled image can be upsampled to obtain a first upsampled image of the same size as the first original image, with the first upsampled image as the third undersampled image. Then, the third undersampled image is reconstructed into a corresponding third original image according to the mapping relationship, and the third original image is taken as the first original image.
  • the first undersampled image may be upsampled by bicubic interpolation, nearest neighbor interpolation, bilinear interpolation, image edge based interpolation, etc. to obtain a first upsampled image.
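  • As a sketch of this reconstruction step, assuming OpenCV for the bicubic upsampling, the undersampled image is first resized to the normal size and then passed through the trained mapping (for example, the residual CNN sketched further below). The function and argument names are illustrative.

```python
import cv2
import numpy as np

def reconstruct(first_undersampled: np.ndarray, model, target_hw) -> np.ndarray:
    """Upsample the first undersampled image to the size of the first original image
    (bicubic interpolation), treat it as the third undersampled image, and let the
    trained model map it to the corresponding original image."""
    h, w = target_hw
    third_undersampled = cv2.resize(first_undersampled, (w, h), interpolation=cv2.INTER_CUBIC)
    return model(third_undersampled)
```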
  • the machine learning model is first trained on the second undersampled image and the second original image, and the learned mapping relationship can then be used to reconstruct the first undersampled image to be processed into the corresponding first original image.
  • FIG. 4 is a flow diagram of training a machine learning model in accordance with some embodiments of the present disclosure.
  • the second undersampled image is segmented into a plurality of undersampled image blocks.
  • the second upsampled image obtained above is taken as the second undersampled image.
  • the second undersampled image can be segmented into a plurality of undersampled image blocks of the same size.
  • the second original image is segmented into a plurality of original image blocks.
  • one original image block corresponds to one undersampled image block, and each original image block has the same size as the corresponding undersampled image block.
  • the second undersampled image may be segmented into a plurality of undersampled image blocks of size m (rows) pixels × m (columns) pixels, and the second original image corresponding to the second undersampled image may be segmented into a plurality of original image blocks of the same size.
  • the machine learning model is trained with a plurality of undersampled image blocks and a plurality of original image blocks as training samples.
  • a plurality of undersampled image blocks and a plurality of original image blocks may be divided into sets of image blocks, each set of image blocks may include a predetermined number of undersampled image blocks and corresponding original image blocks.
  • a set of image blocks in a plurality of sets of image blocks can be used as training samples.
  • 64 undersampled image blocks and corresponding 64 original image blocks may be grouped together.
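  • The block segmentation and grouping described above can be sketched in Python/NumPy as follows; the non-overlapping tiling, the random grouping, and the default group size of 64 follow the description, while the function names are assumptions.

```python
import numpy as np

def to_blocks(image: np.ndarray, m: int) -> np.ndarray:
    """Cut an image into non-overlapping m x m blocks (edge remainders are dropped)."""
    h, w = image.shape[:2]
    blocks = [image[r:r + m, c:c + m]
              for r in range(0, h - m + 1, m)
              for c in range(0, w - m + 1, m)]
    return np.stack(blocks)

def make_groups(undersampled: np.ndarray, original: np.ndarray, m: int, group: int = 64):
    """Pair corresponding undersampled/original blocks and yield groups of 64 pairs."""
    x, y = to_blocks(undersampled, m), to_blocks(original, m)
    order = np.random.permutation(len(x))
    for i in range(0, len(order) - group + 1, group):
        idx = order[i:i + group]
        yield x[idx], y[idx]
```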
  • the following uses the convolutional neural network model as an example to introduce some implementations of training machine learning models.
  • each of the plurality of undersampled image blocks is processed to determine a difference image block between each of the undersampled image blocks and the corresponding original image block.
  • a convolutional neural network model may include 20 convolutional layers with a non-linear layer between adjacent convolutional layers, such as a ReLu (Rectified Linear Unit) activation function.
  • the output of each convolutional layer is nonlinearly processed by the nonlinear layer as the input to the next convolutional layer.
  • let the size of the undersampled image block input to the first convolutional layer be n × n.
  • the first convolutional layer uses 64 convolution kernels, each of which has a size of 3 x 3 x 1 and a step size of one.
  • the first convolutional layer outputs a feature map of size n ⁇ n ⁇ 64.
  • the second to the 19th convolutional layers each use 64 convolution kernels, each of which has a size of 3 ⁇ 3 ⁇ 64 and a step size of 1.
  • Each of the second to the 19th convolutional layers outputs a feature map having a size of n ⁇ n ⁇ 64.
  • the 20th convolutional layer uses a convolution kernel of size 3 ⁇ 3 ⁇ 64, and the size of the output feature map is n ⁇ n, which is the same size as the input undersampled image block.
  • by convolving each undersampled image block through the 20 convolutional layers, the difference image block between each undersampled image block and the corresponding original image block can be obtained, namely, the feature map output by the 20th convolutional layer.
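  • The description does not provide code; the following PyTorch sketch of the 20-layer residual network reflects what is stated (64 kernels of 3×3 per layer, ReLU between layers, a single-channel output added back to the input). A padding of 1 is assumed so the n × n block size is preserved, which the text implies but does not state explicitly.

```python
import torch
import torch.nn as nn

class ResidualCNN(nn.Module):
    """VDSR-style sketch of the described model: 20 convolutional layers with 3x3 kernels
    and ReLU activations; the last layer predicts the difference (residual) image block."""
    def __init__(self, layers: int = 20, channels: int = 64):
        super().__init__()
        body = [nn.Conv2d(1, channels, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            body += [nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
                     nn.ReLU(inplace=True)]
        body.append(nn.Conv2d(channels, 1, kernel_size=3, stride=1, padding=1))
        self.body = nn.Sequential(*body)

    def forward(self, x):               # x: undersampled block of shape (batch, 1, n, n)
        residual = self.body(x)         # predicted difference image block
        return x + residual             # predicted image block
```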
  • the difference image block is added to the corresponding undersampled image block to obtain a predicted image block.
  • the feature map output by the 20th convolutional layer may be added to the undersampled image block input to the first convolutional layer to obtain the predicted image block corresponding to that undersampled image block.
  • the machine learning model is optimized according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
  • the weights of the convolution kernels can be initialized with MSRA initialization, and the bias terms can all be initialized to zero.
  • Each training session can randomly select a set of image blocks as training samples for training.
  • the training objective is to minimize the value of the loss function Loss represented by the following formula:
  • n is the number of original image blocks in a group of image blocks
  • y_i is an original image block
  • f(x_i) is the corresponding predicted image block
  • λ is the regularization coefficient
  • w denotes the weights of the convolution kernels
  • λ can be, for example, 1 × 10^-4.
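  • As with the sampling constraint above, the loss formula itself is an image in the filing; given the variables just listed, a plausible (not verbatim) reconstruction is:

```latex
% Plausible form only: mean squared error over the n blocks of a group,
% plus an L2 penalty on the convolution-kernel weights w with coefficient lambda.
\mathrm{Loss} = \frac{1}{n}\sum_{i=1}^{n} \left\lVert y_i - f(x_i) \right\rVert^2 + \lambda \left\lVert w \right\rVert^2
```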
  • the optimization of the model can be done using the Adam algorithm.
  • an independent learning rate can be adaptively set for the weight terms and the bias terms by computing the first-moment and second-moment estimates of the gradient.
  • compared with stochastic gradient descent, the Adam algorithm makes the loss function converge faster, and there is no need to set a clip_gradient threshold to prevent gradient explosion.
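  • A minimal PyTorch training step consistent with the description is sketched below, using the ResidualCNN sketch above. The learning rate is an assumption, and weight_decay is used here as an approximation of the λ‖w‖² term (it also penalizes biases, unlike the formula); PyTorch's default Kaiming initialization of Conv2d weights roughly corresponds to the MSRA initialization mentioned above.

```python
import torch

model = ResidualCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # lambda ~ 1e-4
criterion = torch.nn.MSELoss()

def train_step(x_blocks: torch.Tensor, y_blocks: torch.Tensor) -> float:
    """One iteration on a randomly selected group of (undersampled, original) block pairs."""
    optimizer.zero_grad()
    loss = criterion(model(x_blocks), y_blocks)  # data term of the loss
    loss.backward()
    optimizer.step()
    return loss.item()
```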
  • the second undersampled image is divided into a plurality of undersampled image blocks and the second original image is divided into a plurality of original image blocks, and the machine learning model is then trained with the plurality of undersampled image blocks and the plurality of original image blocks as training samples, so that the learned mapping relationship between undersampled images and original images is more accurate.
  • FIG. 5 is a block diagram showing an image processing apparatus according to some embodiments of the present disclosure. As shown in FIG. 5, the apparatus of this embodiment includes an acquisition module 501 and a reconstruction module 502.
  • the obtaining module 501 is configured to acquire a first undersampled image to be processed.
  • the first undersampled image may be obtained as follows: when the first detected object moves in the first direction, the radiation emitted by the first emitter penetrates the first detected object along the second direction After the cross section, it is received by a first set of detectors disposed opposite the first emitter to generate a first undersampled image.
  • the first direction and the second direction may be substantially perpendicular.
  • the size of the first undersampled image in the first direction may be smaller than the size of the corresponding first original image in the first direction.
  • the reconstruction module 502 is configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between the undersampled image and the normally sampled original image.
  • the mapping relationship here is obtained by training the machine learning model with the second undersampled image and its corresponding normally sampled second original image as training samples.
  • the second original image may be obtained according to the following manner: when the second detected object moves in the third direction, the radiation emitted by the second emitter penetrates the second detected object along the fourth direction After the cross section, it is received by a second set of detectors disposed opposite the second emitter to generate a second original image.
  • the third direction and the fourth direction may be substantially perpendicular.
  • the first set of detectors may include one or more rows of first detectors, and the second set of detectors may include one or more rows of second detectors.
  • the first set of detectors may include M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors may include M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2. Here, 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
  • the apparatus of the above embodiment may reconstruct the first undersampled image to be processed into a corresponding first original image by using a mapping relationship between the trained undersampled image and the normally sampled original image.
  • the first original image obtained by the apparatus according to the above embodiment is more accurate than the conventional interpolation method.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to further embodiments of the present disclosure. As shown in FIG. 6, the apparatus of this embodiment includes a downsampling module 601, a training module 602, an obtaining module 603, and a reconstruction module 604.
  • the downsampling module 601 is configured to downsample the normally sampled second original image to obtain a second undersampled image. As some specific implementations, the downsampling module 601 can be configured to augment the second original image to obtain at least one augmented image, and to downsample the second original image and the augmented image to obtain a plurality of second undersampled images.
  • the training module 602 is configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain a mapping relationship between the undersampled image and the normally sampled original image.
  • the obtaining module 603 is configured to acquire a first undersampled image to be processed.
  • the reconstruction module 604 is configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between the undersampled image and the normally sampled original image.
  • the apparatus of the above embodiment may first train the machine model according to the second undersampled image and the second original image, and then reconstruct the first undersampled image to be processed into a corresponding first original image by using the trained mapping relationship.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to further embodiments of the present disclosure.
  • the apparatus of this embodiment includes a downsampling module 701, a second upsampling module 702, a training module 703, an obtaining module 704, and a reconstruction module 705.
  • the downsampling module 701 is configured to downsample the normally sampled second original image to obtain a second downsampled image. For example, the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
  • the second upsampling module 702 is configured to upsample the second downsampled image to obtain a second upsampled image of the same size as the second original image, and use the second upsampled image as the second undersampled image.
  • the training module 703 is configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain a mapping relationship between the undersampled image and the normally sampled original image.
  • the obtaining module 704 is configured to acquire a first undersampled image to be processed.
  • the reconstruction module 705 is configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between the undersampled image and the normally sampled original image.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus according to still other embodiments of the present disclosure.
  • the apparatus of this embodiment includes a downsampling module 801, a second upsampling module 802, a training module 803, an obtaining module 804, a first upsampling module 805, and a reconstruction module 806.
  • the downsampling module 801 is configured to downsample the normally sampled second original image to obtain a second downsampled image.
  • the second upsampling module 802 is configured to upsample the second downsampled image to obtain a second upsampled image of the same size as the second original image, and use the second upsampled image as the second undersampled image.
  • the training module 803 is configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain a mapping relationship between the undersampled image and the normally sampled original image.
  • the obtaining module 804 is configured to acquire a first undersampled image to be processed.
  • the first upsampling module 805 is configured to upsample the first undersampled image to obtain a first upsampled image having the same size as the first original image, and use the first upsampled image as the third undersampled image.
  • the reconstruction module 806 is configured to reconstruct the third undersampled image into a corresponding third original image according to the trained mapping relationship, and use the third original image as the first original image corresponding to the first undersampled image.
  • the training module 703 in FIG. 7 and the training module 803 in FIG. 8 may be configured to divide the second undersampled image into a plurality of undersampled image blocks; divide the second original image into a plurality of original image blocks, one original image block corresponding to one undersampled image block, each original image block having the same size as the corresponding undersampled image block; and train the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
  • the training module 703 and the training module 803 can be configured to process each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; add the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimize the machine learning model according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
  • the second undersampled image is divided into a plurality of undersampled image blocks and the second original image is divided into a plurality of original image blocks, and the machine learning model is then trained with the plurality of undersampled image blocks and the plurality of original image blocks as training samples, so that the learned mapping relationship between undersampled images and original images is more accurate.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present disclosure.
  • the apparatus of this embodiment includes a memory 901 and a processor 902.
  • Memory 901 can be a magnetic disk, flash memory, or any other non-volatile storage medium.
  • the memory 901 is for storing instructions corresponding to the method of any of the foregoing embodiments.
  • the processor 902 is coupled to the memory 901 and can be implemented as one or more integrated circuits, such as a microprocessor or a microcontroller.
  • the processor 902 is configured to execute instructions stored in the memory 901.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present disclosure.
  • the apparatus 1000 of this embodiment includes a memory 1001 and a processor 1002.
  • the processor 1002 is coupled to the memory 1001 via a bus (BUS) 1003.
  • the device 1000 can also be connected to the external storage device 1005 via the storage interface 1004 to invoke external data, and can also be connected to a network or an external computer system (not shown) via the network interface 1006.
  • the second original image and the first undersampled image are obtained using the same fast inspection system.
  • the quick check system may include a single row detector or a plurality of rows of detectors, and each of the plurality of rows of detectors may be sequentially arranged in the direction of motion of the object to be inspected.
  • the first undersampled image results from the speed at which the detected object moves while passing through the quick-inspection system.
  • the solution of the present disclosure allows the first undersampled image to be reconstructed into a first original image of normal sampled size without increasing the frequency at which the emitter emits rays. Therefore, the detected object can pass through the quick-inspection system at a higher speed, ensuring the throughput of detected objects.
  • the second original image and the first undersampled image are obtained using different fast inspection systems.
  • the second original image is obtained using a first fast inspection system including 8 rows of detectors
  • the first undersampled image to be processed is obtained using a second quick inspection system including 4 rows of detectors.
  • the distance between the detectors of adjacent rows in the second quick inspection system may be twice the distance between the detectors of adjacent rows in the first quick inspection system.
  • if the image data obtained by the eight rows of detectors is denoted xxxxxxxx, the image data obtained by the four rows of detectors can be expressed as xoxoxoxo or oxoxoxox, where x indicates that data is present and o indicates that data is missing.
  • the number of columns of image data obtained with four rows of detectors is half the number of columns obtained with eight rows of detectors, so the image obtained with four rows of detectors is an undersampled image.
  • the first undersampled image can be reconstructed into a first original image of normal sample size using the scheme of the present disclosure. Therefore, the solution of the present disclosure can reduce the number of rows of detectors while ensuring image accuracy, and reduce hardware costs.
  • the second original image and the first undersampled image are obtained using different fast inspection systems.
  • the second original image is obtained using a first fast inspection system including four rows of detectors
  • the first undersampled image to be processed is also obtained using a second quick inspection system including four rows of detectors.
  • the distance between the detectors of the adjacent rows in the second fast inspection system is twice the distance between the detectors of the adjacent rows in the first fast inspection system, so the second quick inspection system will obtain an undersampled image.
  • the first undersampled image can be reconstructed into a first original image of normal sample size using the scheme of the present disclosure.
  • since the distance between the detectors of adjacent rows in the second quick-inspection system is twice the distance between the detectors of adjacent rows in the first quick-inspection system, the moving speed of the detected object can be increased, thereby improving the throughput of detected objects.
  • embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code. .
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on a computer or other programmable device to produce computer-implemented processing for execution on a computer or other programmable device.
  • the instructions provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the methods and apparatus of the present disclosure may be implemented in a number of ways.
  • the methods and apparatus of the present disclosure may be implemented in software, hardware, firmware or any combination of software, hardware, firmware.
  • the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing a method in accordance with the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus, and a computer-readable storage medium, relating to the technical field of image processing. The method includes: acquiring a first undersampled image to be processed (S202); and reconstructing the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images (S204), wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.

Description

Image processing method and apparatus, and computer-readable storage medium
Cross-Reference to Related Application
This application is based on and claims priority to CN application No. 201711434581.X, filed on December 26, 2017, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to an image processing method and apparatus, and a computer-readable storage medium.
Background
Smuggling contraband such as ***, ammunition, drugs, ***, and even weapons of mass destruction or radiological dispersal devices in containers or vehicles has become an international public hazard that troubles governments and disrupts the normal order of international cargo transport. Container and vehicle security inspection is therefore a subject of common concern throughout the world.
X-ray imaging is the most fundamental and earliest widely applied technology in the field of contraband inspection, and is currently also the most widely used inspection technology for containers and vehicles.
Summary
According to one aspect of the embodiments of the present disclosure, an image processing method is provided, comprising: acquiring a first undersampled image to be processed; and reconstructing the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.
In some embodiments, the method further comprises: downsampling the second original image to obtain the second undersampled image; and training the machine learning model with the second undersampled image and the second original image as training samples to obtain the mapping relationship.
In some embodiments, downsampling the second original image to obtain the second undersampled image comprises: downsampling the second original image to obtain a second downsampled image; and upsampling the second downsampled image to obtain a second upsampled image of the same size as the second original image, the second upsampled image being used as the second undersampled image.
In some embodiments, reconstructing the first undersampled image into a corresponding first original image according to the mapping relationship between undersampled images and original images comprises: upsampling the first undersampled image to obtain a first upsampled image of the same size as the first original image, the first upsampled image being used as a third undersampled image; and reconstructing the third undersampled image into a corresponding third original image according to the mapping relationship, the third original image being used as the first original image.
In some embodiments, training the machine learning model with the second undersampled image and the second original image as training samples comprises: dividing the second undersampled image into a plurality of undersampled image blocks; dividing the second original image into a plurality of original image blocks, one original image block corresponding to one undersampled image block, each original image block having the same size as the corresponding undersampled image block; and training the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
In some embodiments, training the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples comprises: processing each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; adding the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimizing the machine learning model according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
In some embodiments, downsampling the second original image comprises: augmenting the second original image to obtain at least one augmented image; and downsampling the second original image and the augmented image to obtain a plurality of the second undersampled images.
In some embodiments, the first undersampled image is obtained as follows: when a first detected object moves in a first direction, rays emitted by a first emitter penetrate a cross section of the first detected object along a second direction and are then received by a first set of detectors disposed opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction and the first set of detectors includes one or more rows of first detectors; and the second original image is obtained as follows: when a second detected object moves in a third direction, rays emitted by a second emitter penetrate a cross section of the second detected object along a fourth direction and are then received by a second set of detectors disposed opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction and the second set of detectors includes one or more rows of second detectors.
In some embodiments, the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction; and the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
In some embodiments, the first set of detectors includes M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors includes M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2; wherein 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
According to another aspect of the embodiments of the present disclosure, an image processing apparatus is provided, comprising: an acquisition module configured to acquire a first undersampled image to be processed; and a reconstruction module configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.
In some embodiments, the apparatus further comprises: a downsampling module configured to downsample the second original image to obtain the second undersampled image; and a training module configured to train the machine learning model with the second undersampled image and the second original image as training samples to obtain the mapping relationship.
In some embodiments, the downsampling module is configured to downsample the second original image to obtain a second downsampled image; and the apparatus further comprises a second upsampling module configured to upsample the second downsampled image to obtain a second upsampled image of the same size as the second original image, the second upsampled image being used as the second undersampled image.
In some embodiments, the apparatus further comprises a first upsampling module configured to upsample the first undersampled image to obtain a first upsampled image of the same size as the first original image, the first upsampled image being used as a third undersampled image; and the reconstruction module is configured to reconstruct the third undersampled image into a corresponding third original image according to the mapping relationship, the third original image being used as the first original image.
In some embodiments, the training module is configured to divide the second undersampled image into a plurality of undersampled image blocks; divide the second original image into a plurality of original image blocks, one original image block corresponding to one undersampled image block, each original image block having the same size as the corresponding undersampled image block; and train the machine learning model with the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
In some embodiments, the training module is configured to process each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; add the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimize the machine learning model according to the predicted image block and the original image block until the difference between the predicted image block and the original image block satisfies a preset condition.
In some embodiments, the downsampling module is configured to augment the second original image to obtain at least one augmented image, and downsample the second original image and the augmented image to obtain a plurality of the second undersampled images.
In some embodiments, the first undersampled image is obtained as follows: when a first detected object moves in a first direction, rays emitted by a first emitter penetrate a cross section of the first detected object along a second direction and are then received by a first set of detectors disposed opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction and the first set of detectors includes one or more rows of first detectors; and the second original image is obtained as follows: when a second detected object moves in a third direction, rays emitted by a second emitter penetrate a cross section of the second detected object along a fourth direction and are then received by a second set of detectors disposed opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction and the second set of detectors includes one or more rows of second detectors.
In some embodiments, the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction; and the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
In some embodiments, the first set of detectors includes M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors includes M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2; wherein 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
According to yet another aspect of the embodiments of the present disclosure, an image processing apparatus is provided, comprising: a memory; and a processor coupled to the memory, the processor being configured to perform the method described in any one of the above embodiments based on instructions stored in the memory.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, having computer program instructions stored thereon which, when executed by a processor, implement the method described in any one of the above embodiments.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of a scene in which an undersampled image is produced;
Figure 2 is a schematic flowchart of an image processing method according to some embodiments of the present disclosure;
Figure 3 is a schematic flowchart of an image processing method according to other embodiments of the present disclosure;
Figure 4 is a schematic flowchart of training a machine learning model according to some embodiments of the present disclosure;
Figure 5 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present disclosure;
Figure 6 is a schematic structural diagram of an image processing apparatus according to other embodiments of the present disclosure;
Figure 7 is a schematic structural diagram of an image processing apparatus according to other embodiments of the present disclosure;
Figure 8 is a schematic structural diagram of an image processing apparatus according to still other embodiments of the present disclosure;
Figure 9 is a schematic structural diagram of an image processing apparatus according to yet other embodiments of the present disclosure;
Figure 10 is a schematic structural diagram of an image processing apparatus according to further embodiments of the present disclosure.
It should be understood that the dimensions of the various parts shown in the drawings are not drawn to actual scale. In addition, the same or similar reference numerals denote the same or similar components.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. The description of the exemplary embodiments is merely illustrative and is in no way intended to limit the present disclosure or its application or use. The present disclosure may be implemented in many different forms and is not limited to the embodiments described herein. These embodiments are provided so that the present disclosure is thorough and complete and fully conveys the scope of the present disclosure to those skilled in the art. It should be noted that, unless otherwise specifically stated, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments should be construed as merely exemplary and not as limiting.
Terms such as "first", "second", and the like used in the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different parts. Words such as "include" or "comprise" mean that the element preceding the word covers the elements listed after the word, without excluding the possibility of also covering other elements. Terms such as "upper" and "lower" are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
In the present disclosure, when it is described that a particular component is located between a first component and a second component, there may or may not be intervening components between the particular component and the first or second component. When it is described that a particular component is connected to other components, the particular component may be directly connected to the other components without intervening components, or may be indirectly connected to the other components with intervening components.
Unless otherwise specifically defined, all terms (including technical or scientific terms) used in the present disclosure have the same meanings as commonly understood by those of ordinary skill in the art to which the present disclosure belongs. It should also be understood that terms such as those defined in general dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In a typical dual-energy X-ray inspection system, an X-ray source (for example, an electron linear accelerator) alternately produces X-rays of two different energies at a high frequency, referred to as high-energy X-rays and low-energy X-rays, respectively. The two kinds of X-rays alternately pass through a slit collimator to form fan-shaped X-ray beams. After the two fan-shaped X-ray beams alternately penetrate a cross section of the detected object from one side, they are sequentially received by detectors located on the other side of the detected object, which generate image data. As the detected object passes through the fan-shaped X-ray beams, the beams sequentially scan a series of cross sections of the detected object, thereby forming high-energy and low-energy X-ray transmission images of the entire detected object.
In order to scan all regions of the detected object, the moving speed v of the detected object and the frequency f at which the X-ray source emits X-rays should satisfy the following formula:
Figure PCTCN2018122038-appb-000001
Here, N is the number of rows of detectors, pitch is the distance between detectors of adjacent rows, D is the distance from the X-ray source to the detected object, and T is the distance from the detected object to the detectors.
When the moving speed of the detected object is too high or the frequency at which the X-ray source emits X-rays is too low, an undersampled image is obtained. Figure 1 is a schematic diagram of a scene in which an undersampled image is produced. As shown in Figure 1, time t1, time t2, time t3, and time t4 are X-ray emission times. Between adjacent X-ray emission times (for example, time t1 and time t2), a part of the detected object 101 is not scanned by the X-rays, resulting in an undersampled image. An undersampled image not only loses information about the detected object, but also fails to match the actual shapes of the items in it. For example, the detected object has round wheels, but the wheels in the resulting undersampled image are elliptical.
A quick-inspection system is one type of dual-energy X-ray inspection system. During quick inspection, the quick-inspection system does not move, and the detected object (for example, a vehicle) passes directly through the passage of the quick-inspection system, so the throughput of detected objects is high. However, because the detected object moves quickly, the X-ray source must have a very high X-ray emission frequency to avoid obtaining an undersampled image. Constrained by hardware limitations, the X-ray emission frequency of the X-ray source cannot be increased without limit, which limits the moving speed of the detected object and reduces the throughput of detected objects.
For an undersampled image, the conventional processing method is to use interpolation to reconstruct the undersampled image to the normally sampled image size. However, an image reconstructed by interpolation is usually too smooth and is accompanied by jagged artifacts, differing considerably from a normally sampled image.
To this end, the present disclosure proposes the following technical solutions.
Figure 2 is a schematic flowchart of an image processing method according to some embodiments of the present disclosure.
In step 202, a first undersampled image to be processed is acquired.
In some embodiments, the first undersampled image may be obtained as follows: when a first detected object moves in a first direction, rays emitted by a first emitter penetrate a cross section of the first detected object along a second direction and are then received by a first set of detectors disposed opposite the first emitter, thereby generating the first undersampled image. Here, the first direction and the second direction may be substantially perpendicular. The first set of detectors may include one row, or multiple rows arranged in the first direction, of first detectors.
As some examples, the first detected object may include, but is not limited to, a container, or a vehicle carrying a container or other items. As some examples, the rays may be, for example, X-rays, visible light, infrared rays, ultraviolet rays, or the like.
For example, the moving first detected object may be irradiated with single-energy X-rays or dual-energy X-rays to obtain the first undersampled image. It should be understood, however, that the present disclosure is not limited thereto, and the first undersampled image may also be obtained by other means.
In step 204, the first undersampled image is reconstructed into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images. Here, the mapping relationship is obtained by training a machine learning model with a second undersampled image and its corresponding normally sampled second original image as training samples.
In some embodiments, the second original image may be obtained as follows: when a second detected object moves in a third direction, rays emitted by a second emitter penetrate a cross section of the second detected object along a fourth direction and are then received by a second set of detectors disposed opposite the second emitter, thereby generating the second original image. The second undersampled image may be obtained by downsampling the second original image. Here, the third direction and the fourth direction may be substantially perpendicular. The second set of detectors may include one or more rows of second detectors arranged in the third direction.
As some implementations, the first set of detectors may include M1 rows of first detectors arranged in the first direction, the distance between first detectors of adjacent rows being S1; the second set of detectors may include M2 rows of second detectors arranged in the third direction, the distance between second detectors of adjacent rows being S2. Here, 2 ≤ M1 ≤ M2, S1 = N × S2, and N is an integer greater than or equal to 2.
For example, a machine learning model such as a dictionary learning model, a BP (Back Propagation) neural network model, or a convolutional neural network model may be trained on one or more second undersampled images and the corresponding one or more second original images, thereby obtaining the mapping relationship between undersampled images and normally sampled original images.
After the machine learning model is trained, it can reconstruct any input undersampled image according to the learned mapping relationship and output the normally sampled original image corresponding to that undersampled image. Therefore, after the first undersampled image is input into the machine learning model, the trained model can reconstruct the first undersampled image according to the learned mapping relationship and output the first original image corresponding to the first undersampled image.
In the above embodiments, the first undersampled image to be processed can be reconstructed into the corresponding first original image by using the learned mapping relationship between undersampled images and normally sampled original images. Compared with conventional interpolation, the first original image obtained by the method of the above embodiments is more accurate.
图3是根据本公开另一些实施例的图像处理方法的流程示意图。
在步骤302,对正常采样的第二原始图像进行下采样,以得到第二欠采样图像。
在一些实现方式中,可以对正常采样的第二原始图像进行下采样,以得到第二下采样图像。在这样的实现方式中,以第二下采样图像直接作为第二欠采样图像。
在另一些实现方式中,可以对正常采样的第二原始图像进行下采样,以得到第二下采样图像;然后对第二下采样图像进行上采样,以得到与第二原始图像的尺寸相同的第二上采样图像。在这样的实现方式中,以第二上采样图像作为第二欠采样图像。这里,虽然第二上采样图像的尺寸与第二原始图像相同,但由于第二上采样图像是通过对第二原始图像进行下采样、再进行上采样得来,因此,从这个意义上来说,第二上采样图像实际上为第二原始图像对应的欠采样图像,故可以以第二上采样图像作为第二欠采样图像。
在一些实施例中,可以先对第二原始图像进行扩增,以得到至少一个扩增图像;然后对第二原始图像和扩增图像进行下采样,以得到多个第二下采样图像,从而可以增加训练样本。例如,可以对第二原始图像进行翻转、旋转、亮度调整、缩放等扩增操作中的至少一个,以得到至少一个扩增图像。在一些例子中,可以将第二原始图像旋转预设角度,例如90度、270度等。在另一些例子中,可以通过双立方插值等算法对第二原始图像进行缩放,例如,第二下采样图像可以为第二原始图像的0.6倍、0.7倍、0.8倍、0.9倍、或1.2倍等。
在一些实施例中,第二下采样图像在第三方向上的尺寸可以小于第二原始图像在第三方向上的尺寸。一种情况下,第二下采样图像在第四方向上的尺寸可以等于第二原始图像在第四方向上的尺寸。另一种情况下,第二下采样图像在第四方向上的尺寸也可以小于第二原始图像在第四方向上的尺寸。
假设第二原始图像的尺寸为m(行)像素×kn(列)像素,k为第二探测器的排数。在k为1的情况下,例如,可以删除第二原始图像中的偶数列像素,保留第二原始图像中的奇数列像素,从而得到第二下采样图像;又例如,可以删除第二原始图像中的奇数列像素,保留第二原始图像中的偶数列像素,从而得到第二下采样图像。在k大于1的情况下,例如,可以每隔k列像素删除第二原始图像中的k列像素,从而得到第二下采样图像。作为一个示例,n为偶数,第二下采样图的尺寸可以为m像素 ×n/2像素。
In step 304, the machine learning model is trained using the second undersampled image and the second original image as training samples, so as to obtain the mapping relationship between undersampled images and normally sampled original images.
In step 306, a first undersampled image to be processed is acquired. For example, the size of the first undersampled image in the first direction may be smaller than the size of the first original image in the first direction.
In step 308, the first undersampled image is reconstructed into a corresponding first original image according to the mapping relationship between undersampled images and normally sampled original images.
In some implementations, the first undersampled image may be upsampled to obtain a first upsampled image with the same size as the first original image, the first upsampled image being used as a third undersampled image. Then, the third undersampled image is reconstructed into a corresponding third original image according to the mapping relationship, and the third original image is used as the first original image.
For example, the first undersampled image may be upsampled by an algorithm such as bicubic interpolation, nearest-neighbor interpolation, bilinear interpolation, or edge-based interpolation to obtain the first upsampled image.
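As a sketch of the bicubic upsampling step, the snippet below uses OpenCV; the function and variable names, and the example sizes, are assumptions for illustration only.

```python
import cv2
import numpy as np

def upsample_to_original_size(undersampled: np.ndarray,
                              original_shape: tuple) -> np.ndarray:
    """Bicubic upsampling of an undersampled image to the original size.

    original_shape is (rows, cols) of the normally sampled image;
    cv2.resize expects (width, height), i.e. (cols, rows).
    """
    rows, cols = original_shape
    return cv2.resize(undersampled, (cols, rows), interpolation=cv2.INTER_CUBIC)

# Example: restore an m x n/2 undersampled image to m x n before feeding
# it to the trained model.
und = np.random.rand(512, 256).astype(np.float32)
up = upsample_to_original_size(und, (512, 512))
print(up.shape)  # (512, 512)
```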
In the above embodiments, the machine learning model is first trained on the second undersampled image and the second original image, and the trained mapping relationship can then be used to reconstruct the first undersampled image to be processed into the corresponding first original image.
FIG. 4 is a schematic flowchart of training a machine learning model according to some embodiments of the present disclosure.
In step 402, the second undersampled image is divided into multiple undersampled image blocks. Here, the second upsampled image obtained above is used as the second undersampled image.
In some embodiments, the second undersampled image may be divided into multiple undersampled image blocks of the same size.
In step 404, the second original image is divided into multiple original image blocks. Here, each original image block corresponds to one undersampled image block, and each original image block has the same size as the corresponding undersampled image block.
For example, the second undersampled image may be divided into multiple undersampled image blocks of size m (rows) pixels × m (columns) pixels, and the second original image corresponding to the second undersampled image may be divided into multiple original image blocks of size m (rows) pixels × m (columns) pixels.
In step 406, the machine learning model is trained using the multiple undersampled image blocks and the multiple original image blocks as training samples.
For example, the multiple undersampled image blocks and the multiple original image blocks may be divided into several groups of image blocks, each group including a predetermined number of undersampled image blocks and the corresponding original image blocks. Each time the machine learning model is trained, one of the groups of image blocks may be used as a training sample. In some embodiments, 64 undersampled image blocks and the corresponding 64 original image blocks may be taken as one group.
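A minimal sketch of extracting aligned block pairs and forming groups of 64 is given below; the block size of 41, the stride, and the function names are assumptions, since the embodiments above do not fix these values.

```python
import numpy as np

def extract_paired_blocks(undersampled: np.ndarray, original: np.ndarray,
                          block: int = 41, stride: int = 41):
    """Split an (undersampled, original) image pair into aligned square blocks.

    The two images must already have the same size (e.g. after upsampling).
    """
    assert undersampled.shape == original.shape
    xs, ys = [], []
    rows, cols = original.shape
    for r in range(0, rows - block + 1, stride):
        for c in range(0, cols - block + 1, stride):
            xs.append(undersampled[r:r + block, c:c + block])
            ys.append(original[r:r + block, c:c + block])
    return np.stack(xs), np.stack(ys)

def make_groups(xs: np.ndarray, ys: np.ndarray, group_size: int = 64):
    """Shuffle the block pairs and yield groups of 64 for training."""
    idx = np.random.permutation(len(xs))
    for start in range(0, len(idx) - group_size + 1, group_size):
        sel = idx[start:start + group_size]
        yield xs[sel], ys[sel]
```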
Some implementations of training the machine learning model are described below, taking a convolutional neural network model as an example.
First, each of the multiple undersampled image blocks is processed to determine a difference image block between that undersampled image block and the corresponding original image block.
For example, the convolutional neural network model may include 20 convolutional layers, with a non-linear layer, such as a ReLU (Rectified Linear Unit) activation function, between every two adjacent convolutional layers. The output of each convolutional layer is non-linearly processed by the non-linear layer and then used as the input of the next convolutional layer.
Suppose the undersampled image block input to the 1st convolutional layer has a size of n×n. The 1st convolutional layer uses 64 convolution kernels, each of size 3×3×1 with a stride of 1, and outputs a feature map of size n×n×64. The 2nd to 19th convolutional layers each use 64 convolution kernels, each of size 3×3×64 with a stride of 1, and each output a feature map of size n×n×64. The 20th convolutional layer uses one convolution kernel of size 3×3×64 and outputs a feature map of size n×n, the same size as the input undersampled image block.
By convolving each undersampled image block through the 20 convolutional layers, the difference image block between that undersampled image block and the corresponding original image block, i.e. the feature map output by the 20th convolutional layer, can be obtained.
Next, the difference image block is added to the corresponding undersampled image block to obtain a predicted image block.
For example, the feature map output by the 20th convolutional layer may be added to the undersampled image block input to the 1st convolutional layer to obtain the predicted image block corresponding to that undersampled image block.
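A minimal PyTorch sketch of such a residual convolutional network is given below; the use of padding to keep the n×n size, the 41×41 block size in the usage example, and the class and variable names are assumptions not specified above.

```python
import torch
import torch.nn as nn

class ResidualCNN(nn.Module):
    """20-layer residual CNN of the kind described above (a sketch).

    Layer 1: 64 kernels of 3x3x1; layers 2-19: 64 kernels of 3x3x64;
    layer 20: one kernel of 3x3x64. A ReLU sits between adjacent conv layers.
    The network predicts the difference block, which is added back to the
    input undersampled block to form the predicted block.
    """
    def __init__(self, depth: int = 20, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        difference = self.body(x)   # predicted difference block
        return x + difference       # predicted (reconstructed) block

# Example: a group of 64 undersampled blocks of size 41x41.
blocks = torch.rand(64, 1, 41, 41)
predicted = ResidualCNN()(blocks)
print(predicted.shape)  # torch.Size([64, 1, 41, 41])
```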
Thereafter, the machine learning model is optimized according to the predicted image blocks and the original image blocks, until the difference between the predicted image blocks and the original image blocks satisfies a preset condition.
During training, the weights of the convolution kernels may be initialized using MSRA initialization, and the bias terms may all be initialized to 0. For each training iteration, a group of image blocks may be randomly selected as training samples, and the training objective is to minimize the loss function Loss expressed by the following formula:
        Loss = (1/n) × Σ_{i=1}^{n} ||f(x_i) − y_i||² + λ||w||²
Here, n is the number of original image blocks in a group of image blocks, y_i is an original image block, f(x_i) is the corresponding predicted image block, λ is the regularization coefficient, and w denotes the weights of the convolution kernels. λ may be, for example, 1×10⁻⁴.
It will be appreciated that the expression of f(x_i) contains the bias terms. Therefore, the value of Loss can be minimized, i.e. the preset condition can be satisfied, by adjusting the bias terms and the weights.
The model may be optimized using the Adam algorithm. By computing first- and second-moment estimates of the gradients, independent learning rates can be set adaptively for the weight and bias terms. Compared with stochastic gradient descent, the Adam algorithm converges faster when minimizing the loss function, and there is no need to set a threshold clip_gradient to prevent gradient explosion.
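A minimal sketch of one training iteration along these lines is given below, reusing the ResidualCNN sketch above; the learning rate, the explicit L2 penalty in place of any particular regularizer form, and the block sizes are assumptions (the λ value follows the 1×10⁻⁴ example mentioned above).

```python
import torch
import torch.nn as nn

model = ResidualCNN()
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight)   # MSRA initialization of kernel weights
        nn.init.zeros_(m.bias)              # bias terms initialized to 0

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4

def loss_fn(predicted, original):
    """Mean squared error over the group plus an L2 penalty on the conv weights."""
    mse = nn.functional.mse_loss(predicted, original)
    l2 = sum((p ** 2).sum() for name, p in model.named_parameters() if "weight" in name)
    return mse + lam * l2

# One training iteration on a randomly selected group of 64 block pairs.
x = torch.rand(64, 1, 41, 41)   # undersampled blocks
y = torch.rand(64, 1, 41, 41)   # corresponding original blocks
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```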
In the above embodiments, the second undersampled image is divided into multiple undersampled image blocks, the second original image is divided into multiple original image blocks, and the machine learning model is then trained using these undersampled image blocks and original image blocks as training samples, so that the trained mapping relationship between undersampled images and original images can be made more accurate.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. Since the device embodiments substantially correspond to the method embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the method embodiments.
FIG. 5 is a schematic structural diagram of an image processing device according to some embodiments of the present disclosure. As shown in FIG. 5, the device of this embodiment includes an acquisition module 501 and a reconstruction module 502.
The acquisition module 501 is configured to acquire a first undersampled image to be processed.
In some embodiments, the first undersampled image may be obtained as follows: while a first inspected object moves along a first direction, rays emitted by a first emitter penetrate a cross section of the first inspected object along a second direction and are then received by a first group of detectors arranged opposite the first emitter, thereby generating the first undersampled image. Here, the first direction and the second direction may be substantially perpendicular. For example, the size of the first undersampled image in the first direction may be smaller than the size of the corresponding first original image in the first direction.
The reconstruction module 502 is configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images. Here, the mapping relationship is obtained by training a machine learning model using a second undersampled image and its corresponding normally sampled second original image as training samples.
In some embodiments, the second original image may be obtained as follows: while a second inspected object moves along a third direction, rays emitted by a second emitter penetrate a cross section of the second inspected object along a fourth direction and are then received by a second group of detectors arranged opposite the second emitter, thereby generating the second original image. Here, the third direction and the fourth direction may be substantially perpendicular.
The first group of detectors may include one or more rows of first detectors, and the second group of detectors may include one or more rows of second detectors.
As some specific implementations, the first group of detectors may include M1 rows of first detectors arranged in the first direction, with a distance S1 between first detectors in adjacent rows; the second group of detectors may include M2 rows of second detectors arranged in the third direction, with a distance S2 between second detectors in adjacent rows. Here, 2≤M1≤M2, S1=N×S2, and N is an integer greater than or equal to 2.
The device of the above embodiments can use the trained mapping relationship between undersampled images and normally sampled original images to reconstruct the first undersampled image to be processed into the corresponding first original image. Compared with conventional interpolation, the first original image obtained by the device of the above embodiments is more accurate.
FIG. 6 is a schematic structural diagram of an image processing device according to other embodiments of the present disclosure. As shown in FIG. 6, the device of this embodiment includes a downsampling module 601, a training module 602, an acquisition module 603, and a reconstruction module 604.
The downsampling module 601 is configured to downsample the normally sampled second original image to obtain a second undersampled image. As some specific implementations, the downsampling module 601 may be configured to augment the second original image to obtain at least one augmented image, and to downsample the second original image and the augmented image to obtain multiple second undersampled images.
The training module 602 is configured to train the machine learning model using the second undersampled image and the second original image as training samples, so as to obtain the mapping relationship between undersampled images and normally sampled original images.
The acquisition module 603 is configured to acquire a first undersampled image to be processed.
The reconstruction module 604 is configured to reconstruct the first undersampled image into a corresponding first original image according to the mapping relationship between undersampled images and normally sampled original images.
The device of the above embodiments can first train the machine learning model on the second undersampled image and the second original image, and then use the trained mapping relationship to reconstruct the first undersampled image to be processed into the corresponding first original image.
FIG. 7 is a schematic structural diagram of an image processing device according to further embodiments of the present disclosure. As shown in FIG. 7, the device of this embodiment includes a downsampling module 701, a second upsampling module 702, a training module 703, an acquisition module 704, and a reconstruction module 705.
The downsampling module 701 is configured to downsample the normally sampled second original image to obtain a second downsampled image. For example, the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
The second upsampling module 702 is configured to upsample the second downsampled image to obtain a second upsampled image with the same size as the second original image, the second upsampled image being used as the second undersampled image.
The training module 703 is configured to train the machine learning model using the second undersampled image and the second original image as training samples, so as to obtain the mapping relationship between undersampled images and normally sampled original images.
The acquisition module 704 is configured to acquire a first undersampled image to be processed.
The reconstruction module 705 is configured to reconstruct the first undersampled image into a corresponding first original image according to the mapping relationship between undersampled images and normally sampled original images.
FIG. 8 is a schematic structural diagram of an image processing device according to still other embodiments of the present disclosure. As shown in FIG. 8, the device of this embodiment includes a downsampling module 801, a second upsampling module 802, a training module 803, an acquisition module 804, a first upsampling module 805, and a reconstruction module 806.
The downsampling module 801 is configured to downsample the normally sampled second original image to obtain a second downsampled image. The second upsampling module 802 is configured to upsample the second downsampled image to obtain a second upsampled image with the same size as the second original image, the second upsampled image being used as the second undersampled image. The training module 803 is configured to train the machine learning model using the second undersampled image and the second original image as training samples, so as to obtain the mapping relationship between undersampled images and normally sampled original images. The acquisition module 804 is configured to acquire a first undersampled image to be processed. The first upsampling module 805 is configured to upsample the first undersampled image to obtain a first upsampled image with the same size as the first original image, the first upsampled image being used as a third undersampled image. The reconstruction module 806 is configured to reconstruct the third undersampled image into a corresponding third original image according to the trained mapping relationship, the third original image being used as the first original image corresponding to the first undersampled image.
As some specific implementations, the training module 703 in FIG. 7 and the training module 803 in FIG. 8 may be configured to divide the second undersampled image into multiple undersampled image blocks; to divide the second original image into multiple original image blocks, where each original image block corresponds to one undersampled image block and has the same size as the corresponding undersampled image block; and to train the machine learning model using the multiple undersampled image blocks and the multiple original image blocks as training samples.
As some specific implementations, the training module 703 and the training module 803 may be configured to process each of the multiple undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; to add the difference image block to the corresponding undersampled image block to obtain a predicted image block; and to optimize the machine learning model according to the predicted image blocks and the original image blocks, until the difference between the predicted image blocks and the original image blocks satisfies a preset condition.
In the above implementations, the second undersampled image is divided into multiple undersampled image blocks, the second original image is divided into multiple original image blocks, and the machine learning model is then trained using these undersampled image blocks and original image blocks as training samples, so that the trained mapping relationship between undersampled images and original images can be made more accurate.
FIG. 9 is a schematic structural diagram of an image processing device according to yet other embodiments of the present disclosure. As shown in FIG. 9, the device of this embodiment includes a memory 901 and a processor 902. The memory 901 may be a magnetic disk, a flash memory, or any other non-volatile storage medium, and is configured to store instructions corresponding to the method of any one of the foregoing embodiments. The processor 902 is coupled to the memory 901 and may be implemented as one or more integrated circuits, such as a microprocessor or a microcontroller. The processor 902 is configured to execute the instructions stored in the memory 901.
FIG. 10 is a schematic structural diagram of an image processing device according to still further embodiments of the present disclosure. As shown in FIG. 10, the device 1000 of this embodiment includes a memory 1001 and a processor 1002. The processor 1002 is coupled to the memory 1001 via a bus (BUS) 1003. The device 1000 may also be connected to an external storage device 1005 via a storage interface 1004 in order to access external data, and may also be connected to a network or an external computer system (not shown) via a network interface 1006.
The solutions provided by the embodiments of the present disclosure can be applied in a variety of scenarios; three exemplary application scenarios are given below.
<First application scenario>
The second original image and the first undersampled image are obtained with the same fast-scan system. For example, the fast-scan system may include a single row of detectors or multiple rows of detectors, and the multiple rows of detectors may be arranged in sequence along the direction of motion of the inspected object. In this application scenario, the first undersampled image results from the inspected object moving too fast as it passes through the fast-scan system.
The solution of the present disclosure makes it possible to reconstruct the first undersampled image into a first original image of normally sampled size without increasing the ray emission frequency of the emitter. Therefore, the inspected object can pass through the fast-scan system at a higher speed, which maintains its throughput.
<Second application scenario>
The second original image and the first undersampled image are obtained with different fast-scan systems. For example, the second original image is obtained with a first fast-scan system that includes 8 rows of detectors, while the first undersampled image to be processed is obtained with a second fast-scan system that includes 4 rows of detectors. Here, the distance between detectors in adjacent rows of the second fast-scan system may be twice the distance between detectors in adjacent rows of the first fast-scan system.
Assuming that the image data obtained by the 8 rows of detectors is xxxxxxxx, the image data obtained by the 4 rows of detectors can be represented as xoxoxoxo or oxoxoxox (x indicates that data is present, o indicates that data is absent). The number of columns of image data obtained with the 4 rows of detectors is half the number of columns obtained with the 8 rows of detectors, so the image obtained with the 4 rows of detectors is an undersampled image.
With the solution of the present disclosure, the first undersampled image can be reconstructed into a first original image of normally sampled size. Therefore, the solution of the present disclosure can reduce the number of rows of detectors while ensuring image accuracy, thereby lowering hardware costs.
<Third application scenario>
The second original image and the first undersampled image are obtained with different fast-scan systems. For example, the second original image is obtained with a first fast-scan system that includes 4 rows of detectors, and the first undersampled image to be processed is also obtained with a second fast-scan system that includes 4 rows of detectors. The distance between detectors in adjacent rows of the second fast-scan system is twice the distance between detectors in adjacent rows of the first fast-scan system, so the second fast-scan system will produce undersampled images.
With the solution of the present disclosure, the first undersampled image can be reconstructed into a first original image of normally sampled size. Moreover, because the distance between detectors in adjacent rows of the second fast-scan system is twice that of the first fast-scan system, the moving speed of the inspected object can be increased, improving its throughput.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a device, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow in the flowcharts and/or each block in the block diagrams, and the functions specified therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The present disclosure has thus been described in detail. Some details well known in the art have not been described in order to avoid obscuring the concept of the present disclosure. From the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.
The method and device of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments may be modified without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (22)

  1. An image processing method, comprising:
    acquiring a first undersampled image to be processed;
    reconstructing the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model using a second undersampled image and its corresponding normally sampled second original image as training samples.
  2. The method according to claim 1, further comprising:
    downsampling the second original image to obtain the second undersampled image;
    training the machine learning model using the second undersampled image and the second original image as training samples to obtain the mapping relationship.
  3. The method according to claim 2, wherein the downsampling the second original image to obtain the second undersampled image comprises:
    downsampling the second original image to obtain a second downsampled image;
    upsampling the second downsampled image to obtain a second upsampled image with the same size as the second original image, the second upsampled image being used as the second undersampled image.
  4. The method according to claim 3, wherein the reconstructing the first undersampled image into a corresponding first original image according to the mapping relationship between undersampled images and original images comprises:
    upsampling the first undersampled image to obtain a first upsampled image with the same size as the first original image, the first upsampled image being used as a third undersampled image;
    reconstructing the third undersampled image into a corresponding third original image according to the mapping relationship, the third original image being used as the first original image.
  5. The method according to claim 3, wherein the training the machine learning model using the second undersampled image and the second original image as training samples comprises:
    dividing the second undersampled image into a plurality of undersampled image blocks;
    dividing the second original image into a plurality of original image blocks, each original image block corresponding to one undersampled image block and having the same size as the corresponding undersampled image block;
    training the machine learning model using the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
  6. The method according to claim 5, wherein the training the machine learning model using the plurality of undersampled image blocks and the plurality of original image blocks as training samples comprises:
    processing each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block;
    adding the difference image block to the corresponding undersampled image block to obtain a predicted image block;
    optimizing the machine learning model according to the predicted image block and the original image block, until the difference between the predicted image block and the original image block satisfies a preset condition.
  7. The method according to claim 2, wherein the downsampling the second original image comprises:
    augmenting the second original image to obtain at least one augmented image;
    downsampling the second original image and the augmented image to obtain a plurality of the second undersampled images.
  8. The method according to claim 3, wherein
    the first undersampled image is obtained as follows: while a first inspected object moves along a first direction, rays emitted by a first emitter penetrate a cross section of the first inspected object along a second direction and are then received by a first group of detectors arranged opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction, and the first group of detectors comprises one or more rows of first detectors;
    the second original image is obtained as follows: while a second inspected object moves along a third direction, rays emitted by a second emitter penetrate a cross section of the second inspected object along a fourth direction and are then received by a second group of detectors arranged opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction, and the second group of detectors comprises one or more rows of second detectors.
  9. The method according to claim 8, wherein
    the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction;
    the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
  10. The method according to claim 8, wherein
    the first group of detectors comprises M1 rows of first detectors arranged in the first direction, with a distance S1 between first detectors in adjacent rows;
    the second group of detectors comprises M2 rows of second detectors arranged in the third direction, with a distance S2 between second detectors in adjacent rows;
    wherein 2≤M1≤M2, S1=N×S2, and N is an integer greater than or equal to 2.
  11. An image processing device, comprising:
    an acquisition module configured to acquire a first undersampled image to be processed;
    a reconstruction module configured to reconstruct the first undersampled image into a corresponding first original image according to a mapping relationship between undersampled images and normally sampled original images, wherein the mapping relationship is obtained by training a machine learning model using a second undersampled image and its corresponding normally sampled second original image as training samples.
  12. The device according to claim 11, further comprising:
    a downsampling module configured to downsample the second original image to obtain the second undersampled image;
    a training module configured to train the machine learning model using the second undersampled image and the second original image as training samples to obtain the mapping relationship.
  13. The device according to claim 12, wherein
    the downsampling module is configured to downsample the second original image to obtain a second downsampled image;
    the device further comprises:
    a second upsampling module configured to upsample the second downsampled image to obtain a second upsampled image with the same size as the second original image, the second upsampled image being used as the second undersampled image.
  14. The device according to claim 13, further comprising:
    a first upsampling module configured to upsample the first undersampled image to obtain a first upsampled image with the same size as the first original image, the first upsampled image being used as a third undersampled image;
    wherein the reconstruction module is configured to reconstruct the third undersampled image into a corresponding third original image according to the mapping relationship, the third original image being used as the first original image.
  15. The device according to claim 13, wherein
    the training module is configured to divide the second undersampled image into a plurality of undersampled image blocks; divide the second original image into a plurality of original image blocks, each original image block corresponding to one undersampled image block and having the same size as the corresponding undersampled image block; and train the machine learning model using the plurality of undersampled image blocks and the plurality of original image blocks as training samples.
  16. The device according to claim 15, wherein
    the training module is configured to process each of the plurality of undersampled image blocks to determine a difference image block between each undersampled image block and the corresponding original image block; add the difference image block to the corresponding undersampled image block to obtain a predicted image block; and optimize the machine learning model according to the predicted image block and the original image block, until the difference between the predicted image block and the original image block satisfies a preset condition.
  17. The device according to claim 12, wherein
    the downsampling module is configured to augment the second original image to obtain at least one augmented image, and to downsample the second original image and the augmented image to obtain a plurality of the second undersampled images.
  18. The device according to claim 13, wherein
    the first undersampled image is obtained as follows: while a first inspected object moves along a first direction, rays emitted by a first emitter penetrate a cross section of the first inspected object along a second direction and are then received by a first group of detectors arranged opposite the first emitter, thereby generating the first undersampled image, wherein the first direction is perpendicular to the second direction, and the first group of detectors comprises one or more rows of first detectors;
    the second original image is obtained as follows: while a second inspected object moves along a third direction, rays emitted by a second emitter penetrate a cross section of the second inspected object along a fourth direction and are then received by a second group of detectors arranged opposite the second emitter, thereby generating the second original image, wherein the third direction is perpendicular to the fourth direction, and the second group of detectors comprises one or more rows of second detectors.
  19. The device according to claim 18, wherein
    the size of the first undersampled image in the first direction is smaller than the size of the first original image in the first direction;
    the size of the second downsampled image in the third direction is smaller than the size of the second original image in the third direction.
  20. The device according to claim 18, wherein
    the first group of detectors comprises M1 rows of first detectors arranged in the first direction, with a distance S1 between first detectors in adjacent rows;
    the second group of detectors comprises M2 rows of second detectors arranged in the third direction, with a distance S2 between second detectors in adjacent rows;
    wherein 2≤M1≤M2, S1=N×S2, and N is an integer greater than or equal to 2.
  21. An image processing device, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to perform the method according to any one of claims 1 to 10 based on instructions stored in the memory.
  22. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method according to any one of claims 1 to 10.
PCT/CN2018/122038 2017-12-26 2018-12-19 图像处理方法、装置及计算机可读存储介质 WO2019128797A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711434581.XA CN109978809B (zh) 2017-12-26 2017-12-26 图像处理方法、装置及计算机可读存储介质
CN201711434581.X 2017-12-26

Publications (1)

Publication Number Publication Date
WO2019128797A1 true WO2019128797A1 (zh) 2019-07-04

Family

ID=65019260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122038 WO2019128797A1 (zh) 2017-12-26 2018-12-19 图像处理方法、装置及计算机可读存储介质

Country Status (5)

Country Link
US (1) US10884156B2 (zh)
EP (1) EP3506198B1 (zh)
CN (1) CN109978809B (zh)
PL (1) PL3506198T3 (zh)
WO (1) WO2019128797A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11449989B2 (en) * 2019-03-27 2022-09-20 The General Hospital Corporation Super-resolution anatomical magnetic resonance imaging using deep learning for cerebral cortex segmentation
JP7250331B2 (ja) * 2019-07-05 2023-04-03 株式会社イシダ 画像生成装置、検査装置及び学習装置
CN113689341A (zh) * 2020-05-18 2021-11-23 京东方科技集团股份有限公司 图像处理方法及图像处理模型的训练方法
CN111815546A (zh) * 2020-06-23 2020-10-23 浙江大华技术股份有限公司 图像重建方法以及相关设备、装置
US20240185430A1 (en) * 2021-03-19 2024-06-06 Owl Navigation, Inc. Brain image segmentation using trained convolutional neural networks
CN113240617B (zh) * 2021-07-09 2021-11-02 同方威视技术股份有限公司 扫描图像重建方法、检查设备及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105405098A (zh) * 2015-10-29 2016-03-16 西北工业大学 一种基于稀疏表示和自适应滤波的图像超分辨率重建方法
US9324133B2 (en) * 2012-01-04 2016-04-26 Sharp Laboratories Of America, Inc. Image content enhancement using a dictionary technique
CN106127684A (zh) * 2016-06-22 2016-11-16 中国科学院自动化研究所 基于双向递归卷积神经网络的图像超分辨率增强方法
CN106204449A (zh) * 2016-07-06 2016-12-07 安徽工业大学 一种基于对称深度网络的单幅图像超分辨率重建方法
CN106780333A (zh) * 2016-12-14 2017-05-31 深圳市华星光电技术有限公司 一种图像超分辨率重建方法

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7817838B2 (en) * 2007-05-24 2010-10-19 University Of Utah Research Foundation Method and system for constrained reconstruction of imaging data using data reordering
US8081842B2 (en) * 2007-09-07 2011-12-20 Microsoft Corporation Image resizing for web-based image search
EP2232444B1 (en) * 2007-12-20 2011-10-12 Wisconsin Alumni Research Foundation Method for dynamic prior image constrained image reconstruction
US8538200B2 (en) * 2008-11-19 2013-09-17 Nec Laboratories America, Inc. Systems and methods for resolution-invariant image representation
CN101900696B (zh) * 2009-05-27 2012-01-04 清华大学 双能欠采样物质识别方法和***
CN101900694B (zh) * 2009-05-27 2012-05-30 清华大学 基于直线轨迹扫描的双能欠采样物质识别***和方法
JP2011180798A (ja) * 2010-03-01 2011-09-15 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
CN102947864B (zh) * 2010-06-21 2015-08-12 皇家飞利浦电子股份有限公司 用于执行低剂量ct成像的方法和***
US9336611B2 (en) * 2010-09-14 2016-05-10 Massachusetts Institute Of Technology Multi-contrast image reconstruction with joint bayesian compressed sensing
US20130028538A1 (en) * 2011-07-29 2013-01-31 Simske Steven J Method and system for image upscaling
US9430854B2 (en) * 2012-06-23 2016-08-30 Wisconsin Alumni Research Foundation System and method for model consistency constrained medical image reconstruction
US9453895B2 (en) * 2012-10-05 2016-09-27 Siemens Aktiengesellschaft Dynamic image reconstruction with tight frame learning
US9563817B2 (en) * 2013-11-04 2017-02-07 Varian Medical Systems, Inc. Apparatus and method for reconstructing an image using high-energy-based data
KR101629165B1 (ko) * 2013-12-10 2016-06-21 삼성전자 주식회사 자기공명영상장치 및 그 제어방법
CN104749648A (zh) * 2013-12-27 2015-07-01 清华大学 多能谱静态ct设备
US9734601B2 (en) * 2014-04-04 2017-08-15 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms
RU2568929C1 (ru) * 2014-04-30 2015-11-20 Самсунг Электроникс Ко., Лтд. Способ и система для быстрой реконструкции изображения мрт из недосемплированных данных
CN103971405A (zh) * 2014-05-06 2014-08-06 重庆大学 一种激光散斑结构光及深度信息的三维重建方法
US20170200258A1 (en) * 2014-05-28 2017-07-13 Peking University Shenzhen Graduate School Super-resolution image reconstruction method and apparatus based on classified dictionary database
CN104240210B (zh) * 2014-07-21 2018-08-10 南京邮电大学 基于压缩感知的ct图像迭代重建方法
DE102015201057A1 (de) * 2015-01-22 2016-07-28 Siemens Healthcare Gmbh Verfahren zur Bildqualitätsverbesserung eines Magnetresonanzbilddatensatzes, Recheneinrichtung und Computerprogramm
US11056314B2 (en) * 2015-10-22 2021-07-06 Northwestern University Method for acquiring intentionally limited data and the machine learning approach to reconstruct it
US10755395B2 (en) * 2015-11-27 2020-08-25 Canon Medical Systems Corporation Dynamic image denoising using a sparse representation
CN105374020B (zh) * 2015-12-17 2018-04-17 深圳职业技术学院 一种快速高分辨率的超声成像检测方法
WO2017113205A1 (zh) * 2015-12-30 2017-07-06 中国科学院深圳先进技术研究院 一种基于深度卷积神经网络的快速磁共振成像方法及装置
US10671939B2 (en) * 2016-04-22 2020-06-02 New York University System, method and computer-accessible medium for learning an optimized variational network for medical image reconstruction
CN106886054A (zh) * 2017-04-13 2017-06-23 西安邮电大学 基于三维x射线成像的危险品自动识别装置及方法
CN107064845B (zh) * 2017-06-06 2019-07-30 深圳先进技术研究院 基于深度卷积网的一维部分傅里叶并行磁共振成像方法
CN109300166B (zh) * 2017-07-25 2023-04-25 同方威视技术股份有限公司 重建ct图像的方法和设备以及存储介质
CN107576925B (zh) * 2017-08-07 2020-01-03 上海东软医疗科技有限公司 磁共振多对比度图像重建方法和装置
US10989779B2 (en) * 2017-09-29 2021-04-27 Yonsei University, University - Industry Foundation (UIF) Apparatus and method for reconstructing magnetic resonance image using learning, and under-sampling apparatus method and recording medium thereof
US10796221B2 (en) * 2017-10-19 2020-10-06 General Electric Company Deep learning architecture for automated image feature extraction
CN110249365B (zh) * 2017-11-10 2023-05-30 上海联影医疗科技股份有限公司 用于图像重建的***和方法
US10573031B2 (en) * 2017-12-06 2020-02-25 Siemens Healthcare Gmbh Magnetic resonance image reconstruction with deep reinforcement learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9324133B2 (en) * 2012-01-04 2016-04-26 Sharp Laboratories Of America, Inc. Image content enhancement using a dictionary technique
CN105405098A (zh) * 2015-10-29 2016-03-16 西北工业大学 一种基于稀疏表示和自适应滤波的图像超分辨率重建方法
CN106127684A (zh) * 2016-06-22 2016-11-16 中国科学院自动化研究所 基于双向递归卷积神经网络的图像超分辨率增强方法
CN106204449A (zh) * 2016-07-06 2016-12-07 安徽工业大学 一种基于对称深度网络的单幅图像超分辨率重建方法
CN106780333A (zh) * 2016-12-14 2017-05-31 深圳市华星光电技术有限公司 一种图像超分辨率重建方法

Also Published As

Publication number Publication date
PL3506198T3 (pl) 2022-11-14
EP3506198B1 (en) 2022-05-25
US10884156B2 (en) 2021-01-05
US20190196051A1 (en) 2019-06-27
EP3506198A1 (en) 2019-07-03
CN109978809B (zh) 2022-02-22
CN109978809A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2019128797A1 (zh) 图像处理方法、装置及计算机可读存储介质
US10984565B2 (en) Image processing method using convolutional neural network, image processing device and storage medium
US10769821B2 (en) Method and device for reconstructing CT image and storage medium
US9778391B2 (en) Systems and methods for multi-view imaging and tomography
US7840053B2 (en) System and methods for tomography image reconstruction
US10213176B2 (en) Apparatus and method for hybrid pre-log and post-log iterative image reconstruction for computed tomography
US10713759B2 (en) Denoising and/or zooming of inspection images
JP2020506742A (ja) 断層撮影再構成に使用するためのデータのディープラーニングに基づく推定
JP7106307B2 (ja) 医用画像診断装置、医用信号復元方法、医用信号復元プログラム、モデル学習方法、モデル学習プログラム、および磁気共鳴イメージング装置
US11307153B2 (en) Method and device for acquiring tomographic image data by oversampling, and control program
JP2007209756A (ja) 断層撮影の画像データセットにおけるノイズリダクション方法
JP2010223963A (ja) コンテナを検査する方法及びシステム
US20190228546A1 (en) Iterative image reconstruction with dynamic suppression of formation of noise-induced artifacts
JP6917356B2 (ja) 主成分材料組み合わせによる分解方法および装置
US20150078506A1 (en) Practical Model Based CT Construction
US10089757B2 (en) Image processing apparatus, image processing method, and non-transitory computer readable storage medium
US9226723B2 (en) Ordered subset scheme in spectral CT
JP7273272B2 (ja) 角度オフセットによる断層画像データの取得方法、取得装置、および制御プログラム
KR102023285B1 (ko) 3차원 영상 재구성 방법
Kong et al. Ordered-subset split-Bregman algorithm for interior tomography
WO2021039211A1 (ja) 機械学習装置、機械学習方法及びプログラム
CN113240617B (zh) 扫描图像重建方法、检查设备及计算机可读存储介质
Shi et al. Multi-stage Deep Learning Artifact Reduction for Computed Tomography
US20220292743A1 (en) Processing Device for Obtaining Gap Filler Sinogram Information, Computer Tomograph, Method and Computer Program
Wirtti et al. A soft-threshold filtering approach for tomography reconstruction from a limited number of projections with bilateral edge preservation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18895903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18895903

Country of ref document: EP

Kind code of ref document: A1