WO2023234171A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2023234171A1
WO2023234171A1 (application PCT/JP2023/019509)
Authority
WO
WIPO (PCT)
Prior art keywords
image
cnn
neural network
convolutional neural
input
Prior art date
Application number
PCT/JP2023/019509
Other languages
French (fr)
Japanese (ja)
Inventor
Yuya Onishi
Fumio Hashimoto
Original Assignee
Hamamatsu Photonics K.K.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hamamatsu Photonics K.K.
Publication of WO2023234171A1 publication Critical patent/WO2023234171A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5258Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01TMEASUREMENT OF NUCLEAR OR X-RADIATION
    • G01T1/00Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
    • G01T1/29Measurement performed on radiation beams, e.g. position or section of the beam; Measurement of spatial distribution of radiation
    • G01T1/2914Measurement of spatial distribution of radiation
    • G01T1/2985In depth localisation, e.g. using positron emitters; Tomographic imaging (longitudinal and transverse section imaging; apparatus for radiation diagnosis sequentially in different planes, steroscopic radiation diagnosis)
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01TMEASUREMENT OF NUCLEAR OR X-RADIATION
    • G01T1/00Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
    • G01T1/29Measurement performed on radiation beams, e.g. position or section of the beam; Measurement of spatial distribution of radiation
    • G01T1/2914Measurement of spatial distribution of radiation
    • G01T1/2992Radioisotope data or image processing not related to a particular imaging system; Off-line processing of pictures, e.g. rescanners
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/60Image enhancement or restoration using machine learning, e.g. neural networks
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • A61B6/037Emission tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present disclosure relates to an image processing device and an image processing method that reduce noise in a target image to create a noise-reduced image.
  • the image may contain noise.
  • An example of an image containing noise is a tomographic image of a subject reconstructed based on information acquired by a radiation tomography apparatus.
  • Radiation tomography devices include PET (Positron Emission Tomography) devices and SPECT (Single Photon Emission Computed Tomography) devices.
  • the PET apparatus is equipped with a detection section having a large number of small radiation detectors arranged around a measurement space in which a subject is placed.
  • the PET apparatus detects, by the coincidence method in the detection section, pairs of 511 keV photons generated by electron-positron pair annihilation in a subject to whom a positron-emitting isotope (RI source) has been administered, and collects this coincidence information.
  • Such a PET apparatus plays an important role in the field of nuclear medicine and the like, and can be used to conduct research on, for example, biological functions and higher-order functions of the brain.
  • since the reconstructed tomographic image of the subject contains a lot of statistical noise, it is required to reduce the noise in this tomographic image. Furthermore, not only for such PET images (tomographic images), there are cases where it is required to create a noise-reduced image by reducing noise in a target image containing noise.
  • CNN: convolutional neural network
  • DIP technique: Deep Image Prior technique
  • the DIP technique exploits the property of CNNs that meaningful structures in an image are learned faster than random noise (that is, random noise is less likely to be learned), so that a noise-reduced image can be created by unsupervised learning.
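  • As a hedged illustration of this principle (a sketch, not the patent's implementation; PyTorch is assumed and `cnn` stands for any convolutional network such as the U-net described later), a minimal DIP-style denoising loop looks like this:

```python
import torch
import torch.nn.functional as F

def dip_denoise(cnn, noisy_target, num_iters=2000, lr=1e-3):
    """Unsupervised DIP-style denoising: fit the CNN output to the noisy
    target and rely on meaningful structure being learned before random
    noise, stopping before the noise is reproduced. `noisy_target` is a
    (1, C, H, W) tensor; the iteration count is a tuning parameter."""
    z = torch.randn_like(noisy_target)  # fixed network input (an MRI or CT
                                        # image of the subject could be used
                                        # instead, per the text below)
    optimizer = torch.optim.Adam(cnn.parameters(), lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        loss = F.mse_loss(cnn(z), noisy_target)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return cnn(z)  # the noise-reduced estimate
```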
  • Non-Patent Document 1 describes a technique for reducing noise in a target image (PET image) using the DIP technique.
  • the technique described in this document performs noise reduction processing in two steps.
  • in the first step, for each of a plurality of sets of a first input image (MRI image) and a teacher image (PET image), the first input image is input to the CNN, and a first output image is created by the CNN.
  • the CNN is trained (supervised pre-learning) based on the evaluation result of the error between the first output image and the teacher image.
  • in the subsequent second step, a second input image (MRI image) is input to the CNN pre-trained in the first step, a second output image is created by the CNN, and the CNN is further trained (unsupervised learning) based on the evaluation result of the error between this second output image and the target image (PET image). The second output image is then taken as a noise-reduced image in which the noise of the target image (PET image) has been reduced.
  • in the noise reduction technique described in Non-Patent Document 1, the CNN is trained in the first step so that it converts the first input image into the teacher image (supervised pre-learning), and in the second step the CNN is further trained (unsupervised learning) to create a noise-reduced image in which the noise of the target image is reduced.
  • by doing so, the noise reduction technique described in Non-Patent Document 1 is said to be able to improve noise reduction performance compared with the DIP technique alone (unsupervised learning alone).
  • the noise reduction technique of Non-Patent Document 1 requires preparing not only a large number of teacher images but also a large number of first input images in order to perform the supervised pre-learning of the first step. However, it is not easy to prepare a large number of first input images separately from preparing a large number of teacher images.
  • An object of the present invention is to provide an image processing device and an image processing method that can easily reduce noise in a target image by performing unsupervised learning after supervised pre-learning on a CNN.
  • An embodiment of the present invention is an image processing device.
  • the image processing device is an image processing device that reduces noise in a target image to create a noise-reduced image, and includes (1) a first CNN processing unit that inputs, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creates a first output image by the convolutional neural network; (2) a first CNN learning unit that evaluates an error between the first output image and the teacher image and trains the convolutional neural network based on the error evaluation result; (3) a second CNN processing unit that inputs a second input image to the convolutional neural network and creates a second output image by the convolutional neural network; and (4) a second CNN learning unit that evaluates an error between the second output image and the target image and trains the convolutional neural network based on the error evaluation result.
  • after the processing of the first CNN processing unit and the first CNN learning unit is repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processing of the second CNN processing unit and the second CNN learning unit is repeated multiple times, and the second output image is taken as the noise-reduced image.
  • An embodiment of the present invention is an image processing method.
  • the image processing method is an image processing method that reduces noise in a target image to create a noise-reduced image, and includes (1) a first CNN processing step of inputting, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creating a first output image by the convolutional neural network; (2) a first CNN learning step of evaluating an error between the first output image and the teacher image and training the convolutional neural network based on the error evaluation result; (3) a second CNN processing step of inputting a second input image to the convolutional neural network and creating a second output image by the convolutional neural network; and (4) a second CNN learning step of evaluating an error between the second output image and the target image and training the convolutional neural network based on the error evaluation result.
  • after the processes of the first CNN processing step and the first CNN learning step are repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processes of the second CNN processing step and the second CNN learning step are repeated multiple times, and the second output image is taken as the noise-reduced image.
  • FIG. 1 is a diagram showing the configuration of an image processing device 1.
  • FIG. 2 is a flowchart of the image processing method.
  • FIG. 3 is a diagram showing a configuration example of the CNN.
  • FIGS. 4(a) to 4(c) are diagrams showing input images (MRI images) to the CNN.
  • FIGS. 5(a) to 5(c) are diagrams showing phantom images (correct images).
  • FIGS. 6(a) to 6(c) are diagrams showing tomographic images (target images) simulating a head that has a tumor and to which 18F-FDG was administered as a drug.
  • FIGS. 7(a) to 7(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
  • FIGS. 8(a) to 8(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
  • FIGS. 9(a) to 9(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
  • FIG. 10 is a table summarizing the PSNR, SSIM, and CNR values of tomographic images after noise reduction processing by the image processing methods of the Example and Comparative Examples 1 and 2.
  • FIGS. 11(a) to 11(c) are diagrams showing input images (MRI images) to the CNN.
  • FIGS. 12(a) to 12(c) are diagrams showing tomographic images (target images) of a head to which 18F-AV45 was administered as a drug.
  • FIGS. 13(a) to 13(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
  • FIGS. 14(a) to 14(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
  • FIGS. 15(a) to 15(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
  • FIGS. 16(a) and 16(b) are diagrams showing input images (MRI images) to the CNN.
  • FIGS. 17(a) and 17(b) are diagrams showing tomographic images (target images) of a head to which 11C-PIB was administered as a drug.
  • FIGS. 18(a) and 18(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
  • FIGS. 19(a) and 19(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
  • FIGS. 20(a) and 20(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
  • FIGS. 21(a) and 21(b) are diagrams showing input images (MRI images) to the CNN.
  • FIGS. 22(a) and 22(b) are diagrams showing tomographic images (target images) of a head to which 18F-FDG was administered as a drug.
  • FIGS. 23(a) and 23(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
  • FIGS. 24(a) and 24(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
  • FIGS. 25(a) and 25(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
  • FIG. 26 is a table summarizing the CNR values of tomographic images after noise reduction processing by the image processing methods of the Example and Comparative Examples 1 and 2.
  • FIG. 1 is a diagram showing the configuration of an image processing device 1.
  • the image processing device 1 includes an input image creation section 10, a first calculation section 20, and a second calculation section 30, and reduces noise in a target image 52 to create a noise-reduced image.
  • a PET image of the head is shown as an example of each of the first input image 40, first output image 41, teacher image 42, second output image 51, and target image 52, and an MRI image of the head is shown as an example of the second input image 50.
  • the present embodiment is suitable for reducing noise in a target image that is a tomographic image (for example, a PET image) of a subject reconstructed based on information acquired by a radiation tomography apparatus.
  • These images may be two-dimensional images or three-dimensional images.
  • the image processing device 1 includes a GPU (Graphics Processing Unit) that performs the processing using the CNN, an input unit (for example, a keyboard and mouse) that receives input from an operator, a display unit (for example, a liquid crystal display) that displays images and the like, and a storage unit that stores programs and data for executing the various processes.
  • as the image processing device 1, a computer having a CPU, RAM, ROM, a hard disk drive, and the like is used.
  • the input image creation unit 10 creates a first input image 40 in which the pixel values of some regions are changed based on the teacher image 42.
  • the first calculation unit 20 includes a first CNN processing unit 21 and a first CNN learning unit 22, and performs supervised pre-learning processing.
  • the first CNN processing unit 21 inputs the first input image 40 to the CNN and creates a first output image 41 using the CNN.
  • the first CNN learning unit 22 evaluates the error between the first output image 41 and the teacher image 42 and trains the CNN based on the error evaluation result.
  • the first calculation unit 20 repeatedly performs the processing of the first CNN processing unit 21 and the first CNN learning unit 22 multiple times for each of the plurality of sets of the teacher image 42 and the first input image 40.
  • the second calculation unit 30 includes a second CNN processing unit 31 and a second CNN learning unit 32, and performs unsupervised learning processing.
  • the second CNN processing unit 31 inputs the second input image 50 to the CNN and creates a second output image 51 using the CNN.
  • the second CNN learning unit 32 evaluates the error between the second output image 51 and the target image 52 and trains the CNN based on the error evaluation result.
  • after the repetitive processing of the first calculation unit 20 ends, the second calculation unit 30 repeats the processing of the second CNN processing unit 31 and the second CNN learning unit 32 multiple times, using the learning state of the CNN at the end of the pre-learning as the initial value. The second output image 51 at the end of the repetitive processing of the second calculation unit 30 is taken as the noise-reduced image.
  • FIG. 2 is a flowchart of the image processing method.
  • the image processing method reduces noise in the target image 52 and creates a noise-reduced image by sequentially performing input image creation step S10, first calculation step S20, and second calculation step S30.
  • the input image creation step S10 is a process performed by the input image creation section 10.
  • a first input image 40 is created with the pixel values of some regions changed based on the teacher image 42.
  • the first calculation step S20 is a supervised pre-learning process performed by the first calculation unit 20, and includes a first CNN processing step S21 and a first CNN learning step S22.
  • the first CNN processing step S21 is a process performed by the first CNN processing unit 21.
  • the first CNN learning step S22 is a process performed by the first CNN learning section 22.
  • in the first CNN processing step S21, the first input image 40 is input to the CNN, and the first output image 41 is created by the CNN.
  • in the first CNN learning step S22, the error between the first output image 41 and the teacher image 42 is evaluated, and the CNN is trained based on the error evaluation result.
  • in the first calculation step S20, the processes of the first CNN processing step S21 and the first CNN learning step S22 are repeated multiple times for each of the plurality of sets of the teacher image 42 and the first input image 40.
  • the second calculation step S30 is an unsupervised learning process performed by the second calculation unit 30, and includes a second CNN processing step S31 and a second CNN learning step S32.
  • the second CNN processing step S31 is a process performed by the second CNN processing unit 31.
  • the second CNN learning step S32 is a process performed by the second CNN learning section 32.
  • in the second CNN processing step S31, the second input image 50 is input to the CNN, and the second output image 51 is created by the CNN.
  • in the second CNN learning step S32, the error between the second output image 51 and the target image 52 is evaluated, and the CNN is trained based on the error evaluation result.
  • in the second calculation step S30, after the repeated processing in the first calculation step S20 ends, the processes of the second CNN processing step S31 and the second CNN learning step S32 are repeated multiple times, using the learning state of the CNN at the end of the first calculation step as the initial value. The second output image 51 at the end of the repetitive processing in the second calculation step S30 is taken as the noise-reduced image.
  • FIG. 3 is a diagram showing an example of the configuration of CNN.
  • the CNN preferably has a U-net structure including an encoder and a decoder. The figure shows the size of each layer of the CNN, assuming that the input image to the CNN has N×N×64 pixels.
  • the first CNN learning unit 22 trains both the CNN encoder and decoder.
  • the second CNN learning unit 32 may train both the encoder and the decoder of the CNN, but to shorten the time required for learning it is preferable that it selectively trains only the decoder of the CNN.
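  • A hedged sketch of this selective training in PyTorch (the submodule names `encoder` and `decoder` are assumptions for illustration): freeze the encoder parameters after pre-learning and give the second-stage optimizer only the decoder parameters.

```python
import torch
import torch.nn as nn

class EncoderDecoderCNN(nn.Module):
    """Toy stand-in for the U-net of FIG. 3: one encoder stage and one
    decoder stage (a real U-net adds skip connections and more levels)."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

cnn = EncoderDecoderCNN()

# Supervised pre-learning: the encoder and the decoder are both trained.
pre_optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)

# Unsupervised learning: freeze the encoder and update only the decoder.
for p in cnn.encoder.parameters():
    p.requires_grad = False
fine_optimizer = torch.optim.Adam(cnn.decoder.parameters(), lr=1e-4)
```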
  • any function may be used to evaluate the error between two images.
  • as the error evaluation function, for example, the L1 norm, the L2 norm, or the negative log-likelihood under a Poisson distribution can be used.
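  • For example, the three error evaluation functions named above could be written as follows (a sketch; the Poisson term assumes strictly positive outputs and drops the constant log k! term):

```python
import torch

def l1_error(output, reference):
    return torch.abs(output - reference).mean()   # L1 norm

def l2_error(output, reference):
    return ((output - reference) ** 2).mean()     # L2 norm

def poisson_nll(output, reference, eps=1e-8):
    # Negative log-likelihood under a Poisson model with mean `output`
    # and observed counts `reference`, up to an additive constant.
    return (output - reference * torch.log(output + eps)).mean()
```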
  • each image is as follows.
  • the target image 52 is an image whose noise is to be reduced, and here is a PET image of the head.
  • a PET image generally includes an image region of the head, which is a region to be subjected to noise reduction (noise reduction target region), and a background region around the head region.
  • the teacher image 42 is a PET image of the head similar to the target image 52.
  • a plurality of teacher images 42 may be prepared, or one teacher image 42 may be prepared.
  • the target image 52 itself may be used as the teacher image 42.
  • the first input image 40 is an image in which the pixel values of a partial region have been changed based on the teacher image 42; preferably, the pixel values of a partial region within the noise reduction target region are changed.
  • a plurality of first input images 40 may be prepared for each teacher image 42.
  • the pixel values of the first input image 40 with respect to the teacher image 42 can be changed in any manner.
  • the partial region whose pixel values are changed may have any shape, size, and number. The change of pixel values in the partial region may be, for example, nonlinear conversion of pixel values, exchange of pixel values between multiple pixels, replacement with fixed pixel values, or replacement with random pixel values.
  • preferably, the manner of changing pixel values (for example, the shape, size, and number of the partial regions whose pixel values are changed, and the method of changing the pixel values) differs among the plurality of first input images 40.
  • the input image creation unit 10 creates such a first input image 40 based on the teacher image 42.
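  • As one hedged illustration of such an input image creation unit (random replacement over rectangular regions; the region shape, size, count, and change mode are all free design choices per the text above, and the function name is an assumption):

```python
import numpy as np

def create_first_input_image(teacher, num_regions=4, region_size=16,
                             rng=None):
    """Return a copy of the 2-D array `teacher` in which the pixel values
    of a few rectangular regions are replaced by random values drawn from
    the teacher image's own intensity range. Other change modes (nonlinear
    conversion, swapping pixels, fixed values) would work similarly."""
    rng = rng or np.random.default_rng()
    image = teacher.copy()
    h, w = image.shape                      # assumes h, w > region_size
    lo, hi = float(teacher.min()), float(teacher.max())
    for _ in range(num_regions):
        y = int(rng.integers(0, h - region_size + 1))
        x = int(rng.integers(0, w - region_size + 1))
        image[y:y + region_size, x:x + region_size] = rng.uniform(
            lo, hi, size=(region_size, region_size))
    return image
```

  • Calling such a function several times per teacher image with different random seeds yields a plurality of first input images whose change modes differ from one another, as used in the pre-learning.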
  • the second input image 50 may be an image representing morphological information of the subject, and may be, for example, an MRI image as shown in FIG. 1, a CT image, or a static PET image.
  • the second input image 50 may be a random noise image.
  • in the simulation evaluation shown in FIGS. 4 to 10, phantom images obtained from BrainWeb (https://brainweb.bic.mni.mcgill.ca/brainweb/) were used.
  • the image processing method of the example is based on the above embodiment.
  • the image processing method of Comparative Example 1 is based on conventional DIP technology.
  • the image processing method of Comparative Example 2 is based on the technique described in Non-Patent Document 1.
  • FIGS. 4 to 9 are diagrams showing images used or created in the unsupervised learning of the image processing methods of the Example and Comparative Examples 1 and 2.
  • FIG. 4 is a diagram showing an input image (MRI image) to CNN.
  • FIG. 5 is a diagram showing a phantom image (correct image).
  • FIG. 6 is a diagram showing a tomographic image (target image) simulating a head having a tumor and to which 18F-FDG was administered as a drug.
  • FIG. 7 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 1.
  • FIG. 8 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 2.
  • FIG. 9 is a diagram showing a tomographic image after noise reduction processing by the image processing method of the example.
  • in each of FIGS. 4 to 9, (a) is an image of a transverse section, (b) is an image of a coronal section, and (c) is an image of a sagittal section.
  • FIG. 10 is a table summarizing the values of PSNR, SSIM, and CNR of tomographic images after noise reduction processing by the image processing methods of Example and Comparative Examples 1 and 2.
  • PSNR: Peak Signal-to-Noise Ratio
  • SSIM: Structural Similarity Index
  • CNR: Contrast-to-Noise Ratio
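  • As a hedged sketch of how these metrics can be computed (the exact region masks used for the figures are not given here, so the masks below are placeholders; CNR definitions vary between studies, and this is one common form):

```python
import numpy as np

def psnr(image, reference):
    """Peak signal-to-noise ratio (dB) of `image` against the correct image."""
    mse = np.mean((image - reference) ** 2)
    return 10.0 * np.log10(float(reference.max()) ** 2 / mse)

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio between a region of interest (e.g. a tumor)
    and a background region, using boolean index masks."""
    roi, bg = image[roi_mask], image[background_mask]
    return (roi.mean() - bg.mean()) / bg.std()
```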
  • tomographic images of clinical data were used as teacher images. These tomographic images are PET images of the head to which 18F-AV45 was administered as a drug.
  • 32 first input images were used, which were created with various pixel value change modes for each teacher image.
  • an MRI image was used as an input image to the CNN.
  • as the target image, a tomographic image of a head to which any of 18F-AV45, 11C-PIB, and 18F-FDG had been administered as a drug was used.
  • MRI images were used as input images to the CNN.
  • FIGS. 11 to 15 are diagrams showing tomographic images used or created in the unsupervised learning of the image processing methods of the Example and Comparative Examples 1 and 2, when a tomographic image of a head to which 18F-AV45 was administered as a drug was used as the target image.
  • FIG. 11 is a diagram showing an input image (MRI image) to CNN.
  • FIG. 12 is a diagram showing a tomographic image (target image) of the head to which 18F-AV45 was administered as a drug.
  • FIG. 13 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 1.
  • FIG. 14 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 2.
  • FIG. 15 is a diagram showing a tomographic image after noise reduction processing by the image processing method of the example.
  • in each of FIGS. 11 to 15, (a) is an image of a transverse section, (b) is an image of a coronal section, and (c) is an image of a sagittal section.
  • FIGS. 16 to 20 are diagrams showing tomographic images used or created in the unsupervised learning of the image processing methods of the Example and Comparative Examples 1 and 2, when a tomographic image of a head to which 11C-PIB was administered as a drug was used as the target image.
  • FIG. 16 is a diagram showing an input image (MRI image) to CNN.
  • FIG. 17 is a diagram showing a tomographic image (target image) of a head to which 11C-PIB was administered as a drug.
  • FIG. 18 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 1.
  • FIG. 19 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 2.
  • FIG. 20 is a diagram showing a tomographic image after noise reduction processing by the image processing method of the Example. In each of FIGS. 16 to 20, (a) is an image of a transverse section, and (b) is an image of a sagittal section.
  • FIGS. 21 to 25 are diagrams showing tomographic images used or created in the unsupervised learning of the image processing methods of the Example and Comparative Examples 1 and 2, when a tomographic image of a head to which 18F-FDG was administered as a drug was used as the target image.
  • FIG. 21 is a diagram showing an input image (MRI image) to CNN.
  • FIG. 22 is a diagram showing a tomographic image (target image) of the head to which 18F-FDG was administered as a drug.
  • FIG. 23 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 1.
  • FIG. 24 is a diagram showing a tomographic image after noise reduction processing by the image processing method of Comparative Example 2.
  • FIG. 25 is a diagram showing a tomographic image after noise reduction processing by the image processing method of the Example. In each of FIGS. 21 to 25, (a) is an image of a transverse section, and (b) is an image of a sagittal section.
  • FIG. 26 is a table summarizing the CNR values of tomographic images after noise reduction processing by the image processing methods of the Example and Comparative Examples 1 and 2. This table shows the CNR values when 18F-AV45, 11C-PIB, and 18F-FDG were each administered to the head as drugs.
  • the image processing device and the image processing method are not limited to the embodiments and configuration examples described above, and various modifications are possible.
  • the image processing apparatus of the first aspect is an image processing apparatus that reduces noise in a target image to create a noise-reduced image, and includes: (1) a first CNN processing unit that inputs, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creates a first output image by the convolutional neural network; (2) a first CNN learning unit that evaluates an error between the first output image and the teacher image and trains the convolutional neural network based on the error evaluation result; (3) a second CNN processing unit that inputs a second input image to the convolutional neural network and creates a second output image by the convolutional neural network; and (4) a second CNN learning unit that evaluates an error between the second output image and the target image and trains the convolutional neural network based on the error evaluation result. After the processing of the first CNN processing unit and the first CNN learning unit is repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processing of the second CNN processing unit and the second CNN learning unit is repeated multiple times, and the second output image is taken as the noise-reduced image.
  • the configuration of the first aspect may further include an input image creation unit that creates a first input image in which pixel values of a partial region are changed based on the teacher image.
  • in the configuration described above, the convolutional neural network may have a U-net structure including an encoder and a decoder; the first CNN learning unit may train the encoder and the decoder of the convolutional neural network, and the second CNN learning unit may selectively train the decoder of the convolutional neural network.
  • in the configuration described above, the first CNN processing unit may be configured to input, to the convolutional neural network, each of a plurality of first input images whose pixel value change modes differ from one another for each teacher image, and to create a first output image by the convolutional neural network for each first input image.
  • in the configuration described above, the first CNN processing unit may be configured to input to the convolutional neural network a first input image in which pixel values of a partial region of the noise reduction target region have been changed based on the teacher image.
  • in the configuration described above, the target image and the teacher image may be tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
  • the image processing method of the first aspect is an image processing method that reduces noise in a target image to create a noise-reduced image, and includes: (1) a first CNN processing step of inputting, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creating a first output image by the convolutional neural network; (2) a first CNN learning step of evaluating an error between the first output image and the teacher image and training the convolutional neural network based on the error evaluation result; (3) a second CNN processing step of inputting a second input image to the convolutional neural network and creating a second output image by the convolutional neural network; and (4) a second CNN learning step of evaluating an error between the second output image and the target image and training the convolutional neural network based on the error evaluation result. After the processes of the first CNN processing step and the first CNN learning step are repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processes of the second CNN processing step and the second CNN learning step are repeated multiple times, and the second output image is taken as the noise-reduced image.
  • the configuration of the first aspect may further include an input image creation step of creating a first input image in which pixel values of a partial area are changed based on the teacher image.
  • in the configuration described above, the convolutional neural network may have a U-net structure including an encoder and a decoder; in the first CNN learning step, the encoder and the decoder of the convolutional neural network may be trained, and in the second CNN learning step, the decoder of the convolutional neural network may be selectively trained.
  • in the first CNN processing step, for each teacher image, each of a plurality of first input images whose pixel value change modes differ from one another may be input to the convolutional neural network, and a first output image may be created by the convolutional neural network for each first input image.
  • in the first CNN processing step, a first input image in which pixel values of a partial region of the noise reduction target region have been changed based on the teacher image may be input to the convolutional neural network.
  • in the configuration described above, the target image and the teacher image may be tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
  • the present invention can be used as an image processing device and an image processing method that can easily reduce noise in a target image by performing unsupervised learning after supervised pre-learning on a CNN.
  • Reference signs: 1... image processing device; 10... input image creation unit; 20... first calculation unit; 21... first CNN processing unit; 22... first CNN learning unit; 30... second calculation unit; 31... second CNN processing unit; 32... second CNN learning unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Image Processing (AREA)
  • Nuclear Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

An image processing device (1) comprises an input image creating unit (10), a first computing unit (20), and a second computing unit (30), and reduces noise in a target image (52) to create a noise-reduced image. The first computing unit (20) includes a first CNN processing unit (21) and a first CNN learning unit (22), and performs supervised pre-learning processing. A first input image (40) is an image in which pixel values in some regions have been changed on the basis of a teacher image (42). The second computing unit (30) includes a second CNN processing unit (31) and a second CNN learning unit (32), and performs unsupervised learning processing. As a result, an image processing device is achieved that can easily reduce noise in a target image by subjecting a CNN to supervised pre-learning followed by unsupervised learning.

Description

Image processing device and image processing method
The present disclosure relates to an image processing device and an image processing method that reduce noise in a target image to create a noise-reduced image.
An image may contain noise. An example of an image containing noise is a tomographic image of a subject reconstructed based on information acquired by a radiation tomography apparatus. Radiation tomography apparatuses include PET (Positron Emission Tomography) apparatuses and SPECT (Single Photon Emission Computed Tomography) apparatuses.
The PET apparatus is equipped with a detection section having a large number of small radiation detectors arranged around a measurement space in which a subject is placed. The PET apparatus detects, by the coincidence method in the detection section, pairs of 511 keV photons generated by electron-positron pair annihilation in a subject to whom a positron-emitting isotope (RI source) has been administered, and collects this coincidence information.
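As a rough illustration of coincidence counting (not part of the patent; the event format, energy tolerance, and window width below are assumptions for the example), pairing time-sorted detection events might look like this:

```python
def find_coincidences(events, window_ns=5.0, peak_kev=511.0, tol_kev=50.0):
    """`events` is a list of (timestamp_ns, detector_id, energy_kev)
    tuples sorted by timestamp. Returns pairs of events on different
    detectors that fall within the coincidence window and whose energies
    are near the 511 keV annihilation peak."""
    pairs = []
    for i, (t1, d1, e1) in enumerate(events):
        if abs(e1 - peak_kev) > tol_kev:
            continue
        for t2, d2, e2 in events[i + 1:]:
            if t2 - t1 > window_ns:
                break  # later events are even farther away in time
            if d1 != d2 and abs(e2 - peak_kev) <= tol_kev:
                pairs.append(((t1, d1, e1), (t2, d2, e2)))
    return pairs
```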
Based on the large amount of collected coincidence information, a tomographic image representing the spatial distribution of the frequency of occurrence of photon pairs in the measurement space (that is, the spatial distribution of the RI source) can be reconstructed. Such a PET apparatus plays an important role in the field of nuclear medicine and the like, and can be used to study, for example, biological functions and higher-order brain functions.
Since the reconstructed tomographic image of the subject contains a lot of statistical noise, it is required to reduce the noise in this tomographic image. Furthermore, there are cases where it is required to create a noise-reduced image by reducing noise not only from such a PET image (tomographic image) but also from other target images containing noise.
Various methods are known as noise reduction techniques. Among them, attention has been focused on techniques that reduce noise by the Deep Image Prior technique, which uses a convolutional neural network, a type of deep neural network. Hereinafter, a convolutional neural network is referred to as a "CNN", and the Deep Image Prior technique is referred to as the "DIP technique".
The DIP technique exploits the property of CNNs that meaningful structures in an image are learned faster than random noise (that is, random noise is less likely to be learned), so that a noise-reduced image can be created by unsupervised learning.
Non-Patent Document 1 describes a technique for reducing noise in a target image (PET image) by the DIP technique. The technique described in this document performs noise reduction processing in two steps. In the first step, for each of a plurality of sets of a first input image (MRI image) and a teacher image (PET image), the first input image is input to a CNN, a first output image is created by the CNN, and the CNN is trained (supervised pre-learning) based on the evaluation result of the error between the first output image and the teacher image.
In the subsequent second step, a second input image (MRI image) is input to the CNN pre-trained in the first step, a second output image is created by the CNN, and the CNN is further trained (unsupervised learning) based on the evaluation result of the error between this second output image and the target image (PET image). The second output image is then taken as a noise-reduced image in which the noise of the target image (PET image) has been reduced.
In short, in the noise reduction technique described in Non-Patent Document 1, the CNN is trained in the first step so that it converts the first input image into the teacher image (supervised pre-learning), and in the second step the CNN is further trained (unsupervised learning) to create a noise-reduced image in which the noise of the target image is reduced. By doing so, the noise reduction technique described in Non-Patent Document 1 is said to be able to improve noise reduction performance compared with the DIP technique alone (unsupervised learning alone).
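To make the two steps concrete, the following is a hedged sketch of the flow in PyTorch (the network, the training pairs, the L2 loss, and the iteration counts are illustrative assumptions, not details taken from Non-Patent Document 1):

```python
import torch
import torch.nn.functional as F

def two_step_denoise(cnn, pretrain_pairs, second_input, target,
                     pre_epochs=100, dip_iters=2000):
    """Step 1: supervised pre-learning on (first input image, teacher
    image) pairs. Step 2: unsupervised DIP learning against the noisy
    target, continuing from the step-1 weights. Returns the final second
    output image, i.e. the noise-reduced image."""
    optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    for _ in range(pre_epochs):                        # step 1
        for first_input, teacher in pretrain_pairs:
            optimizer.zero_grad()
            loss = F.mse_loss(cnn(first_input), teacher)
            loss.backward()
            optimizer.step()
    # Step 2 starts from the pre-trained weights; the optimizer could
    # also be restricted to the decoder parameters, as described later.
    optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-4)
    for _ in range(dip_iters):
        optimizer.zero_grad()
        loss = F.mse_loss(cnn(second_input), target)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return cnn(second_input)
```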
To perform the supervised pre-learning of the first step, the noise reduction technique described in Non-Patent Document 1 requires preparing not only a large number of teacher images but also a large number of first input images. However, it is not easy to prepare a large number of first input images separately from preparing a large number of teacher images.
An object of the present invention is to provide an image processing device and an image processing method that can easily reduce noise in a target image by performing unsupervised learning after supervised pre-learning on a CNN.
An embodiment of the present invention is an image processing device. The image processing device is an image processing device that reduces noise in a target image to create a noise-reduced image, and includes: (1) a first CNN processing unit that inputs, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creates a first output image by the convolutional neural network; (2) a first CNN learning unit that evaluates an error between the first output image and the teacher image and trains the convolutional neural network based on the error evaluation result; (3) a second CNN processing unit that inputs a second input image to the convolutional neural network and creates a second output image by the convolutional neural network; and (4) a second CNN learning unit that evaluates an error between the second output image and the target image and trains the convolutional neural network based on the error evaluation result. After the processing of the first CNN processing unit and the first CNN learning unit is repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processing of the second CNN processing unit and the second CNN learning unit is repeated multiple times, and the second output image is taken as the noise-reduced image.
An embodiment of the present invention is an image processing method. The image processing method is an image processing method that reduces noise in a target image to create a noise-reduced image, and includes: (1) a first CNN processing step of inputting, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creating a first output image by the convolutional neural network; (2) a first CNN learning step of evaluating an error between the first output image and the teacher image and training the convolutional neural network based on the error evaluation result; (3) a second CNN processing step of inputting a second input image to the convolutional neural network and creating a second output image by the convolutional neural network; and (4) a second CNN learning step of evaluating an error between the second output image and the target image and training the convolutional neural network based on the error evaluation result. After the processes of the first CNN processing step and the first CNN learning step are repeated multiple times for each of a plurality of sets of the teacher image and the first input image, the processes of the second CNN processing step and the second CNN learning step are repeated multiple times, and the second output image is taken as the noise-reduced image.
According to the embodiments of the present invention, the input image to a convolutional neural network (CNN) used for the supervised pre-learning of the CNN can easily be created based on the teacher image, and the noise in a target image can easily be reduced by performing unsupervised learning after the supervised pre-learning on the CNN.
FIG. 1 is a diagram showing the configuration of an image processing device 1.
FIG. 2 is a flowchart of the image processing method.
FIG. 3 is a diagram showing a configuration example of the CNN.
FIGS. 4(a) to 4(c) are diagrams showing input images (MRI images) to the CNN.
FIGS. 5(a) to 5(c) are diagrams showing phantom images (correct images).
FIGS. 6(a) to 6(c) are diagrams showing tomographic images (target images) simulating a head that has a tumor and to which 18F-FDG was administered as a drug.
FIGS. 7(a) to 7(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
FIGS. 8(a) to 8(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
FIGS. 9(a) to 9(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
FIG. 10 is a table summarizing the PSNR, SSIM, and CNR values of tomographic images after noise reduction processing by the image processing methods of the Example and Comparative Examples 1 and 2.
FIGS. 11(a) to 11(c) are diagrams showing input images (MRI images) to the CNN.
FIGS. 12(a) to 12(c) are diagrams showing tomographic images (target images) of a head to which 18F-AV45 was administered as a drug.
FIGS. 13(a) to 13(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
FIGS. 14(a) to 14(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
FIGS. 15(a) to 15(c) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
FIGS. 16(a) and 16(b) are diagrams showing input images (MRI images) to the CNN.
FIGS. 17(a) and 17(b) are diagrams showing tomographic images (target images) of a head to which 11C-PIB was administered as a drug.
FIGS. 18(a) and 18(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
FIGS. 19(a) and 19(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
FIGS. 20(a) and 20(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
FIGS. 21(a) and 21(b) are diagrams showing input images (MRI images) to the CNN.
FIGS. 22(a) and 22(b) are diagrams showing tomographic images (target images) of a head to which 18F-FDG was administered as a drug.
FIGS. 23(a) and 23(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 1.
FIGS. 24(a) and 24(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of Comparative Example 2.
FIGS. 25(a) and 25(b) are diagrams showing tomographic images after noise reduction processing by the image processing method of the Example.
FIG. 26 is a table summarizing the CNR values of tomographic images after noise reduction processing by the image processing methods of the Example and Comparative Examples 1 and 2.
Hereinafter, embodiments of an image processing device and an image processing method will be described in detail with reference to the accompanying drawings. In the description of the drawings, the same elements are denoted by the same reference signs, and redundant description is omitted. The present invention is not limited to these examples; it is indicated by the claims and is intended to include all modifications within the meaning and scope equivalent to the claims.
FIG. 1 is a diagram showing the configuration of an image processing device 1. The image processing device 1 includes an input image creation unit 10, a first calculation unit 20, and a second calculation unit 30, and reduces noise in a target image 52 to create a noise-reduced image.
In this figure, a PET image of the head is shown as an example of each of the first input image 40, the first output image 41, the teacher image 42, the second output image 51, and the target image 52, and an MRI image of the head is shown as an example of the second input image 50. The following description is based on this premise. However, the present embodiment is not limited to this.
The present embodiment is suitable for reducing noise in a target image that is a tomographic image (for example, a PET image) of a subject reconstructed based on information acquired by a radiation tomography apparatus. These images may be two-dimensional images or three-dimensional images.
The image processing device 1 includes a GPU (Graphics Processing Unit) that performs the processing using the CNN, an input unit (for example, a keyboard and mouse) that receives input from an operator, a display unit (for example, a liquid crystal display) that displays images and the like, and a storage unit that stores programs and data for executing the various processes. As the image processing device 1, a computer having a CPU, RAM, ROM, a hard disk drive, and the like is used.
The input image creation unit 10 creates a first input image 40 in which the pixel values of a partial region have been changed based on the teacher image 42.
The first calculation unit 20 includes a first CNN processing unit 21 and a first CNN learning unit 22, and performs the supervised pre-learning processing. The first CNN processing unit 21 inputs the first input image 40 to the CNN and creates a first output image 41 by the CNN. The first CNN learning unit 22 evaluates the error between the first output image 41 and the teacher image 42 and trains the CNN based on the error evaluation result.
The first calculation unit 20 repeats the processing of the first CNN processing unit 21 and the first CNN learning unit 22 multiple times for each of the plurality of sets of the teacher image 42 and the first input image 40.
The second calculation unit 30 includes a second CNN processing unit 31 and a second CNN learning unit 32, and performs the unsupervised learning processing. The second CNN processing unit 31 inputs the second input image 50 to the CNN and creates a second output image 51 by the CNN. The second CNN learning unit 32 evaluates the error between the second output image 51 and the target image 52 and trains the CNN based on the error evaluation result.
After the repetitive processing of the first calculation unit 20 ends, the second calculation unit 30 repeats the processing of the second CNN processing unit 31 and the second CNN learning unit 32 multiple times, using the learning state of the CNN at the end of the pre-learning as the initial value. The second output image 51 at the end of the repetitive processing of the second calculation unit 30 is taken as the noise-reduced image.
FIG. 2 is a flowchart of the image processing method. The image processing method reduces noise in the target image 52 and creates a noise-reduced image by sequentially performing an input image creation step S10, a first calculation step S20, and a second calculation step S30.
The input image creation step S10 is a process performed by the input image creation unit 10. In the input image creation step S10, a first input image 40 in which the pixel values of a partial region have been changed based on the teacher image 42 is created.
The first calculation step S20 is the supervised pre-learning process performed by the first calculation unit 20, and includes a first CNN processing step S21 and a first CNN learning step S22. The first CNN processing step S21 is a process performed by the first CNN processing unit 21. The first CNN learning step S22 is a process performed by the first CNN learning unit 22.
 In the first CNN processing step S21, the first input image 40 is input to the CNN, which produces the first output image 41. In the first CNN learning step S22, the error between the first output image 41 and the teacher image 42 is evaluated and the CNN is trained based on the result of that error evaluation. In the first calculation step S20, steps S21 and S22 are repeated multiple times for each of the plurality of pairs of a teacher image 42 and a first input image 40.
 The second calculation step S30 is the unsupervised learning performed by the second calculation unit 30, and includes a second CNN processing step S31 and a second CNN learning step S32. Step S31 is performed by the second CNN processing unit 31, and step S32 is performed by the second CNN learning unit 32.
 In the second CNN processing step S31, the second input image 50 is input to the CNN, which produces the second output image 51. In the second CNN learning step S32, the error between the second output image 51 and the target image 52 is evaluated and the CNN is trained based on the result of that error evaluation. In the second calculation step S30, after the iterative processing of the first calculation step S20 has finished, the learning state of the CNN at that point is taken as the initial value, and steps S31 and S32 are repeated multiple times. The second output image 51 at the end of this iterative processing is taken as the noise-reduced image.
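 For illustration, the two-stage procedure above can be summarized in a PyTorch sketch. This is a minimal sketch, not the implementation of the embodiment; the network cnn, the data variables, and the hyperparameters (loss function choice, learning rate, iteration counts) are assumptions introduced here for clarity.

    import torch

    def two_stage_denoise(cnn, pairs, second_input, target,
                          n_pre=100, n_dip=500, lr=1e-4):
        # pairs        : list of (first input image, teacher image) tensor pairs
        # second_input : e.g. an MRI image (or random noise) fed to the CNN
        # target       : the noisy target image whose noise is to be reduced
        loss_fn = torch.nn.MSELoss()   # any error evaluation function may be used
        opt = torch.optim.Adam(cnn.parameters(), lr=lr)

        # First calculation step S20: supervised pre-training, repeated
        # multiple times over every (first input, teacher) pair.
        for _ in range(n_pre):
            for first_input, teacher in pairs:
                opt.zero_grad()
                loss_fn(cnn(first_input), teacher).backward()  # S21 + S22
                opt.step()

        # Second calculation step S30: unsupervised learning, starting from
        # the learning state reached at the end of the pre-training.
        for _ in range(n_dip):
            opt.zero_grad()
            loss_fn(cnn(second_input), target).backward()      # S31 + S32
            opt.step()

        return cnn(second_input).detach()  # noise-reduced image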
 FIG. 3 shows an example configuration of the CNN. As shown in the figure, the CNN preferably has a U-net structure including an encoder and a decoder. The figure shows the size of each layer of the CNN, taking the input image to the CNN to have N×N×64 pixels.
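 Since FIG. 3 cannot be reproduced here, the following is a rough PyTorch sketch of an encoder-decoder U-net with skip connections. The depth and channel counts are assumptions for illustration and do not reproduce the layer sizes shown in FIG. 3; only the input channel count of 64 follows the N×N×64 input mentioned above.

    import torch
    from torch import nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

    class UNet(nn.Module):
        # Minimal U-net: one encoder level, one bottleneck, one decoder level.
        def __init__(self, c=64):
            super().__init__()
            self.enc1 = conv_block(c, c)
            self.enc2 = conv_block(c, 2 * c)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
            self.dec1 = conv_block(2 * c, c)  # concatenated skip + upsampled
            self.out = nn.Conv2d(c, c, 1)

        def forward(self, x):                 # x: (batch, 64, N, N), N even
            e1 = self.enc1(x)                 # encoder
            e2 = self.enc2(self.pool(e1))     # bottleneck
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # decoder + skip
            return self.out(d1)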
 In the first CNN learning step S22, the first CNN learning unit 22 trains both the encoder and the decoder of the CNN. In the second CNN learning step S32, the second CNN learning unit 32 may likewise train both the encoder and the decoder, but in order to shorten the time required for learning, it is preferable to selectively train only the decoder of the two.
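 Selective training of the decoder can be realized, for example, by freezing the encoder parameters before the second stage begins. A sketch, assuming the hypothetical UNet above with its enc1/enc2 modules:

    # Freeze the encoder so that only the decoder is updated in stage 2.
    unet = UNet()
    for module in (unet.enc1, unet.enc2):
        for p in module.parameters():
            p.requires_grad = False

    # Rebuild the optimizer over the remaining trainable (decoder) parameters.
    opt = torch.optim.Adam(
        (p for p in unet.parameters() if p.requires_grad), lr=1e-4)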
 In the first CNN learning step S22 and the second CNN learning step S32, the first CNN learning unit 22 and the second CNN learning unit 32 may each use any function to evaluate the error between two images. For example, the L1 norm, the L2 norm, or the negative log-likelihood under a Poisson distribution can be used as the error evaluation function.
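 All three error evaluation functions mentioned above are available directly in PyTorch; which one the embodiment actually uses is left open, so the selection below is only an illustration:

    l1 = torch.nn.L1Loss()    # L1 norm
    l2 = torch.nn.MSELoss()   # (squared) L2 norm
    # Negative log-likelihood under a Poisson distribution; log_input=False
    # treats the network output as the Poisson rate itself, not its logarithm.
    pnll = torch.nn.PoissonNLLLoss(log_input=False)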
 In the image processing device 1 and the image processing method, the images are as follows. The target image 52 is the image whose noise is to be reduced; here it is a PET image of the head. Such a PET image generally consists of an image region of the head, which is the region in which noise should be reduced (the noise reduction target region), and a background region surrounding the head region.
 The teacher image 42 is a PET image of the head similar to the target image 52. A plurality of teacher images 42 may be prepared, or a single teacher image 42 may be prepared. The target image 52 itself may also be used as the teacher image 42.
 The first input image 40 is created by changing the pixel values of a partial region of the teacher image 42, preferably a partial region within the noise reduction target region. A plurality of first input images 40 may be prepared for each teacher image 42.
 The pixel values of the first input image 40 may be changed relative to the teacher image 42 in any manner. The partial regions whose pixel values are changed may be of any shape, any size, and any number. The change of pixel values in a partial region may be, for example, a nonlinear transformation of the pixel values, an exchange of pixel values among multiple pixels, replacement with a constant pixel value, or replacement with random pixel values.
 The manner of changing pixel values (for example, the shape, size, and number of the partial regions whose pixel values are changed, and the method of changing the values) preferably differs among the plurality of first input images 40. In the input image creation step S10, the input image creation unit 10 creates such first input images 40 based on the teacher image 42.
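 For illustration, first input images can be created by perturbing randomly chosen partial regions of a teacher image. The patch count, patch size, and the particular perturbations below are assumptions, not values prescribed by the embodiment:

    import numpy as np

    def make_first_input(teacher, rng, n_regions=8, size=16):
        # Change pixel values in n_regions random square regions of the
        # teacher image (a 2-D numpy array), each region getting one of
        # three of the change modes named above.
        img = teacher.copy()
        h, w = img.shape
        for _ in range(n_regions):
            y = rng.integers(0, h - size)
            x = rng.integers(0, w - size)
            patch = img[y:y + size, x:x + size]
            mode = rng.integers(3)
            if mode == 0:      # replacement with random pixel values
                patch[:] = rng.uniform(patch.min(), patch.max(), patch.shape)
            elif mode == 1:    # replacement with a constant pixel value
                patch[:] = patch.mean()
            else:              # exchange of pixel values between pixels
                patch[:] = rng.permutation(patch.ravel()).reshape(patch.shape)
        return img

    rng = np.random.default_rng(0)
    teacher = np.random.rand(128, 128)  # stand-in for one teacher PET slice
    first_inputs = [make_first_input(teacher, rng) for _ in range(32)]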
 The second input image 50 may be an image representing morphological information of the subject, such as the MRI image shown in FIG. 1, a CT image, or a static PET image. The second input image 50 may also be a random noise image.
 Next, a target image was created by a Monte Carlo simulation of a head PET apparatus using a digital brain phantom image, and noise reduction was performed on it by the image processing methods of the example and of comparative examples 1 and 2; the results are described with reference to FIGS. 4 to 10. The phantom image was obtained from BrainWeb (https://brainweb.bic.mni.mcgill.ca/brainweb/).
 The image processing method of the example is that of the embodiment described above. The image processing method of comparative example 1 is the conventional DIP technique. The image processing method of comparative example 2 is the technique described in Non-Patent Document 1.
 In the supervised pre-training of the image processing methods of the example and comparative example 2, 20 teacher images created from the phantom image were used. In the image processing method of the example, 32 first input images, created from each teacher image with mutually different pixel value change modes, were used. In the image processing method of comparative example 2, an MRI image was used as the input image to the CNN.
 In the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2, a tomographic image simulating a head that has a tumor and was administered 18F-FDG as the tracer was used as the target image. An MRI image was used as the input image to the CNN.
 FIGS. 4 to 9 show tomographic images used or created in the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2. FIG. 4 shows the input image to the CNN (an MRI image). FIG. 5 shows the phantom image (ground-truth image). FIG. 6 shows the tomographic image (target image) simulating a head that has a tumor and was administered 18F-FDG as the tracer.
 FIG. 7 shows the tomographic image after noise reduction by the image processing method of comparative example 1, FIG. 8 that of comparative example 2, and FIG. 9 that of the example. In each of FIGS. 4 to 9, (a) is a transverse section, (b) a coronal section, and (c) a sagittal section.
 FIG. 10 is a table summarizing the PSNR, SSIM, and CNR values of the tomographic images after noise reduction by the image processing methods of the example and comparative examples 1 and 2. PSNR (peak signal-to-noise ratio) expresses image quality in decibels (dB). SSIM (structural similarity index) quantifies changes in luminance, contrast, and structure between images. CNR (contrast-to-noise ratio) is the ratio of image contrast to noise. For all of PSNR, SSIM, and CNR, a larger value means better image quality.
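 PSNR and CNR are simple to compute, and SSIM is commonly taken from a library. A sketch follows; note that the CNR definition below (mean difference between a lesion region and a background region divided by the background standard deviation) is one common convention assumed here, since FIG. 10 does not state the formula:

    import numpy as np

    def psnr(x, ref):
        # Peak signal-to-noise ratio in dB, with the reference maximum as peak.
        mse = np.mean((x - ref) ** 2)
        return 10.0 * np.log10(ref.max() ** 2 / mse)

    def cnr(x, roi_mask, bg_mask):
        # Contrast-to-noise ratio between an ROI and a background region.
        return (x[roi_mask].mean() - x[bg_mask].mean()) / x[bg_mask].std()

    # SSIM: e.g. skimage.metrics.structural_similarity(x, ref,
    #       data_range=ref.max() - ref.min())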
 As shown in FIGS. 4 to 10, the example yields larger values of PSNR, SSIM, and CNR than comparative examples 1 and 2, confirming that it reduces the noise of the target image more effectively.
 Next, the results of noise reduction performed on clinical data by the image processing methods of the example and comparative examples 1 and 2 are described with reference to FIGS. 11 to 26.
 In the supervised pre-training of the image processing methods of the example and comparative example 2, 24 tomographic images from clinical data were used as teacher images. These tomographic images are PET images of heads administered 18F-AV45 as the tracer. In the image processing method of the example, 32 first input images, created from each teacher image with mutually different pixel value change modes, were used. In the image processing method of comparative example 2, an MRI image was used as the input image to the CNN.
 In the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2, tomographic images of heads administered 18F-AV45, 11C-PIB, or 18F-FDG as the tracer were used as target images. An MRI image was used as the input image to the CNN.
 FIGS. 11 to 15 show tomographic images used or created in the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2 when a tomographic image of a head administered 18F-AV45 as the tracer was used as the target image. FIG. 11 shows the input image to the CNN (an MRI image). FIG. 12 shows the tomographic image (target image) of the head administered 18F-AV45.
 FIG. 13 shows the tomographic image after noise reduction by the image processing method of comparative example 1, FIG. 14 that of comparative example 2, and FIG. 15 that of the example. In each of FIGS. 11 to 15, (a) is a transverse section, (b) a coronal section, and (c) a sagittal section.
 FIGS. 16 to 20 show tomographic images used or created in the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2 when a tomographic image of a head administered 11C-PIB as the tracer was used as the target image. FIG. 16 shows the input image to the CNN (an MRI image). FIG. 17 shows the tomographic image (target image) of the head administered 11C-PIB.
 FIG. 18 shows the tomographic image after noise reduction by the image processing method of comparative example 1, FIG. 19 that of comparative example 2, and FIG. 20 that of the example. In each of FIGS. 16 to 20, (a) is a transverse section and (b) a sagittal section.
 FIGS. 21 to 25 show tomographic images used or created in the unsupervised learning of the image processing methods of the example and comparative examples 1 and 2 when a tomographic image of a head administered 18F-FDG as the tracer was used as the target image. FIG. 21 shows the input image to the CNN (an MRI image). FIG. 22 shows the tomographic image (target image) of the head administered 18F-FDG.
 FIG. 23 shows the tomographic image after noise reduction by the image processing method of comparative example 1, FIG. 24 that of comparative example 2, and FIG. 25 that of the example. In each of FIGS. 21 to 25, (a) is a transverse section and (b) a sagittal section.
 FIG. 26 is a table summarizing the CNR values of the tomographic images after noise reduction by the image processing methods of the example and comparative examples 1 and 2. The table gives the CNR values for the cases in which 18F-AV45, 11C-PIB, and 18F-FDG were each administered to the head as the tracer.
 The following can be seen from FIGS. 11 to 26. The example gives larger CNR values than comparative examples 1 and 2, confirming that it reduces the noise of the target image more effectively. When the target image was a tomographic image of a head administered 18F-FDG, the improvement in CNR of comparative example 2 over comparative example 1 was slight, whereas the improvement of the example over comparative examples 1 and 2 was large.
 Thus, when the tracer administered to the head differs between the teacher images and the target image, comparative example 2 removes noise only to about the same degree as comparative example 1, whereas the example was confirmed to remove noise to a greater degree than comparative examples 1 and 2.
 As described above, according to this embodiment, the input images to the CNN used in its supervised pre-training can easily be created from the teacher images, so that the noise of the target image can easily be reduced by performing unsupervised learning on the CNN after the supervised pre-training. Furthermore, the noise of the target image can be reduced effectively even when the tracer administered to the subject differs between the teacher images and the target image.
 The image processing device and the image processing method are not limited to the embodiment and configuration examples described above, and various modifications are possible.
 The image processing device of the first aspect of the above embodiment is an image processing device that reduces noise in a target image to create a noise-reduced image, comprising: (1) a first CNN processing unit that inputs, to a convolutional neural network, a first input image in which the pixel values of a partial region have been changed based on a teacher image, and creates a first output image by the convolutional neural network; (2) a first CNN learning unit that evaluates the error between the first output image and the teacher image and trains the convolutional neural network based on the result of that error evaluation; (3) a second CNN processing unit that inputs a second input image to the convolutional neural network and creates a second output image by the convolutional neural network; and (4) a second CNN learning unit that evaluates the error between the second output image and the target image and trains the convolutional neural network based on the result of that error evaluation; after the processing of the first CNN processing unit and the first CNN learning unit has been repeated multiple times for each of a plurality of pairs of a teacher image and a first input image, the processing of the second CNN processing unit and the second CNN learning unit is repeated multiple times, and the second output image is taken as the noise-reduced image.
 The image processing device of the second aspect may, in the configuration of the first aspect, further comprise an input image creation unit that creates the first input image in which the pixel values of a partial region are changed based on the teacher image.
 In the image processing device of the third aspect, in the configuration of the first or second aspect, the convolutional neural network may have a U-net structure including an encoder and a decoder, the first CNN learning unit may train both the encoder and the decoder of the convolutional neural network, and the second CNN learning unit may selectively train the decoder of the convolutional neural network.
 In the image processing device of the fourth aspect, in any of the configurations of the first to third aspects, the first CNN processing unit may, for each teacher image, input to the convolutional neural network each of a plurality of first input images whose pixel value change modes differ from one another, and create a first output image by the convolutional neural network for each first input image.
 In the image processing device of the fifth aspect, in any of the configurations of the first to fourth aspects, the first CNN processing unit may input to the convolutional neural network a first input image in which the pixel values of a partial region of the noise reduction target region have been changed based on the teacher image.
 In the image processing device of the sixth aspect, in any of the configurations of the first to fifth aspects, the target image and the teacher image may be tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
 The image processing method of the first aspect of the above embodiment is an image processing method for reducing noise in a target image to create a noise-reduced image, comprising: (1) a first CNN processing step of inputting, to a convolutional neural network, a first input image in which the pixel values of a partial region have been changed based on a teacher image, and creating a first output image by the convolutional neural network; (2) a first CNN learning step of evaluating the error between the first output image and the teacher image and training the convolutional neural network based on the result of that error evaluation; (3) a second CNN processing step of inputting a second input image to the convolutional neural network and creating a second output image by the convolutional neural network; and (4) a second CNN learning step of evaluating the error between the second output image and the target image and training the convolutional neural network based on the result of that error evaluation; after the first CNN processing step and the first CNN learning step have been repeated multiple times for each of a plurality of pairs of a teacher image and a first input image, the second CNN processing step and the second CNN learning step are repeated multiple times, and the second output image is taken as the noise-reduced image.
 The image processing method of the second aspect may, in the configuration of the first aspect, further comprise an input image creation step of creating the first input image in which the pixel values of a partial region are changed based on the teacher image.
 In the image processing method of the third aspect, in the configuration of the first or second aspect, the convolutional neural network may have a U-net structure including an encoder and a decoder, the first CNN learning step may train both the encoder and the decoder of the convolutional neural network, and the second CNN learning step may selectively train the decoder of the convolutional neural network.
 In the image processing method of the fourth aspect, in any of the configurations of the first to third aspects, the first CNN processing step may, for each teacher image, input to the convolutional neural network each of a plurality of first input images whose pixel value change modes differ from one another, and create a first output image by the convolutional neural network for each first input image.
 In the image processing method of the fifth aspect, in any of the configurations of the first to fourth aspects, the first CNN processing step may input to the convolutional neural network a first input image in which the pixel values of a partial region of the noise reduction target region have been changed based on the teacher image.
 In the image processing method of the sixth aspect, in any of the configurations of the first to fifth aspects, the target image and the teacher image may be tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
 The present invention can be used as an image processing device and an image processing method capable of easily reducing the noise of a target image by performing unsupervised learning on a CNN after supervised pre-training.
 Reference signs: 1: image processing device; 10: input image creation unit; 20: first calculation unit; 21: first CNN processing unit; 22: first CNN learning unit; 30: second calculation unit; 31: second CNN processing unit; 32: second CNN learning unit.

Claims (12)

  1.  An image processing device that reduces noise in a target image to create a noise-reduced image, comprising:
     a first CNN processing unit that inputs, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creates a first output image by the convolutional neural network;
     a first CNN learning unit that evaluates an error between the first output image and the teacher image and trains the convolutional neural network based on the result of that error evaluation;
     a second CNN processing unit that inputs a second input image to the convolutional neural network and creates a second output image by the convolutional neural network; and
     a second CNN learning unit that evaluates an error between the second output image and the target image and trains the convolutional neural network based on the result of that error evaluation,
     wherein, after the processing of the first CNN processing unit and the first CNN learning unit has been repeated multiple times for each of a plurality of pairs of the teacher image and the first input image, the processing of the second CNN processing unit and the second CNN learning unit is repeated multiple times, and the second output image is taken as the noise-reduced image.
  2.  The image processing device according to claim 1, further comprising an input image creation unit that creates the first input image in which pixel values of a partial region are changed based on the teacher image.
  3.  The image processing device according to claim 1 or 2, wherein
     the convolutional neural network has a U-net structure including an encoder and a decoder,
     the first CNN learning unit trains both the encoder and the decoder of the convolutional neural network, and
     the second CNN learning unit selectively trains the decoder of the convolutional neural network.
  4.  The image processing device according to any one of claims 1 to 3, wherein the first CNN processing unit inputs, for each teacher image, each of a plurality of first input images whose pixel value change modes differ from one another to the convolutional neural network, and creates a first output image by the convolutional neural network for each first input image.
  5.  The image processing device according to any one of claims 1 to 4, wherein the first CNN processing unit inputs, to the convolutional neural network, a first input image in which pixel values of a partial region of a noise reduction target region have been changed based on the teacher image.
  6.  The image processing device according to any one of claims 1 to 5, wherein the target image and the teacher image are tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
  7.  An image processing method for reducing noise in a target image to create a noise-reduced image, comprising:
     a first CNN processing step of inputting, to a convolutional neural network, a first input image in which pixel values of a partial region have been changed based on a teacher image, and creating a first output image by the convolutional neural network;
     a first CNN learning step of evaluating an error between the first output image and the teacher image and training the convolutional neural network based on the result of that error evaluation;
     a second CNN processing step of inputting a second input image to the convolutional neural network and creating a second output image by the convolutional neural network; and
     a second CNN learning step of evaluating an error between the second output image and the target image and training the convolutional neural network based on the result of that error evaluation,
     wherein, after the first CNN processing step and the first CNN learning step have been repeated multiple times for each of a plurality of pairs of the teacher image and the first input image, the second CNN processing step and the second CNN learning step are repeated multiple times, and the second output image is taken as the noise-reduced image.
  8.  The image processing method according to claim 7, further comprising an input image creation step of creating the first input image in which pixel values of a partial region are changed based on the teacher image.
  9.  The image processing method according to claim 7 or 8, wherein
     the convolutional neural network has a U-net structure including an encoder and a decoder,
     in the first CNN learning step, both the encoder and the decoder of the convolutional neural network are trained, and
     in the second CNN learning step, the decoder of the convolutional neural network is selectively trained.
  10.  The image processing method according to any one of claims 7 to 9, wherein, in the first CNN processing step, for each teacher image, each of a plurality of first input images whose pixel value change modes differ from one another is input to the convolutional neural network, and a first output image is created by the convolutional neural network for each first input image.
  11.  The image processing method according to any one of claims 7 to 10, wherein, in the first CNN processing step, a first input image in which pixel values of a partial region of a noise reduction target region have been changed based on the teacher image is input to the convolutional neural network.
  12.  The image processing method according to any one of claims 7 to 11, wherein the target image and the teacher image are tomographic images of a subject reconstructed based on information acquired by a radiation tomography apparatus.
PCT/JP2023/019509 2022-05-31 2023-05-25 Image processing device and image processing method WO2023234171A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022088585A JP2023176348A (en) 2022-05-31 2022-05-31 Image processing device and image processing method
JP2022-088585 2022-05-31

Publications (1)

Publication Number Publication Date
WO2023234171A1 true WO2023234171A1 (en) 2023-12-07

Family

ID=89024913

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/019509 WO2023234171A1 (en) 2022-05-31 2023-05-25 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JP2023176348A (en)
WO (1) WO2023234171A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019218000A1 (en) * 2018-05-15 2019-11-21 Monash University Method and system of motion correction for magnetic resonance imaging
JP2019211900A (en) * 2018-06-01 2019-12-12 株式会社デンソー Object identification device, system for moving object, object identification method, learning method of object identification model and learning device for object identification model
JP2020036877A (en) * 2018-08-06 2020-03-12 ゼネラル・エレクトリック・カンパニイ Iterative image reconstruction framework
JP2020128882A (en) * 2019-02-07 2020-08-27 浜松ホトニクス株式会社 Image processing device and image processing method
JP2020205030A (en) * 2019-06-17 2020-12-24 株式会社アクセル Learning method, computer program, classifier, generator, and processing system
JP2021071936A (en) * 2019-10-31 2021-05-06 浜松ホトニクス株式会社 Image processing device, image processing method, image processing program and recording medium
JP2021117866A (en) * 2020-01-29 2021-08-10 浜松ホトニクス株式会社 Image processing device and image processing method
JP2023059177A (en) * 2021-10-14 2023-04-26 株式会社島津製作所 X-ray imaging method and x-ray imaging device

Also Published As

Publication number Publication date
JP2023176348A (en) 2023-12-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23815928

Country of ref document: EP

Kind code of ref document: A1