WO2021193008A1 - Program, information processing method, information processing device, and model generation method - Google Patents


Info

Publication number
WO2021193008A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
medical image
artifact
input
medical
Prior art date
Application number
PCT/JP2021/009234
Other languages
French (fr)
Japanese (ja)
Inventor
陽 井口
悠介 関
雄紀 坂口
Original Assignee
テルモ株式会社 (Terumo Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 (Terumo Corporation)
Publication of WO2021193008A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • The present invention relates to a program, an information processing method, an information processing device, and a model generation method.
  • An artifact is a spurious image that is not intended or does not actually exist; it is an image formed due to the device used to capture the medical image, the imaging conditions, or the like.
  • A skilled medical worker can distinguish an artifact, but an inexperienced one may have difficulty doing so, which can cause problems such as misdiagnosis.
  • Patent Document 1 discloses a method of reducing ring-down artifacts from an intravascular tomographic image captured by an ultrasonic diagnostic imaging apparatus, in which a reference frame is determined from a plurality of tomographic-image frames and a tomographic image with reduced ring-down artifacts is generated from the difference between the reference frame and the current frame.
  • However, Patent Document 1 extracts artifacts from the image on a rule basis, and its accuracy is not always good.
  • One aspect aims to provide a program or the like that can suitably present artifacts in a medical image.
  • A program according to one aspect causes a computer to acquire a medical image generated based on a signal detected by a catheter inserted into a living lumen, input the acquired medical image into a model trained to output a detection result detecting the image area corresponding to an artifact in the medical image, detect that image area, and output the detection result in association with the medical image.
  • In one aspect, artifacts in a medical image can be suitably presented.
  • FIG. 12 is a flowchart showing the procedure of the estimation model generation process. FIG. 13 is a flowchart showing the processing procedure of artifact detection and image-defect estimation. FIG. 14 is an explanatory diagram of the detection model according to Embodiment 3. FIG. 15 is an explanatory diagram showing a display screen example of the diagnostic imaging apparatus according to Embodiment 3.
  • FIG. 16 is a flowchart showing the procedure of the detection model generation process according to Embodiment 3.
  • FIG. 19 is an explanatory diagram of the generative model. FIG. 20 is a flowchart showing the procedure of the generative model generation process. FIG. 21 is a flowchart showing the procedure of the artifact reduction process.
  • FIG. 1 is an explanatory diagram showing a configuration example of a diagnostic imaging system.
  • In the present embodiment, a diagnostic imaging system that detects artifacts from a medical image of the inside of a living lumen of a subject and presents the detected artifacts to a user (medical worker) will be described.
  • The diagnostic imaging system includes an information processing device 1 and a diagnostic imaging apparatus 2.
  • The information processing device 1 and the diagnostic imaging apparatus 2 are communicatively connected via a network N such as a LAN (Local Area Network) or the Internet.
  • The diagnostic imaging apparatus 2 is a device unit for imaging the inside of a living lumen of a subject, for example a device unit for performing an ultrasonic examination inside a blood vessel of the subject using a catheter 21.
  • The diagnostic imaging apparatus 2 includes the catheter 21, an MDU (Motor Drive Unit) 22, an image processing device 23, and a display device 24.
  • The catheter 21 is a medical device inserted into the blood vessel of the subject, and includes an imaging core that transmits ultrasonic waves based on a pulse signal and receives reflected waves from within the blood vessel.
  • The diagnostic imaging apparatus 2 generates a tomographic image (medical image) of the inside of the blood vessel based on the reflected-wave signal received by the catheter 21.
  • The MDU 22 is a drive device to which the catheter 21 is detachably attached; by driving a built-in motor according to the user's operation, it controls the movement of the imaging core of the catheter 21 inserted into the blood vessel in the longitudinal and rotational directions.
  • The image processing device 23 is a processing device that processes the reflected-wave data received by the catheter 21 to generate a tomographic image. In addition to displaying the generated tomographic image on the display device 24, it is equipped with an input interface for accepting input of various setting values at the time of examination.
  • In the present embodiment, an intravascular examination is described as an example, but the living lumen to be examined is not limited to blood vessels and may be an organ such as the intestine.
  • The medical image is also not limited to an ultrasonic image, and may be, for example, an OCT (Optical Coherence Tomography) image.
  • The information processing device 1 is an information processing device capable of various kinds of information processing and of transmitting and receiving information, such as a server computer or a personal computer.
  • In the present embodiment, the information processing device 1 is a server computer, and is hereinafter referred to as server 1 for brevity.
  • The server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging apparatus 2, or may be a cloud server connected to the diagnostic imaging apparatus 2 via the Internet or the like.
  • The server 1 functions as a detection device that detects artifacts from the medical images generated (captured) by the diagnostic imaging apparatus 2, and provides the detection results to the diagnostic imaging apparatus 2.
  • Specifically, the server 1 performs machine learning on training data to prepare in advance a detection model 141 (see FIG. 4) that takes a medical image as input and outputs a detection result detecting the image area corresponding to an artifact in the medical image.
  • The server 1 acquires a medical image from the diagnostic imaging apparatus 2 and inputs it into the detection model 141 to detect the image region corresponding to an artifact.
  • The server 1 outputs the detection result to the diagnostic imaging apparatus 2 and causes it to display guidance so that the artifact in the medical image can be identified.
  • Hereinafter, the image area corresponding to an artifact is referred to as an "artifact area".
  • In the present embodiment, artifacts are detected on the server 1, which is separate from the diagnostic imaging apparatus 2, but the detection model 141 generated by machine learning on the server 1 may be installed in the diagnostic imaging apparatus 2 (image processing device 23) so that the diagnostic imaging apparatus 2 itself can detect artifacts.
  • FIG. 2 is a block diagram showing a configuration example of the server 1.
  • The server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
  • The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various information processing, control processing, and the like by reading and executing the program P stored in the auxiliary storage unit 14.
  • The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing.
  • The communication unit 13 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside.
  • The auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P and other data necessary for the control unit 11 to execute processing. The auxiliary storage unit 14 also stores the detection model 141.
  • The detection model 141 is a machine learning model trained on training data as described above; it takes a medical image as input and outputs a detection result of the artifact region.
  • The auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
  • The server 1 is not limited to the above configuration and may include, for example, an input unit that accepts operation input and a display unit that displays images. Further, the server 1 may be provided with a reading unit that reads a portable storage medium 1a such as a CD (Compact Disc)-ROM or DVD (Digital Versatile Disc)-ROM, and may read and execute the program P from the portable storage medium 1a. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
  • FIG. 3 is an explanatory diagram regarding the artifact.
  • FIG. 3 conceptually illustrates the five types of artifacts that occur in medical images.
  • As described above, an artifact is a spurious image that is not intended for the examination or does not actually exist, and is imaged due to the device, the imaging conditions, and so on.
  • The artifacts include multiple reflection (multiple echo), ring-down, acoustic shadow, side lobe, NURD (Non-Uniform Rotational Distortion), and the like.
  • In multiple reflection, ultrasonic waves are repeatedly reflected by an object M1 (for example, calcified tissue) in the blood vessel, generating an artifact A1.
  • The artifact A1 appears at a position separated by a distance equal to that between the object M1 and the catheter 21, and is projected as an image very similar to the object M1.
  • The ring-down is a ring-shaped image near the center of the image caused by multiple reflections between the transducer and the sheath.
  • In FIG. 3, a state in which the ring-shaped artifact A2 appears at the center of the image is illustrated.
  • The ring-down is projected as a white ring of a certain width.
  • The acoustic shadow is a phenomenon in which a part of the image is blacked out due to large attenuation of the ultrasonic waves transmitted radially outward from the catheter 21.
  • In FIG. 3, the appearance in which the area radially outside the object M2, as seen from the catheter 21, is blacked out is conceptually illustrated as artifact A3.
  • In FIG. 3, the blacked-out area is shown by hatching.
  • The acoustic shadow occurs because, when a hard object M2 is present in the blood vessel, most of the ultrasonic waves are reflected by the object M2, so that the ultrasonic waves transmitted radially outward beyond the object M2 are strongly attenuated.
  • The side lobe is a weak ultrasonic wave (secondary lobe) transmitted at a certain angle from the main lobe of an ultrasonic wave transmitted with a certain directivity. Due to side lobes, an actual object M3 (e.g., a stent) in the blood vessel is projected as an image larger than the real thing. In the example of FIG. 3, the image caused by the side lobe is illustrated as artifact A4. The artifact A4 is generated when the catheter 21 simultaneously receives the reflected wave of a side lobe from the object M3 and the reflected wave of the main lobe.
  • NURD is image distortion caused by the drive shaft of the catheter 21 not rotating normally. NURD is caused by bending of the blood vessel, twisting of the shaft of the catheter 21, or the like. In the example of FIG. 3, the portion where the left half of the image is distorted due to uneven rotation speed is illustrated as artifact A5, surrounded by a broken line.
  • The server 1 detects the above-mentioned various artifacts from the medical image. Specifically, the server 1 detects artifacts using the detection model 141, which has learned artifact regions in medical images, as follows.
  • FIG. 4 is an explanatory diagram of the detection model 141.
  • The detection model 141 is a machine learning model that takes a medical image as input and outputs a detection result detecting the artifact region in the medical image.
  • The server 1 performs machine learning on predetermined training data and generates the detection model 141 in advance. Then, the server 1 acquires the medical image of the subject from the diagnostic imaging apparatus 2 and inputs it into the detection model 141 to detect the artifact region in the medical image.
  • The detection model 141 will be described with reference to FIG. 4.
  • The detection model 141 is, for example, a neural network model generated by deep learning, specifically a CNN (Convolutional Neural Network) that extracts the features of an input image through a large number of convolution layers.
  • The detection model 141 includes an intermediate layer (hidden layers) in which convolution layers that convolve the pixel information of the input image and pooling layers that map the convolved pixel information are alternately connected, and extracts a feature map of the input image.
  • In the present embodiment, the detection model 141 is described as being a CNN, but it may be a model based on another learning algorithm such as a GAN (Generative Adversarial Network), RNN (Recurrent Neural Network), SVM (Support Vector Machine), or decision tree.
  • In the present embodiment, the server 1 generates a detection model 141 that identifies, on a pixel-by-pixel basis, whether each pixel in the input medical image corresponds to the artifact region.
  • Specifically, the server 1 generates a semantic segmentation model, Mask R-CNN (Region-based CNN), or the like as the detection model 141.
  • The semantic segmentation model is a type of CNN and a type of encoder-decoder model that generates output data from input data.
  • The semantic segmentation model includes, in addition to the convolution layers that compress the data of the input image, deconvolution layers that map (enlarge) the compressed features back to the original image size.
  • Based on the features extracted by the convolution layers, the deconvolution layers generate a label image that identifies which object exists at which position in the image, indicating for each pixel which object it corresponds to.
  • Mask R-CNN is a modification of Faster R-CNN, which is mainly used for object detection, and has a configuration in which deconvolution layers are connected to Faster R-CNN.
  • In Mask R-CNN, the image features extracted by the CNN and the coordinate-area information of target objects extracted by an RPN (Region Proposal Network) are input to the deconvolution layers, and finally a label image representing the objects in the input image on a pixel-by-pixel basis is generated.
  • The server 1 generates one of these models as the detection model 141 and uses it for detecting artifacts. Note that all of the above models are examples, and any detection model 141 that can identify the position and shape of artifacts in a medical image is sufficient. In the present embodiment, the detection model 141 is described as a semantic segmentation model as an example.
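  • As an illustrative, non-limiting sketch, a semantic segmentation model of the kind described above may be written in PyTorch as follows; the class name, the layer sizes, and the assumption of one background class plus five artifact classes are editorial assumptions rather than the publication's actual network.

```python
import torch
import torch.nn as nn

class ArtifactSegNet(nn.Module):
    """Minimal encoder-decoder (semantic segmentation) sketch.

    Input : (B, 1, H, W) grayscale tomographic frame
    Output: (B, C, H, W) per-pixel class scores, where C is
            1 background class + 5 artifact types (an assumption).
    """
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Encoder: convolution + pooling layers extract a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # -> H/2 x W/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # -> H/4 x W/4
        )
        # Decoder: deconvolution (transposed convolution) layers map the
        # compressed features back to the original image size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# A 512x512 frame in, a per-pixel label image out:
model = ArtifactSegNet()
scores = model(torch.randn(1, 1, 512, 512))  # -> (1, 6, 512, 512)
label_image = scores.argmax(dim=1)           # (1, 512, 512) artifact labels
```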
  • The server 1 performs training using training data in which the training medical images are labeled with data indicating the artifact region. Specifically, in the training data, labels (metadata) indicating the coordinate range of the artifact region and the type of the artifact are attached to each training medical image.
  • The server 1 inputs a training medical image into the detection model 141 and acquires, as output, the detection result detecting the artifact region. Specifically, as shown by hatching on the right side of the detection model 141 in FIG. 4, a label image in which each pixel of the artifact region is labeled with data indicating the type of the artifact is acquired as output.
  • The server 1 compares the detection result output from the detection model 141 with the coordinate range of the correct artifact region and the artifact type indicated by the training data, and optimizes parameters such as inter-neuron weights so that the two approximate each other. The server 1 thereby generates the detection model 141.
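  • The comparison-and-optimization step described above amounts to an ordinary supervised training loop. The following sketch reuses the hypothetical ArtifactSegNet above and applies pixel-wise cross-entropy between the model's output and the labeled artifact regions; the dataset interface is an assumption.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10) -> None:
    """Hypothetical loader yields (frame, mask): frame is (B, 1, H, W),
    mask is (B, H, W) long, holding the correct per-pixel class index
    (0 = background, 1..5 = artifact type)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()   # compares output with correct labels
    model.train()
    for _ in range(epochs):
        for frame, mask in loader:
            optimizer.zero_grad()
            scores = model(frame)            # (B, C, H, W)
            loss = criterion(scores, mask)   # per-pixel comparison
            loss.backward()                  # gradients w.r.t. the weights
            optimizer.step()                 # optimize inter-neuron weights
```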
  • The detection model 141 preferably accepts a plurality of frames of medical images (a moving image) consecutive in time series as input and detects artifacts from the medical image of each frame. Specifically, the detection model 141 receives, as input, a plurality of frames of medical images consecutive along the longitudinal direction of the blood vessel according to the scanning of the catheter 21, and detects artifacts from the medical image of each consecutive frame along the time axis t.
  • The server 1 may input frame images into the detection model 141 one at a time, but it is preferable to input a plurality of consecutive frame images at the same time so that artifact regions can be detected from the plurality of frame images simultaneously.
  • In this case, the server 1 configures the detection model 141 as a 3D-CNN (for example, 3D U-Net) that handles three-dimensional input data. The server 1 then treats the data as three-dimensional data in which the coordinates of each two-dimensional frame image form two axes and the time t at which each frame image was acquired (the generation time point) forms one axis.
  • The server 1 inputs a set of frame images for a predetermined unit time (for example, 16 frames) into the detection model 141 and simultaneously obtains, for each of the frame images, an image in which the artifact region is labeled.
  • This allows artifacts to be detected in consideration of the preceding and following frame images in the time series, improving detection accuracy.
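  • The three-dimensional arrangement described here, the two coordinate axes of the frame image plus the time axis t, amounts to stacking a unit of consecutive frames into a single tensor before feeding a 3D-CNN. A sketch of that reshaping follows; the frame count of 16 and the image size are assumptions consistent with the example above.

```python
import torch

def frames_to_volume(frames: list) -> torch.Tensor:
    """Stack consecutive frames (each (1, H, W)) into a 3D-CNN input.

    Returns (1, 1, T, H, W): batch, channel, time axis t, and the two
    coordinate axes of the two-dimensional frame images.
    """
    volume = torch.stack(frames, dim=1)   # (1, T, H, W)
    return volume.unsqueeze(0)            # (1, 1, T, H, W)

# e.g. 16 frames per predetermined unit time, as in the example above
frames = [torch.randn(1, 512, 512) for _ in range(16)]
volume = frames_to_volume(frames)         # -> (1, 1, 16, 512, 512)
# The volume can now be passed to a 3D-CNN built from torch.nn.Conv3d
# layers (e.g. a 3D U-Net), which labels all 16 frames at once.
```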
  • Alternatively, the server 1 may configure the detection model 141 as a model combining a CNN and an RNN so that artifact regions can be detected from a plurality of consecutive frame images.
  • In this case, for example, an LSTM (Long Short-Term Memory) layer is inserted after the intermediate layers of the CNN, and the artifact region is detected with reference to the features extracted from the preceding frame images.
  • Even in this case, processing can be performed in consideration of the preceding and following frame images, and the detection accuracy can be improved.
  • The server 1 learns the training data as described above and generates the detection model 141.
  • When the server 1 acquires a medical image from the diagnostic imaging apparatus 2, the server 1 inputs it into the detection model 141 to detect the artifact region and the artifact type.
  • Artifacts may be detected in real time at the time of the examination, or the medical images (moving images) recorded after the examination may be acquired collectively to detect artifacts.
  • In the following, artifact detection performed in real time at the time of the examination will be described.
  • The server 1 outputs the artifact detection result to the diagnostic imaging apparatus 2 and causes it to perform a guidance display that presents the artifact area in the medical image to the user, as follows.
  • In the present embodiment, the output destination of the detection result is the diagnostic imaging apparatus 2, but the detection result may be output to a device other than the diagnostic imaging apparatus 2 from which the medical image was acquired (for example, a personal computer) to display the guidance.
  • FIG. 5 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2.
  • FIG. 5 illustrates an example of a display screen displayed by the diagnostic imaging apparatus 2 when an artifact is detected.
  • The diagnostic imaging apparatus 2 displays the detection result of the artifact region in association with the medical image. Specifically, as shown by hatching in FIG. 5, the diagnostic imaging apparatus 2 displays a second medical image showing the detected artifact region in a display mode (for example, color display) different from that of the other image regions.
  • The second medical image is a medical image in which the artifact region has been processed so as to be distinguishable from the other regions, and is an image obtained by superimposing the label image output from the detection model 141 on the original medical image.
  • When an artifact region is detected, the server 1 generates the second medical image and outputs it to the diagnostic imaging apparatus 2.
  • For example, the server 1 processes the label image into a translucent mask with a display color other than black and white, superimposes it on the artifact region of the medical image expressed in black and white, and thereby generates the second medical image (one way to compose such a blend is sketched below).
  • At this time, the server 1 changes the display mode (display color) according to the type of the artifact.
  • This allows the user to intuitively grasp the various artifacts generated by different causes, improving convenience.
  • Although the artifact region is displayed in color in the above, the present embodiment is not limited to this; for example, the outline (edge) of the artifact region may be highlighted instead. It is sufficient that the artifact region can be displayed distinguishably from the other image regions, and the display mode is not particularly limited.
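  • A second medical image of this kind can be composed with simple alpha blending. The sketch below (NumPy) tints each artifact type with its own display color over the black-and-white tomogram; the colors and the blending ratio are illustrative assumptions.

```python
import numpy as np

# Illustrative display colors per artifact label (RGB).
COLORS = {1: (255, 0, 0),      # multiple reflection
          2: (0, 255, 0),      # ring-down
          3: (0, 0, 255),      # acoustic shadow
          4: (255, 255, 0),    # side lobe
          5: (255, 0, 255)}    # NURD

def make_second_image(gray: np.ndarray, labels: np.ndarray,
                      alpha: float = 0.4) -> np.ndarray:
    """Overlay a translucent colored mask on a grayscale medical image.

    gray   : (H, W) uint8 black-and-white tomographic image
    labels : (H, W) int label image from the detection model (0 = none)
    """
    out = np.stack([gray] * 3, axis=-1).astype(np.float32)  # to RGB
    for label, color in COLORS.items():
        region = labels == label
        # Blend: keep (1 - alpha) of the original, add alpha of the color.
        out[region] = (1 - alpha) * out[region] + alpha * np.array(color)
    return out.astype(np.uint8)
```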
  • The diagnostic imaging apparatus 2 displays the second medical image and notifies the user that an artifact has occurred. Further, a label name indicating the type of the artifact is displayed in association with the display color of the artifact area (the type of hatching in FIG. 5).
  • In the above, the artifact area is detected in pixel units and displayed in pixel units, but the present embodiment is not limited to this.
  • For example, the artifact area may simply be surrounded by a bounding box (rectangular frame) and displayed.
  • That is, the configuration of detecting the artifact region on a pixel-by-pixel basis is not essential; it is sufficient if the location corresponding to the artifact can be detected and displayed.
  • FIG. 6 is a flowchart showing the procedure of the generation process of the detection model 141. Based on FIG. 6, the processing content when the detection model 141 is generated by machine learning will be described.
  • The control unit 11 of the server 1 acquires training data in which the training medical images are labeled with the artifact region (step S11). Specifically, as described above, training data in which labels (metadata) indicating the coordinate range of the artifact region and the type of the artifact are attached to the training medical images is acquired.
  • Based on the training data, the control unit 11 generates a detection model 141 that, when a medical image is input, outputs a detection result detecting the artifact region and the artifact type (step S12). Specifically, as described above, the control unit 11 generates a semantic segmentation model that identifies objects in the medical image on a pixel-by-pixel basis as the detection model 141. The control unit 11 inputs a training medical image into the detection model 141 and acquires, as output, the detection result detecting the artifact region and the type of the artifact. The control unit 11 compares the detection result with the correct value (correct label), optimizes parameters such as inter-neuron weights so that the two approximate each other, and generates the detection model 141. The control unit 11 then ends the series of processes.
  • FIG. 7 is a flowchart showing the procedure of the artifact detection process. Based on FIG. 7, the processing content when detecting an artifact from the medical image of the subject will be described.
  • The control unit 11 of the server 1 acquires a medical image of the subject from the diagnostic imaging apparatus 2 (step S31).
  • The control unit 11 inputs the acquired medical image into the detection model 141 and detects the artifact region and the type of the artifact (step S32).
  • The control unit 11 determines whether an artifact region was detected in step S32 (step S33). When it is determined that no artifact region was detected (S33: NO), the control unit 11 causes the diagnostic imaging apparatus 2 to display the original medical image as it is (step S34). When it is determined that an artifact region was detected (S33: YES), the control unit 11 generates a second medical image in which the artifact region has been processed (step S35). Specifically, as described above, the control unit 11 generates a second medical image that displays the artifact region in a different display mode depending on the type of the artifact.
  • The control unit 11 outputs the detection result of the artifact region to the diagnostic imaging apparatus 2 and displays it in association with the medical image (step S36). Specifically, as described above, the control unit 11 causes the diagnostic imaging apparatus 2 to display the second medical image.
  • The control unit 11 determines whether the examination by the diagnostic imaging apparatus 2 has been completed (step S37). If it is determined that the examination has not been completed (S37: NO), the control unit 11 returns the process to step S31. When it is determined that the examination is completed (S37: YES), the control unit 11 ends the series of processes.
  • The server 1 may further receive input from the user correcting the detection result of the artifact region and perform re-learning based on the corrected artifact-region information. Specifically, on the display screen illustrated in FIG. 5, the server 1 accepts input as to whether the area displayed as an artifact is actually an artifact. Further, when the displayed type, coordinate range, etc. of the artifact differ from the actual ones, the server 1 accepts input of the correct artifact type, coordinate range, etc.
  • When a correction input for the detection result is received, the server 1 performs re-learning using the medical image labeled with the corrected detection result (artifact region and type) as training data, and updates the detection model 141. As a result, the artifact detection accuracy can be improved through the operation of this system.
  • As described above, according to the first embodiment, artifacts in a medical image can be detected with high accuracy and suitably presented to the user.
  • Further, according to the first embodiment, by simultaneously processing a plurality of consecutive frame images to detect artifacts, the detection accuracy can be improved in consideration of the preceding and following frame images.
  • Embodiment 2: In the present embodiment, a form will be described in which, in addition to detecting artifacts, the presence or absence of image defects due to improper use, damage, failure, etc. of the diagnostic imaging apparatus 2 is estimated, and guidance information for removing the cause of the image defect is presented to the user.
  • Contents overlapping with the first embodiment are designated by the same reference numerals and their description is omitted.
  • FIG. 8 is a block diagram showing a configuration example of the server 1 according to the second embodiment.
  • The auxiliary storage unit 14 of the server 1 according to the present embodiment stores an estimation model 142 for estimating image defects.
  • The estimation model 142 is a machine learning model trained on training data in the same manner as the detection model 141; it takes a medical image as input and outputs the presence or absence and the cause of an image defect in the medical image.
  • The estimation model 142 is expected to be used as a program module that functions as a part of artificial intelligence software.
  • FIG. 9 is an explanatory diagram regarding an image defect that occurs in the diagnostic imaging apparatus 2.
  • An image defect to be estimated in the present embodiment will be described with reference to FIG. 9.
  • Various image defects may occur in the medical images generated by the diagnostic imaging apparatus 2 due to improper use, damage, failure, or the like of the diagnostic imaging apparatus 2.
  • In FIG. 9, typical image defects occurring in the diagnostic imaging apparatus 2 are illustrated alongside the parts that cause them.
  • Causes of image defects include an air trap, disconnection of the drive shaft inside the catheter 21, inhibited rotation of the drive shaft inside the catheter 21, poor connection between the catheter 21 and the MDU 22, and failure of the MDU 22.
  • Image defects attributed to the air trap are caused by air bubbles remaining in the air trap at the tip of the catheter 21. If the air-trap bubbles are not sufficiently removed by priming before the examination, the bubbles attenuate the ultrasonic waves and darken part or all of the image. Further, when air bubbles are present on the transducer at the tip of the catheter 21, a phenomenon occurs in which the dark portion of the image rotates according to the rotation of the drive shaft. Note that, for convenience, FIG. 9 shows by hatching how a part of the image is darkened.
  • The server 1 estimates the presence or absence and the cause (type) of these image defects from the medical image. The server 1 then outputs guidance information that guides the user through countermeasures for removing the cause of the image defect.
  • The above-mentioned image defects and their causes are merely examples and are not limited to the above.
  • FIG. 10 is an explanatory diagram of the estimation model 142.
  • The estimation model 142 is a machine learning model that, when a medical image in which an image defect occurs is input, outputs an estimation result estimating the cause of the image defect. As with the detection model 141, the server 1 learns training data and generates the estimation model 142 in advance. Then, when the server 1 acquires a medical image from the diagnostic imaging apparatus 2, it inputs the image into the estimation model 142 to estimate the presence or absence and the cause of an image defect.
  • The estimation model 142 will be described with reference to FIG. 10.
  • The medical image input to the estimation model 142 may be an image captured during the examination with the catheter 21 inserted into the blood vessel (living lumen) of the subject, or a test image captured before the examination.
  • Estimation of image defects before and during the examination will be described in detail later.
  • The estimation model 142 is, for example, a CNN; it includes intermediate layers in which convolution layers and pooling layers are alternately connected, and extracts a feature map of the input image.
  • The estimation model 142 according to the present embodiment is a CNN that handles a classification problem, and unlike the semantic segmentation model it has no deconvolution layers.
  • Although the estimation model 142 is described as being a CNN in the present embodiment, it may be a model based on another learning algorithm such as a GAN, RNN, SVM, or decision tree.
  • The server 1 performs training using training data in which each training medical image is labeled with data indicating the presence or absence of an image defect and, if there is an image defect, its cause.
  • Specifically, each training medical image is given a label (metadata) of "normal", indicating that the image is normal, or of "air trap", "connection failure", "disconnection", "rotation inhibition", or "MDU failure", indicating the cause of the image defect.
  • The server 1 feeds the training data to the estimation model 142 and performs training.
  • In the present embodiment, normal medical images are also learned as training data, but normal medical images may be excluded from the training data so that only medical images in which an image defect has occurred are learned.
  • In this case, the server 1 judges the probability values of the individual image defects comprehensively and may, for example, estimate the image to be normal if the probability values of all image defects are equal to or below a threshold value (for example, 70%).
  • Alternatively, the user may visually determine the presence or absence of an image defect and, when determining that there is an image defect, transmit the image to the server 1 to have the server 1 execute the estimation process.
  • That is, the estimation model 142 only needs to be able to estimate the cause of the image defect when at least a medical image in which an image defect has occurred is input; the configuration for estimating the presence or absence of an image defect is not essential.
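  • The comprehensive judgment mentioned above reduces to a simple rule over the per-cause probability values. In the sketch below, the 70% threshold and the label names follow the examples in the text, while the function itself is an editorial assumption.

```python
CAUSES = ["air trap", "connection failure", "disconnection",
          "rotation inhibition", "MDU failure"]

def judge(probabilities: dict, threshold: float = 0.70) -> str:
    """Return the most probable defect cause, or "normal" when every
    defect probability is at or below the threshold."""
    cause, p = max(probabilities.items(), key=lambda kv: kv[1])
    return cause if p > threshold else "normal"

probs = dict(zip(CAUSES, [0.91, 0.02, 0.03, 0.01, 0.01]))
print(judge(probs))   # -> "air trap"
```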
  • The server 1 inputs a training medical image into the estimation model 142 and acquires, as output, an estimation result estimating the presence or absence and the cause of an image defect. Specifically, probability values corresponding to the labels such as "normal" and "air trap" are acquired as output.
  • The output from the estimation model 142 need not be probability values; it may be binary values ("0" or "1") indicating whether or not the image corresponds to each label.
  • The server 1 compares the estimation result output from the estimation model 142 with the correct value of the training data, optimizes parameters such as inter-neuron weights so that the two approximate each other, and generates the estimation model 142.
  • Like the detection model 141, the estimation model 142 is also preferably capable of estimating from a plurality of frame images consecutive in time series.
  • In this case, the estimation model 142 may be a 3D-CNN (for example, C3D) or a model combining a CNN and an RNN.
  • In the present embodiment, the operation information of the diagnostic imaging apparatus 2 at the time the medical image was generated is used as input in addition to the medical image.
  • The operation information is a log showing the operation status of the diagnostic imaging apparatus 2 by the user, and is data from which the examination status of the subject using the diagnostic imaging apparatus 2 can be identified.
  • The server 1 determines from the operation information at the time the medical image was generated whether that time point is before the examination or during the examination (or after the examination). Then, the server 1 inputs the determination result of before or during the examination into the estimation model 142 together with the medical image corresponding to that time point. Before the examination, the catheter 21 is not inserted into the blood vessel of the subject (a test before the examination), and during the examination, the catheter 21 is inserted into the blood vessel of the subject.
  • For example, the server 1 inputs binary data indicating whether the image is from before or during the examination into the estimation model 142 as a categorical variable indicating an attribute of the medical image (one way to wire in such a flag is sketched below).
  • The training data includes the operation information as input data in association with each medical image, and the server 1 also inputs the before/during-examination determination result derived from the operation information into the estimation model 142 during training.
  • In the diagnostic imaging apparatus 2, there are image defects that are likely to occur during the examination and image defects that occur regardless of the examination. For example, image defects due to the above-mentioned disconnection, inhibited rotation, etc. are likely to occur during an examination in which the catheter 21 is operated. On the other hand, image defects caused by air traps, poor connections, etc. occur regardless of whether an examination is in progress, so they can be found even before the examination. Therefore, the estimation accuracy can be improved by training the estimation model 142 with the examination status at the time of medical-image generation included.
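  • One common way to feed such a categorical variable alongside the image is to concatenate it with the CNN's feature vector before the classification layer. A sketch follows; the architecture and sizes are assumptions, not the publication's network.

```python
import torch
import torch.nn as nn

class DefectEstimator(nn.Module):
    """Sketch: CNN image features plus a binary before/during-exam flag."""
    def __init__(self, num_labels: int = 6):   # "normal" + 5 causes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # -> (B, 32, 1, 1)
        )
        # One extra input for the categorical examination-status variable.
        self.classifier = nn.Linear(32 + 1, num_labels)

    def forward(self, image: torch.Tensor, during_exam: torch.Tensor):
        # image: (B, 1, H, W); during_exam: (B, 1), 0 = before, 1 = during
        f = self.features(image).flatten(1)             # (B, 32)
        f = torch.cat([f, during_exam.float()], dim=1)  # append the flag
        return self.classifier(f).softmax(dim=1)        # label probabilities

model = DefectEstimator()
probs = model(torch.randn(2, 1, 512, 512), torch.tensor([[0], [1]]))
```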
  • The server 1 learns the training data as described above and generates the estimation model 142.
  • When the server 1 acquires a medical image from the diagnostic imaging apparatus 2, the server 1 detects artifacts using the detection model 141 and also inputs the medical image into the estimation model 142 to estimate the presence or absence and the cause of an image defect.
  • When it is estimated that there is an image defect, the server 1 outputs to the diagnostic imaging apparatus 2 the estimation result of the image defect and guidance information for removing the estimated cause of the image defect.
  • FIG. 11 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2 according to the second embodiment.
  • FIG. 11 illustrates an example of the display screen of the diagnostic imaging apparatus 2 when an image defect has occurred.
  • FIG. 11 shows the display screen when it is estimated during the examination that there is a sign of damage to (for example, disconnection of) the catheter 21.
  • The diagnostic imaging apparatus 2 displays a medical image (tomographic image) of the inside of the blood vessel of the subject, as in the first embodiment. Then, when it is estimated that there is an image defect, the diagnostic imaging apparatus 2 displays an alert with the estimation result of the image defect according to the output from the server 1.
  • Along with the alert, the diagnostic imaging apparatus 2 displays guidance information on countermeasures for removing the cause of the image defect. For example, when it is presumed that there is a sign of disconnection of the catheter 21, the diagnostic imaging apparatus 2 provides guidance on how to operate the catheter 21, such as pushing it slowly while checking the displayed image. Further, the diagnostic imaging apparatus 2 provides guidance to replace the catheter 21 when the image defect is not resolved by that operation.
  • The server 1 may also generate a third medical image that visualizes the image features serving as the basis for estimating the image defect, and display it on the diagnostic imaging apparatus 2.
  • The third medical image is an image showing the image region that the estimation model 142 referred to as a feature when estimating the image defect, for example an image showing that region as a heat map.
  • For example, the server 1 generates the third medical image using the Grad-CAM method.
  • Grad-CAM is a method of visualizing which parts of the input image the CNN captured as features, extracting the image parts that contribute strongly to the output.
  • In Grad-CAM, portions with large gradients when features are extracted in the intermediate layers of the CNN are regarded as feature portions and extracted.
  • Specifically, the server 1 inputs the output values from the output layer of the estimation model 142 (CNN) (the probability values of the individual labels) and the gradient data with respect to the last convolution layer of the intermediate layers into an activation function, and generates a heat map.
  • The server 1 superimposes the generated heat map on the original medical image to generate the third medical image. As shown in the lower right of FIG. 11, the server 1 displays the third medical image side by side with the original medical image.
  • The third medical image may also be generated using another method such as Guided Grad-CAM.
  • As a result, the basis on which the estimation model 142 estimated the image defect can be presented to the user, and the user can check whether the estimation result is correct.
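  • A minimal Grad-CAM computation consistent with this description can be sketched as follows (PyTorch; the hook-based implementation is an editorial assumption). It weights the feature maps of the last convolution layer by the gradients of the chosen label's output value and passes the weighted sum through ReLU, the activation function, to obtain the heat map.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_label, last_conv):
    """Heat map of the input regions the CNN captured as features.

    model        : any CNN classifier returning (B, num_labels) scores
    image        : (1, 1, H, W) input medical image
    target_label : index of the label whose basis is to be visualized
    last_conv    : the last convolution layer of the intermediate layers
    """
    acts, grads = {}, {}
    h1 = last_conv.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = last_conv.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))

    score = model(image)[0, target_label]  # output value for the label
    model.zero_grad()
    score.backward()                       # gradients w.r.t. last conv layer
    h1.remove(); h2.remove()

    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((w * acts["a"]).sum(dim=1))        # ReLU activation
    cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
    # Upsample to the input size so it can be overlaid as a heat map.
    return F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
```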
  • The server 1 may also superimpose and display the translucent mask corresponding to the artifact region and the heat map corresponding to the image defect on the same medical image.
  • FIG. 12 is a flowchart showing a procedure for generating the estimation model 142. Based on FIG. 12, the processing content when learning the training data and generating the estimation model 142 will be described.
  • The control unit 11 of the server 1 acquires training data in which the training medical images and operation information are given data indicating the presence or absence and the cause of an image defect in the medical image (step S201). Based on the training data, the control unit 11 generates an estimation model 142 that, when a medical image is input, outputs an estimation result estimating the presence or absence and the cause of an image defect (step S202). For example, the control unit 11 generates a CNN model as the estimation model 142, as described above.
  • The control unit 11 inputs the training medical image and the determination result of whether it is before the examination, derived from the operation information, into the estimation model 142, and acquires as output the estimation result estimating the presence or absence and the cause of the image defect.
  • The control unit 11 compares the estimation result with the correct value, optimizes parameters such as inter-neuron weights so that the two approximate each other, and generates the estimation model 142.
  • The control unit 11 then ends the series of processes.
  • FIG. 13 is a flowchart showing a processing procedure of artifact detection and image defect estimation. The steps that overlap with the flowchart of FIG. 7 are designated by the same reference numerals and the description thereof will be omitted.
  • After acquiring a medical image of the subject (step S31), the control unit 11 of the server 1 executes the following processing.
  • The control unit 11 acquires the operation information of the diagnostic imaging apparatus 2 at the time the medical image was generated (step S221).
  • The control unit 11 inputs the acquired medical image and the determination result of whether it is before the examination, derived from the operation information, into the estimation model 142, and estimates the presence or absence and the cause of an image defect in the medical image (step S222).
  • The control unit 11 determines whether there is an image defect based on the estimation result of step S222 (step S223).
  • When it is determined that there is an image defect (S223: YES), the control unit 11 outputs guidance information guiding countermeasures for removing the estimated cause of the image defect to the diagnostic imaging apparatus 2 and displays it (step S224). For example, the control unit 11 displays an alert indicating that an image defect has occurred, and displays guidance information guiding how to operate the catheter 21 to remove the cause of the image defect.
  • After that, the control unit 11 shifts the process to step S32.
  • Although the detection model 141 and the estimation model 142 have been described above as separate models, they may be the same model.
  • Further, in the above, an estimation model 142 common to before and during the examination was used, but an estimation model 142 trained on pre-examination medical images and an estimation model 142 trained on in-examination medical images may be prepared separately, and different estimation models 142 may be used depending on whether the time point is before the examination. By preparing separate models for before and during the examination, the estimation accuracy can be improved.
  • Also for the estimation model 142, as with the detection model 141, a correction input of the estimation result may be accepted, and the medical image labeled with the corrected estimation result (presence or absence and cause of the image defect) may be given to the estimation model 142 as training data for re-learning.
  • FIG. 14 is an explanatory diagram of the detection model 141 according to the third embodiment.
  • In the present embodiment, the server 1 generates the detection model 141 by learning training data in which, in addition to the artifact region, the training medical images are labeled with data indicating the image area of an object to be examined (hereinafter referred to as the "object area").
  • An object is an object in the blood vessel (living lumen) that is the target of diagnosis or treatment, such as a plaque.
  • The object is not limited to living tissue existing in the blood vessel, and may be a substance other than living tissue, such as a stent placed in the blood vessel of the subject (patient).
  • Specifically, in addition to the artifact data (the coordinate range of the artifact region and the type of the artifact), or instead of the artifact data, data relating to the object, namely the coordinate range of the object area and the type of the object, is attached to the training medical images.
  • The server 1 generates the detection model 141 based on the above training data. Since the processing is the same as in the first embodiment except that the object area is added, detailed description is omitted in the present embodiment.
  • When the server 1 acquires a medical image from the diagnostic imaging apparatus 2, the server 1 inputs the medical image into the detection model 141 to detect the artifact region and/or the object region, and outputs the detection result to the diagnostic imaging apparatus 2.
  • FIG. 15 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2 according to the third embodiment.
  • The diagnostic imaging apparatus 2 displays a second medical image showing the object area in addition to the artifact area and presents it to the user.
  • When the artifact region and the object region are detected at the same time, the server 1 generates a second medical image in which the display modes (display colors) of the regions differ from each other, and displays it on the diagnostic imaging apparatus 2.
  • The server 1 may also determine the size of the object from the coordinate values of the object area and display it together.
  • FIG. 16 is a flowchart showing a procedure for generating the detection model 141 according to the third embodiment.
  • The control unit 11 of the server 1 acquires training data in which the training medical images are labeled with data relating to the artifact region and/or the object region (step S301). Based on the training data, the control unit 11 generates a detection model 141 that detects the artifact region and/or the object region when a medical image is input (step S302). The control unit 11 then ends the series of processes.
  • FIG. 17 is a flowchart showing the procedure of the artifact and object detection process. The steps that overlap with the flowchart of FIG. 7 are designated by the same reference numerals and the description thereof will be omitted.
  • After acquiring a medical image from the diagnostic imaging apparatus 2 (step S31), the control unit 11 of the server 1 executes the following processing. The control unit 11 inputs the acquired medical image into the detection model 141 and detects the artifact region and/or the object region in the medical image (step S321).
  • The control unit 11 determines whether an artifact region and/or an object region was detected in step S321 (step S322). When it is determined that neither was detected (S322: NO), the control unit 11 displays the original medical image as it is on the diagnostic imaging apparatus 2 (step S323) and shifts the process to step S37.
  • When it is determined that an artifact region and/or an object region was detected (S322: YES), the control unit 11 generates a second medical image in which the artifact region and/or the object region has been processed (step S324). The control unit 11 outputs the generated second medical image to the diagnostic imaging apparatus 2 and displays it (step S325). The control unit 11 shifts the process to step S37.
  • As described above, according to the third embodiment, an artifact and an object can be detected simultaneously from the medical image and presented to the user, making it possible to identify which is the desired object and which is an artifact.
  • FIG. 18 is a block diagram showing a configuration example of the server 1 according to the fourth embodiment.
  • The auxiliary storage unit 14 of the server 1 according to the present embodiment stores a generative model 143.
  • The generative model 143 is a machine learning model trained on training data in the same manner as the detection model 141.
  • The generative model 143 is expected to be used as a program module that functions as a part of artificial intelligence software.
  • FIG. 19 is an explanatory diagram of the generative model 143.
  • The generative model 143 is a machine learning model that takes as input a first medical image captured by the diagnostic imaging apparatus 2 and generates a second medical image obtained by converting the first medical image.
  • In the present embodiment, a GAN is used as the generative model 143.
  • The GAN includes a generator that generates output data from input data and a discriminator that discriminates the authenticity of the data generated by the generator; the network is built by the generator and the discriminator learning in competition with each other.
  • In general, the generator of a GAN accepts random noise (a latent variable) as input and generates output data.
  • The discriminator learns to judge the authenticity of data by using true data given for learning and the data generated by the generator.
  • The network is constructed so that, finally, the loss function of the generator is minimized and the loss function of the discriminator is maximized.
  • In the present embodiment, the server 1 generates pix2pix as the generative model 143 for reducing artifacts.
  • In this case, the server 1 uses, as training data for generating the generative model 143, medical images including artifacts and second medical images having fewer artifacts than those medical images.
  • The server 1 gives a training medical image to the generator to generate a second medical image.
  • Then, in the discriminator, the server 1 uses the pair of the medical image and the generated second medical image corresponding to the generator's input and output as fake data, and the pair of the medical image and the second medical image included in the training data as true data.
  • The server 1 generates the generative model 143 by optimizing the parameters so that the loss function of the generator is minimized and the loss function of the discriminator is maximized; a sketch of this competitive training step is given below.
  • Although the generative model 143 has been described above as pix2pix, it may be another GAN having a network structure different from pix2pix, such as CycleGAN or StarGAN.
  • The generative model 143 is also not limited to a GAN, and may be a model based on a neural network such as a VAE (Variational Autoencoder) or a CNN (for example, U-Net), or on another learning algorithm.
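  • The competitive training described above can be sketched as a pix2pix-style step (PyTorch). The generator G and discriminator D are assumed to be defined elsewhere, and the L1 term with weight 100 follows common pix2pix practice rather than anything stated in the publication.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, g_opt, d_opt, x, y_true):
    """One training step: x is a medical image containing artifacts,
    y_true is the paired second medical image with fewer artifacts."""
    y_fake = G(x)

    # Discriminator: (x, y_true) pairs are true data, (x, y_fake) are fake.
    d_opt.zero_grad()
    d_real = D(torch.cat([x, y_true], dim=1))
    d_fake = D(torch.cat([x, y_fake.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()    # the discriminator learns to tell true from fake

    # Generator: fool the discriminator, plus an L1 term as in pix2pix.
    g_opt.zero_grad()
    d_fake = D(torch.cat([x, y_fake], dim=1))
    g_loss = (bce(d_fake, torch.ones_like(d_fake)) +
              100.0 * F.l1_loss(y_fake, y_true))
    g_loss.backward()
    g_opt.step()    # the generator learns to output artifact-reduced images
```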
  • The server 1 inputs the medical image acquired from the diagnostic imaging apparatus 2 into the generative model 143 and generates a second medical image with reduced artifacts. For example, the server 1 first detects artifacts with the detection model 141, and when an artifact is detected in a sequentially acquired tomographic image (frame image), inputs that tomographic image into the generative model 143 to convert it into an image with reduced artifacts, as sketched below. As a result, intravascular tomographic images with reduced artifacts can be presented to the user, and endovascular treatment can be suitably supported.
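  • The per-frame pipeline just described, detecting first and converting only the frames in which artifacts are found, can be sketched as follows; the function names are assumptions.

```python
def reduce_artifacts(frames, detector, generator):
    """Yield one display image per tomographic frame: the converted image
    when an artifact region is detected, the original frame otherwise."""
    for frame in frames:                        # frames in acquisition order
        labels = detector(frame).argmax(dim=1)  # per-pixel artifact labels
        if bool(labels.any()):                  # artifact region detected
            yield generator(frame)              # second image, reduced artifacts
        else:
            yield frame                         # show the original as-is
```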
  • FIG. 20 is a flowchart showing the procedure of the generation process of the generation model 143. Based on FIG. 20, the processing content when the generation model 143 is generated by machine learning will be described.
  • The control unit 11 of the server 1 acquires training data including medical images containing artifacts and second medical images having fewer artifacts than those medical images (step S401). Based on the training data, the control unit 11 generates a generative model 143 that, when a medical image is input, generates a second medical image with the artifacts of that medical image reduced (step S402).
  • The control unit 11 then ends the series of processes.
  • FIG. 21 is a flowchart showing the procedure of the artifact reduction process. The steps that overlap with the flowchart of FIG. 7 are designated by the same reference numerals and the description thereof will be omitted.
  • The control unit 11 of the server 1 inputs the medical image acquired from the diagnostic imaging apparatus 2 into the generative model 143 and generates a second medical image with reduced artifacts (step S421).
  • The control unit 11 outputs the generated second medical image to the diagnostic imaging apparatus 2 and displays it (step S422), and shifts the process to step S37.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

This program executes, in a computer, a process for: acquiring a medical image generated on the basis of a signal detected by means of a catheter inserted into a living body lumen; inputting the acquired medical image to a model, which is trained to output a detection result obtained by detecting an image area corresponding to an artifact within the medical image, when the medical image is input, and detecting the image area corresponding to the artifact; and outputting the detection result in association with the medical image.

Description

プログラム、情報処理方法、情報処理装置及びモデル生成方法Program, information processing method, information processing device and model generation method
 本発明は、プログラム、情報処理方法、情報処理装置及びモデル生成方法に関する。 The present invention relates to a program, an information processing method, an information processing device, and a model generation method.
 医療分野において、診断や治療のために撮影した医用画像にアーチファクトが出現することが問題になっている。アーチファクトとは、目的としていない、あるいは実際に存在しない虚像のことであり、医用画像を撮影する装置や撮影条件などに由来して結像してしまう像である。熟練した医療従事者であればアーチファクトを見分けることができるが、未熟な者である場合、アーチファクトを見分けることが難しく、誤診等の問題を生じ得る。 In the medical field, the appearance of artifacts in medical images taken for diagnosis and treatment has become a problem. An artifact is a virtual image that is not intended or does not actually exist, and is an image that is formed due to a device for capturing a medical image, imaging conditions, or the like. A skilled medical worker can distinguish an artifact, but an inexperienced person may have difficulty in distinguishing an artifact, which may cause problems such as misdiagnosis.
In view of the above circumstances, various methods for dealing with artifacts have been proposed. For example, Patent Document 1 discloses a method of reducing ring-down artifacts from an intravascular tomographic image captured by an ultrasonic diagnostic imaging apparatus, in which a reference frame is determined from a plurality of frames of tomographic images and a tomographic image with reduced ring-down artifacts is generated from the difference between the reference frame and the current frame.
JP-A-2019-165970
However, the invention according to Patent Document 1 extracts the artifacts in the image on a rule basis, and its accuracy is not always good.
In one aspect, an object is to provide a program or the like that can suitably present the artifacts in a medical image.
A program according to one aspect causes a computer to execute processing of: acquiring a medical image generated based on a signal detected by a catheter inserted into a biological lumen; inputting the acquired medical image into a model trained to output, when a medical image is input, a detection result of detecting an image area corresponding to an artifact in the medical image, thereby detecting the image area corresponding to the artifact; and outputting the detection result in association with the medical image.
In one aspect, the artifacts in a medical image can be suitably presented.
  • FIG. 1 is an explanatory diagram showing a configuration example of a diagnostic imaging system.
  • FIG. 2 is a block diagram showing a configuration example of a server.
  • FIG. 3 is an explanatory diagram of artifacts.
  • FIG. 4 is an explanatory diagram of a detection model.
  • FIG. 5 is an explanatory diagram showing an example of a display screen of a diagnostic imaging apparatus.
  • FIG. 6 is a flowchart showing the procedure of detection model generation processing.
  • FIG. 7 is a flowchart showing the procedure of artifact detection processing.
  • FIG. 8 is a block diagram showing a configuration example of a server according to Embodiment 2.
  • FIG. 9 is an explanatory diagram of image defects occurring in the diagnostic imaging apparatus.
  • FIG. 10 is an explanatory diagram of an estimation model.
  • FIG. 11 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus according to Embodiment 2.
  • FIG. 12 is a flowchart showing the procedure of estimation model generation processing.
  • FIG. 13 is a flowchart showing the processing procedure of artifact detection and image defect estimation.
  • FIG. 14 is an explanatory diagram of a detection model according to Embodiment 3.
  • FIG. 15 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus according to Embodiment 3.
  • FIG. 16 is a flowchart showing the procedure of detection model generation processing according to Embodiment 3.
  • FIG. 17 is a flowchart showing the procedure of artifact and object detection processing.
  • FIG. 18 is a block diagram showing a configuration example of a server according to Embodiment 4.
  • FIG. 19 is an explanatory diagram of a generative model.
  • FIG. 20 is a flowchart showing the procedure of generative model generation processing.
  • FIG. 21 is a flowchart showing the procedure of artifact reduction processing.
Hereinafter, the present invention will be described in detail with reference to the drawings showing embodiments thereof.
(Embodiment 1)
FIG. 1 is an explanatory diagram showing a configuration example of a diagnostic imaging system. In the present embodiment, a diagnostic imaging system that detects artifacts from a medical image of the inside of a living lumen of a subject and presents the detected artifacts to a user (medical worker) will be described. The diagnostic imaging system includes an information processing device 1 and a diagnostic imaging device 2. The information processing device 1 and the diagnostic imaging device 2 are communicably connected via a network N such as a LAN (Local Area Network) or the Internet.
The diagnostic imaging device 2 is a device unit for imaging the inside of a living lumen of a subject, for example, a device unit for performing an ultrasonic examination inside a blood vessel of the subject using a catheter 21. The diagnostic imaging device 2 includes the catheter 21, an MDU (Motor Drive Unit) 22, an image processing device 23, and a display device 24. The catheter 21 is a medical instrument inserted into a blood vessel of the subject and includes an imaging core that transmits ultrasonic waves based on a pulse signal and receives reflected waves from inside the blood vessel. The diagnostic imaging device 2 generates a tomographic image (medical image) of the inside of the blood vessel based on the reflected-wave signal received by the catheter 21. The MDU 22 is a drive device to which the catheter 21 is detachably attached; by driving a built-in motor according to the user's operation, it controls the longitudinal and rotational movement of the imaging core of the catheter 21 inserted into the blood vessel. The image processing device 23 is a processing device that processes the reflected-wave data received by the catheter 21 to generate tomographic images; in addition to displaying the generated tomographic images on the display device 24, it includes an input interface for accepting the input of various setting values used when performing an examination.
In the present embodiment, an intravascular examination is described as an example, but the living lumen to be examined is not limited to a blood vessel and may be, for example, an organ such as the intestine. Further, the medical image is not limited to an ultrasonic image and may be, for example, an OCT (Optical Coherence Tomography) image.
The information processing device 1 is a device capable of various kinds of information processing and of transmitting and receiving information, such as a server computer or a personal computer. In the present embodiment, the information processing device 1 is assumed to be a server computer, and in the following it is referred to as the server 1 for brevity. The server 1 may be a local server installed in the same facility (hospital or the like) as the diagnostic imaging device 2, or may be a cloud server communicably connected to the diagnostic imaging device 2 via the Internet or the like. The server 1 functions as a detection device that detects artifacts from the medical images generated (imaged) by the diagnostic imaging apparatus 2, and provides the detection results to the diagnostic imaging apparatus 2. Specifically, as will be described later, the server 1 performs machine learning on training data and prepares a detection model 141 (see FIG. 4) that takes a medical image as input and outputs a detection result of detecting the image areas corresponding to artifacts in the medical image. The server 1 acquires a medical image from the diagnostic imaging apparatus 2 and inputs it into the detection model 141 to detect the image areas corresponding to artifacts. The server 1 outputs the detection result to the diagnostic imaging apparatus 2 and causes it to display guidance so that the artifacts in the medical image can be identified.
In the following description, for convenience, an image area corresponding to an artifact is referred to as an "artifact region".
In the present embodiment, artifact detection is performed on the server 1, which is separate from the diagnostic imaging device 2; however, the detection model 141 generated by machine learning on the server 1 may be installed in the diagnostic imaging device 2 (image processing device 23) so that the diagnostic imaging device 2 itself can detect artifacts.
FIG. 2 is a block diagram showing a configuration example of the server 1. The server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various kinds of information processing, control processing, and the like by reading and executing a program P stored in the auxiliary storage unit 14. The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing. The communication unit 13 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside.
The auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P and other data necessary for the control unit 11 to execute processing. The auxiliary storage unit 14 also stores the detection model 141. The detection model 141 is a machine learning model trained on training data as described above, and is a model that takes a medical image as input and outputs a detection result for the artifact regions.
The auxiliary storage unit 14 may be an external storage device connected to the server 1. Further, the server 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
Further, in the present embodiment, the server 1 is not limited to the above configuration and may include, for example, an input unit that accepts operation input and a display unit that displays images. Further, the server 1 may include a reading unit that reads a portable storage medium 1a such as a CD (Compact Disk)-ROM or DVD (Digital Versatile Disc)-ROM, and may read the program P from the portable storage medium 1a and execute it. Alternatively, the server 1 may read the program P from a semiconductor memory 1b.
FIG. 3 is an explanatory diagram of artifacts. FIG. 3 conceptually illustrates five types of artifacts that occur in medical images.
As already mentioned, an artifact is a virtual image that is not intended for the examination or does not actually exist, and is an image formed due to the device, imaging conditions, and so on. As shown in FIG. 3, artifacts include multiple reflection (echo), ring-down, acoustic shadow, side lobe, NURD (Non-Uniform Rotational Distortion), and the like.
Multiple reflection is a virtual image generated when the ultrasonic waves transmitted from the catheter 21 are reflected repeatedly within the living lumen. The example of FIG. 3 illustrates how ultrasonic waves are reflected by an object M1 (for example, calcified tissue) in a blood vessel, generating an artifact A1. When a hard object M1 is present in the blood vessel, the artifact A1 appears at a position spaced by the same interval as the distance between the object M1 and the catheter 21, and is projected as an image closely resembling the object M1.
Ring-down is a ring-shaped image that appears near the center of the image due to multiple reflections between the transducer and the sheath. The example of FIG. 3 illustrates a ring-shaped artifact A2 appearing at the center of the image. Ring-down is projected as a white ring of a certain width.
Acoustic shadow is a phenomenon in which part of the image drops out in black because the ultrasonic waves are greatly attenuated in the process of propagating radially outward from the catheter 21. In the example of FIG. 3, the region radially outside the object M2 as seen from the catheter 21 dropping out in black is conceptually illustrated as artifact A3. In FIG. 3, for convenience of illustration, the blacked-out region is shown by hatching. Acoustic shadow occurs when a hard object M2 is present in the blood vessel: most of the ultrasonic waves are reflected by the object M2, so the ultrasonic waves transmitted radially outward beyond the object M2 are greatly attenuated.
A side lobe is a weak secondary ultrasonic beam transmitted at a certain angle to the main lobe (main beam) of the ultrasonic waves transmitted with a certain directivity. Due to side lobes, an actual object M3 in the blood vessel (for example, a stent) is projected as an image larger than the real thing. In the example of FIG. 3, the image caused by the side lobe is illustrated as artifact A4. The artifact A4 is generated when the catheter 21 simultaneously receives the reflected wave from the object M3 on the side lobe and the reflected wave from the main lobe.
NURD is image distortion caused by the drive shaft of the catheter 21 not rotating uniformly. NURD occurs due to bending within the blood vessel, twisting of the shaft of the catheter 21, or the like. In the example of FIG. 3, the portion where the left half of the image is distorted due to uneven rotation speed is illustrated as artifact A5, surrounded by a broken line.
In the present embodiment, the server 1 detects the various artifacts described above from medical images. Specifically, as described below, the server 1 detects artifacts using the detection model 141, which has been trained on artifact regions in medical images.
Multiple reflection, ring-down, and so on are all examples of artifacts, and the artifacts to be detected are not limited to these.
FIG. 4 is an explanatory diagram of the detection model 141. The detection model 141 is a machine learning model that takes a medical image as input and outputs a detection result of detecting the artifact regions in the medical image. The server 1 performs machine learning on predetermined training data to generate the detection model 141 in advance. The server 1 then acquires a medical image of the subject from the diagnostic imaging apparatus 2 and inputs it into the detection model 141 to detect the artifact regions in the medical image. The detection model 141 will be described with reference to FIG. 4.
The detection model 141 is, for example, a neural network model generated by deep learning, specifically a CNN (Convolutional Neural Network) that extracts the features of an input image through a large number of convolution layers. The detection model 141 includes an intermediate layer (hidden layers) in which convolution layers that convolve the pixel information of the input image and pooling layers that map the convolved pixel information are alternately connected, and extracts the feature amount (feature map) of the input image.
In the present embodiment, the detection model 141 is described as a CNN, but it may be a model based on another learning algorithm, such as a GAN (Generative Adversarial Network), RNN (Recurrent Neural Network), SVM (Support Vector Machine), or decision tree.
In the present embodiment, the server 1 generates a detection model 141 that identifies, on a pixel-by-pixel basis, whether each pixel in the input medical image is a pixel corresponding to an artifact region. For example, the server 1 generates a semantic segmentation model, MASK R-CNN (Region CNN), or the like as the detection model 141.
The semantic segmentation model is a type of CNN and a type of encoder-decoder model that generates output data from input data. In addition to convolution layers that compress the input image data, the semantic segmentation model includes deconvolution layers that map (enlarge) the compressed features back to the original image size. The deconvolution layers identify which object exists at which position in the image based on the features extracted by the convolution layers, and generate a label image indicating, pixel by pixel, which object each pixel corresponds to.
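For illustration, a minimal encoder-decoder of this kind might look like the following sketch. This is a toy network, not the actual detection model 141; the layer sizes and the six-class output (background plus five artifact types) are assumptions.

```python
import torch
from torch import nn

class TinySegmenter(nn.Module):
    """Toy encoder-decoder in the spirit of the semantic segmentation model:
    convolution layers compress the image, deconvolution layers map the
    features back to the input size as a per-pixel label map."""
    def __init__(self, n_classes=6):          # e.g. background + 5 artifact types
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )
    def forward(self, x):                     # x: (B, 1, H, W)
        return self.decoder(self.encoder(x))  # (B, n_classes, H, W) logits

logits = TinySegmenter()(torch.randn(1, 1, 256, 256))
mask = logits.argmax(dim=1)                   # per-pixel artifact label
```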
MASK R-CNN is a variant of Faster R-CNN, which is mainly used for object detection, and has a configuration in which deconvolution layers are connected to Faster R-CNN. In MASK R-CNN, the image features extracted by the CNN and the information on the coordinate region of the target object extracted by an RPN (Region Proposal Network) are input to the deconvolution layers, which finally generate a mask image masking the coordinate region of the object in the input image.
The server 1 generates such a model as the detection model 141 and uses it for artifact detection. The above models are merely examples, and the detection model 141 need only be able to identify the position and shape of artifacts in the medical image. In the present embodiment, as an example, the detection model 141 is described as a semantic segmentation model.
The server 1 performs training using training data in which data indicating the artifact regions is labeled on medical images for training. Specifically, in the training data, labels (metadata) indicating the coordinate range corresponding to each artifact region and the type of the artifact are attached to the training medical images.
The server 1 inputs a training medical image into the detection model 141 and acquires, as output, a detection result of detecting the artifact regions. Specifically, as shown by hatching on the right side of the detection model 141 in FIG. 4, it acquires as output a label image in which each pixel in an artifact region is labeled with data indicating the type of the artifact.
The server 1 compares the detection result output from the detection model 141 with the coordinate ranges of the correct artifact regions and the artifact types indicated by the training data, and optimizes parameters such as the weights between neurons so that the two approximate each other. The server 1 thereby generates the detection model 141.
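A single optimization step of the kind described above might be sketched as follows, assuming a per-pixel cross-entropy loss and the class indexing shown; both are assumptions, as the embodiment only states that parameters are optimized so that the output and the labels approximate each other.

```python
from torch import nn

# Hypothetical class indices for the five artifact types plus background.
CLASSES = {"background": 0, "multiple_reflection": 1, "ring_down": 2,
           "acoustic_shadow": 3, "side_lobe": 4, "nurd": 5}

def training_step(model, optimizer, image, label_map):
    """One optimization step: the model's per-pixel detection result is
    compared with the labeled artifact regions, and the weights between
    neurons are updated so the two converge (cross-entropy assumed)."""
    optimizer.zero_grad()
    logits = model(image)                                   # (B, n_classes, H, W)
    loss = nn.functional.cross_entropy(logits, label_map)   # label_map: (B, H, W) long
    loss.backward()
    optimizer.step()
    return loss.item()
```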
In the present embodiment, the detection model 141 accepts as input a plurality of frames of medical images (a moving image) that are consecutive in time series, and detects artifacts from the medical image of each frame. Specifically, the detection model 141 accepts as input a plurality of frames of medical images that are consecutive along the longitudinal direction of the blood vessel according to the scanning of the catheter 21. The detection model 141 detects artifacts from the medical image of each frame consecutive along the time axis t.
In the following description, for convenience, the medical images of the consecutive frames are simply referred to as "frame images".
The server 1 may input the frame images into the detection model 141 one at a time for processing, but it is preferable to input a plurality of consecutive frame images simultaneously so that artifact regions can be detected from the plurality of frame images at once. For example, the server 1 configures the detection model 141 as a 3D-CNN (for example, 3D U-Net) that handles three-dimensional input data. The server 1 then handles the data as three-dimensional data in which the coordinates of the two-dimensional frame images form two axes and the time (generation time point) t at which each frame image was acquired forms one axis. The server 1 inputs a set of frame images covering a predetermined unit of time (for example, 16 frames) into the detection model 141, and simultaneously outputs, for each of the frame images, an image in which the artifact regions are labeled. As a result, artifacts can be detected while also taking into account the preceding and following frame images in the time series, and the detection accuracy can be improved.
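For illustration, the shaping of a 16-frame unit into three-dimensional input for a 3-D convolution could look like the following sketch; the tensor layout follows PyTorch's (batch, channel, time, height, width) convention, and the channel count is an assumption.

```python
import torch
from torch import nn

# A unit of 16 consecutive frames is treated as one 3-D sample so that
# Conv3d kernels can mix information from neighboring frames in time.
frames = torch.randn(16, 1, 256, 256)             # 16 grayscale frame images
volume = frames.permute(1, 0, 2, 3).unsqueeze(0)  # -> (1, 1, 16, 256, 256)

conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
features = conv3d(volume)                         # (1, 8, 16, 256, 256)
```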
In the above, time-series frame images are made processable by treating them as three-dimensional data including the time axis, but the present embodiment is not limited to this. For example, the server 1 may make it possible to detect artifact regions from a plurality of consecutive frame images by configuring the detection model 141 as a model combining a CNN and an RNN. In this case, for example, an LSTM (Long Short-Term Memory) layer is inserted after the intermediate layer of the CNN, and artifact regions are detected with reference to the features extracted from the preceding frame images. In this case as well, as above, the processing can take the preceding and following frame images into account, and the detection accuracy can be improved.
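A rough sketch of such a CNN+LSTM combination follows; for brevity it produces one artifact score per frame rather than a per-pixel label map, so it illustrates the idea rather than the detection model 141 itself, and all layer sizes are assumptions.

```python
import torch
from torch import nn

class CnnLstmDetector(nn.Module):
    """Sketch of the CNN+RNN variant: a small CNN encodes each frame, an LSTM
    carries features across frames, and a linear head scores each frame."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 8*4*4 = 128 features
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)
    def forward(self, x):                            # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                    # carries earlier-frame context
        return self.head(out)                        # (B, T, 1) artifact scores

scores = CnnLstmDetector()(torch.randn(2, 16, 1, 256, 256))
```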
The server 1 learns the training data as described above and generates the detection model 141. When actually performing artifact detection on a subject's medical images, the server 1 acquires the medical images from the diagnostic imaging apparatus 2 and inputs them into the detection model 141 to detect the artifact regions and the artifact types. The artifacts may be detected in real time during the examination, or the medical images (moving image) recorded after the examination may be acquired collectively and the artifacts detected then. In the present embodiment, as an example, artifact detection is described as being performed in real time during the examination.
The server 1 outputs the artifact detection results to the diagnostic imaging apparatus 2. The server 1 then causes a guidance display that presents the artifact regions in the medical image to the user, as follows.
In the present embodiment, the output destination of the detection results is described as the diagnostic imaging apparatus 2, but the results may of course be output to a device other than the diagnostic imaging apparatus 2 from which the medical images were acquired (for example, a personal computer) and the guidance displayed there.
FIG. 5 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2. FIG. 5 illustrates an example of the display screen displayed by the diagnostic imaging apparatus 2 when an artifact is detected.
When an artifact region is detected on the server 1, the diagnostic imaging apparatus 2 displays the detection result of the artifact region in association with the medical image. Specifically, as shown by hatching in FIG. 5, the diagnostic imaging apparatus 2 displays a second medical image showing the detected artifact region in a display mode (for example, color display) different from that of the other image regions.
The second medical image is a medical image in which the artifact regions have been processed so as to be distinguishable from the other regions, and is an image in which the label image output from the detection model 141 is superimposed on the original medical image. When an artifact region is detected, the server 1 generates the second medical image and outputs it to the diagnostic imaging apparatus 2. For example, the server 1 processes the label image into a semi-transparent mask in a display color other than black and white, and superimposes it on the artifact regions of the medical image, which is expressed in black and white, to generate the second medical image.
In this case, it is preferable that the server 1 changes the display mode (display color) according to the type of artifact. This allows the user to intuitively grasp the various artifacts arising from different causes, improving convenience.
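For illustration, the superimposition of a semi-transparent, per-type colored mask could be sketched as follows in Python/NumPy; the color table and blending factor are assumptions.

```python
import numpy as np

# Assumed color table: one display color per artifact class index (RGB).
COLORS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255),
          4: (255, 255, 0), 5: (255, 0, 255)}

def make_second_image(gray, label, alpha=0.5):
    """Overlay a semi-transparent colored mask on the grayscale medical
    image: pixels labeled as artifact are blended with the type's color."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    for cls, color in COLORS.items():
        m = label == cls
        rgb[m] = (1 - alpha) * rgb[m] + alpha * np.array(color, dtype=np.float32)
    return rgb.astype(np.uint8)

second = make_second_image(np.zeros((256, 256), dtype=np.uint8),
                           np.ones((256, 256), dtype=np.int64))
```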
Although the artifact regions are displayed in color above, the present embodiment is not limited to this; for example, the outline (edge) of an artifact region may be highlighted instead. In this way, it is sufficient that an artifact region can be displayed so as to be distinguishable from the other image regions, and the display mode is not particularly limited.
The diagnostic imaging apparatus 2 displays the second medical image and notifies the user that an artifact has occurred. It also displays a label name indicating the type of each artifact in association with the display color of the artifact region (the type of hatching in FIG. 5).
In the above, artifact regions are detected and displayed on a pixel-by-pixel basis, but the present embodiment is not limited to this. For example, an artifact region may simply be surrounded by a bounding box (rectangular frame) and displayed. Thus, detecting artifact regions on a pixel-by-pixel basis is not essential; it suffices that the locations corresponding to artifacts can be detected and displayed.
FIG. 6 is a flowchart showing the procedure of the generation process of the detection model 141. Based on FIG. 6, the processing performed when generating the detection model 141 by machine learning will be described.
The control unit 11 of the server 1 acquires training data in which artifact regions are labeled on medical images for training (step S11). Specifically, as described above, it acquires training data in which labels (metadata) indicating the coordinate range of each artifact region and the type of the artifact are attached to the training medical images.
Based on the training data, the control unit 11 generates a detection model 141 that outputs, when a medical image is input, a detection result of detecting the artifact regions and artifact types (step S12). Specifically, as described above, the control unit 11 generates, as the detection model 141, a semantic segmentation model that identifies objects in the medical image on a pixel-by-pixel basis. The control unit 11 inputs a training medical image into the detection model 141 and acquires as output a detection result of detecting the artifact regions and artifact types. The control unit 11 compares the detection result with the correct values (correct labels) and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the detection model 141. The control unit 11 then ends the series of processes.
FIG. 7 is a flowchart showing the procedure of the artifact detection process. Based on FIG. 7, the processing performed when detecting artifacts from a subject's medical image will be described.
The control unit 11 of the server 1 acquires a medical image of the subject from the diagnostic imaging apparatus 2 (step S31). The control unit 11 inputs the acquired medical image into the detection model 141 to detect the artifact regions and artifact types (step S32).
The control unit 11 determines whether an artifact region was detected in step S32 (step S33). When determining that no artifact region was detected (S33: NO), the control unit 11 causes the diagnostic imaging apparatus 2 to display the original medical image as it is (step S34). When determining that an artifact region was detected (S33: YES), the control unit 11 generates a second medical image in which the artifact regions have been processed (step S35). Specifically, as described above, the control unit 11 generates a second medical image that displays each artifact region in a display mode that differs according to the type of artifact.
The control unit 11 outputs the detection result of the artifact regions to the diagnostic imaging apparatus 2 and causes it to be displayed in association with the medical image (step S36). Specifically, as described above, the control unit 11 causes the diagnostic imaging apparatus 2 to display the second medical image.
The control unit 11 determines whether the examination by the diagnostic imaging apparatus 2 has ended (step S37). When determining that the examination has not ended (S37: NO), the control unit 11 returns the process to step S31. When determining that the examination has ended (S37: YES), the control unit 11 ends the series of processes.
Although the display of artifact regions has been described above, the server 1 may further accept input from the user correcting the detection result of an artifact region and perform relearning based on the corrected artifact region information. Specifically, on the display screen illustrated in FIG. 5, the server 1 accepts input as to whether a region displayed as an artifact actually is an artifact. Further, when the displayed coordinate range, type, or the like of an artifact differs from the actual one, the server 1 accepts input of the correct artifact type, coordinate range, and so on. When a correction to the detection result is received, the server 1 performs relearning using the medical image labeled with the corrected detection result (artifact region and type) as training data, and updates the detection model 141. This makes it possible to improve the artifact detection accuracy through the operation of the present system.
As described above, according to the first embodiment, artifacts in a medical image can be detected with high accuracy and presented to the user.
Further, according to the first embodiment, by displaying the second medical image in which the artifact regions have been processed, the artifacts in the medical image can be suitably presented to the user.
Further, according to the first embodiment, by simultaneously processing a plurality of consecutive frame images to detect artifacts, the detection accuracy can be improved by also taking the preceding and following frame images into account.
(Embodiment 2)
In the present embodiment, in addition to detecting artifacts, the presence or absence of image defects caused by improper use, damage, failure, or the like of the diagnostic imaging apparatus 2 is estimated, and guidance information for removing the cause of the image defect is presented to the user. Contents overlapping with Embodiment 1 are given the same reference numerals and their description is omitted.
FIG. 8 is a block diagram showing a configuration example of the server 1 according to Embodiment 2. The auxiliary storage unit 14 of the server 1 according to the present embodiment stores an estimation model 142 for estimating image defects. Like the detection model 141, the estimation model 142 is a machine learning model trained on training data, and is a model that takes a medical image as input and outputs the presence or absence and the cause of an image defect in that medical image. The estimation model 142 is expected to be used as a program module that functions as part of artificial intelligence software.
 図9は、画像診断装置2において発生する画像不良に関する説明図である。図9に基づき、本実施の形態で推定対象とする画像不良について説明する。
 画像診断装置2で生成する医用画像には、画像診断装置2の不適切な使用、破損、故障等に起因して、種々の画像不良が発生し得る。図9では、画像診断装置2で発生する代表的な画像不良を、その画像不良の原因箇所と対比する形で例示している。
FIG. 9 is an explanatory diagram regarding an image defect that occurs in the diagnostic imaging apparatus 2. An image defect to be estimated in the present embodiment will be described with reference to FIG.
Various image defects may occur in the medical image generated by the diagnostic imaging apparatus 2 due to improper use, damage, failure, or the like of the diagnostic imaging apparatus 2. In FIG. 9, a typical image defect generated in the image diagnostic apparatus 2 is illustrated in a form of contrasting with the cause portion of the image defect.
For example, causes of image defects include an air trap, disconnection of the drive shaft inside the catheter 21, inhibited rotation of the drive shaft inside the catheter 21, a poor connection between the catheter 21 and the MDU 22, and a failure of the MDU 22. Image defects caused by an air trap occur when air bubbles remain in the air trap at the tip of the catheter 21. If the air bubbles in the air trap are not sufficiently removed by priming before the examination, the bubbles attenuate the ultrasonic waves and part or all of the image becomes dark. Further, when air bubbles are present on the transducer at the tip of the catheter 21, a phenomenon occurs in which the dark portion of the image rotates along with the rotation of the drive shaft. In FIG. 9, for convenience, the partially darkened image is illustrated by hatching.
Further, when the drive shaft of the catheter 21 is disconnected, the entire image becomes dark and the ring-down near the center disappears. When there is a sign of impending disconnection, phenomena such as rotation of the image itself or NURD occur. There are various reasons for disconnection; for example, when the catheter 21 is inserted into a stenosed portion of a blood vessel (a site narrowed by plaque or the like), a kink (bending, twisting, crushing, etc.) of the drive shaft occurs. If the catheter 21 is forcibly moved back and forth while the drive shaft is kinked, disconnection may occur.
Further, when the rotation of the drive shaft of the catheter 21 is inhibited, mosaic-like or scale-like patterns appear in the image. This phenomenon is caused by twisting of the drive shaft; if use is continued with the drive shaft twisted, the rotation is inhibited and image defects occur.
Further, when a poor connection between the catheter 21 and the MDU 22 occurs, phenomena such as the image darkening or radial or sandstorm-like patterns appearing occur. Further, due to a failure of the MDU 22 (for example, a defective encoder or a detached ferrite core), the entire image may darken or the brightness of part of the image (the hatched portion shown at the lower right in FIG. 9) may increase.
In the present embodiment, the server 1 estimates the presence or absence and the cause (type) of these image defects from the medical image. The server 1 then outputs guidance information that guides the user through countermeasures for removing the cause of the image defect. The above image defects and their causes are examples, and are not limited to the above.
FIG. 10 is an explanatory diagram of the estimation model 142. The estimation model 142 is a machine learning model that, when a medical image in which an image defect has occurred is input, outputs an estimation result estimating the cause of the image defect. As with the detection model 141, the server 1 learns training data to generate the estimation model 142 in advance. Then, when the server 1 acquires a medical image from the diagnostic imaging apparatus 2, it inputs the image into the estimation model 142 to estimate the presence or absence and the cause of an image defect. The estimation model 142 will be described with reference to FIG. 10.
As will be described later, the medical image input to the estimation model 142 may be an image captured during an examination with the catheter 21 inserted in the blood vessel (living lumen) of the subject, or may be a test image captured before the examination. The estimation of image defects before and during the examination, and the guidance for image defects according to whether it is before or during the examination, will be described in detail later.
The estimation model 142 is, for example, a CNN; it includes an intermediate layer in which convolution layers and pooling layers are alternately connected, and extracts the feature amount (feature map) of the input image. The estimation model 142 according to the present embodiment is a CNN that handles a classification problem and does not include deconvolution layers like the semantic segmentation model.
Although the estimation model 142 is described as a CNN in the present embodiment, it may be a model based on another learning algorithm, such as a GAN, RNN, SVM, or decision tree.
The server 1 performs training using training data in which each medical image for training is labeled with data indicating the presence or absence of an image defect in that medical image and, if there is an image defect, its cause. Specifically, each training medical image is given one of the labels (metadata) "normal", indicating that the image is normal, or "air trap", "connection failure", "disconnection", "rotation inhibition", or "MDU failure", indicating the cause of the image defect. The server 1 feeds the training data to the estimation model 142 and performs training.
In the present embodiment, normal medical images are also learned as training data, but normal medical images may be excluded from the training data so that only medical images in which image defects have occurred are learned. In this case, the server 1 comprehensively judges the probability values of the respective image defects; for example, if the probability values of all image defects are at or below a threshold (for example, 70% or less), it may estimate the image to be normal. Alternatively, the user may visually judge the presence or absence of an image defect and, when judging that there is an image defect, transmit the image to the server 1 to have the server 1 execute the estimation process. Thus, the estimation model 142 need only be able to estimate the cause of an image defect when at least a medical image in which an image defect has occurred is input, and a configuration that also estimates the presence or absence of an image defect is not essential.
The server 1 inputs a training medical image into the estimation model 142 and acquires as output an estimation result estimating the presence or absence and the cause of an image defect. Specifically, it acquires as output the probability values corresponding to the respective labels such as "normal" and "air trap". The output from the estimation model 142 need not be probability values; it may be binary values ("0" or "1") indicating whether each label applies.
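For illustration, interpreting such per-label outputs with a 70% threshold might look like the following sketch; the label order and the exact decision rule are assumptions made for this sketch.

```python
import torch

LABELS = ["normal", "air_trap", "connection_failure",
          "disconnection", "rotation_inhibition", "mdu_failure"]

def interpret(logits, threshold=0.7):
    """Turn the estimation model's output into a verdict: softmax over the
    six labels, then report the most probable cause; if the winning defect
    label does not clear the threshold, treat the image as normal."""
    probs = torch.softmax(logits, dim=-1)
    cause = int(torch.argmax(probs))
    if cause != 0 and probs[cause] < threshold:
        return "normal", probs
    return LABELS[cause], probs

verdict, probs = interpret(torch.tensor([0.2, 2.5, 0.1, 0.1, 0.1, 0.1]))
```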
The server 1 compares the estimation result output from the estimation model 142 with the correct values of the training data and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the estimation model 142.
As with the detection model 141, it is preferable that the estimation model 142 can also perform estimation from a plurality of frame images consecutive in time series. In this case, as with the detection model 141, the estimation model 142 may be a 3D-CNN (for example, C3D) or a model combining a CNN and an RNN.
Further, in the present embodiment, in addition to the medical image, operation information of the diagnostic imaging apparatus 2 at the time the medical image was generated is used as input to the estimation model 142. The operation information is a log showing how the user operated the diagnostic imaging apparatus 2, and is data from which the examination status of the subject using the diagnostic imaging apparatus 2 can be identified.
Specifically, from the operation information at the time the medical image was generated, the server 1 determines whether that time is before the examination or during (or after) the examination. The server 1 then inputs the determination result of before or during the examination into the estimation model 142 together with the medical image corresponding to that time. "Before the examination" represents a state in which the catheter 21 is not inserted in the subject's blood vessel (pre-examination test), and "during the examination" represents a state in which the catheter 21 is inserted in the subject's blood vessel.
For example, the server 1 inputs binary data indicating whether it is before or during the examination into the estimation model 142 as a categorical variable indicating an attribute of the medical image. The training data includes operation information as input data in association with each medical image, and the server 1 also inputs the before/during-examination determination result judged from the operation information into the estimation model 142 for training.
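One plausible wiring of such a categorical variable, sketched below, concatenates the binary before/during-examination flag with the CNN's image features before the classification layer; the embodiment does not prescribe where the flag enters the network, so the architecture here is an assumption.

```python
import torch
from torch import nn

class EstimatorWithContext(nn.Module):
    """Sketch of feeding the pre-/during-examination flag into the model:
    the binary categorical variable is concatenated to the CNN's image
    features before classification."""
    def __init__(self, n_labels=6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())     # -> 128 image features
        self.head = nn.Linear(128 + 1, n_labels)
    def forward(self, image, during_exam):             # during_exam: (B, 1) 0/1
        feats = self.cnn(image)
        return self.head(torch.cat([feats, during_exam], dim=1))

logits = EstimatorWithContext()(torch.randn(2, 1, 256, 256),
                                torch.tensor([[0.0], [1.0]]))
```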
In general, in the diagnostic imaging apparatus 2 there are image defects that tend to occur during an examination and image defects that occur regardless of whether an examination is in progress. For example, image defects caused by the above-mentioned disconnection, rotation inhibition, and the like tend to occur during an examination in which the catheter 21 is being operated. On the other hand, image defects caused by an air trap, poor connection, and the like occur regardless of whether an examination is in progress, and so are easy to find even before the examination. Therefore, the estimation accuracy can be improved by having the estimation model 142 learn the examination status at the time of medical image generation as well.
The server 1 learns the training data as described above and generates the estimation model 142. When the server 1 acquires a medical image from the diagnostic imaging apparatus 2, in addition to detecting artifacts using the detection model 141, it inputs the medical image into the estimation model 142 to estimate the presence or absence and the cause of an image defect. When estimating that there is an image defect, the server 1 outputs the estimation result of the image defect and guidance information for removing the estimated cause of the image defect to the diagnostic imaging apparatus 2.
FIG. 11 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2 according to Embodiment 2. FIG. 11 illustrates an example of the display screen of the diagnostic imaging apparatus 2 when an image defect has occurred. As an example, FIG. 11 shows the display screen when it is estimated during the examination that there is a sign of damage to the catheter 21 (for example, disconnection).
As in Embodiment 1, the diagnostic imaging apparatus 2 displays a medical image (tomographic image) imaging the inside of the subject's blood vessel. When it is estimated that there is an image defect, the diagnostic imaging apparatus 2 displays an alert with the estimation result of the image defect according to the output from the server 1.
The diagnostic imaging apparatus 2 displays guidance information that guides countermeasures for removing the cause of the image defect. For example, when it is estimated that there is a sign of disconnection of the catheter 21, the diagnostic imaging apparatus 2 guides the user on how to operate the catheter 21, such as slowly pushing the catheter 21 forward while checking the displayed image. If the image defect is not resolved by this operation, the diagnostic imaging apparatus 2 guides the user to replace the catheter 21.
The server 1 may also generate a third medical image that visualizes the characteristic portion of the image that served as the basis for estimating the image defect, and display it on the diagnostic imaging apparatus 2. The third medical image is an image showing the image region that the estimation model 142 referred to as a characteristic portion when estimating the image defect, for example an image showing that region as a heat map.
For example, the server 1 generates the third medical image using the Grad-CAM technique. Grad-CAM is a technique for visualizing which parts of the input image the CNN captured as features, that is, a technique for extracting the image portions that contribute most to the output. In Grad-CAM, portions with large gradients when features are extracted in the intermediate layers of the CNN are regarded as characteristic portions and are extracted.
Specifically, the server 1 inputs the output values from the output layer of the estimation model 142 (CNN) (the probability values of each label) and the gradient data with respect to the last convolution layer of the intermediate layers into an activation function to generate a heat map. The server 1 superimposes the generated heat map on the original medical image to generate the third medical image. As illustrated in the lower right of FIG. 11, the server 1 displays the third medical image side by side with the original medical image.
Although Grad-CAM has been described above, the third medical image may be generated using other techniques such as Guided Grad-CAM. By displaying the third medical image, the basis on which the estimation model 142 estimated the image defect can be presented to the user, who can then check whether the estimation result is correct.
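For reference, a minimal sketch of the Grad-CAM computation described above, assuming PyTorch and a CNN whose forward pass takes the image alone; the hook-based implementation and the bilinear upsampling to the image size are assumptions, as this disclosure does not fix an implementation.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, target_class, conv_layer):
        # Capture the last conv layer's activations and gradients via hooks.
        acts, grads = {}, {}
        h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        score = model(image)[0, target_class]  # output value of one label
        model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()
        # Channel weights = global average of the gradients.
        w = grads["g"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                            align_corners=False)
        return cam / (cam.max() + 1e-8)  # heat map normalized to [0, 1]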
Although FIG. 11 illustrates the case where only an image defect is estimated (detected), when an artifact is also detected at the same time, the artifact detection result and the basis for estimating the image defect may be displayed simultaneously on the medical image. In this case, for example, the server 1 may superimpose the semi-transparent mask corresponding to the artifact region and the heat map corresponding to the image defect on the same medical image.
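One conceivable way to composite both overlays is sketched below, assuming OpenCV and NumPy, a grayscale tomographic image, a boolean artifact mask, and a heat map normalized to [0, 1]; the colors and blending weights are arbitrary choices, not prescribed by this disclosure.

    import cv2
    import numpy as np

    def overlay(image_gray, artifact_mask, heatmap, alpha=0.4):
        # Blend a colored heat map (image-defect basis) over the whole frame.
        base = cv2.cvtColor(image_gray, cv2.COLOR_GRAY2BGR)
        hm = cv2.applyColorMap((heatmap * 255).astype(np.uint8), cv2.COLORMAP_JET)
        out = cv2.addWeighted(base, 1 - alpha, hm, alpha, 0)
        # Paint the artifact region with a semi-transparent flat color.
        color = np.zeros_like(base)
        color[...] = (255, 255, 0)  # cyan in BGR order
        m = artifact_mask.astype(bool)
        out[m] = (0.5 * out[m] + 0.5 * color[m]).astype(np.uint8)
        return out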
FIG. 12 is a flowchart showing the procedure for generating the estimation model 142. The processing performed when training on the training data to generate the estimation model 142 will be described with reference to FIG. 12.
The control unit 11 of the server 1 acquires training data in which data indicating the presence or absence and cause of image defects in a medical image is attached to the training medical image and operation information (step S201). Based on the training data, the control unit 11 generates the estimation model 142, which outputs an estimation result estimating the presence or absence and cause of an image defect when a medical image is input (step S202). For example, the control unit 11 generates a CNN model as the estimation model 142, as described above. The control unit 11 inputs the training medical image and the before/during-examination determination result derived from the operation information into the estimation model 142, and obtains as output an estimation result estimating the presence or absence and cause of an image defect. The control unit 11 compares the estimation result with the correct value and optimizes parameters such as the weights between neurons so that the two approximate each other, thereby generating the estimation model 142. The control unit 11 then ends the series of processes.
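As a rough illustration of the parameter optimization in step S202, the sketch below assumes PyTorch, the hypothetical DefectEstimator shown earlier, cross-entropy as the measure of how far the estimate is from the correct value, and Adam as the optimizer; none of these choices are specified in this disclosure.

    import torch

    model = DefectEstimator(num_causes=4)  # e.g. disconnection, rotation inhibition, air trap, poor connection
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    def train_step(images, in_exam_flags, labels):
        # Compare the estimate with the correct value and update the
        # weights between neurons so the two approximate each other.
        opt.zero_grad()
        logits = model(images, in_exam_flags)
        loss = loss_fn(logits, labels)
        loss.backward()
        opt.step()
        return loss.item()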
FIG. 13 is a flowchart showing the processing procedure of artifact detection and image defect estimation. Steps that overlap with the flowchart of FIG. 7 are denoted by the same reference signs and their description is omitted.
After acquiring a medical image from the diagnostic imaging apparatus 2 (step S31), the control unit 11 of the server 1 executes the following processing. The control unit 11 acquires the operation information of the diagnostic imaging apparatus 2 at the time the medical image was generated (step S221). The control unit 11 inputs the acquired medical image and the before/during-examination determination result derived from the operation information into the estimation model 142, and estimates the presence or absence and cause of an image defect in the medical image (step S222).
The control unit 11 determines whether or not there is an image defect based on the estimation result of step S222 (step S223). When determining that there is an image defect (S223: YES), the control unit 11 outputs guidance information guiding countermeasures for removing the estimated cause of the image defect to the diagnostic imaging apparatus 2 and displays it (step S224). Specifically, as described above, the server 1 displays an alert indicating that an image defect has occurred, and displays guidance information on, for example, how to operate the catheter 21 to remove the cause of the image defect.
When determining that there is no image defect (S223: NO), the control unit 11 moves the process to step S32.
Although the detection model 141 and the estimation model 142 have been described above as separate models, the two may be a single model.
Further, although a common estimation model 142 is used before and during the examination above, an estimation model 142 trained on pre-examination medical images and an estimation model 142 trained on in-examination medical images may be prepared separately, and different estimation models 142 may be used depending on whether or not the examination has started. Preparing separate models for before and during the examination can improve the estimation accuracy.
As with the detection model 141, the estimation model 142 may also accept correction input for its estimation results, and medical images labeled with the corrected estimation results (presence or absence and cause of image defects) may be given to the estimation model 142 as training data for retraining.
As described above, according to the second embodiment, in addition to detecting artifacts, image defects caused by improper use, damage, failure, and the like of the diagnostic imaging apparatus 2 can be estimated at the same time.
(Embodiment 3)
In the present embodiment, a mode in which, in addition to artifacts, a predetermined object in the living lumen to be examined is detected from the medical image will be described.
FIG. 14 is an explanatory diagram of the detection model 141 according to the third embodiment. In the present embodiment, the server 1 learns training data in which, in addition to artifact regions, data indicating the image regions of objects to be examined (hereinafter referred to as "object regions") are labeled on the training medical images, and generates the detection model 141. An object is an object within the blood vessel (living lumen) that is the target of diagnosis or treatment, for example a plaque.
Objects are not limited to living tissue present in the blood vessel, and may be substances other than living tissue, such as a stent placed in the blood vessel of the subject (patient).
In the training data, data about objects is attached to the training medical images in addition to, or instead of, the artifact data (the coordinate range of the artifact region and the type of the artifact). Specifically, as shown on the right side of the detection model 141, when an artifact is present in the image, data indicating the coordinate range of the artifact region and the type of the artifact is attached, and when an object is present, data indicating the coordinate range of the object region and the type of the object is attached.
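A hypothetical example of what one labeled training sample could look like is sketched below in Python; the field names, the box convention, and the type vocabulary are all illustrative, since this disclosure does not define a serialization format.

    # One training frame: coordinate ranges plus type labels for
    # artifacts and objects (all field names are hypothetical).
    sample = {
        "image": "frame_0123.png",
        "artifacts": [
            {"bbox": [112, 40, 196, 88], "type": "ring_down"},  # x1, y1, x2, y2
        ],
        "objects": [
            {"bbox": [60, 150, 140, 210], "type": "plaque"},
            {"bbox": [30, 90, 220, 240], "type": "stent"},
        ],
    }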
The server 1 generates the detection model 141 based on the above training data. Since this is the same as the first embodiment except that object regions are added, a detailed description is omitted in the present embodiment. When the server 1 acquires a medical image from the diagnostic imaging apparatus 2, it inputs the image into the detection model 141 to detect artifact regions and/or object regions, and outputs the detection result to the diagnostic imaging apparatus 2.
FIG. 15 is an explanatory diagram showing an example of a display screen of the diagnostic imaging apparatus 2 according to the third embodiment. In the present embodiment, the diagnostic imaging apparatus 2 displays a second medical image showing object regions in addition to artifact regions and presents it to the user. When the server 1 detects an artifact region and an object region at the same time, it generates a second medical image in which the display modes (display colors) of the regions differ from each other, and displays it on the diagnostic imaging apparatus 2. For example, the server 1 may determine the size of an object and the like from the coordinate values of the object region and display them as well.
FIG. 16 is a flowchart showing the procedure for generating the detection model 141 according to the third embodiment.
The control unit 11 of the server 1 acquires training data in which data on artifact regions and/or object regions is labeled on the training medical images (step S301). Based on the training data, the control unit 11 generates the detection model 141, which detects artifact regions and/or object regions when a medical image is input (step S302). The control unit 11 then ends the series of processes.
FIG. 17 is a flowchart showing the procedure of the artifact and object detection process. Steps that overlap with the flowchart of FIG. 7 are denoted by the same reference signs and their description is omitted.
After acquiring a medical image from the diagnostic imaging apparatus 2 (step S31), the control unit 11 of the server 1 executes the following processing. The control unit 11 inputs the acquired medical image into the detection model 141 and detects artifact regions and/or object regions in the medical image (step S321).
The control unit 11 determines whether or not an artifact region and/or an object region was detected in step S321 (step S322). When determining that no artifact region or object region was detected (S322: NO), the control unit 11 displays the original medical image as-is on the diagnostic imaging apparatus 2 (step S323) and moves the process to step S37.
When determining that an artifact region and/or an object region was detected (S322: YES), the control unit 11 generates a second medical image in which the artifact region and/or the object region is processed (step S324). The control unit 11 outputs the generated second medical image to the diagnostic imaging apparatus 2 and displays it (step S325). The control unit 11 then moves the process to step S37.
Although artifacts and objects are detected by the same detection model 141 above, separate models for detecting each may be provided.
As described above, according to the third embodiment, artifacts and objects can be detected simultaneously from the medical image and presented to the user, allowing the user to identify which is a desired object and which is an artifact.
(Embodiment 4)
In the first embodiment, a mode in which an artifact region is detected using the detection model 141 and a second medical image showing the detected artifact region is generated was described. In the present embodiment, a mode in which a second medical image with reduced artifacts is generated will be described.
FIG. 18 is a block diagram showing a configuration example of the server 1 according to the fourth embodiment. The auxiliary storage unit 14 of the server 1 according to the present embodiment stores a generative model 143. Like the detection model 141, the generative model 143 is a machine learning model that has been trained on training data. The generative model 143 is expected to be used as a program module that functions as part of artificial intelligence software.
FIG. 19 is an explanatory diagram of the generative model 143. The generative model 143 is a machine learning model that takes a first medical image imaged by the diagnostic imaging apparatus 2 as input and generates a second medical image obtained by converting the first medical image. In the present embodiment, a GAN is used as the generative model 143. A GAN comprises a generator that generates output data from input data and a discriminator that discriminates the authenticity of the data generated by the generator; the network is built by having the generator and the discriminator learn in competition with each other.
The generator of a GAN accepts random noise (a latent variable) as input and generates output data. The discriminator learns to judge the authenticity of data given by the generator, using the true data given for training and the data given by the generator. In a GAN, the network is ultimately built so that the loss function of the generator is minimized and the loss function of the discriminator is maximized.
In the present embodiment, the server 1 generates pix2pix as the generative model 143 for artifact reduction. As training data for generating the generative model 143, the server 1 uses medical images containing artifacts and second medical images with fewer artifacts than those medical images. The server 1 gives a training medical image to the generator to generate a second medical image. The server 1 then gives the discriminator the pair of the medical image and the second medical image corresponding to the generator's input and output as fake data, and the pair of the medical image and the second medical image included in the training data as true data, and has it discriminate between true and fake. The server 1 optimizes the parameters so that the loss function of the generator is minimized and the loss function of the discriminator is maximized, thereby generating the generative model 143.
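A condensed sketch of the adversarial step described above is given below, assuming PyTorch and conditional generator/discriminator networks G and D defined elsewhere; the L1 term and its weight follow the common pix2pix formulation and are an assumption beyond what this disclosure states.

    import torch
    import torch.nn as nn

    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    def pix2pix_step(G, D, opt_g, opt_d, x, y, lam=100.0):
        # x: medical image containing artifacts; y: paired image with fewer artifacts.
        fake = G(x)
        # Discriminator: real (input, output) pairs -> true, generated pairs -> fake.
        opt_d.zero_grad()
        pred_real = D(x, y)
        pred_fake = D(x, fake.detach())
        d_loss = (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
        d_loss.backward()
        opt_d.step()
        # Generator: fool the discriminator while staying close to the paired target.
        opt_g.zero_grad()
        pred = D(x, fake)
        g_loss = bce(pred, torch.ones_like(pred)) + lam * l1(fake, y)
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()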
Although the generative model 143 has been described above as pix2pix, it may be another GAN with a network structure different from pix2pix, such as CycleGAN or StarGAN described later.
Further, the generative model 143 is not limited to a GAN, and may be a neural network such as a VAE (Variational Autoencoder) or a CNN (for example, U-Net), or a model based on another learning algorithm.
The server 1 inputs the medical image acquired from the diagnostic imaging apparatus 2 into the generative model 143 and generates a second medical image with reduced artifacts. For example, the server 1 first performs artifact detection with the detection model 141, and when an artifact is detected in one of the sequentially acquired tomographic images (frame images), it inputs that tomographic image into the generative model 143 to convert it into a tomographic image with reduced artifacts. This makes it possible to present intravascular tomographic images with reduced artifacts to the user, suitably supporting endovascular treatment.
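The per-frame detect-then-convert flow could be wired together as follows; detector and generator stand in for the detection model 141 and the generative model 143, and the function names are illustrative only.

    import torch

    def present_frame(frame, detector, generator):
        # Detect artifacts in each sequentially acquired tomographic frame;
        # only frames containing artifacts are converted before display.
        regions = detector(frame)    # artifact regions, possibly empty
        if len(regions) == 0:
            return frame             # display the original image as-is
        with torch.no_grad():
            return generator(frame)  # artifact-reduced second medical image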
FIG. 20 is a flowchart showing the procedure of the generation process of the generative model 143. The processing performed when generating the generative model 143 by machine learning will be described with reference to FIG. 20.
The control unit 11 of the server 1 acquires training data consisting of medical images containing artifacts and second medical images with fewer artifacts than those medical images (step S401). Based on the training data, the control unit 11 generates the generative model 143, which generates a second medical image with reduced artifacts when a medical image is input (step S402). The control unit 11 then ends the series of processes.
FIG. 21 is a flowchart showing the procedure of the artifact reduction process. Steps that overlap with the flowchart of FIG. 7 are denoted by the same reference signs and their description is omitted.
When determining that an artifact region has been detected (S33: YES), the control unit 11 of the server 1 inputs the medical image acquired from the diagnostic imaging apparatus 2 into the generative model 143 and generates a second medical image with reduced artifacts (step S421). The control unit 11 outputs the generated second medical image to the diagnostic imaging apparatus 2 and displays it (step S422), then moves the process to step S37.
As described above, according to the fourth embodiment, a second medical image with reduced artifacts can also be presented to the user.
The embodiments disclosed herein should be considered illustrative in all respects and not restrictive. The scope of the present invention is indicated not by the above meaning but by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.
1 Server (information processing device)
11 Control unit
12 Main storage unit
13 Communication unit
14 Auxiliary storage unit
P Program
141 Detection model
142 Estimation model
143 Generative model
2 Diagnostic imaging apparatus
21 Catheter
22 MDU
23 Image processing device
24 Display device

Claims (11)

1. A program that causes a computer to execute processes of:
acquiring a medical image generated based on a signal detected by a catheter inserted into a living lumen;
inputting the acquired medical image into a model trained to output, when a medical image is input, a detection result detecting an image region corresponding to an artifact in the medical image, and detecting the image region corresponding to the artifact; and
outputting the detection result in association with the medical image.
2. The program according to claim 1, which generates a second medical image displaying the detected image region corresponding to the artifact in a display mode different from other image regions, and outputs the generated second medical image.
3. The program according to claim 2, which detects, from the medical image, the image region corresponding to the artifact and the type of the artifact, and generates the second medical image displaying the image region corresponding to the artifact in a display mode according to the detected type.
4. The program according to any one of claims 1 to 3, which acquires a plurality of the medical images generated along a longitudinal direction of the living lumen, and inputs the plurality of medical images into the model to detect the image regions corresponding to the artifacts.
5. The program according to any one of claims 1 to 4, which, after outputting the detection result, accepts a correction input correcting the detection result, performs retraining based on the medical image targeted for detection and the corrected detection result, and updates the model.
6. The program according to any one of claims 1 to 5, which inputs the acquired medical image into a model trained to output, when a medical image in which an image defect has occurred is input, the cause of the image defect, thereby estimating the cause of the image defect, and outputs guidance information guiding countermeasures for removing the estimated cause of the image defect.
7. The program according to any one of claims 1 to 6, which inputs the acquired medical image into a model trained to output, when the medical image is input, a detection result detecting an image region corresponding to an object to be examined, thereby detecting the image region corresponding to the object, and outputs the detection results for the artifact and the object in association with the medical image.
8. An information processing method in which a computer executes processes of:
acquiring a medical image generated using a catheter inserted into a living lumen;
inputting the acquired medical image into a model trained to output, when a medical image is input, a detection result detecting an image region corresponding to an artifact in the medical image, and detecting the image region corresponding to the artifact; and
outputting the detection result in association with the medical image.
9. An information processing device comprising:
an acquisition unit that acquires a medical image generated using a catheter inserted into a living lumen;
a detection unit that inputs the acquired medical image into a model trained to output, when a medical image is input, a detection result detecting an image region corresponding to an artifact in the medical image, and detects the image region corresponding to the artifact; and
an output unit that outputs the detection result in association with the medical image.
10. A model generation method in which a computer executes processes of:
acquiring training data in which data indicating an image region corresponding to an artifact in a medical image generated using a catheter inserted into a living lumen is attached to the medical image; and
generating, based on the training data, a trained model that outputs a detection result detecting the image region corresponding to the artifact when the medical image is input.
11. A program that causes a computer to execute processes of:
acquiring a medical image generated based on a signal detected by a catheter inserted into a living lumen;
inputting the acquired medical image into a model trained to generate, when the medical image is input, a second medical image with reduced artifacts in the medical image, and generating the second medical image; and
outputting the generated second medical image.
PCT/JP2021/009234 2020-03-27 2021-03-09 Program, information processing method, information processing device, and model generation method WO2021193008A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-058992 2020-03-27
JP2020058992 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021193008A1 true WO2021193008A1 (en) 2021-09-30

Family

ID=77891433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/009234 WO2021193008A1 (en) 2020-03-27 2021-03-09 Program, information processing method, information processing device, and model generation method

Country Status (1)

Country Link
WO (1) WO2021193008A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023132332A1 (en) * 2022-01-06 2023-07-13 テルモ株式会社 Computer program, image processing method, and image processing device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160350620A1 (en) * 2015-05-27 2016-12-01 Siemens Medical Solutions Usa, Inc. Knowledge-based ultrasound image enhancement

Similar Documents

Publication Publication Date Title
CN102209488B (en) Image processing equipment and method and faultage image capture apparatus and method
JP7341874B2 (en) Image processing device, image processing method, and program
JP2015503363A (en) Visualization method of blood and blood likelihood in blood vessel image
US20230230244A1 (en) Program, model generation method, information processing device, and information processing method
EP4129197A1 (en) Computer program, information processing method, information processing device, and method for generating model
WO2021193008A1 (en) Program, information processing method, information processing device, and model generation method
JP2021041029A (en) Diagnosis support device, diagnosis support system and diagnosis support method
US20230017227A1 (en) Program, information processing method, information processing apparatus, and model generation method
US20230222655A1 (en) Program, information processing device, and information processing method
US20230237657A1 (en) Information processing device, information processing method, program, model generating method, and training data generating method
WO2023054467A1 (en) Model generation method, learning model, computer program, information processing method, and information processing device
WO2021193015A1 (en) Program, information processing method, information processing device, and model generation method
WO2022202310A1 (en) Program, image processing method, and image processing device
WO2022071280A1 (en) Program, information processing device, and information processing method
US20240013386A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2024071322A1 (en) Information processing method, learning model generation method, computer program, and information processing device
WO2024071252A1 (en) Computer program, information processing method, and information processing device
WO2024071251A1 (en) Computer program, information processing method, information processing device, and learning model
JP7233792B2 (en) Diagnostic imaging device, diagnostic imaging method, program, and method for generating training data for machine learning
WO2021193018A1 (en) Program, information processing method, information processing device, and model generation method
WO2021193020A1 (en) Program, information processing method, information processing device, and model generating method
WO2022157838A1 (en) Image processing method, program, image processing device and ophthalmic system
JP2023051177A (en) Computer program, information processing method, and information processing device
US20220028079A1 (en) Diagnosis support device, diagnosis support system, and diagnosis support method
WO2021199961A1 (en) Computer program, information processing method, and information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21775651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21775651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP