WO2022059539A1 - Computer program, information processing method, and information processing device - Google Patents

Computer program, information processing method, and information processing device Download PDF

Info

Publication number
WO2022059539A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
diagnosing
scanning probe
echo
Prior art date
Application number
PCT/JP2021/032621
Other languages
French (fr)
Japanese (ja)
Inventor
知樹 櫨田
真介 松本
吏悟 小林
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社
Priority to JP2022550482A (JPWO2022059539A1)
Publication of WO2022059539A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/13: Tomography
    • A61B 8/14: Echo-tomography

Definitions

  • the present invention relates to a computer program, an information processing method, and an information processing device.
  • Patent Document 1 discloses an ultrasonic guidance device that creates a guidance plan for guiding the operator of an ultrasonic diagnostic device and guides the operator so that an echo image of a subject including a specific anatomical view is captured.
  • However, the ultrasonic guidance device of Patent Document 1 only guides the operation of the ultrasonic diagnostic device based on the guidance plan; even if the operation guide is followed, it is left to the operator to judge whether an image suitable for diagnosing a predetermined disease has actually been obtained.
  • An object of the present invention is to provide a computer program, an information processing method, and an information processing apparatus capable of selecting and collecting images suitable for diagnosing a predetermined disease from a series of images obtained by a scanning probe that scans an organ of a subject, and of outputting the collected amount of such images.
  • The computer program causes a computer to execute processing of: generating a series of images based on a signal obtained from a scanning probe that scans an organ of a subject; determining whether each of the generated images is an image suitable for diagnosing a predetermined disease; storing the images suitable for diagnosing the predetermined disease; and outputting the collected amount of the images suitable for diagnosing the predetermined disease.
  • The information processing method includes: generating a series of images based on a signal obtained from a scanning probe that scans an organ of a subject; determining whether each of the generated images is an image suitable for diagnosing a predetermined disease; storing the images suitable for the diagnosis of the predetermined disease; and outputting the collected amount of the images suitable for the diagnosis of the predetermined disease.
  • The information processing apparatus includes: a generation unit that generates a series of images based on a signal obtained from a scanning probe that scans an organ of a subject; a determination unit that determines whether the images generated by the generation unit are suitable for diagnosing a predetermined disease; a storage unit that stores the images determined by the determination unit to be suitable for diagnosing the predetermined disease; and an output unit that outputs the collected amount of the images suitable for diagnosing the predetermined disease.
  • It is thus possible to provide a computer program, an information processing method, and an information processing apparatus that can select and collect images suitable for diagnosing a predetermined disease from a series of images obtained by a scanning probe that scans an organ of a subject, and that can output the collected amount of those images.
  • FIG. 1 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the first embodiment.
  • FIG. 2 is a block diagram showing a configuration example of the information processing apparatus according to the first embodiment.
  • FIG. 3 is a functional block diagram showing a configuration example of the information processing apparatus according to the first embodiment.
  • FIG. 4 is a block diagram showing a configuration example of the appropriateness learning model according to the first embodiment.
  • FIG. 5 is a block diagram showing a configuration example of the individual learning model.
  • FIG. 6 is a schematic diagram showing an example of an echo image of a normal lung suitable for lung diagnosis.
  • FIG. 7 is a schematic diagram showing examples of an echo image of an abnormal lung suitable for lung diagnosis and an echo image unsuitable for lung diagnosis.
  • FIG. 1 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the first embodiment.
  • the ultrasonic diagnostic apparatus according to the first embodiment includes an information processing apparatus 1 and an ultrasonic probe 2.
  • the information processing device 1 and the ultrasonic probe 2 are wirelessly connected and can transmit and receive various signals.
  • the ultrasonic probe 2 may be configured to be connected to the information processing apparatus 1 with a wired cable.
  • the ultrasonic probe 2 is a device that scans the organ of the subject with ultrasonic waves, and the ultrasonic scanning is controlled by the information processing device 1.
  • the ultrasonic probe 2 includes, for example, a plurality of piezoelectric elements, an acoustic matching layer, an acoustic lens, and the like.
  • the piezoelectric element generates ultrasonic waves according to a drive signal output from the information processing device 1.
  • the ultrasonic waves generated by the piezoelectric element are transmitted from the ultrasonic probe 2 to the living body of the subject via the acoustic matching layer and the acoustic lens.
  • the acoustic matching layer is a member for matching the acoustic impedance between the piezoelectric element and the subject.
  • the acoustic lens is an element for converging ultrasonic waves spreading from a piezoelectric element and transmitting them to a subject.
  • the ultrasonic waves transmitted from the ultrasonic probe 2 to the subject are reflected by the discontinuity surface of the acoustic impedance in the organ of the subject, and are received by a plurality of piezoelectric elements.
  • the amplitude of the reflected wave depends on the difference in acoustic impedance at the reflecting surface.
  • the arrival time of the reflected wave depends on the depth of the reflecting surface.
  • the piezoelectric element converts the vibration pressure of the reflected ultrasonic wave into an electric signal.
  • the electric signal is referred to as an echo signal.
  • the ultrasonic probe 2 outputs an echo signal to the information processing device 1.
  • FIG. 2 is a block diagram showing a configuration example of the information processing apparatus 1 according to the first embodiment.
  • the information processing device 1 is a computer including a control unit 11, a memory 12, a storage unit 13, an operation unit 14, a display unit 15, and a communication unit 16.
  • the information processing device 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
  • The control unit 11 is an arithmetic processing unit including one or more processors such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), a GPGPU (General-Purpose computing on Graphics Processing Units), or a TPU (Tensor Processing Unit).
  • By reading out and executing the computer program 131 stored in the storage unit 13, the control unit 11 controls ultrasonic scanning by the ultrasonic probe 2, sequentially generates a series of echo images in real time based on the signals obtained from the ultrasonic probe 2, and determines, in parallel with the image generation processing, whether or not the generated echo images are images suitable for diagnosing a predetermined lung disease (predetermined disease).
  • The control unit 11 also displays in real time the amount of echo images collected as suitable for diagnosing the lung disease together with the target collection amount, calculates and displays an index for diagnosing the predetermined lung disease based on the collected echo images once echo images exceeding the target collection amount have been collected, and executes various other processes.
  • The communication unit 16 includes a processing circuit, a communication circuit, and the like for performing wireless communication processing, and transmits and receives various signals to and from the ultrasonic probe 2. Specifically, under the control of the control unit 11, the communication unit 16 causes the ultrasonic probe 2 to generate ultrasonic waves by transmitting a drive signal to it. The communication unit 16 then receives the echo signal output from the ultrasonic probe 2.
  • The memory 12 is, for example, a volatile memory such as a DRAM (Dynamic RAM) or an SRAM (Static RAM), and temporarily stores the computer program 131 read from the storage unit 13 when the control unit 11 executes arithmetic processing, as well as various data generated by that arithmetic processing.
  • the storage unit 13 is a storage device such as a hard disk, EEPROM (Electrically Erasable Programmable ROM), and a flash memory.
  • The storage unit 13 stores the computer program 131 and an appropriateness learning model 17 necessary for collecting echo images and diagnosing a predetermined lung disease.
  • the computer program 131 is a program for causing the computer to function as the information processing device 1 according to the first embodiment.
  • the computer program 131 causes the computer to execute the information processing method according to the first embodiment, such as the collection of echo images and the diagnostic processing of a predetermined lung disease.
  • the computer program 131 may be recorded on the recording medium 10 so that it can be read by a computer.
  • the storage unit 13 stores the computer program 131 read from the recording medium 10 by a reading device (not shown).
  • The recording medium 10 is, for example, a semiconductor memory such as a flash memory, an optical disk, a magnetic disk, or the like.
  • the computer program 131 according to the present embodiment may be downloaded from an external server (not shown) connected to the communication network and stored in the storage unit 13.
  • the operation unit 14 is an input device that accepts the operation of the operator using the ultrasonic diagnostic device.
  • The operator is, for example, a medical professional such as a doctor, a laboratory technician, or a nurse.
  • The input device is, for example, a pointing device such as a touch panel, a keyboard, or the like.
  • the display unit 15 is an output device that outputs information such as an echo image, an echo image collection degree, and a pulmonary congestion degree.
  • the output device is, for example, a liquid crystal display or an EL display.
  • FIG. 3 is a functional block diagram showing a configuration example of the information processing apparatus 1 according to the first embodiment.
  • By reading and executing the computer program 131 stored in the storage unit 13, the control unit 11 of the information processing apparatus 1 functions as a probe control unit 11a, an image generation unit 11b, a lung diagnosis appropriateness determination unit 11c, a lung diagnosis image storage unit 11d, a lung diagnosis image collection degree display processing unit 11e, a pulmonary congestion degree calculation unit 11f, and a pulmonary congestion degree display processing unit 11g.
  • The probe control unit 11a controls ultrasonic scanning by the ultrasonic probe 2. Specifically, it causes the ultrasonic probe 2 to generate ultrasonic waves by outputting a drive signal to the ultrasonic probe 2, and receives the echo signal output from the ultrasonic probe 2.
  • the image generation unit 11b executes a process of generating an echo image based on the echo signal received by the communication unit 16.
  • the image generation unit 11b generates a series of echo images in real time each time the communication unit 16 receives an echo signal.
  • the echo image is, for example, a B-mode image in which the intensity of the reflected wave is represented by brightness, and a two-dimensional tomographic image of an organ is reproduced.
  • the type of echo image is not particularly limited.
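  • As a minimal illustrative sketch (not part of the patent text), conventional B-mode processing of this kind can be expressed roughly as follows, assuming the echo signals are available as a NumPy array of RF lines; the dynamic range and normalisation are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf_lines: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    """Convert raw RF echo lines (n_lines x n_samples) into an 8-bit B-mode image.

    Generic sketch of envelope detection followed by log compression; the patent
    only states that reflected-wave intensity is mapped to brightness.
    """
    # Envelope detection: magnitude of the analytic signal of each RF line.
    envelope = np.abs(hilbert(rf_lines, axis=1))

    # Normalise and log-compress to the chosen dynamic range.
    envelope /= envelope.max() + 1e-12
    compressed = np.clip(20.0 * np.log10(envelope + 1e-12), -dynamic_range_db, 0.0)

    # Map [-dynamic_range_db, 0] dB to [0, 255] brightness values.
    return ((compressed + dynamic_range_db) / dynamic_range_db * 255.0).astype(np.uint8)
```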
  • The lung diagnosis appropriateness determination unit 11c executes a process of determining whether or not a generated echo image is an image suitable for diagnosing a predetermined lung disease, for example pulmonary congestion.
  • In the following, the predetermined lung disease is described as pulmonary congestion.
  • the lung diagnosis image storage unit 11d executes a process of storing an echo image determined to be an image suitable for diagnosing pulmonary congestion.
  • The lung diagnosis image collection degree display processing unit 11e executes a process of displaying, on the display unit 15, the degree of collection of echo images suitable for diagnosing pulmonary congestion and the target collection amount required for calculating the pulmonary congestion degree.
  • the lung diagnosis image collection degree display processing unit 11e calculates the collection amount of echo images in real time and displays it on the display unit 15.
  • The pulmonary congestion degree calculation unit 11f executes a process of calculating the pulmonary congestion degree based on the echo images.
  • the pulmonary congestion degree display processing unit 11g executes a process of displaying the pulmonary congestion degree calculated by the pulmonary congestion degree calculation unit 11f on the display unit 15.
  • FIG. 4 is a block diagram showing a configuration example of the appropriateness learning model 17 according to the first embodiment.
  • the appropriateness learning model 17 includes a plurality of individual learning models 171 and an integrated learning model 172.
  • a plurality of echo images are input to each of the plurality of individual learning models 171.
  • Each individual learning model 171 is a learning model that extracts the feature amount of the input echo image and outputs the extracted feature amount to the integrated learning model 172.
  • To the integrated learning model 172, the feature amounts of the echo images output from the plurality of individual learning models 171 are input.
  • The integrated learning model 172 is a learning model that, when the feature amounts of a plurality of echo images are input, outputs an image appropriateness indicating the degree to which the plurality of echo images are suitable for diagnosing pulmonary congestion.
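  • The two-stage structure described above (several individual learning models 171 feeding an integrated learning model 172) could be sketched as follows; this is an illustrative assumption written in PyTorch, and the encoder architecture, layer sizes, and sigmoid output are not specified by the patent.

```python
import torch
from torch import nn

def make_encoder(feature_dim: int) -> nn.Module:
    """Illustrative stand-in for an individual learning model 171 (feature extractor)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, feature_dim),
    )

class AppropriatenessModel(nn.Module):
    """Sketch of appropriateness learning model 17: individual models 171 extract
    per-image features, and integrated model 172 maps the pooled features to an
    image appropriateness score in [0, 1]."""

    def __init__(self, n_images: int, feature_dim: int = 64):
        super().__init__()
        self.individual_models = nn.ModuleList(
            [make_encoder(feature_dim) for _ in range(n_images)]
        )
        self.integrated_model = nn.Sequential(
            nn.Linear(n_images * feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, n_images, 1, height, width)
        features = [model(images[:, i]) for i, model in enumerate(self.individual_models)]
        return self.integrated_model(torch.cat(features, dim=1))
```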
  • FIG. 5 is a block diagram showing a configuration example of the individual learning model 171.
  • The individual learning model 171 is a trained model that has been trained, by machine learning using teacher data, unsupervised learning using an autoencoder, or the like, to extract and output feature amounts related to the diagnosis of pulmonary congestion from echo images.
  • The individual learning model 171 performs a predetermined operation on an input value and outputs the operation result, and data such as the coefficients and thresholds of the functions defining this operation are stored in the storage unit 13 as the individual learning model 171.
  • the control unit 11 can execute an arithmetic process for extracting the features of the echo image.
  • the learning process of the individual learning model 171 is performed by a learning computer.
  • The data relating to the trained individual learning model 171 may be provided by distribution via a communication network or recorded on the recording medium 10, similarly to the computer program 131.
  • the individual learning model 171 is a neural network having, for example, an input layer 171a into which an echo image is input and an intermediate layer 171b for extracting a feature amount of the echo image.
  • the individual learning model 171 is configured using, for example, an autoencoder. As shown in FIG. 5, the autoencoder has an input layer 171a into which an echo image is input, a first intermediate layer 171b in which the input image is dimensionally compressed to extract a feature amount, and an echo image from the extracted feature amount. A second intermediate layer 171c for restoring the image and an output layer 171d for outputting the restored echo image are provided.
  • the first intermediate layer 171b and the second intermediate layer 171c are also referred to as a convolution layer and a deconvolution layer.
  • The individual learning model 171 is composed of the encoder part of the autoencoder, that is, the input layer 171a into which an echo image is input and the first intermediate layer 171b in which the input image is dimensionally compressed to extract a feature amount.
  • In FIG. 5, the second intermediate layer 171c and the output layer 171d are drawn with broken lines to indicate that they are not essential components of the individual learning model 171.
  • The configuration of the individual learning model 171 is not particularly limited, and the autoencoder as a whole may also be used as the individual learning model 171.
  • In the following, the case where the individual learning model 171 is configured using the input layer 171a and the first intermediate layer 171b of the autoencoder will be described.
  • the input layer 171a of the neural network has a plurality of neurons to which the pixel values of each pixel of the echo image are input, and each input data is passed to the intermediate layer 171b.
  • the first intermediate layer 171b has a plurality of layers composed of a plurality of neurons.
  • the intermediate layer 171b is a layer for dimensionally compressing image data.
  • the intermediate layer 171b performs dimensional compression of the echo image by performing a convolution process.
  • each layer extracts the feature amount of the echo image of the normal lung and the feature amount of the echo image of the abnormal lung from the input data, and transfers them to the layers from the front stage to the rear stage in order.
  • the final layer of the first intermediate layer 171b outputs the feature amount extracted from the echo image.
  • Next, the learning method of the individual learning model 171 will be described. First, an untrained autoencoder is prepared.
  • the autoencoder includes an input layer 171a, a first intermediate layer 171b, a second intermediate layer 171c, and an output layer 171d.
  • the computer collects echo images of multiple normal lungs and echo images of multiple abnormal lungs. That is, a plurality of echo images suitable for diagnosing a predetermined lung disease are collected.
  • Using the collected echo images, the computer trains the untrained autoencoder by machine learning or deep learning so that the image output from the output layer 171d becomes the same as the echo image input to the input layer 171a.
  • Specifically, the computer inputs the plurality of echo images serving as learning data into the untrained autoencoder, performs the arithmetic processing of the first intermediate layer 171b and the second intermediate layer 171c, and acquires the image output from the output layer 171d. The computer then compares the output image with the input echo image and optimizes the parameters used for the arithmetic processing of the intermediate layers 171b and 171c so that the output image approaches the input echo image.
  • the parameter is, for example, a weight (coupling coefficient) between neurons.
  • the method of optimizing the parameters is not particularly limited, but for example, the computer optimizes various parameters by using the steepest descent method or the like. Then, the individual learning model 171 is generated by extracting the input layer 171a and the first intermediate layer 171b from the trained autoencoder.
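  • A minimal sketch of this unsupervised training step, under the assumption of a small convolutional autoencoder in PyTorch with SGD as the gradient-descent optimiser, could look as follows; only the encoder is kept as the individual learning model 171.

```python
import torch
from torch import nn

class EchoAutoencoder(nn.Module):
    """Illustrative autoencoder: the encoder corresponds to input layer 171a plus
    first intermediate layer 171b, the decoder to second intermediate layer 171c
    plus output layer 171d. Channel counts and input size are assumptions."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # dimensional compression (convolution)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # restoration (deconvolution)
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_individual_model(loader, epochs: int = 10) -> nn.Module:
    """Train so that the output image approaches the input echo image, then keep
    only the encoder as the individual learning model 171."""
    model = EchoAutoencoder()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()  # difference between input and reconstructed image
    for _ in range(epochs):
        for images in loader:          # images: (batch, 1, H, W), values in [0, 1]
            loss = loss_fn(model(images), images)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model.encoder
```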
  • an individual learning model 171 may be generated by supervised learning using CNN (Convolutional Neural Network). Further, although the example in which the individual learning model 171 is a neural network has been described, it may be a model having a configuration such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
  • FIG. 6 is a schematic diagram showing an example of an echo image of a normal lung suitable for lung diagnosis.
  • Appropriate echo images of normal lungs include a clear A-line 31.
  • the A-line 31 is an image due to multiple reflections occurring between the pleura and the ultrasonic probe 2.
  • a suitable echo image of a normal lung in a sagittal section includes an image called a bat sign 32.
  • the bat sign 32 is a curved convex image obtained by reflecting ultrasonic waves by the ribs.
  • FIGS. 7A to 7C are schematic views showing examples of echo images of an abnormal lung suitable for lung diagnosis and an echo image unsuitable for lung diagnosis.
  • FIG. 7A is a schematic diagram of an echo image of an abnormal lung in which a B line 33 is observed.
  • FIG. 7B is a schematic diagram of an echo image of an abnormal lung in which a ground glass-like shadow 34 is observed.
  • FIG. 7C is an echo image without a real image, which is an echo image unsuitable for diagnosing a predetermined lung disease.
  • the B line 33 is an image caused by thickening of the interlobular septum and accumulation of fluid in the alveoli.
  • the ground glass-like shadow 34 is an image that occurs in an abnormal lung such as pneumonia.
  • FIG. 8 is a block diagram showing a configuration example of the integrated learning model 172.
  • The integrated learning model 172 is a trained model that has been trained, by machine learning using teacher data, unsupervised learning such as clustering, or the like, to output from the feature amounts of a plurality of echo images an image appropriateness indicating the degree to which the plurality of echo images are suitable for diagnosing the predetermined lung disease.
  • The integrated learning model 172 performs a predetermined operation on an input value and outputs the operation result, and data such as the coefficients and thresholds of the functions defining this operation are stored in the storage unit 13 as the integrated learning model 172.
  • the control unit 11 can execute an arithmetic process for determining the appropriateness of the echo image from the feature amount of the echo image.
  • the learning process of the integrated learning model 172 is performed by a learning computer.
  • The data relating to the trained integrated learning model 172 may be provided by distribution via a communication network or recorded on the recording medium 10, similarly to the computer program 131.
  • The integrated learning model 172 is, for example, a neural network having an input layer 172a into which the feature amounts of a plurality of echo images are input, an intermediate layer 172b that extracts feature amounts related to the appropriateness of the echo images, and an output layer 172c that outputs the image appropriateness.
  • the input layer 172a of the neural network has a plurality of neurons to which the features of a plurality of echo images are input, and each input data is passed to the intermediate layer 172b.
  • The intermediate layer 172b has a plurality of layers composed of a plurality of neurons. Each layer extracts feature amounts related to the appropriateness of the echo images from the input data and passes them in order from the front layer to the rear layer, and the output of the last layer is passed to the output layer 172c.
  • the output layer 172c includes a neuron that outputs a calculation result, and the neuron outputs an image suitability indicating the degree of whether or not a plurality of echo images are images suitable for diagnosing a predetermined lung disease.
  • Although the integrated learning model 172 has been described as a neural network, it may instead be a model having a configuration such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
  • the learning method of the integrated learning model 172 will be described.
  • the computer collects the features of a plurality of echo images that are the source of the teacher data.
  • the computer generates learning data by adding teacher data indicating whether or not the image is suitable for diagnosing a predetermined lung disease to a plurality of feature quantities.
  • the computer generates the integrated learning model 172 by machine learning or deep learning the pre-learning neural network model using the generated learning data.
  • Specifically, the computer inputs the feature amounts of a plurality of echo images included in the learning data into the untrained neural network model, performs the arithmetic processing of the intermediate layer 172b, and acquires the image appropriateness output from the output layer 172c.
  • The computer then compares the image appropriateness output from the output layer 172c with the image appropriateness indicated by the teacher data, and optimizes the parameters used for the arithmetic processing of the intermediate layer 172b so that the output appropriateness approaches the correct value.
  • the parameter is, for example, a weight (coupling coefficient) between neurons.
  • the method of optimizing the parameters is not particularly limited, but for example, the computer optimizes various parameters by using the steepest descent method or the like.
  • The information processing apparatus 1 obtains the trained integrated learning model 172 by repeating the above processing based on the teacher data of a large number of patients.
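  • A hedged sketch of this supervised step, assuming binary teacher labels (suitable / unsuitable) and a small fully connected network in PyTorch, might look as follows; the loss, optimiser, and layer sizes are illustrative choices, not taken from the patent.

```python
import torch
from torch import nn

def train_integrated_model(feature_loader, n_images: int, feature_dim: int = 64,
                           epochs: int = 10) -> nn.Module:
    """Each sample is the concatenated feature vector of several echo images plus a
    teacher label (1 = suitable for diagnosing the predetermined lung disease)."""
    model = nn.Sequential(
        nn.Linear(n_images * feature_dim, 128), nn.ReLU(),
        nn.Linear(128, 1), nn.Sigmoid(),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.BCELoss()  # compare predicted appropriateness with the teacher data
    for _ in range(epochs):
        for features, labels in feature_loader:   # features: (batch, n_images * feature_dim)
            pred = model(features)
            loss = loss_fn(pred, labels.float().unsqueeze(1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```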
  • FIG. 9 is a flowchart showing an information processing procedure according to the first embodiment
  • FIG. 10 is a schematic diagram showing an example of an ultrasonic diagnostic monitor screen.
  • the control unit 11 of the information processing apparatus 1 displays a monitor screen as shown in FIG. 10 on the display unit 15 (step S111).
  • the monitor screen includes an echo image display unit 151, a collection degree gauge 152, a lung diagnosis image display unit 153, a congestion degree display unit 154, a start button 155, a stop button 156, and the like.
  • the echo image display unit 151 displays an echo image generated based on the echo signal in real time.
  • the collection degree gauge 152 displays the collection degree of the echo image suitable for the diagnosis of a predetermined lung disease among the generated echo images.
  • The collection degree gauge 152 displays the target collection amount of echo images necessary for diagnosing a predetermined lung disease as a predetermined number of meter blocks 152a. Further, each time a quantity of echo images corresponding to one meter block 152a is collected, the collection degree gauge 152 displays the collected amount by changing the color of the meter blocks 152a in order from the lowest block. When the colors of all the meter blocks 152a have been changed, the collected amount has reached the target collection amount.
  • the lung diagnosis image display unit 153 displays a representative echo image suitable for diagnosing a predetermined lung disease.
  • the congestion degree display unit 154 displays the diagnosis result of a predetermined lung disease.
  • The start button 155 is an operation button for starting the collection of echo images and the diagnostic processing of a predetermined lung disease, and the stop button 156 is an operation button for stopping the processing.
  • The control unit 11 receives the echo signal output from the ultrasonic probe 2 and generates an echo image based on the received echo signal (step S112). Then, the control unit 11 displays the generated echo image on the echo image display unit 151 (step S113).
  • The control unit 11 calculates the image appropriateness by inputting the plurality of generated echo images into the individual learning model 171 (step S114), and determines, based on the calculated image appropriateness, whether or not the plurality of echo images are images suitable for diagnosing a predetermined lung disease (step S115). If it is determined that the images are unsuitable (step S115: NO), the control unit 11 returns the process to step S112.
  • control unit 11 stores the echo image determined to be appropriate in the storage unit 13 (step S116). That is, the echo image is collected.
  • control unit 11 calculates the collection amount of the echo image suitable for the diagnosis of the predetermined lung disease (step S117), and displays the calculated collection amount on the collection degree gauge 152 (step S118).
  • the control unit 11 that displays the collected amount functions as an output unit that outputs the collected amount of the image suitable for diagnosing a predetermined disease. Further, the control unit 11 displays a representative echo image suitable for lung diagnosis as a sample on the lung diagnosis image display unit 153 (step S119).
  • control unit 11 determines whether or not the collection amount of the echo image has reached a predetermined target collection amount (step S120). If it is determined that the target collection amount has not been reached (step S120: NO), the control unit 11 returns the process to step S112.
  • When it is determined that the target collection amount has been reached (step S120: YES), the control unit 11 calculates the pulmonary congestion degree (step S121), displays the calculated pulmonary congestion degree on the congestion degree display unit 154 (step S122), and ends the process.
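  • The flow of steps S112 to S122 can be summarised by the following sketch; the probe, model, UI, and helper functions are hypothetical interfaces passed in as parameters, and the appropriateness threshold is an assumption.

```python
def collection_loop(probe, generate_echo_image, appropriateness_model,
                    compute_congestion_degree, ui, target_amount: int,
                    threshold: float = 0.5) -> None:
    """Sketch of the real-time collection loop of FIG. 9 (steps S112-S122)."""
    collected = []
    while True:
        echo_signal = probe.receive_echo_signal()                  # S112
        image = generate_echo_image(echo_signal)                   # S112
        ui.show_echo_image(image)                                  # S113

        if appropriateness_model(image) < threshold:               # S114-S115: unsuitable
            continue

        collected.append(image)                                    # S116: collect the image
        ui.update_collection_gauge(len(collected), target_amount)  # S117-S118
        ui.show_representative_image(collected[-1])                # S119

        if len(collected) >= target_amount:                        # S120
            break

    congestion_degree = compute_congestion_degree(collected)       # S121
    ui.show_congestion_degree(congestion_degree)                   # S122
```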
  • FIG. 11 is an explanatory diagram showing a detection method of the B line 33.
  • the left figure of FIG. 11 is an echo image generated based on the echo signal.
  • In this polar coordinate image, the horizontal axis indicates the depth direction, and the vertical axis indicates the angle of the direction in which the ultrasonic waves are transmitted.
  • the brightness of each pixel corresponds to the amplitude of the echo signal.
  • the control unit 11 integrates the luminance value of each pixel in the depth direction of the echo image.
  • the figure in the center of FIG. 11 is a graph conceptually showing the integration result.
  • the horizontal axis shows the above angle, and the vertical axis shows the integrated value.
  • the control unit 11 differentiates the integrated value in the angular direction.
  • the right figure of FIG. 11 is a graph conceptually showing the differential result.
  • the horizontal axis shows the above angle, and the vertical axis shows the differential value.
  • the control unit 11 determines that the portion where the differential value is equal to or greater than the predetermined value is the B line 33.
  • The control unit 11 counts the number of echo images in which a B line 33 is present among the plurality of collected echo images, and determines that the patient has pulmonary congestion when the B line 33 is present in a predetermined ratio of the echo images or more, for example 37.5% or more.
  • When the control unit 11 determines that the patient has pulmonary congestion, it displays, for example, the ratio of echo images in which the B line 33 is present on the congestion degree display unit 154. When it determines that the patient does not have pulmonary congestion, the control unit 11 displays on the congestion degree display unit 154 that there is no finding.
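  • As a minimal sketch of the detection of FIG. 11 and the 37.5% criterion above, assuming each collected echo image is a 2-D NumPy array indexed as (angle, depth) with brightness values, the processing could be written as follows; the differential threshold is left as a parameter, since its value is not specified here.

```python
import numpy as np

def detect_b_lines(polar_image: np.ndarray, diff_threshold: float) -> np.ndarray:
    """Return the angle indices judged to contain a B line 33."""
    # Integrate brightness along the depth direction for each transmit angle.
    integrated = polar_image.sum(axis=1)
    # Differentiate the integrated profile in the angular direction.
    differentiated = np.diff(integrated)
    # Angles whose differential value reaches the threshold are taken as B lines.
    return np.where(differentiated >= diff_threshold)[0]

def has_pulmonary_congestion(echo_images, diff_threshold: float,
                             ratio_threshold: float = 0.375) -> bool:
    """Apply the predetermined-ratio criterion (e.g. 37.5%) to the collected images."""
    with_b_line = sum(
        1 for image in echo_images if detect_b_lines(image, diff_threshold).size > 0
    )
    return with_b_line / max(len(echo_images), 1) >= ratio_threshold
```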
  • In this way, images suitable for diagnosing a predetermined lung disease can be selected and collected from a series of images obtained by a scanning probe that scans an organ of a subject, and the collected amount of the images can be output.
  • the lung congestion level can be calculated and displayed based on the collected images.
  • The present invention can also be applied to the case of scanning other organs. Further, although the first embodiment describes an example in which an organ of the subject is scanned using ultrasonic waves, the present invention can also be applied when a scanning probe that optically acquires a tomographic image of the organ, for example a probe for optical coherence tomography diagnosis, is used.
  • In the first embodiment, an example has been described in which the display unit 15 of the information processing apparatus 1 constituting the ultrasonic diagnostic apparatus displays the collection degree and the target collection amount of images suitable for diagnosing a predetermined lung disease, the scanned images, and the index of the predetermined lung disease; however, the information processing apparatus 1 may instead be configured to output these various kinds of information to an external monitor apparatus for display.
  • In the first embodiment, an example of determining images suitable for diagnosing a predetermined lung disease using a plurality of individual learning models 171 and an integrated learning model 172 has been described; however, the apparatus may be configured to judge the appropriateness of each of the plurality of images individually using a single learning model.
  • the ultrasonic diagnostic apparatus according to the second embodiment is different from the first embodiment in that a learning model is used and a predetermined lung disease is diagnosed based on the feature amount of the echo image. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, the same reference numerals are given to the same parts, and detailed description thereof will be omitted.
  • FIG. 12 is a block diagram showing a configuration example of the index learning model 218 according to the second embodiment.
  • the storage unit 13 of the information processing apparatus 1 according to the second embodiment stores the index learning model 218.
  • The index learning model 218 is a trained model that has been trained, by machine learning using teacher data, unsupervised learning such as clustering, or the like, to output the pulmonary congestion degree from the feature amounts of an echo image.
  • The feature amounts of an echo image are, for example, the presence or absence of the B line 33, the number of B lines 33, the presence or absence of the ground glass-like shadow 34, the contrast of the bat sign 32, and the like.
  • The index learning model 218 performs a predetermined operation on an input value and outputs the operation result, and data such as the coefficients and thresholds of the functions defining this operation are stored in the storage unit 13 as the index learning model 218.
  • the control unit 11 can execute an arithmetic process for calculating the degree of pulmonary congestion from the feature amount of the echo image.
  • the learning process of the index learning model 218 is performed by a learning computer.
  • The data relating to the trained index learning model 218 may be provided by distribution via a communication network or recorded on the recording medium 10, similarly to the computer program 131.
  • The index learning model 218 is, for example, a neural network having an input layer 218a into which the feature amounts of an echo image are input, an intermediate layer 218b that extracts feature amounts related to pulmonary congestion, and an output layer 218c that outputs the pulmonary congestion degree.
  • the input layer 218a of the neural network has a plurality of neurons to which the feature amount of the echo image is input, and each input data is passed to the intermediate layer 218b.
  • The intermediate layer 218b has a plurality of layers composed of a plurality of neurons. Each layer extracts feature amounts related to pulmonary congestion from the input data and passes them in order from the front layer to the rear layer, and the output of the last layer is passed to the output layer 218c.
  • the output layer 218c includes a neuron that outputs the degree of pulmonary congestion, and the neuron outputs the degree of pulmonary congestion.
  • Although the index learning model 218 has been described as a neural network, it may instead be a model having a configuration such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
  • the learning method of the index learning model 218 will be described.
  • the computer collects the features of a plurality of echo images that are the source of the teacher data. Then, the computer generates learning data by adding teacher data indicating the degree of pulmonary congestion to the feature amount of the echo image.
  • The computer generates the index learning model 218 by machine learning or deep learning of the untrained neural network model using the generated learning data. Specifically, the computer inputs the feature amounts of the echo images included in the learning data into the untrained neural network model, performs the arithmetic processing of the intermediate layer 218b, and acquires the pulmonary congestion degree output from the output layer 218c.
  • The computer then compares the pulmonary congestion degree output from the output layer 218c with the pulmonary congestion degree indicated by the teacher data, and optimizes the parameters used for the arithmetic processing of the intermediate layer 218b so that the output approaches the correct value.
  • the parameter is, for example, a weight (coupling coefficient) between neurons.
  • the method of optimizing the parameters is not particularly limited, but for example, the computer optimizes various parameters by using the steepest descent method or the like.
  • The information processing apparatus 1 obtains the trained index learning model 218 by repeating the above processing based on the teacher data of a large number of patients.
  • the degree of pulmonary congestion can be calculated by inputting the feature amount of the echo image determined to be suitable for the diagnosis of lung disease into the index learning model 218 configured in this way.
  • the feature amount of the echo image can be calculated by using, for example, a learning model (not shown).
  • the learning model is, for example, a neural network having an input layer into which an echo image is input, an intermediate layer for extracting features of the echo image, and an output layer.
  • the learning model is a CNN (Convolutional Neural Network), and includes a plurality of convolutional layers, a pooling layer, a fully connected layer, and the like.
  • The feature amounts are, for example, the presence or absence of a real image, the presence or absence of the A line 31, the presence or absence of the B line 33, the number of B lines 33, the presence or absence of the ground glass-like shadow 34, the presence or absence of the bat sign 32, and the like.
  • the contents of each neuron and output data described above are examples, and are not particularly limited.
  • the learning method of the learning model will be explained. First, the computer collects multiple echo images that are the source of the teacher data. Then, the computer generates learning data by adding teacher data indicating feature quantities such as the presence / absence of a real image and the presence / absence of a B line 33 to the plurality of echo images.
  • The computer generates this learning model by machine learning or deep learning of the untrained neural network model using the generated learning data. Specifically, the computer inputs the echo images included in the learning data into the untrained neural network model, performs the arithmetic processing of the intermediate layer, and acquires the feature amounts output from the output layer. Then, the computer compares the feature amounts output from the output layer with the feature amounts indicated by the teacher data, and optimizes the parameters used for the arithmetic processing of the intermediate layer so that the output data approaches the correct value.
  • The feature amounts of the echo image obtained in this way include the presence or absence of the B line 33, the number of B lines 33, the presence or absence of the ground glass-like shadow 34, the contrast of the bat sign 32, and the like.
  • The pulmonary congestion degree can then be calculated by inputting these feature amounts into the index learning model 218.
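  • A small regression sketch of the index learning model 218, assuming the feature vector consists of the four items listed above and that the layer sizes are free choices, is shown below; training would pair such vectors with teacher data giving the congestion degree, analogously to the first embodiment.

```python
import torch
from torch import nn

class IndexModel(nn.Module):
    """Maps a feature vector of an echo image, assumed here to be
    [B-line present, number of B lines, ground-glass shadow present, bat-sign contrast],
    to a pulmonary congestion degree."""

    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)
```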
  • The ultrasonic diagnostic apparatus according to the third embodiment differs from the first embodiment in that the echo image itself is input to the learning model to diagnose a predetermined lung disease. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the second embodiment, the same reference numerals are given to the same parts, and detailed description thereof is omitted.
  • FIG. 13 is a block diagram showing a configuration example of the index learning model 318 according to the third embodiment.
  • the storage unit 13 of the information processing apparatus 1 according to the third embodiment stores the index learning model 318.
  • The index learning model 318 is, for example, a neural network having an input layer 318a into which an echo image is input, an intermediate layer 318b that extracts feature amounts of the echo image, and an output layer 318c that outputs the pulmonary congestion degree.
  • the input layer 318a of the neural network has a plurality of neurons to which the pixel value of each pixel of the echo image is input, and each input data is passed to the intermediate layer 318b.
  • the intermediate layer 318b has a plurality of layers composed of a plurality of neurons.
  • The index learning model 318 of the third embodiment is a CNN and includes a plurality of convolutional layers, pooling layers, a fully connected layer, and the like. Each layer extracts feature amounts related to pulmonary congestion from the input data and passes them in order from the front layer to the rear layer, and the output of the last layer is passed to the output layer 318c.
  • the output layer 318c includes a neuron that outputs the degree of pulmonary congestion, and the neuron outputs the degree of pulmonary congestion.
  • the method of generating the index learning model 318 is the same as that of the second embodiment.
  • the degree of pulmonary congestion can be calculated by inputting an echo image determined to be suitable for diagnosing a lung disease into the index learning model 318 configured in this way.
  • the degree of pulmonary congestion can be calculated by inputting an echo image into the index learning model 318.
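  • A compact CNN sketch of the index learning model 318, with convolutional, pooling, and fully connected layers as described; the input size (1 x 128 x 128) and all channel counts are assumptions.

```python
import torch
from torch import nn

class IndexCnn(nn.Module):
    """Maps an echo image directly to a pulmonary congestion degree."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),   # assumes a 128 x 128 input
            nn.Linear(64, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))
```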
  • The ultrasonic diagnostic apparatus according to the fourth embodiment differs from the first embodiment in that a predetermined number or more of echo images are generated at each of a plurality of predetermined scanning sites to diagnose a lung disease. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, the same reference numerals are given to the same parts, and detailed description thereof is omitted.
  • FIG. 14 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the fourth embodiment.
  • the ultrasonic probe 2 according to the fourth embodiment includes an acceleration sensor 421 (positioning sensor) and outputs an acceleration signal to the information processing device 1.
  • The information processing apparatus 1 receives the acceleration signal output from the ultrasonic probe 2 and estimates the position of the ultrasonic probe 2 based on the received acceleration signal. Specifically, the information processing apparatus 1 estimates the position of the ultrasonic probe 2 with respect to the subject by calculating, based on the acceleration signal, the position of the ultrasonic probe 2 relative to a reference position, namely the position of the ultrasonic probe 2 with respect to the subject when scanning is started.
  • FIG. 15 is a flowchart showing an information processing procedure according to the fourth embodiment.
  • The control unit 11 of the information processing apparatus 1 displays the monitor screen in the same procedure as steps S111 to S113 of the first embodiment (step S411), generates an echo image (step S412), and displays the generated echo image on the display unit 15 (step S413).
  • control unit 11 receives the acceleration signal output from the ultrasonic probe 2, and estimates the position of the scanning site, that is, the ultrasonic probe 2 with respect to the subject based on the received acceleration information (step S414).
  • FIG. 16 is a schematic diagram showing a predetermined scanning portion.
  • the information processing apparatus 1 estimates that the position of the ultrasonic probe 2 when the start button 155 of the monitor screen is first operated and the real image is first obtained is the scanning portion “1”. After that, the control unit 11 calculates the position of the ultrasonic probe 2 with respect to the scanning portion “1” based on the acceleration signal, and estimates the position of the ultrasonic probe 2 with respect to the subject.
  • The control unit 11 calculates the amount of echo images generated at each of the plurality of predetermined scanning sites (step S415), and determines whether or not a predetermined amount or more of echo images has been generated at each site (step S416).
  • When it is determined that there is a scanning site where the generated amount of echo images is less than the predetermined amount (step S416: YES), the control unit 11 displays on the display unit 15 an instruction image (movement instruction information) indicating that the ultrasonic probe 2 should be moved to that scanning site (step S417). If the ultrasonic probe 2 is currently located at the scanning site with less than the predetermined amount, the instruction image is not displayed. Further, when there are a plurality of sites where the generated amount of echo images is less than the predetermined amount, the apparatus may be configured to instruct the operator to move to those scanning sites in a predetermined order, for example in the order of the scanning site numbers "1" to "8".
  • When the processing of step S417 is completed, or when it is determined in step S416 that there is no scanning site with less than the predetermined amount (step S416: NO), the control unit 11 determines whether or not the echo images are images suitable for diagnosing a predetermined lung disease by inputting the generated plurality of echo images into the individual learning model 171 (step S114). Since the processing from step S114 onward is the same as in the first embodiment, its details are omitted.
  • According to the fourth embodiment, each part of the lung can be scanned without omission to generate and collect echo images, and a lung disease can be diagnosed with high accuracy.
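  • The per-site bookkeeping of the fourth embodiment can be sketched as follows, assuming the eight scanning sites are numbered 1 to 8 and that the required count per site and the site-estimation logic are supplied by the caller.

```python
from collections import Counter

def site_needing_images(counts_per_site: Counter, required_per_site: int,
                        site_order=range(1, 9)):
    """Return the first scanning site, in the predetermined order, that still has
    fewer than the required number of echo images, or None if all sites are done.
    After each generated image, the caller increments counts_per_site[site] for
    the site estimated from the acceleration-sensor signal."""
    for site in site_order:
        if counts_per_site[site] < required_per_site:
            return site
    return None
```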
  • The ultrasonic diagnostic apparatus according to the fifth embodiment differs from the first embodiment in that the posture of the ultrasonic probe 2 can be indicated to the operator so that echo images suitable for diagnosing lung disease can be collected efficiently. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, the same reference numerals are given to the same parts, and detailed description thereof is omitted.
  • the ultrasonic probe 2 includes an acceleration sensor 421 (attitude sensor) and outputs an acceleration signal to the information processing apparatus 1.
  • the information processing apparatus 1 receives the acceleration signal output from the ultrasonic probe 2, and estimates the posture of the ultrasonic probe 2 based on the received acceleration signal.
  • FIG. 17 is a flowchart showing an information processing procedure according to the fifth embodiment.
  • The control unit 11 of the information processing apparatus 1 displays the monitor screen in the same procedure as steps S111 to S113 of the first embodiment (step S511), generates an echo image (step S512), and displays the generated echo image on the display unit 15 (step S513).
  • control unit 11 receives the acceleration signal output from the ultrasonic probe 2, and estimates the posture of the ultrasonic probe 2 with respect to the subject based on the received acceleration information (step S514).
  • The control unit 11 calculates the image appropriateness, which indicates whether or not the echo images are images suitable for diagnosing a predetermined lung disease, by inputting the generated plurality of echo images into the individual learning model 171 (step S515), and determines, based on the calculated image appropriateness, whether or not the echo images are suitable for diagnosing the predetermined lung disease (step S516).
  • When it is determined that the image is suitable for diagnosing the predetermined lung disease (step S516: YES), the control unit 11 stores the echo image determined to be appropriate in the storage unit 13 (step S517), and also stores posture information indicating the posture of the ultrasonic probe 2 (step S518).
  • The control unit 11 calculates the collected amount of echo images (step S519), displays it on the collection degree gauge 152 (step S520), and displays a representative echo image suitable for lung diagnosis as a sample on the lung diagnosis image display unit 153 (step S521). Since the subsequent processing is the same as in the first embodiment, its details are omitted.
  • When it is determined in step S516 that the echo image is not suitable for diagnosing the predetermined lung disease (step S516: NO), the control unit 11 determines whether or not the amount of echo images being generated that are suitable for diagnosing the predetermined lung disease has decreased (step S523). When it is determined that this amount has decreased (step S523: YES), the control unit 11 reads out the posture information of the ultrasonic probe 2 stored in the storage unit 13, displays on the display unit 15 the read posture information or an instruction image (posture change instruction information) instructing a posture change based on the posture information (step S524), and returns the process to step S512. When it is determined that the amount has not decreased (step S523: NO), the control unit 11 simply returns the process to step S512.
  • As described above, according to the ultrasonic diagnostic apparatus of the fifth embodiment, the operator can be instructed to take a posture of the ultrasonic probe 2 that generates echo images suitable for diagnosing the predetermined lung disease.
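  • The posture guidance above can be sketched as a simple rule, under the assumption that the control unit counts suitable images per interval and keeps a list of the postures stored at step S518.

```python
def posture_change_hint(recent_suitable_count: int, previous_suitable_count: int,
                        stored_postures: list):
    """Return the most recently stored probe posture as a posture-change hint when
    the yield of suitable echo images has dropped, otherwise None."""
    if recent_suitable_count < previous_suitable_count and stored_postures:
        return stored_postures[-1]
    return None
```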
  • The ultrasonic diagnostic apparatus according to the sixth embodiment differs from the first embodiment in that echo images are collected in consideration of the respiratory cycle of the subject. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, the same reference numerals are given to the same parts, and detailed description thereof is omitted.
  • FIG. 18 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the sixth embodiment.
  • the ultrasonic diagnostic apparatus according to the sixth embodiment includes a respiratory cycle sensor 603.
  • the respiration cycle sensor 603 is, for example, a PPG (Photoplethysmography) pulse wave sensor, an acceleration sensor 421, a body motion sensor that detects the movement of the body due to respiration, or the like.
  • the respiratory cycle sensor 603 transmits a signal corresponding to the respiratory cycle of the subject to the information processing apparatus 1.
  • the information processing device 1 receives the signal transmitted from the respiratory cycle sensor 603.
  • FIG. 19 is a flowchart showing an information processing procedure according to the sixth embodiment.
  • The control unit 11 of the information processing apparatus 1 displays the monitor screen in the same procedure as steps S111 to S113 of the first embodiment (step S611), generates an echo image (step S612), and displays the generated echo image on the display unit 15 (step S613).
  • control unit 11 receives the respiratory cycle signal transmitted from the respiratory cycle sensor 603 (step S614), and associates the received respiratory cycle signal information with the echo image generated at the same time (step S615).
  • The control unit 11 selects a plurality of echo images generated at the same breathing timing (step S616). For example, the control unit 11 selects a plurality of echo images associated with respiratory cycle signal information of the inspiratory timing. Further, a plurality of echo images associated with the respiratory cycle signal information of the same inspiratory timing may be selected.
  • control unit 11 calculates the image suitability by inputting the echo image of the selected same breathing timing into the suitability learning model 17 (step S617).
  • the control unit 11 that has completed the processing of step S617 executes the processing of steps S115 to S122 of the first embodiment.
  • As a result, the appropriateness of the echo images can be determined more accurately.
  • When the appropriateness learning model 17 has been machine-learned using only echo images of the expiratory timing, echo images of the expiratory timing may be selected and input to the appropriateness learning model 17. Similarly, when the appropriateness learning model 17 has been machine-learned using only echo images of the inspiratory timing, echo images of the inspiratory timing may be selected and input to the appropriateness learning model 17. Further, the apparatus may be provided with a first appropriateness learning model machine-learned using echo images of the expiratory timing and a second appropriateness learning model machine-learned using echo images of the inspiratory timing, in which case echo images generated at the expiratory timing are input to the first appropriateness learning model and echo images generated at the inspiratory timing are input to the second appropriateness learning model.
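  • A minimal sketch of the phase-based selection (steps S615 and S616), assuming echo images and respiratory-phase labels arrive as parallel sequences and that the phase labels themselves are an assumption, is shown below.

```python
def select_same_phase_images(frames, phases, wanted_phase: str = "inspiration"):
    """Keep only the echo images associated with the requested breathing timing."""
    tagged = list(zip(frames, phases))   # S615: associate the cycle signal with each image
    return [frame for frame, phase in tagged if phase == wanted_phase]   # S616
```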
  • The ultrasonic diagnostic apparatus according to the seventh embodiment differs from the first embodiment in that it collects echo images obtained using a plurality of ultrasonic frequencies. Since the other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, the same reference numerals are given to the same parts, and detailed description thereof is omitted.
  • FIG. 20 is a block diagram showing a configuration example of the appropriateness learning model 17 according to the seventh embodiment.
  • The information processing apparatus 1 periodically switches between a drive signal for transmitting ultrasonic waves of a first frequency and a drive signal for transmitting ultrasonic waves of a second frequency, outputs the switched drive signal to the ultrasonic probe 2, and receives the echo signals.
  • the switching cycle is a short cycle such that the same scanning portion can be scanned at the first frequency and the second frequency.
  • By using ultrasonic waves of different frequencies, different echo images can be obtained for the same scanning site.
  • The higher the frequency of the ultrasonic waves, the higher the resolution but the lower the penetrating power; the lower the frequency, the lower the resolution but the higher the penetrating power.
  • The control unit 11 of the information processing apparatus 1 calculates the image suitability by inputting a plurality of echo images obtained with ultrasonic waves of the first frequency and a plurality of echo images obtained with ultrasonic waves of the second frequency into the appropriateness learning model 17.
  • According to the seventh embodiment, by inputting a plurality of echo images obtained using ultrasonic waves of different frequencies into the appropriateness learning model 17, it is possible to determine more accurately whether or not the echo images are suitable for diagnosing the predetermined lung disease, and to collect them.
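As an illustration only, the sketch below pairs frames acquired with alternating transmit frequencies so that each scan position is represented by a low-frequency and a high-frequency image before suitability scoring; the frame ordering, data shapes, and downstream model are assumptions rather than the disclosed implementation.

    # Minimal sketch of pairing dual-frequency frames from an interleaved
    # acquisition sequence (f1, f2, f1, f2, ...). Shapes are assumptions.
    import numpy as np

    def pair_dual_frequency_frames(frames, freqs_mhz):
        """Group an interleaved frame sequence into (first-frequency, second-frequency) pairs."""
        pairs = []
        for i in range(0, len(frames) - 1, 2):
            if freqs_mhz[i] != freqs_mhz[i + 1]:                    # consecutive frames differ in frequency
                pairs.append(np.stack([frames[i], frames[i + 1]]))  # (2, H, W) per scan position
        return pairs

    # Each pair could then be fed to a suitability model that accepts a
    # two-channel input, e.g. scores = model(np.stack(pairs)).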
  • 1 Information processing device, 2 Ultrasonic probe, 10 Recording medium, 11 Control unit, 11a Probe control unit, 11b Image generation unit, 11c Lung diagnosis appropriateness determination unit, 11d Lung diagnosis image storage unit, 11e Lung diagnosis image collection degree display processing unit, 11f Pulmonary congestion degree calculation unit, 11g Pulmonary congestion degree display processing unit, 12 Memory, 13 Storage unit, 14 Operation unit, 15 Display unit, 16 Communication unit, 17 Appropriateness learning model, 31 A line, 32 Bat sign, 33 B line, 34 Ground-glass shadow, 131 Computer program, 151 Echo image display unit, 152 Collection degree gauge, 152a Meter block, 153 Lung diagnosis image display unit, 154 Congestion degree display unit, 155 Start button, 156 Stop button, 171 Individual learning model, 172 Integrated learning model, 218 Index learning model, 421 Acceleration sensor, 603 Respiratory cycle sensor


Abstract

The present invention generates a series of multiple images on the basis of signals received from a scanning probe that scans an organ of a subject, determines whether the plurality of images generated are suitable for diagnosing a prescribed disease, and causes a computer to execute a process for outputting the collected number of images which are suitable for diagnosing the prescribed disease.

Description

Computer program, information processing method, and information processing device
 The present invention relates to a computer program, an information processing method, and an information processing device.
 Patent Document 1 discloses an ultrasonic guidance device that creates a guidance plan on how to guide the operator of an ultrasonic diagnostic apparatus and guides the operator so that an echo image of a subject including a specific anatomical image is captured.
Japanese Patent Publication No. 2019-521745
 However, the ultrasonic guidance device of Patent Document 1 only guides the operation of the ultrasonic diagnostic apparatus based on the guidance plan, and whether or not an image suitable for diagnosing a predetermined disease is obtained even when the operation guide is followed depends on the skill of the operator. Further, if a certain amount of images suitable for diagnosing the predetermined disease cannot be selected from the obtained images, there is a problem that a highly accurate diagnosis cannot be performed.
 An object of the present invention is to provide a computer program, an information processing method, and an information processing device capable of selecting and collecting images suitable for diagnosing a predetermined disease from a series of a plurality of images obtained by a scanning probe that scans an organ of a subject, and of outputting the collected amount of such images.
 A computer program according to this aspect causes a computer to execute processing of: generating a series of a plurality of images based on signals obtained from a scanning probe that scans an organ of a subject; determining whether or not the generated plurality of images are images suitable for diagnosing a predetermined disease; storing the images suitable for diagnosing the predetermined disease; and outputting the collected amount of the images suitable for diagnosing the predetermined disease.
 An information processing method according to this aspect generates a series of a plurality of images based on signals obtained from a scanning probe that scans an organ of a subject, determines whether or not the generated plurality of images are images suitable for diagnosing a predetermined disease, stores the images suitable for diagnosing the predetermined disease, and outputs the collected amount of the images suitable for diagnosing the predetermined disease.
 An information processing device according to this aspect includes: a generation unit that generates a series of a plurality of images based on signals obtained from a scanning probe that scans an organ of a subject; a determination unit that determines whether or not the plurality of images generated by the generation unit are images suitable for diagnosing a predetermined disease; a storage unit that stores the images determined by the determination unit to be suitable for diagnosing the predetermined disease; and an output unit that outputs the collected amount of the images suitable for diagnosing the predetermined disease.
 According to the above, it is possible to provide a computer program, an information processing method, and an information processing device capable of selecting and collecting images suitable for diagnosing a predetermined disease from a series of a plurality of images obtained by a scanning probe that scans an organ of a subject, and of outputting the collected amount of such images.
FIG. 1 is a schematic diagram illustrating a configuration example of an ultrasonic diagnostic apparatus according to a first embodiment.
FIG. 2 is a block diagram showing a configuration example of an information processing device according to the first embodiment.
FIG. 3 is a functional block diagram showing a configuration example of the information processing device according to the first embodiment.
FIG. 4 is a block diagram showing a configuration example of an appropriateness learning model according to the first embodiment.
FIG. 5 is a block diagram showing a configuration example of an individual learning model.
FIG. 6 is a schematic diagram showing an example of an echo image of a normal lung suitable for lung diagnosis.
FIGS. 7A to 7C are schematic diagrams showing examples of echo images of abnormal lungs suitable for lung diagnosis and of an unsuitable echo image.
FIG. 8 is a block diagram showing a configuration example of an integrated learning model.
FIG. 9 is a flowchart showing an information processing procedure according to the first embodiment.
FIG. 10 is a schematic diagram showing an example of an ultrasonic diagnosis monitor screen.
FIG. 11 is an explanatory diagram showing a method of detecting a B line.
FIG. 12 is a block diagram showing a configuration example of an index learning model according to a second embodiment.
FIG. 13 is a block diagram showing a configuration example of an index learning model according to a third embodiment.
FIG. 14 is a schematic diagram illustrating a configuration example of an ultrasonic diagnostic apparatus according to a fourth embodiment.
FIG. 15 is a flowchart showing an information processing procedure according to the fourth embodiment.
FIG. 16 is a schematic diagram showing predetermined scanning sites.
FIG. 17 is a flowchart showing an information processing procedure according to a fifth embodiment.
FIG. 18 is a schematic diagram illustrating a configuration example of an ultrasonic diagnostic apparatus according to a sixth embodiment.
FIG. 19 is a flowchart showing an information processing procedure according to the sixth embodiment.
FIG. 20 is a block diagram showing a configuration example of an appropriateness learning model according to a seventh embodiment.
 Specific examples of a computer program, an information processing method, and an information processing device according to embodiments of the present invention will be described below with reference to the drawings. The present invention is not limited to these examples; it is defined by the scope of the claims and is intended to include all modifications within the meaning and scope equivalent to the claims. At least parts of the embodiments described below may be combined arbitrarily.
 FIG. 1 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the first embodiment. The ultrasonic diagnostic apparatus according to the first embodiment includes an information processing device 1 and an ultrasonic probe 2. The information processing device 1 and the ultrasonic probe 2 are wirelessly connected and can transmit and receive various signals. The ultrasonic probe 2 may also be connected to the information processing device 1 by a wired cable.
 The ultrasonic probe 2 is a device that scans an organ of a subject with ultrasonic waves, and the ultrasonic scanning is controlled by the information processing device 1. The ultrasonic probe 2 includes, for example, a plurality of piezoelectric elements, an acoustic matching layer, an acoustic lens, and the like. The piezoelectric elements generate ultrasonic waves according to a drive signal output from the information processing device 1. The ultrasonic waves generated by the piezoelectric elements are transmitted from the ultrasonic probe 2 to the body of the subject via the acoustic matching layer and the acoustic lens. The acoustic matching layer is a member for matching the acoustic impedance between the piezoelectric elements and the subject. The acoustic lens is an element for converging the ultrasonic waves spreading from the piezoelectric elements and transmitting them to the subject. The ultrasonic waves transmitted from the ultrasonic probe 2 to the subject are reflected at discontinuities of acoustic impedance in the organ of the subject and are received by the plurality of piezoelectric elements. The amplitude of the reflected wave depends on the difference in acoustic impedance at the reflecting surface, and the arrival time of the reflected wave depends on the depth of the reflecting surface. The piezoelectric elements convert the vibration pressure of the reflected ultrasonic waves into an electric signal, hereinafter referred to as an echo signal. The ultrasonic probe 2 outputs the echo signal to the information processing device 1.
 FIG. 2 is a block diagram showing a configuration example of the information processing device 1 according to the first embodiment. The information processing device 1 is a computer including a control unit 11, a memory 12, a storage unit 13, an operation unit 14, a display unit 15, and a communication unit 16. The information processing device 1 may be a multi-computer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
 The control unit 11 is an arithmetic processing unit such as one or more CPUs (Central Processing Units), MPUs (Micro-Processing Units), GPUs (Graphics Processing Units), GPGPUs (General-Purpose computing on Graphics Processing Units), or TPUs (Tensor Processing Units). By reading and executing the computer program 131 stored in the storage unit 13, the control unit 11 executes various processes: it controls ultrasonic scanning by the ultrasonic probe 2, sequentially generates a series of a plurality of echo images in real time based on the signals obtained from the ultrasonic probe 2, determines whether or not the generated plurality of echo images are images suitable for diagnosing a predetermined lung disease (predetermined disease), displays in real time, in parallel with the echo image generation processing, the collected amount and the target collection amount of echo images suitable for diagnosing the predetermined lung disease, and, when echo images equal to or exceeding the target collection amount have been collected, calculates and displays an index for diagnosing the predetermined lung disease based on those echo images.
 The communication unit 16 includes a processing circuit, a communication circuit, and the like for performing wireless communication processing, and transmits and receives various signals to and from the ultrasonic probe 2. Specifically, under the control of the control unit 11, the communication unit 16 causes ultrasonic waves to be generated by transmitting a drive signal to the ultrasonic probe 2, and receives the echo signal output from the ultrasonic probe 2.
 The memory 12 is a volatile memory such as a DRAM (Dynamic RAM) or an SRAM (Static RAM), and temporarily stores the computer program 131 read from the storage unit 13 when the control unit 11 executes arithmetic processing, as well as various data generated by the arithmetic processing of the control unit 11.
 The storage unit 13 is a storage device such as a hard disk, an EEPROM (Electrically Erasable Programmable ROM), or a flash memory. The storage unit 13 stores the computer program 131 and the appropriateness learning model 17 that the control unit 11 requires for collecting echo images and for the diagnostic processing of the predetermined lung disease.
 The computer program 131 is a program for causing a computer to function as the information processing device 1 according to the first embodiment. The computer program 131 causes the computer to execute the information processing method according to the first embodiment, such as the collection of echo images and the diagnostic processing of the predetermined lung disease.
 The computer program 131 may be recorded on a recording medium 10 in a computer-readable manner. The storage unit 13 stores the computer program 131 read from the recording medium 10 by a reading device (not shown). The recording medium 10 is a semiconductor memory such as a flash memory, an optical disk, a magnetic disk, a magneto-optical disk, or the like. The computer program 131 according to the present embodiment may also be downloaded from an external server (not shown) connected to a communication network and stored in the storage unit 13.
 The operation unit 14 is an input device that accepts operations by an operator using the ultrasonic diagnostic apparatus. The operator is, for example, a medical professional such as a doctor, a laboratory technician, or a nurse. The input device is, for example, a pointing device such as a touch panel, or a keyboard.
 The display unit 15 is an output device that outputs information such as echo images, the echo image collection degree, and the pulmonary congestion degree. The output device is, for example, a liquid crystal display or an EL display.
 FIG. 3 is a functional block diagram showing a configuration example of the information processing device 1 according to the first embodiment. By reading and executing the computer program 131 stored in the storage unit 13, the control unit 11 of the information processing device 1 functions as a probe control unit 11a, an image generation unit 11b, a lung diagnosis appropriateness determination unit 11c, a lung diagnosis image storage unit 11d, a lung diagnosis image collection degree display processing unit 11e, a pulmonary congestion degree calculation unit 11f, and a pulmonary congestion degree display processing unit 11g.
 The probe control unit 11a controls the ultrasonic scanning process performed by the ultrasonic probe 2. Specifically, it causes the ultrasonic probe 2 to generate ultrasonic waves by outputting a drive signal, and receives the echo signal output from the ultrasonic probe 2.
 The image generation unit 11b executes processing for generating echo images based on the echo signals received by the communication unit 16. The image generation unit 11b generates a series of echo images in real time each time the communication unit 16 receives an echo signal. An echo image is, for example, a B-mode image in which the intensity of the reflected wave is represented by brightness, reproducing a two-dimensional tomographic image of the organ. The type of echo image is not particularly limited.
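As an illustrative aside (not the apparatus's actual signal chain), the following Python sketch shows a conventional way a per-line echo signal can be turned into a B-mode brightness image: envelope detection followed by log compression. The input shape and dynamic range are assumptions.

    # Minimal B-mode formation sketch: envelope detection + log compression.
    import numpy as np
    from scipy.signal import hilbert

    def rf_to_bmode(rf_lines, dynamic_range_db=60.0):
        """rf_lines: array (n_scan_lines, n_samples) of raw echo signals."""
        envelope = np.abs(hilbert(rf_lines, axis=1))        # envelope detection
        envelope /= envelope.max() + 1e-12                  # normalize
        db = 20.0 * np.log10(envelope + 1e-12)              # log compression
        db = np.clip(db, -dynamic_range_db, 0.0)
        return ((db + dynamic_range_db) / dynamic_range_db * 255).astype(np.uint8)

    # bmode = rf_to_bmode(np.random.randn(128, 2048))  # dummy RF data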
 The lung diagnosis appropriateness determination unit 11c executes processing for determining whether or not a generated echo image is an image suitable for diagnosing a predetermined lung disease, for example pulmonary congestion. Hereinafter, the predetermined lung disease is described as being pulmonary congestion.
 The lung diagnosis image storage unit 11d executes processing for storing echo images determined to be images suitable for diagnosing pulmonary congestion.
 The lung diagnosis image collection degree display processing unit 11e executes processing for displaying, on the display unit 15, the collection degree of echo images suitable for diagnosing pulmonary congestion and the target collection amount required for calculating the pulmonary congestion degree. The lung diagnosis image collection degree display processing unit 11e calculates the collected amount of echo images in real time and displays it on the display unit 15.
 When echo images equal to or exceeding the target collection amount have been collected, the pulmonary congestion degree calculation unit 11f executes processing for calculating the pulmonary congestion degree based on those echo images.
 The pulmonary congestion degree display processing unit 11g executes processing for displaying, on the display unit 15, the pulmonary congestion degree calculated by the pulmonary congestion degree calculation unit 11f.
 FIG. 4 is a block diagram showing a configuration example of the appropriateness learning model 17 according to the first embodiment. The appropriateness learning model 17 includes a plurality of individual learning models 171 and an integrated learning model 172. A plurality of echo images are respectively input to the plurality of individual learning models 171. Each individual learning model 171 is a learning model that extracts the feature amount of the input echo image and outputs the extracted feature amount to the integrated learning model 172. The feature amounts output from the plurality of individual learning models 171 are input to the integrated learning model 172. The integrated learning model 172 is a learning model that, when the feature amounts of a plurality of echo images are input, outputs an image appropriateness indicating the degree to which the plurality of echo images are echo images suitable for diagnosing pulmonary congestion.
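As an illustration only, the PyTorch sketch below mirrors this two-stage structure: per-image encoders feeding a single classifier that outputs one suitability value. Using a single shared encoder applied to every image, as well as all layer sizes and the number of images per group, are simplifying assumptions and not the disclosed configuration.

    # Two-stage sketch: per-image feature extraction, then joint suitability.
    import torch
    import torch.nn as nn

    class IndividualEncoder(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )

        def forward(self, x):                 # x: (B, 1, H, W)
            return self.net(x)                # (B, feat_dim)

    class IntegratedClassifier(nn.Module):
        def __init__(self, n_images=5, feat_dim=64):
            super().__init__()
            self.encoder = IndividualEncoder(feat_dim)
            self.head = nn.Sequential(
                nn.Linear(n_images * feat_dim, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Sigmoid(),   # image appropriateness in [0, 1]
            )

        def forward(self, images):            # images: (B, n_images, 1, H, W)
            b = images.shape[0]
            feats = self.encoder(images.flatten(0, 1)).reshape(b, -1)
            return self.head(feats)

    # score = IntegratedClassifier()(torch.randn(2, 5, 1, 64, 64))  # shape (2, 1)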
 FIG. 5 is a block diagram showing a configuration example of the individual learning model 171. The individual learning model 171 is a trained model that has been trained, by machine learning using teacher data or by unsupervised learning using an autoencoder, to extract and output feature amounts related to the diagnosis of pulmonary congestion from an echo image. The individual learning model 171 performs predetermined operations on input values and outputs the operation results, and data such as the coefficients and thresholds of the functions defining these operations are stored in the storage unit 13 as the individual learning model 171. By reading the data stored as the individual learning model 171, the control unit 11 can execute the arithmetic processing for extracting the features of an echo image.
 In the first embodiment, the learning process of the individual learning model 171 is performed by a learning computer. The data of the trained individual learning model 171 may be provided, like the computer program 131, in the form of distribution via a communication network, or in a form recorded on the recording medium 10.
 In the first embodiment, the individual learning model 171 is a neural network having, for example, an input layer 171a into which an echo image is input and an intermediate layer 171b that extracts the feature amount of the echo image. The individual learning model 171 is configured using, for example, an autoencoder. As shown in FIG. 5, the autoencoder includes an input layer 171a into which an echo image is input, a first intermediate layer 171b that dimensionally compresses the input image to extract a feature amount, a second intermediate layer 171c that restores an echo image from the extracted feature amount, and an output layer 171d that outputs the restored echo image. The first intermediate layer 171b and the second intermediate layer 171c are also called a convolution layer and a deconvolution layer. The individual learning model 171 is composed of the input layer 171a of this autoencoder, into which an echo image is input, and the intermediate layer 171b, which dimensionally compresses the input image to extract the feature amount. The second intermediate layer 171c and the output layer 171d, drawn with broken lines in FIG. 5, are not essential components of the individual learning model 171. As long as it is configured to extract the feature amount of an echo image and provide it to the integrated learning model 172 at the subsequent stage, the configuration of the individual learning model 171 is not particularly limited, and the autoencoder as a whole may be used as the individual learning model 171.
 Hereinafter, an example in which the individual learning model 171 is configured using the input layer 171a and the first intermediate layer 171b of the autoencoder will be described.
 The input layer 171a of the neural network has a plurality of neurons to which the pixel values of the pixels of an echo image are input, and passes the input data to the intermediate layer 171b.
 The first intermediate layer 171b has a plurality of layers each composed of a plurality of neurons. The intermediate layer 171b is a layer that dimensionally compresses the image data, for example by performing convolution processing on the echo image. Through this dimensional compression, each layer extracts, from the input data, the feature amounts of echo images of normal lungs and of abnormal lungs and passes them on from the preceding layer to the succeeding layer. The final layer of the first intermediate layer 171b outputs the feature amount extracted from the echo image. The meaning of the feature amount cannot be interpreted directly, but it is considered to be related to features appearing in images of normal and abnormal lungs, such as the presence or absence of a real image, the A line 31, the B line 33, the ground-glass shadow 34, and the bat sign 32.
 A learning method of the individual learning model 171 will be described. First, an untrained autoencoder is prepared. The autoencoder includes the input layer 171a, the first intermediate layer 171b, the second intermediate layer 171c, and the output layer 171d.
 The computer collects a plurality of echo images of normal lungs and a plurality of echo images of abnormal lungs, that is, a plurality of echo images suitable for diagnosing the predetermined lung disease. Next, using the collected echo images, the computer trains the untrained autoencoder by machine learning or deep learning so that the echo image input to the input layer 171a and the image output from the output layer 171d become the same.
 Specifically, the computer inputs the plurality of echo images serving as learning data into the untrained autoencoder, and obtains the images output from the output layer 171d through the arithmetic processing in the first intermediate layer 171b and the second intermediate layer 171c. The computer then compares the image output from the output layer 171d with the input echo image, and optimizes the parameters used for the arithmetic processing in the intermediate layers 171b and 171c so that the output image approaches the input echo image. The parameters are, for example, the weights (coupling coefficients) between neurons. The method of optimizing the parameters is not particularly limited; for example, the computer optimizes the various parameters using the steepest descent method or the like. The individual learning model 171 is then generated by extracting the input layer 171a and the first intermediate layer 171b from the trained autoencoder.
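For illustration only, the following PyTorch sketch trains a small convolutional autoencoder to reconstruct its input and then keeps the encoder alone as a feature extractor, in the spirit of the procedure just described; the layer sizes, optimizer settings, and dummy images are assumptions.

    # Autoencoder training sketch; only the encoder is retained afterwards.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )
    decoder = nn.Sequential(
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )
    autoencoder = nn.Sequential(encoder, decoder)
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    images = torch.rand(32, 1, 64, 64)        # stand-in for collected echo images
    for epoch in range(10):
        reconstruction = autoencoder(images)  # output should approach the input
        loss = loss_fn(reconstruction, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # After training, "encoder" alone plays the role of the feature extractor.
    features = encoder(images).flatten(1)     # (32, 32 * 16 * 16)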
 In the above, supervised learning has been exemplified, but the individual learning model 171 may also be generated by supervised learning using a CNN (Convolutional Neural Network). Furthermore, although an example in which the individual learning model 171 is a neural network has been described, it may be a model having a configuration such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
 FIG. 6 is a schematic diagram showing an example of an echo image of a normal lung suitable for lung diagnosis. A suitable echo image of a normal lung includes a clear A line 31. The A line 31 is an image caused by multiple reflections occurring between the pleura and the ultrasonic probe 2. A suitable echo image of a normal lung in a sagittal section also includes an image called a bat sign 32. The bat sign 32 is a curved, convex image obtained by the reflection of ultrasonic waves from the ribs. The A line 31 and the bat sign 32 are features included in echo images suitable for diagnosing the predetermined lung disease (diagnosing the lung as normal).
 FIGS. 7A to 7C are schematic diagrams showing examples of echo images of abnormal lungs suitable for lung diagnosis and of an unsuitable echo image. FIG. 7A is a schematic diagram of an echo image of an abnormal lung in which B lines 33 are observed, and FIG. 7B is a schematic diagram of an echo image of an abnormal lung in which a ground-glass shadow 34 is observed. FIG. 7C is an echo image without a real image, which is unsuitable for diagnosing the predetermined lung disease. The B line 33 is an image caused by thickening of the interlobular septa or by fluid accumulation in the alveoli. The ground-glass shadow 34 is an image that occurs in abnormal lungs such as those with pneumonia. The B line 33 and the ground-glass shadow 34 are features included in echo images suitable for diagnosing the predetermined lung disease.
 FIG. 8 is a block diagram showing a configuration example of the integrated learning model 172. The integrated learning model 172 is a trained model that has been trained, by machine learning using teacher data or by unsupervised learning such as clustering, to output, from the feature amounts of a plurality of echo images, an image appropriateness indicating the degree to which the plurality of echo images are images suitable for diagnosing the predetermined lung disease. The integrated learning model 172 performs predetermined operations on input values and outputs the operation results, and data such as the coefficients and thresholds of the functions defining these operations are stored in the storage unit 13 as the integrated learning model 172. By reading the data stored as the integrated learning model 172, the control unit 11 can execute the arithmetic processing for determining the suitability of echo images from their feature amounts.
 In the first embodiment, the learning process of the integrated learning model 172 is performed by a learning computer. The data of the trained integrated learning model 172 may be provided, like the computer program 131, in the form of distribution via a communication network, or in a form recorded on the recording medium 10.
 In the first embodiment, the integrated learning model 172 is, for example, a neural network having an input layer 172a into which the feature amounts of a plurality of echo images are input, an intermediate layer 172b that extracts feature amounts, and an output layer 172c that outputs the result.
 The input layer 172a of the neural network has a plurality of neurons to which the feature amounts of the plurality of echo images are input, and passes the input data to the intermediate layer 172b.
 The intermediate layer 172b has a plurality of layers each composed of a plurality of neurons. Each layer extracts, from the input data, feature amounts related to the suitability of the echo images and passes them on from the preceding layer to the succeeding layer, and the last layer passes its output to the output layer 172c.
 The output layer 172c includes a neuron that outputs the operation result; this neuron outputs the image appropriateness indicating the degree to which the plurality of echo images are images suitable for diagnosing the predetermined lung disease.
 Although an example in which the integrated learning model 172 is a neural network has been described in the first embodiment, it may be a model having a configuration such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
 A learning method of the integrated learning model 172 will be described. First, the computer collects the feature amounts of a plurality of echo images that serve as the source of the teacher data. The computer then generates learning data by attaching, to the sets of feature amounts, teacher data indicating whether or not the images are suitable for diagnosing the predetermined lung disease. Next, using the generated learning data, the computer generates the integrated learning model 172 by machine learning or deep learning of the untrained neural network model.
 Specifically, the computer inputs the feature amounts of the plurality of echo images included in the learning data into the untrained neural network model, and obtains the image appropriateness output from the output layer 172c through the arithmetic processing in the intermediate layer 172b. The computer then compares the image appropriateness output from the output layer 172c with the image appropriateness indicated by the teacher data, and optimizes the parameters used for the arithmetic processing in the intermediate layer 172b so that the output image appropriateness approaches the correct value. The parameters are, for example, the weights (coupling coefficients) between neurons. The method of optimizing the parameters is not particularly limited; for example, the computer optimizes the various parameters using the steepest descent method or the like.
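Purely as an illustrative sketch of this supervised step, the code below treats the concatenated image features as inputs and a binary "suitable / not suitable" label as the teacher signal; all sizes and the dummy data are assumptions.

    # Supervised training sketch for an integrated suitability classifier.
    import torch
    import torch.nn as nn

    feat_dim, n_images = 64, 5
    integrated = nn.Sequential(
        nn.Linear(n_images * feat_dim, 64), nn.ReLU(),
        nn.Linear(64, 1),                       # logit of image appropriateness
    )
    optimizer = torch.optim.SGD(integrated.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()

    features = torch.randn(128, n_images * feat_dim)   # stand-in feature vectors
    labels = torch.randint(0, 2, (128, 1)).float()     # teacher data (0 or 1)
    for step in range(100):
        logits = integrated(features)
        loss = loss_fn(logits, labels)                 # pull outputs toward teacher values
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    suitability = torch.sigmoid(integrated(features[:1]))  # value in [0, 1]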
 The information processing device 1 obtains the trained integrated learning model 172 by repeatedly performing the above processing based on the teacher data of a large number of patients.
 FIG. 9 is a flowchart showing an information processing procedure according to the first embodiment, and FIG. 10 is a schematic diagram showing an example of an ultrasonic diagnosis monitor screen. The control unit 11 of the information processing device 1 displays a monitor screen as shown in FIG. 10 on the display unit 15 (step S111). The monitor screen includes an echo image display unit 151, a collection degree gauge 152, a lung diagnosis image display unit 153, a congestion degree display unit 154, a start button 155, a stop button 156, and the like.
 The echo image display unit 151 displays, in real time, the echo images generated based on the echo signals. The collection degree gauge 152 displays, among the generated echo images, the collection degree of echo images suitable for diagnosing the predetermined lung disease. The collection degree gauge 152 represents the target collection amount of echo images required for diagnosing the predetermined lung disease by a predetermined number of meter blocks 152a. When a quantity of echo images corresponding to one meter block 152a has been collected, the collection degree gauge 152 indicates the collected amount by changing the color of the meter blocks 152a in order from the bottom. When the colors of all the meter blocks 152a have been changed, the collected amount has reached the target collection amount. The lung diagnosis image display unit 153 displays a representative echo image suitable for diagnosing the predetermined lung disease. The congestion degree display unit 154 displays the diagnosis result of the predetermined lung disease. The start button 155 is an operation button for starting the collection of echo images and the diagnostic processing of the predetermined lung disease, and the stop button 156 is an operation button for stopping this processing.
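As a small illustration of the gauge behaviour described above, the following sketch maps the number of stored images to the number of coloured meter blocks; the block count and the target collection amount are placeholder values.

    # Tiny sketch of the collection gauge: collected images -> filled blocks.
    def filled_meter_blocks(collected, target=20, n_blocks=10):
        """Number of gauge blocks to colour, counted from the bottom up."""
        if target <= 0:
            return n_blocks
        ratio = min(collected / target, 1.0)
        return int(ratio * n_blocks)

    # filled_meter_blocks(7)  -> 3   (7/20 of 10 blocks)
    # filled_meter_blocks(25) -> 10  (target reached, gauge full)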
 Next, the control unit 11 receives the echo signal output from the ultrasonic probe 2 and generates an echo image based on the received echo signal (step S112). The control unit 11 then displays the generated echo image on the echo image display unit 151 (step S113).
 Next, the control unit 11 calculates the image appropriateness by inputting the plurality of echo images into the respective individual learning models 171 (step S114), and determines, based on the calculated image appropriateness, whether or not the plurality of echo images are images suitable for diagnosing the predetermined lung disease (step S115). When it is determined that the images are unsuitable (step S115: NO), the control unit 11 returns the processing to step S112.
 When it is determined that the images are suitable for diagnosing the predetermined lung disease (step S115: YES), the control unit 11 stores the echo images determined to be suitable in the storage unit 13 (step S116); that is, it collects the echo images.
 Next, the control unit 11 calculates the collected amount of echo images suitable for diagnosing the predetermined lung disease (step S117) and displays the calculated collected amount on the collection degree gauge 152 (step S118). The control unit 11 that displays the collected amount functions as an output unit that outputs the collected amount of images suitable for diagnosing the predetermined disease.
 The control unit 11 also displays a representative echo image suitable for lung diagnosis as a sample on the lung diagnosis image display unit 153 (step S119).
 Next, the control unit 11 determines whether or not the collected amount of echo images has reached a predetermined target collection amount (step S120). When it is determined that the target collection amount has not been reached (step S120: NO), the control unit 11 returns the processing to step S112.
 When it is determined that the target collection amount has been reached (step S120: YES), the control unit 11 calculates the pulmonary congestion degree (step S121), displays the calculated pulmonary congestion degree on the congestion degree display unit 154 (step S122), and ends the processing.
 FIG. 11 is an explanatory diagram showing a method of detecting the B line 33. The left part of FIG. 11 is an echo image generated based on the echo signals; it is a polar-coordinate image in which the horizontal axis indicates the depth direction and the vertical axis indicates the angle of the direction in which the ultrasonic waves are transmitted. The brightness of each pixel corresponds to the amplitude of the echo signal.
 The control unit 11 integrates the brightness value of each pixel of the echo image along the depth direction. The middle part of FIG. 11 is a graph conceptually showing the integration result; the horizontal axis indicates the angle and the vertical axis indicates the integrated value.
 Next, the control unit 11 differentiates the integrated values with respect to the angle. The right part of FIG. 11 is a graph conceptually showing the differentiation result; the horizontal axis indicates the angle and the vertical axis indicates the differential value. The control unit 11 determines that a location where the differential value is equal to or greater than a predetermined value is a B line 33. The control unit 11 then counts, among the collected plurality of echo images, the number of echo images in which a B line 33 is present, and determines that pulmonary congestion is present when B lines 33 are present in a predetermined ratio or more of the echo images, for example 37.5% or more.
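As a numerical illustration of this criterion (not the disclosed implementation), the sketch below integrates brightness along the depth axis for each transmit angle, differentiates the result over the angle axis, marks angles whose derivative exceeds a threshold, and flags pulmonary congestion when enough frames contain such a line. The derivative threshold is a placeholder value; the 37.5% frame ratio comes from the example above.

    # B-line criterion sketch: depth integration, angular derivative, threshold.
    import numpy as np

    def has_b_line(polar_image, diff_threshold=50.0):
        """polar_image: (n_angles, n_depth_samples) brightness values."""
        depth_integral = polar_image.sum(axis=1)          # integrate along depth
        angular_diff = np.diff(depth_integral)            # differentiate over angle
        return bool(np.any(angular_diff >= diff_threshold))

    def congestion_decision(polar_images, frame_ratio_threshold=0.375):
        n_with_b_line = sum(has_b_line(img) for img in polar_images)
        ratio = n_with_b_line / max(len(polar_images), 1)
        return ratio >= frame_ratio_threshold, ratio

    # decision, ratio = congestion_decision(
    #     [np.random.rand(128, 512) * 255 for _ in range(16)])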
 When it is determined that pulmonary congestion is present, the control unit 11 displays, for example, the ratio of echo images in which B lines 33 are present on the congestion degree display unit 154. When it is determined that pulmonary congestion is not present, the control unit 11 displays on the congestion degree display unit 154 that there is no finding.
 As described above, according to the ultrasonic diagnostic apparatus of the first embodiment, images suitable for diagnosing the predetermined lung disease can be selected and collected from a series of a plurality of images obtained by the scanning probe that scans an organ of the subject, and the collected amount of such images can be output.
 In addition, the collection degree and the target collection degree of images suitable for diagnosing the predetermined lung disease can be displayed in real time.
 Furthermore, images suitable for diagnosing the predetermined lung disease can be displayed.
 Furthermore, when the collection degree of images suitable for diagnosing the predetermined lung disease reaches the target collection degree, the pulmonary congestion degree can be calculated and displayed based on the collected images.
 Although the first embodiment describes an example of scanning the lungs of the subject, the present invention can also be applied to scanning other organs. In addition, although the first embodiment describes an example of scanning an organ of the subject using ultrasonic waves, the present invention can also be applied to the case of using a scanning probe that optically acquires tomographic images of an organ, for example a probe for optical coherence tomographic diagnosis.
 In the first embodiment, an example has been described in which the collection degree and the target collection degree of images suitable for diagnosing the predetermined lung disease, the scanned images, representative images suitable for diagnosing the predetermined lung disease, and the like are displayed on the display unit 15 of the information processing device 1 constituting the ultrasonic diagnostic apparatus; however, the information processing device 1 may be configured to output these various pieces of information to an external monitor device and display them there.
 Furthermore, although the first embodiment describes an example in which images suitable for diagnosing the predetermined lung disease are determined using the plurality of individual learning models 171 and the integrated learning model 172, a single learning model may be used to individually determine the suitability of each of a plurality of images.
(Embodiment 2)
 The ultrasonic diagnostic apparatus according to the second embodiment differs from the first embodiment in that it uses a learning model to diagnose the predetermined lung disease based on the feature amounts of echo images. Since the other configurations of the information processing device 1 are the same as those of the information processing device 1 according to the first embodiment, the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
 FIG. 12 is a block diagram showing a configuration example of the index learning model 218 according to the second embodiment. The storage unit 13 of the information processing device 1 according to the second embodiment stores the index learning model 218. The index learning model 218 is a trained model that has been trained, by machine learning using teacher data or by unsupervised learning such as clustering, to output the pulmonary congestion degree from the feature amounts of echo images. The feature amounts of an echo image are, for example, the presence or absence of B lines 33, the number of B lines 33, the presence or absence of a ground-glass shadow 34, the contrast of the bat sign 32, and the like. The index learning model 218 performs predetermined operations on input values and outputs the operation results, and data such as the coefficients and thresholds of the functions defining these operations are stored in the storage unit 13 as the index learning model 218. By reading the data stored as the index learning model 218, the control unit 11 can execute the arithmetic processing for calculating the pulmonary congestion degree from the feature amounts of echo images.
 In the second embodiment, the learning process of the index learning model 218 is performed by a learning computer. The data of the trained index learning model 218 may be provided, like the computer program 131, in the form of distribution via a communication network, or in a form recorded on the recording medium 10.
In the second embodiment, the index learning model 218 is, for example, a neural network having an input layer 218a to which the feature amounts of an echo image are input, an intermediate layer 218b that extracts features from the input, and an output layer 218c that outputs a value based on the extracted features.
The input layer 218a of the neural network has a plurality of neurons to which the feature amounts of the echo image are input, and passes each input value to the intermediate layer 218b.
The intermediate layer 218b has a plurality of layers each composed of a plurality of neurons. Each layer passes the data from the preceding layer to the following layer while extracting features related to pulmonary congestion, and the last layer passes its output to the output layer 218c.
The output layer 218c includes a neuron that outputs the degree of pulmonary congestion.
Although the second embodiment has been described with the index learning model 218 configured as a neural network, it may instead be a model such as an SVM (Support Vector Machine), a Bayesian network, or a regression tree.
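For illustration only, a network with the structure described above (input layer 218a, intermediate layers 218b, output layer 218c) could be sketched in Python as follows. The embodiment does not specify a framework, layer sizes, or activation functions; PyTorch, the dimensions, and the feature ordering used here are assumptions.

```python
# Illustrative sketch of the index learning model 218: echo-image feature amounts in,
# pulmonary congestion degree out. Framework, layer sizes and activations are assumptions.
import torch
import torch.nn as nn

class IndexLearningModel(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),  # input layer 218a: one value per feature amount
            nn.ReLU(),
            nn.Linear(32, 16),          # intermediate layers 218b
            nn.ReLU(),
            nn.Linear(16, 1),           # output layer 218c: degree of pulmonary congestion
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

# Assumed feature order: [B line present, number of B lines, ground-glass shadow present, bat-sign contrast]
model = IndexLearningModel()
congestion_degree = model(torch.tensor([[1.0, 3.0, 0.0, 0.8]]))
```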
The training method of the index learning model 218 will now be described. First, the computer collects the feature amounts of a plurality of echo images that serve as the basis of the teacher data. The computer then generates training data by attaching, to the feature amounts of each echo image, teacher data indicating the degree of pulmonary congestion. Next, using the generated training data, the computer trains an untrained neural network model by machine learning or deep learning to generate the index learning model 218.
Specifically, the computer inputs the feature amounts of the echo images included in the training data into the untrained neural network model and obtains the degree of pulmonary congestion output from the output layer 218c after the arithmetic processing in the intermediate layer 218b. The computer then compares the degree of pulmonary congestion output from the output layer 218c with the degree of pulmonary congestion indicated by the teacher data, and optimizes the parameters used in the arithmetic processing of the intermediate layer 218b so that the output approaches the correct value. The parameters are, for example, the weights (coupling coefficients) between neurons. The optimization method is not particularly limited; for example, the computer optimizes the parameters using the steepest descent method or the like.
The information processing apparatus 1 obtains the trained index learning model 218 by repeating the above processing on the teacher data of a large number of patients.
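A minimal sketch of the training procedure described above, assuming a PyTorch model like the one sketched earlier: the mean squared error between the predicted and teacher congestion degrees is minimized by a steepest-descent-style optimizer. The loss function, optimizer, learning rate, and epoch count are assumptions, not taken from the embodiment.

```python
# Sketch of the training loop described above: minimize the gap between the model output
# and the teacher congestion degree. Loss, optimizer, learning rate and epochs are assumptions.
import torch
import torch.nn as nn

def train_index_model(model: nn.Module,
                      features: torch.Tensor,   # (N, n_features) feature amounts from echo images
                      labels: torch.Tensor,     # (N, 1) teacher pulmonary congestion degrees
                      epochs: int = 100,
                      lr: float = 0.01) -> nn.Module:
    criterion = nn.MSELoss()                                  # distance to the correct value
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)    # steepest-descent-style update
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(features), labels)             # compare output with teacher data
        loss.backward()                                       # gradients w.r.t. weights (coupling coefficients)
        optimizer.step()                                      # optimize the parameters
    return model
```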
By inputting, into the index learning model 218 configured in this way, the feature amounts of an echo image determined to be suitable for diagnosing a lung disease, the degree of pulmonary congestion can be calculated.
The feature amounts of an echo image can be calculated using, for example, a learning model (not shown). This learning model is, for example, a neural network having an input layer to which an echo image is input, an intermediate layer that extracts the feature amounts of the echo image, and an output layer. The learning model is a CNN (Convolutional Neural Network) and includes a plurality of convolutional layers, pooling layers, fully connected layers, and the like. The feature amounts are, for example, the presence or absence of a real image, the presence or absence of an A line 31, the presence or absence of a B line 33, the number of B lines 33, the presence or absence of a ground glass-like shadow 34, and the presence or absence of a bat sign 32. The above-described neurons and output data are merely examples and are not particularly limited.
The training method of this learning model is as follows. First, the computer collects a plurality of echo images that serve as the basis of the teacher data. The computer then generates training data by attaching, to the echo images, teacher data indicating feature amounts such as the presence or absence of a real image and the presence or absence of a B line 33. Next, using the generated training data, the computer trains an untrained neural network model by machine learning or deep learning to generate the individual learning model 171.
Specifically, the computer inputs an echo image included in the training data into the untrained neural network model and obtains the feature amounts output from the output layer after the arithmetic processing in the intermediate layer. The computer then compares the feature amounts output from the output layer with the feature amounts indicated by the teacher data, and optimizes the parameters used in the arithmetic processing of the intermediate layer so that the output approaches the correct values.
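As a hedged illustration of the feature-extraction learning model described above (a CNN with convolutional, pooling, and fully connected layers), the following sketch maps one echo frame to a small feature vector. The image size, channel counts, and number of output features are assumptions.

```python
# Illustrative CNN sketch of the feature-extraction learning model: one echo frame in,
# a small feature vector out (e.g. real image, A line, B line, B-line count, ground-glass
# shadow, bat sign). Image size, channels and output width are assumptions.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 64 * 64, 6),                  # fully connected layer -> 6 feature values
)

echo_image = torch.randn(1, 1, 256, 256)         # one grayscale echo frame (assumed size)
features = feature_extractor(echo_image)         # tensor of 6 assumed feature values
```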
As described above, according to the ultrasonic diagnostic apparatus of the second embodiment, the degree of pulmonary congestion can be calculated by inputting, into the index learning model 218, feature amounts of the echo image such as the presence or absence of the B line 33, the number of B lines 33, the presence or absence of the ground glass-like shadow 34, and the contrast of the bat sign 32.
(Embodiment 3)
The ultrasonic diagnostic apparatus according to the third embodiment differs from the first embodiment in that the echo image itself is input into a learning model to diagnose a predetermined lung disease. The other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the second embodiment, so the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
FIG. 13 is a block diagram showing a configuration example of the index learning model 318 according to the third embodiment. The storage unit 13 of the information processing apparatus 1 according to the third embodiment stores the index learning model 318. The index learning model 318 is, for example, a neural network having an input layer 318a to which an echo image is input, an intermediate layer 318b that extracts features from the echo image, and an output layer 318c that outputs a value based on the extracted features.
The input layer 318a of the neural network has a plurality of neurons to which the pixel values of the pixels of the echo image are input, and passes each input value to the intermediate layer 318b.
The intermediate layer 318b has a plurality of layers each composed of a plurality of neurons. For example, the index learning model 318 of the third embodiment is a CNN and includes a plurality of convolutional layers, pooling layers, fully connected layers, and the like. Each layer passes the data from the preceding layer to the following layer while extracting features related to pulmonary congestion, and the last layer passes its output to the output layer 318c.
The output layer 318c includes a neuron that outputs the degree of pulmonary congestion.
The method of generating the index learning model 318 is the same as in the second embodiment. By inputting an echo image determined to be suitable for diagnosing a lung disease into the index learning model 318 configured in this way, the degree of pulmonary congestion can be calculated.
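A rough sketch of the index learning model 318, for illustration only: the echo image itself is the input and a single pulmonary congestion degree is the output. The layer sizes and image dimensions are assumptions.

```python
# Illustrative sketch of the index learning model 318: the echo image itself is the input
# (input layer 318a) and a single congestion degree is the output (output layer 318c).
# All dimensions are assumptions.
import torch
import torch.nn as nn

index_model_318 = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional and pooling layers (318b)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 64 * 64, 1),                  # fully connected output: congestion degree
)

congestion_degree = index_model_318(torch.randn(1, 1, 256, 256))
```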
As described above, according to the ultrasonic diagnostic apparatus of the third embodiment, the degree of pulmonary congestion can be calculated by inputting an echo image into the index learning model 318.
(Embodiment 4)
The ultrasonic diagnostic apparatus according to the fourth embodiment differs from the first embodiment in that a predetermined number or more of echo images are generated at each of a plurality of predetermined scanning sites to diagnose a lung disease. The other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, so the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
FIG. 14 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the fourth embodiment. The ultrasonic probe 2 according to the fourth embodiment includes an acceleration sensor 421 (positioning sensor) and outputs an acceleration signal to the information processing apparatus 1. The information processing apparatus 1 receives the acceleration signal output from the ultrasonic probe 2 and estimates the position of the ultrasonic probe 2 based on the received acceleration signal. Specifically, based on the acceleration signal, the information processing apparatus 1 uses the position of the ultrasonic probe 2 relative to the subject at the time ultrasonic sensing is started as a reference, and estimates the position of the ultrasonic probe 2 relative to that reference position.
FIG. 15 is a flowchart showing an information processing procedure according to the fourth embodiment. The control unit 11 of the information processing apparatus 1 displays the monitor screen (step S411), generates echo images (step S412), and displays the generated echo images on the display unit 15 (step S413) in the same manner as steps S111 to S113 of the first embodiment.
Next, the control unit 11 receives the acceleration signal output from the ultrasonic probe 2 and, based on the received acceleration information, estimates the scanning site, that is, the position of the ultrasonic probe 2 relative to the subject (step S414).
FIG. 16 is a schematic diagram showing the predetermined scanning sites. In the fourth embodiment, as shown in FIG. 16, it is assumed that four scanning sites obtained by dividing the right lung into upper, lower, left, and right regions (the sites indicated by the numbers "1", "2", "3", and "4" in FIG. 16) and four scanning sites obtained by dividing the left lung in the same way (the sites indicated by the numbers "5", "6", "7", and "8" in FIG. 16) are scanned in numerical order. The information processing apparatus 1 regards the position of the ultrasonic probe 2 at the time the start button 155 on the monitor screen is first operated and a real image first begins to be obtained as scanning site "1". Thereafter, based on the acceleration signal, the control unit 11 calculates the position of the ultrasonic probe 2 relative to scanning site "1" and thereby estimates the position of the ultrasonic probe 2 relative to the subject.
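For illustration, the probe position relative to the reference site could be estimated by integrating the acceleration signal twice and mapping the result to the nearest of the eight predefined sites, as in the rough sketch below. The embodiment does not specify the estimation algorithm; the dead-reckoning scheme, sampling step, and site-center coordinates are assumptions, and a practical implementation would also need drift correction.

```python
# Rough dead-reckoning sketch (assumed approach): double-integrate acceleration to get the
# probe displacement from the reference position (scanning site "1"), then map it to the
# nearest predefined site. Drift correction is omitted for brevity.
import numpy as np

def estimate_probe_displacement(accel: np.ndarray, dt: float) -> np.ndarray:
    """accel: (N, 3) acceleration samples; returns the (3,) displacement at the last sample."""
    velocity = np.cumsum(accel * dt, axis=0)          # integrate acceleration -> velocity
    displacement = np.cumsum(velocity * dt, axis=0)   # integrate velocity -> displacement
    return displacement[-1]

def nearest_scanning_site(displacement: np.ndarray, site_centers: dict) -> int:
    """site_centers: {site number 1..8: assumed (3,) coordinates relative to site "1"}."""
    return min(site_centers, key=lambda s: np.linalg.norm(displacement - site_centers[s]))
```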
Next, the control unit 11 calculates the amount of echo images generated at each of the predetermined plurality of scanning sites (step S415) and determines whether or not a predetermined amount or more of echo images has been generated at every site (step S416).
When it is determined that there is a scanning site for which the generated amount of echo images is less than the predetermined amount (step S416: YES), the control unit 11 displays, on the display unit 15, an instruction image (movement instruction information) indicating that the ultrasonic probe 2 should be moved to the scanning site whose generated amount is less than the predetermined amount (step S417). If the ultrasonic probe 2 is currently located at that scanning site, the instruction image is not displayed. When there are a plurality of sites for which the generated amount of echo images is less than the predetermined amount, the instruction may indicate that the probe should be moved to those sites in a predetermined order, for example in ascending order of the scanning site numbers "1" to "8".
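A minimal sketch of the per-site bookkeeping in steps S415 to S417, assuming a dictionary of per-site image counts and an arbitrary threshold for the predetermined amount:

```python
# Sketch of steps S415-S417: count images per scanning site and, when a site is short,
# point the operator to the lowest-numbered deficient site. The threshold is an assumption.
REQUIRED_PER_SITE = 10  # assumed "predetermined amount" of images per site

def next_site_to_scan(counts: dict, current_site: int):
    """counts: {site number 1..8: number of echo images generated there}."""
    deficient = sorted(site for site, n in counts.items() if n < REQUIRED_PER_SITE)
    if not deficient:
        return None              # every site has enough images (step S416: NO)
    if current_site in deficient:
        return None              # already at a deficient site: no instruction image shown
    return deficient[0]          # move instruction toward sites "1"..."8" in numerical order

# Example: next_site_to_scan({1: 12, 2: 4, 3: 0, 4: 11, 5: 11, 6: 10, 7: 10, 8: 10}, 1) -> 2
```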
When the processing of step S417 has been completed, or when it is determined in step S416 that there is no scanning site with less than the predetermined amount (step S416: NO), the control unit 11 inputs each of the generated echo images into the individual learning models 171 to determine whether or not the echo image is an image suitable for diagnosing the predetermined lung disease (step S114). The processing from step S114 onward is the same as in the first embodiment, so the details are omitted.
As described above, according to the ultrasonic diagnostic apparatus of the fourth embodiment, every part of the lung can be scanned without omission to generate and collect echo images, so a lung disease can be diagnosed with high accuracy.
(Embodiment 5)
The ultrasonic diagnostic apparatus according to the fifth embodiment differs from the first embodiment in that the operator can be instructed on the posture of the ultrasonic probe 2 so that echo images suitable for diagnosing a lung disease are collected efficiently. The other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, so the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
As in the fourth embodiment, the ultrasonic probe 2 according to the fifth embodiment includes an acceleration sensor 421 (posture sensor) and outputs an acceleration signal to the information processing apparatus 1. The information processing apparatus 1 receives the acceleration signal output from the ultrasonic probe 2 and estimates the posture of the ultrasonic probe 2 based on the received acceleration signal.
FIG. 17 is a flowchart showing an information processing procedure according to the fifth embodiment. The control unit 11 of the information processing apparatus 1 displays the monitor screen (step S511), generates echo images (step S512), and displays the generated echo images on the display unit 15 (step S513) in the same manner as steps S111 to S113 of the first embodiment.
Next, the control unit 11 receives the acceleration signal output from the ultrasonic probe 2 and estimates the posture of the ultrasonic probe 2 relative to the subject based on the received acceleration information (step S514).
Next, the control unit 11 inputs each of the generated echo images into the individual learning models 171 to calculate an image suitability indicating the degree to which the echo image is suitable for diagnosing the predetermined lung disease (step S515), and determines, based on the calculated image suitability, whether or not the echo image is an image suitable for diagnosing the predetermined lung disease (step S516).
When it is determined that the echo image is suitable for diagnosing the predetermined lung disease (step S516: YES), the control unit 11 stores the echo image determined to be suitable in the storage unit 13 (step S517) and stores posture information indicating the posture of the ultrasonic probe 2 (step S518).
Thereafter, as in steps S117 to S120 of the first embodiment, the control unit 11 calculates the collected amount of echo images (step S519), displays it on the collection degree gauge 152 (step S520), and displays a representative echo image suitable for lung diagnosis as a sample on the lung diagnosis image display unit 153 (step S521). The subsequent processing is the same as in the first embodiment, so the details are omitted.
When it is determined in step S516 that the echo image is not an image suitable for diagnosing the predetermined lung disease (step S516: NO), the control unit 11 determines whether or not the generation amount of echo images suitable for diagnosing the predetermined lung disease has decreased (step S523). When it is determined that the generation amount has decreased (step S523: YES), the control unit 11 reads the posture information of the ultrasonic probe 2 stored in the storage unit 13, displays the read posture information or an instruction image (posture change instruction information) instructing a posture change based on the posture information on the display unit 15 (step S524), and returns the processing to step S512. When it is determined that the generation amount of echo images suitable for diagnosing the predetermined lung disease has not decreased (step S523: NO), the control unit 11 returns the processing to step S512 as it is.
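The posture bookkeeping of steps S517, S518, S523, and S524 could be organized as in the following sketch, which stores the posture of the most recent suitable frame and suggests it again when the recent yield of suitable frames drops. The window length and threshold are assumptions introduced here.

```python
# Sketch of the posture bookkeeping: remember the probe posture for suitable frames and
# suggest it again when the recent yield of suitable frames falls. Window and ratio are assumptions.
from collections import deque

class PostureAdvisor:
    def __init__(self, window: int = 30, min_suitable_ratio: float = 0.2):
        self.recent = deque(maxlen=window)   # 1 = frame judged suitable, 0 = not suitable
        self.min_ratio = min_suitable_ratio
        self.stored_posture = None           # posture information (step S518)

    def update(self, suitable: bool, posture) -> None:
        self.recent.append(1 if suitable else 0)
        if suitable:
            self.stored_posture = posture    # keep the posture of the latest suitable frame

    def posture_instruction(self):
        # Steps S523-S524: if the yield of suitable frames has dropped, return the stored posture
        # so the caller can display it (or an instruction image derived from it).
        full = len(self.recent) == self.recent.maxlen
        if full and sum(self.recent) / len(self.recent) < self.min_ratio:
            return self.stored_posture
        return None
```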
As described above, according to the ultrasonic diagnostic apparatus of the fifth embodiment, the operator can be instructed to hold the ultrasonic probe 2 in a posture in which echo images suitable for diagnosing the predetermined lung disease are generated.
(Embodiment 6)
The ultrasonic diagnostic apparatus according to the sixth embodiment differs from the first embodiment in that echo images are collected in consideration of the respiratory cycle of the subject. The other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, so the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
FIG. 18 is a schematic diagram illustrating a configuration example of the ultrasonic diagnostic apparatus according to the sixth embodiment. The ultrasonic diagnostic apparatus according to the sixth embodiment includes a respiratory cycle sensor 603. The respiratory cycle sensor 603 is, for example, a PPG (photoplethysmography) pulse wave sensor, or a body motion sensor that detects body movement due to respiration by means of the acceleration sensor 421, a piezoelectric element, or the like. The respiratory cycle sensor 603 transmits a signal corresponding to the respiratory cycle of the subject to the information processing apparatus 1, and the information processing apparatus 1 receives the transmitted signal.
FIG. 19 is a flowchart showing an information processing procedure according to the sixth embodiment. The control unit 11 of the information processing apparatus 1 displays the monitor screen (step S611), generates echo images (step S612), and displays the generated echo images on the display unit 15 (step S613) in the same manner as steps S111 to S113 of the first embodiment.
Next, the control unit 11 receives the respiratory cycle signal transmitted from the respiratory cycle sensor 603 (step S614) and associates the information of the received respiratory cycle signal with the echo images generated at the same time (step S615).
Next, the control unit 11 selects a plurality of echo images generated at the same respiratory timing (step S616). For example, the control unit 11 selects a plurality of echo images associated with cycle signal information of the inspiratory timing. Alternatively, a plurality of echo images associated with respiratory cycle signal information of one and the same inspiration may be selected.
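A minimal sketch of steps S614 to S616, assuming the respiratory cycle signal has already been converted into a phase label per frame:

```python
# Sketch of steps S614-S616: associate each echo frame with the respiratory phase received
# at the same time and keep only frames from one breathing timing. Phase labels are assumptions.
def select_frames_by_phase(frames, phases, target_phase="inspiration"):
    """frames: list of echo images; phases: one phase label per frame, e.g. "inspiration"."""
    tagged = list(zip(frames, phases))                                    # step S615: attach the signal info
    return [frame for frame, phase in tagged if phase == target_phase]    # step S616
```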
The control unit 11 then calculates the image suitability by inputting the selected echo images of the same respiratory timing into the appropriateness learning model 17 (step S617). After finishing the processing of step S617, the control unit 11 executes the processing of steps S115 to S122 of the first embodiment.
As described above, according to the ultrasonic diagnostic apparatus of the sixth embodiment, echo images generated at the same respiratory cycle timing are input into the appropriateness learning model 17, so the suitability of the echo images can be determined with higher accuracy.
When the appropriateness learning model 17 has been machine-learned using only echo images of the expiratory timing, echo images of the expiratory timing may be selected and input into that appropriateness learning model 17. Similarly, when the appropriateness learning model 17 has been machine-learned using only echo images of the inspiratory timing, echo images of the inspiratory timing may be selected and input into that appropriateness learning model 17. Alternatively, a first appropriateness learning model machine-learned using echo images of the expiratory timing and a second appropriateness learning model machine-learned using echo images of the inspiratory timing may be provided, so that echo images generated at the expiratory timing are input into the first appropriateness learning model and echo images generated at the inspiratory timing are input into the second appropriateness learning model.
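The two-model variant described above could be routed as in the following sketch; the model objects and phase labels are placeholders, not names used in the embodiment.

```python
# Sketch of the two-model variant: frames from the expiratory timing go to the first model,
# frames from the inspiratory timing to the second. Model objects and labels are placeholders.
def image_suitability(frame, phase, model_first, model_second):
    model = model_first if phase == "expiration" else model_second
    return model(frame)   # image suitability computed by the phase-specific model
```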
(Embodiment 7)
The ultrasonic diagnostic apparatus according to the seventh embodiment differs from the first embodiment in that it collects echo images obtained using a plurality of ultrasonic frequencies. The other configurations of the information processing apparatus 1 are the same as those of the information processing apparatus 1 according to the first embodiment, so the same parts are denoted by the same reference numerals and detailed description thereof is omitted.
FIG. 20 is a block diagram showing a configuration example of the appropriateness learning model 17 according to the seventh embodiment. The information processing apparatus 1 periodically switches between a drive signal for transmitting ultrasonic waves of a first frequency and a drive signal for transmitting ultrasonic waves of a second frequency, outputs the drive signals to the ultrasonic probe 2, and receives the echo signals. The switching cycle is short enough that the same scanning site can be scanned at both the first frequency and the second frequency. By using ultrasonic waves of different frequencies, different echo images can be obtained for the same scanning site. In general, the higher the ultrasonic frequency, the higher the resolution but the lower the penetration; the lower the frequency, the lower the resolution and the higher the penetration.
The control unit 11 of the information processing apparatus 1 calculates the image suitability by inputting the plurality of echo images obtained with the ultrasonic waves of the first frequency and the plurality of echo images obtained with the ultrasonic waves of the second frequency into the appropriateness learning model 17.
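As an illustration of how the two sets of frames might be presented to the appropriateness learning model 17, the sketch below pairs the first-frequency and second-frequency frames acquired for the same scanning position and stacks each pair as a two-channel input. The pairing and stacking scheme is an assumption; the embodiment only states that both sets of echo images are input into the model.

```python
# Sketch of building a two-frequency input for the appropriateness learning model 17:
# frames acquired at the first and second frequency for the same position are stacked as
# two channels. The (N, 2, H, W) layout is an assumption for illustration.
import torch

def build_two_frequency_batch(frames_f1, frames_f2):
    """frames_f1 / frames_f2: equally long lists of (H, W) tensors from the same sites."""
    pairs = [torch.stack([f1, f2], dim=0) for f1, f2 in zip(frames_f1, frames_f2)]
    return torch.stack(pairs)   # batch of 2-channel images, one per scanned position
```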
By inputting echo images obtained with ultrasonic waves of different frequencies for the same scanning site into the appropriateness learning model 17, more detailed information on the echo images can be obtained, and an image suitability that indicates more accurately whether or not the images are suitable for diagnosing a lung disease can be calculated.
As described above, according to the ultrasonic diagnostic apparatus of the seventh embodiment, by inputting a plurality of echo images obtained using ultrasonic waves of different frequencies into the appropriateness learning model 17, it is possible to determine with higher accuracy whether or not an echo image is suitable for diagnosing the predetermined lung disease, and to collect such images.
1 Information processing device
2 Ultrasonic probe
10 Recording medium
11 Control unit
11a Probe control unit
11b Image generation unit
11c Lung diagnosis appropriateness determination unit
11d Lung diagnosis image storage unit
11e Lung diagnosis image collection degree display processing unit
11f Pulmonary congestion degree calculation unit
11g Pulmonary congestion degree display processing unit
12 Memory
13 Storage unit
14 Operation unit
15 Display unit
16 Communication unit
17 Appropriateness learning model
31 A line
32 Bat sign
33 B line
34 Ground glass-like shadow
131 Computer program
151 Echo image display unit
152 Collection degree gauge
152a Meter block
153 Lung diagnosis image display unit
154 Congestion degree display unit
155 Start button
156 Stop button
171 Individual learning model
172 Integrated learning model
218 Index learning model
603 Respiratory cycle sensor
421 Acceleration sensor

Claims (15)

1.  A computer program for causing a computer to execute a process of:
    generating a series of a plurality of images based on a signal obtained from a scanning probe that scans an organ of a subject;
    determining whether or not the generated plurality of images are images suitable for diagnosing a predetermined disease;
    storing the image suitable for diagnosing the predetermined disease; and
    outputting a collected amount of the image suitable for diagnosing the predetermined disease.
2.  The computer program according to claim 1, wherein
    the process of generating the images sequentially generates the images in real time based on the signal obtained from the scanning probe, and
    the process of outputting the collected amount includes a process of displaying the collected amount and a predetermined target collected amount in real time in parallel with the process of generating the images based on the signal obtained from the scanning probe.
3.  The computer program according to claim 2, causing the computer to further execute processes of:
    determining whether or not the images have been collected in an amount equal to or greater than the target collected amount;
    calculating an index for diagnosing the predetermined disease based on the plurality of images, equal to or greater in amount than the target collected amount, that are suitable for diagnosing the predetermined disease; and
    displaying the calculated index.
4.  The computer program according to claim 1 or 2, causing the computer to further execute processes of:
    calculating an index for diagnosing the predetermined disease based on the plurality of images suitable for diagnosing the predetermined disease; and
    outputting the calculated index.
5.  The computer program according to claim 3 or 4, wherein
    the scanning probe is an ultrasonic probe,
    the process of generating the images generates a series of a plurality of images based on a signal obtained from the scanning probe scanning a lung of the subject, and
    the process of calculating the index detects at least a B line in the images and calculates the index relating to pulmonary congestion based on the detected B line.
6.  The computer program according to claim 3 or 4, wherein
    the scanning probe is an ultrasonic probe,
    the process of generating the images generates a series of a plurality of images based on a signal obtained from the scanning probe scanning a lung of the subject, and
    the process of calculating the index detects, in the images, a B line, a ground glass-like shadow, and a bat sign based on reflection of ultrasonic waves at a rib, and calculates the index relating to pulmonary congestion based on the presence or absence or the number of the B lines, the presence or absence of the ground glass-like shadow, and the contrast of the bat sign.
7.  The computer program according to claim 3 or 4, wherein
    the scanning probe is an ultrasonic probe,
    the process of generating the images generates a series of a plurality of images based on a signal obtained from the scanning probe scanning a lung of the subject, and
    the process of calculating the index includes a process of calculating the index relating to pulmonary congestion by inputting the images into an index learning model that outputs the index relating to pulmonary congestion when the images are input.
8.  The computer program according to any one of claims 3 to 7, wherein
    the process of calculating the index calculates the index for diagnosing the predetermined disease based on the plurality of images generated at the same respiratory timing, the timing being identified based on a respiratory cycle signal output from a respiratory cycle sensor that detects a respiratory cycle of the subject.
9.  The computer program according to any one of claims 1 to 8, causing the computer to further execute a process of displaying the image suitable for diagnosing the predetermined disease.
10.  The computer program according to any one of claims 1 to 9, causing the computer to execute processes of:
    extracting a feature amount of each of the generated plurality of images by inputting each of the plurality of images into a plurality of individual learning models, each of which, when one image is input, extracts and outputs a feature amount of the input image; and
    determining whether or not the plurality of images are images suitable for diagnosing the predetermined disease by inputting the feature amounts output from the plurality of individual learning models into an integrated learning model that, when the feature amounts of the plurality of images are input, outputs an image suitability indicating a degree to which the plurality of images are suitable for diagnosing the predetermined disease.
11.  The computer program according to any one of claims 1 to 10, causing the computer to further execute processes of:
    estimating a scanning site of the subject based on a signal output from a positioning sensor provided on the scanning probe; and
    outputting movement instruction information instructing movement of the scanning probe so that a predetermined amount or more of the images is generated at each of a plurality of scanning sites of the subject.
12.  The computer program according to any one of claims 1 to 11, causing the computer to further execute processes of:
    estimating a posture of the scanning probe based on a signal output from a posture sensor provided on the scanning probe;
    storing posture information indicating the posture of the scanning probe at the time when an image suitable for diagnosing the predetermined disease was generated; and
    outputting, when a generation amount of the images suitable for diagnosing the predetermined disease decreases, posture change instruction information instructing a change in the posture of the scanning probe based on the stored posture information.
13.  The computer program according to any one of claims 1 to 12, wherein
    the scanning probe is an ultrasonic probe that scans the organ of the subject while periodically switching between ultrasonic waves of a first frequency and ultrasonic waves of a second frequency, and
    the process of generating the images generates a series of a plurality of images based on a signal obtained from the scanning probe scanning the organ of the subject while switching the frequency of the ultrasonic waves.
14.  An information processing method comprising:
    generating a series of a plurality of images based on a signal obtained from a scanning probe that scans an organ of a subject;
    determining whether or not the generated plurality of images are images suitable for diagnosing a predetermined disease;
    storing the image suitable for diagnosing the predetermined disease; and
    outputting a collected amount of the image suitable for diagnosing the predetermined disease.
15.  An information processing device comprising:
    a generation unit that generates a series of a plurality of images based on a signal obtained from a scanning probe that scans an organ of a subject;
    a determination unit that determines whether or not the plurality of images generated by the generation unit are images suitable for diagnosing a predetermined disease;
    a storage unit that stores the image determined by the determination unit to be suitable for diagnosing the predetermined disease; and
    an output unit that outputs a collected amount of the image suitable for diagnosing the predetermined disease.
PCT/JP2021/032621 2020-09-15 2021-09-06 Computer program, information processing method, and information processing device WO2022059539A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022550482A JPWO2022059539A1 (en) 2020-09-15 2021-09-06

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020154551 2020-09-15
JP2020-154551 2020-09-15

Publications (1)

Publication Number Publication Date
WO2022059539A1 (en) 2022-03-24

Family

ID=80776920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/032621 WO2022059539A1 (en) 2020-09-15 2021-09-06 Computer program, information processing method, and information processing device

Country Status (2)

Country Link
JP (1) JPWO2022059539A1 (en)
WO (1) WO2022059539A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4344652A1 (en) 2022-09-30 2024-04-03 FUJIFILM Corporation Ultrasonography apparatus, image processing apparatus, ultrasound image capturing method, and ultrasound image capturing program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004194729A (en) * 2002-12-16 2004-07-15 Hitachi Medical Corp Supporting device for image-based diagnosis
JP2019118694A (en) * 2018-01-10 2019-07-22 コニカミノルタ株式会社 Medical image generation apparatus
JP2020039645A (en) * 2018-09-11 2020-03-19 株式会社日立製作所 Ultrasonic diagnostic apparatus and display method



Also Published As

Publication number Publication date
JPWO2022059539A1 (en) 2022-03-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21869226

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022550482

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21869226

Country of ref document: EP

Kind code of ref document: A1