WO2022071158A1 - Diagnostic support device, operation method of the diagnostic support device, operation program of the diagnostic support device, dementia diagnosis support method, and learned dementia finding derivation model - Google Patents
- Publication number
- WO2022071158A1 (application PCT/JP2021/035194)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dementia
- finding
- derivation
- output
- areas
- Prior art date
Classifications
- G06T7/0012—Biomedical image inspection
- G06T7/11—Region-based segmentation
- A61B5/0042—Image acquisition adapted for a particular organ or body part, for the brain
- A61B5/055—Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- A61B5/4088—Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
- A61B5/7267—Classification of physiological signals or data involving training the classification device
- A61B5/7275—Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N20/20—Ensemble learning
- G06N3/0455—Auto-encoder networks; encoder-decoder networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/088—Non-supervised learning, e.g. competitive learning
- G06N3/09—Supervised learning
- G16H30/20—ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
- G16H30/40—ICT specially adapted for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10104—Positron emission tomography [PET]
- G06T2207/10108—Single photon emission computed tomography [SPECT]
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20128—Atlas-based segmentation
- G06T2207/30016—Brain
Definitions
- the technology of the present disclosure relates to a diagnostic support device, an operation method of the diagnostic support device, an operation program of the diagnostic support device, a dementia diagnosis support method, and a learned dementia finding derivation model.
- In diagnosing a disease, for example dementia typified by Alzheimer's disease, a doctor refers to a medical image such as a head MRI (Magnetic Resonance Imaging) image. For example, the doctor observes the degree of atrophy of the hippocampus, the parahippocampal gyrus, the amygdala, and the like, the degree of angiopathy in the white matter, and the presence or absence of a decrease in blood flow metabolism in the frontal lobe, the temporal lobe, and the occipital lobe, and thereby obtains findings of dementia.
- Japanese Patent No. 6438890 describes a diagnostic support device that derives findings of dementia from a head MRI image with a machine learning model and provides them to a doctor.
- The diagnostic support device described in Japanese Patent No. 6438890 extracts a plurality of anatomical areas according to the Brodmann brain map or the like from the head MRI image, and calculates a Z value indicating the degree of atrophy of each anatomical area. The calculated Z value of each anatomical area is then input to a machine learning model, and findings of dementia are output from the machine learning model.
- One embodiment according to the technique of the present disclosure provides a diagnostic support device capable of obtaining more accurate findings of a disease, an operation method of the diagnostic support device, an operation program of the diagnostic support device, a dementia diagnosis support method, and a learned dementia finding derivation model.
- The diagnostic support device of the present disclosure comprises a processor and a memory connected to or built into the processor. The processor acquires a medical image, extracts a plurality of anatomical areas of an organ from the medical image, inputs the images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each anatomical area, outputs a plurality of feature quantities for each anatomical area from the feature quantity derivation models, inputs the plurality of feature quantities output for each anatomical area into a disease finding derivation model, outputs a finding of the disease from the disease finding derivation model, and presents the finding.
- the feature quantity derivation model preferably includes at least one of an autoencoder, a single-tasking convolutional neural network for class discrimination, and a multitasking convolutional neural network for class discrimination.
- the processor inputs an image of one anatomical area into a plurality of different feature quantity derivation models and outputs a feature quantity from each of the plurality of feature quantity derivation models.
- the processor inputs disease-related information related to the disease into the disease finding derivation model in addition to a plurality of features.
- the disease finding derivation model is preferably constructed by any of the following methods: neural network, support vector machine, and boosting.
- It is preferable that the processor performs normalization processing for matching the acquired medical image with a standard medical image prior to the extraction of the anatomical areas.
- the organ is the brain and the disease is dementia.
- The plurality of anatomical areas preferably include at least one of the hippocampus and the anterior temporal lobe.
- The disease-related information preferably includes at least one of the volume of the anatomical area, the score of a dementia test, the test result of a genetic test, the test result of a cerebrospinal fluid test, and the test result of a blood test.
- The operation method of the diagnostic support device of the present disclosure comprises acquiring a medical image, extracting a plurality of anatomical areas of an organ from the medical image, inputting the images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each anatomical area, outputting a plurality of feature quantities for each anatomical area from the feature quantity derivation models, inputting the feature quantities output for each anatomical area into a disease finding derivation model, outputting a finding of the disease from the disease finding derivation model, and presenting the finding.
- The operation program of the diagnostic support device of the present disclosure causes a computer to execute processing of acquiring a medical image, extracting a plurality of anatomical areas of an organ from the medical image, inputting the images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each anatomical area, outputting a plurality of feature quantities for each anatomical area from the feature quantity derivation models, inputting the feature quantities output for each anatomical area into a disease finding derivation model, outputting a finding of the disease from the disease finding derivation model, and presenting the finding.
- The dementia diagnosis support method of the present disclosure is a method in which a computer comprising a processor and a memory connected to or built into the processor acquires a medical image showing the brain, extracts a plurality of anatomical areas of the brain from the medical image, inputs the images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each anatomical area, outputs a plurality of feature quantities for each anatomical area from the feature quantity derivation models, inputs the plurality of feature quantities output for each of the plurality of anatomical areas into a dementia finding derivation model, outputs a finding of dementia from the dementia finding derivation model, and presents the finding.
- The learned dementia finding derivation model of the present disclosure is a model for causing a computer to execute a function of outputting a finding of dementia upon input of a plurality of feature quantities output for each of a plurality of anatomical areas of the brain.
- According to the technique of the present disclosure, a diagnostic support device capable of obtaining more accurate findings of a disease, an operation method of the diagnostic support device, an operation program of the diagnostic support device, a dementia diagnosis support method, and a learned dementia finding derivation model can be provided.
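The overall flow summarized above (acquire a medical image, normalize it, extract anatomical areas, derive feature quantities per area, and derive and present a finding, optionally together with disease-related information) can be illustrated by the following minimal Python sketch. All function names, model interfaces, and data shapes here are hypothetical placeholders added for illustration, not part of the disclosure.

```python
import numpy as np

def derive_dementia_finding(head_mri, standard_mri, normalizer, segmenter,
                            encoders, finding_model, disease_related_info=None):
    """Hypothetical end-to-end sketch of the disclosed pipeline."""
    # Normalization: match the shape / density of the input image to the standard image.
    normalized = normalizer(head_mri, standard_mri)
    # Extraction: one image per anatomical area (hippocampus, frontal lobe, ...).
    area_images = segmenter(normalized)                  # dict: {area_name: image array}
    # Feature derivation: each area has its own feature quantity derivation model.
    feature_sets = [np.ravel(encoders[name](img)) for name, img in sorted(area_images.items())]
    features = np.concatenate(feature_sets)              # feature quantity set group
    # Optionally append disease-related information (area volume, test scores, ...).
    if disease_related_info is not None:
        features = np.concatenate([features, np.asarray(disease_related_info, dtype=float)])
    # Finding derivation: e.g. NC / MCI / AD, returned for presentation to the doctor.
    return finding_model.predict(features.reshape(1, -1))[0]
```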
- the medical system 2 includes an MRI apparatus 10, a PACS (Picture Archiving and Communication System) server 11, and a diagnostic support apparatus 12.
- The MRI apparatus 10, the PACS server 11, and the diagnostic support apparatus 12 are connected to a LAN (Local Area Network) 13 installed in a medical facility, and can communicate with each other via the LAN 13.
- the MRI apparatus 10 photographs the head of patient P and outputs a head MRI image 15.
- the head MRI image 15 is voxel data representing the three-dimensional shape of the head of the patient P.
- FIG. 1 shows a head MRI image 15S of a sagittal cross section.
- the MRI apparatus 10 transmits the head MRI image 15 to the PACS server 11.
- the PACS server 11 stores and manages the head MRI image 15 from the MRI apparatus 10.
- the head MRI image 15 is an example of a "medical image" according to the technique of the present disclosure.
- the diagnostic support device 12 is, for example, a desktop personal computer, and includes a display 17 and an input device 18.
- the input device 18 is a keyboard, a mouse, a touch panel, a microphone, or the like.
- the doctor operates the input device 18 to send a delivery request for the head MRI image 15 of the patient P to the PACS server 11.
- the PACS server 11 searches for the head MRI image 15 of the patient P requested to be delivered and delivers it to the diagnosis support device 12.
- the diagnosis support device 12 displays the head MRI image 15 distributed from the PACS server 11 on the display 17. The doctor observes the brain of the patient P shown in the head MRI image 15 to make a diagnosis of dementia for the patient P.
- the brain is an example of an "organ” according to the technique of the present disclosure
- dementia is an example of a "disease” according to the technique of the present disclosure.
- Although only one MRI apparatus 10 and one diagnostic support device 12 are drawn in FIG. 1, a plurality of MRI apparatuses 10 and a plurality of diagnostic support devices 12 may be provided.
- The computer constituting the diagnostic support device 12 comprises a storage 20, a memory 21, a CPU (Central Processing Unit) 22, and a communication unit 23 in addition to the display 17 and the input device 18 described above. These are interconnected via a bus line 24.
- the CPU 22 is an example of a "processor" according to the technique of the present disclosure.
- the storage 20 is a hard disk drive built in the computer constituting the diagnostic support device 12 or connected via a cable or a network.
- Alternatively, the storage 20 may be a disk array in which a plurality of hard disk drives are connected.
- the storage 20 stores control programs such as an operating system, various application programs, and various data associated with these programs.
- a solid state drive may be used instead of the hard disk drive.
- the memory 21 is a work memory for the CPU 22 to execute a process.
- the CPU 22 loads the program stored in the storage 20 into the memory 21 and executes the process according to the program. As a result, the CPU 22 controls each part of the computer in an integrated manner.
- the communication unit 23 controls the transmission of various information with an external device such as the PACS server 11.
- the memory 21 may be built in the CPU 22.
- the operation program 30 is stored in the storage 20 of the diagnostic support device 12.
- the operation program 30 is an application program for making the computer function as the diagnostic support device 12. That is, the operation program 30 is an example of the "operation program of the diagnostic support device" according to the technique of the present disclosure.
- the storage 20 also stores a head MRI image 15, a standard head MRI image 35, a segmentation model 36, a feature quantity derivation model group 38 composed of a plurality of feature quantity derivation models 37, and a dementia finding derivation model 39.
- The CPU 22 of the computer constituting the diagnostic support device 12 cooperates with the memory 21 and the like to function as a read/write (hereinafter abbreviated as RW) control unit 45, a normalization unit 46, an extraction unit 47, a feature amount derivation unit 48, a dementia finding derivation unit 49, and a display control unit 50.
- the RW control unit 45 controls the storage of various data in the storage 20 and the reading of various data in the storage 20.
- the RW control unit 45 receives the head MRI image 15 from the PACS server 11 and stores the received head MRI image 15 in the storage 20.
- FIG. 3 only one head MRI image 15 is stored in the storage 20, but a plurality of head MRI images 15 may be stored in the storage 20.
- The RW control unit 45 reads out from the storage 20 the head MRI image 15 of the patient P designated by the doctor for dementia diagnosis, and outputs the read head MRI image 15 to the normalization unit 46 and the display control unit 50.
- In this way, the RW control unit 45 acquires the head MRI image 15 by reading it from the storage 20.
- the RW control unit 45 reads the standard head MRI image 35 from the storage 20, and outputs the read standard head MRI image 35 to the normalization unit 46.
- the RW control unit 45 reads the segmentation model 36 from the storage 20, and outputs the read segmentation model 36 to the extraction unit 47.
- the RW control unit 45 reads out the feature amount derivation model group 38 from the storage 20, and outputs the read feature amount derivation model group 38 to the feature amount derivation unit 48.
- the RW control unit 45 reads the dementia finding derivation model 39 from the storage 20, and outputs the read dementia finding derivation model 39 to the dementia finding derivation unit 49.
- the normalization unit 46 performs a normalization process to match the head MRI image 15 with the standard head MRI image 35, and sets the head MRI image 15 as the normalized head MRI image 55.
- the normalization unit 46 outputs the normalized head MRI image 55 to the extraction unit 47.
- the standard head MRI image 35 is a head MRI image showing a brain having a standard shape, size, and density (pixel value).
- the standard head MRI image 35 is, for example, an image generated by averaging the head MRI images 15 of a plurality of healthy subjects, or an image generated by computer graphics.
- the standard head MRI image 35 is an example of a "standard medical image" according to the technique of the present disclosure.
- the extraction unit 47 inputs the normalized head MRI image 55 into the segmentation model 36.
- the segmentation model 36 is a machine learning model that performs so-called semantic segmentation, in which a label representing each anatomical area of the brain such as the hippocampus, amygdala, and frontal lobe is given to each pixel of the brain reflected in the normalized head MRI image 55.
- the extraction unit 47 extracts an image (hereinafter referred to as an anatomical area image) 56 of a plurality of anatomical areas of the brain from the normalized head MRI image 55 based on the label given by the segmentation model 36.
- the extraction unit 47 outputs the dissection area image group 57 composed of the plurality of dissection area images 56 for each dissection area to the feature amount derivation unit 48.
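As a concrete illustration of this label-based extraction, the numpy sketch below crops one anatomical area out of a labeled volume. The function name, the label convention, and the zero fill for non-area voxels are assumptions made for illustration, not taken from the disclosure.

```python
import numpy as np

def extract_area_image(normalized_volume, label_volume, area_label):
    """Crop one anatomical area out of a labeled head MRI volume.

    normalized_volume : 3-D array of voxel values (normalized head MRI image 55).
    label_volume      : 3-D integer array of the same shape, one label per voxel
                        (output of a semantic segmentation model such as 36).
    area_label        : integer label of the desired area (e.g. the hippocampus).
    """
    mask = label_volume == area_label
    if not mask.any():
        raise ValueError(f"label {area_label} not present in the segmentation")
    coords = np.argwhere(mask)                      # tight bounding box around the labeled voxels
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    crop = np.where(mask, normalized_volume, 0.0)   # keep only voxels belonging to the area
    return crop[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```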
- One feature amount derivation model 37 is prepared for each of a plurality of anatomical areas of the brain (see FIG. 6).
- the feature amount derivation unit 48 inputs the dissected area image 56 into the corresponding feature amount derivation model 37. Then, the feature amount set 58 composed of a plurality of types of feature amounts Z (see FIG. 6) is output from the feature amount derivation model 37.
- the feature amount derivation unit 48 outputs a feature amount set group 59 composed of a plurality of feature amount sets 58 corresponding to a plurality of anatomical areas to the dementia finding derivation unit 49.
- the dementia finding derivation unit 49 inputs the feature amount set group 59 into the dementia finding derivation model 39. Then, the dementia finding information 60 representing the dementia finding is output from the dementia finding derivation model 39. The dementia finding derivation unit 49 outputs the dementia finding information 60 to the display control unit 50.
- the dementia finding derivation model 39 is an example of the "disease finding derivation model" according to the technique of the present disclosure.
- The display control unit 50 controls the display of various screens on the display 17. The various screens include a first display screen 70 (see FIG. 8) for instructing analysis by the segmentation model 36, the feature amount derivation models 37, and the dementia finding derivation model 39, a second display screen 75 (see FIG. 9) for displaying the dementia finding information 60, and the like.
- the normalization unit 46 performs shape normalization processing 65 and density normalization processing 66 as normalization processing on the head MRI image 15.
- The shape normalization process 65 is, for example, a process of extracting landmarks serving as references for alignment from the head MRI image 15 and the standard head MRI image 35, and translating, rotating, and/or scaling the head MRI image 15 with respect to the standard head MRI image 35 so as to maximize the correlation between the landmarks of the head MRI image 15 and the landmarks of the standard head MRI image 35.
- the density normalization process 66 is, for example, a process of correcting the density histogram of the head MRI image 15 according to the density histogram of the standard head MRI image 35.
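For example, the density normalization process 66 can be realized as histogram matching of the head MRI image to the standard head MRI image. The following numpy sketch shows one such realization; it is only an assumption of how the histogram correction could be implemented, not the disclosed implementation.

```python
import numpy as np

def match_density_histogram(image, standard_image):
    """Map the intensity distribution of `image` onto that of `standard_image`.

    Both arguments are numpy arrays (any shape); the result has the shape of `image`.
    """
    src = image.ravel()
    ref = standard_image.ravel()
    # Sort source values and compute their empirical quantiles.
    src_values, src_inverse, src_counts = np.unique(src, return_inverse=True, return_counts=True)
    src_quantiles = np.cumsum(src_counts).astype(np.float64) / src.size
    # Quantiles of the reference (standard) image.
    ref_values = np.sort(ref)
    ref_quantiles = np.linspace(0.0, 1.0, ref_values.size)
    # For each source quantile, look up the matching reference intensity.
    matched = np.interp(src_quantiles, ref_quantiles, ref_values)
    return matched[src_inverse].reshape(image.shape)
```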
- The extraction unit 47 extracts, as the anatomical area images 56, an anatomical area image 56_1 of the hippocampus, an anatomical area image 56_2 of the parahippocampal gyrus, an anatomical area image 56_3 of the frontal lobe, an anatomical area image 56_4 of the anterior temporal lobe, an anatomical area image 56_5 of the occipital lobe, an anatomical area image 56_6 of the thalamus, an anatomical area image 56_7 of the hypothalamus, an anatomical area image 56_8 of the amygdala, an anatomical area image 56_9 of the pituitary gland, and the like.
- In addition, the extraction unit 47 extracts an anatomical area image 56 of each anatomical area such as the mammillary body, the corpus callosum, the fornix, and the lateral ventricle.
- Anatomical areas such as the hippocampus, the frontal lobe, the anterior temporal lobe, and the amygdala are paired left and right, and an anatomical area image 56 is extracted for each of the left and right areas of such a pair. For example, for the hippocampus, an anatomical area image 56_1 of the left hippocampus and an anatomical area image 56_1 of the right hippocampus are extracted.
- It is preferable that the plurality of anatomical areas include at least one of the hippocampus and the anterior temporal lobe, and it is more preferable that they include both the hippocampus and the anterior temporal lobe.
- Here, the anterior temporal lobe means the anterior part of the temporal lobe.
- The feature amount derivation unit 48 inputs the anatomical area image 56_1 of the hippocampus into the feature amount derivation model 37_1 for the hippocampus, and the feature amount set 58_1 of the hippocampus is output from the feature amount derivation model 37_1.
- the hippocampal feature amount set 58_1 is composed of a plurality of feature amounts Z1_1, Z2_1, ..., ZN_1. Note that N is the number of feature quantities, for example, tens to hundreds of thousands.
- Similarly, the feature amount derivation unit 48 inputs the anatomical area image 56_2 of the parahippocampal gyrus into the feature amount derivation model 37_2 for the parahippocampal gyrus, inputs the anatomical area image 56_3 of the frontal lobe into the feature amount derivation model 37_3 for the frontal lobe, and inputs the anatomical area image 56_4 of the anterior temporal lobe into the feature amount derivation model 37_4 for the anterior temporal lobe. Then, the feature amount set 58_2 of the parahippocampal gyrus is output from the feature amount derivation model 37_2, the feature amount set 58_3 of the frontal lobe is output from the feature amount derivation model 37_3, and the feature amount set 58_4 of the anterior temporal lobe is output from the feature amount derivation model 37_4.
- The feature amount set 58_2 of the parahippocampal gyrus is composed of a plurality of feature amounts Z1_2, Z2_2, ..., ZN_2, the feature amount set 58_3 of the frontal lobe is composed of a plurality of feature amounts Z1_3, Z2_3, ..., ZN_3, and the feature amount set 58_4 of the anterior temporal lobe is composed of a plurality of feature amounts Z1_4, Z2_4, ..., ZN_4.
- the feature amount derivation unit 48 inputs the occipital lobe dissection area image 56_5 into the occipital lobe feature amount derivation model 37_5, and inputs the thalamus dissection area image 56_6 into the thalamus feature amount derivation model 37_6. Then, the occipital lobe feature amount set 58_5 is output from the occipital lobe feature amount derivation model 37_5, and the thalamus feature amount set 58_6 is output from the thalamus feature amount derivation model 37_6.
- the occipital lobe feature amount set 58_5 is composed of a plurality of feature amounts Z1_5, Z2_5, ..., ZN_5, and the thalamic feature amount set 58_6 is composed of a plurality of feature amounts Z1_6, Z2_6, ..., ZN_6.
- In this way, each of the plurality of anatomical area images 56 is input to the corresponding feature amount derivation model 37, whereby a feature amount set 58 for each anatomical area image 56 is output from each feature amount derivation model 37.
- The number N of feature amounts Z may be the same for each anatomical area, as in this example, or may differ from one anatomical area to another.
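A hedged sketch of how the per-area feature amount sets could be assembled into the feature amount set group 59; the dictionary keys and the encoder callables are illustrative assumptions only.

```python
import numpy as np

def derive_feature_set_group(area_images, feature_models):
    """Run each anatomical area image through its own feature amount derivation model.

    area_images    : dict mapping area name -> image array (56_1, 56_2, ...).
    feature_models : dict mapping area name -> callable returning a feature amount set.
    Returns the feature amount set group as one concatenated vector plus the per-area sets.
    """
    feature_sets = {}
    for name, image in area_images.items():
        feature_sets[name] = np.asarray(feature_models[name](image)).ravel()        # set 58_x
    group = np.concatenate([feature_sets[name] for name in sorted(feature_sets)])   # group 59
    return group, feature_sets
```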
- The dementia finding derivation unit 49 inputs the feature amount set group 59 into the dementia finding derivation model 39. Then, any one of normal control (NC; Normal Control), mild cognitive impairment (MCI; Mild Cognitive Impairment), and Alzheimer's disease (AD; Alzheimer's Disease) is output from the dementia finding derivation model 39 as the dementia finding information 60.
- FIG. 8 shows an example of the first display screen 70 for instructing the analysis by the segmentation model 36, the feature amount derivation model 37, and the dementia finding derivation model 39.
- On the first display screen 70, the head MRI image 15 of the patient P to be diagnosed for dementia is displayed.
- The displayed head MRI images 15 are a head MRI image 15S of a sagittal cross section, a head MRI image 15A of an axial cross section, and a head MRI image 15C of a coronal cross section.
- a button group 71 for switching the display is provided at the lower part of each of the head MRI images 15S, 15A, and 15C.
- the analysis button 72 is provided on the first display screen 70.
- the doctor selects the analysis button 72 when he / she wants to perform analysis by the segmentation model 36, the feature amount derivation model 37, and the dementia finding derivation model 39.
- the CPU 22 accepts the instruction for analysis by the segmentation model 36, the feature amount derivation model 37, and the dementia finding derivation model 39.
- FIG. 9 shows an example of a second display screen 75 displaying dementia finding information 60 obtained as a result of analysis by the segmentation model 36, the feature amount derivation model 37, and the dementia finding derivation model 39.
- a message 76 corresponding to the dementia finding information 60 is displayed.
- FIG. 9 shows an example in which the dementia finding information 60 is mild cognitive impairment (MCI) and "suspicion of mild cognitive impairment" is displayed as message 76.
- the display control unit 50 turns off the display of the message 76 and returns the second display screen 75 to the first display screen 70.
- The compression unit 81 of an autoencoder (hereinafter abbreviated as AE (Auto Encoder)) 80 is used as the feature amount derivation model 37.
- the AE80 has a compression unit 81 and a restoration unit 82.
- An anatomical area image 56 is input to the compression unit 81.
- the compression unit 81 converts the dissected area image 56 into a feature set 58.
- the compression unit 81 passes the feature amount set 58 to the restoration unit 82.
- the restoration unit 82 generates the restoration image 83 of the dissected area image 56 from the feature amount set 58.
- the compression unit 81 converts the dissected area image 56 into the feature amount set 58 by performing a convolution operation as shown in FIG. 11 as an example.
- the compression unit 81 has a convolution layer 200 represented by "conv (abbreviation of convolution)".
- the convolution layer 200 applies, for example, a 3 ⁇ 3 filter 203 to the target data 202 having a plurality of elements 201 arranged in two dimensions. Then, the element value e of one of the elements 201 of interest and the element values a, b, c, d, f, g, h, and i of eight elements 201S adjacent to the element of interest 201I are convolved.
- the convolution layer 200 sequentially performs a convolution operation on each element 201 of the target data 202 while shifting the element of interest 201I by one element, and outputs the element value of the element 204 of the operation data 205.
- the operation data 205 having a plurality of elements 204 arranged in two dimensions can be obtained as in the target data 202.
- the target data 202 first input to the convolution layer 200 is the dissection area image 56, and then the reduction calculation data 205S (see FIG. 13) described later is input to the convolution layer 200 as the target data 202.
- The result of the convolution operation on the element of interest 201I is output as the element value of the element 204I of the operation data 205 corresponding to the element of interest 201I.
- One operation data 205 is output for one filter 203.
- The operation data 205 is output for each filter 203. That is, as shown in FIG. 12 as an example, as many pieces of operation data 205 as the number of filters 203 applied to the target data 202 are generated. Further, since the operation data 205 has a plurality of elements 204 arranged in two dimensions, it has a width and a height. The number of pieces of operation data 205 is called the number of channels.
- FIG. 12 illustrates four-channel operation data 205 output by applying four filters 203 to the target data 202.
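The convolution operation described here (a 3 × 3 filter slid over the target data, one output channel per filter) can be written in plain numpy as in the following sketch. Zero padding is assumed so that the operation data keeps the width and height of the target data; the filter values and data are dummies.

```python
import numpy as np

def convolve2d_same(target, filters):
    """Apply a bank of k x k filters to 2-D target data with zero padding.

    target  : (H, W) array of element values.
    filters : (C, k, k) array; one output channel of operation data per filter.
    Returns operation data of shape (C, H, W).
    """
    c, k, _ = filters.shape
    pad = k // 2
    padded = np.pad(target, pad, mode="constant")
    h, w = target.shape
    out = np.zeros((c, h, w))
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                window = padded[i:i + k, j:j + k]       # element of interest and its neighbors
                out[ch, i, j] = np.sum(window * filters[ch])
    return out

# Four 3 x 3 filters applied to 6 x 6 target data -> four-channel operation data, as in FIG. 12.
target = np.arange(36, dtype=float).reshape(6, 6)
print(convolve2d_same(target, np.random.rand(4, 3, 3)).shape)   # (4, 6, 6)
```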
- the compression unit 81 has a pooling layer 210 represented by “pool (abbreviation of pooling)” in addition to the convolution layer 200.
- the pooling layer 210 obtains a local statistic of the element value of the element 204 of the operation data 205, and generates the reduced operation data 205S having the obtained statistic as the element value.
- the pooling layer 210 performs a maximum value pooling process for obtaining the maximum value of the element value in the block 211 of the 2 ⁇ 2 element as a local statistic. If the block 211 is processed while being shifted by one element in the width direction and the height direction, the reduction calculation data 205S is reduced to half the size of the original calculation data 205.
- In the example of FIG. 13, the maximum value is obtained from the element values a, b, e, and f in the block 211A, from the element values b, c, f, and g in the block 211B, and from the element values c, d, g, and h in the block 211C, h being the maximum value in the block 211C. It should be noted that mean value pooling processing, in which the mean value is obtained as the local statistic instead of the maximum value, may be performed.
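A minimal numpy sketch of the 2 × 2 maximum value pooling; non-overlapping blocks (a shift of two elements) are assumed here so that the output is exactly half the width and height of the input, and the mean value pooling variant is noted in a comment.

```python
import numpy as np

def max_pool_2x2(operation_data):
    """2 x 2 maximum value pooling over non-overlapping blocks.

    operation_data : array of shape (C, H, W) with even H and W.
    Returns reduced operation data 205S of shape (C, H // 2, W // 2).
    """
    c, h, w = operation_data.shape
    blocks = operation_data.reshape(c, h // 2, 2, w // 2, 2)
    pooled = blocks.max(axis=(2, 4))        # use blocks.mean(axis=(2, 4)) for mean value pooling
    return pooled

data = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
print(max_pool_2x2(data).shape)             # (2, 2, 2)
```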
- The compression unit 81 outputs final operation data 205 by repeating the convolution processing by the convolution layer 200 and the pooling processing by the pooling layer 210 a plurality of times. This final operation data 205 is the feature amount set 58, and the element value of each element 204 of the final operation data 205 is a feature amount Z.
- The feature amounts Z obtained in this way represent the shape and texture characteristics of the anatomical area, such as the degree of atrophy of the hippocampus, the degree of angiopathy in the white matter, and the presence or absence of a decrease in blood flow metabolism in the frontal lobe, the anterior temporal lobe, and the occipital lobe.
- Note that, although the description here is two-dimensional for simplicity, each process is actually performed in three dimensions.
- the AE80 is learned by inputting the learning anatomical area image 56L in the learning phase before diverting the compression unit 81 to the feature amount derivation model 37.
- the AE80 outputs a learning restoration image 83L with respect to the learning anatomical area image 56L.
- Based on the learning anatomical area image 56L and the learning restoration image 83L, a loss calculation of the AE 80 using a loss function is performed. Then, various coefficients of the AE 80 (such as the coefficients of the filters 203) are updated according to the result of the loss calculation, and the AE 80 is updated according to the update settings.
- The series of processes of inputting the learning anatomical area image 56L to the AE 80, outputting the learning restoration image 83L from the AE 80, the loss calculation, the update setting, and the updating of the AE 80 is repeated while the learning anatomical area image 56L is exchanged. The repetition of the series of processes ends when the restoration accuracy from the learning anatomical area image 56L to the learning restoration image 83L reaches a predetermined set level.
- the compression unit 81 of the AE80 whose restoration accuracy has reached the set level in this way is stored in the storage 20 and used as the learned feature amount derivation model 37.
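A compact PyTorch sketch of such a training scheme: a toy convolutional autoencoder is trained with a reconstruction (restoration) loss, and its compression unit is then kept as a feature amount derivation model. The layer sizes, the 2-D input shape, and the fixed number of steps are arbitrary assumptions (the embodiment works on 3-D volumes and stops when a set restoration accuracy is reached).

```python
import torch
from torch import nn

class ConvAutoEncoder(nn.Module):
    """Toy 2-D convolutional autoencoder: compression unit 81 plus restoration unit 82."""
    def __init__(self):
        super().__init__()
        self.compress = nn.Sequential(                    # convolution + pooling, repeated
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.restore = nn.Sequential(                     # mirror: upsample back to the input size
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),
        )

    def forward(self, x):
        features = self.compress(x)                       # feature amount set 58
        return self.restore(features), features

model = ConvAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                                    # restoration accuracy drives the loss

for step in range(100):                                   # stand-in for "until a set level is reached"
    area_images = torch.rand(8, 1, 64, 64)                # dummy learning anatomical area images 56L
    restored, _ = model(area_images)
    loss = loss_fn(restored, area_images)                 # loss calculation against the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

feature_derivation_model = model.compress                 # divert the compression unit 81
```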
- the dementia finding derivation model 39 is constructed by any of a neural network, a support vector machine, and boosting.
- the dementia finding derivation model 39 is trained given the learning data 90.
- the learning data 90 is a set of a learning feature amount set group 59L and a correct answer dementia finding information 60CA corresponding to the learning feature amount set group 59L.
- The learning feature amount set group 59L is obtained by inputting the anatomical area images 56 of a certain head MRI image 15 into the feature amount derivation models 37. The correct answer dementia finding information 60CA is the result of the doctor actually diagnosing the dementia findings on the head MRI image 15 from which the learning feature amount set group 59L was obtained.
- the learning feature amount set group 59L is input to the dementia finding derivation model 39.
- the dementia finding derivation model 39 outputs learning dementia finding information 60L for the learning feature amount set group 59L.
- Based on the learning dementia finding information 60L and the correct answer dementia finding information 60CA, a loss calculation of the dementia finding derivation model 39 using a loss function is performed. Then, various coefficients of the dementia finding derivation model 39 are updated according to the result of the loss calculation, and the dementia finding derivation model 39 is updated according to the update settings.
- The series of processes of inputting the learning feature amount set group 59L to the dementia finding derivation model 39, outputting the learning dementia finding information 60L, the loss calculation, the update setting, and the updating of the dementia finding derivation model 39 is repeated while the learning data 90 is exchanged.
- the repetition of the above series of processes ends when the prediction accuracy of the learning dementia finding information 60L with respect to the correct dementia finding information 60CA reaches a predetermined set level.
- the dementia finding derivation model 39 whose prediction accuracy has reached a set level is stored in the storage 20 and used by the dementia finding derivation unit 49 as a learned dementia finding derivation model.
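One possible realization of the dementia finding derivation model 39, sketched here with a scikit-learn support vector machine (boosting or a neural network would fit the same interface); the feature dimensions and labels are synthetic stand-ins for the learning data 90, not real data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the learning data 90: each row plays the role of a learning
# feature amount set group 59L and each label the correct answer finding 60CA.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))            # 300 cases x 512 concatenated feature amounts
y = rng.integers(0, 3, size=300)           # 0 = NC, 1 = MCI, 2 = AD

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

finding_model = SVC(kernel="rbf")          # support vector machine; boosting or a neural
finding_model.fit(X_train, y_train)        # network could be swapped in with the same interface
print("validation accuracy:", finding_model.score(X_val, y_val))

# Inference for one new feature amount set group 59:
new_group = rng.normal(size=(1, 512))
print("finding:", ["NC", "MCI", "AD"][int(finding_model.predict(new_group)[0])])
```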
- The CPU 22 of the diagnostic support device 12 functions as the RW control unit 45, the normalization unit 46, the extraction unit 47, the feature amount derivation unit 48, the dementia finding derivation unit 49, and the display control unit 50.
- The RW control unit 45 reads out the corresponding head MRI image 15 and the standard head MRI image 35 from the storage 20 (step ST100).
- the head MRI image 15 and the standard head MRI image 35 are output from the RW control unit 45 to the normalization unit 46.
- The normalization unit 46 performs normalization processing for matching the head MRI image 15 with the standard head MRI image 35 (step ST110). As a result, the head MRI image 15 becomes the normalized head MRI image 55. The normalized head MRI image 55 is output from the normalization unit 46 to the extraction unit 47.
- a plurality of anatomical area images 56 are extracted from the normalized head MRI image 55 using the segmentation model 36 (step ST120).
- the dissection area image group 57 composed of the plurality of dissection area images 56 is output from the extraction unit 47 to the feature amount derivation unit 48.
- the dissection area image 56 is input to the corresponding feature amount derivation model 37.
- the feature amount set 58 is output from the feature amount derivation model 37 (step ST130).
- the feature amount set group 59 composed of the plurality of feature amount sets 58 is output from the feature amount derivation unit 48 to the dementia finding derivation unit 49.
- the feature amount set group 59 is input to the dementia finding derivation model 39.
- the dementia finding information 60 is output from the dementia finding derivation model 39 (step ST140).
- the dementia finding information 60 is output from the dementia finding derivation unit 49 to the display control unit 50.
- the second display screen 75 shown in FIG. 9 is displayed on the display 17 (step ST150).
- the doctor confirms the dementia finding information 60 through the message 76 on the second display screen 75.
- the CPU 22 of the diagnostic support device 12 includes an RW control unit 45, an extraction unit 47, a feature amount derivation unit 48, a dementia finding derivation unit 49, and a display control unit 50.
- The RW control unit 45 acquires the head MRI image 15 by reading, from the storage 20, the head MRI image 15 of the patient P to be diagnosed for dementia.
- the extraction unit 47 extracts an anatomical area image 56 of a plurality of anatomical areas of the brain from the normalized head MRI image 55.
- The feature amount derivation unit 48 inputs the plurality of anatomical area images 56 into the plurality of feature amount derivation models 37 prepared for each anatomical area, and a feature amount set 58 composed of a plurality of feature amounts is output for each anatomical area from the feature amount derivation models 37.
- the dementia finding derivation unit 49 inputs a feature amount set group 59 composed of a plurality of feature amount sets 58 into the dementia finding derivation model 39, and outputs dementia finding information 60 from the dementia finding derivation model 39.
- the display control unit 50 presents the dementia finding information 60 to the doctor on the second display screen 75.
- The number of feature amounts Z is very large, for example, several tens to hundreds of thousands. Therefore, the feature amounts Z do not represent only a limited feature of the anatomical area, as the Z value described in Japanese Patent No. 6438890 does, but represent comprehensive features of the anatomical area. Further, a feature amount Z is not a single statistically derived value like the Z value described in Japanese Patent No. 6438890, but is obtained by inputting the anatomical area image 56 into the feature amount derivation model 37. Therefore, the method of the present disclosure, which derives the dementia finding information 60 based on the feature amounts Z (the feature amount set group 59 composed of the plurality of feature amount sets 58), can improve the prediction accuracy of dementia findings compared with the method described in Japanese Patent No. 6438890, and more accurate findings of dementia can be obtained.
- Compared with other diseases such as cancer, dementia is less likely to show a specific lesion that can be recognized with the naked eye. In addition, dementia affects the entire brain rather than being local. Because of this background, it has been difficult to obtain accurate dementia findings from a medical image such as the head MRI image 15 using a machine learning model.
- In the present embodiment, the brain is subdivided into a plurality of anatomical areas, feature amounts are derived for each of the plurality of anatomical areas, and the derived feature amounts are input to one dementia finding derivation model 39. Therefore, more accurate findings of dementia, which have been difficult to obtain in the past, can be obtained.
- the feature quantity derivation model 37 is a diversion of the compression unit 81 of the AE80.
- The AE 80 is one of the neural network models frequently used in the field of machine learning and is generally very well known. Therefore, its compression unit 81 can be diverted to the feature amount derivation model 37 relatively easily.
- the dementia finding derivation model 39 is constructed by one of a neural network, a support vector machine, and a boosting method. Any of these neural networks, support vector machines, and boosting techniques are generally very well known. Therefore, the dementia finding derivation model 39 can be constructed relatively easily.
- the normalization unit 46 performs a normalization process for matching the head MRI image 15 with the standard head MRI image 35 prior to the extraction of the dissected area. Therefore, the subsequent processing can be performed after substantially eliminating the individual difference of the patient P and the device difference of the MRI apparatus 10, and as a result, the reliability of the dementia finding information 60 can be enhanced.
- The present embodiment, in which the organ is the brain, the disease is dementia, and the dementia finding information 60 is output, thus addresses a disease that is a pressing social problem.
- The hippocampus and the anterior temporal lobe are anatomical areas particularly highly correlated with dementia such as Alzheimer's disease. Therefore, if the plurality of anatomical areas include at least one of the hippocampus and the anterior temporal lobe, as in this example, more accurate findings of dementia can be obtained.
- the presentation mode of the dementia finding information 60 is not limited to the second display screen 75.
- the dementia finding information 60 may be printed out on a paper medium, or the dementia finding information 60 may be transmitted to a doctor's mobile terminal as an attachment file of an e-mail.
- the dementia finding information 60 is not limited to the content exemplified in FIG. 7 (normal / mild cognitive impairment / Alzheimer's disease).
- For example, the dementia finding information 60 may indicate whether the progression of dementia of the patient P one year later is expected to be fast or slow.
- the type of dementia may be any of Alzheimer's disease, Lewy body dementia, and vascular dementia.
- In another example, the compression unit 101 of a convolutional neural network for single-task class discrimination (hereinafter abbreviated as single task CNN (Convolutional Neural Network)) 100 is used as the feature amount derivation model 105.
- the single task CNN 100 has a compression unit 101 and an output unit 102.
- the anatomical area image 56 is input to the compression unit 101. Similar to the compression unit 81, the compression unit 101 converts the dissected area image 56 into the feature amount set 103.
- the compression unit 101 passes the feature amount set 103 to the output unit 102.
- The output unit 102 outputs one class 104 based on the feature amount set 103. In FIG. 19, the output unit 102 outputs, as the class 104, a determination result of whether or not dementia has developed.
- the compression unit 101 of the single task CNN 100 is used as the feature amount derivation model 105.
- the single task CNN 100 is given learning data 108 and learned in the learning phase before the compression unit 101 is diverted to the feature quantity derivation model 105.
- the learning data 108 is a set of the learning anatomical area image 56L and the correct answer class 104CA corresponding to the learning anatomical area image 56L.
- The correct answer class 104CA is the result of the doctor actually determining whether or not dementia has developed for the head MRI image 15 from which the learning anatomical area image 56L was obtained.
- The learning anatomical area image 56L is input to the single-task CNN 100.
- The single-task CNN 100 outputs a learning class 104L for the learning anatomical area image 56L. The loss calculation of the single-task CNN 100 is performed based on the learning class 104L and the correct answer class 104CA. Various coefficients of the single-task CNN 100 are then updated according to the result of the loss calculation, and the single-task CNN 100 is updated according to the update settings.
- The compression unit 101 of the single-task CNN 100 whose prediction accuracy has reached the set level is stored in the storage 20 as the trained feature quantity derivation model 105 and used by the feature quantity derivation unit 48.
- In this way, in the second embodiment, the compression unit 101 of the single-task CNN 100 is used as the feature quantity derivation model 105.
- The single-task CNN 100 is also one of the neural network models frequently used in the field of machine learning and is very widely known. It can therefore be repurposed as the feature quantity derivation model 105 relatively easily.
- The class 104 may be, for example, whether the age of the patient P is under 75 or 75 and over, or the age bracket of the patient P, such as the 60s or the 70s.
- In the third embodiment, the compression unit 111 of a multi-task class-determination CNN (hereinafter abbreviated as multi-task CNN) 110 is used as the feature quantity derivation model 116 instead of the compression unit 81 of the AE 80 and the compression unit 101 of the single-task CNN 100.
- the multitasking CNN 110 has a compression unit 111 and an output unit 112.
- the dissected area image 56 is input to the compression unit 111.
- the compression unit 111 converts the dissected area image 56 into the feature amount set 113, similarly to the compression unit 81 and the compression unit 101.
- the compression unit 111 passes the feature amount set 113 to the output unit 112.
- the output unit 112 outputs two classes, the first class 114 and the second class 115, based on the feature amount set 113.
- The output unit 112 outputs, as the first class 114, a determination result of whether or not dementia has developed.
- the output unit 112 outputs the age of the patient P as the second class 115.
- the compression unit 111 of this multitasking CNN 110 is used as the feature amount derivation model 116.
- The multi-task CNN 110 is trained with the learning data 118 in the learning phase before the compression unit 111 is repurposed as the feature quantity derivation model 116.
- the learning data 118 is a set of the learning anatomical area image 56L and the correct answer first class 114CA and the correct answer second class 115CA corresponding to the learning anatomical area image 56L.
- The correct answer first class 114CA is the result of a doctor actually determining whether or not dementia has developed with respect to the head MRI image 15 from which the learning anatomical area image 56L was obtained.
- The correct answer second class 115CA is the actual age of the patient P imaged in the head MRI image 15 from which the learning anatomical area image 56L was obtained.
- the learning dissection area image 56L is input to the multitasking CNN110.
- the multitasking CNN 110 outputs the learning first class 114L and the learning second class 115L to the learning anatomical area image 56L.
- the loss calculation of the multitasking CNN 110 is performed based on the learning first class 114L and the learning second class 115L, and the correct answer first class 114CA and the correct answer second class 115CA.
- various coefficients of the multitasking CNN110 are updated according to the result of the loss calculation, and the multitasking CNN110 is updated according to the update setting.
- The above series of processes, namely inputting the learning anatomical area image 56L into the multi-task CNN 110, outputting the learning first class 114L and the learning second class 115L from the multi-task CNN 110, the loss calculation, the update setting, and updating the multi-task CNN 110, is repeated while the learning data 118 are exchanged.
- The repetition of the above series of processes ends when the prediction accuracy of the learning first class 114L and the learning second class 115L with respect to the correct answer first class 114CA and the correct answer second class 115CA reaches a predetermined set level.
- the compression unit 111 of the multitasking CNN 110 whose prediction accuracy has reached the set level is stored in the storage 20 as the learned feature amount derivation model 116 and used by the feature amount derivation unit 48.
- the compression unit 111 of the multitasking CNN 110 is used as the feature amount derivation model 116.
- The multi-task CNN 110 performs a more complex process of outputting a plurality of classes (the first class 114 and the second class 115) than the AE 80 and the single-task CNN 100. The feature quantity set 113 output from the compression unit 111 is therefore likely to represent the features of the anatomical area image 56 more comprehensively, and as a result, the accuracy of predicting dementia findings by the dementia finding derivation model 39 can be further improved.
- the first class 114 may be, for example, the degree of progression of dementia at five levels. Further, as the second class 115, the determination result of the age of the patient P may be used.
- the multitasking CNN110 may output three or more classes.
- the dissected area image 56 of one dissected area is input to a plurality of different feature quantity derivation models.
- The feature quantity derivation unit 130 of the present embodiment inputs the anatomical area image 56 of one anatomical area into the first feature quantity derivation model 131, into the second feature quantity derivation model 132, and into the third feature quantity derivation model 133.
- The feature quantity derivation unit 130 causes the first feature quantity derivation model 131 to output the first feature quantity set 134, the second feature quantity derivation model 132 to output the second feature quantity set 135, and the third feature quantity derivation model 133 to output the third feature quantity set 136.
- The first feature quantity derivation model 131 is the compression unit 81 of the AE 80 of the first embodiment, repurposed.
- The second feature quantity derivation model 132 is the compression unit 101 of the single-task CNN 100 of the second embodiment, repurposed.
- The third feature quantity derivation model 133 is the compression unit 111 of the multi-task CNN 110 of the third embodiment, repurposed.
- In this way, the feature quantity derivation unit 130 inputs the anatomical area image 56 of one anatomical area into the first feature quantity derivation model 131, the second feature quantity derivation model 132, and the third feature quantity derivation model 133, and the first feature quantity set 134, the second feature quantity set 135, and the third feature quantity set 136 are output from the models 131 to 133, respectively. A wider variety of feature quantities Z can therefore be obtained than when one type of feature quantity derivation model 37 is used, and as a result, the accuracy of predicting dementia findings by the dementia finding derivation model 39 can be further improved.
- The plurality of different feature quantity derivation models may be, for example, a combination of the first feature quantity derivation model 131 repurposed from the compression unit 81 of the AE 80 and the second feature quantity derivation model 132 repurposed from the compression unit 101 of the single-task CNN 100.
- A combination of the second feature quantity derivation model 132 repurposed from the compression unit 101 of the single-task CNN 100 and the third feature quantity derivation model 133 repurposed from the compression unit 111 of the multi-task CNN 110 may also be used.
- Furthermore, a second feature quantity derivation model 132 repurposed from the compression unit 101 of a single-task CNN 100 that outputs whether or not dementia has developed as the class 104 may be combined with another second feature quantity derivation model 132 repurposed from the compression unit 101 of a single-task CNN 100 that outputs the age of the patient P as the class 104.
- dementia-related information 141 related to dementia is input to the dementia finding derivation model 142.
- the dementia finding derivation unit 140 of the present embodiment inputs dementia-related information 141 related to dementia into the dementia finding derivation model 142 in addition to the feature amount set group 59. Then, the dementia finding information 143 is output from the dementia finding derivation model 142.
- the dementia-related information 141 is an example of "disease-related information" according to the technique of the present disclosure.
- The dementia-related information 141 is information on the patient P for whom dementia is diagnosed.
- Dementia-related information 141 includes, for example, the volume of the hippocampus.
- The dementia-related information 141 also includes, for example, a Hasegawa dementia scale score, the ApoE gene genotype, an amyloid β measurement value, a tau protein measurement value, an apolipoprotein measurement value, a complement protein measurement value, and a transthyretin measurement value.
- The Hasegawa dementia scale score, the ApoE gene genotype, the amyloid β measurement value, the tau protein measurement value, the apolipoprotein measurement value, the complement protein measurement value, the transthyretin measurement value, and the like are retrieved from an electronic medical record server (not shown).
- the volume of the hippocampus is, for example, the total number of pixels of the anatomical area image 56_1 of the hippocampus.
- the hippocampal volume is an example of the "volume of the dissected area" according to the technique of the present disclosure.
- the volume of other dissected areas such as the amygdala may be included in the dementia-related information 141.
- the Hasegawa dementia scale score is an example of the "dementia test score" related to the technology of the present disclosure.
- The dementia test score is not limited to the Hasegawa dementia scale score; a Mini-Mental State Examination (MMSE) score, a Rivermead Behavioral Memory Test (RBMT) score, a clinical dementia rating, or the like may be used.
- The genotype of the ApoE gene is a combination of two of the three types of ApoE gene (ε2 and ε3, ε3 and ε4, and so on). Genotypes having one or two ε4 (ε2 and ε4, ε4 and ε4, and so on) are said to carry roughly 3 to 12 times the risk of developing Alzheimer's disease of genotypes having no ε4 at all (ε2 and ε3, ε3 and ε3, and so on).
- the genotype of the ApoE gene is converted into a numerical value such that the combination of ⁇ 2 and ⁇ 3 is 1, the combination of ⁇ 3 and ⁇ 3 is 2, etc., and is input to the dementia finding derivation model 142.
- the genotype of the ApoE gene is an example of the "test result of genetic test" according to the technique of the present disclosure.
- The amyloid β measurement value and the tau protein measurement value are examples of the "test result of the cerebrospinal fluid test" according to the technique of the present disclosure. The apolipoprotein measurement value, the complement protein measurement value, and the transthyretin measurement value are examples of the "test result of the blood test" according to the technique of the present disclosure.
- the dementia finding derivation model 142 is trained given the learning data 148.
- The learning data 148 are a combination of a learning feature quantity set group 59L and learning dementia-related information 141L, together with correct answer dementia finding information 143CA corresponding to the learning feature quantity set group 59L and the learning dementia-related information 141L.
- the learning feature amount set group 59L was obtained by inputting the anatomical area image 56 of a certain head MRI image 15 into the feature amount derivation model 37.
- The learning dementia-related information 141L is the information of the patient P imaged in the head MRI image 15 from which the learning feature quantity set group 59L was obtained.
- The correct answer dementia finding information 143CA is the result of a doctor actually diagnosing dementia findings on the head MRI image 15 from which the learning feature quantity set group 59L was obtained, taking the learning dementia-related information 141L into account.
- the learning feature amount set group 59L and the learning dementia-related information 141L are input to the dementia finding derivation model 142.
- the dementia finding derivation model 142 outputs learning dementia finding information 143L for learning feature amount set group 59L and learning dementia-related information 141L.
- the loss calculation of the dementia finding derivation model 142 using the loss function is performed.
- various coefficients of the dementia finding derivation model 142 are updated according to the result of the loss calculation, and the dementia finding derivation model 142 is updated according to the update setting.
- The above series of processes, namely inputting the learning feature quantity set group 59L and the learning dementia-related information 141L into the dementia finding derivation model 142, outputting the learning dementia finding information 143L from the dementia finding derivation model 142, the loss calculation, the update setting, and updating the dementia finding derivation model 142, is repeated while the learning data 148 are exchanged.
- the repetition of the above series of processes ends when the prediction accuracy of the learning dementia finding information 143L with respect to the correct dementia finding information 143CA reaches a predetermined set level.
- the dementia finding derivation model 142 whose prediction accuracy has reached a set level in this way is stored in the storage 20 and used by the dementia finding derivation unit 140 as a learned dementia finding derivation model.
- the dementia-related information 141 is input to the dementia finding derivation model 142.
- The dementia-related information 141 includes the hippocampal volume, a Hasegawa dementia scale score, the ApoE gene genotype, an amyloid β measurement value, a tau protein measurement value, an apolipoprotein measurement value, a complement protein measurement value, a transthyretin measurement value, and the like.
- Because the dementia-related information 141, which is powerful information useful for predicting dementia findings, is added, the accuracy of predicting dementia findings can be improved dramatically compared with predicting dementia findings from the feature quantity set group 59 alone.
- The dementia-related information 141 only needs to include at least one of the volume of the anatomical area, a dementia test score, a test result of a genetic test, a test result of a cerebrospinal fluid test, and a test result of a blood test. The dementia-related information 141 may also include the sex, age, and medical history of the patient P, or whether or not the patient P has a relative who has developed dementia.
- the AE250 has a compression unit 253 and a restoration unit 254, similar to the AE80 of the first embodiment.
- the dissected area image 56 is input to the compression unit 253.
- the compression unit 253 converts the dissected area image 56 into a feature set 255.
- the compression unit 253 passes the feature amount set 255 to the restoration unit 254.
- the restoration unit 254 generates the restoration image 256 of the dissection area image 56 from the feature amount set 255.
- the single task CNN 251 has a compression unit 253 and an output unit 257, similar to the single task CNN 100 of the second embodiment. That is, the compression unit 253 is shared by the AE250 and the single task CNN251.
- the compression unit 253 passes the feature amount set 255 to the output unit 257.
- the output unit 257 outputs one class 258 based on the feature amount set 255.
- The output unit 257 outputs, as the class 258, a determination result of whether the patient P who currently has mild cognitive impairment will still have mild cognitive impairment after two years or will have progressed to Alzheimer's disease after two years.
- the output unit 257 outputs an aggregated feature amount ZA that aggregates a plurality of feature amounts Z constituting the feature amount set 255.
- the aggregate feature amount ZA is output for each dissected area.
- the aggregate feature amount ZA is input to the dementia finding derivation model 282 (see FIG. 30) instead of the feature amount set 255.
- The output unit 257 has a self-attention (hereinafter abbreviated as SA (Self-Attention)) mechanism layer 265, a global average pooling (hereinafter abbreviated as GAP (Global Average Pooling)) layer 266, a fully connected (hereinafter abbreviated as FC (Fully Connected)) layer 267, a softmax function (hereinafter abbreviated as SMF (SoftMax Function)) layer 268, and a principal component analysis (hereinafter abbreviated as PCA (Principal Component Analysis)) layer 269.
- the SA mechanism layer 265 performs the convolution process shown in FIG. 11 on the feature amount set 255 while changing the coefficient of the filter 203 according to the element value of the attention element 201I.
- the convolution process performed in the SA mechanism layer 265 is referred to as an SA convolution process.
- the SA mechanism layer 265 outputs the feature amount set 255 after the SA convolution process to the GAP layer 266.
- The GAP layer 266 performs a global average pooling process on the feature quantity set 255 after the SA convolution process.
- The global average pooling process obtains the average value of the feature quantities Z for each channel (see FIG. 12) of the feature quantity set 255. For example, when the feature quantity set 255 has 512 channels, the global average pooling process yields 512 average values of the feature quantities Z.
- the GAP layer 266 outputs the average value of the obtained feature amount Z to the FC layer 267 and the PCA layer 269.
- the FC layer 267 converts the average value of the feature amount Z into a variable handled by the SMF of the SMF layer 268.
- the FC layer 267 has an input layer having as many units as the number of average values of the feature amount Z (that is, the number of channels of the feature amount set 255) and an output layer having as many units as the number of variables handled by the SMF.
- Each unit of the input layer and each unit of the output layer are fully connected to each other, and weights are set for each.
- An average value of the feature amount Z is input to each unit of the input layer.
- the sum of products of the average value of the feature amount Z and the weight set between each unit is the output value of each unit of the output layer.
- This output value is a variable handled by SMF.
- the FC layer 267 outputs the variables handled by the SMF to the SMF layer 268.
- the SMF layer 268 outputs the class 258 by applying the variable to the SMF.
- The PCA layer 269 performs PCA on the average values of the feature quantities Z, aggregating the average values of the plurality of feature quantities Z into a smaller number of aggregated feature quantities ZA. For example, the PCA layer 269 aggregates the average values of 512 feature quantities Z into one aggregated feature quantity ZA.
- the AE250 is learned by inputting the learning anatomical area image 56L in the learning phase.
- the AE250 outputs a learning restored image 256L with respect to the learning dissection area image 56L.
- the loss calculation of the AE250 using the loss function is performed.
- various coefficients of the AE250 are updated according to the result of the loss calculation (hereinafter referred to as the loss L1), and the AE250 is updated according to the update setting.
- The above series of processes, namely inputting the learning anatomical area image 56L into the AE 250, outputting the learning restored image 256L from the AE 250, the loss calculation, the update setting, and updating the AE 250, is repeated while the learning anatomical area images 56L are exchanged.
- the single task CNN251 is trained by being given learning data 275 in the learning phase.
- the learning data 275 is a set of a learning anatomical area image 56L and a correct answer class 258CA corresponding to the learning anatomical area image 56L.
- The correct answer class 258CA indicates whether the patient P imaged in the head MRI image 15 from which the learning anatomical area image 56L was obtained actually still had mild cognitive impairment two years later or had progressed to Alzheimer's disease two years later.
- the learning dissection area image 56L is input to the single task CNN251.
- the single task CNN251 outputs a learning class 258L for a learning dissection area image 56L.
- the loss calculation of the single task CNN251 using the cross entropy function or the like is performed.
- various coefficients of the single task CNN 251 are updated according to the result of the loss calculation (hereinafter referred to as the loss L2), and the single task CNN 251 is updated according to the update setting.
- the update setting of the AE250 and the update setting of the single task CNN251 are performed based on the total loss L represented by the following equation (2).
- ⁇ is a weight.
- L = L1 × α + L2 × (1 − α) ... (2)
- That is, the total loss L is the weighted sum of the loss L1 of the AE 250 and the loss L2 of the single-task CNN 251.
- 1 is set for the weight ⁇ in the initial stage of the learning phase.
- the weight ⁇ is gradually reduced from 1 as learning progresses, and eventually becomes a fixed value (0.8 in FIG. 29).
- the learning of the AE250 and the learning of the single task CNN251 are performed together with the intensity corresponding to the weight ⁇ .
- the weight given to the loss L1 is larger than the weight given to the loss L2.
- the weight given to the loss L1 is gradually decreased from the maximum value of 1, and the weight given to the loss L2 is gradually increased from the minimum value of 0, and both are set to fixed values.
- The learning ends when the accuracy with which the AE 250 restores the learning restored image 256L from the learning anatomical area image 56L reaches a predetermined set level and the prediction accuracy of the learning class 258L output by the single-task CNN 251 with respect to the correct answer class 258CA reaches a predetermined set level.
- the AE250 and the single-tasking CNN251 whose restoration accuracy and prediction accuracy have reached the set level in this way are stored in the storage 20 and used as the learned feature amount derivation model 252.
- the dementia finding derivation unit 280 of the present embodiment inputs the aggregated feature group ZAG and the dementia-related information 281 into the dementia finding derivation model 282.
- the aggregated feature group ZAG is composed of a plurality of aggregated feature quantities ZA output for each dissected area.
- The dementia-related information 281 includes, for example, the sex and age of the patient P for whom dementia is diagnosed, the volume of the anatomical area, a dementia test score, test results of a genetic test, test results of a cerebrospinal fluid test, and test results of a blood test.
- the dementia finding derivation model 282 has a quantile normalization unit 283 and a linear discriminant analysis unit 284.
- the aggregate feature group ZAG and dementia-related information 281 are input to the quantile normalization unit 283.
- The quantile normalization unit 283 performs a quantile normalization process that converts the plurality of aggregated feature quantities ZA constituting the aggregated feature quantity group ZAG and the parameters of the dementia-related information 281 into data that follow a normal distribution, so that they can be handled on a common scale.
- The linear discriminant analysis unit 284 performs linear discriminant analysis on the aggregated feature quantities ZA and the parameters of the dementia-related information 281 after the quantile normalization process, and outputs the dementia finding information 285 as the result.
- The dementia finding information 285 indicates either that the patient P who currently has mild cognitive impairment will still have mild cognitive impairment after two years or that the patient P will have progressed to Alzheimer's disease after two years.
- the learning of the dementia finding derivation model 282 is the same as the learning of the dementia finding derivation model 142 shown in FIG. 25, except that the learning feature set group 59L is changed to the learning aggregate feature group ZAG. Therefore, illustration and description will be omitted.
- In this way, in the sixth embodiment, the single-task CNN 251, which performs the main task of outputting the class 258, and the AE 250, which shares the compression unit 253 with the single-task CNN 251 and performs a subtask (generation of the restored image 256) that is more general-purpose than the main task, are together used as the feature quantity derivation model 252. The AE 250 and the single-task CNN 251 are then trained simultaneously. Compared with a case in which the AE 250 and the single-task CNN 251 are separate, a more appropriate feature quantity set 255 and aggregated feature quantity ZA can therefore be output, and as a result, the prediction accuracy of the dementia finding information 285 can be improved.
- The update setting is performed based on the total loss L, which is the weighted sum of the loss L1 of the AE 250 and the loss L2 of the single-task CNN 251. By setting the weight α to an appropriate value, the training can therefore be focused on the AE 250, focused on the single-task CNN 251, or balanced between the two.
- The weight given to the loss L1 is larger than the weight given to the loss L2, so training can always emphasize the AE 250. When training always emphasizes the AE 250, the compression unit 253 can output a feature quantity set 255 that better expresses the features of the shape and texture of the anatomical area, and as a result, the output unit 257 can output a more plausible aggregated feature quantity ZA.
- the weight given to the loss L1 is gradually decreased from the maximum value, and the weight given to the loss L2 is gradually increased from the minimum value, and when learning is performed a predetermined number of times, both are set as fixed values. Therefore, the AE250 can be learned more intensively in the initial stage of learning.
- The AE 250 is responsible for the relatively simple subtask of generating the restored image 256. If the AE 250 is therefore trained more intensively in the initial stage of learning, the compression unit 253 can output, from the initial stage of learning, a feature quantity set 255 that better expresses the features of the shape and texture of the anatomical area.
- Table 300 shown in FIG. 31 shows a performance comparison between Nos. 1 to 7, which correspond to the methods for predicting the progression of dementia described in documents A, B, C, D, E, F, and G below, and Nos. 8 and 9, which correspond to the method for predicting the progression of dementia according to the present embodiment.
- No. 8 is the case in which only the aggregated feature quantity group ZAG is input to the dementia finding derivation model 282 and the dementia-related information 281 is not input.
- No. 9 is the case in which both the aggregated feature quantity group ZAG and the dementia-related information 281 are input to the dementia finding derivation model 282.
- The sensitivities of Nos. 8 and 9 are 0.85 and 0.91, respectively. These values are higher than any of Nos. 1 to 7, and the sensitivity of 0.91 for No. 9 is the highest of all. It can therefore be said that, compared with the conventional methods for predicting the progression of dementia described in documents A to G, the method of the present embodiment can predict, with fewer oversights, whether a patient P who currently has mild cognitive impairment will have progressed to Alzheimer's disease after the prediction period.
- ADNI is an abbreviation for "Alzheimer's Disease Neuroimaging Initiative", AIBL is an abbreviation for "Australian Imaging Biomarkers and Lifestyle Study of Ageing", and J-ADNI is an abbreviation for "Japanese Alzheimer's Disease Neuroimaging Initiative". Each denotes a database in which head MRI images 15 and the like of patients P with Alzheimer's disease are accumulated.
- the multitasking CNN110 of the third embodiment may be used.
- the learning of the dementia finding derivation model 142, the learning of the AE250 and the single task CNN251 shown in FIG. 28, and the like may be performed by the diagnostic support device 12, or may be performed by a device other than the diagnostic support device 12. Further, these learnings may be continuously performed after storing each model in the storage 20 of the diagnostic support device 12.
- the PACS server 11 may function as the diagnostic support device 12.
- The medical image is not limited to the illustrated head MRI image 15. The medical image may be, for example, a PET (Positron Emission Tomography) image, a SPECT (Single Photon Emission Computed Tomography) image, or a CT (Computed Tomography) image.
- The organ is not limited to the illustrated brain and may be the heart, the lungs, the liver, or the like.
- In the case of the lungs, the right lung S1 and S2, the left lung S1 and S2, and the like are extracted as anatomical areas.
- In the case of the liver, the right lobe, the left lobe, the gallbladder, and the like are extracted as anatomical areas.
- The disease is not limited to the exemplified dementia and may be, for example, a heart disease, a diffuse lung disease such as interstitial pneumonia, or a liver dysfunction such as cirrhosis.
- The hardware structure of the processing units that execute various processes, such as the RW control unit 45, the normalization unit 46, the extraction unit 47, the feature quantity derivation units 48 and 130, the dementia finding derivation units 49, 140, and 280, and the display control unit 50, is the following various processors. The various processors include the CPU 22, which is a general-purpose processor that executes software (the operation program 30) and functions as the various processing units; a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
- One processing unit may be configured by one of these various processors, or by a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs and/or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor.
- First, as represented by computers such as clients and servers, one processor may be configured by a combination of one or more CPUs and software, and this processor may function as a plurality of processing units.
- Second, as represented by a system on chip (SoC: System On Chip), a processor that implements the functions of the entire system, including the plurality of processing units, with a single IC chip may be used.
- In this way, the various processing units are configured by using one or more of the above-mentioned various processors as a hardware structure.
- Furthermore, as the hardware structure of these various processors, an electric circuit in which circuit elements such as semiconductor elements are combined can be used.
- the technique of the present disclosure can be appropriately combined with the various embodiments described above and / or various modifications. Further, it is of course not limited to each of the above embodiments, and various configurations can be adopted as long as they do not deviate from the gist. Further, the technique of the present disclosure extends to a storage medium for storing the program non-temporarily in addition to the program.
- a and / or B is synonymous with "at least one of A and B". That is, “A and / or B” means that it may be A alone, B alone, or a combination of A and B. Further, in the present specification, when three or more matters are connected and expressed by "and / or", the same concept as “A and / or B" is applied.
Claims (13)
- 1. A diagnosis support device comprising: a processor; and a memory connected to or built into the processor, wherein the processor acquires a medical image, extracts a plurality of anatomical areas of an organ from the medical image, inputs images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each of the plurality of anatomical areas and causes the feature quantity derivation models to output a plurality of feature quantities for each of the plurality of anatomical areas, inputs the plurality of feature quantities output for each of the plurality of anatomical areas into a disease finding derivation model and causes the disease finding derivation model to output a finding of a disease, and presents the finding.
- 2. The diagnosis support device according to claim 1, wherein the feature quantity derivation model includes at least one of an autoencoder, a single-task class-determination convolutional neural network, and a multi-task class-determination convolutional neural network.
- 3. The diagnosis support device according to claim 1 or claim 2, wherein the processor inputs an image of one anatomical area into a plurality of different feature quantity derivation models and causes each of the plurality of feature quantity derivation models to output the feature quantities.
- 4. The diagnosis support device according to any one of claims 1 to 3, wherein the processor inputs disease-related information related to the disease into the disease finding derivation model in addition to the plurality of feature quantities.
- 5. The diagnosis support device according to any one of claims 1 to 4, wherein the disease finding derivation model is constructed by any one of a neural network, a support vector machine, and boosting.
- 6. The diagnosis support device according to any one of claims 1 to 5, wherein the processor performs, prior to the extraction of the anatomical areas, a normalization process for matching the acquired medical image with a standard medical image.
- 7. The diagnosis support device according to any one of claims 1 to 6, wherein the organ is a brain and the disease is dementia.
- 8. The diagnosis support device according to claim 7, wherein the plurality of anatomical areas include at least one of a hippocampus and a frontotemporal lobe.
- 9. The diagnosis support device according to claim 7 or claim 8 as dependent on claim 4, wherein the disease-related information includes at least one of a volume of the anatomical area, a dementia test score, a test result of a genetic test, a test result of a cerebrospinal fluid test, and a test result of a blood test.
- 10. An operation method of a diagnosis support device, the method comprising: acquiring a medical image; extracting a plurality of anatomical areas of an organ from the medical image; inputting images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each of the plurality of anatomical areas and causing the feature quantity derivation models to output a plurality of feature quantities for each of the plurality of anatomical areas; inputting the plurality of feature quantities output for each of the plurality of anatomical areas into a disease finding derivation model and causing the disease finding derivation model to output a finding of a disease; and presenting the finding.
- 11. An operation program of a diagnosis support device, the program causing a computer to execute a process comprising: acquiring a medical image; extracting a plurality of anatomical areas of an organ from the medical image; inputting images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each of the plurality of anatomical areas and causing the feature quantity derivation models to output a plurality of feature quantities for each of the plurality of anatomical areas; inputting the plurality of feature quantities output for each of the plurality of anatomical areas into a disease finding derivation model and causing the disease finding derivation model to output a finding of a disease; and presenting the finding.
- 12. A dementia diagnosis support method in which a computer comprising a processor and a memory connected to or built into the processor performs: acquiring a medical image showing a brain; extracting a plurality of anatomical areas of the brain from the medical image; inputting images of the plurality of anatomical areas into a plurality of feature quantity derivation models prepared for each of the plurality of anatomical areas and causing the feature quantity derivation models to output a plurality of feature quantities for each of the plurality of anatomical areas; inputting the plurality of feature quantities output for each of the plurality of anatomical areas into a dementia finding derivation model and causing the dementia finding derivation model to output a finding of dementia; and presenting the finding.
- 13. A trained dementia finding derivation model for causing a computer to execute a function of receiving a plurality of feature quantities as input and outputting a finding of dementia, wherein the plurality of feature quantities are feature quantities output from a plurality of feature quantity derivation models by inputting images of a plurality of anatomical areas of a brain, extracted from a medical image showing the brain, into the plurality of feature quantity derivation models prepared for each of the plurality of anatomical areas.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022553914A JPWO2022071158A1 (ja) | 2020-10-01 | 2021-09-24 | |
EP21875460.4A EP4223229A4 (en) | 2020-10-01 | 2021-09-24 | DIAGNOSTIC AID DEVICE, METHOD FOR OPERATING THE DIAGNOSTIC AID DEVICE, PROGRAM FOR OPERATING THE DIAGNOSTIC AID DEVICE, DEMENTIA DIAGNOSTIC METHOD AND LEARNED MODEL FOR DERIVING DEMENTIA SYMPTOMS |
CN202180067505.8A CN116490132A (zh) | 2020-10-01 | 2021-09-24 | 诊断辅助装置、诊断辅助装置的工作方法、诊断辅助装置的工作程序、痴呆症诊断辅助方法、以及学习完毕痴呆症诊断意见导出模型 |
US18/191,675 US20230260629A1 (en) | 2020-10-01 | 2023-03-28 | Diagnosis support device, operation method of diagnosis support device, operation program of diagnosis support device, dementia diagnosis support method, and trained dementia opinion derivation model |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020167010 | 2020-10-01 | ||
JP2020-167010 | 2020-10-01 | ||
JP2020217833 | 2020-12-25 | ||
JP2020-217833 | 2020-12-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/191,675 Continuation US20230260629A1 (en) | 2020-10-01 | 2023-03-28 | Diagnosis support device, operation method of diagnosis support device, operation program of diagnosis support device, dementia diagnosis support method, and trained dementia opinion derivation model |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022071158A1 true WO2022071158A1 (ja) | 2022-04-07 |
Family
ID=80949126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/035194 WO2022071158A1 (ja) | 2020-10-01 | 2021-09-24 | 診断支援装置、診断支援装置の作動方法、診断支援装置の作動プログラム、認知症診断支援方法、並びに学習済み認知症所見導出モデル |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230260629A1 (ja) |
EP (1) | EP4223229A4 (ja) |
JP (1) | JPWO2022071158A1 (ja) |
WO (1) | WO2022071158A1 (ja) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010012176A (ja) * | 2008-07-07 | 2010-01-21 | Hamamatsu Photonics Kk | 脳疾患診断システム |
JP2011118543A (ja) * | 2009-12-01 | 2011-06-16 | Shizuoka Prefecture | 症例画像検索装置、方法およびプログラム |
WO2012032940A1 (ja) * | 2010-09-07 | 2012-03-15 | 株式会社 日立メディコ | 認知症診断支援装置及び認知症診断支援方法 |
JP2015208385A (ja) * | 2014-04-24 | 2015-11-24 | 株式会社日立製作所 | 医用画像情報システム、医用画像情報処理方法及びプログラム |
JP2016202904A (ja) * | 2015-04-15 | 2016-12-08 | キヤノン株式会社 | 診断支援装置、診断支援システム、情報処理方法、及びプログラム |
US20170357753A1 (en) * | 2016-05-23 | 2017-12-14 | The Johns Hopkins University | Direct estimation of patient attributes based on mri brain atlases |
JP6483890B1 (ja) | 2018-04-27 | 2019-03-13 | 国立大学法人滋賀医科大学 | 診断支援装置、機械学習装置、診断支援方法、機械学習方法および機械学習プログラム |
JP2020009186A (ja) * | 2018-07-09 | 2020-01-16 | キヤノンメディカルシステムズ株式会社 | 診断支援装置、診断支援方法、及び診断支援プログラム |
CN110934606A (zh) * | 2019-10-31 | 2020-03-31 | 上海杏脉信息科技有限公司 | 脑卒中早期平扫ct图像评估***及评估方法、可读存储介质 |
Non-Patent Citations (8)
Title |
---|
BASAIA, S., AGOSTA, F., WAGNER, L., CANU, E., MAGNANI, G., SANTANGELO, R., FILIPPI, M.: "Automated classification of Alzheimer's disease and mild cognitive impairment using a single MRI and deep neural networks", NEUROIMAGE: CLINICAL, vol. 21, 2019, pages 101645 |
GOTO, T., WANG, C., LI, Y., TSUBOSHITA, Y.: "Multi-modal deep learning for predicting progression of Alzheimer's disease using bi-linear shake fusion", PROC. SPIE 11314, MEDICAL IMAGING, 2020 |
LEDIG, C., SCHUH, A., GUERRERO, R., HECKEMANN, R. A., RUECKERT, D.: "Structural brain imaging in Alzheimer's disease and mild cognitive impairment: biomarker analysis and shared morphometry database", SCIENTIFIC REPORTS, vol. 8, no. 1, 2018, pages 11258, XP055626868, DOI: 10.1038/s41598-018-29295-9 |
LEE, G., NHO, K., KANG, B., SOHN, K. A., KIM, D.: "Predicting Alzheimer's disease progression using multi-modal deep learning approach", SCIENTIFIC REPORTS, vol. 9, no. 1, 2019, pages 1952, XP055830945, DOI: 10.1038/s41598-018-37769-z |
LU, D., POPURI, K., DING, G. W., BALACHANDAR, R., BEG, M. F.: "Multimodal and multiscale deep neural networks for the early diagnosis of Alzheimer's disease using structural MR and FDG-PET images", SCIENTIFIC REPORTS, vol. 8, no. 1, 2018, pages 5697 |
NAKAGAWA, T., ISHIDA, M., NAITO, J., NAGAI, A., YAMAGUCHI, S., ONODA, K.: "Prediction of conversion to Alzheimer's disease using deep survival analysis of MRI images", BRAIN COMMUNICATIONS, vol. 2, no. 1, 2020 |
See also references of EP4223229A4 |
TAM, A., DANSEREAU, C., ITURRIA-MEDINA, Y., URCHS, S., ORBAN, P., SHARMARKE, H., BREITNER, J., Alzheimer's Disease Neuroimaging Initiative: "A highly predictive signature of cognition and brain atrophy for progression to Alzheimer's dementia", GIGASCIENCE, vol. 8, no. 5, 2019, pages giz055 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023157447A1 (ja) * | 2022-02-18 | 2023-08-24 | 医療研究開発株式会社 | 分層方法、および、分層装置 |
Also Published As
Publication number | Publication date |
---|---|
US20230260629A1 (en) | 2023-08-17 |
JPWO2022071158A1 (ja) | 2022-04-07 |
EP4223229A1 (en) | 2023-08-09 |
EP4223229A4 (en) | 2024-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10818048B2 (en) | Advanced medical image processing wizard | |
Banar et al. | Towards fully automated third molar development staging in panoramic radiographs | |
JP2020513615A (ja) | 深層学習ニューラルネットワークの分散化された診断ワークフロー訓練 | |
US11229377B2 (en) | System and method for next-generation MRI spine evaluation | |
US10918309B2 (en) | Artificial intelligence-based COPD assessment | |
US20070133851A1 (en) | Method and apparatus for selecting computer-assisted algorithms based on protocol and/or parameters of an acquisistion system | |
Ravi et al. | Degenerative adversarial neuroimage nets for brain scan simulations: Application in ageing and dementia | |
US20140153795A1 (en) | Parametric imaging for the evaluation of biological condition | |
US20230260629A1 (en) | Diagnosis support device, operation method of diagnosis support device, operation program of diagnosis support device, dementia diagnosis support method, and trained dementia opinion derivation model | |
US20230260630A1 (en) | Diagnosis support device, operation method of diagnosis support device, operation program of diagnosis support device, and dementia diagnosis support method | |
CN112053767A (zh) | 医疗中的人工智能分派 | |
WO2022138960A1 (ja) | 診断支援装置、診断支援装置の作動方法、診断支援装置の作動プログラム | |
JP7114347B2 (ja) | 断層画像予測装置および断層画像予測方法 | |
Tang et al. | LG-Net: lesion gate network for multiple sclerosis lesion inpainting | |
WO2023119866A1 (ja) | 情報処理装置、情報処理装置の作動方法、情報処理装置の作動プログラム、予測モデル、学習装置、および学習方法 | |
JP2023114463A (ja) | 表示装置、方法およびプログラム | |
WO2022071160A1 (ja) | 診断支援装置、診断支援装置の作動方法、診断支援装置の作動プログラム、並びに認知症診断支援方法 | |
WO2022138961A1 (ja) | 情報処理装置、情報処理装置の作動方法、情報処理装置の作動プログラム | |
CN116490132A (zh) | 诊断辅助装置、诊断辅助装置的工作方法、诊断辅助装置的工作程序、痴呆症诊断辅助方法、以及学习完毕痴呆症诊断意见导出模型 | |
US20110071383A1 (en) | Visualization of abnormalities of airway walls and lumens | |
Burdett et al. | MILXView: a medical imaging, analysis and visualization platform | |
WO2022065062A1 (ja) | 診断支援装置、診断支援装置の作動方法、診断支援装置の作動プログラム | |
CN114708973B (zh) | 一种用于对人体健康进行评估的设备和存储介质 | |
WO2021246047A1 (ja) | 経過予測装置、方法およびプログラム | |
Winter et al. | Automated intracranial vessel segmentation of 4D flow MRI data in patients with atherosclerotic stenosis using a convolutional neural network |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21875460; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase | Ref document number: 2022553914; Country of ref document: JP; Kind code of ref document: A
WWE | Wipo information: entry into national phase | Ref document number: 202180067505.8; Country of ref document: CN
NENP | Non-entry into the national phase | Ref country code: DE
ENP | Entry into the national phase | Ref document number: 2021875460; Country of ref document: EP; Effective date: 20230502