WO2021153314A1 - Medical information processing system, medical information processing device, control method for medical information processing system, and program - Google Patents

Medical information processing system, medical information processing device, control method for medical information processing system, and program

Info

Publication number
WO2021153314A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning model
medical information
accuracy
determination
information processing
Prior art date
Application number
PCT/JP2021/001491
Other languages
French (fr)
Japanese (ja)
Inventor
一誠 小原
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社 (Canon Inc.)
Publication of WO2021153314A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to a medical information processing system, a medical information processing device, a control method for the medical information processing system, and a program.
  • Patent Document 1 discloses a technique of calculating the determination accuracy of classifiers created by machine learning and using the classifier with the highest determination accuracy for the determination process.
  • However, in a function that uses machine learning, the determination performance of the function often depends on the training data. For example, consider training a function to recognize the irradiation field of an X-ray image: if most of the training data consists of chest images, the determination performance is expected to deteriorate when data of a region other than the chest is input to that function.
  • Therefore, when the learning model used for the determination is replaced, the determination performance may improve under specific conditions but degrade under other conditions.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a medical information processing technique capable of performing determination processing with a learning model that matches the conditions of the data.
  • To achieve this object, a medical information processing system according to one aspect of the present invention includes: acquisition means for acquiring medical information; condition setting means for setting a condition for sorting the data used for machine learning; and determination means for performing determination processing on target data using learning models that are generated by the machine learning using the medical information and the condition and that differ for each condition.
  • The determination means selects, from the learning models that differ for each condition, a learning model that matches the condition of the target data, and performs the determination processing using the selected learning model.
  • According to the present invention, the determination processing can be performed with a learning model that matches the conditions of the data.
  • FIG. 1 is a system configuration diagram showing an example of the medical information system. FIG. 2 is a flowchart showing the procedure of processing performed by the machine learning control device according to the first embodiment.
  • FIG. 5 is a flowchart showing the procedure of processing performed by the machine learning control device according to the second embodiment.
  • FIG. 6 is a flowchart showing the procedure of processing performed by the machine learning control device according to the third embodiment.
  • In the following embodiments and claims, radiation includes not only X-rays but also α-rays, β-rays, γ-rays, various particle beams, and the like.
  • FIG. 1 is a diagram showing an example of the configuration of the medical information processing system 10 according to the first embodiment of the present invention.
  • The medical information processing system 10 includes a radiography control device 101 and a machine learning control device 113 that functions as a determination accuracy evaluation device.
  • The radiography control device 101 and the machine learning control device 113 are communicably connected to the HIS 117, the RIS 118, the PACS 119, the printer 120, and the report server 121 via the network 122.
  • The HIS 117 (Hospital Information System) is an in-hospital information system that manages the progress of radiography.
  • The RIS 118 (Radiology Information System) is the information system of the radiology department.
  • The PACS 119 (Picture Archiving and Communication Systems) is the image server.
  • The image interpretation report created by the image interpretation doctor is stored in the report server 121.
  • The HIS 117 may include a server that manages accounting information.
  • When radiography is judged to be necessary, an inspection instruction is input from a terminal of the HIS 117 and transmitted to the radiology department, which is the request destination.
  • This request information is called an inspection order, and the inspection order includes the name of the requesting department, the inspection items, personal data of the subject 130, and the like.
  • When the radiology department receives the inspection order transmitted from the RIS 118, it adds imaging conditions and the like and transfers the order to the radiography control device 101.
  • The radiography control device 101 performs radiography according to the received inspection order. Inspection information is added to the image captured under the imaging control of the radiography control device 101, and the image is transferred to the PACS 119 or printed out by the printer 120. In addition, information on the inspection performed by the radiography control device 101 is transferred to the HIS 117.
  • The inspection implementation information transferred to the HIS 117 is used not only for inspection progress management but also for post-inspection accounting.
  • The image interpreting doctor reviews the image transferred to the PACS 119 or printed by the printer 120 and creates an image interpretation report describing the interpretation result with a report creating device (not shown). The interpretation report is stored in the report server 121.
  • Each of these devices is connected via, for example, a network 122 composed of a LAN (Local Area Network), a WAN (Wide Area Network), or the like.
  • Each of these devices includes one or more computers.
  • The computer is provided with, for example, a main control unit such as a CPU, and storage units such as a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • The computer may also be provided with a communication unit such as a network card, and input/output units such as a keyboard, a display, or a touch panel.
  • These components are connected by a bus or the like, and are controlled by the main control unit reading and executing a program stored in the storage unit.
  • The radiography control device 101 includes a display unit 102, an operation unit 103, a determination unit 104, a radiation generation control unit 105, a display control unit 106, and an imaging control unit 107.
  • The radiation generation control unit 105 is connected to the radiation generation unit 110 via a cable 111, and controls the irradiation of radiation from the radiation generation unit 110.
  • The radiation generating unit 110 is realized by, for example, a radiation tube, and irradiates the subject 130 (for example, a specific part of a patient) with radiation.
  • The imaging control unit 107 performs overall control of the processing in the radiography control device 101.
  • The display unit 102 is realized by, for example, a liquid crystal display, and displays various information to a user (radiographer, doctor, etc.).
  • The operation unit 103 is realized by, for example, a mouse or operation buttons, and inputs various instructions from the user into the device.
  • The display unit 102 and the operation unit 103 may be realized by a touch panel in which the two are integrated.
  • The determination unit 104 is a discriminator that performs inference based on a learning model created in advance by machine learning using medical information obtained in past diagnoses, and performs disease inference and image processing.
  • Here, the medical information may be information included in any one of the examination order, the image, and the interpretation report, or may be a combination of information included in the examination order, the image, and the interpretation report.
  • The imaging control unit 107 is connected to the radiation detector 109 via a cable 112, and power, image signals, control signals, and the like are exchanged between the two over the cable 112.
  • The radiation detector 109 functions as a detector that detects the radiation transmitted through the subject 130 and acquires an image (radiation image) based on that radiation. That is, a radiation imaging unit is realized by the radiation generating unit 110 and the radiation detector 109 operating in cooperation with each other.
  • The radiation detector 109 is installed on, for example, a standing or lying position imaging table 108.
  • The imaging control unit 107 functions as an instruction unit that instructs the start of radiographic imaging corresponding to at least one item of the order information received from the RIS 118.
  • The order information received from the RIS 118 includes, for example, subject information and one or more imaging sites for the subject.
  • The instruction to start radiography is given, for example, via the operation unit 103 in response to an input from the user.
  • Alternatively, the imaging control unit 107 may select the order information to be imaged and instruct the start of imaging.
  • When radiography is performed, the image (radiation image) is displayed on the display unit 102.
  • The user can perform image editing such as image processing, cropping, annotation, and geometric transformation on the displayed image via the operation unit 103. These image edits may also be performed automatically by the determination unit 104.
  • The above is a description of an example of the configuration of the radiography control device 101 in the medical information processing system according to the first embodiment.
  • The configuration shown in FIG. 1 is merely an example and can be changed as appropriate.
  • For example, in FIG. 1 various devices are connected to the radiography control device 101 via the network 122, but the radiography control device 101 does not necessarily have to be connected to such devices.
  • The diagnostic image may be output to a portable medium such as a DVD and input to the various devices via the portable medium.
  • The network 122 may be configured by wire or may be partially configured by a wireless signal transmission line. Parts of the cable 111 and the cable 112 may also be configured as wireless signal transmission lines.
  • The machine learning control device 113 includes a learning/evaluation data acquisition unit 114, a learning/evaluation data storage unit 115, a determination accuracy evaluation unit 116, a condition setting unit 123, and a machine learning unit 124.
  • The learning/evaluation data acquisition unit 114 acquires the data (medical information) used for machine learning training or evaluation from any device of the medical information processing system 10 connected via the network 122. For example, when training with a medical image as input, the learning/evaluation data acquisition unit 114 acquires the image from the PACS 119.
  • The data acquired as input data for machine learning may be either a single piece of data or a plurality of pieces of data.
  • The acquired data is stored in the learning/evaluation data storage unit 115.
  • Alternatively, the learning/evaluation data acquisition unit 114 may output the acquired data to the condition setting unit 123 without storing it in the learning/evaluation data storage unit 115.
  • In step S202, the condition setting unit 123 sets a condition for sorting the data used for machine learning. For example, a condition such as the imaging site or the imaging direction is set as the condition for sorting the data.
  • The condition setting is not limited to setting by the condition setting unit 123; the user may set the condition manually.
  • In step S203, the machine learning unit 124 creates a plurality of learning models, one for each condition, by machine learning using the data (medical information) and the conditions set in step S202. That is, the machine learning unit 124 receives the data acquired by the learning/evaluation data acquisition unit 114 (or the data held in the learning/evaluation data storage unit 115) and the conditions set by the condition setting unit 123, and creates a plurality of learning models that differ for each condition. For example, if the imaging site is set as the condition in step S202, a learning model is created for each imaging site, and if the imaging direction is set as the condition, a learning model is created for each imaging direction.
  • In step S204, the determination accuracy evaluation unit 116 evaluates the accuracy of the determination processing performed with the learning models newly created by the machine learning unit 124. That is, based on the medical information acquired by the learning/evaluation data acquisition unit 114 (or the medical information held in the learning/evaluation data storage unit 115) and the conditions set by the condition setting unit 123, the determination accuracy evaluation unit 116 evaluates the accuracy of the plurality of newly created learning models that differ for each condition. As the accuracy evaluation, the determination accuracy evaluation unit 116 obtains the condition used for the evaluation (evaluation data condition) and the accuracy of the determination processing (determination accuracy) when evaluation data corresponding to that condition is input.
  • The determination accuracy evaluation unit 116 uses, as the condition of the evaluation data, the same data condition as the one used by the machine learning unit 124 to create the learning model. For example, the determination accuracy evaluation unit 116 uses chest-region data as the evaluation data for the learning model created under the chest-region condition.
  • In step S205, the determination accuracy evaluation unit 116 decides, based on the result of the accuracy evaluation of the created learning models, whether to apply the learning models newly created by the training of the machine learning unit 124 to the determination unit 104. For example, the determination accuracy evaluation unit 116 performs the accuracy evaluation based on a comparison between the accuracy of the determination processing by the newly created learning model and the accuracy of the determination processing by the learning model currently used in the determination unit 104.
  • Specifically, evaluation data corresponding to each condition is input to the newly created learning model, and the accuracy of the determination processing output from the newly created learning model is obtained. The accuracy evaluation is then based on a comparison between this accuracy and the accuracy of the determination processing output from the currently used learning model when the same evaluation data is input to the learning model currently used by the determination unit 104.
  • If, based on the comparison of determination accuracies, the determination accuracy of the new learning model is equal to or lower than that of the learning model currently used by the determination unit 104 (S205-No), the determination accuracy evaluation unit 116 ends the processing.
  • On the other hand, if the determination accuracy of the new learning model is higher than that of the learning model currently used by the determination unit 104 (S205-YES), the determination accuracy evaluation unit 116 advances the processing to step S206 and decides to output the newly created learning model to the determination unit 104.
  • In step S206, when the accuracy of the determination processing by the newly created learning model is higher than the accuracy of the determination processing by the learning model used in the determination unit 104, the determination accuracy evaluation unit 116 outputs the newly created learning model and its condition to the determination unit 104.
  • That is, the determination accuracy evaluation unit 116 outputs the learning model newly created by the training of the machine learning unit 124 and the condition set in step S202 to the radiography control device 101.
  • When the radiography control device 101 acquires the conditions and the learning models output from the determination accuracy evaluation unit 116, the learning model of each condition is applied to the settings of the determination unit 104.
  • For example, when the radiography control device 101 acquires a model A trained under the chest-region condition and a model B trained under the condition of regions other than the chest, the learning model that matches the condition of the determination target data used in the determination processing of the determination unit 104 is selected and used for the determination by machine learning.
  • In step S301, the determination unit 104 acquires the determination target data, which is the input parameter for executing the inference processing, from the imaging control unit 107.
  • The determination target data is the captured X-ray image.
  • When the learning model is switched according to a condition, information such as the imaging site is also included in the determination target data.
  • In step S302, the determination unit 104 acquires the learning models and conditions output from the machine learning control device 113 in the processing of step S206.
  • In step S303, the determination unit 104 selects a learning model that matches the condition, based on the acquired determination target data.
  • That is, the determination unit 104 selects, from the plurality of learning models that differ for each condition, a learning model that matches the condition of the determination target data, and performs the determination processing using the selected learning model. For example, when performing irradiation field recognition on an X-ray image of the chest region, the determination unit 104 selects the learning model corresponding to the chest-region condition. The determination unit 104 then applies the learning model corresponding to the condition to the machine learning classifier.
  • In step S304, the determination unit 104 performs the determination processing by machine learning using the learning model selected in step S303. This completes the processing of the radiography control device 101.
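
The patent describes steps S301 to S304 only at the level of a flowchart; as a rough illustration, the following minimal Python sketch shows the control flow of selecting a model by condition and running the determination. The dictionary layout of the determination target data, the field name "imaging_site", and the callable models are assumptions made for this sketch, not details taken from the patent.

    def select_learning_model(target_data, models_by_condition, condition_keys):
        """Step S303 sketch: pick the learning model whose condition matches
        the condition information carried by the determination target data."""
        condition = tuple(target_data.get(key) for key in condition_keys)
        return models_by_condition[condition]

    def determine(target_data, models_by_condition, condition_keys=("imaging_site",)):
        """Steps S301 to S304 sketch: take the target data, select the matching
        model, and run the determination (here a placeholder callable)."""
        model = select_learning_model(target_data, models_by_condition, condition_keys)
        return model(target_data["image"])

    # Example with two hypothetical irradiation-field recognizers.
    models = {
        ("chest",): lambda image: "irradiation field (chest model)",
        ("hand",):  lambda image: "irradiation field (hand model)",
    }
    target = {"imaging_site": "chest", "image": "x-ray pixels"}
    print(determine(target, models))   # irradiation field (chest model)
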
  • FIG. 4 is a diagram showing an example of the learning models applied to the determination unit 104.
  • 4A of FIG. 4 shows the state before the learning model created by the training of the machine learning unit 124 is applied to the determination unit 104, and 4B of FIG. 4 shows the state after the created learning model has been applied.
  • Before application, the learning model to be used is not switched according to the condition (the imaging site in the example of 4A of FIG. 4), and the determination unit 104 performs the determination processing using the learning model A that has already been set, for all sites.
  • After application, the determination unit 104 switches the model to be used according to the condition (the imaging site in the example of 4B of FIG. 4). For example, when the determination target data input to the determination unit 104 is an X-ray image of the chest region, the learning model B suited to the chest-region condition is used.
  • In this way, learning models trained for each condition are created at training time in the machine learning control device 113, and in the determination processing in the determination unit 104 of the radiography control device 101, a learning model suited to the condition can be selected and the determination processing can be performed using the selected learning model.
  • The determination unit 104 updates the learning model it uses for the determination processing corresponding to the condition with the newly created learning model.
  • When target data corresponding to that condition is input, the determination unit 104 performs the determination processing on the target data using the updated learning model. When target data whose condition differs from the condition corresponding to the updated learning model is input, the determination unit 104 performs the determination processing on the target data using the learning model already in use in the determination unit 104.
  • For example, it is expected that the determination accuracy would decrease if determination target data other than the chest region were input to a model updated under the chest-region condition.
  • Therefore, for determination target data other than the chest region, the previously used learning model is used as it is, and the updated learning model is used for determination target data of the chest region.
  • The condition of the imaging site has been described above, but the present invention is not limited to this example; medical information data such as sensor information, imaging conditions, subject information, image processing parameters, and image information can also be used as conditions. That is, learning models trained with medical information data as the condition are created at training time in the machine learning control device 113, and in the determination processing in the determination unit 104 of the radiography control device 101, a learning model suited to the medical information data can be selected using that data as the condition, and the determination processing can be performed using the selected learning model.
  • For determination target data other than the medical information data used at training time, the conventionally used learning model is used as it is; when the determination target data corresponds to the medical information data used at training time, the determination unit switches to the learning model trained with that medical information data as the condition. As a result, the determination accuracy does not decrease when the determination target data is other than the medical information data used at training time, and the determination accuracy can be improved when the determination target data corresponds to that medical information data.
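
Since the condition may be any combination of medical-information fields (imaging site, imaging direction, sensor information, subject information, and so on), the matching step can be written generically. The following is a minimal sketch only, with invented field names, of matching target data against the trained conditions and falling back to the previously used learning model when nothing matches.

    def matches(target_info, condition):
        """True when every field of the trained condition is present with the
        same value in the medical information attached to the target data."""
        return all(target_info.get(key) == value for key, value in condition.items())

    def choose_model(target_info, conditioned_models, previous_model):
        """Return the model trained under a matching condition, or the
        previously used model when the data falls outside all conditions."""
        for condition, model in conditioned_models:
            if matches(target_info, condition):
                return model
        return previous_model

    conditioned_models = [
        ({"imaging_site": "chest"}, "updated chest model"),
        ({"imaging_site": "chest", "imaging_direction": "PA"}, "chest PA model"),
    ]
    print(choose_model({"imaging_site": "hand"}, conditioned_models, "previous model"))
    # previous model: non-chest data keeps using the earlier learning model
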
  • In the second embodiment, the condition setting unit 123 sets conditions based on all combinations of a plurality of parameters, and the machine learning unit 124 performs learning using the medical information and the conditions based on all combinations of the plurality of parameters.
  • FIG. 5 is a flowchart showing a procedure of processing performed by the machine learning control device 113 according to the second embodiment.
  • The processing contents of steps S201 and S203 to S206 are the same as in the flowchart of FIG. 2; the flowchart differs from FIG. 2 in the condition setting method (S501).
  • In step S501, when the condition includes a plurality of parameters, the condition setting unit 123 sets conditions based on all combinations of the plurality of parameters.
  • That is, the condition setting unit 123 automatically sets conditions based on all combinations of the values that the parameters can take.
  • For example, if the conditions for sorting the data used for machine learning include a parameter that can take the three values A1, A2, and A3 and a parameter that can take the four values B1, B2, B3, and B4, a total of 12 patterns of conditions are set based on all combinations of the parameters.
  • In step S203, the machine learning unit 124 performs learning on the medical information and the conditions based on all combinations of the plurality of parameters, and in step S204 the determination accuracy evaluation unit 116 evaluates the accuracy of the learning models.
  • In step S205, the determination accuracy evaluation unit 116 decides, based on the result of the accuracy evaluation of the created learning models, whether to apply the newly created learning models to the determination unit 104.
  • In step S303, the determination unit 104 selects a learning model that matches the condition based on the acquired determination target data, as in the processing described for the first embodiment; here, a learning model that matches the combination of conditions is selected based on the determination target data.
  • According to the medical information processing system of the present embodiment, it is possible to comprehensively train and evaluate learning models not only with the conditions set by the user but also with conditions based on all combinations of parameters.
  • When the user sets the conditions, the conditions are limited by the user's intention; by training with conditions based on all combinations of parameters, learning models can be created and evaluated with every possibility of performance improvement reflected in the conditions, without being limited by the user's intention. Note that this embodiment is merely an example and may include processing that switches the learning model by determining the compatibility between the determination target data and the set conditions.
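
The exhaustive condition setting of step S501 corresponds to a Cartesian product over the possible parameter values. A minimal sketch, reproducing the 3 x 4 = 12 patterns of the example above with itertools.product (the parameter names are placeholders, not names used in the patent):

    from itertools import product

    def all_condition_combinations(parameter_values):
        """Step S501 sketch: one condition per combination of parameter values.

        parameter_values: mapping from parameter name to the iterable of its
        possible values.
        """
        names = sorted(parameter_values)
        return [dict(zip(names, values))
                for values in product(*(parameter_values[name] for name in names))]

    params = {
        "parameter_A": ["A1", "A2", "A3"],
        "parameter_B": ["B1", "B2", "B3", "B4"],
    }
    conditions = all_condition_combinations(params)
    print(len(conditions))    # 12
    print(conditions[0])      # {'parameter_A': 'A1', 'parameter_B': 'B1'}
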
  • In the third embodiment, an example of processing will be described in which the user is notified of the accuracy evaluation of the learning models in step S204 of the first embodiment, and it is decided whether to apply the newly created learning models to the determination unit 104 based on the result of the accuracy evaluation.
  • The configuration of the medical information processing system according to the present embodiment is the same as the configuration described with reference to FIG. 1 for the first embodiment.
  • In the following, the processing that differs from the processing method of the first embodiment, namely the processing of notifying the user of the accuracy evaluation of the learning models, will mainly be described.
  • The determination accuracy evaluation unit 116 notifies the result of the accuracy evaluation for each condition for the newly created learning models.
  • FIG. 6 is a flowchart showing a procedure of processing performed by the machine learning control device 113 according to the third embodiment.
  • The processing contents of steps S201 to S206 are the same as in the flowchart of FIG. 2; the flowchart differs from FIG. 2 in that processing for notifying the evaluation result (S601) is added.
  • In step S601, the determination accuracy evaluation unit 116 notifies the evaluation result of the learning model for each condition.
  • 7A of FIG. 7 is a diagram showing an example of the information notified by the determination accuracy evaluation unit 116 of the machine learning control device 113.
  • The accuracy evaluation result includes information indicating the learning model to be evaluated, the condition used for the accuracy evaluation (evaluation data condition), and the accuracy of the determination processing (determination accuracy) when evaluation data corresponding to that condition is input.
  • Model A is the learning model currently in use by the determination unit 104, that is, the model before the learning model created by the training of the machine learning unit 124 is applied to the determination unit 104. Model A corresponds to the learning model described in 4A of FIG. 4.
  • Model B is the learning model newly created by the training of the machine learning unit 124, and corresponds to the learning model (model B) described in 4B of FIG. 4.
  • When the determination target data is data other than the medical information data used at training time (data other than the chest), the previously used learning model (model A in FIG. 7) is used as it is; when the determination target data corresponds to the medical information data used at training time, switching is performed to the learning model trained with that medical information data as the condition (model B in FIG. 7).
  • As a result, the determination accuracy is not lowered when the determination target data is other than the medical information data used at training time, and the determination accuracy can be improved when the determination target data corresponds to that medical information data.
  • The determination accuracy evaluation unit 116 notifies, as message information (7B in FIG. 7), the evaluation result of the learning model for each condition and the comparison result obtained by comparing the newly created learning model (model B) with the learning model currently in use (model A).
  • The determination accuracy evaluation unit 116 generates message information (7B in FIG. 7) including information on the condition (evaluation data condition), the learning model (model B), and the accuracy, and notifies the user.
  • In this way, the determination accuracy evaluation unit 116 notifies the user with a combination of the accuracy evaluation result and the message information comparing the accuracy of the learning model used in the determination unit 104 with the accuracy of the newly created learning model.
  • The notification of the evaluation result 7A and the message information 7B shown in FIG. 7 is output via the network 122.
  • The display control unit 106 of the radiography control device 101 causes the display unit 102 to display the combination of the accuracy evaluation result and the message information.
  • That is, the evaluation result and the message information output from the determination accuracy evaluation unit 116 are displayed on the display unit 102 under the control of the display control unit 106.
  • Note that this embodiment is merely an example; the configuration is not limited to that of this embodiment as long as it includes a mechanism for notifying the evaluation result for each combination of learning model and condition (evaluation data condition).
  • The format of the notification shown in FIG. 7 can also be changed and is not necessarily limited to the format shown in FIG. 7.
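
The per-condition notification of step S601 combines the tabular accuracy results (7A) with a short message comparing the currently used model with the newly created one (7B). The following is only a rough sketch of such a notification builder, with an invented record layout; the actual message format is not specified by the patent beyond the example of FIG. 7.

    def build_notification(evaluations):
        """Step S601 sketch: one line per (model, evaluation data condition,
        determination accuracy), plus a comparison message for each condition
        evaluated with both the current and the new model.

        evaluations: list of dicts with keys "model", "condition", "accuracy".
        """
        lines = ["model\tevaluation data condition\tdetermination accuracy"]
        for item in evaluations:
            lines.append(f'{item["model"]}\t{item["condition"]}\t{item["accuracy"]:.2f}')

        by_condition = {}
        for item in evaluations:
            by_condition.setdefault(item["condition"], {})[item["model"]] = item["accuracy"]
        for condition, accs in by_condition.items():
            if "model A" in accs and "model B" in accs and accs["model B"] > accs["model A"]:
                lines.append(f'For {condition}, model B ({accs["model B"]:.2f}) '
                             f'exceeds model A ({accs["model A"]:.2f}) and can be applied.')
        return "\n".join(lines)

    results = [
        {"model": "model A", "condition": "chest", "accuracy": 0.88},
        {"model": "model B", "condition": "chest", "accuracy": 0.95},
    ]
    print(build_notification(results))
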
  • In the embodiments above, a configuration having the radiography control device 101 and the machine learning control device 113 as the medical information processing system 10 has been described, but the present invention is not limited to this configuration and can also be configured as a single medical information processing device.
  • For example, the functional configuration of the machine learning control device 113 shown in FIG. 1 can be provided inside the radiography control device 101. It is also possible to provide the functional configuration of the radiography control device 101 inside the machine learning control device 113. When the device is configured as a single medical information processing device in this way, the same effects as those realized by the medical information processing system 10 described above can be obtained.
  • According to each of the embodiments described above, the determination processing can be performed with a learning model that matches the conditions of the data.
  • In the determination processing with a learning model that matches the conditions of the data, the determination performance is improved, and a decrease in the determination performance under other conditions can be suppressed.
  • The present invention can also be realized by processing in which a program that realizes one or more functions of the above-described embodiments is supplied to a system or device via a network or a storage medium, and one or more processors in a computer of the system or device read and execute the program. It can also be realized by a circuit that realizes one or more functions (for example, an ASIC).

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Primary Health Care (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

This medical information processing system is provided with: an acquisition unit that acquires medical information; a condition setting unit that sets a condition for sorting data to be used for machine learning; and a determination unit that performs a determination process on target data by using learning models that differ for each condition and are generated by machine learning using the medical information and the condition. The determination unit selects, from among the learning models that differ for each condition, a learning model suitable for the condition of the target data, and performs the determination process using the selected learning model.

Description

Medical information processing system, medical information processing device, control method for medical information processing system, and program
The present invention relates to a medical information processing system, a medical information processing device, a control method for the medical information processing system, and a program.
In medical information processing systems, functions that use machine learning have been proposed, such as functions that provide information in line with a user's tendencies and preferences and functions that improve image analysis accuracy. Patent Document 1 discloses a technique of calculating the determination accuracy of classifiers created by machine learning and using the classifier with the highest determination accuracy for the determination process.
[Patent Document 1] Japanese Patent No. 5533662
However, in a function that uses machine learning, the determination performance of the function often depends on the training data. For example, consider training a function to recognize the irradiation field of an X-ray image: if most of the training data consists of chest images, the determination performance is expected to deteriorate when data of a region other than the chest is input to that function.
On the other hand, if the function is used with chest-region data as input, the determination performance is expected to be better than before the training.
Therefore, when the learning model used for the determination is replaced, the determination performance may improve under specific conditions but degrade under other conditions.
The present invention has been made in view of the above problems, and an object of the present invention is to provide a medical information processing technique capable of performing determination processing with a learning model that matches the conditions of the data.
To achieve the object of the present invention, a medical information processing system according to one aspect of the present invention includes: acquisition means for acquiring medical information; condition setting means for setting a condition for sorting the data used for machine learning; and determination means for performing determination processing on target data using learning models that are generated by the machine learning using the medical information and the condition and that differ for each condition. The determination means selects, from the learning models that differ for each condition, a learning model that matches the condition of the target data, and performs the determination processing using the selected learning model.
According to the present invention, the determination processing can be performed with a learning model that matches the conditions of the data.
The accompanying drawings are included in and constitute a part of the specification, illustrate embodiments of the present invention, and are used together with the description to explain the principles of the present invention.
FIG. 1 is a system configuration diagram showing an example of the medical information system. FIG. 2 is a flowchart showing the procedure of processing performed by the machine learning control device according to the first embodiment. FIG. 3 is a flowchart showing the procedure of processing performed by the radiography control device according to the first embodiment. FIG. 4 is a diagram showing an example of a learning model applied to the determination unit according to the first embodiment. FIG. 5 is a flowchart showing the procedure of processing performed by the machine learning control device according to the second embodiment. FIG. 6 is a flowchart showing the procedure of processing performed by the machine learning control device according to the third embodiment. FIG. 7 is a diagram showing an example of the information notified by the machine learning control device according to the third embodiment.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. The following embodiments do not limit the invention according to the claims. Although a plurality of features are described in the embodiments, not all of these features are essential to the invention, and the plurality of features may be combined arbitrarily. Furthermore, in the accompanying drawings, identical or similar configurations are given the same reference numbers, and duplicate explanations are omitted. In the following embodiments and claims, radiation includes not only X-rays but also α-rays, β-rays, γ-rays, various particle beams, and the like.
[First Embodiment]
The configuration and operation of the medical information processing system according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 4.
<Configuration of the medical information processing system>
FIG. 1 is a diagram showing an example of the configuration of the medical information processing system 10 according to the first embodiment of the present invention. The medical information processing system 10 includes a radiography control device 101 and a machine learning control device 113 that functions as a determination accuracy evaluation device. In the medical information processing system 10, the radiography control device 101 and the machine learning control device 113 are communicably connected to the HIS 117, the RIS 118, the PACS 119, the printer 120, and the report server 121 via the network 122. The HIS 117 (Hospital Information System) is an in-hospital information system that manages the progress of radiography.
The RIS 118 (Radiology Information System) is the information system of the radiology department, and the PACS 119 (Picture Archiving and Communication Systems) is the image server. The image interpretation report created by the image interpretation doctor is stored in the report server 121.
The HIS 117 may include a server that manages accounting information. When it is determined that radiography is necessary, an inspection instruction is input from a terminal of the HIS 117 and transmitted to the radiology department, which is the request destination. This request information is called an inspection order, and the inspection order includes the name of the requesting department, the inspection items, personal data of the subject 130, and the like. When the radiology department receives the inspection order transmitted from the RIS 118, it adds imaging conditions and the like and transfers the order to the radiography control device 101.
The radiography control device 101 performs radiography according to the received inspection order. Inspection information is added to the image captured under the imaging control of the radiography control device 101, and the image is transferred to the PACS 119 or printed out by the printer 120. Information on the inspection performed by the radiography control device 101 is transferred to the HIS 117. The inspection implementation information transferred to the HIS 117 is used not only for inspection progress management but also for post-inspection accounting. The image interpreting doctor reviews the image transferred to the PACS 119 or printed by the printer 120 and creates an image interpretation report describing the interpretation result with a report creating device (not shown). The interpretation report is stored in the report server 121.
Each of these devices is connected via the network 122, which is composed of, for example, a LAN (Local Area Network), a WAN (Wide Area Network), or the like. Each of these devices includes one or more computers. The computer is provided with, for example, a main control unit such as a CPU, and storage units such as a ROM (Read Only Memory) and a RAM (Random Access Memory). The computer may also be provided with a communication unit such as a network card, and input/output units such as a keyboard, a display, or a touch panel. These components are connected by a bus or the like and are controlled by the main control unit reading and executing a program stored in the storage unit.
<Configuration of the radiography control device 101>
The radiography control device 101 includes a display unit 102, an operation unit 103, a determination unit 104, a radiation generation control unit 105, a display control unit 106, and an imaging control unit 107.
The radiation generation control unit 105 is connected to the radiation generation unit 110 via a cable 111, and controls the irradiation of radiation from the radiation generation unit 110. The radiation generating unit 110 is realized by, for example, a radiation tube, and irradiates the subject 130 (for example, a specific part of a patient) with radiation.
The imaging control unit 107 performs overall control of the processing in the radiography control device 101. The display unit 102 is realized by, for example, a liquid crystal display, and displays various information to a user (radiographer, doctor, etc.). The operation unit 103 is realized by, for example, a mouse or operation buttons, and inputs various instructions from the user into the device. The display unit 102 and the operation unit 103 may be realized by a touch panel in which the two are integrated.
The determination unit 104 is a discriminator that performs inference based on a learning model created in advance by machine learning using medical information obtained in past diagnoses, and performs disease inference and image processing. Here, the medical information may be information included in any one of the examination order, the image, and the interpretation report, or may be a combination of information included in the examination order, the image, and the interpretation report.
The imaging control unit 107 is also connected to the radiation detector 109 via a cable 112, and power, image signals, control signals, and the like are exchanged between the two over the cable 112. The radiation detector 109 functions as a detector that detects the radiation transmitted through the subject 130 and acquires an image (radiation image) based on that radiation. That is, a radiation imaging unit is realized by the radiation generating unit 110 and the radiation detector 109 operating in cooperation with each other. The radiation detector 109 is installed on, for example, a standing or lying position imaging table 108.
The imaging control unit 107 functions as an instruction unit that instructs the start of radiographic imaging corresponding to at least one item of the order information received from the RIS 118. The order information received from the RIS 118 includes, for example, subject information and one or more imaging sites for the subject. The instruction to start radiography is given, for example, via the operation unit 103 in response to an input from the user. Alternatively, the imaging control unit 107 may select the order information to be imaged and instruct the start of imaging.
When radiography is performed, the image (radiation image) is displayed on the display unit 102. The user can perform image editing such as image processing, cropping, annotation, and geometric transformation on the displayed image via the operation unit 103. These image edits may also be performed automatically by the determination unit 104.
The above is a description of an example of the configuration of the radiography control device 101 in the medical information processing system according to the first embodiment. The configuration shown in FIG. 1 is merely an example and can be changed as appropriate. For example, in FIG. 1 various devices are connected to the radiography control device 101 via the network 122, but the radiography control device 101 does not necessarily have to be connected to such devices. The diagnostic image may be output to a portable medium such as a DVD and input to the various devices via the portable medium. The network 122 may be configured by wire or may be partially configured by a wireless signal transmission line. Parts of the cable 111 and the cable 112 may also be configured as wireless signal transmission lines.
<Configuration and processing procedure of the machine learning control device 113>
The configuration of the machine learning control device 113 in the first embodiment and the processing procedure of the control method of the medical information processing system performed by the machine learning control device 113 will be described with reference to FIG. 1 and the flowchart of FIG. 2. As shown in FIG. 1, the machine learning control device 113 includes a learning/evaluation data acquisition unit 114, a learning/evaluation data storage unit 115, a determination accuracy evaluation unit 116, a condition setting unit 123, and a machine learning unit 124.
In step S201, the learning/evaluation data acquisition unit 114 acquires the data (medical information) used for machine learning training or evaluation from any device of the medical information processing system 10 connected via the network 122. For example, when training with a medical image as input, the learning/evaluation data acquisition unit 114 acquires the image from the PACS 119. The data acquired as input data for machine learning may be either a single piece of data or a plurality of pieces of data. The acquired data is stored in the learning/evaluation data storage unit 115. Alternatively, the learning/evaluation data acquisition unit 114 may output the acquired data to the condition setting unit 123 without storing it in the learning/evaluation data storage unit 115.
In step S202, the condition setting unit 123 sets a condition for sorting the data used for machine learning. For example, a condition such as the imaging site or the imaging direction is set as the condition for sorting the data. The condition setting is not limited to setting by the condition setting unit 123; the user may set the condition manually.
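
The patent describes this condition-based sorting only at the level of the flowchart; as a rough illustration, the following minimal Python sketch groups medical-information records by a condition key such as the imaging site or the imaging direction. The field names "imaging_site" and "imaging_direction" and the helper names are hypothetical and are not taken from the patent.

    from collections import defaultdict

    # Hypothetical condition keys for step S202: which fields of the medical
    # information are used to sort the training/evaluation data.
    CONDITION_KEYS = ("imaging_site", "imaging_direction")

    def condition_of(record, keys=CONDITION_KEYS):
        """Return the condition of one medical-information record as a tuple."""
        return tuple(record.get(key) for key in keys)

    def sort_by_condition(records, keys=CONDITION_KEYS):
        """Distribute records into buckets, one bucket per condition."""
        buckets = defaultdict(list)
        for record in records:
            buckets[condition_of(record, keys)].append(record)
        return dict(buckets)

    # Example: two chest records and one hand record end up in separate buckets.
    data = [
        {"imaging_site": "chest", "imaging_direction": "PA", "image": "..."},
        {"imaging_site": "chest", "imaging_direction": "PA", "image": "..."},
        {"imaging_site": "hand",  "imaging_direction": "AP", "image": "..."},
    ]
    print(sorted(sort_by_condition(data)))   # [('chest', 'PA'), ('hand', 'AP')]
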
In step S203, the machine learning unit 124 creates a plurality of learning models, one for each condition, by machine learning using the data (medical information) and the conditions set in step S202. That is, the machine learning unit 124 receives the data acquired by the learning/evaluation data acquisition unit 114 (or the data held in the learning/evaluation data storage unit 115) and the conditions set by the condition setting unit 123, and creates a plurality of learning models that differ for each condition. For example, if the imaging site is set as the condition in step S202, a learning model is created for each imaging site, and if the imaging direction is set as the condition, a learning model is created for each imaging direction.
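
As a sketch of the control flow of step S203 only: one model is trained per condition bucket. The train_model callable below is a placeholder for whatever learning algorithm is actually used (the patent does not specify one), so this illustrates the per-condition bookkeeping rather than the training itself.

    def create_models_per_condition(buckets, train_model):
        """Create one learning model per condition (cf. step S203).

        buckets: mapping from a condition to its list of training records.
        train_model: placeholder training routine returning a trained model.
        """
        return {condition: train_model(records)
                for condition, records in buckets.items()}

    # Toy stand-in for a real trainer, used only to make the sketch runnable.
    def dummy_trainer(records):
        return {"n_samples": len(records)}

    buckets = {("chest",): [1, 2, 3], ("hand",): [4]}
    models = create_models_per_condition(buckets, dummy_trainer)
    print(models)   # {('chest',): {'n_samples': 3}, ('hand',): {'n_samples': 1}}
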
In step S204, the determination accuracy evaluation unit 116 evaluates the accuracy of the determination processing performed with the learning models newly created by the machine learning unit 124. That is, based on the medical information acquired by the learning/evaluation data acquisition unit 114 (or the medical information held in the learning/evaluation data storage unit 115) and the conditions set by the condition setting unit 123, the determination accuracy evaluation unit 116 evaluates the accuracy of the plurality of newly created learning models that differ for each condition. As the accuracy evaluation, the determination accuracy evaluation unit 116 obtains the condition used for the evaluation (evaluation data condition) and the accuracy of the determination processing (determination accuracy) when evaluation data corresponding to that condition is input. The determination accuracy evaluation unit 116 uses, as the condition of the evaluation data, the same data condition as the one used by the machine learning unit 124 to create the learning model. For example, the determination accuracy evaluation unit 116 uses chest-region data as the evaluation data for the learning model created under the chest-region condition.
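
A minimal sketch of the accuracy evaluation in step S204, under the assumption that a model exposes a predict method and that evaluation records carry ground-truth labels; these interfaces are hypothetical, and the only point carried over from the patent is that each new model is scored on evaluation data of the same condition it was created under.

    def judgment_accuracy(model, eval_records):
        """Fraction of evaluation records the model classifies correctly."""
        correct = sum(1 for rec in eval_records
                      if model.predict(rec["input"]) == rec["label"])
        return correct / len(eval_records)

    def evaluate_models(models, eval_buckets):
        """Step S204 sketch: evaluate each new model only on evaluation data
        whose condition matches the condition the model was created for."""
        results = {}
        for condition, model in models.items():
            records = eval_buckets.get(condition, [])
            if records:
                results[condition] = judgment_accuracy(model, records)
        return results

    class _AlwaysInside:
        """Toy model used only to exercise the sketch."""
        def predict(self, x):
            return "inside"

    demo_models = {("chest",): _AlwaysInside()}
    demo_eval = {("chest",): [{"input": 0, "label": "inside"},
                              {"input": 1, "label": "outside"}]}
    print(evaluate_models(demo_models, demo_eval))   # {('chest',): 0.5}
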
 ステップS205において、判定精度評価部116は、作成された学習モデルの精度評価の結果に基づいて、機械学習部124の学習により新たに作成された学習モデルを判定部104に適用するか否かの判断を行う。例えば、判定精度評価部116は、機械学習部124の学習により新たに作成された学習モデルによる判定処理の精度と、現在、判定部104で使用されている学習モデルによる判定処理の精度との比較に基づいて、精度評価を行う。 In step S205, the determination accuracy evaluation unit 116 applies the learning model newly created by the learning of the machine learning unit 124 to the determination unit 104 based on the result of the accuracy evaluation of the created learning model. Make a decision. For example, the judgment accuracy evaluation unit 116 compares the accuracy of the judgment processing by the learning model newly created by the learning of the machine learning unit 124 with the accuracy of the judgment processing by the learning model currently used in the judgment unit 104. The accuracy is evaluated based on.
 判定部104は、新たに作成された学習モデルに対して、それぞれの条件に対応した評価データを入力し、新たに作成された学習モデルから出力される判定処理の精度を取得する。そして、判定部104は、新たに作成された学習モデルから出力される判定処理の精度と、判定部104で現在使用されている学習モデルに評価データを入力した場合に、現在使用されている学習モデルから出力される判定処理の精度と、の比較に基づいて、精度評価を行う。 The determination unit 104 inputs evaluation data corresponding to each condition into the newly created learning model, and acquires the accuracy of the determination process output from the newly created learning model. Then, the determination unit 104 has the accuracy of the determination process output from the newly created learning model, and the learning currently used when the evaluation data is input to the learning model currently used by the determination unit 104. The accuracy is evaluated based on the comparison with the accuracy of the judgment process output from the model.
 判定精度評価部116は、判定精度の比較結果に基づいて、新たな学習モデルの判定精度が、判定部104で現在使用されている学習モデルの判定精度以下の場合(S205-No)、処理を終了する。 The determination accuracy evaluation unit 116 performs processing when the determination accuracy of the new learning model is equal to or less than the determination accuracy of the learning model currently used by the determination unit 104 based on the comparison result of the determination accuracy (S205-No). finish.
 一方、判定精度評価部116は、判定精度の比較結果に基づいて、新たな学習モデルの判定精度が、判定部104で現在使用されている学習モデルの判定精度よりも高い場合に(S205-YES)、判定精度評価部116は、処理をステップS206に進め、作成された新たな学習モデルを判定部104に出力するという判断を行う。 On the other hand, the determination accuracy evaluation unit 116 determines that the determination accuracy of the new learning model is higher than the determination accuracy of the learning model currently used by the determination unit 104 based on the comparison result of the determination accuracy (S205-YES). ), The determination accuracy evaluation unit 116 advances the process to step S206, and determines that the created new learning model is output to the determination unit 104.
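The accuracy comparison of steps S204 and S205 can be sketched as follows, assuming the evaluation data are held as feature/label pairs; the new model is adopted only when its accuracy on evaluation data of the same condition is strictly higher than that of the model currently in use. The helper names and the use of plain accuracy as the metric are assumptions.

```python
# Sketch of the accuracy comparison in steps S204-S205.
# Assumption: eval_samples is a list of dicts with "features" and "label".
from sklearn.metrics import accuracy_score

def evaluate(model, eval_samples):
    """Determination accuracy of a model on evaluation data of one condition."""
    features = [s["features"] for s in eval_samples]
    labels = [s["label"] for s in eval_samples]
    return accuracy_score(labels, model.predict(features))

def should_apply(new_model, current_model, eval_samples):
    """S205: adopt the new model only if it beats the model currently in use."""
    new_acc = evaluate(new_model, eval_samples)
    current_acc = evaluate(current_model, eval_samples)
    return new_acc > current_acc, new_acc, current_acc
```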
In step S206, when the accuracy of the determination processing by the newly created learning model is higher than the accuracy of the determination processing by the learning model used in the determination unit 104, the determination accuracy evaluation unit 116 outputs the newly created learning model and the corresponding condition to the determination unit 104. That is, the determination accuracy evaluation unit 116 outputs the learning model newly created by the machine learning unit 124 and the condition set in step S202 to the radiography control device 101.

When the radiography control device 101 acquires the conditions and learning models output from the determination accuracy evaluation unit 116, the learning model for each condition is applied to the settings of the determination unit 104. For example, when the radiography control device 101 acquires a model A trained under the chest-region condition and a model B trained under the non-chest-region condition, the learning model matching the condition of the determination target data is selected and used in the machine-learning-based determination processing of the determination unit 104.
<Processing of the radiography control device 101>
The procedure of the processing performed by the radiography control device 101 in the first embodiment will be described with reference to the flowchart of FIG. 3.
In step S301, the determination unit 104 acquires, from the imaging control unit 107, the determination target data that serves as the input parameter for executing inference processing. For example, when the determination unit 104 performs irradiation field recognition on an X-ray image as the inference processing, the determination target data is the X-ray image. In addition, because the learning model is switched according to the conditions, information such as the imaging region is also included in the determination target data.

In step S302, the determination unit 104 acquires the learning models and conditions output from the machine learning control device 113 in the processing of step S206.

In step S303, the determination unit 104 selects the learning model matching the condition based on the acquired determination target data. The determination unit 104 selects, from the plurality of per-condition learning models, the learning model that matches the condition of the determination target data, and performs the determination processing using the selected learning model. For example, when performing irradiation field recognition on an X-ray image of the chest region, the determination unit 104 selects the learning model corresponding to the chest-region condition. The determination unit 104 then applies the learning model corresponding to the condition to the machine learning classifier.

In step S304, the determination unit 104 performs the machine-learning-based determination processing using the learning model selected in step S303. This completes the processing of the radiography control device 101.
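A minimal sketch of the condition-based model selection in steps S301 to S304, assuming the determination target data carries its condition (here a hypothetical "body_part" field) alongside the image features, might look like this:

```python
def run_determination(target, models_by_condition, default_model):
    """Select the model matching the target data's condition (S303) and run
    the determination (S304); fall back to the model already in use when no
    condition-specific model exists."""
    condition_value = target.get("body_part")
    model = models_by_condition.get(condition_value, default_model)
    return model.predict([target["features"]])[0]
```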
<Example of learning models>
Next, the learning models applied to the determination unit 104 will be described. FIG. 4 shows an example of the learning models applied to the determination unit 104. 4A of FIG. 4 shows the state before the learning model created by the machine learning unit 124 is applied to the determination unit 104, and 4B of FIG. 4 shows the state after the created learning model is applied. In the state before application shown in 4A of FIG. 4, the learning model is not switched according to the condition (the body part in the example of 4A), and the determination unit 104 performs the determination processing using the already set learning model A for all body parts. In the state after application shown in 4B of FIG. 4, the determination unit 104 switches the model to be used according to the condition (the body part in the example of 4B). For example, when the determination target data input to the determination unit 104 is an X-ray image of the chest region, the learning model B matching the chest-region condition is used.
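The switch illustrated in FIG. 4 can be pictured as a condition-to-model table that is updated when a new model is applied; the following sketch uses invented placeholder names for the models and condition values.

```python
# Placeholder condition-to-model tables corresponding to 4A and 4B of FIG. 4.
before_application = {"chest": "model_A", "other than chest": "model_A"}
after_application = {"chest": "model_B", "other than chest": "model_A"}

def select_model(table, body_part):
    """Look up the model for a body part, defaulting to the non-chest entry."""
    return table.get(body_part, table["other than chest"])

print(select_model(after_application, "chest"))             # model_B
print(select_model(after_application, "other than chest"))  # model_A
```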
According to the medical information processing system of this embodiment, learning models trained per condition are created at training time in the machine learning control device 113, and in the determination processing of the determination unit 104 of the radiography control device 101, the learning model matching the condition is selected and the determination processing is performed using the selected model. When the accuracy of a newly created learning model is higher than the accuracy of the learning model used in the determination unit 104, the determination unit 104 updates the learning model used for the determination processing corresponding to that condition with the newly created learning model.

When target data corresponding to the condition is input, the determination unit 104 performs the determination processing on the target data using the updated learning model. When target data whose condition differs from the condition of the updated learning model is input, the determination unit 104 performs the determination processing on the target data using the learning model already in use in the determination unit 104.

For example, if a learning model trained only on chest-region training data is created and applied to the determination unit 104, the determination accuracy is expected to drop when non-chest determination target data is input.

However, by switching models according to the condition, for example using the conventional learning model as-is for non-chest target data and the updated learning model for chest target data, the determination accuracy for a specific condition can be improved without lowering the determination accuracy for other conditions.

This embodiment has been described using the body part as the condition, but the condition is not limited to this example; medical information data such as sensor information, imaging conditions, subject information, image processing parameters, and image information can also be used as conditions. That is, at training time in the machine learning control device 113, learning models trained with medical information data as the condition are created, and in the determination processing of the determination unit 104 of the radiography control device 101, the learning model matching the medical information data is selected with that data as the condition, and the determination processing is performed using the selected model.

When the determination target data is data other than the medical information data used at training time, the conventionally used learning model is used as-is; when the determination target data corresponds to the medical information data used at training time, the model is switched to the learning model trained with that medical information data as the condition. As a result, the determination accuracy is not lowered when the determination target data is data other than the training-time medical information data, and the determination accuracy is improved when the determination target data corresponds to the training-time medical information data.
[Second Embodiment]
The second embodiment describes an example of condition setting that differs from the condition setting method described in step S202 of the first embodiment. The configuration of the medical information processing system according to this embodiment is the same as the configuration described with reference to FIG. 1 of the first embodiment. The following description of the medical information processing system 10 focuses on the processing that differs from the condition setting method of the first embodiment.
In the second embodiment, when the condition includes a plurality of parameters, the condition setting unit 123 sets conditions based on all combinations of the plurality of parameters, and the machine learning unit 124 creates a plurality of learning models, one differing for each condition, by machine learning based on the medical information and the conditions based on all combinations of the plurality of parameters.

FIG. 5 is a flowchart showing the procedure of the processing performed by the machine learning control device 113 according to the second embodiment. In the flowchart of FIG. 5, the processing of steps S201 and S203 to S206 is the same as in the flowchart of FIG. 2; the flowchart differs from FIG. 2 in the condition setting method (S501).

In step S501, when the condition includes a plurality of parameters, the condition setting unit 123 sets conditions based on all combinations of the plurality of parameters. The condition setting unit 123 automatically sets conditions based on every possible combination of the parameter values. For example, when the conditions for sorting the data used for machine learning include a parameter that can take the three values A1, A2, and A3 and a parameter that can take the four values B1, B2, B3, and B4, a total of 12 conditions are set based on all combinations of these parameter values.
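A minimal sketch of the exhaustive condition generation in step S501, assuming the parameters are supplied as lists of their possible values, is the Cartesian product of those value lists:

```python
from itertools import product

def all_condition_combinations(parameter_values):
    """Build one condition per combination of parameter values (step S501)."""
    names = list(parameter_values)
    value_lists = [parameter_values[name] for name in names]
    return [dict(zip(names, combo)) for combo in product(*value_lists)]

conditions = all_condition_combinations(
    {"A": ["A1", "A2", "A3"], "B": ["B1", "B2", "B3", "B4"]})
print(len(conditions))  # 12 conditions, matching the example in the text
```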
In the subsequent processing, as in the processing described with reference to FIG. 2, in step S203 the machine learning unit 124 performs learning on the medical information and the conditions based on all combinations of the plurality of parameters, and in step S204 the determination accuracy evaluation unit 116 evaluates the accuracy of the learning models. In step S205, the determination accuracy evaluation unit 116 decides, based on the result of the accuracy evaluation of the created learning models, whether to apply the newly created learning models to the determination unit 104.

When a learning model is applied to the determination unit 104, as in the processing described with reference to FIG. 3, in step S303 the determination unit 104 selects the learning model matching the conditions based on the acquired determination target data; that is, the learning model matching the combination of conditions is selected based on the determination target data.

As described above, according to the medical information processing system of this embodiment, learning and evaluation can be performed exhaustively, not only for the parameters set by the user but also for conditions based on all combinations of the parameters. When the user sets the conditions, the conditions are limited by the user's intent; by learning conditions based on all combinations of the parameters, every possibility of performance improvement, unrestricted by the user's intent, can be reflected in the conditions, and learning models can be created and evaluated accordingly. Note that this embodiment is merely an example; it suffices that the processing includes switching the learning model based on a determination of whether the determination target data matches the set conditions.
[Third Embodiment]
The third embodiment describes an example of processing in which the accuracy evaluation of the learning models in step S204 of the first embodiment is reported to the user, and whether to apply a newly created learning model to the determination unit 104 is decided based on the result of the accuracy evaluation. The configuration of the medical information processing system according to this embodiment is the same as the configuration described with reference to FIG. 1 of the first embodiment. The following description of the medical information processing system 10 focuses on the processing that differs from the first embodiment, namely the processing of reporting the accuracy evaluation of the learning models to the user.
In the third embodiment, the determination accuracy evaluation unit 116 reports, for the newly created learning models, the result of the accuracy evaluation for each condition.

FIG. 6 is a flowchart showing the procedure of the processing performed by the machine learning control device 113 according to the third embodiment. In the flowchart of FIG. 6, the processing of steps S201 to S206 is the same as in the flowchart of FIG. 2; the flowchart differs from FIG. 2 in that a step of reporting the evaluation result (S601) is added.

In step S601, the determination accuracy evaluation unit 116 reports the evaluation result of the learning models for each condition. 7A of FIG. 7 shows an example of the information reported by the determination accuracy evaluation unit 116 of the machine learning control device 113. The result of the accuracy evaluation includes information indicating the learning model to be evaluated, the condition used for the accuracy evaluation (the evaluation data condition), and the accuracy of the determination processing when evaluation data corresponding to the condition is input (the determination accuracy).

In 7A of FIG. 7, model A is the learning model currently in use by the determination unit 104, that is, the model before the learning model created by the machine learning unit 124 is applied to the determination unit 104; model A corresponds to the learning model described with reference to 4A of FIG. 4. Model B is the learning model newly created by the machine learning unit 124 and corresponds to the learning model (model B) described with reference to 4B of FIG. 4.

When the evaluation data condition is "chest", the newly created model B (accuracy 90%) has higher determination accuracy than the currently used model A (accuracy 80%). When the evaluation data condition is "other than chest", the newly created model B (accuracy 75%) has lower determination accuracy than the currently used model A (accuracy 80%).

When the determination target data is data other than the medical information data used at training time (data other than the chest), the conventionally used learning model (model A in FIG. 7) is used as-is; when the determination target data is the medical information data used at training time (chest data), the model is switched to the learning model trained with that medical information data as the condition (model B in FIG. 7). As a result, the determination accuracy is not lowered when the determination target data is data other than the training-time medical information data, and the determination accuracy is improved when the determination target data is the training-time medical information data.

Together with the per-condition evaluation results of the learning models, the determination accuracy evaluation unit 116 reports, as message information (7B in FIG. 7), the result of comparing the newly created learning model (model B) with the currently used learning model (model A). When there is a learning model (model B in 7B of FIG. 7) whose determination accuracy is higher than that of the learning model currently used in the determination unit 104 (model A in 7A of FIG. 7), the determination accuracy evaluation unit 116 generates message information (7B of FIG. 7) containing the condition (the evaluation data condition), the learning model (model B), and information on the accuracy, and notifies the user.

The determination accuracy evaluation unit 116 performs the notification using a combination of the accuracy evaluation results and message information comparing the accuracy of the learning model used in the determination unit 104 with the accuracy of the newly created learning model.

The notification of the evaluation results 7A and the message information 7B shown in FIG. 7 is output via the network 122. The display control unit 106 of the radiography control device 101 causes the display unit 102 to display the combination of the accuracy evaluation results and the message information. The evaluation results and message information output from the determination accuracy evaluation unit 116 are displayed on the display unit 102 under the control of the display control unit 106.
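One way to assemble the per-condition evaluation table and the accompanying message of step S601 (7A and 7B in FIG. 7) is sketched below; the accuracy figures reuse the example values from the text, while the data layout and the message wording are assumptions.

```python
# Evaluation results mirroring 7A of FIG. 7 (model A: in use, model B: new).
evaluation_results = [
    {"model": "A (in use)", "condition": "chest",            "accuracy": 0.80},
    {"model": "B (new)",    "condition": "chest",            "accuracy": 0.90},
    {"model": "A (in use)", "condition": "other than chest", "accuracy": 0.80},
    {"model": "B (new)",    "condition": "other than chest", "accuracy": 0.75},
]

def build_messages(results):
    """Emit a message for each condition where a new model beats the model in use."""
    messages = []
    for condition in sorted({r["condition"] for r in results}):
        in_use = max(r["accuracy"] for r in results
                     if r["condition"] == condition and "in use" in r["model"])
        for r in results:
            if r["condition"] == condition and "new" in r["model"] and r["accuracy"] > in_use:
                messages.append(
                    f"For condition '{condition}', model {r['model']} improves "
                    f"accuracy from {in_use:.0%} to {r['accuracy']:.0%}.")
    return messages

for message in build_messages(evaluation_results):
    print(message)
```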
As described above, according to the medical information processing system of this embodiment, reporting the evaluation results informs the user which model is appropriate for data of which condition, and the information can be visualized so that the user can view the evaluation results and the message information at a glance. Note that this embodiment is merely an example and is not limited to this configuration as long as it includes a mechanism for reporting an evaluation result for each combination of learning model and condition (evaluation data condition). The format of the notification shown in FIG. 7 can be changed and is not necessarily limited to the format shown in FIG. 7.
[Fourth Embodiment]
In the first to third embodiments, the medical information processing system 10 has been described as a configuration including the radiography control device 101 and the machine learning control device 113, but the configuration is not limited to this, and the system may also be configured as a single medical information processing apparatus. For example, the functional configuration of the machine learning control device 113 shown in FIG. 1 may be provided inside the radiography control device 101, or the functional configuration of the radiography control device 101 may be provided inside the machine learning control device 113. When configured as a single medical information processing apparatus, effects similar to those realized by the medical information processing system 10 described above can be obtained.
That is, according to each of the above embodiments, determination processing can be performed with a learning model that matches the conditions of the data. As a result, the determination performance is improved in determination processing using a learning model that matches the data conditions, and degradation of the determination performance is suppressed under other conditions.
[Other Embodiments]
The present invention can also be realized by supplying a program that implements one or more functions of the above embodiments to a system or apparatus via a network or a storage medium, and having one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, the following claims are attached to make the scope of the invention public.
This application claims priority based on Japanese Patent Application No. 2020-012887 filed on January 29, 2020, the entire contents of which are incorporated herein by reference.
101 Radiography control device
102 Display unit
104 Determination unit
106 Display control unit
107 Imaging control unit
109 Radiation detector
113 Machine learning control device
114 Learning/evaluation data acquisition unit
115 Learning/evaluation data storage unit
116 Determination accuracy evaluation unit
122 Network
123 Condition setting unit
124 Machine learning unit

Claims (15)

1. A medical information processing system comprising:
   acquisition means for acquiring medical information;
   condition setting means for setting conditions for sorting data used for machine learning; and
   determination means for performing determination processing on target data using learning models that are generated by the machine learning using the medical information and the conditions and that differ for each condition,
   wherein the determination means selects, from the learning models that differ for each condition, a learning model that matches a condition of the target data, and performs the determination processing using the selected learning model.

2. The medical information processing system according to claim 1, wherein, when the conditions include a plurality of parameters, the condition setting means sets conditions based on all combinations of the plurality of parameters, and
   the determination means performs the determination processing on the target data using the learning models that differ for each condition and that are generated by the machine learning based on the medical information and the conditions based on all combinations of the plurality of parameters.

3. The medical information processing system according to claim 1 or 2, further comprising evaluation means for evaluating an accuracy of the determination processing by a newly created learning model,
   wherein the evaluation means performs the accuracy evaluation based on a comparison between the accuracy of the determination processing by the newly created learning model and the accuracy of the determination processing by the learning model used in the determination means.

4. The medical information processing system according to claim 3, wherein the evaluation means performs the accuracy evaluation based on a comparison between the accuracy of the determination processing obtained when evaluation data corresponding to each condition is input to the newly created learning model and the accuracy of the determination processing obtained when the evaluation data is input to the learning model used in the determination means.

5. The medical information processing system according to claim 4, wherein, when the accuracy of the determination processing by the newly created learning model is higher than the accuracy of the determination processing by the learning model used in the determination means, the evaluation means outputs the newly created learning model and the condition to the determination means.

6. The medical information processing system according to any one of claims 3 to 5, wherein, when the accuracy of the newly created learning model is higher than the accuracy of the learning model used in the determination means, the determination means updates the learning model used in the determination processing corresponding to the condition with the newly created learning model.

7. The medical information processing system according to claim 6, wherein, when target data corresponding to the condition is input, the determination means performs the determination processing on the target data using the updated learning model.

8. The medical information processing system according to claim 6, wherein, when target data not corresponding to the condition is input, the determination means performs the determination processing on the target data using the learning model used in the determination means.

9. The medical information processing system according to any one of claims 3 to 8, wherein the evaluation means reports, for the newly created learning model, a result of the accuracy evaluation for each condition.

10. The medical information processing system according to claim 9, wherein the result of the accuracy evaluation includes information indicating the learning model to be evaluated, the condition used for the accuracy evaluation, and the accuracy of the determination processing when evaluation data corresponding to the condition is input.

11. The medical information processing system according to claim 9 or 10, wherein the evaluation means performs the notification using a combination of the result of the accuracy evaluation and message information comparing the accuracy of the learning model used in the determination means with the accuracy of the newly created learning model.

12. The medical information processing system according to claim 11, further comprising display control means for causing display means to display a combination of the result of the accuracy evaluation and the message information.

13. A medical information processing apparatus comprising:
   acquisition means for acquiring medical information;
   condition setting means for setting conditions for sorting data used for machine learning; and
   determination means for performing determination processing on target data using learning models that are generated by the machine learning using the medical information and the conditions and that differ for each condition,
   wherein the determination means selects, from the learning models that differ for each condition, a learning model that matches a condition of the target data, and performs the determination processing using the selected learning model.

14. A control method for a medical information processing system, comprising:
   an acquisition step in which acquisition means acquires medical information;
   a condition setting step in which condition setting means sets conditions for sorting data used for machine learning; and
   a determination step in which determination means performs determination processing on target data using learning models that are generated by the machine learning using the medical information and the conditions and that differ for each condition,
   wherein, in the determination step, a learning model that matches a condition of the target data is selected from the learning models that differ for each condition, and the determination processing is performed using the selected learning model.

15. A program that causes a computer to execute each step of the control method for the medical information processing system according to claim 14.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-012887 2020-01-29
JP2020012887A JP2021117926A (en) 2020-01-29 2020-01-29 Medical information processing system, medical information processing apparatus, control method of medical information processing system, and program

Publications (1)

Publication Number Publication Date
WO2021153314A1 (en)

Family

ID=77078366

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/001491 WO2021153314A1 (en) 2020-01-29 2021-01-18 Medical information processing system, medical information processing device, control method for medical information processing system, and program

Country Status (2)

Country Link
JP (1) JP2021117926A (en)
WO (1) WO2021153314A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7487159B2 (en) * 2021-10-12 2024-05-20 キヤノン株式会社 Medical image processing device, medical image processing method and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014155690A1 (en) * 2013-03-29 2014-10-02 富士通株式会社 Model updating method, device and program
JP2018147023A (en) * 2017-03-01 2018-09-20 ヤフー株式会社 Provision device, provision method and provision program
JP2019109620A (en) * 2017-12-15 2019-07-04 ヤフー株式会社 Estimation device, method for estimation, and estimation program
JP2020010805A (en) * 2018-07-17 2020-01-23 大日本印刷株式会社 Specification device, program, specification method, information processing device, and specifier
JP2020010823A (en) * 2018-07-18 2020-01-23 キヤノンメディカルシステムズ株式会社 Medical information processing apparatus, medical information processing system, and medical information processing program

Also Published As

Publication number Publication date
JP2021117926A (en) 2021-08-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21748302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21748302

Country of ref document: EP

Kind code of ref document: A1