WO2020255292A1 - Bone section image analysis method and learning method - Google Patents

Bone section image analysis method and learning method

Info

Publication number
WO2020255292A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
bone
learning
bone region
predetermined
Prior art date
Application number
PCT/JP2019/024263
Other languages
French (fr)
Japanese (ja)
Inventor
翔太 押川 (Shota Oshikawa)
髙橋 渉 (Wataru Takahashi)
Original Assignee
株式会社島津製作所 (Shimadzu Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社島津製作所 (Shimadzu Corporation)
Priority to JP2021528532A (granted as JP7173338B2)
Priority to CN201980096648.4A (published as CN113873945A)
Priority to PCT/JP2019/024263 (published as WO2020255292A1)
Priority to KR1020217041120A (published as KR20220010529A)
Publication of WO2020255292A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50: Apparatus or devices specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/505: specially adapted for diagnosis of bone
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211: involving processing of medical diagnostic data
    • A61B 6/5217: extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 6/5258: involving detection or reduction of artifacts or noise
    • A61B 6/5282: involving detection or reduction of artifacts or noise due to scatter
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment

Definitions

  • The present invention relates to a bone image analysis method and a learning method, and more particularly to a bone image analysis method and a learning method for analyzing a predetermined bone region of a subject.
  • Conventionally, bone image analysis methods and learning methods for analyzing a predetermined bone region of a subject have been known. Such a bone image analysis method is disclosed in, for example, Japanese Patent No. 2638875.
  • Japanese Patent No. 2638875 discloses a bone mineral quantitative analyzer comprising a means for generating radiation and a single crystal lattice irradiated with that radiation. The analyzer further comprises a means for simultaneously irradiating the subject with radiation of two different energies, obtained by collimating only the radiation reflected from the crystal lattice at two predetermined reflection angles (so that the two beams travel parallel to each other). By scanning the subject simultaneously with the radiation of the two energies, the analyzer performs bone mineral quantitative analysis (measurement of bone density) of the subject using the transmission data corresponding to each X-ray energy.
  • Bone density measurement as described above generally targets the lumbar spine and the femur. The shape of the femur varies greatly between individuals, so identifying the subject's bone region is important for stable follow-up observation. Accordingly, to identify (extract) a bone region (bone image) more accurately, it has been proposed to identify (extract) the bone region based on the learning result of machine learning.
  • The present invention has been made to solve the above problem, and one object of the present invention is to provide a bone image analysis method and a learning method capable of facilitating the analysis of a bone portion on a captured image of a subject in which a member having a higher brightness value than the bone is provided in a predetermined bone region.
  • A bone image analysis method according to a first aspect of the present invention includes: a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed; a step of acquiring a first learning input image by adding, to some of the plurality of bone region images, a first simulated member image simulating a predetermined member having a higher brightness value than the bone; a step of acquiring a first label image including first correct answer information on the positions at which the predetermined bone region and the first simulated member image are displayed in the first learning input image; a step of performing, using the first learning input image and the first label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed; and a step of extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  • A learning method according to a second aspect of the present invention includes: a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed; a step of acquiring a learning input image by adding, to some of the plurality of bone region images, a simulated member image simulating a predetermined member having a higher brightness value than the bone; a step of acquiring a label image including correct answer information on the positions at which the predetermined bone region and the simulated member image are displayed in the learning input image; and a step of performing, using the learning input image and the label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed.
  • According to the present invention, machine learning is performed using a first learning input image (learning input image) in which a simulated member image simulating a predetermined member having a higher brightness value than the bone has been added to a bone region image. In this way, machine learning can be performed on simulated images that look as if the predetermined member were actually provided in the predetermined bone region. As a result, even when images in which the predetermined member is actually present cannot be prepared in sufficient numbers because such cases are rare, the simulated first learning input images (learning input images) still make it possible to perform machine learning for extracting the predetermined bone region and the predetermined member. Consequently, the predetermined bone region (and the predetermined member) can be appropriately extracted from a captured image of a subject in which a predetermined member having a higher brightness value than the bone is provided in the predetermined bone region, which facilitates analysis of the bone portion on such a captured image.
  • The X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an image processing unit 3, and a control unit 4. The X-ray imaging apparatus 100 further includes a display unit 5 for displaying the image processed by the image processing unit 3.
  • the X-ray irradiation unit 1 irradiates the subject T with X-rays.
  • the X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 to the subject T.
  • The X-ray imaging apparatus 100 is used, for example, to calculate (measure) the bone density of the bone region A (see FIG. 2) of the subject T. For this measurement, DEXA (Dual-Energy X-ray Absorptiometry) is used.
  • the bone region A is a region including the femur and the pelvis. That is, the bone region A exists in each of the left half body and the right half body of the subject T.
  • the bone region A is an example of the "predetermined bone region" in the claims.
  • the X-ray irradiation unit 1 includes an X-ray source 1a.
  • the X-ray source 1a is an X-ray tube that is connected to a high voltage generating portion (not shown) and generates X-rays when a high voltage is applied.
  • the X-ray source 1a is arranged so that the X-ray emission direction is directed toward the detection surface of the X-ray detection unit 2.
  • the X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 and transmitted through the subject T, and outputs a detection signal according to the detected X-ray intensity.
  • The X-ray detection unit 2 is composed of, for example, an FPD (Flat Panel Detector).
  • The image processing unit 3 includes an image acquisition unit 3a, a machine-learning-based region extraction unit 3b, and an analysis unit 3c.
  • Each of the image acquisition unit 3a, the machine-learning-based region extraction unit 3b, and the analysis unit 3c is a functional block implemented as software in the image processing unit 3. That is, each of these units is configured to function based on command signals from the control unit 4.
  • the image acquisition unit 3a acquires the captured image 10 (see FIG. 2) of the subject T based on the X-rays detected by the X-ray detection unit 2.
  • the captured image 10 is an energy subtraction image acquired by calculating the difference between the acquired images using X-rays of two different energies.
  • The captured image 10 may instead be a plain X-ray image or a DRR (Digitally Reconstructed Radiograph) image created from the CT image data of the subject T.
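  • As an illustration of the energy subtraction mentioned above, the following is a minimal Python sketch (not from the patent): it assumes the low- and high-energy images are already registered arrays of raw detector intensities, and the weighting factor w stands in for a calibration constant that a real DEXA system would determine.

```python
import numpy as np

def energy_subtraction(low_kv: np.ndarray, high_kv: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Suppress soft tissue by weighted subtraction of two log-attenuation images."""
    eps = 1e-6
    # Convert detector intensities to log-attenuation so tissue contributions add linearly.
    log_low = -np.log(np.clip(low_kv, eps, None))
    log_high = -np.log(np.clip(high_kv, eps, None))
    # Subtracting a weighted low-energy image leaves mostly bone contrast.
    return log_high - w * log_low
```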
  • The machine-learning-based region extraction unit 3b is configured to extract a predetermined region on the captured image 10, which is acquired based on the X-rays detected by the X-ray detection unit 2, based on the learning result of the machine learning performed in the learning device 200. Specifically, in the first embodiment, deep learning is used as the machine learning.
  • the analysis unit 3c is configured to calculate the circularity and the bone density of the region designated by the technician in the captured image 10.
  • The learning device 200 is configured to perform machine learning for extracting the bone region A on a captured image 10 (see FIG. 2) in which the bone region A is displayed. The learning device 200 is also configured to perform machine learning for extracting the bone region A and the member 300 on a captured image 10 (see FIG. 3) in which the bone region A and the member 300 are displayed.
  • the member 300 is an example of a "predetermined member" in the claims.
  • The member 300 is a member having a higher brightness value than the bone portion.
  • The member 300 includes metal, at least a part of which is disposed inside the bone region A.
  • The member 300 may be, for example, a metal indwelling object used in orthopedic surgery, such as an artificial joint, a fixation plate, or a screw.
  • The bone image analysis method includes a step, performed in step 101, of acquiring a plurality of (for example, 100) bone region images 20 (see FIG. 5).
  • the plurality of bone region images 20 are images in which the bone region A is displayed.
  • the plurality of bone region images 20 may be images acquired by the X-ray imaging device 100 or images acquired by other devices.
  • the bone region image 20 may be any of an energy subtraction image, an X-ray image, and a DRR image. In the case of a DRR image, a DRR image of only the bone may be used.
  • The plurality of bone region images 20 may also include images taken at different tube voltages, such as a low voltage and a high voltage.
  • The step of acquiring the bone region images 20 includes a step of acquiring, as a bone region image 20, a right-side bone region image 21 in which the bone region A of one of the left and right sides (the right side, as an example in the first embodiment) is displayed.
  • The step also includes a step of horizontally flipping a left-side bone region image 22, in which the bone region A of the other side (that is, the left side) is displayed, to obtain a post-inversion bone region image 23, and acquiring it as a bone region image 20. That is, the left-side bone region image 22 in which the left bone region A is displayed is flipped horizontally to obtain a simulated post-inversion bone region image 23 in which a right-side bone region A appears to be displayed, as illustrated in the sketch below.
  • The right-side bone region image 21 is an example of the "one-sided bone region image" in the claims.
  • The left-side bone region image 22 and the post-inversion bone region image 23 are examples of the "other-side bone region pre-inversion image" and the "other-side bone region post-inversion image" in the claims, respectively.
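  • A minimal sketch of this orientation-alignment step (Python with NumPy; the function name and the laterality flag are illustrative, not taken from the patent):

```python
import numpy as np

def to_right_side_view(image: np.ndarray, is_left_side: bool) -> np.ndarray:
    """Return a bone region image oriented as a right-side image.

    Left-side images are flipped horizontally so that every training image
    shows the bone region A in the same (right-side) orientation.
    """
    return np.fliplr(image) if is_left_side else image
```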
  • The bone image analysis method includes a step, performed in step 102, of acquiring a plurality of learning input images 30 (see FIG. 6).
  • the learning input image 30 is an example of the "first learning input image” in the claims.
  • Specifically, a learning input image 30 is acquired by adding a simulated member image 300a simulating the member 300 to some of the plurality of bone region images 20.
  • The step of acquiring the learning input images 30 includes a step of adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating metal at least a part of which is arranged inside the bone region A.
  • The simulated member image 300a is added to, for example, about 30% of the plurality of bone region images 20.
  • the ratio of 30% is an example and is not limited to this.
  • the shape of the simulated member image 300a shown in FIG. 6 is an example, and may be, for example, a circular shape or a triangular shape.
  • the simulated member image 300a is an example of the "first simulated member image" in the claims.
  • the step of adding the simulated member image 300a simulating the metal includes a step of adding the simulated member image 300a having a brightness value substantially equal to the brightness value of the metal to the bone region image 20.
  • the brightness value of the simulated member image 300a is randomly selected (set) from a range of predetermined brightness values considered as metal.
  • In the first embodiment, the right-side learning input image 31 and the left-side learning input image 32 are acquired as the plurality of learning input images 30. That is, the simulated member image 300a is added to learning input images 30 (31, 32) in which the orientation of the bone region A is aligned across all images.
  • The bone region images 20 to which the simulated member image 300a is added may be only the right-side bone region images 21 or only the post-inversion bone region images 23.
  • The right-side learning input image 31 and the left-side learning input image 32 are examples of the "one-sided learning image" and the "other-side learning image" in the claims, respectively.
  • In the following, the right-side bone region image 21 and the post-inversion bone region image 23 will be described as the bone region images 20 without distinction. Likewise, the right-side learning input image 31 and the left-side learning input image 32 will be described as the learning input images 30 without distinction.
  • The step of acquiring the learning input images 30 includes a step of acquiring the plurality of learning input images 30 by adding a simulated member image 300a to each of the bone region images 20 concerned, such that at least one of the brightness value, shape, position, and number of the simulated member image 300a differs between those bone region images 20. Specifically, the brightness value, shape, position, and number of the simulated member image 300a to be added are set randomly by the image processing unit 3 (image acquisition unit 3a) for each bone region image 20, and are adjusted so that at least one of them differs between the bone region images 20. For example, the shape and number of the simulated member images 300a differ between the right-side learning input image 31 and the left-side learning input image 32 in FIG. 6. A sketch of this randomized augmentation follows below.
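  • The following is a minimal sketch of this randomized augmentation (Python with NumPy). The brightness range METAL_MIN/METAL_MAX, the rectangle-only shapes, and the per-image member count are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intensity range treated as "metal"; a real system would calibrate this.
METAL_MIN, METAL_MAX = 3000.0, 4000.0

def add_simulated_member(image, bone_mask):
    """Paste one or two bright, metal-like rectangles anchored inside the bone region.

    Brightness, size, position, and count are randomized so that at least one of
    them differs between training images. Returns the augmented image and a
    boolean mask of the pasted member pixels. Only rectangles are drawn here;
    circular or triangular shapes would be handled analogously.
    """
    out = image.astype(np.float32)
    member = np.zeros(image.shape, dtype=bool)
    ys, xs = np.nonzero(bone_mask)                # candidate anchors inside bone region A
    for _ in range(int(rng.integers(1, 3))):      # 1 or 2 simulated members per image
        k = int(rng.integers(len(ys)))
        cy, cx = int(ys[k]), int(xs[k])
        h, w = (int(v) for v in rng.integers(8, 32, size=2))  # random rectangle size
        y0, x0 = max(cy - h // 2, 0), max(cx - w // 2, 0)
        out[y0:y0 + h, x0:x0 + w] = rng.uniform(METAL_MIN, METAL_MAX)  # metal-like brightness
        member[y0:y0 + h, x0:x0 + w] = True
    return out, member
```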
  • The bone image analysis method includes a step, performed in step 103, of acquiring label images 40 (see FIG. 7).
  • the label image 40 includes correct answer information 400 at a position where the bone region A and the simulated member image 300a are displayed in the learning input image 30.
  • the label image 40 is an image manually generated (acquired) by an engineer based on each of the plurality of learning input images 30.
  • the label image 40 and the correct answer information 400 are examples of the "first label image" and the "first correct answer information" in the claims, respectively.
  • The step of acquiring the label image 40 includes a step of acquiring a label image 40 in which a common correct answer value is given to the positions corresponding to the bone region A and the simulated member image 300a on the label image 40. Specifically, a common correct answer value of 1 is given to each of the positions (coordinates) on the label image 40 corresponding to the bone region A and the simulated member image 300a on the learning input image 30, while the remaining portion (background portion) of the label image 40 is given the value 0. That is, the label image 40 is binarized into a region corresponding to the bone region A and the simulated member image 300a, and a region corresponding to the remaining (background) portion, as sketched below.
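  • A sketch of how such a binary label image could be built from the two region masks (Python; in the patent's workflow the masks come from the engineer's manual annotation):

```python
import numpy as np

def make_binary_label(bone_mask: np.ndarray, member_mask: np.ndarray) -> np.ndarray:
    """Label image 40: value 1 for bone region A and the simulated member, 0 elsewhere."""
    label = np.zeros(bone_mask.shape, dtype=np.uint8)
    label[bone_mask | member_mask] = 1  # common correct answer value
    return label
```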
  • The bone image analysis method (learning method) also includes a step, performed in step 103, of acquiring, for the bone region images 20 to which no simulated member image 300a has been added, a label image 41 (see FIG. 8) including correct answer information 410 (see FIG. 8) on the position at which the bone region A is displayed.
  • The bone region images 20 to which the simulated member image 300a is not added account for about 70% of all the bone region images 20.
  • the label image 41 and the correct answer information 410 are examples of the "third label image" and the "third correct answer information" in the claims, respectively.
  • Specifically, the correct answer value 1 is given to the positions (coordinates) on the label image 41 corresponding to the bone region A in the bone region image 20 (to which no simulated member image 300a has been added), and the remaining portion (background portion) of the label image 41 is given the value 0. That is, the label image 41 is binarized into a region corresponding to the bone region A and a region corresponding to the remaining (background) portion.
  • The bone image analysis method includes a step, performed in step 104, of performing machine learning.
  • The machine learning in step 104 is a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 (see FIG. 3) in which the bone region A and the member 300 are displayed.
  • This machine learning is carried out using, as learning data, a plurality of pairs of mutually corresponding learning input images 30 and label images 40.
  • A pair of a mutually corresponding learning input image 30 and label image 40 means a pair composed of one learning input image 30 and the label image 40 generated (acquired) from that learning input image 30.
  • the captured image 10 is an image captured by the X-ray imaging apparatus 100.
  • In the first embodiment, machine learning is performed using both the pairs of learning input images 30 and label images 40 and the pairs of bone region images 20 (to which no simulated member image 300a has been added) and label images 41.
  • That is, both the bone region images 20 to which the simulated member image 300a has been added (the learning input images 30) and the bone region images 20 to which it has not been added are used as input data for the machine learning.
  • The ratio of the pairs of learning input images 30 and label images 40 to the pairs of bone region images 20 and label images 41 is, for example, about 3:7, as in the sketch below.
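  • A sketch of assembling such a mixed training set (Python; add_simulated_member and make_binary_label refer to the illustrative helpers sketched above, and the 3:7 split is applied by construction):

```python
import numpy as np

def build_training_pairs(bone_images, bone_masks, member_ratio=0.3):
    """Mix ~30% augmented pairs (label image 40) with ~70% plain pairs (label image 41)."""
    pairs = []
    n_member = int(len(bone_images) * member_ratio)
    for i, (img, bone) in enumerate(zip(bone_images, bone_masks)):
        if i < n_member:
            aug, member = add_simulated_member(img, bone)
            pairs.append((aug, make_binary_label(bone, member)))  # label image 40
        else:
            empty = np.zeros_like(bone, dtype=bool)
            pairs.append((img, make_binary_label(bone, empty)))   # label image 41
    return pairs
```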
  • In the first embodiment, step 104 of performing machine learning includes a step of performing re-learning.
  • In the re-learning, a learning input image 50 (see FIG. 9) is used that is obtained by adding, to the bone region image 20 from which a learning input image 30 was generated, a simulated member image 300b (see FIG. 9) simulating the member 300 that differs from the simulated member image 300a (see FIG. 6) of that learning input image 30 in at least one of brightness value, shape, position, and number.
  • In addition, a label image 60 (see FIG. 10) including correct answer information 600 on the positions at which the bone region A and the simulated member image 300b are displayed in the learning input image 50 is used.
  • The re-learning using the learning input images 50 and the label images 60 is performed after the machine learning using the learning input images 30 (see FIG. 7) and the label images 40 (see FIG. 7).
  • the label image 60 and the learning input image 50 are examples of the “second label image” and the “second learning input image” in the claims, respectively.
  • the correct answer information 600 and the simulated member image 300b are examples of the "second correct answer information" and the "second simulated member image” in the claims, respectively.
  • Specifically, the common correct answer value 1 is given to each of the positions (coordinates) on the label image 60 corresponding to the bone region A and the simulated member image 300b on the learning input image 50, and the remaining portion (background portion) of the label image 60 is given the value 0. That is, the label image 60 is binarized into a region corresponding to the bone region A and the simulated member image 300b, and a region corresponding to the remaining (background) portion.
  • The re-learning is repeated thousands of times after the machine learning using the learning input images 30 (see FIG. 7) and the label images 40 (see FIG. 7) (and the learning using the label images 41).
  • For each of the thousands of re-learning iterations, the image processing unit 3 (image acquisition unit 3a) adjusts each of the bone region images 20 to which a simulated member image 300b is added so that at least one of the brightness value, shape, position, and number of the simulated member image 300b is changed each time, as sketched below.
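  • A sketch of this re-learning loop (Python; model.fit_step is a hypothetical stand-in for whatever single-step training call the actual framework provides, and n_iterations is likewise an assumed parameter):

```python
def relearn(model, bone_images, bone_masks, n_iterations=5000):
    """Repeat training, randomizing the simulated member anew on every iteration."""
    for _ in range(n_iterations):
        for img, bone in zip(bone_images, bone_masks):
            # Fresh random brightness/shape/position/count each pass
            # stands in for the learning input image 50.
            aug, member = add_simulated_member(img, bone)
            label = make_binary_label(bone, member)  # plays the role of label image 60
            model.fit_step(aug, label)               # hypothetical one-step training call
```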
  • The bone image analysis method includes a step, performed in step 105, of acquiring an image to be subjected to image extraction (segmentation).
  • the photographed image 10 (see FIGS. 2 and 3) is acquired by photographing the subject T with the X-ray photographing apparatus 100.
  • When a captured image 10 in which the left bone region A is displayed is acquired, the captured image 10 is flipped horizontally to obtain an image in which a right-side bone region A appears to be displayed.
  • This makes it possible to align the orientation of the bone region A in the image subjected to image extraction (segmentation) with the orientation of the bone region A in the learning images (see FIGS. 7 and 8) used for the machine learning.
  • The bone image analysis method includes a step, performed in step 106, of extracting (segmenting) the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning of step 104. That is, the bone region A and the member 300 are extracted (segmented) on the captured image 10 based on the learning result of the thousands of re-learning iterations.
  • In the first embodiment, the step of extracting the bone region A and the member 300 includes a step of integrally extracting the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning (including the re-learning). That is, the region corresponding to the bone region A and the member 300 on the captured image 10 (the black portion in FIG. 11A) is extracted without distinguishing the bone region A from the member 300. As a result, on the captured image 10, the region corresponding to the bone region A and the member 300 is distinguished from the region corresponding to the remaining (background) portion (the white portion in FIG. 11A). As shown in the comparative example of FIG. 11B, with the conventional method (learning only the bone portion), the member 300 and the bone portion separated from the member 300 are extracted, but the bone around the member 300 is not.
  • The bone image analysis method includes a step, performed in step 107, of analyzing the image. Specifically, an arbitrary region within the bone region A extracted in step 106 is selected on the image, and the selection result is accepted by the image processing unit 3. Then, in the selected analysis region, the bone density, the circularity, and the like are measured (calculated).
  • The bone region A and the member 300 can also be handled separately afterwards: for example, by performing rule-based extraction based on the brightness values within the extracted (segmented) region, it is possible to extract only the bone region A, as sketched below.
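  • A sketch of such a rule-based separation (Python; the threshold is an illustrative assumption standing in for a calibrated metal-brightness cutoff):

```python
import numpy as np

def bone_only(image: np.ndarray, combined_mask: np.ndarray,
              metal_threshold: float = 3000.0) -> np.ndarray:
    """Within the integrally extracted region, keep only pixels darker than metal."""
    return combined_mask & (image < metal_threshold)
```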
  • In the first embodiment, as described above, the bone image analysis method includes a step of acquiring a plurality of bone region images 20 in which the bone region A is displayed, and a step of acquiring learning input images 30 by adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating a member 300 having a higher brightness value than the bone portion.
  • the bone image analysis method includes a step of acquiring a label image 40 including correct answer information 400 at a position where the bone region A and the simulated member image 300a are displayed in the learning input image 30.
  • Further, the bone image analysis method includes a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 that is captured by the X-ray imaging apparatus 100 and in which the bone region A and the member 300 are displayed.
  • Moreover, the bone image analysis method includes a step of extracting the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning. As a result, machine learning can be performed using simulated images (learning input images 30) that look as if the member 300 were actually provided in the bone region A.
  • Consequently, even when images in which the member 300 is actually provided in the bone region A cannot be prepared in sufficient numbers, machine learning for extracting the bone region A and the member 300 can still be performed.
  • As a result, the bone region A (and the member 300) can be appropriately extracted from a captured image 10 of a subject T in which a member 300 having a higher brightness value than the bone portion is provided in the bone region A.
  • In the first embodiment, as described above, the step of acquiring the learning input images 30 includes a step of adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating metal at least a part of which is arranged inside the bone region A. Thereby, the bone region A (and the metal) can be appropriately extracted from a captured image 10 in which metal is arranged in the bone region A.
  • In the first embodiment, as described above, the step of adding the simulated member image 300a simulating metal includes a step of adding, to the bone region image 20, a simulated member image 300a having a brightness value substantially equal to that of the metal.
  • Thereby, machine learning can be performed with the learning input images 30 under conditions similar to those in which metal is actually arranged in the bone region A.
  • As a result, the bone region A (and the metal) can be extracted more appropriately from a captured image 10 in which metal is arranged in the bone region A.
  • In the first embodiment, as described above, the step of acquiring the learning input images 30 includes a step of acquiring the plurality of learning input images 30 by adding a simulated member image 300a to each of the bone region images 20 concerned such that at least one of the brightness value, shape, position, and number of the simulated member image 300a differs between the bone region images 20.
  • Thereby, machine learning can be performed using mutually different learning input images 30, which gives variety to the learning input images 30 used for the machine learning.
  • As a result, machine learning can be performed with more types of learning input images 30, and the accuracy of the machine learning can be improved. Therefore, the bone region A (and the member 300) can be extracted more appropriately from a captured image 10 of a subject in which the member 300 is provided in the bone region A.
  • In the first embodiment, as described above, the step of acquiring the label image 40 includes a step of acquiring a label image 40 in which a common correct answer value is given to the positions corresponding to the bone region A and the simulated member image 300a on the label image 40.
  • Further, the step of extracting the bone region A and the member 300 includes a step of integrally extracting the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning.
  • Thereby, machine learning can be performed based on a smaller number of correct answer values than when different correct answer values are given to the bone region A and the simulated member image 300a.
  • As a result, the machine learning in the learning device 200 can be made relatively simple.
  • In the first embodiment, as described above, the method includes a step of performing re-learning after the learning using the learning input images 30 and the label images 40. The re-learning uses learning input images 50, in which a simulated member image 300b differing from the simulated member image 300a of a learning input image 30 has been added to the bone region image 20 from which that learning input image 30 was generated, together with label images 60 including correct answer information 600 on the positions at which the bone region A and the simulated member image 300b are displayed.
  • Further, the step of extracting the bone region A and the member 300 based on the learning result of the machine learning includes a step of extracting the bone region A and the member 300 on the captured image 10 based on the re-learned learning result.
  • Thereby, since the bone region images 20 from which the learning input images 30 were generated are reused in the re-learning, an increase in the number of bone region images 20 that must be prepared (stored) in advance can be suppressed, compared with the case where a larger number of bone region images 20 are learned at one time.
  • In the first embodiment, as described above, the method includes a step of acquiring, for the bone region images 20 of the plurality of bone region images 20 to which no simulated member image 300a has been added, label images 41 including correct answer information 410 on the positions at which the bone region A is displayed.
  • Further, the method includes a step of performing the machine learning using both the pairs of learning input images 30 and label images 40 and the pairs of bone region images 20 (without the simulated member image 300a) and label images 41.
  • Thereby, the bone region A (and the member 300) can be extracted from captured images 10 in which the member 300 is provided in the bone region A, and the bone region A can also be extracted from captured images 10 in which only the bone region A is displayed.
  • In the first embodiment, as described above, when the bone region A exists in each of the left half and the right half of the body of the subject T, the step of acquiring the plurality of bone region images 20 includes a step of acquiring, as the learning input images 30, a right-side learning input image 31 in which a simulated member image 300a has been added to a right-side bone region image 21 displaying the bone region A of one of the left and right sides, and a left-side learning input image 32 in which a simulated member image 300a has been added to a post-inversion bone region image 23 obtained by horizontally flipping a left-side bone region image 22 displaying the bone region A of the other side.
  • Thereby, the left-side bone region image 22 is flipped horizontally to acquire a left-side learning input image 32 simulated so that a right-side bone region A is displayed, so that the orientation of the bone region A can be aligned between the right-side learning input images 31 and the left-side learning input images 32.
  • As a result, learning can be performed on learning data (learning input images 30) with a unified orientation (that is, under unified learning conditions), so the learning efficiency of the machine learning can be made higher than when the left and right sides are learned separately.
  • machine learning includes deep learning.
  • Thereby, since the accuracy of deep learning in extracting a target region is relatively high, the bone region A (and the member 300) can be accurately extracted on the captured image 10.
  • In the first embodiment, as described above, the learning method includes a step of acquiring a plurality of bone region images 20 in which the bone region A is displayed, and a step of acquiring learning input images 30 by adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating the member 300 having a higher brightness value than the bone portion.
  • The learning method also includes a step of acquiring label images 40 including correct answer information 400 on the positions at which the bone region A and the simulated member image 300a are displayed in the learning input image 30.
  • The learning method further includes a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 that is captured by the X-ray imaging apparatus 100 and in which the bone region A and the member 300 are displayed.
  • Thereby, machine learning can be performed using simulated images (learning input images 30) that look as if the member 300 were actually provided in the bone region A.
  • Consequently, even when images in which the member 300 is actually provided in the bone region A cannot be prepared in sufficient numbers, machine learning for extracting the bone region A and the member 300 can still be performed.
  • As a result, the bone region A (and the member 300) can be appropriately extracted from a captured image 10 of a subject T in which a member 300 having a higher brightness value than the bone portion is provided in the bone region A. This makes it possible to provide a learning method that facilitates the analysis of the bone portion on such a captured image 10.
  • the configuration of the bone image analysis method (learning method) according to the second embodiment will be described with reference to FIGS. 12 to 15.
  • In the bone image analysis method (learning method) of the second embodiment, unlike the first embodiment in which extraction is performed without distinguishing the bone region A from the member 300, the bone region A and the member 300 are extracted separately.
  • the same configuration as that of the first embodiment is illustrated with the same reference numerals as those of the first embodiment, and the description thereof will be omitted.
  • In the second embodiment, the machine-learning-based region extraction unit 3b is configured to extract a predetermined region on the captured image 10, which is acquired based on the X-rays detected by the X-ray detection unit 2, based on the learning result of the machine learning performed in the learning device 210.
  • The bone image analysis method includes a step, performed in step 113, of acquiring label images 70 (see FIG. 14).
  • the label image 70 includes correct answer information 700 (see FIG. 14) at a position where the bone region A and the simulated member image 300a are displayed in the learning input image 30.
  • the label image 70 and the correct answer information 700 are examples of the "first label image” and the "first correct answer information" in the claims, respectively.
  • In the second embodiment, the step of acquiring the label image 70 includes a step of acquiring a label image 70 in which different correct answer values are given to the positions corresponding to the bone region A and the simulated member image 300a on the label image 70.
  • Specifically, the correct answer value 1 is given to the positions (coordinates) on the label image 70 corresponding to the bone region A on the learning input image 30, and the correct answer value 2 is given to the positions (coordinates) on the label image 70 corresponding to the simulated member image 300a on the learning input image 30.
  • The correct answer information 700 thus includes correct answer information 700a on the positions at which the bone region A is displayed in the learning input image 30 and correct answer information 700b on the positions at which the simulated member image 300a is displayed in the learning input image 30.
  • The remaining portion (background portion) of the label image 70 is given the value 0. That is, the label image 70 is ternarized into a region corresponding to the bone region A, a region corresponding to the simulated member image 300a, and a region corresponding to the remaining (background) portion, as sketched below.
  • the correct answer information 700a and the correct answer information 700b are examples of the "first correct answer information" in the claims.
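  • By analogy with the binary label sketch above, a three-class label could be built as follows (Python; illustrative, with the member value overwriting the bone value where the two masks overlap):

```python
import numpy as np

def make_ternary_label(bone_mask: np.ndarray, member_mask: np.ndarray) -> np.ndarray:
    """Label image 70: 1 for bone region A, 2 for the simulated member, 0 for background."""
    label = np.zeros(bone_mask.shape, dtype=np.uint8)
    label[bone_mask] = 1
    label[member_mask] = 2  # member takes precedence where it overlaps the bone
    return label
```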
  • In the second embodiment, the step of extracting the bone region A and the member 300 includes a step of individually extracting the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning (including the re-learning). That is, the bone region A and the member 300 are distinguished from each other, and the region corresponding to the bone region A on the captured image 10 (the portion hatched with downward-left diagonal lines in FIG. 15A) and the region corresponding to the member 300 (the portion hatched with downward-sloping diagonal lines in FIG. 15A) are each extracted individually. As a result, on the captured image 10, the region corresponding to the bone region A, the region corresponding to the member 300, and the region corresponding to the remaining (background) portion (the white portion in FIG. 15A) are distinguished (extracted individually from one another).
  • In the second embodiment, the bone image analysis method includes a step, performed in step 117, of analyzing the image. That is, an arbitrary region of the bone region A is selected on the image in which the region corresponding to the bone region A, the region corresponding to the member 300, and the region corresponding to the remaining (background) portion are distinguished, and the selection result is accepted by the image processing unit 3. Then, in the selected analysis region, the bone density, the circularity, and the like are measured (calculated).
  • In the second embodiment, as described above, the step of acquiring the label image 70 includes a step of acquiring a label image 70 in which different correct answer values are given to the positions corresponding to the bone region A and the simulated member image 300a on the label image 70.
  • Further, the step of extracting the bone region A and the member 300 includes a step of individually extracting the bone region A and the member 300 on the captured image 10 based on the learning result of the machine learning.
  • As a result, the boundary between the bone region A and the member 300 can be extracted, which makes it easy to select and analyze only the bone portion on an image in which the bone region A and the member 300 have been individually extracted.
  • In the first and second embodiments, an example has been shown in which a simulated member image 300a (first simulated member image) simulating metal arranged inside the bone region A (predetermined bone region) is used, but the present invention is not limited to this. For example, a simulated member image simulating metal that is not embedded in the bone may be used.
  • Also, instead of a simulated member image 300a (first simulated member image) simulating metal, a simulated member image simulating a member other than metal (for example, ceramic) may be used. In that case, a simulated member image having a brightness value substantially equal to that of the member other than metal is added to the bone region image 20.
  • In the first and second embodiments, an example has been shown in which re-learning is performed, but the present invention is not limited to this. For example, the learning may be performed only once, with the number of pairs of learning input images 30 (first learning input images) and label images 40 (first label images) used for the single round of machine learning increased compared with the case of re-learning.
  • In the first and second embodiments, an example has been shown in which the left-side bone region image 22 (other-side bone region pre-inversion image) displaying the left bone region A (predetermined bone region) is flipped horizontally, but the present invention is not limited to this. For example, the right-side bone region image 21 (one-sided bone region image) displaying the right bone region A (predetermined bone region) may be flipped horizontally instead.
  • Also, an example has been shown in which the simulated member image 300a (first simulated member image) is added to the post-inversion bone region image 23 (other-side bone region post-inversion image) obtained by horizontally flipping the left-side bone region image 22, but the present invention is not limited to this.
  • In the first and second embodiments, an example has been shown in which the simulated member image 300b (second simulated member image) is added to the bone region image 20 from which the learning input image 30 (first learning input image) was generated, but the present invention is not limited to this.
  • For example, the simulated member image 300b (second simulated member image) may be added to at least some of the bone region images 20 to which no simulated member image 300a has been added.
  • In the second embodiment, an example has been shown in which the bone region A (predetermined bone region), the member 300 (predetermined member), and the remaining portion (background portion) are distinguished from one another, but the present invention is not limited to this. For example, the member 300 (predetermined member) and the background portion may be extracted without being distinguished from each other, or the bone region A (predetermined bone region), the member 300 (predetermined member), and the background portion may each be extracted individually.
  • the bone region A is a region including the femur, but the present invention is not limited to this.
  • the bone region A may be a region of the bone other than the femur.
  • The bone image analysis method comprises a step of performing, using the first learning input image and the first label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray apparatus and in which the predetermined bone region and the predetermined member are displayed, and a step of extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  • The step of acquiring the first learning input image includes a step of adding, to some of the plurality of bone region images, a first simulated member image simulating metal at least a part of which is arranged inside the predetermined bone region.
  • The step of adding the first simulated member image simulating the metal includes a step of adding, to the bone region image, a first simulated member image having a brightness value substantially equal to that of the metal.
  • The bone image analysis method according to any one of items 1 to 3 further comprises a step of acquiring a plurality of the first learning input images by adding the first simulated member image to each of the plurality of bone region images such that at least one of the brightness value, shape, position, and number of the first simulated member image differs between the bone region images.
  • In the bone image analysis method according to any one of items 1 to 3, the step of acquiring the first label image includes a step of acquiring a first label image in which a common correct answer value is given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and the step of extracting the predetermined bone region and the predetermined member includes a step of integrally extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  • In the bone image analysis method according to any one of items 1 to 3, the step of acquiring the first label image includes a step of acquiring a first label image in which different correct answer values are given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and the step of extracting the predetermined bone region and the predetermined member includes a step of individually extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  • In the bone image analysis method according to any one of items 1 to 3, re-learning is performed after the learning using the first learning input image and the first label image. The re-learning uses a second learning input image, obtained by adding, to the bone region image from which the first learning input image was generated, a second simulated member image simulating the predetermined member that differs from the first simulated member image of the first learning input image in at least one of brightness value, shape, position, and number, together with a second label image including second correct answer information on the positions at which the predetermined bone region and the second simulated member image are displayed in the second learning input image. The step of extracting the predetermined bone region and the predetermined member based on the learning result of the machine learning includes a step of extracting the predetermined bone region and the predetermined member on the captured image based on the re-learned learning result.
  • In the bone image analysis method according to any one of items 1 to 3, a third label image including third correct answer information on the position at which the predetermined bone region is displayed is acquired for the bone region images to which the first simulated member image has not been added, and the step of performing the machine learning includes a step of performing the machine learning using pairs of the first learning input image and the first label image and pairs of the bone region image to which the first simulated member image has not been added and the third label image.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Dentistry (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This bone section image analysis method comprises the step of extracting a predetermined bone region (A) and a predetermined member (300) having a higher brightness than a bone section on the basis of the learning result of machine learning using a first learning input image (30) acquired by adding a first simulation member image (300a) for simulating the predetermined member (300) to a bone region image (20) and a first label image (40) corresponding to the first learning input image (30).

Description

Bone image analysis method and learning method
The present invention relates to a bone image analysis method and a learning method, and more particularly to a bone image analysis method and a learning method for analyzing a predetermined bone region of a subject.
Conventionally, bone image analysis methods and learning methods for analyzing a predetermined bone region of a subject have been known. Such a bone image analysis method is disclosed in, for example, Japanese Patent No. 2638875.
Japanese Patent No. 2638875 discloses a bone mineral quantitative analyzer comprising a means for generating radiation and a single crystal lattice irradiated with that radiation. The analyzer further comprises a means for simultaneously irradiating the subject with radiation of two different energies, obtained by collimating only the radiation reflected from the crystal lattice at two predetermined reflection angles (so that the two beams travel parallel to each other). By scanning the subject simultaneously with the radiation of the two energies, the analyzer performs bone mineral quantitative analysis (measurement of bone density) of the subject using the transmission data corresponding to each X-ray energy.
Japanese Patent No. 2638875
Bone density measurement as described above generally targets the lumbar spine and the femur. The shape of the femur varies greatly between individuals, so identifying the subject's bone region is important for stable follow-up observation. Accordingly, to identify (extract) a bone region (bone image) more accurately, it has been proposed to identify (extract) the bone region based on the learning result of machine learning.
However, with the conventional method, when a predetermined bone region is extracted, based on the learning result of machine learning, from a captured image of a subject in which a medical member made of metal or the like having a higher brightness value than the bone is provided in the predetermined bone region, the bone around the medical member sometimes cannot be extracted. This is because it is difficult to prepare a sufficient number of images of cases in which such a medical member is provided in the predetermined bone region, so machine learning using images of such cases is not performed sufficiently. Therefore, with the conventional method, it is difficult to extract the bone portion from a captured image of a subject in which a member having a higher brightness value than the bone is provided in the predetermined bone region, and it is accordingly difficult to analyze the bone portion on such a captured image.
The present invention has been made to solve the above problems, and one object of the present invention is to provide a bone image analysis method and a learning method capable of facilitating the analysis of a bone portion on a captured image of a subject in which a member having a higher brightness value than the bone is provided in a predetermined bone region.
 上記目的を達成するために、この発明の第1の局面における骨部画像解析方法は、所定の骨部領域が表示された複数の骨部領域画像を取得するステップと、複数の骨部領域画像のうちの一部の骨部領域画像に、骨部よりも輝度値が大きい所定の部材を模擬した第1模擬部材画像を付加することにより、第1学習用入力画像を取得するステップと、第1学習用入力画像において所定の骨部領域および第1模擬部材画像が表示される位置の第1正解情報を含む第1ラベル画像を取得するステップと、第1学習用入力画像と第1ラベル画像とを用いて、X線撮影装置により撮影され、所定の骨部領域および所定の部材が表示された撮影画像上において所定の骨部領域および所定の部材を抽出するための機械学習を実施するステップと、機械学習の学習結果に基づいて、撮影画像上において、所定の骨部領域および所定の部材を抽出するステップと、を備える。 In order to achieve the above object, the bone image analysis method in the first aspect of the present invention includes a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed, and a plurality of bone region images. A step of acquiring a first learning input image by adding a first simulated member image that simulates a predetermined member having a brightness value larger than that of the bone to a part of the bone region image. 1 A step of acquiring a first label image including a predetermined bone region and a first correct answer information of a position where a first simulated member image is displayed in the input image for learning, and an input image for learning and a first label image. A step of performing machine learning for extracting a predetermined bone region and a predetermined member on a photographed image taken by an X-ray imaging device and displaying a predetermined bone region and a predetermined member. And a step of extracting a predetermined bone region and a predetermined member on a captured image based on the learning result of machine learning.
 また、この発明の第2の局面による学習方法は、所定の骨部領域が表示された複数の骨部領域画像を取得するステップと、複数の骨部領域画像のうちの一部の骨部領域画像に、骨部よりも輝度値が大きい所定の部材を模擬した模擬部材画像を付加することにより、学習用入力画像を取得するステップと、学習用入力画像において所定の骨部領域および模擬部材画像が表示される位置の正解情報を含むラベル画像を取得するステップと、学習用入力画像とラベル画像とを用いて、X線撮影装置により撮影され、所定の骨部領域および所定の部材が表示された撮影画像上において、所定の骨部領域および所定の部材を抽出するための機械学習を実施するステップと、を備える。 Further, the learning method according to the second aspect of the present invention includes a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed, and a partial bone region of the plurality of bone region images. A step of acquiring a learning input image by adding a simulated member image simulating a predetermined member having a brightness value larger than that of the bone portion to the image, and a predetermined bone region and simulated member image in the learning input image. Using the step of acquiring the label image including the correct answer information of the position where is displayed, and the learning input image and the label image, the image is taken by an X-ray apparatus, and a predetermined bone region and a predetermined member are displayed. It includes a step of performing machine learning for extracting a predetermined bone region and a predetermined member on the captured image.
 According to the present invention, as described above, machine learning is performed using a first learning input image (learning input image) in which a simulated member image simulating a predetermined member having a luminance value larger than that of bone is added to a bone region image. Machine learning can thus be performed using a simulated first learning input image (learning input image) that looks as if the predetermined member were actually provided in the predetermined bone region. As a result, even when images in which the predetermined member is actually provided in the predetermined bone region cannot be prepared because such cases are rare, machine learning for extracting the predetermined bone region and the predetermined member can be performed by using the simulated first learning input image (learning input image). The predetermined bone region (and the predetermined member) can thereby be appropriately extracted from a captured image of a subject in whom the predetermined member having a luminance value larger than that of bone is provided in the predetermined bone region. Consequently, analysis of the bone on the captured image of such a subject is facilitated.
FIG. 1 is a diagram showing an X-ray imaging apparatus and a learning device according to a first embodiment.
FIG. 2 is a diagram showing a captured image (without a metal member) according to the first and second embodiments.
FIG. 3 is a diagram showing a captured image (with a metal member) according to the first and second embodiments.
FIG. 4 is a flowchart showing a bone image analysis method and a learning method according to the first embodiment.
FIG. 5 is a diagram for explaining a method of acquiring bone region images according to the first and second embodiments.
FIG. 6 is a diagram for explaining a method of acquiring learning input images according to the first and second embodiments.
FIG. 7 is a diagram for explaining a method of acquiring a label image corresponding to a learning input image to which a simulated member image is added, according to the first embodiment.
FIG. 8 is a diagram for explaining a method of acquiring a label image corresponding to a learning input image to which no simulated member image is added, according to the first embodiment.
FIG. 9 is a diagram for explaining a method of acquiring learning input images used for re-learning according to the first and second embodiments.
FIG. 10 is a diagram for explaining a method of acquiring a label image corresponding to a learning input image used for re-learning according to the first embodiment.
FIG. 11 is a diagram comparing an extraction result based on machine learning using simulated member images according to the first embodiment with an extraction result based on machine learning not using simulated member images (comparative example).
FIG. 12 is a diagram showing an X-ray imaging apparatus and a learning device according to a second embodiment.
FIG. 13 is a flowchart showing a bone image analysis method and a learning method according to the second embodiment.
FIG. 14 is a diagram for explaining a method of acquiring a label image corresponding to a learning input image according to the second embodiment.
FIG. 15 is a diagram showing an extraction result based on machine learning using simulated member images according to the second embodiment.
 Embodiments of the present invention will be described below with reference to the drawings.
 [First Embodiment]
 (Configuration of the X-ray Imaging Apparatus)
 As shown in FIG. 1, the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an image processing unit 3, and a control unit 4. The X-ray imaging apparatus 100 also includes a display unit 5 that displays images processed by the image processing unit 3.
 The X-ray irradiation unit 1 irradiates a subject T with X-rays. The X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 to the subject T. The X-ray imaging apparatus 100 is used, for example, to calculate (measure) the bone density of a bone region A (see FIG. 2) of the subject T. For the bone density measurement, for example, a DEXA (Dual-Energy X-ray Absorptiometry) method is used, in which the measurement site of the subject T is irradiated with X-rays of two different energies from the X-ray irradiation unit 1 so as to distinguish bone components from other tissues. In the first embodiment, as an example, the bone region A is a region including the femur and the pelvis. That is, a bone region A exists in each of the left half and the right half of the body of the subject T. The bone region A is an example of the "predetermined bone region" in the claims.
 The X-ray irradiation unit 1 includes an X-ray source 1a. The X-ray source 1a is an X-ray tube that is connected to a high-voltage generation unit (not shown) and generates X-rays when a high voltage is applied. The X-ray source 1a is arranged with its X-ray emission direction directed toward the detection surface of the X-ray detection unit 2.
 The X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 and transmitted through the subject T, and outputs a detection signal corresponding to the detected X-ray intensity. The X-ray detection unit 2 is constituted by, for example, an FPD (Flat Panel Detector).
 The image processing unit 3 includes an image acquisition unit 3a, a machine-learning-based region extraction unit 3b, and an analysis unit 3c. Each of the image acquisition unit 3a, the machine-learning-based region extraction unit 3b, and the analysis unit 3c is a functional block implemented as software in the image processing unit 3. That is, each of them is configured to function on the basis of a command signal from the control unit 4.
 The image acquisition unit 3a acquires a captured image 10 (see FIG. 2) of the subject T on the basis of the X-rays detected by the X-ray detection unit 2. For example, the captured image 10 is an energy subtraction image acquired by calculating the difference between images acquired using X-rays of two different energies. The captured image 10 may instead be an X-ray image, or a DRR (Digitally Reconstructed Radiograph) image created from CT image data of the subject T.
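 As a non-limiting illustration of the energy subtraction mentioned above, the following Python/NumPy sketch computes a weighted difference of two log-scaled images; the log-scaling step and the weighting coefficient w are assumptions made for illustration, not details taken from the embodiment.

```python
import numpy as np

def energy_subtraction(low_kv: np.ndarray, high_kv: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Sketch of a dual-energy subtraction image.

    low_kv, high_kv: intensity images taken at two tube voltages.
    w: weighting coefficient chosen so that soft tissue cancels
       (in practice its value would be determined by calibration).
    """
    # Work in log space so attenuation adds linearly along the beam path.
    log_low = -np.log(np.clip(low_kv, 1e-6, None))
    log_high = -np.log(np.clip(high_kv, 1e-6, None))
    # The weighted difference suppresses soft tissue and leaves bone contrast.
    return log_low - w * log_high
```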
 The machine-learning-based region extraction unit 3b is configured to extract a predetermined region on the captured image 10, which is acquired on the basis of the X-rays detected by the X-ray detection unit 2, on the basis of the learning result of machine learning in the learning device 200. Specifically, in the first embodiment, deep learning is used as the machine learning.
 The analysis unit 3c is configured to calculate the circularity, the bone density, and the like of a region designated by a technician in the captured image 10.
 (Configuration of the Learning Device)
 The learning device 200 is configured to perform machine learning for extracting the bone region A on a captured image 10 (see FIG. 2) in which the bone region A is displayed. The learning device 200 is also configured to perform machine learning for extracting the bone region A and a member 300 on a captured image 10 (see FIG. 3) in which the bone region A and the member 300 are displayed. The member 300 is an example of the "predetermined member" in the claims.
 Specifically, the member 300 is a member having a luminance value larger than that of bone. In detail, the member 300 includes metal at least a part of which is arranged inside the bone region A. For example, the member 300 may be a metal implant used in orthopedic surgery, such as an artificial joint, a fixation plate, or a screw.
 (Bone Image Analysis Method and Learning Method)
 Next, the bone image analysis method in the X-ray imaging apparatus 100 and the learning method in the learning device 200 will be described with reference to FIGS. 4 to 11.
 As shown in FIG. 4, the bone image analysis method (learning method) includes a step, performed in step 101, of acquiring a plurality of (for example, 100) bone region images 20 (see FIG. 5). The plurality of bone region images 20 are images in which the bone region A is displayed. The plurality of bone region images 20 may be images acquired by the X-ray imaging apparatus 100 or images acquired by another apparatus. Each bone region image 20 may be any of an energy subtraction image, an X-ray image, and a DRR image. In the case of a DRR image, a DRR image of the bone alone may be used. In the case of X-ray images, images captured at different tube voltages, such as a low voltage and a high voltage, may be included.
 Specifically, as shown in FIG. 5, the step of acquiring the bone region images 20 includes a step of acquiring, as bone region images 20, right bone region images 21 in which the bone region A on one of the left and right sides (the right side, as an example, in the first embodiment) is displayed. The step of acquiring the bone region images 20 also includes a step of acquiring, as bone region images 20, flipped bone region images 23 obtained by horizontally flipping left bone region images 22 in which the bone region A on the other side (that is, the left side) is displayed. That is, a left bone region image 22 in which the left bone region A is displayed is horizontally flipped and thereby converted into a flipped bone region image 23 that simulates an image in which the right bone region A is displayed. For example, 50 right bone region images 21 and 50 flipped bone region images 23 are acquired. The right bone region image 21 is an example of the "one-side bone image" in the claims. The left bone region image 22 and the flipped bone region image 23 are examples of the "other-side bone pre-flip image" and the "other-side bone post-flip image" in the claims, respectively.
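 A minimal sketch of this orientation alignment, assuming the images are held as NumPy arrays (the function name is hypothetical):

```python
import numpy as np

def to_right_side_view(image: np.ndarray, is_left_side: bool) -> np.ndarray:
    """Return the image with the bone region oriented as a right-side view.

    Left-side images are flipped horizontally so that all training images
    share the same orientation.
    """
    return np.fliplr(image) if is_left_side else image
```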
 Next, as shown in FIG. 4, the bone image analysis method (learning method) includes a step, performed in step 102, of acquiring a plurality of learning input images 30 (see FIG. 6). The learning input image 30 is an example of the "first learning input image" in the claims.
 Here, in the first embodiment, as shown in FIG. 6, in step 102, the learning input images 30 are acquired by adding a simulated member image 300a, which simulates the member 300, to some of the plurality of bone region images 20. Specifically, the step of acquiring the learning input images 30 includes a step of adding, to some of the plurality of bone region images 20, a simulated member image 300a that simulates metal at least a part of which is arranged inside the bone region A. The simulated member image 300a is added to each of, for example, about 30% of the plurality of bone region images 20. The ratio of 30% is merely an example and is not limitative. The shape of the simulated member image 300a shown in FIG. 6 is also an example; the shape may be, for example, circular or triangular. The simulated member image 300a is an example of the "first simulated member image" in the claims.
 The step of adding the simulated member image 300a simulating metal includes a step of adding, to the bone region image 20, a simulated member image 300a having a luminance value substantially equal to the luminance value of metal. Specifically, the luminance value of the simulated member image 300a is randomly selected (set) from a range of luminance values regarded as typical of metal.
 In detail, in step 102, right learning input images 31, in which a simulated member image 300a is added to a right bone region image 21, and left learning input images 32, in which a simulated member image 300a is added to a flipped bone region image 23, are acquired as the plurality of learning input images 30. That is, the simulated member image 300a is added to each of the plurality of learning input images 30 (31, 32), in all of which the orientation of the bone region A is aligned. The bone region images 20 to which the simulated member images 300a are added may be only the right bone region images 21 or only the flipped bone region images 23. The right learning input image 31 and the left learning input image 32 are examples of the "one-side learning image" and the "other-side learning image" in the claims, respectively.
 In the following, the right bone region images 21 and the flipped bone region images 23 are not distinguished and are both referred to as bone region images 20. Likewise, the right learning input images 31 and the left learning input images 32 are not distinguished and are both referred to as learning input images 30.
 In the first embodiment, the step of acquiring the learning input images 30 includes a step of acquiring the plurality of learning input images 30 by adding a simulated member image 300a to each of the bone region images 20 concerned such that at least one of the luminance value, shape, position, and number of the simulated member images 300a differs from image to image. Specifically, for each of the plurality of bone region images 20, the luminance value, shape, position, and number of the simulated member image 300a to be added are set randomly by the image processing unit 3 (image acquisition unit 3a). At this time, the image processing unit 3 (image acquisition unit 3a) sets (adjusts) the simulated member images 300a such that at least one of their luminance value, shape, position, and number differs between bone region images 20. For example, the simulated member images 300a in the right learning input image 31 and the left learning input image 32 of FIG. 6 differ in shape and number.
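 A minimal sketch of this randomized member synthesis is given below, assuming NumPy 2D grayscale arrays larger than the pasted patches; the rectangle and ellipse shapes, the luminance range, and the size limits are illustrative assumptions rather than values taken from the embodiment.

```python
import numpy as np

rng = np.random.default_rng()

def add_simulated_members(bone_image: np.ndarray,
                          luminance_range=(0.8, 1.0),
                          max_members: int = 3) -> np.ndarray:
    """Paste randomly parameterized high-luminance patches into a bone image.

    Each patch stands in for a metal member; its luminance, shape, position,
    and count are drawn at random so that the training set varies from
    image to image.
    """
    out = bone_image.copy()
    h, w = out.shape
    for _ in range(rng.integers(1, max_members + 1)):
        value = rng.uniform(*luminance_range)   # metal-like luminance
        ph, pw = rng.integers(8, 32, size=2)    # patch size (assumes h, w > 32)
        y = rng.integers(0, h - ph)
        x = rng.integers(0, w - pw)
        if rng.random() < 0.5:                  # rectangular member
            out[y:y + ph, x:x + pw] = value
        else:                                   # elliptical member
            yy, xx = np.ogrid[:ph, :pw]
            mask = (((yy - ph / 2) / (ph / 2)) ** 2
                    + ((xx - pw / 2) / (pw / 2)) ** 2) <= 1.0
            out[y:y + ph, x:x + pw][mask] = value
    return out
```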
 Next, as shown in FIG. 4, the bone image analysis method (learning method) includes a step, performed in step 103, of acquiring a label image 40 (see FIG. 7). The label image 40 includes correct answer information 400 on the positions at which the bone region A and the simulated member image 300a are displayed in the learning input image 30. The label image 40 is an image generated (acquired) manually by a technician on the basis of each of the plurality of learning input images 30. The label image 40 and the correct answer information 400 are examples of the "first label image" and the "first correct answer information" in the claims, respectively.
 The step of acquiring the label image 40 includes a step of acquiring a label image 40 in which a common correct answer value is assigned to the positions corresponding to the bone region A and the simulated member image 300a on the label image 40. Specifically, a common correct answer value of 1 is assigned to each of the positions (coordinates) on the label image 40 corresponding to the bone region A and the simulated member image 300a in the learning input image 30. The remaining portion (background portion) of the label image 40 has a value of 0. That is, the label image 40 is binarized so as to be divided into a region corresponding to the bone region A and the simulated member image 300a and a region corresponding to the remaining portion (background portion).
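 Assuming the manual annotation is available as boolean masks, the binarization can be sketched as follows (the mask names are hypothetical):

```python
import numpy as np

def make_binary_label(bone_mask: np.ndarray, member_mask: np.ndarray) -> np.ndarray:
    """Binary label: 1 for the bone region and simulated member, 0 for background."""
    return np.where(bone_mask | member_mask, 1, 0).astype(np.uint8)
```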
 In the first embodiment, as shown in FIG. 4, the bone image analysis method (learning method) also includes a step, performed in step 103, of acquiring, for the bone region images 20 to which no simulated member image 300a is added among the plurality of bone region images 20, a label image 41 (see FIG. 8) including correct answer information 410 (see FIG. 8) on the position at which the bone region A is displayed. The bone region images 20 to which no simulated member image 300a is added account for about 70% of all the bone region images 20. The label image 41 and the correct answer information 410 are examples of the "third label image" and the "third correct answer information" in the claims, respectively.
 Specifically, a correct answer value of 1 is assigned to the positions (coordinates) on the label image 41 corresponding to the bone region A in the bone region image 20 (to which no simulated member image 300a is added). The remaining portion (background portion) of the label image 41 has a value of 0. That is, the label image 41 is binarized so as to be divided into a region corresponding to the bone region A and a region corresponding to the remaining portion (background portion).
 Next, as shown in FIG. 4, the bone image analysis method (learning method) includes a step, performed in step 104, of performing machine learning.
 In the first embodiment, the machine learning in step 104 is a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 (see FIG. 3) in which the bone region A and the member 300 are displayed. In other words, this machine learning is performed by using, as learning data, a plurality of pairs of mutually corresponding learning input images 30 and label images 40. A pair of a mutually corresponding learning input image 30 and label image 40 means a pair composed of one learning input image 30 and the label image 40 generated (acquired) from that learning input image 30. The captured image 10 is an image captured by the X-ray imaging apparatus 100.
 The step of performing this machine learning includes a step of performing machine learning using pairs of learning input images 30 and label images 40 as well as pairs of bone region images 20 to which no simulated member image 300a is added and label images 41. That is, both the bone region images 20 to which simulated member images 300a are added (the learning input images 30) and the bone region images 20 to which no simulated member image 300a is added are used as input data for the machine learning. The ratio of pairs of learning input images 30 and label images 40 to pairs of bone region images 20 without simulated member images 300a and label images 41 is, for example, about 3:7.
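 The embodiments specify deep learning but no particular network architecture or framework; as an illustrative assumption, the following PyTorch sketch shows one training pass over such input-label pairs, where net stands for any per-pixel (image-to-image) segmentation network with two output channels.

```python
import torch
import torch.nn.functional as F

def train_epoch(net, loader, optimizer, device="cpu"):
    """One pass over (input image, label image) pairs.

    `loader` is assumed to yield float image tensors of shape (B, 1, H, W)
    and integer label tensors of shape (B, H, W), where 0 = background and
    1 = bone region / member (the binarized labels described above).
    """
    net.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = net(images)                    # (B, 2, H, W) class scores
        loss = F.cross_entropy(logits, labels)  # per-pixel classification
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```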
 Next, as shown in FIG. 4, the machine learning step 104 includes a step of performing re-learning. The re-learning uses learning input images 50 (see FIG. 9) obtained by adding, to the bone region images 20 from which the learning input images 30 were derived, simulated member images 300b (see FIG. 9) that simulate the member 300 and differ from the simulated member images 300a (see FIG. 6) of the learning input images 30 in at least one of luminance value, shape, position, and number. The re-learning also uses label images 60 (see FIG. 10) including correct answer information 600 on the positions at which the bone region A and the simulated member image 300b are displayed in the learning input image 50. The re-learning using the learning input images 50 and the label images 60 is performed after the machine learning using the learning input images 30 (see FIG. 7) and the label images 40 (see FIG. 7). The label image 60 and the learning input image 50 are examples of the "second label image" and the "second learning input image" in the claims, respectively. The correct answer information 600 and the simulated member image 300b are examples of the "second correct answer information" and the "second simulated member image" in the claims, respectively.
 As with the label image 40 (see FIG. 7), a common correct answer value of 1 is assigned to each of the positions (coordinates) on the label image 60 corresponding to the bone region A and the simulated member image 300b in the learning input image 50. The remaining portion (background portion) of the label image 60 has a value of 0. That is, the label image 60 is binarized so as to be divided into a region corresponding to the bone region A and the simulated member image 300b and a region corresponding to the remaining portion (background portion).
 The re-learning is repeated several thousand times after the machine learning using the learning input images 30 (see FIG. 7) and the label images 40 (see FIG. 7) (and the learning using the label images 41). For each of the plurality of bone region images 20 to which simulated member images 300b are added, the image processing unit 3 (image acquisition unit 3a) adjusts the simulated member image 300b such that at least one of its luminance value, shape, position, and number is changed on each of the several thousand re-learning iterations.
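 This repeated re-learning with freshly randomized simulated members can be viewed as on-the-fly augmentation; a sketch under the same assumptions, reusing the hypothetical helpers add_simulated_members and make_binary_label from the sketches above:

```python
import numpy as np
import torch
import torch.nn.functional as F

def relearn(net, bone_images, bone_masks, optimizer, iterations=5000):
    """Re-learning loop: fresh random simulated members on every iteration.

    bone_images / bone_masks: lists of the original bone region images and
    their manually annotated boolean bone masks (NumPy arrays).
    """
    net.train()
    for _ in range(iterations):
        inputs, labels = [], []
        for img, mask in zip(bone_images, bone_masks):
            aug = add_simulated_members(img)  # members differ every iteration
            member_mask = aug != img          # pixels overwritten by a member
            inputs.append(aug)
            labels.append(make_binary_label(mask, member_mask))
        x = torch.from_numpy(np.stack(inputs)).float().unsqueeze(1)  # (B,1,H,W)
        y = torch.from_numpy(np.stack(labels)).long()                # (B,H,W)
        loss = F.cross_entropy(net(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```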
 Next, as shown in FIG. 4, the bone image analysis method includes a step, performed in step 105, of acquiring the image to be subjected to image extraction (segmentation). Specifically, a captured image 10 (see FIGS. 2 and 3) is acquired by imaging the subject T with the X-ray imaging apparatus 100. When a captured image 10 in which the left bone region A is displayed is acquired, the captured image 10 is horizontally flipped so as to simulate an image in which the right bone region A is displayed. The orientation of the bone region A in the image to be subjected to image extraction (segmentation) can thereby be aligned with the orientation of the bone region A in the learning images (see FIGS. 7 and 8) used for the machine learning.
 Here, in the first embodiment, as shown in FIG. 4, the bone image analysis method includes a step, performed in step 106, of extracting (segmenting) the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning in step 104. That is, the bone region A and the member 300 are extracted (segmented) on the captured image 10 on the basis of the learning result of the several thousand re-learning iterations.
 Specifically, as shown in FIG. 11, the step of extracting the bone region A and the member 300 includes a step of integrally extracting the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning (including the re-learning). That is, without distinguishing the bone region A and the member 300 from each other, the region on the captured image 10 corresponding to the bone region A and the member 300 (the black portion in FIG. 11(a)) is extracted. On the captured image 10, the region corresponding to the bone region A and the member 300 is thereby distinguished from the region corresponding to the remaining portion (background portion) (the white portion in FIG. 11(a)). As illustrated in the comparative example of FIG. 11(b), with the conventional method (which learns the bone alone), the member 300 and the bone away from the member 300 are extracted, but the bone around the member 300 is not.
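 Under the same PyTorch assumption, the integral extraction at inference reduces to a per-pixel argmax over the two classes:

```python
import torch

def extract_bone_and_member(net, image_tensor):
    """Integral extraction: one mask covering bone region A and member 300.

    image_tensor: (1, 1, H, W) float tensor of the captured image,
    already flipped to the right-side orientation if necessary.
    """
    net.eval()
    with torch.no_grad():
        logits = net(image_tensor)    # (1, 2, H, W)
    return logits.argmax(dim=1)[0]    # (H, W); 1 = bone/member, 0 = background
```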
 Then, as shown in FIG. 4, the bone image analysis method includes a step, performed in step 107, of analyzing the image. Specifically, an arbitrary region within the bone region A extracted in step 106 is selected on the image, and the selection result is accepted by the image processing unit 3. In the selected analysis region, measurement (calculation) of the bone density, measurement (calculation) of the circularity, and the like are then performed. When the analysis is to be performed with the bone region A and the member 300 distinguished from each other, the bone region A alone can be extracted by performing, within the image-extracted (segmented) region, an extraction using, for example, a rule base based on luminance values.
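 The rule-based separation mentioned above could, under the assumption that metal pixels are brighter than any bone pixel, be as simple as a luminance threshold applied inside the segmented region (the threshold value here is purely illustrative):

```python
import numpy as np

def isolate_bone(image: np.ndarray, segmented_mask: np.ndarray,
                 metal_threshold: float = 0.8) -> np.ndarray:
    """Within the integrally extracted boolean mask, keep only pixels whose
    luminance is below the metal threshold, i.e. the bone region A alone."""
    return segmented_mask & (image < metal_threshold)
```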
 (Effects of the First Embodiment)
 In the first embodiment, the following effects can be obtained.
 In the first embodiment, as described above, the bone image analysis method includes a step of acquiring a plurality of bone region images 20 in which the bone region A is displayed, and a step of acquiring learning input images 30 by adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating a member 300 having a luminance value larger than that of bone. The bone image analysis method also includes a step of acquiring label images 40 including correct answer information 400 on the positions at which the bone region A and the simulated member image 300a are displayed in the learning input image 30. The bone image analysis method also includes a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 that is captured by the X-ray imaging apparatus 100 and in which the bone region A and the member 300 are displayed. The bone image analysis method further includes a step of extracting the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning. Machine learning can thereby be performed using simulated images (the learning input images 30) that look as if the member 300 were actually provided in the bone region A. As a result, even when images in which the member 300 is actually provided in the bone region A cannot be prepared because such cases are rare, machine learning for extracting the bone region A and the member 300 can be performed by using the simulated learning input images 30. The bone region A (and the member 300) can thereby be appropriately extracted from the captured image 10 of a subject T in whom the member 300 having a luminance value larger than that of bone is provided in the bone region A. Consequently, analysis of the bone on the captured image 10 of such a subject T is facilitated.
 In the first embodiment, as described above, the step of acquiring the learning input images 30 includes a step of adding, to some of the plurality of bone region images 20, a simulated member image 300a that simulates metal at least a part of which is arranged inside the bone region A. The bone region A (and the metal) can thereby be appropriately extracted from a captured image 10 in which metal is arranged in the bone region A.
 In the first embodiment, as described above, the step of adding the simulated member image 300a simulating metal includes a step of adding, to the bone region image 20, a simulated member image 300a having a luminance value substantially equal to that of metal. Machine learning can thereby be performed with learning input images 30 whose conditions are close to those in which metal is actually arranged in the bone region A. As a result, the bone region A (and the metal) can be extracted more appropriately from a captured image 10 in which metal is arranged in the bone region A.
 In the first embodiment, as described above, the step of acquiring the learning input images 30 includes a step of acquiring the plurality of learning input images 30 by adding, when simulated member images 300a are added to the plurality of bone region images 20, a simulated member image 300a to each bone region image 20 such that at least one of the luminance value, shape, position, and number of the simulated member images 300a differs from image to image. Machine learning can thereby be performed with mutually different learning input images 30, giving diversity to the learning input images 30 used for the machine learning. As a result, machine learning can be performed with a larger variety of learning input images 30 and its accuracy can be improved, so that the bone region A (and the member 300) can be more appropriately extracted from a captured image 10 in which the member 300 is provided in the bone region A.
 In the first embodiment, as described above, the step of acquiring the label image 40 includes a step of acquiring a label image 40 in which a common correct answer value is assigned to the positions corresponding to the bone region A and the simulated member image 300a on the label image 40. The step of extracting the bone region A and the member 300 includes a step of integrally extracting the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning. Machine learning can thereby be performed on the basis of fewer correct answer values than when different correct answer values are assigned to the bone region A and the simulated member image 300a. As a result, the machine learning in the learning device 200 can be made comparatively simple.
 In the first embodiment, as described above, the step of performing machine learning includes a step of re-learning, after the learning using the learning input images 30 and the label images 40, using learning input images 50, in which a simulated member image 300b simulating the member 300 and differing from the simulated member image 300a of the learning input image 30 in at least one of luminance value, shape, position, and number is added to the bone region image 20 from which the learning input image 30 was derived, together with label images 60 including correct answer information 600 on the positions at which the bone region A and the simulated member image 300b are displayed in the learning input image 50. The step of extracting the bone region A and the member 300 on the basis of the learning result of the machine learning includes a step of extracting the bone region A and the member 300 on the captured image 10 on the basis of the re-learned learning result. By re-learning with learning input images 50 and label images 60 that differ from the learning input images 30 and the label images 40, more learning can be performed than when learning is carried out with the learning input images 30 and the label images 40 alone, so that the bone region A (and the member 300) can be extracted from the captured image 10 still more appropriately. Moreover, by reusing, in the re-learning, the bone region images 20 from which the learning input images 30 were derived, an increase in the number of bone region images 20 to be prepared (stored) in advance can be suppressed, compared with learning a larger number of bone region images 20 at once.
 In the first embodiment, as described above, the bone image analysis method includes a step of acquiring, for the bone region images 20 to which no simulated member image 300a is added among the plurality of bone region images 20, label images 41 including correct answer information 410 on the position at which the bone region A is displayed. The step of performing machine learning includes a step of performing machine learning using pairs of learning input images 30 and label images 40 as well as pairs of bone region images 20 without simulated member images 300a and label images 41. The bone region A (and the member 300) can thereby be extracted from a captured image 10 in which the member 300 is provided in the bone region A, and the bone region A can also be extracted from a captured image 10 in which only the bone region A is displayed.
 In the first embodiment, as described above, the step of acquiring the plurality of bone region images 20 includes a step of acquiring, as the learning input images 30, when the bone region A exists in each of the left half and the right half of the body of the subject T, right learning input images 31 in which a simulated member image 300a is added to a right bone region image 21 displaying the bone region A on one side, and left learning input images 32 in which a simulated member image 300a is added to a flipped bone region image 23 obtained by horizontally flipping a left bone region image 22 displaying the bone region A on the other side. By horizontally flipping the left bone region image 22 to obtain a left learning input image 32 simulated as if the right bone region A were displayed, the orientation of the bone region A can be aligned across the right learning input images 31 and the left learning input images 32. As a result, unlike the case where the left-right orientations of the displayed bone regions A are not aligned, learning can be performed on the basis of learning data (learning input images 30) with a unified orientation (that is, unified learning conditions), so that the learning efficiency of the machine learning can be made higher than when the left and right sides are learned separately.
 In the first embodiment, as described above, the machine learning includes deep learning. Since the extraction accuracy of the extraction target region by deep learning is comparatively high, the bone region A (and the member 300) can be extracted with high accuracy on the captured image 10.
 In the first embodiment, as described above, the learning method includes a step of acquiring a plurality of bone region images 20 in which the bone region A is displayed, and a step of acquiring learning input images 30 by adding, to some of the plurality of bone region images 20, a simulated member image 300a simulating a member 300 having a luminance value larger than that of bone. The learning method also includes a step of acquiring label images 40 including correct answer information 400 on the positions at which the bone region A and the simulated member image 300a are displayed in the learning input image 30. The learning method further includes a step of performing, using the learning input images 30 and the label images 40, machine learning for extracting the bone region A and the member 300 on a captured image 10 that is captured by the X-ray imaging apparatus 100 and in which the bone region A and the member 300 are displayed. Machine learning can thereby be performed using simulated images (the learning input images 30) that look as if the member 300 were actually provided in the bone region A. As a result, even when images in which the member 300 is actually provided in the bone region A cannot be prepared because such cases are rare, machine learning for extracting the bone region A and the member 300 can be performed by using the simulated learning input images 30. The bone region A (and the member 300) can thereby be appropriately extracted from the captured image 10 of a subject T in whom the member 300 having a luminance value larger than that of bone is provided in the bone region A. Consequently, it is possible to provide a learning method capable of facilitating analysis of the bone on the captured image 10 of such a subject T.
 [Second Embodiment]
 Next, the configuration of a bone image analysis method (learning method) according to a second embodiment will be described with reference to FIGS. 12 to 15. In the bone image analysis method (learning method) of the second embodiment, unlike the first embodiment, in which extraction is performed without distinguishing the bone region A and the member 300, extraction is performed with the bone region A and the member 300 distinguished from each other. Configurations similar to those of the first embodiment are denoted by the same reference numerals as in the first embodiment and are not described again.
 (Configuration of the X-ray Imaging Apparatus)
 As shown in FIG. 12, in the second embodiment, the machine-learning-based region extraction unit 3b is configured to extract a predetermined region on the captured image 10, which is acquired on the basis of the X-rays detected by the X-ray detection unit 2, on the basis of the learning result of machine learning in a learning device 210.
 (Bone Image Analysis Method and Learning Method)
 Next, the bone image analysis method in the X-ray imaging apparatus 100 and the learning method in the learning device 210 will be described with reference to FIGS. 13 to 15.
 As shown in FIG. 13, the bone image analysis method includes a step, performed in step 113, of acquiring a label image 70 (see FIG. 14). The label image 70 includes correct answer information 700 (see FIG. 14) on the positions at which the bone region A and the simulated member image 300a are displayed in the learning input image 30. The label image 70 and the correct answer information 700 are examples of the "first label image" and the "first correct answer information" in the claims, respectively.
 Here, in the second embodiment, as shown in FIG. 14, the step of acquiring the label image 70 includes a step of acquiring a label image 70 in which mutually different correct answer values are assigned to the positions corresponding to the bone region A and the simulated member image 300a on the label image 70. Specifically, a correct answer value of 1 is assigned to the positions (coordinates) on the label image 70 corresponding to the bone region A in the learning input image 30, and a correct answer value of 2 is assigned to the positions (coordinates) on the label image 70 corresponding to the simulated member image 300a in the learning input image 30. That is, the correct answer information 700 includes correct answer information 700a on the position at which the bone region A is displayed in the learning input image 30 and correct answer information 700b on the position at which the simulated member image 300a is displayed in the learning input image 30. The remaining portion (background portion) of the label image 70 has a value of 0. That is, the label image 70 is ternarized so as to be divided into a region corresponding to the bone region A, a region corresponding to the simulated member image 300a, and a region corresponding to the remaining portion (background portion). The correct answer information 700a and the correct answer information 700b are examples of the "first correct answer information" in the claims.
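 Assuming the manual annotation is again available as boolean masks, the ternarized label could be built as follows (letting member pixels take precedence where the masks overlap is an assumption made for illustration):

```python
import numpy as np

def make_ternary_label(bone_mask: np.ndarray, member_mask: np.ndarray) -> np.ndarray:
    """Ternary label: 0 = background, 1 = bone region A, 2 = simulated member."""
    label = np.zeros(bone_mask.shape, dtype=np.uint8)
    label[bone_mask] = 1
    label[member_mask] = 2  # member overrides bone where the masks overlap
    return label
```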
 Although a detailed description is omitted, the re-learning is also performed on the basis of label images (not shown) in which mutually different correct answer values are assigned to the positions corresponding to the bone region A and the simulated member image 300a.
 As shown in FIG. 15, the step of extracting the bone region A and the member 300 in step 116 includes a step of individually extracting the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning (including the re-learning). That is, the bone region A and the member 300 are distinguished from each other, and the region on the captured image 10 corresponding to the bone region A (the portion hatched downward to the left in FIG. 15(a)) and the region corresponding to the member 300 (the portion hatched downward to the right in FIG. 15(a)) are each extracted individually. On the captured image 10, the region corresponding to the bone region A, the region corresponding to the member 300, and the region corresponding to the remaining region (background portion) (the white portion in FIG. 15(a)) are thereby distinguished (extracted individually from one another).
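 Under the same PyTorch assumption as in the first embodiment, the individual extraction differs only in the number of output classes:

```python
import torch

def extract_individually(net3, image_tensor):
    """Per-pixel argmax over three classes: 0 background, 1 bone, 2 member.

    net3 is assumed to be a segmentation network with three output channels.
    """
    net3.eval()
    with torch.no_grad():
        logits = net3(image_tensor)    # (1, 3, H, W)
    classes = logits.argmax(dim=1)[0]  # (H, W)
    bone_mask = classes == 1
    member_mask = classes == 2
    return bone_mask, member_mask
```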
 Then, as shown in FIG. 13, the bone image analysis method includes a step, performed in step 117, of analyzing the image. That is, on the image in which the region corresponding to the bone region A, the region corresponding to the member 300, and the region corresponding to the remaining region (background portion) are distinguished, an arbitrary region within the bone region A is selected, and the selection result is accepted by the image processing unit 3. In the selected analysis region, measurement (calculation) of the bone density, measurement (calculation) of the circularity, and the like are then performed.
 The other configurations of the second embodiment are the same as those of the first embodiment.
 (Effects of the Second Embodiment)
 In the second embodiment, the following effects can be obtained.
 In the second embodiment, as described above, the step of acquiring the label image 70 includes a step of acquiring a label image 70 in which mutually different correct answer values are assigned to the positions corresponding to the bone region A and the simulated member image 300a on the label image 70. The step of extracting the bone region A and the member 300 includes a step of individually extracting the bone region A and the member 300 on the captured image 10 on the basis of the learning result of the machine learning. Since the boundary between the bone region A and the member 300 can thereby be extracted, the work of selecting and analyzing only the bone on an image in which the bone region A and the member 300 are individually extracted is facilitated. Furthermore, since no extraction other than that based on the learning result of the machine learning (such as rule-based extraction, an extraction method based on differences in pixel luminance values) needs to be performed in order to distinguish the bone region A and the member 300, the work of extracting the bone region A and the member 300 separately can be simplified.
 The other effects of the second embodiment are the same as those of the first embodiment described above.
 (Modified Examples)
 The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is defined by the claims rather than by the above description of the embodiments, and includes all modifications within the meaning and scope equivalent to the claims.
 For example, in the first and second embodiments described above, an example has been shown in which the simulated member image 300a (first simulated member image) simulates a metal arranged inside the bone region A (predetermined bone region); however, the present invention is not limited to this. For example, a simulated member image simulating a metal arranged outside the bone region A (that is, a metal not embedded in the bone) may be used.
 Further, in the first and second embodiments described above, an example has been shown in which the simulated member image 300a (first simulated member image) simulates a metal; however, the present invention is not limited to this. A simulated member image simulating a member other than metal (for example, a ceramic) may be used. In this case, a simulated member image having a brightness value substantially equal to the brightness value of the non-metal member is added to the bone region image 20.
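 As an illustrative sketch only: pasting a simulated member image of a prescribed brightness into a bone region image could look like the following. The rectangular shape, coordinates, and intensity value are hypothetical, and a real implementation might draw arbitrary shapes instead.

```python
import numpy as np

def add_simulated_member(bone_image: np.ndarray, top_left, size,
                         member_value: float) -> np.ndarray:
    """Paste a rectangular simulated member image into a bone region image.

    `member_value` would be chosen substantially equal to the brightness
    value of the member being simulated (metal, ceramic, etc.).
    """
    out = bone_image.copy()
    r, c = top_left
    h, w = size
    out[r:r + h, c:c + w] = member_value  # simulated member region
    return out

# Example with an assumed ceramic-like intensity on a normalized image:
# augmented = add_simulated_member(bone_region_image, (120, 80), (40, 10), 0.8)
```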
 Further, in the first and second embodiments described above, an example has been shown in which the re-learning is performed a plurality of times; however, the present invention is not limited to this. For example, the machine learning may be performed only once, with the number of pairs of the learning input image 30 (first learning input image) and the label image 40 (first label image) used in that single run increased beyond the number used when re-learning is performed.
 Further, in the first and second embodiments described above, an example has been shown in which the left bone region image 22 (pre-inversion image of the other-side bone), in which the left bone region A (predetermined bone region) is displayed, is flipped horizontally; however, the present invention is not limited to this. The right bone region image 21 (one-side bone image), in which the right bone region A is displayed, may be flipped horizontally instead.
 Further, in the first and second embodiments described above, an example has been shown in which the simulated member image 300a (first simulated member image) is added to the inverted bone region image 23 (post-inversion image of the other-side bone) obtained by horizontally flipping the left bone region image 22 (pre-inversion image of the other-side bone), in which the left bone region A (predetermined bone region) is displayed; however, the present invention is not limited to this. An image obtained by adding the simulated member image 300a to the left bone region image 22 may instead be flipped horizontally afterward.
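 A minimal sketch contrasting the two orderings (flip then add, as in the embodiments, versus add then flip, as in this modification); `add_simulated_member` is the hypothetical helper from the sketch above and the image here is a stand-in array:

```python
import numpy as np

def add_simulated_member(img, top_left, size, value):
    # Same hypothetical helper as sketched above.
    out = img.copy()
    (r, c), (h, w) = top_left, size
    out[r:r + h, c:c + w] = value
    return out

left_bone_region_image = np.zeros((256, 256), dtype=np.float32)  # stand-in for image 22

# Order used in the embodiments: flip the left bone region image 22 first
# (giving the inverted image 23), then add the simulated member image 300a.
learning_image = add_simulated_member(
    np.fliplr(left_bone_region_image), (120, 80), (40, 10), 0.9)

# Modified order described here: add the simulated member image 300a first,
# then flip the combined image horizontally.
learning_image_alt = np.fliplr(
    add_simulated_member(left_bone_region_image, (120, 80), (40, 10), 0.9))
```

Note that with a fixed paste position the two orderings place the member at mirrored locations; the modification concerns only the order of the flip and the addition.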
 Further, in the first and second embodiments described above, an example has been shown in which the simulated member image 300b (second simulated member image) is added to the bone region image 20 from which the learning input image 30 (first learning input image) was generated; however, the present invention is not limited to this. For example, the simulated member image 300b may be added to at least some of the bone region images 20, among the plurality of bone region images 20, to which the simulated member image 300a was not added.
 Further, in the second embodiment described above, an example has been shown in which the bone region A (predetermined bone region), the member 300 (predetermined member), and the remaining portion (background) are extracted individually; however, the present invention is not limited to this. For example, the member 300 and the background need not be distinguished from each other, and the bone region A may be extracted separately from the combined region of the member 300 and the background.
 Further, in the first and second embodiments described above, an example has been shown in which the bone region A (predetermined bone region) is a region including the femur; however, the present invention is not limited to this. For example, the bone region A may be a region of a bone other than the femur.
 Further, in the first and second embodiments described above, an example has been shown in which deep learning (AI) is used as the machine learning; however, the present invention is not limited to this. For example, machine learning other than deep learning may be used.
[Aspects]
 It will be understood by those skilled in the art that the exemplary embodiments described above are specific examples of the following aspects.
(Item 1)
 A bone image analysis method comprising:
 a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed;
 a step of acquiring a first learning input image by adding, to some of the plurality of bone region images, a first simulated member image simulating a predetermined member having a brightness value larger than that of the bone;
 a step of acquiring a first label image including first correct answer information on the positions at which the predetermined bone region and the first simulated member image are displayed in the first learning input image;
 a step of performing, using the first learning input image and the first label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed; and
 a step of extracting the predetermined bone region and the predetermined member on the captured image based on a learning result of the machine learning.
(Item 2)
 The bone image analysis method according to item 1, wherein the step of acquiring the first learning input image includes a step of adding, to some of the plurality of bone region images, the first simulated member image simulating a metal at least a part of which is arranged inside the predetermined bone region.
(Item 3)
 The bone image analysis method according to item 2, wherein the step of adding the first simulated member image simulating the metal includes a step of adding, to the bone region image, the first simulated member image having a brightness value substantially equal to the brightness value of the metal.
(Item 4)
 The bone image analysis method according to any one of items 1 to 3, wherein the step of acquiring the first learning input image includes a step of acquiring a plurality of the first learning input images by adding the first simulated member image to each of the plurality of bone region images such that at least one of the brightness value, shape, position, and number of the first simulated member image differs among the bone region images to which it is added.
(Item 5)
 The bone image analysis method according to any one of items 1 to 3, wherein the step of acquiring the first label image includes a step of acquiring the first label image in which a common correct answer value is given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and
 the step of extracting the predetermined bone region and the predetermined member includes a step of integrally extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
(Item 6)
 The bone image analysis method according to any one of items 1 to 3, wherein the step of acquiring the first label image includes a step of acquiring the first label image in which different correct answer values are given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and
 the step of extracting the predetermined bone region and the predetermined member includes a step of individually extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
(Item 7)
 The bone image analysis method according to any one of items 1 to 3, wherein the step of performing the machine learning further includes a step of re-learning, after the learning using the first learning input image and the first label image, using a second learning input image in which a second simulated member image simulating the predetermined member, which differs from the first simulated member image of the first learning input image in at least one of brightness value, shape, position, and number, is added to the bone region image from which the first learning input image was generated, and a second label image including second correct answer information on the positions at which the predetermined bone region and the second simulated member image are displayed in the second learning input image, and
 the step of extracting the predetermined bone region and the predetermined member based on the learning result of the machine learning includes a step of extracting the predetermined bone region and the predetermined member on the captured image based on the re-learned learning result.
(Item 8)
 The bone image analysis method according to any one of items 1 to 3, further comprising a step of acquiring a third label image including third correct answer information on the position at which the predetermined bone region is displayed in the bone region images, among the plurality of bone region images, to which the first simulated member image is not added, wherein
 the step of performing the machine learning includes a step of performing the machine learning using pairs of the first learning input image and the first label image, and pairs of the bone region image to which the first simulated member image is not added and the third label image.
(Item 9)
 The bone image analysis method according to any one of items 1 to 3, wherein the step of acquiring the first learning input image includes, when the predetermined bone region exists in each of the left half and the right half of a subject's body, a step of acquiring, as the first learning input images, a one-side learning image in which the first simulated member image is added to a one-side bone image displaying the predetermined bone region of one of the left and right sides, and an other-side learning image in which the first simulated member image is added to a post-inversion image of the other-side bone obtained by horizontally flipping a pre-inversion image displaying the predetermined bone region of the other of the left and right sides.
(Item 10)
 The bone image analysis method according to any one of items 1 to 3, wherein the machine learning includes deep learning.
(Item 11)
 A learning method comprising:
 a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed;
 a step of acquiring a learning input image by adding, to some of the plurality of bone region images, a simulated member image simulating a predetermined member having a brightness value larger than that of the bone;
 a step of acquiring a label image including correct answer information on the positions at which the predetermined bone region and the simulated member image are displayed in the learning input image; and
 a step of performing, using the learning input image and the label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed.
 10 Captured image
 20 Bone region image
 21 Right bone region image (one-side bone image)
 22 Left bone region image (pre-inversion image of the other-side bone)
 23 Inverted bone region image (post-inversion image of the other-side bone)
 30 Learning input image (first learning input image)
 31 Right-side learning input image (one-side learning image)
 32 Left-side learning input image (other-side learning image)
 40, 70 Label image (first label image)
 41 Label image (third label image)
 50 Learning input image (second learning input image)
 60 Label image (second label image)
 100 X-ray imaging apparatus
 300 Member (predetermined member)
 300a Simulated member image (first simulated member image)
 300b Simulated member image (second simulated member image)
 400, 700, 700a, 700b Correct answer information (first correct answer information)
 410 Correct answer information (third correct answer information)
 600 Correct answer information (second correct answer information)
 A Bone region (predetermined bone region)
 T Subject

Claims (11)

  1.  A bone image analysis method comprising:
     a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed;
     a step of acquiring a first learning input image by adding, to some of the plurality of bone region images, a first simulated member image simulating a predetermined member having a brightness value larger than that of the bone;
     a step of acquiring a first label image including first correct answer information on the positions at which the predetermined bone region and the first simulated member image are displayed in the first learning input image;
     a step of performing, using the first learning input image and the first label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed; and
     a step of extracting the predetermined bone region and the predetermined member on the captured image based on a learning result of the machine learning.
  2.  The bone image analysis method according to claim 1, wherein the step of acquiring the first learning input image includes a step of adding, to some of the plurality of bone region images, the first simulated member image simulating a metal at least a part of which is arranged inside the predetermined bone region.
  3.  The bone image analysis method according to claim 2, wherein the step of adding the first simulated member image simulating the metal includes a step of adding, to the bone region image, the first simulated member image having a brightness value substantially equal to the brightness value of the metal.
  4.  The bone image analysis method according to any one of claims 1 to 3, wherein the step of acquiring the first learning input image includes a step of acquiring a plurality of the first learning input images by adding the first simulated member image to each of the plurality of bone region images such that at least one of the brightness value, shape, position, and number of the first simulated member image differs among the bone region images to which it is added.
  5.  The bone image analysis method according to any one of claims 1 to 3, wherein the step of acquiring the first label image includes a step of acquiring the first label image in which a common correct answer value is given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and
     the step of extracting the predetermined bone region and the predetermined member includes a step of integrally extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  6.  The bone image analysis method according to any one of claims 1 to 3, wherein the step of acquiring the first label image includes a step of acquiring the first label image in which different correct answer values are given to the positions corresponding to the predetermined bone region and the first simulated member image on the first label image, and
     the step of extracting the predetermined bone region and the predetermined member includes a step of individually extracting the predetermined bone region and the predetermined member on the captured image based on the learning result of the machine learning.
  7.  The bone image analysis method according to any one of claims 1 to 3, wherein the step of performing the machine learning further includes a step of re-learning, after the learning using the first learning input image and the first label image, using a second learning input image in which a second simulated member image simulating the predetermined member, which differs from the first simulated member image of the first learning input image in at least one of brightness value, shape, position, and number, is added to the bone region image from which the first learning input image was generated, and a second label image including second correct answer information on the positions at which the predetermined bone region and the second simulated member image are displayed in the second learning input image, and
     the step of extracting the predetermined bone region and the predetermined member based on the learning result of the machine learning includes a step of extracting the predetermined bone region and the predetermined member on the captured image based on the re-learned learning result.
  8.  The bone image analysis method according to any one of claims 1 to 3, further comprising a step of acquiring a third label image including third correct answer information on the position at which the predetermined bone region is displayed in the bone region images, among the plurality of bone region images, to which the first simulated member image is not added, wherein
     the step of performing the machine learning includes a step of performing the machine learning using pairs of the first learning input image and the first label image, and pairs of the bone region image to which the first simulated member image is not added and the third label image.
  9.  The bone image analysis method according to any one of claims 1 to 3, wherein the step of acquiring the first learning input image includes, when the predetermined bone region exists in each of the left half and the right half of a subject's body, a step of acquiring, as the first learning input images, a one-side learning image in which the first simulated member image is added to a one-side bone image displaying the predetermined bone region of one of the left and right sides, and an other-side learning image in which the first simulated member image is added to a post-inversion image of the other-side bone obtained by horizontally flipping a pre-inversion image displaying the predetermined bone region of the other of the left and right sides.
  10.  The bone image analysis method according to any one of claims 1 to 3, wherein the machine learning includes deep learning.
  11.  A learning method comprising:
     a step of acquiring a plurality of bone region images in which a predetermined bone region is displayed;
     a step of acquiring a learning input image by adding, to some of the plurality of bone region images, a simulated member image simulating a predetermined member having a brightness value larger than that of the bone;
     a step of acquiring a label image including correct answer information on the positions at which the predetermined bone region and the simulated member image are displayed in the learning input image; and
     a step of performing, using the learning input image and the label image, machine learning for extracting the predetermined bone region and the predetermined member on a captured image that is captured by an X-ray imaging apparatus and in which the predetermined bone region and the predetermined member are displayed.
PCT/JP2019/024263 2019-06-19 2019-06-19 Bone section image analysis method and learning method WO2020255292A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021528532A JP7173338B2 (en) 2019-06-19 2019-06-19 Bone image analysis method and learning method
CN201980096648.4A CN113873945A (en) 2019-06-19 2019-06-19 Bone image analysis method and learning method
PCT/JP2019/024263 WO2020255292A1 (en) 2019-06-19 2019-06-19 Bone section image analysis method and learning method
KR1020217041120A KR20220010529A (en) 2019-06-19 2019-06-19 How to interpret and learn a bone image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/024263 WO2020255292A1 (en) 2019-06-19 2019-06-19 Bone section image analysis method and learning method

Publications (1)

Publication Number Publication Date
WO2020255292A1 true WO2020255292A1 (en) 2020-12-24

Family

ID=74040359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/024263 WO2020255292A1 (en) 2019-06-19 2019-06-19 Bone section image analysis method and learning method

Country Status (4)

Country Link
JP (1) JP7173338B2 (en)
KR (1) KR20220010529A (en)
CN (1) CN113873945A (en)
WO (1) WO2020255292A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005156334A (en) * 2003-11-25 2005-06-16 Nec Tohoku Sangyo System Kk Pseudo defective image automatic creation device and imaging inspection device
JP2013240584A (en) * 2012-04-27 2013-12-05 Nihon Univ Image processing apparatus, x-ray ct scanner and image processing method
JP2015530193A (en) * 2012-09-27 2015-10-15 シーメンス プロダクト ライフサイクル マネージメント ソフトウェアー インコーポレイテッドSiemens Product Lifecycle Management Software Inc. Multiple bone segmentation for 3D computed tomography

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2638875B2 (en) 1988-01-31 1997-08-06 株式会社島津製作所 Bone mineral quantitative analyzer
US8903167B2 (en) * 2011-05-12 2014-12-02 Microsoft Corporation Synthesizing training samples for object recognition
CN108460414B (en) * 2018-02-27 2019-09-17 北京三快在线科技有限公司 Generation method, device and the electronic equipment of training sample image
CN108509915B (en) * 2018-04-03 2021-10-26 百度在线网络技术(北京)有限公司 Method and device for generating face recognition model
CN109255767B (en) * 2018-09-26 2021-03-12 北京字节跳动网络技术有限公司 Image processing method and device
CN109523507B (en) * 2018-09-26 2023-09-19 苏州六莲科技有限公司 Method and device for generating lesion image and computer readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005156334A (en) * 2003-11-25 2005-06-16 Nec Tohoku Sangyo System Kk Pseudo defective image automatic creation device and imaging inspection device
JP2013240584A (en) * 2012-04-27 2013-12-05 Nihon Univ Image processing apparatus, x-ray ct scanner and image processing method
JP2015530193A (en) * 2012-09-27 2015-10-15 シーメンス プロダクト ライフサイクル マネージメント ソフトウェアー インコーポレイテッドSiemens Product Lifecycle Management Software Inc. Multiple bone segmentation for 3D computed tomography

Also Published As

Publication number Publication date
KR20220010529A (en) 2022-01-25
CN113873945A (en) 2021-12-31
JPWO2020255292A1 (en) 2020-12-24
JP7173338B2 (en) 2022-11-16

Similar Documents

Publication Publication Date Title
Precht et al. Radiographers’ perspectives’ on Visual Grading Analysis as a scientific method to evaluate image quality
US9142020B2 (en) Osteo-articular structure
JP2007325928A (en) Method of processing radiation image in tomosynthesis for detecting radiological sign
JP2004057831A (en) Method and system for low-dose image simulation of image forming system
CN112165900A (en) Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device
Kim et al. 3D reconstruction of leg bones from X-ray images using CNN-based feature analysis
US11995838B2 (en) System and method for imaging
KR20240013724A (en) Artificial Intelligence Training Using a Multipulse X-ray Source Moving Tomosynthesis Imaging System
Mastmeyer et al. Direct haptic volume rendering in lumbar puncture simulation
US6278760B1 (en) Radiation image forming method and apparatus
Brooks et al. Automated analysis of the American College of Radiology mammographic accreditation phantom images
Oprea et al. Image processing techniques used for dental x-ray image analysis
KR20210028559A (en) Image analyzing method, image processing apparatus, bone mineral density measuring apparatus and learning model creation method
AU2021100684A4 (en) DEPCADDX - A MATLAB App for Caries Detection and Diagnosis from Dental X-rays
CN106308836B (en) Computer tomography image correction system and method
JP2009160313A (en) Image processing apparatus, image processing method, and computer program
WO2020255292A1 (en) Bone section image analysis method and learning method
US7260254B2 (en) Comparing images
EP1903787A2 (en) Image processing device and image processing method
US20220358652A1 (en) Image processing apparatus, radiation imaging apparatus, image processing method, and storage medium
WO2020255290A1 (en) Organ image analysis method and learning method
US11324466B2 (en) Creating monochromatic CT image
CN114343702A (en) X-ray imaging apparatus, image processing method, and learning-completed model generation method
EP4368109A1 (en) Method for training a scatter correction model for use in an x-ray imaging system
KR20210083452A (en) Dual energy tomography apparatus, and metal image classification method using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19934035

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021528532

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217041120

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19934035

Country of ref document: EP

Kind code of ref document: A1