WO2024057942A1 - Ocular fundus image processing device and ocular fundus image processing program - Google Patents

Ocular fundus image processing device and ocular fundus image processing program Download PDF

Info

Publication number
WO2024057942A1
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessel
fundus image
image processing
distribution
fundus
Prior art date
Application number
PCT/JP2023/031715
Other languages
French (fr)
Japanese (ja)
Inventor
理幸 齋藤
航介 野田
佳苗 福津
晋 石田
Original Assignee
National University Corporation Hokkaido University (国立大学法人北海道大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University Corporation Hokkaido University
Publication of WO2024057942A1 publication Critical patent/WO2024057942A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present disclosure relates to a fundus image processing device that processes a fundus image of a subject's eye, and a fundus image processing program that is executed in the fundus image processing device.
  • a typical objective of the present disclosure is to provide a fundus image processing device and a fundus image processing program that can appropriately present information regarding a wide range of blood vessels in a two-dimensional fundus image.
  • A first aspect of a fundus image processing device provided by a typical embodiment of the present disclosure is a fundus image processing device that processes a fundus image of an eye to be examined. The control unit of the fundus image processing device executes: a fundus image acquisition step of acquiring a plurality of fundus images, captured by a fundus image capturing device, each including blood vessels in the fundus of a subject's eye; a blood vessel image acquisition step of acquiring, for each acquired fundus image, a blood vessel image showing at least one of the arteries and veins included in that image; and a probability map generation step of generating a retinal blood vessel distribution probability map, which indicates the distribution of the existence probability of blood vessels in the retina of the subject's eye, by adding the plurality of blood vessel images acquired for the plurality of fundus images after aligning them.
  • A second aspect of a fundus image processing device provided by a typical embodiment of the present disclosure is a fundus image processing device that processes a fundus image of an eye to be examined. The control unit of the fundus image processing device executes: a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device, that includes blood vessels in the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring a target blood vessel image showing at least one of the arteries and veins included in the acquired fundus image; and a feature information generation step of generating blood vessel distribution feature information, which indicates the characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region, in a retinal blood vessel distribution probability map (a map generated by adding multiple aligned blood vessel images, indicating the distribution of the existence probability of blood vessels in the retina of the eye to be examined), that corresponds to the blood vessel region of the target blood vessel image.
  • A first aspect of a fundus image processing program provided by a typical embodiment of the present disclosure is a fundus image processing program executed by a fundus image processing device that processes a fundus image of a subject's eye. When executed by the control unit of the fundus image processing device, the program causes the device to perform: a fundus image acquisition step of acquiring a plurality of fundus images, captured by a fundus image capturing device, each including blood vessels in the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring, for each fundus image, a blood vessel image showing at least one of the arteries and veins included in that image; and a probability map generation step of generating a retinal blood vessel distribution probability map, which indicates the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding the plurality of blood vessel images acquired for the plurality of fundus images after aligning them.
  • A second aspect of a fundus image processing program provided by a typical embodiment of the present disclosure is a fundus image processing program executed by a fundus image processing device that processes a fundus image of an eye to be examined. When executed by the control unit of the fundus image processing device, the program causes the device to perform: a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device, that includes blood vessels in the fundus of the eye to be examined; and a feature information generation step of generating blood vessel distribution feature information, which indicates the characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region, in a retinal blood vessel distribution probability map showing the distribution of the existence probability of blood vessels in the retina of the subject's eye, that corresponds to the blood vessel region of the target blood vessel image.
  • the control unit of the fundus image processing device of the first aspect illustrated in the present disclosure executes a fundus image acquisition step, a blood vessel image acquisition step, and a probability map generation step.
  • the control unit obtains a plurality of fundus images including blood vessels in the fundus of the subject's eye, which are captured by the fundus image capturing device.
  • the control unit acquires a blood vessel image indicating at least one of an artery and a vein included in each acquired fundus image.
  • In the probability map generation step, the control unit generates a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the subject's eye, by adding the plurality of blood vessel images acquired for the plurality of fundus images after aligning them.
  • The retinal blood vessel distribution probability map generated in this way appropriately shows the two-dimensional distribution of the existence probability of retinal blood vessels within the set (population) of eyes whose fundus images were taken. Therefore, for example, by comparing a fundus image of a target eye (e.g., a blood vessel image obtained from the fundus image, or the fundus image itself) with the retinal blood vessel distribution probability map, the condition of the blood vessels of that eye can be appropriately understood over a wide range (for example, region by region) against the blood vessel tendencies of the population.
  • Furthermore, by generating a retinal blood vessel distribution probability map for each of a plurality of different populations, the characteristics of the state of blood vessels in each population can also be appropriately understood over a wide range. That is, the retinal blood vessel distribution probability map yields information indicating the state of retinal blood vessels for each region. Therefore, according to the technology of the present disclosure, information regarding a wide range of blood vessels in the fundus can be appropriately presented.
  • Various fundus images from which blood vessel images can be obtained can be used as the fundus images used to generate the retinal vascular distribution probability map.
  • a two-dimensional color fundus image obtained by photographing the fundus from the front with a fundus camera is used as the fundus image.
  • a blood vessel image is appropriately acquired based on the color fundus image.
  • a retinal vascular distribution probability map may be generated based on a two-dimensional OCT angio image of the fundus taken by an OCT (Optical Coherence Tomography) device.
  • a two-dimensional fundus image taken from the front using a laser scanning ophthalmoscope (SLO) may be input into the mathematical model.
  • the control unit may generate a retinal vascular distribution probability map by adding and averaging a plurality of aligned blood vessel images.
  • In this case, each pixel of the retinal blood vessel distribution probability map can be handled in the same units as the pixel values (for example, luminance values) of the individual blood vessel images.
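The add-and-average operation described above can be sketched as follows, assuming binary vessel masks (1 = vessel, 0 = background) that have already been aligned; the function name and toy data are illustrative, not taken from the patent text:

```python
import numpy as np

def make_probability_map(vessel_masks):
    """Generate a retinal vessel distribution probability map by adding and
    averaging aligned binary vessel images of identical shape."""
    stack = np.stack([m.astype(np.float64) for m in vessel_masks], axis=0)
    # Averaging keeps the map in the same units as the input pixel values:
    # each pixel holds the fraction of images in which a vessel was present.
    return stack.mean(axis=0)

# Two toy 3x3 "vessel images": a vessel pixel at (1, 1) in both,
# and at (0, 0) in only one of them.
a = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
b = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
prob = make_probability_map([a, b])
print(prob[1, 1])  # 1.0 -> vessel present in every image
print(prob[0, 0])  # 0.5 -> vessel present in half of the images
```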
  • the control unit may acquire a blood vessel image indicating at least one of an artery and a vein included in the fundus image by inputting the fundus image into a mathematical model trained by a machine learning algorithm.
  • a blood vessel image showing blood vessels with high accuracy is likely to be obtained.
  • The mathematical model may be trained using fundus images of subjects' eyes taken in the past as input training data, and blood vessel images showing at least one of the arteries and veins in those fundus images as output training data.
  • the trained mathematical model can appropriately output a blood vessel image based on the input fundus image.
  • At least one of the plurality of blood vessel images may be generated in response to an instruction input by an operator via an operation unit (eg, a mouse, etc.).
  • The control unit may express the luminance value of each pixel of the two-dimensional retinal blood vessel distribution probability map by color and shading (that is, convert the map into a heat map).
  • the control unit can cause the display unit to display a monochrome retinal blood vessel distribution probability map according to the luminance value of each pixel.
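The two display options above (heat map and monochrome) can be sketched as follows; the simple blue-to-red ramp is an illustrative stand-in for whatever colormap the actual device uses:

```python
import numpy as np

def to_heatmap(prob_map):
    """Map each pixel's existence probability (0..1) to a color:
    low probability -> blue, high probability -> red (linear ramp)."""
    p = np.clip(prob_map, 0.0, 1.0)
    rgb = np.stack([p, np.zeros_like(p), 1.0 - p], axis=-1)  # R, G, B channels
    return (rgb * 255).astype(np.uint8)

def to_monochrome(prob_map):
    """Monochrome alternative: scale the probability to 8-bit luminance."""
    return (np.clip(prob_map, 0.0, 1.0) * 255).astype(np.uint8)

prob = np.array([[0.0, 0.5], [0.75, 1.0]])
heat = to_heatmap(prob)     # shape (2, 2, 3); blue at 0.0, red at 1.0
mono = to_monochrome(prob)  # 8-bit grayscale version of the same map
```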
  • the control unit may acquire both an artery blood vessel image and a venous blood vessel image included in each fundus image.
  • In the probability map generation step, the control unit may generate an arterial retinal blood vessel distribution probability map by adding the plurality of arterial blood vessel images, and a venous retinal blood vessel distribution probability map by adding the plurality of venous blood vessel images. Depending on the condition of the subject, such as disease, different changes may appear in the arteries and veins of the fundus.
  • both the retinal vascular distribution probability map for arteries (hereinafter sometimes referred to as “arterial distribution probability map”) and the retinal vascular distribution probability map for veins (hereinafter sometimes referred to as “venous distribution probability map”) are generated. This makes it easier to obtain more useful information.
  • Alternatively, the control unit may generate a retinal blood vessel distribution probability map that includes both arteries and veins by acquiring multiple blood vessel images showing both arteries and veins in the fundus and adding the acquired images together. That is, a retinal blood vessel distribution probability map in which arteries and veins are not distinguished may be generated.
  • the fundus image processing device can also generate only one of the artery distribution probability map and the vein distribution probability map.
  • the control unit may further execute a papilla position specifying step of specifying the position of the optic disc in the acquired blood vessel image.
  • the control unit may generate the retinal vascular distribution probability map by adding together the positions of each of the plurality of blood vessel images with the position of the optic disc as a reference.
  • the structure of the retinal blood vessels in the fundus of the eye is such that they spread outward from the papilla.
  • the general structure of the retinal blood vessels extending from the papilla tends to be uniform regardless of the eye to be examined.
  • The control unit may specify the centroid position of the optic disc shown in the blood vessel image in the papilla position specifying step.
  • the control unit may add each of the plurality of blood vessel images in a state where the positions thereof are aligned with respect to the centroid position of the optic disc. In this case, since the plurality of blood vessel images are aligned based on the centroid position of one point, the accuracy of the generated retinal blood vessel distribution probability map is further improved.
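Alignment on the optic-disc centroid can be sketched as follows, using an integer-pixel translation for simplicity; the helper names and the assumption that a binary disc mask is available are illustrative:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary optic-disc mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def align_to_reference(vessel_mask, disc_mask, ref_center):
    """Translate a vessel image so the optic-disc centroid lands on a common
    reference point (integer-pixel shift; pixels moved off-image are dropped)."""
    r, c = centroid(disc_mask)
    dr = int(round(ref_center[0] - r))
    dc = int(round(ref_center[1] - c))
    h, w = vessel_mask.shape
    shifted = np.zeros_like(vessel_mask)
    # destination[r + dr, c + dc] = source[r, c], clipped to the image bounds
    shifted[max(0, dr):min(h, h + dr), max(0, dc):min(w, w + dc)] = \
        vessel_mask[max(0, -dr):min(h, h - dr), max(0, -dc):min(w, w - dc)]
    return shifted

# Toy example: disc centroid at (1, 1), common reference point (2, 2),
# so every vessel pixel is shifted down and right by one.
disc = np.zeros((5, 5)); disc[1, 1] = 1
vessel = np.zeros((5, 5)); vessel[0, 0] = 1
aligned = align_to_reference(vessel, disc, (2, 2))
```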
  • The control unit may identify the position of the papilla in the blood vessel image by inputting the fundus image into a mathematical model trained by a machine learning algorithm.
  • In this case, the position of the papilla can be identified easily and with high accuracy.
  • The blood vessel image and the papilla position may both be output by the same mathematical model, or may be output by separate mathematical models.
  • Alternatively, the operator may identify the position of the papilla in the fundus image or blood vessel image and input the identified position into the fundus image processing device.
  • The control unit may specify the position input via the operation unit or the like as the papilla position. Further, the control unit may specify the papilla position by performing known image processing on the fundus image or the blood vessel image.
  • the control unit may automatically align the multiple blood vessel images. In this case, the amount of work required by the operator to generate the retinal vascular distribution probability map is appropriately reduced. Further, the operator may input an instruction for aligning the plurality of blood vessel images to the fundus image processing device via the operation unit.
  • the fundus image processing device may align a plurality of blood vessel images in accordance with an instruction input via the operation unit. Even in this case, the retinal blood vessel distribution probability map is appropriately generated.
  • the control unit may perform processing to make the range on the fundus reflected in the fundus image and blood vessel image uniform. For example, the control unit may make the width of the photographing range within the image, which changes depending on the photographing magnification, uniform, depending on the photographing magnification when the fundus image is photographed. Further, the control unit may make the range on the fundus reflected in the image uniform by extracting an image within a specific range from the fundus image or blood vessel image. However, if a plurality of fundus images in the same range are captured by a fundus image capturing device at the same magnification, it is possible to omit the process of making the range on the fundus uniform.
  • the control unit may further execute a target blood vessel image acquisition step and a feature map generation step.
  • the control unit acquires a target blood vessel image that is a blood vessel image to be analyzed.
  • In the feature map generation step, the control unit generates, as a target blood vessel feature map, a map of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image.
  • the target blood vessel feature map indicates the characteristics of blood vessel distribution in the target blood vessel image. For example, according to the target blood vessel feature map, the probability of existence of blood vessels in the population in the region of blood vessels shown in the target blood vessel image can be appropriately grasped on the image.
  • In the population, the existence probability of blood vessels in each region of the blood vessel distribution probability map correlates strongly with the thickness of the blood vessels in that region. Therefore, the target blood vessel feature map makes it easy to recognize, for example, cases where a thick blood vessel exists in a region where thin blood vessels are likely in the population, or where a thin blood vessel exists in a region where thick blood vessels are likely. In this way, the characteristics of the blood vessels shown in the target blood vessel image can be easily grasped.
  • the target blood vessel image may be acquired based on a fundus image to be analyzed (hereinafter referred to as "target fundus image").
  • the target blood vessel image may be acquired by inputting the target fundus image to a mathematical model trained by a machine learning algorithm.
  • the target blood vessel feature map may be generated with the retinal blood vessel distribution probability map and the target blood vessel image aligned.
  • The various methods described above (for example, the method using the position of the papilla as a reference) can be used for the alignment.
  • the alignment may be performed automatically or manually by an operator.
  • a specific method for generating the target blood vessel feature map can also be selected as appropriate.
  • the control unit may generate the target blood vessel feature map by extracting a region corresponding to a blood vessel region of the target blood vessel image from the retinal blood vessel distribution probability map. Further, the control unit may generate the target blood vessel feature map by masking a region other than the blood vessel region of the target blood vessel image in the retinal blood vessel distribution probability map.
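Both generation methods mentioned above (extraction and masking) reduce to keeping the probability-map values only inside the vessel region; a minimal sketch with illustrative names:

```python
import numpy as np

def target_vessel_feature_map(prob_map, vessel_mask, mask_value=0.0):
    """Keep the probability-map values only where the target blood vessel
    image has a vessel; assign mask_value everywhere else."""
    return np.where(vessel_mask.astype(bool), prob_map, mask_value)

prob = np.array([[0.9, 0.1], [0.4, 0.8]])   # toy probability map
mask = np.array([[1, 0], [0, 1]])           # toy vessel region
fmap = target_vessel_feature_map(prob, mask)
print(fmap)  # [[0.9 0. ] [0.  0.8]]
```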
  • the control unit may further execute a target blood vessel image acquisition step and a blood vessel distribution histogram generation step.
  • the control unit acquires a target blood vessel image that is the blood vessel image to be analyzed.
  • the control unit generates a blood vessel distribution histogram based on information of a region corresponding to a blood vessel region of the target blood vessel image in the retinal blood vessel distribution probability map.
  • the blood vessel distribution histogram shows the number of pixels in the blood vessel region of the target blood vessel image according to the existence probability of the blood vessel indicated by the retinal blood vessel distribution probability map.
  • the characteristics of the blood vessel distribution in the target blood vessel image appear prominently.
  • the range from the lowest value to the highest value of the existence probability of blood vessels in the retinal blood vessel distribution probability map is divided into N pieces (256 pieces in the present disclosure, as an example).
  • each pixel existing within the blood vessel region of the target blood vessel image belongs to one of N areas (hereinafter referred to as "existence probability areas") divided in order of blood vessel existence probability.
  • According to the blood vessel distribution histogram, the distribution of the number of pixels belonging to each of the N existence probability areas is appropriately grasped for the pixels in the blood vessel region of the target blood vessel image. The blood vessel distribution histogram therefore provides useful information that could not be obtained with past analysis methods.
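The histogram described above can be sketched as follows, assuming a probability map and a binary vessel mask; N = 4 areas are used in the toy example instead of the 256 mentioned in the disclosure:

```python
import numpy as np

def vessel_distribution_histogram(prob_map, vessel_mask, n_bins=256):
    """Count, for the pixels inside the target image's vessel region, how
    many fall into each of n_bins equal-width existence-probability areas
    spanning the lowest to the highest value of the probability map."""
    values = prob_map[vessel_mask.astype(bool)]
    counts, _ = np.histogram(values, bins=n_bins,
                             range=(prob_map.min(), prob_map.max()))
    return counts

# Toy map with values 0..1; the "vessel region" is wherever the map > 0.5,
# which selects the 8 highest-probability pixels.
prob = np.linspace(0.0, 1.0, 16).reshape(4, 4)
mask = prob > 0.5
hist = vessel_distribution_histogram(prob, mask, n_bins=4)
print(hist)  # [0 0 4 4] -> all 8 vessel pixels fall in the top two areas
```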
  • the target blood vessel image may be acquired by a method similar to the method described in the method for generating a target blood vessel feature map. Furthermore, information on a region of the retinal vascular distribution probability map that corresponds to a region of a blood vessel in the target blood vessel image may be acquired in a state where the retinal vascular distribution probability map and the target blood vessel image are aligned.
  • the various methods described above can be used for alignment.
  • the control unit may further execute an aggregate histogram generation step of generating an aggregate histogram that aggregates a plurality of blood vessel distribution histograms generated for a plurality of target blood vessel images.
  • The control unit may aggregate the plurality of blood vessel distribution histograms after correcting for the influence of age-related changes in the distribution of blood vessel existence probabilities for each subject to be analyzed. In this case, the characteristics of the blood vessel distribution can be grasped using an aggregate histogram in which the influence of age-related changes is suppressed.
  • The control unit may correct for the influence of age-related changes by performing, for each existence probability area divided in order of the blood vessel existence probability, a regression analysis using a regression equation in which age is the explanatory variable and the existence probability is the objective variable.
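The age correction described above can be sketched as a per-area linear regression with age as the explanatory variable, removing the fitted age trend before aggregation; this is one illustrative interpretation, not necessarily the patent's exact procedure:

```python
import numpy as np

def age_corrected_histograms(histograms, ages):
    """For each existence-probability area (histogram bin), fit a linear
    regression with age as the explanatory variable and the bin value as
    the objective variable, then remove the age-dependent component so the
    corrected histograms can be aggregated without age bias."""
    H = np.asarray(histograms, dtype=np.float64)  # shape (n_subjects, n_bins)
    ages = np.asarray(ages, dtype=np.float64)
    corrected = np.empty_like(H)
    mean_age = ages.mean()
    for b in range(H.shape[1]):
        slope, intercept = np.polyfit(ages, H[:, b], deg=1)
        # residual after removing the age trend, re-centred at the mean age
        corrected[:, b] = (H[:, b] - (slope * ages + intercept)
                           + (slope * mean_age + intercept))
    return corrected

# Toy data: one bin whose value grows exactly linearly with age.
ages = np.array([20.0, 40.0, 60.0])
hists = np.array([[40.0], [80.0], [120.0]])
corrected = age_corrected_histograms(hists, ages)
print(corrected)  # every subject sits at the mean-age level, 80.0
```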
  • the control unit may further execute a difference histogram generation step of generating a difference histogram between the blood vessel distribution histogram generated for each target blood vessel image and the aggregate histogram generated in the aggregate histogram generation step.
  • the control unit may generate an age-specific aggregate histogram in which the plurality of blood vessel distribution histograms generated for each of the plurality of target blood vessel images are aggregated for each age group to which the age of the subject to be analyzed belongs. According to the age-specific aggregate histogram, it is possible to appropriately understand the characteristics of blood vessel distribution by age.
  • the control unit may further execute a population-specific aggregate histogram generation step and a differential histogram generation step.
  • In the population-specific aggregate histogram generation step, the control unit generates an aggregate histogram for each of the plurality of populations to which the target blood vessel images belong, by aggregating the plurality of blood vessel distribution histograms generated for the target blood vessel images belonging to that population.
  • the control unit generates a difference histogram between the plurality of aggregate histograms generated for each population.
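The population-specific aggregation and the difference histogram can be sketched as follows; normalising each aggregate histogram before subtracting is an added assumption here, so that populations of different sizes remain comparable:

```python
import numpy as np

def aggregate_histogram(histograms):
    """Aggregate per-eye blood vessel distribution histograms for one
    population, normalised to sum to 1."""
    total = np.asarray(histograms, dtype=np.float64).sum(axis=0)
    return total / total.sum()

def difference_histogram(pop_a, pop_b):
    """Area-wise difference between two population aggregate histograms."""
    return aggregate_histogram(pop_a) - aggregate_histogram(pop_b)

a = [[2, 2, 0], [2, 2, 0]]   # population A: mass in low-probability areas
b = [[0, 2, 2], [0, 2, 2]]   # population B: mass in high-probability areas
diff = difference_histogram(a, b)
print(diff)  # [ 0.5  0.  -0.5]
```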
  • the characteristics of blood vessel distribution for each population can be appropriately understood using the difference histogram. For example, by determining the population according to various conditions such as the presence or absence of a disease in the subject or the state of the disease, useful information that could not be obtained with past analysis methods can be obtained.
  • the plurality of populations for which aggregate histograms are generated may include a population of diabetic patients who have not developed diabetic retinopathy.
  • In this case, the characteristics of diabetic patients who have not developed diabetic retinopathy appear clearly in the difference histogram. Therefore, for example, by comparing information on the blood vessel distribution histogram of an individual eye with the difference histogram, it may be possible to estimate the likelihood that the subject has diabetes or the likelihood that diabetic retinopathy will develop.
  • Useful information regarding at least one of the following is also exemplified in this disclosure: arteriosclerotic disease; retinal vascular occlusive diseases such as branch retinal vein occlusion (BRVO), branch retinal artery occlusion (BRAO), central retinal vein occlusion (CRVO), and central retinal artery occlusion (CRAO); retinal degenerative diseases such as retinitis pigmentosa; glaucoma; uveitis; refractive errors (myopia, hyperopia); and follow-up of retinopathy of prematurity.
  • The state of blood vessels in a living body can be observed non-invasively through the retina. Therefore, regardless of whether the eye to be examined has a disease, the conditions of parts of the living body other than the eye (e.g., the condition of various organs, of the blood vessels, of blood components, of blood pressure, etc.) often appear in the retinal blood vessels of that eye. Accordingly, the technology exemplified in the present disclosure is highly likely to yield useful information regarding the condition of the living body beyond the subject's eye.
  • the comparison target population can be set as appropriate.
  • the comparison target population may include all subjects/eyes to be examined.
  • Alternatively, the comparison population may be the set of all subjects/eyes excluding the subject/eye that is being compared against it.
  • the control unit may further execute a feature information generation step and a mathematical model construction step.
  • In the feature information generation step, the control unit generates blood vessel distribution feature information (for example, at least one of the aforementioned blood vessel distribution histogram and target blood vessel feature map), which shows the characteristics of the blood vessel distribution of a specific blood vessel image, by processing the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of that image.
  • In the mathematical model construction step, the control unit generates a plurality of sets of training data in which blood vessel distribution feature information is used as input training data and information indicating the presence or absence of a specific disease in the subject from whom that input training data was acquired is used as output training data, and trains a mathematical model with a machine learning algorithm using this training data set, thereby building a mathematical model that outputs information regarding the specific disease.
  • The blood vessel distribution feature information represents the characteristics of the blood vessel distribution of the target blood vessel image. Furthermore, depending on the type of disease, the characteristics of the blood vessel distribution in the target blood vessel image often change according to the disease state of the subject (target subject) from whom the target blood vessel image was acquired. Therefore, by inputting the blood vessel distribution feature information of a target subject into a mathematical model trained in advance on a training data set containing the blood vessel distribution feature information of multiple subjects, information regarding a specific disease can be appropriately obtained.
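The training described above can be sketched with a minimal logistic-regression model, one possible machine learning algorithm (the disclosure does not prescribe a specific one), taking blood vessel distribution feature information as input and disease presence/absence as output; all names and the toy data are illustrative:

```python
import numpy as np

def train_disease_model(features, labels, lr=0.1, epochs=500):
    """Fit a minimal logistic-regression model by gradient descent:
    blood vessel distribution feature information (e.g. a normalised
    histogram) as input, disease present (1) / absent (0) as output."""
    X = np.asarray(features, dtype=np.float64)
    y = np.asarray(labels, dtype=np.float64)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
        grad = p - y                            # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_disease_probability(w, b, feature):
    """Model output: probability that the subject has the disease."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(feature) @ w + b)))

# Toy separable data: diseased eyes put more histogram mass in the last area.
X = [[0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.3, 0.7]]
y = [0, 0, 1, 1]
w, b = train_disease_model(X, y)
print(predict_disease_probability(w, b, [0.1, 0.9]) > 0.5)  # True
```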
  • a vascular distribution histogram may be used as the vascular distribution characteristic information (input training data) for training the mathematical model and the vascular distribution characteristic information input to the mathematical model.
  • the blood vessel distribution histogram clearly shows the characteristics of the blood vessel distribution in the target blood vessel image. Therefore, by using a blood vessel distribution histogram, information regarding a specific disease can be easily obtained with higher accuracy.
  • the characteristics of the blood vessel distribution in the target blood vessel image also appear appropriately in the target blood vessel feature map. Therefore, it is also possible to use the target blood vessel feature map as input training data and input data to the mathematical model instead of or in conjunction with the blood vessel distribution histogram.
  • the mathematical model may be trained using vascular distribution feature information as input training data and using information indicating the presence or absence of diabetes of the subject from whom the input training data was acquired as output training data.
  • the mathematical model may output information regarding diabetes of the target subject (for example, probability of having diabetes, etc.) by inputting the blood vessel distribution characteristic information of the target blood vessel image.
  • Information indicating the presence or absence of diseases other than diabetes-related diseases may also be used: for example, arteriosclerotic disease; retinal vascular occlusive diseases (branch retinal vein occlusion (BRVO), branch retinal artery occlusion (BRAO), central retinal vein occlusion (CRVO), central retinal artery occlusion (CRAO)); retinal degenerative diseases such as retinitis pigmentosa; glaucoma; uveitis; refractive errors (myopia, hyperopia); and retinopathy of prematurity. Even in this case, information regarding the target subject's disease may be appropriately obtained.
  • the vascular distribution feature information (input training data) for training the mathematical model and the vascular distribution feature information input to the mathematical model may use vascular distribution feature information about arteries and vascular distribution feature information about veins together. In this case, information regarding the disease can be easily obtained with higher accuracy.
  • vascular distribution characteristic information that includes arteries and veins.
  • vascular distribution characteristic information for either arteries or veins.
  • the input training data for training the mathematical model may include, in addition to the vascularity characteristic information, information on at least one of the age and gender of the subject from whom the vascularity characteristic information was acquired.
  • Blood vessel distribution characteristic information often changes depending on at least one of the age and gender of the subject. Therefore, by including at least one of age and gender information in the input training data, it becomes easier for the mathematical model to output appropriate information according to the age and gender of the target subject.
  • when inputting vascular distribution characteristic information into the constructed mathematical model, it is desirable to also input information on at least one of the age and gender of the subject from whom the vascular distribution characteristic information was acquired.
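The input-data assembly described above might be sketched as follows. This is illustrative only: the frequency normalization, the gender encoding, and the feature ordering are assumptions of this sketch, not specified in the disclosure.

```python
import numpy as np

def build_input_features(artery_hist, vein_hist, age, gender):
    """Concatenate vascular distribution histograms with subject attributes.

    artery_hist / vein_hist: 1-D blood vessel distribution histograms.
    age: subject age in years; gender: 0 or 1 (the encoding is an assumption).
    Histograms are normalized to frequencies so that eyes with different
    vessel-pixel counts remain comparable.
    """
    a = np.asarray(artery_hist, dtype=float)
    v = np.asarray(vein_hist, dtype=float)
    a = a / a.sum() if a.sum() > 0 else a
    v = v / v.sum() if v.sum() > 0 else v
    # Append age (scaled) and gender as extra features.
    return np.concatenate([a, v, [age / 100.0, float(gender)]])
```

The resulting vector can serve both as input training data and as the input for inference on a target subject.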
  • the control unit of the fundus image processing device of the second aspect exemplified in the present disclosure executes a fundus image acquisition step, a blood vessel image acquisition step, and a feature information generation step.
  • the control unit obtains a fundus image to be analyzed that includes blood vessels in the fundus of the eye to be examined, which is captured by the fundus image capturing device.
  • the control unit acquires a target blood vessel image that is a blood vessel image indicating at least one of an artery and a vein included in the acquired fundus image.
  • in the feature information generation step, the control unit generates vascular distribution feature information indicating the characteristics of the vascular distribution of the target blood vessel image by processing, in a retinal blood vessel distribution probability map (a map generated by adding a plurality of blood vessel images in an aligned state, indicating the distribution of the existence probability of blood vessels in the retina of the subject's eye), the information of the region corresponding to the blood vessel region of the target blood vessel image.
  • in the retinal blood vessel distribution probability map, the two-dimensional distribution of the existence probability of retinal blood vessels in a set (population) of a plurality of examined eyes whose fundus images have been taken is appropriately shown.
  • the characteristics of the vascular distribution of the target blood vessel image are appropriately expressed in the information of the area corresponding to the blood vessel area of the target blood vessel image. Therefore, by generating the vascular distribution characteristic information, the characteristics of the vascular distribution of the target blood vessel image can be easily grasped.
  • the fundus image processing device may acquire information on a retinal blood vessel distribution probability map generated in advance by a medical device manufacturer or a research facility, and execute the feature information generation step based on the acquired map.
  • the fundus image processing device may generate a retinal blood vessel distribution probability map. Note that a method similar to the method described in the first aspect can be adopted as a method for generating the vascular distribution probability map.
  • the control unit may generate, as the target blood vessel feature map, a map of a region corresponding to the region of the blood vessel in the target blood vessel image, out of the retinal blood vessel distribution probability map.
  • as described above, according to the target blood vessel feature map, the characteristics of the blood vessel distribution in the target blood vessel image can be grasped at a glance. Therefore, diagnosis of the eye to be examined, etc. is appropriately assisted.
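The masking operation that yields such a map can be sketched as follows, assuming the probability map and the target image's vessel region are available as 2-D arrays; treating non-vessel pixels as 0 is an assumption of this sketch.

```python
import numpy as np

def target_vessel_feature_map(prob_map, vessel_mask):
    """Keep probability-map values only where the target image has vessels.

    prob_map: 2-D retinal blood vessel distribution probability map.
    vessel_mask: 2-D boolean array, True at vessel pixels of the target image.
    Non-vessel pixels are set to 0 (the background value is a choice of
    this sketch).
    """
    prob_map = np.asarray(prob_map, dtype=float)
    return np.where(np.asarray(vessel_mask, dtype=bool), prob_map, 0.0)
```

Displaying the returned array (for example as a heat map) shows at a glance where the target image's vessels run through high- or low-probability regions of the population map.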
  • the output method of the generated target blood vessel feature map can be selected as appropriate.
  • the control unit may output the target blood vessel feature map by displaying the target blood vessel feature map on the display unit.
  • the control unit may generate a blood vessel distribution histogram that counts the number of pixels in the blood vessel region of the target blood vessel image according to the existence probability of blood vessels indicated by the region of the retinal blood vessel distribution probability map corresponding to that blood vessel region. As described above, the blood vessel distribution histogram clearly shows the characteristics of the blood vessel distribution in the target blood vessel image. Therefore, diagnosis of the eye to be examined, etc. is appropriately assisted.
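Such a histogram can be sketched as follows; the bin count and the assumption that map values lie in [0, 1] are choices of this sketch, not specified in the disclosure.

```python
import numpy as np

def vessel_distribution_histogram(prob_map, vessel_mask, bins=10):
    """Count the target image's vessel pixels by the existence probability
    that the retinal blood vessel distribution probability map assigns to
    their location.

    prob_map values are assumed to lie in [0, 1]; the bin count is arbitrary.
    Returns (counts, bin_edges) as produced by numpy.histogram.
    """
    prob_map = np.asarray(prob_map, dtype=float)
    vessel_mask = np.asarray(vessel_mask, dtype=bool)
    # Probability values sampled only at the vessel pixels of the target image.
    probs_at_vessels = prob_map[vessel_mask]
    return np.histogram(probs_at_vessels, bins=bins, range=(0.0, 1.0))
```

An eye whose vessels mostly fall in high-probability regions yields a histogram weighted toward the upper bins; deviations toward the lower bins may indicate an atypical vessel course.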
  • the output method of the generated blood vessel distribution histogram can also be selected as appropriate.
  • the control unit may output the blood vessel distribution histogram by displaying the blood vessel distribution histogram on the display unit.
  • the control unit may display the blood vessel distribution histogram generated for the target blood vessel image and the aggregate histogram described in the first aspect on the display unit in a comparable state (for example, side by side).
  • the user can easily compare the characteristics of the blood vessel distribution for the population of the aggregate histogram and the characteristics of the blood vessel distribution of the fundus of the eye to be analyzed.
  • the fundus image processing device may acquire an aggregate histogram generated in advance by a medical device manufacturer, a research facility, or the like.
  • the aggregated histogram is generated by summing up the plurality of blood vessel distribution histograms generated for each of the plurality of blood vessel images.
  • the blood vessel distribution histogram is generated based on the information of the region corresponding to the blood vessel region of each blood vessel image in the retinal blood vessel distribution probability map.
  • the blood vessel distribution histogram shows the number of pixels in the blood vessel region of each blood vessel image according to the probability of existence of the blood vessel indicated by the retinal blood vessel distribution probability map.
  • the control unit may generate a blood vessel distribution histogram based on information of a region corresponding to a blood vessel region of the target blood vessel image in the retinal blood vessel distribution probability map.
  • the control unit may generate a difference histogram that is the difference between an aggregate histogram, in which a plurality of blood vessel distribution histograms generated for the respective blood vessel images are aggregated, and the blood vessel distribution histogram generated for the target blood vessel image.
  • the characteristics of the vascular distribution for each analysis target are shown in comparison with the characteristics of the vascular distribution in the population of the aggregate histogram. Therefore, the characteristics of blood vessel distribution for each analysis target can be more easily understood.
  • the fundus image processing device may acquire an aggregate histogram generated in advance by a medical device manufacturer, a research facility, or the like.
  • the population of the aggregate histogram to be compared with the blood vessel distribution histogram to be analyzed (whether by comparative display or by taking a difference) can be selected as appropriate.
  • the comparison target population may include all subjects/eyes to be examined.
  • the user can compare the characteristics of the blood vessel distribution in the blood vessel image to be analyzed with the characteristics of the average blood vessel distribution based on the output information.
  • the population to be compared may be a population of subjects/eyes to be examined that meet specific conditions (for example, a population of diabetic patients who have not developed diabetic retinopathy).
  • based on the output information, the user can appropriately determine whether the characteristics of the blood vessel distribution in the blood vessel image to be analyzed are similar to the characteristics of the blood vessel distribution of a population that satisfies a specific condition.
  • the control unit may display the generated difference histogram as is on the display unit. Further, the control unit may output information based on the difference shown by the difference histogram (that is, the difference between the characteristics of the vascular distribution to be analyzed and the characteristics of the vascular distribution in the population of the aggregate histogram). For example, the control unit may quantify and output the difference shown by the difference histogram. Further, the control unit may output, based on the difference shown by the difference histogram, the degree of approximation between the characteristics of the vascular distribution to be analyzed and the characteristics of the vascular distribution in the population of the aggregate histogram. In this case, the user can more easily determine whether the characteristics of the blood vessel distribution in the blood vessel image to be analyzed are similar to the characteristics of the blood vessel distribution of a population that satisfies a specific condition.
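One way such a "degree of approximation" could be computed is histogram intersection; the disclosure does not fix a similarity measure, so this is only one plausible choice among many (others include correlation or chi-squared distance).

```python
import numpy as np

def approximation_degree(hist_a, hist_b):
    """Histogram intersection between two blood vessel distribution
    histograms, as one possible 'degree of approximation'.

    Both histograms must use the same bins. Each is normalized to
    frequencies, so the result lies in [0, 1]; 1 means identical
    frequency distributions, 0 means no overlap at all.
    """
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.minimum(a, b).sum())
```

A value near 1 suggests that the analyzed eye's vessel distribution resembles that of the comparison population; a low value flags a dissimilar distribution for closer review.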
  • a mathematical model that outputs information regarding a specific disease may be trained and constructed using a machine learning algorithm, with vascular distribution characteristic information used as input training data and information indicating the presence or absence of the specific disease in the subjects from whom the input training data was acquired used as output training data.
  • the control unit may further execute a disease information acquisition step of inputting the vascular distribution feature information of the target blood vessel image generated in the feature information generation step into the mathematical model, thereby acquiring the information regarding a specific disease of the target subject (for example, the probability of having the specific disease) output by the mathematical model.
  • the vascular distribution characteristic information represents the characteristics of the vascular distribution of the target blood vessel image. Further, depending on the type of disease, the characteristics of the blood vessel distribution in the target blood vessel image often change according to the disease state of the subject (target subject) from whom the target blood vessel image was acquired. Therefore, by inputting the vascular distribution characteristic information of a target subject into a mathematical model trained in advance using a training data set that includes vascular distribution characteristic information of multiple subjects, information regarding a specific disease can be appropriately obtained.
  • the vascular distribution feature information (input training data) for training the mathematical model and the vascular distribution feature information input to the mathematical model may include at least one of a blood vessel distribution histogram and a target blood vessel feature map.
  • the output training data may include information indicating whether or not the subject from whom the input training data was acquired has diabetes.
  • the mathematical model may output information regarding diabetes of the target subject (for example, probability of having diabetes, etc.) by inputting the blood vessel distribution characteristic information of the target blood vessel image. In this case, information regarding the target subject's diabetes is acquired with high accuracy.
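The disclosure does not fix a particular machine-learning algorithm. As an illustrative stand-in, a minimal logistic-regression trainer over vascular-distribution feature vectors could look like the following; every name and hyperparameter here is an assumption of this sketch, not part of the disclosed method.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Tiny logistic-regression trainer standing in for the mathematical
    model (any classifier that outputs a disease probability would fit
    this role).

    X: (n_samples, n_features) vascular-distribution feature vectors.
    y: (n_samples,) labels, 1 = has diabetes, 0 = does not.
    Returns the learned (weights, bias).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                            # gradient of the log loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_proba(x, w, b):
    """Probability that the subject of feature vector x has the disease."""
    return float(1.0 / (1.0 + np.exp(-(np.asarray(x, dtype=float) @ w + b))))
```

In the disease information acquisition step, the target subject's feature vector would be passed to `predict_proba` (or the equivalent of whatever model is actually constructed) to obtain, for example, the probability of having diabetes.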
  • the vascular distribution feature information (input training data) for training the mathematical model and the vascular distribution feature information input to the mathematical model may use vascular distribution feature information about arteries and vascular distribution feature information about veins together.
  • FIG. 1 is a block diagram showing a schematic configuration of fundus image processing devices 1 and 21 and fundus image photographing devices 11A and 11B.
  • FIG. 2 is a diagram illustrating an example of a fundus image 30 and blood vessel images 40A and 40B showing blood vessels included in the fundus image 30.
  • FIG. 3 is a flowchart of fundus image processing performed by the fundus image processing device 1 of the first embodiment. FIG. 4 is a diagram showing an example of the arterial retinal blood vessel distribution probability map 50A and the venous retinal blood vessel distribution probability map 50B.
  • FIG. 5 is a diagram showing an example of an artery target blood vessel characteristic map 51A, a vein target blood vessel characteristic map 51B, an artery blood vessel distribution histogram 52A, and a vein blood vessel distribution histogram 52B.
  • Subsequent figures are: a diagram illustrating an example of an age-specific aggregate histogram of arteries; a diagram illustrating an example of an age-specific aggregate histogram of veins; a difference histogram of arteries between the DM group and the control group; and a difference histogram of veins between the DM group and the control group.
  • a fundus image processing device 1, a fundus image processing device 21, and fundus image photographing devices 11A and 11B are used.
  • the fundus image processing device 1 generates or constructs, based on a plurality of fundus images, a retinal blood vessel distribution probability map, an aggregate histogram, a mathematical model that outputs information regarding a specific disease (details will be described later), and the like.
  • a fundus image processing program for generating or constructing a retinal blood vessel distribution probability map, aggregate histogram, mathematical model, etc. is stored in, for example, the storage device 4 of the fundus image processing device 1.
  • the fundus image processing device 21 generates blood vessel distribution characteristic information, which indicates the characteristics of the blood vessel distribution of the blood vessel image to be analyzed (the target blood vessel image), based on the retinal blood vessel distribution probability map and aggregate histogram generated by the fundus image processing device 1. Further, the fundus image processing device 21 can also acquire information regarding a specific disease by inputting blood vessel distribution characteristic information into the mathematical model constructed by the fundus image processing device 1. A fundus image processing program for executing the generation of blood vessel distribution characteristic information and the like is stored, for example, in the storage device 24 of the fundus image processing device 21. Note that it is also possible for the fundus image processing device 21 to generate a retinal blood vessel distribution probability map, an aggregate histogram, and the like, and for the fundus image processing device 1 to generate blood vessel distribution characteristic information.
  • the fundus image photographing devices 11A and 11B photograph fundus images of the eye to be examined.
  • a personal computer (hereinafter referred to as "PC") is used for the fundus image processing apparatuses 1 and 21 of this embodiment.
  • devices that can function as the fundus image processing apparatuses 1 and 21 are not limited to PCs.
  • the fundus image photographing devices 11A, 11B, a server, or the like may function as the fundus image processing devices 1, 21.
  • when the fundus image photographing devices 11A and 11B function as the fundus image processing devices 1 and 21, they can, while photographing fundus images, generate at least one of a retinal blood vessel distribution probability map, an aggregate histogram, and blood vessel distribution characteristic information based on the photographed fundus images.
  • the control units of a plurality of devices (for example, the CPU of a PC and the CPU of a fundus image photographing device) may cooperate to execute the fundus image processing described later.
  • a CPU is used as an example of a controller that performs various processes.
  • a controller other than the CPU may be used for at least some of the various devices.
  • a GPU may be used as the controller to speed up the processing.
  • the fundus image processing device 1 will be explained.
  • the fundus image processing device 1 is placed, for example, at a manufacturer that provides the fundus image processing device 21 or a fundus image processing program to users, or at various research facilities (for example, a university hospital).
  • the fundus image processing device 1 includes a control unit 2 that performs various control processes, and a communication I/F 5.
  • the control unit 2 includes a CPU 3, which is a controller, and a storage device 4 that can store programs, data, and the like.
  • the storage device 4 stores a fundus image processing program for executing fundus image processing (see FIG. 3), which will be described later.
  • the communication I/F 5 connects the fundus image processing device 1 with other devices (for example, the fundus image photographing device 11A and the fundus image processing device 21, etc.).
  • the fundus image processing device 1 is connected to an operation section 7 and a display device 8.
  • the operation unit 7 is operated by the user to input various instructions to the fundus image processing device 1.
  • as the operation unit 7, for example, at least one of a keyboard, a mouse, and a touch panel can be used.
  • a microphone or the like for inputting various instructions may be used together with or in place of the operation section 7.
  • the display device 8 displays various images.
  • as the display device 8, various devices capable of displaying images (for example, at least one of a monitor, a display, and a projector) can be used.
  • the fundus image processing device 1 can acquire fundus image data (hereinafter sometimes simply referred to as “fundus image”) from the fundus image capturing device 11A.
  • the fundus image processing device 1 may acquire fundus image data from the fundus image capturing device 11A, for example, by wired communication, wireless communication, a removable storage medium (eg, USB memory), or the like.
  • the fundus image processing device 21 will be explained.
  • the fundus image processing device 21 is placed, for example, in a facility (for example, a hospital or a medical examination facility) that performs diagnosis or examination of subjects.
  • the fundus image processing device 21 includes a control unit 22 that performs various control processes, and a communication I/F 25.
  • the control unit 22 includes a CPU 23, which is a controller, and a storage device 24 that can store programs, data, and the like.
  • the storage device 24 stores a fundus image processing program for executing fundus image processing (see FIG. 11), which will be described later.
  • the communication I/F 25 connects the fundus image processing device 21 with other devices (for example, the fundus image photographing device 11B and the fundus image processing device 1, etc.).
  • the fundus image processing device 21 is connected to an operation section 27 and a display device 28. As with the operation section 7 and display device 8 described above, various devices can be used for the operation section 27 and the display device 28.
  • the fundus image processing device 21 can acquire a fundus image from the fundus image photographing device 11B. Further, the fundus image processing device 21 can acquire at least one of the retinal blood vessel distribution probability map and the aggregate histogram generated by the fundus image processing device 1.
  • the fundus image processing device 21 may acquire the fundus image, the retinal blood vessel distribution probability map, and the like through at least one of, for example, wired communication, wireless communication, and a removable storage medium (e.g., a USB memory).
  • the fundus image photographing device 11 (11A, 11B) will be explained.
  • as the fundus image capturing device 11, various devices that capture images of the fundus of the eye to be examined can be used.
  • the fundus image capturing device 11 used in this embodiment is a fundus camera capable of capturing a two-dimensional color front image of the fundus using visible light. Therefore, the blood vessel image acquisition process, which will be described later, is appropriately performed based on the color fundus image.
  • a device other than a fundus camera (for example, at least one of an OCT device and a scanning laser ophthalmoscope (SLO)) may be used.
  • the fundus image may be a two-dimensional front image of the fundus taken from the front side of the eye to be examined, or a three-dimensional image of the fundus.
  • the fundus image photographing device 11 includes a control unit 12 (12A, 12B) that performs various control processes, and a fundus image photographing section 16 (16A, 16B).
  • the control unit 12 includes a CPU 13 (13A, 13B), which is a controller, and a storage device 14 (14A, 14B) that can store programs, data, and the like.
  • the fundus image photographing unit 16 includes an optical member and the like for photographing a fundus image of the eye to be examined.
  • when the fundus image photographing device 11 executes at least a part of the fundus image processing described later (see FIGS. 3 and 11), it goes without saying that at least a part of the fundus image processing program for executing that processing is stored in the storage device 14.
  • the blood vessel image is an image showing blood vessels included in the fundus image.
  • the fundus image processing apparatuses 1 and 21 of this embodiment acquire a blood vessel image showing blood vessels in the input fundus image by inputting the fundus image to a mathematical model trained by a machine learning algorithm.
  • the fundus image processing devices 1 and 21 of the present embodiment also acquire the position of the optic disc (hereinafter also simply referred to as the "papilla") in the input fundus image (specifically, the position of the center of gravity of the papilla) by inputting the fundus image into a mathematical model trained by a machine learning algorithm.
  • the mathematical model is trained in advance so that, when a fundus image is input, it outputs a blood vessel image and the papilla position for the input fundus image. Note that the mathematical model that outputs the blood vessel image and the mathematical model that outputs the position of the papilla may be constructed separately.
  • the mathematical model is constructed to output blood vessel images and papilla positions by being trained with a training data set.
  • the training data set includes input side data (input training data) and output side data (output training data).
  • FIG. 2 shows an example of the fundus image 30 and the blood vessel image 40 (40A, 40B).
  • the input training data is the fundus image 30, which is a two-dimensional color front image captured by the fundus image photographing device (in this embodiment, the fundus camera) 11A.
  • the image region of the fundus image 30 used as input training data includes both the papilla 31 and the macula 32 of the eye to be examined.
  • the output training data are the blood vessel images 40A and 40B, which show at least one of the arteries and veins in the fundus image 30 used as input training data, and information indicating the position of the papilla (in this embodiment, the center-of-gravity position G of the papilla).
  • an artery blood vessel image 40A in the fundus image 30 and a vein blood vessel image 40B in the fundus image 30 are included in the output training data. Therefore, when the fundus image 30 is input, the constructed mathematical model can output the artery blood vessel image 40A and the venous blood vessel image 40B included in the input fundus image 30.
  • the output training data (that is, the blood vessel images 40A and 40B and the information indicating the position of the papilla) may be generated, for example, in response to an instruction input by an operator who has checked the fundus image 30 serving as the input training data.
  • the fundus image processing device 1 generates a retinal blood vessel distribution probability map, an aggregate histogram, and the like based on the plurality of fundus images 30. Furthermore, the fundus image processing device 1 constructs a mathematical model that outputs information regarding a specific disease by training the mathematical model using a machine learning algorithm.
  • the fundus image processing illustrated in FIG. 3 is executed by the CPU 3 of the fundus image processing device 1 according to the fundus image processing program stored in the storage device 4.
  • the CPU 3 acquires a plurality of fundus images 30 (see FIG. 2), which are two-dimensional color front images, captured by the fundus image capturing device (in this embodiment, the fundus camera) 11A (S1).
  • the CPU 3 acquires blood vessel images 40A and 40B indicating at least one of an artery and a vein included in each fundus image 30 acquired in S1 (S2).
  • the CPU 3 inputs the fundus image 30 to a mathematical model trained by a machine learning algorithm, thereby acquiring blood vessel images 40A and 40B output by the mathematical model. Therefore, blood vessel images 40A and 40B that show blood vessels in the fundus image 30 with high accuracy are easily obtained.
  • the artery blood vessel image 40A and the vein blood vessel image 40B included in each fundus image 30 are acquired separately.
  • processing may be performed to make the ranges on the fundus reflected in the plurality of fundus images 30 or the plurality of blood vessel images 40A and 40B uniform.
  • the width of the imaging range within the image, which varies depending on the imaging magnification, may be made uniform.
  • the range on the fundus reflected in the image may be made uniform.
  • the process of making the range on the fundus more uniform may be performed in response to an instruction input by the operator (that is, manually), or may be performed automatically by the CPU 3.
  • the CPU 3 identifies the position of the papilla (in this embodiment, the position of the center of gravity of the papilla) in each of the blood vessel images 40A and 40B acquired in S2 (S3).
  • the CPU 3 inputs the fundus image 30 to a mathematical model trained by a machine learning algorithm, thereby acquiring the position of the papilla in the fundus image 30 output by the mathematical model. Therefore, the position of the papilla in the fundus image 30 and the blood vessel images 40A and 40B can be easily identified with high accuracy.
  • the CPU 3 adds the plurality of blood vessel images 40A and 40B acquired in S2 in an aligned state, thereby generating retinal blood vessel distribution probability maps 50A and 50B that indicate the distribution of the existence probability of blood vessels in the retina of the subject's eye (S4).
  • in the retinal blood vessel distribution probability maps 50A and 50B, the two-dimensional distribution of the existence probability of retinal blood vessels within the set (population) of the plurality of examined eyes whose fundus images 30 were taken is appropriately shown. Therefore, according to the retinal blood vessel distribution probability maps 50A and 50B, information regarding a wide range of blood vessels in the fundus image is appropriately presented.
  • the general structure of blood vessels in the retina tends to be uniform regardless of the eye to be examined. Therefore, the correlation between the existence probability of blood vessels in each region on the retinal blood vessel distribution probability maps 50A and 50B and the thickness of blood vessels in each region in the population becomes high. Therefore, according to the retinal blood vessel distribution probability maps 50A and 50B, it is also easy to understand the state of blood vessel thickness for each region.
  • in S4, the CPU 3 generates the retinal blood vessel distribution probability maps 50A and 50B by adding and averaging the aligned plurality of blood vessel images 40A and 40B. Therefore, the retinal blood vessel distribution probability maps 50A and 50B can be handled in the same units as the values (for example, brightness values) of the pixels of the blood vessel images 40A and 40B.
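The add-and-average step described above can be sketched as follows, assuming the vessel images are already aligned arrays with equal values at vessel pixels; this is an illustrative sketch, not the disclosed implementation.

```python
import numpy as np

def retinal_vessel_probability_map(aligned_vessel_images):
    """Average a stack of aligned binary vessel images.

    Each image is a 2-D array with a fixed value (e.g., 1 or 255) at vessel
    pixels and 0 elsewhere. After alignment, the pixel-wise mean estimates,
    in the same units as the input pixels, how often a vessel exists at each
    location across the population.
    """
    stack = np.asarray(aligned_vessel_images, dtype=float)
    return stack.mean(axis=0)
```

Running this separately over the arterial images and the venous images yields the two maps 50A and 50B.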
  • in S4, the CPU 3 generates the arterial retinal blood vessel distribution probability map 50A by adding the plurality of arterial blood vessel images 40A, and generates the venous retinal blood vessel distribution probability map 50B by adding the plurality of venous blood vessel images 40B.
  • different changes may appear between the arteries and veins of the fundus. Therefore, by generating both the arterial retinal vascular distribution probability map 50A and the venous retinal vascular distribution probability map 50B, it becomes easier to obtain more useful information.
  • in S4, the plurality of blood vessel images 40A and 40B are added with their positions aligned based on the position of the papilla in each of the blood vessel images 40A and 40B specified in S3. The retinal blood vessels of the fundus are structured so as to spread outward from the papilla.
  • the general structure of the retinal blood vessels extending from the papilla tends to be uniform regardless of the eye to be examined.
  • in this embodiment, the plurality of blood vessel images 40A and 40B are aligned based on the position of the papilla, so the accuracy of the distribution of blood vessel existence probabilities indicated by the retinal blood vessel distribution probability maps 50A and 50B is further improved.
  • in this embodiment, the plurality of blood vessel images 40A and 40B are aligned based on the center-of-gravity position of the papilla in each of the blood vessel images 40A and 40B specified in S3. Since the plurality of blood vessel images 40A and 40B are aligned based on a single center-of-gravity point, the accuracy of the generated retinal blood vessel distribution probability maps 50A and 50B is further improved.
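A minimal sketch of this center-of-gravity-based alignment is shown below: an integer-pixel translation that moves each image's papilla centroid to a common reference point. Subpixel interpolation and any rotation or scale correction are outside the scope of this sketch.

```python
import numpy as np

def align_to_papilla(image, papilla_rc, target_rc):
    """Translate a vessel image so its papilla centroid lands on a shared
    reference point.

    papilla_rc: (row, col) of the papilla centroid in this image.
    target_rc: (row, col) of the common reference position.
    Pixels shifted in from outside the frame are filled with zeros.
    """
    image = np.asarray(image)
    dr = int(round(target_rc[0] - papilla_rc[0]))
    dc = int(round(target_rc[1] - papilla_rc[1]))
    shifted = np.zeros_like(image)
    # Copy the overlapping region so that output[r+dr, c+dc] = input[r, c].
    src = image[max(0, -dr):image.shape[0] - max(0, dr),
                max(0, -dc):image.shape[1] - max(0, dc)]
    shifted[max(0, dr):max(0, dr) + src.shape[0],
            max(0, dc):max(0, dc) + src.shape[1]] = src
    return shifted
```

Applying this to every vessel image with a shared `target_rc` produces the aligned stack that the probability-map step adds together.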
  • the CPU 3 may automatically align the plurality of blood vessel images 40A and 40B in S4. In this case, the amount of work required of the operator to generate the retinal blood vessel distribution probability maps 50A and 50B is appropriately reduced. Alternatively, the operator may input an instruction for aligning the plurality of blood vessel images 40A and 40B to the fundus image processing device 1 via the operation unit 7, and the CPU 3 may align the plurality of blood vessel images 40A and 40B in accordance with the input instruction. Even in this case, the retinal blood vessel distribution probability maps 50A and 50B are appropriately generated.
  • FIG. 4 shows an example of an artery retinal vascular distribution probability map 50A and a venous retinal vascular distribution probability map 50B displayed on the display device 8.
  • the CPU 3 expresses the brightness value of each pixel in the two-dimensional retinal blood vessel distribution probability maps 50A, 50B (in this embodiment, 256 brightness levels from 0 to 255) using colors and shading (in other words, displays them as a heat map).
  • the higher the luminance value of a pixel, the deeper the warm color; the lower the luminance value of a pixel, the deeper the cool color.
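One simple way to realize such a cool-to-warm rendering is a linear color ramp over the 0-255 luminance range. This is a sketch only (`heatmap_rgb` is a hypothetical helper; the patent does not specify the colormap):

```python
import numpy as np

def heatmap_rgb(luminance):
    """Map 0-255 luminance to a simple cool-to-warm ramp: low values
    render blue, high values render red (linear interpolation)."""
    t = np.asarray(luminance, dtype=float) / 255.0
    r = (255 * t).astype(np.uint8)
    g = np.zeros_like(r)
    b = (255 * (1 - t)).astype(np.uint8)
    return np.stack([r, g, b], axis=-1)
```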
  • the CPU 3 generates target blood vessel characteristic maps 51A and 51B (see FIG. 5), which are maps showing characteristics of blood vessel distribution in the blood vessel image to be analyzed (S5). Specifically, the CPU 3 acquires target blood vessel images, which are blood vessel images 40A and 40B to be analyzed. In this embodiment, a target blood vessel image of an artery and a target blood vessel image of a vein are acquired for the fundus of the eye to be analyzed.
  • the target blood vessel image may be acquired by a method similar to the method used in S2 described above, or may be acquired by a method different from the method used in S2.
  • the CPU 3 generates maps of regions corresponding to the blood vessel regions of the target blood vessel image as target blood vessel characteristic maps 51A and 51B.
  • the CPU 3 generates a map of a region corresponding to the blood vessel region of the target blood vessel image of the artery, out of the retinal blood vessel distribution probability map 50A of the artery, as the target blood vessel feature map 51A of the artery.
  • the CPU 3 generates a map of a region corresponding to the region of the blood vessel in the vein target blood vessel image from the vein retinal blood vessel distribution probability map 50B as the vein target blood vessel characteristic map 51B.
  • target blood vessel feature maps 51A and 51B show characteristics of blood vessel distribution in target blood vessel images.
  • the existence probability of blood vessels in the population in the region of blood vessels shown in the target blood vessel image can be appropriately grasped on the image.
  • the correlation between the existence probability of blood vessels in each region of the retinal blood vessel distribution probability maps 50A and 50B and the thickness of blood vessels in each region in the population is high. Therefore, the target blood vessel characteristic maps 51A and 51B make it easy to recognize, for example, cases where a thick blood vessel exists in a region where a thin blood vessel is likely to exist in the population, or where a thin blood vessel exists in a region where a thick blood vessel is likely to exist in the population. In this way, the target blood vessel characteristic maps 51A and 51B make it easy to grasp the characteristics of the blood vessels appearing in the target blood vessel images.
  • the CPU 3 generates target blood vessel feature maps 51A, 51B with the retinal blood vessel distribution probability maps 50A, 50B and target blood vessel images aligned.
  • the alignment may use any of the various methods described above (for example, a method using the position of the papilla as a reference), and may be performed automatically or manually by an operator.
  • the CPU 3 may generate the target blood vessel feature maps 51A, 51B by extracting a region corresponding to the blood vessel region of the target blood vessel image from the retinal blood vessel distribution probability maps 50A, 50B.
  • the CPU 3 may generate the target blood vessel characteristic maps 51A and 51B by masking regions other than the blood vessel region of the target blood vessel image in the retinal blood vessel distribution probability maps 50A and 50B.
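The masking variant of S5 can be sketched in a few lines, assuming a binary vessel mask already aligned with the probability map (`target_feature_map` is an illustrative name):

```python
import numpy as np

def target_feature_map(prob_map, vessel_mask):
    """S5 sketch: keep probability-map values only where the target blood
    vessel image has vessels; all other pixels are masked to zero."""
    return np.where(vessel_mask.astype(bool), prob_map, 0.0)
```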
  • the CPU 3 generates blood vessel distribution histograms 52A and 52B (see FIG. 5), which are histograms indicating characteristics of the blood vessel distribution in the blood vessel image to be analyzed (S6). Specifically, as in S5, the CPU 3 acquires target blood vessel images, which are the blood vessel images 40A and 40B to be analyzed. In this embodiment, a target blood vessel image of an artery and a target blood vessel image of a vein are acquired for the fundus to be analyzed. Based on the information of the regions in the retinal blood vessel distribution probability maps 50A and 50B that correspond to the blood vessel regions of the target blood vessel images, the CPU 3 generates, as the blood vessel distribution histograms 52A and 52B, histograms of the number of pixels in those blood vessel regions.
  • the range from the lowest value to the highest value of the existence probability of blood vessels in the retinal blood vessel distribution probability maps 50A and 50B is divided into 256 levels according to the brightness value.
  • each pixel existing within the blood vessel region of the target blood vessel image belongs to one of 256 areas (hereinafter referred to as "existence probability areas") divided in order of blood vessel existence probability.
  • the CPU 3 creates a histogram of the number of pixels belonging to each existence probability area.
  • the CPU 3 generates the arterial vascular distribution histogram 52A based on the information of the blood vessel region of the target artery blood vessel image in the arterial retinal vascular distribution probability map 50A. Further, the CPU 3 generates a venous vascular distribution histogram 52B based on information on the region of the blood vessel in the target venous blood vessel image in the venous retinal vascular distribution probability map 50B.
  • in the blood vessel distribution histograms 52A and 52B, the horizontal axis is the existence probability of blood vessels in the retinal blood vessel distribution probability maps 50A and 50B (that is, the plurality of existence probability areas), and the vertical axis is the number of pixels, among the plurality of pixels existing within the blood vessel region of the target blood vessel image, that belong to each existence probability area.
  • the characteristics of the blood vessel distribution in the target blood vessel image are prominently displayed.
  • the blood vessel distribution histograms 52A and 52B also provide useful information that could not be obtained using past analysis methods.
  • the CPU 3 generates blood vessel distribution histograms 52A, 52B with the retinal blood vessel distribution probability maps 50A, 50B and the target blood vessel image aligned.
  • the alignment may use any of the various methods described above (for example, a method using the position of the papilla as a reference), and may be performed automatically or manually by an operator.
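Under the same assumptions (an aligned binary vessel mask, and the probability range divided into 256 equal-width existence probability areas as described for S6), the histogram computation might look like the following sketch:

```python
import numpy as np

def vessel_distribution_histogram(prob_map, vessel_mask, levels=256):
    """S6 sketch: quantise the probability-map values at pixels inside the
    target image's vessel region into `levels` equal-width existence
    probability areas and count the pixels in each area."""
    values = prob_map[vessel_mask.astype(bool)]
    lo, hi = prob_map.min(), prob_map.max()
    bins = np.floor((values - lo) / (hi - lo + 1e-12) * levels).astype(int)
    bins = np.clip(bins, 0, levels - 1)
    return np.bincount(bins, minlength=levels)
```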
  • the CPU 3 generates an aggregate histogram by aggregating, among the plurality of blood vessel distribution histograms 52A, 52B generated for the plurality of target blood vessel images in S6, the histograms to be aggregated (S7).
  • the CPU 3 generates an arterial aggregate histogram by aggregating the blood vessel distribution histograms 52A of a plurality of arteries.
  • the CPU 3 generates a venous aggregate histogram by aggregating the blood vessel distribution histograms 52B of a plurality of veins.
  • the CPU 3 generates the aggregate histogram by averaging the aggregate results of the plurality of blood vessel distribution histograms 52A and 52B.
  • the age-based aggregate histogram (see FIGS. 6 and 7), which will be described later, is an example of an aggregate histogram.
  • the CPU 3 can generate various aggregate histograms by appropriately setting the population to be aggregated. For example, as described below, populations may be set by age group, or populations with specific conditions regarding diseases etc. may be set. According to the aggregation histogram, the characteristics of the blood vessel distribution of the population targeted for aggregation can be appropriately grasped.
  • the CPU 3 may display the generated aggregate histogram on the display device 8 to allow the user to understand the characteristics of the blood vessel distribution of the population targeted for the aggregate. Further, the CPU 3 generates a difference histogram that is the difference between the blood vessel distribution histograms 52A and 52B generated for each target blood vessel image and the total histogram. This processing will be described in detail in the fundus image processing of the second embodiment (see FIG. 11).
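The S7/S9 aggregation (bin-wise averaging over a population) and the difference histogram mentioned here can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def aggregate_histogram(histograms):
    """S7/S9 sketch: average the per-image vessel-distribution histograms
    of one population bin by bin."""
    return np.mean(np.stack([np.asarray(h, float) for h in histograms]), axis=0)

def difference_histogram(hist, reference):
    """S10/S16 sketch: bin-wise difference between a histogram (or an
    aggregate) and a reference aggregate histogram."""
    return np.asarray(hist, float) - np.asarray(reference, float)
```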
  • the CPU 3 generates age-specific aggregate histograms (see FIGS. 6 and 7) (S8). Specifically, the CPU 3 aggregates the plurality of blood vessel distribution histograms 52A and 52B generated for the plurality of target blood vessel images in S6 for each age group to which the age of each subject belongs (in this embodiment, 20s, 30s, and so on), and generates an age-specific aggregate histogram by averaging the aggregated results for each age group. In this embodiment, the CPU 3 generates an arterial age-specific aggregate histogram (see FIG. 6) by aggregating the arterial blood vessel distribution histograms 52A for each age group. Further, the CPU 3 generates a venous age-specific aggregate histogram (see FIG. 7) by aggregating the venous blood vessel distribution histograms 52B for each age group.
  • in the age-specific aggregate histograms, the horizontal axis is the existence probability of blood vessels in the retinal vascular distribution probability maps 50A and 50B (that is, the plurality of existence probability areas), and the vertical axis is the average number of pixels, among the plurality of pixels existing in the blood vessel region of the target blood vessel image, that belong to each existence probability area.
  • the age-specific aggregation histograms shown in FIGS. 6 and 7 were generated without imposing conditions on the subjects to be aggregated.
  • when aggregating a plurality of blood vessel distribution histograms 52A and 52B generated for a plurality of subjects of different ages (for example, when executing the processing of S7 described above or S9 described later), the CPU 3 aggregates the plurality of blood vessel distribution histograms 52A and 52B after correcting for the influence of changes in the distribution of blood vessel existence probabilities due to the age of the subject to be analyzed.
  • the CPU 3 corrects for the effects of age-related changes by performing, for each existence probability area divided in order of blood vessel existence probability, a regression analysis using a regression equation in which age is the explanatory variable and existence probability is the objective variable.
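One way to realize such a per-area regression correction is to fit a linear age trend for each existence-probability area and subtract it, adjusting every histogram to a common reference age. This is a hedged sketch: the patent does not specify the regression form or a reference age, and `age_corrected` works on per-area pixel counts for illustration.

```python
import numpy as np

def age_corrected(histograms, ages, reference_age=50.0):
    """For each existence-probability area, fit count = a*age + b by least
    squares across subjects, then subtract the age trend so that every
    histogram is adjusted to a common reference age."""
    H = np.stack([np.asarray(h, float) for h in histograms])  # (subjects, areas)
    ages = np.asarray(ages, dtype=float)
    corrected = H.copy()
    for j in range(H.shape[1]):
        a, b = np.polyfit(ages, H[:, j], 1)  # slope, intercept
        corrected[:, j] = H[:, j] - a * (ages - reference_age)
    return corrected
```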
  • the CPU 3 generates a plurality of aggregate histograms for each population (S9). Specifically, for each of the plurality of populations to which the target blood vessel images belong, the CPU 3 determines the target blood vessel images belonging to each population among the plurality of blood vessel distribution histograms 52A and 52B generated for the plurality of target blood vessel images in S6. The plurality of generated blood vessel distribution histograms 52A, 52B are aggregated, and the aggregated results are averaged for each population. As a result, aggregate histograms are generated for each population. By generating aggregation histograms for each population, the characteristics of the fundus blood vessel distribution for each population can be appropriately understood. In addition, in S9, aggregation processing of the arterial vascular distribution histogram 52A and aggregation processing of the venous vascular distribution histogram 52B are executed.
  • the CPU 3 generates a difference histogram (see FIGS. 8 and 9) indicating the difference between the plurality of aggregate histograms generated for each population in S9 (S10).
  • in S10, an arterial difference histogram and a venous difference histogram are generated.
  • the characteristics of blood vessel distribution for each population can be appropriately understood. For example, by determining the population according to various conditions such as the presence or absence of a disease in the subject or the state of the disease, useful information that could not be obtained with past analysis methods can be obtained.
  • FIGS. 8 and 9 are examples of difference histograms for two different populations.
  • the difference histograms shown in FIGS. 8 and 9 were generated by setting the DM group and the control group as the two populations, generating an aggregate histogram for each population (S9), and then executing the difference histogram generation process (S10) on the two aggregate histograms.
  • FIG. 8 is a difference histogram for arteries
  • FIG. 9 is a difference histogram for veins.
  • FIG. 10 shows a map 53A in which the arterial difference histogram data for the DM group shown in FIG. 8 is superimposed on the fundus image 30, and a map 53B in which the venous difference histogram data for the DM group shown in FIG. 9 is superimposed on the fundus image 30.
  • in the arteries of the DM group, the brightness (number of pixels) increases compared to the control group, but in the areas where the existence probability of blood vessels is low (existence probability areas 0 to 46), the brightness (number of pixels) decreases compared to the control group.
  • in the veins of the DM group, the luminance (number of pixels) decreases compared to the control group in all regions, and it can be seen that the amount of decrease is particularly large in areas where the existence probability of retinal blood vessels, including the fovea and its surroundings, is sparse.
  • it is possible to predict the risk of diabetic retinopathy before its onset, depending on whether or not the difference histogram between the blood vessel distribution histogram of the eye to be analyzed and the aggregate histogram to be compared approximates the DM-group difference histograms shown in FIGS. 8 and 9.
  • the CPU 3 constructs a mathematical model that outputs information regarding a specific disease by training the mathematical model using a machine learning algorithm (S11).
  • in S11, a mathematical model is constructed by training it using a plurality of training data sets. Each training data set includes input-side data (input training data) and output-side data (output training data).
  • the blood vessel distribution histograms 52A and 52B are used as input training data.
  • the blood vessel distribution histograms 52A and 52B are examples of blood vessel distribution characteristic information indicating the characteristics of the blood vessel distribution of a specific blood vessel image.
  • the vascular distribution characteristic information is generated by processing information of a region corresponding to a blood vessel region of a specific blood vessel image among the retinal blood vessel distribution probability maps 50A and 50B.
  • as the output training data, information indicating the presence or absence of a specific disease (in this embodiment, the presence or absence of diabetes) in the subject from whom the input training data was acquired is used.
  • Information indicating the presence or absence of a specific disease may be generated, for example, by an operator operating the operating unit 7.
  • the mathematical model is trained so that, when the blood vessel distribution histograms 52A and 52B of a target subject are input, it outputs information regarding a specific disease of the target subject (in this embodiment, the probability that the target subject has diabetes).
  • a program for realizing the mathematical model constructed by the fundus image processing device 1 is incorporated into the fundus image processing device 21.
  • the blood vessel distribution histograms 52A and 52B show the characteristics of the blood vessel distribution of the target blood vessel image. Further, the characteristics of the blood vessel distribution in the target blood vessel image change depending on whether the subject has diabetes. Therefore, by inputting the target subject's blood vessel distribution histograms 52A and 52B into a mathematical model trained with training data sets including the blood vessel distribution histograms 52A and 52B, the probability that the target subject has diabetes can be appropriately obtained.
  • as the input training data for training the mathematical model, and as the blood vessel distribution histograms 52A and 52B input to the constructed mathematical model, the arterial blood vessel distribution histogram 52A and the venous blood vessel distribution histogram 52B are used together. Therefore, it becomes easier to obtain information regarding the disease with higher accuracy.
  • in this embodiment, the input training data for training the mathematical model includes, in addition to the blood vessel distribution histograms 52A and 52B, information on the age and sex of the subject from whom the blood vessel distribution histograms 52A and 52B were obtained.
  • Blood vessel distribution characteristic information such as the blood vessel distribution histograms 52A, 52B often changes depending on at least one of the age and gender of the subject. Therefore, by including at least one of age and gender information in the input training data, it becomes easier for the mathematical model to output appropriate information according to the age and gender of the target subject.
  • information on the age and sex of the subject from whom the vascular distribution histograms 52A, 52B were obtained is also input into the constructed mathematical model.
  • a neural network using five fully connected layers is trained on the training data sets using 5-fold cross-validation.
  • the content of the algorithm can be selected as appropriate.
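The 5-fold cross-validation used to train such a network can be sketched as an index-splitting routine (the network itself is omitted; `five_fold_indices` is an illustrative name, not from the patent):

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and split them into 5 folds; each fold
    serves once as the validation set while the remaining four folds
    train the model."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 5)
    for k in range(5):
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, folds[k]
```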
  • the inventor of the present invention conducted a trial to evaluate the usefulness of the constructed mathematical model.
  • the inventor obtained the probability that each subject had diabetes by inputting the blood vessel distribution histogram, age, and sex of each of multiple subjects with diabetes (diabetic group) into the constructed mathematical model.
  • similarly, the inventor obtained the probability of having diabetes by inputting the blood vessel distribution histogram, age, and sex of each of multiple subjects without diabetes (normal group) into the constructed mathematical model.
  • the inventors compared the difference between the probabilities obtained for the diabetic group and the probabilities obtained for the normal group using a t-test, which is one of the hypothesis testing methods.
  • the average value of the probabilities obtained for the diabetic group was 0.57 ± 0.080, and the average value of the probabilities obtained for the normal group was 0.53 ± 0.075; a statistically significant difference was observed.
  • the fundus image processing device 21 of the second embodiment acquires the retinal blood vessel distribution probability maps 50A, 50B and the aggregate histogram generated by the fundus image processing device 1 of the first embodiment.
  • the fundus image processing device 21 of the second embodiment generates blood vessel distribution characteristic information indicating the characteristics of the blood vessel distribution of the blood vessel image to be analyzed (target blood vessel image), based on the retinal blood vessel distribution probability maps 50A, 50B and the aggregate histogram generated by the fundus image processing device 1.
  • the blood vessel distribution characteristic information includes, for example, the target blood vessel characteristic maps 51A and 51B (see FIG. 5).
  • a program for realizing the mathematical model constructed by the fundus image processing device 1 of the first embodiment is incorporated into the fundus image processing device 21 of the second embodiment.
  • the probability that the subject to be analyzed has diabetes is obtained by inputting the blood vessel distribution histograms 52A and 52B generated based on the fundus image to be analyzed into a mathematical model.
  • the fundus image processing illustrated in FIG. 11 is executed by the CPU 23 of the fundus image processing device 21 according to the fundus image processing program stored in the storage device 24.
  • the CPU 23 acquires the fundus image 30 (see FIG. 2), which is a two-dimensional color frontal image captured by the fundus image capturing device (in this embodiment, the fundus camera) 11B, as the fundus image to be analyzed (S11).
  • the CPU 23 acquires target blood vessel images (in this embodiment, a target blood vessel image of an artery and a target blood vessel image of a vein) indicating at least one of the arteries and veins included in the fundus image acquired in S11 (S12).
  • the CPU 23 identifies the position of the papilla (in this embodiment, the position of the center of gravity of the papilla) in the target blood vessel image (S13). Note that the same processes as the processes S1 to S3 in the first embodiment can be adopted for the processes S11 to S13.
  • the CPU 23 generates target blood vessel characteristic maps 51A and 51B (see FIG. 5) that indicate the characteristics of blood vessel distribution in the target blood vessel image, and displays them on the display device 28 (S14).
  • as described above, the target blood vessel characteristic maps 51A and 51B allow the characteristics of the blood vessel distribution in the target blood vessel image to be grasped at a glance. Therefore, diagnosis of the eye to be examined and the like is appropriately assisted. Note that a process similar to that of S5 in the first embodiment can be adopted for generating the target blood vessel characteristic maps 51A and 51B in S14.
  • the CPU 23 generates blood vessel distribution histograms 52A and 52B (see FIG. 5), which are histograms indicating characteristics of blood vessel distribution in the target blood vessel image, and displays them on the display device 28 (S15).
  • the blood vessel distribution histograms 52A and 52B clearly show the characteristics of the blood vessel distribution in the target blood vessel image. Therefore, diagnosis of the eye to be examined and the like is appropriately assisted. Note that the same process as that of S6 in the first embodiment can be adopted for generating the blood vessel distribution histograms 52A and 52B in S15.
  • the CPU 23 may display the blood vessel distribution histograms 52A and 52B generated for the target blood vessel images and an aggregate histogram generated by the fundus image processing apparatus 1 of the first embodiment (for example, the aggregate histogram of a population of eyes meeting specific conditions, or the aggregate histogram of the population of all subjects/eyes) on the display device 28 in a comparable state (for example, side by side). In this case, the user can easily compare the characteristics of the blood vessel distribution of the aggregate-histogram population with the characteristics of the blood vessel distribution of the fundus to be analyzed.
  • the CPU 23 generates a difference histogram, which is the difference between the blood vessel distribution histograms 52A and 52B generated in S15 and an aggregate histogram generated by the fundus image processing device 1 of the first embodiment (for example, the aggregate histogram of a population of subjects/eyes meeting specific conditions, or the aggregate histogram of the population of all subjects/eyes), and displays it on the display device 28 (S16).
  • in the difference histogram generated in S16, the characteristics of the blood vessel distribution of each analysis target are shown in comparison with the characteristics of the blood vessel distribution of the aggregate-histogram population. Therefore, the characteristics of the blood vessel distribution of each analysis target can be more easily understood.
  • the CPU 23 obtains information regarding a specific disease (in this embodiment, the probability that the target subject has diabetes) by inputting the blood vessel distribution histograms 52A and 52B of the subject to be analyzed (target subject), generated in S15, into the mathematical model constructed in S11 (see FIG. 3) (S17).
  • the mathematical model is trained by a machine learning algorithm such that, when the blood vessel distribution histograms 52A, 52B are input, the probability that the subject from whom the blood vessel distribution histograms 52A, 52B were obtained has diabetes is output.
  • the mathematical model is trained in advance using a plurality of training data sets in which the blood vessel distribution histograms 52A and 52B are used as input training data, and information indicating the presence or absence of diabetes in the subject from whom the input training data was acquired is used as output training data.
  • the blood vessel distribution histograms 52A and 52B show the characteristics of the blood vessel distribution of the target blood vessel image. Furthermore, the characteristics of the blood vessel distribution in the target blood vessel image often change depending on whether the subject from whom the target blood vessel image was acquired has diabetes.
  • therefore, by inputting the target subject's blood vessel distribution histograms 52A and 52B into the mathematical model, the probability that the target subject has diabetes is appropriately obtained.
  • both the blood vessel distribution histogram 52A for arteries and the blood vessel distribution histogram 52B for veins are input into the mathematical model.
  • information on the age and sex of the subject from whom the blood vessel distribution histograms 52A, 52B were acquired is also input into the mathematical model. As a result, it becomes easier to obtain the probability of having diabetes with higher accuracy.
  • the techniques disclosed in the above embodiments are merely examples. Therefore, it is also possible to modify the techniques exemplified in the above embodiments. It is also possible to execute only some of the multiple processes exemplified in the above embodiments.
  • only one or two of the target blood vessel characteristic maps 51A, 51B, the blood vessel distribution histograms 52A, 52B, and the difference histogram may be output as the blood vessel distribution characteristic information.
  • only the generation process of the blood vessel distribution histograms 52A and 52B (S11 to S13, S15) and the acquisition process of specific disease information using a mathematical model (S17) may be executed.
  • the process of acquiring the fundus image in S1 of FIG. 3 is an example of the "fundus image acquisition step" of the first aspect.
  • the process of acquiring a blood vessel image in S2 of FIG. 3 is an example of the "blood vessel image acquisition step” of the first aspect.
  • the process of generating the retinal blood vessel distribution probability map in S4 of FIG. 3 is an example of the "probability map generation step” of the first aspect.
  • the process of specifying the position of the papilla in S3 of FIG. 3 is an example of the "papilla position specifying step" of the first aspect.
  • the process of acquiring target blood vessel images in S5 and S6 of FIG. 3 is an example of the "target blood vessel image acquisition step" of the first aspect.
  • the process of generating the target blood vessel feature map in S5 of FIG. 3 is an example of the "feature map generation step" of the first aspect.
  • the process of generating a blood vessel distribution histogram in S6 of FIG. 3 is an example of the "blood vessel distribution histogram generation step” of the first aspect.
  • the process of generating the aggregate histogram in S7 to S9 in FIG. 3 is an example of the "aggregate histogram generation step” of the first aspect.
  • the process of generating the age-specific aggregate histogram in S8 of FIG. 3 is an example of the "age-specific aggregate histogram generation step” of the first aspect.
  • the process of generating a difference histogram in S10 of FIG. 3 is an example of the "difference histogram generation step” of the first aspect.
  • the process of acquiring blood vessel distribution feature information in S5 and S6 of FIG. 3 is an example of a "feature information generation step.”
  • the process of constructing a mathematical model in S11 of FIG. 3 is an example of a "mathematical model construction step.”
  • the process of acquiring the fundus image to be analyzed in S11 of FIG. 11 is an example of the "fundus image acquisition step" of the second aspect.
  • the process of acquiring the target blood vessel image in S12 of FIG. 11 is an example of the "blood vessel image acquisition step” of the second aspect.
  • the process of generating blood vessel distribution feature information in S14 to S16 in FIG. 11 is an example of the "feature information generation step” of the second aspect.
  • the process of acquiring disease information (probability of diabetes in the above embodiment) in S17 of FIG. 11 is an example of a "disease information acquisition step.”

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Primary Health Care (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A control unit of this ocular fundus image processing device executes an ocular fundus image acquisition step, a blood vessel image acquisition step, and a probability map generation step. In the ocular fundus image acquisition step, the control unit acquires a plurality of ocular fundus images each including a blood vessel in the ocular fundus in an eye to be examined, which are captured by an ocular fundus image capture device. In the blood vessel image acquisition step, the control unit acquires a blood vessel image which shows at least one of an artery and a vein contained in each of the acquired ocular fundus images. In the probability map generation step, the control unit adds a plurality of blood vessel images respectively acquired for the plurality of ocular fundus images in an aligned state to generate a retinal blood vessel distribution probability map showing the distribution of existence probability of blood vessels present in the retina of the eye.

Description

Fundus image processing device and fundus image processing program
The present disclosure relates to a fundus image processing device that processes a fundus image of a subject's eye, and a fundus image processing program that is executed in the fundus image processing device.
By observing the fundus, it is possible to grasp the state of blood vessels in a living body non-invasively. Conventionally, information regarding blood vessels (at least one of arteries and veins) obtained from fundus images has been used for various diagnoses. For example, in the method for measuring the arteriovenous diameter ratio disclosed in Patent Document 1, a plurality of regions Rn, each surrounded by two concentric circles with different radii centered on the optic disc (hereinafter simply referred to as the "papilla"), are set. A plurality of blood vessels are extracted within each set region Rn. From among the extracted blood vessels, two blood vessels with a short distance between them are selected as a blood vessel pair, and the arteriovenous diameter ratio is calculated from the selected pair. A method of calculating the arteriovenous diameter ratio based on blood vessels after the second branch has also been proposed.
Patent Document 1: JP 2014-193225 A
 In the conventional fundus blood vessel analysis methods described above, in order to facilitate comparison of the vascular state among multiple eyes, only one-dimensional information on the blood vessels in a specific region (for example, a region bounded by two concentric circles centered on the disc) was handled, and information on blood vessels in other regions was not referenced. Accordingly, if information on blood vessels over a wide area of a two-dimensional fundus image could be presented appropriately, it would be medically very useful.
 A typical object of the present disclosure is to provide a fundus image processing device and a fundus image processing program capable of appropriately presenting information on blood vessels over a wide area of a two-dimensional fundus image.
 A first aspect of the fundus image processing device provided by a typical embodiment of the present disclosure is a fundus image processing device that processes fundus images of an eye to be examined, wherein a control unit of the fundus image processing device executes: a fundus image acquisition step of acquiring a plurality of fundus images, captured by a fundus image capturing device, each including blood vessels of the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring, for each acquired fundus image, a blood vessel image showing at least one of the arteries and veins included in that fundus image; and a probability map generation step of generating a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding the plurality of blood vessel images acquired for the plurality of fundus images in a mutually aligned state.
 A second aspect of the fundus image processing device provided by a typical embodiment of the present disclosure is a fundus image processing device that processes fundus images of an eye to be examined, wherein a control unit of the fundus image processing device executes: a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device and including blood vessels of the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring a target blood vessel image, which is a blood vessel image showing at least one of the arteries and veins included in the acquired fundus image; and a feature information generation step of generating blood vessel distribution feature information indicating characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region of a retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, the retinal blood vessel distribution probability map being generated by adding a plurality of blood vessel images in an aligned state and showing the distribution of the existence probability of blood vessels in the retina of the eye to be examined.
 A first aspect of the fundus image processing program provided by a typical embodiment of the present disclosure is a fundus image processing program executed by a fundus image processing device that processes fundus images of an eye to be examined. When the fundus image processing program is executed by a control unit of the fundus image processing device, it causes the device to execute: a fundus image acquisition step of acquiring a plurality of fundus images, captured by a fundus image capturing device, each including blood vessels of the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring, for each acquired fundus image, a blood vessel image showing at least one of the arteries and veins included in that fundus image; and a probability map generation step of generating a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding the plurality of blood vessel images acquired for the plurality of fundus images in a mutually aligned state.
 A second aspect of the fundus image processing program provided by a typical embodiment of the present disclosure is a fundus image processing program executed by a fundus image processing device that processes fundus images of an eye to be examined. When the fundus image processing program is executed by a control unit of the fundus image processing device, it causes the device to execute: a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device and including blood vessels of the fundus of the eye to be examined; a blood vessel image acquisition step of acquiring a target blood vessel image, which is a blood vessel image showing at least one of the arteries and veins included in the acquired fundus image; and a feature information generation step of generating blood vessel distribution feature information indicating characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region of a retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, the retinal blood vessel distribution probability map being generated by adding a plurality of blood vessel images in an aligned state and showing the distribution of the existence probability of blood vessels in the retina of the eye to be examined.
 According to the fundus image processing device and fundus image processing program of the present disclosure, information on blood vessels over a wide area of a two-dimensional fundus image is appropriately presented.
 The control unit of the fundus image processing device of the first aspect exemplified in the present disclosure executes a fundus image acquisition step, a blood vessel image acquisition step, and a probability map generation step. In the fundus image acquisition step, the control unit acquires a plurality of fundus images, captured by a fundus image capturing device, each including blood vessels of the fundus of the eye to be examined. In the blood vessel image acquisition step, the control unit acquires, for each acquired fundus image, a blood vessel image showing at least one of the arteries and veins included in that fundus image. In the probability map generation step, the control unit generates a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding the plurality of blood vessel images acquired for the plurality of fundus images in a mutually aligned state.
 The blood vessel distribution probability map generated by the technique of the present disclosure appropriately shows the two-dimensional distribution of the existence probability of retinal blood vessels within the set (population) of examined eyes whose fundus images were captured. Therefore, by comparing the fundus image of an eye under examination (for example, a blood vessel image obtained from the fundus image, or the fundus image itself) with the retinal blood vessel distribution probability map, the state of the blood vessels of that eye relative to the blood vessels of the population can be grasped appropriately over a wide area (for example, region by region). By generating a blood vessel distribution probability map for each of a plurality of different populations, the characteristics of the vascular state of each population can also be grasped appropriately over a wide area. In other words, the retinal blood vessel distribution probability map provides, for each region, information indicating the state of the retinal blood vessels. Thus, according to the technique of the present disclosure, information on blood vessels over a wide area of the fundus is presented appropriately.
 Note that, when multiple eyes of the same animal are considered, the rough structure of the retinal blood vessels tends to be uniform regardless of the individual eye. Consequently, the correlation between the existence probability of blood vessels in each region of the blood vessel distribution probability map and the thickness of the blood vessels in the corresponding region of the population is high. The blood vessel distribution probability map therefore also makes it easy to grasp the state of vessel thickness region by region.
 Various fundus images from which a blood vessel image can be obtained can be used to generate the retinal blood vessel distribution probability map. As one example, the present disclosure uses, as the fundus image, a two-dimensional color fundus image captured from the front by a fundus camera; in this case, a blood vessel image is appropriately obtained from the color fundus image. The retinal blood vessel distribution probability map may also be generated from a two-dimensional OCT angiography image of the fundus captured by an OCT (Optical Coherence Tomography) device. A two-dimensional frontal fundus image captured by a scanning laser ophthalmoscope (SLO) may also be input into the mathematical model.
 In the probability map generation step, the control unit may generate the retinal blood vessel distribution probability map by adding and then averaging the plurality of aligned blood vessel images. In this case, the retinal blood vessel distribution probability map can be handled in the same units as the pixel values (for example, luminance values) of the individual blood vessel images. However, the averaging after addition may also be omitted.
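The add-and-average construction described above can be sketched as follows. This is a minimal illustration that assumes the blood vessel images are already mutually aligned binary masks (1 = vessel pixel), not the device's actual implementation:

```python
import numpy as np

def vessel_probability_map(vessel_images):
    """Average a stack of aligned binary vessel images.

    Each pixel of the result is the fraction of images in which a
    vessel was present at that location, i.e. an existence probability.
    """
    stack = np.stack(vessel_images).astype(float)
    return stack.mean(axis=0)

# Three tiny 2x2 "vessel images" from different eyes of a population:
imgs = [np.array([[1, 0], [1, 0]]),
        np.array([[1, 0], [0, 0]]),
        np.array([[1, 1], [0, 0]])]
prob = vessel_probability_map(imgs)
# prob[0, 0] == 1.0 (a vessel in all three images), prob[1, 0] == 1/3
```

Because of the averaging, the map stays in the same value range as the input masks; omitting the `mean` and using `sum` would give raw counts instead.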
 In the blood vessel image acquisition step, the control unit may acquire the blood vessel image showing at least one of the arteries and veins included in a fundus image by inputting the fundus image into a mathematical model trained by a machine learning algorithm. In this case, a blood vessel image that shows the blood vessels with high accuracy is easily obtained. The mathematical model may be trained using previously captured fundus images of examined eyes as input training data, and blood vessel images showing at least one of the arteries and veins in those fundus images as output training data. A mathematical model trained in this way can appropriately output a blood vessel image from an input fundus image.
 However, the method of acquiring the blood vessel images may be changed. For example, at least one of the plurality of blood vessel images may be generated in accordance with instructions input by an operator via an operation unit (for example, a mouse).
 When displaying the retinal blood vessel distribution probability map on a display unit, the control unit may express the luminance value of each pixel of the two-dimensional map by color and shading (that is, render it as a heat map). As one example, in the present disclosure, the higher the luminance value of a pixel, the deeper the warm color used, and the lower the luminance value, the deeper the cool color. Displaying the retinal blood vessel distribution probability map as a heat map makes the two-dimensional distribution of vessel existence probability even easier to grasp. However, the specific display method may be changed; for example, the control unit may display a monochrome retinal blood vessel distribution probability map whose shading corresponds to the luminance value of each pixel.
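The warm/cool rendering can be illustrated with a simple linear blue-to-red mapping; the actual color scale used by the device is not specified here, so this is only an assumed example:

```python
import numpy as np

def to_heatmap(prob_map):
    """Map probabilities in [0, 1] to RGB: low -> blue (cool), high -> red (warm)."""
    p = np.clip(prob_map, 0.0, 1.0)
    rgb = np.zeros(p.shape + (3,))
    rgb[..., 0] = p          # red channel grows with probability
    rgb[..., 2] = 1.0 - p    # blue channel shrinks with probability
    return rgb

hm = to_heatmap(np.array([[0.0, 1.0]]))
# hm[0, 0] is pure blue (low probability), hm[0, 1] is pure red (high probability)
```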
 In the blood vessel image acquisition step, the control unit may acquire both an artery blood vessel image and a vein blood vessel image from each fundus image. In the probability map generation step, the control unit may generate an arterial retinal blood vessel distribution probability map by adding the plurality of artery images, and a venous retinal blood vessel distribution probability map by adding the plurality of vein images. Depending on the subject's condition, such as disease, the arteries and veins of the fundus may change differently. Accordingly, generating both the arterial map (hereinafter also called the "artery distribution probability map") and the venous map (hereinafter also called the "vein distribution probability map") makes it easier to obtain more useful information.
 However, the control unit may also acquire a plurality of blood vessel images in which both the arteries and veins of the fundus appear, and add them to generate a combined retinal blood vessel distribution probability map covering arteries and veins together. A retinal blood vessel distribution probability map in which arteries and veins are not classified at all may also be generated. Furthermore, the fundus image processing device may generate only one of the artery distribution probability map and the vein distribution probability map.
 The control unit may further execute a disc position specifying step of specifying the position of the optic disc in each acquired blood vessel image. In the probability map generation step, the control unit may then generate the retinal blood vessel distribution probability map by adding the plurality of blood vessel images aligned with one another using the position of the optic disc as a reference. The retinal blood vessels of the fundus spread outward from the disc, and this rough structure tends to be uniform regardless of the individual eye. Therefore, aligning the blood vessel images with respect to the disc position when generating the retinal blood vessel distribution probability map further improves the accuracy of the existence probability distribution shown by the map.
 More specifically, in the disc position specifying step, the control unit may specify the position of the centroid of the optic disc appearing in the blood vessel image. In the probability map generation step, the control unit may add the plurality of blood vessel images aligned with one another using the centroid position of the optic disc as a reference. In this case, because the blood vessel images are aligned with respect to a single centroid point, the accuracy of the generated retinal blood vessel distribution probability map improves further.
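Centroid-based alignment can be sketched as below. An integer wrap-around shift via `np.roll` is used only for brevity (a real implementation would pad the borders instead of wrapping), and a binary disc mask is assumed to be available:

```python
import numpy as np

def disc_centroid(disc_mask):
    """Centroid (row, col) of a binary optic-disc mask."""
    rows, cols = np.nonzero(disc_mask)
    return rows.mean(), cols.mean()

def align_to(image, disc_mask, target_rc):
    """Shift `image` so its disc centroid lands on `target_rc`.

    Integer wrap-around shift for simplicity; borders would be padded
    in practice rather than wrapped.
    """
    r, c = disc_centroid(disc_mask)
    dr = int(round(target_rc[0] - r))
    dc = int(round(target_rc[1] - c))
    return np.roll(np.roll(image, dr, axis=0), dc, axis=1)
```

Aligning every blood vessel image to the same target centroid before the addition step puts all discs at one common point, which is the single-point alignment described above.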
 In the disc position specifying step, the control unit may specify the position of the disc appearing in the blood vessel image by inputting the fundus image into a mathematical model trained by a machine learning algorithm. In this case, the disc position is easily specified with high accuracy. When a machine learning algorithm is also used to acquire the blood vessel image, the blood vessel image and the disc position may both be output by the same mathematical model, or may be output separately by different mathematical models.
 However, the method of specifying the disc position may be changed. For example, an operator may identify the position of the disc in the fundus image or the blood vessel image and input the identified position into the fundus image processing device; the control unit may then treat the position input via the operation unit or the like as the disc position. The control unit may also specify the disc position by applying known image processing to the fundus image or the blood vessel image.
 The control unit may align the plurality of blood vessel images automatically, in which case the amount of work the operator must perform to generate the retinal blood vessel distribution probability map is appropriately reduced. Alternatively, the operator may input instructions for aligning the blood vessel images into the fundus image processing device via the operation unit, and the device may align the images in accordance with those instructions. In either case, the retinal blood vessel distribution probability map is generated appropriately.
 The control unit may perform processing to make the area of the fundus shown in the fundus images and blood vessel images more uniform. For example, the control unit may compensate for the imaging magnification used when each fundus image was captured, so that the extent of the imaged area, which varies with magnification, becomes more uniform across images. The control unit may also extract the image within a specific area from each fundus image or blood vessel image so that the imaged area of the fundus becomes more uniform. However, when a plurality of fundus images covering the same area are captured by the fundus image capturing device at the same magnification, this processing may be omitted.
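The extraction variant (reducing each image to a common area) can be sketched as a central crop; the crop size and placement here are illustrative assumptions, not details fixed by the disclosure:

```python
import numpy as np

def crop_center(image, out_rows, out_cols):
    """Extract the central out_rows x out_cols region, so that images
    covering different extents of the fundus share a common area."""
    r0 = (image.shape[0] - out_rows) // 2
    c0 = (image.shape[1] - out_cols) // 2
    return image[r0:r0 + out_rows, c0:c0 + out_cols]

img = np.arange(16).reshape(4, 4)
patch = crop_center(img, 2, 2)   # central 2x2 block of the 4x4 image
```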
 The control unit may further execute a target blood vessel image acquisition step and a feature map generation step. In the target blood vessel image acquisition step, the control unit acquires a target blood vessel image, which is the blood vessel image to be analyzed. In the feature map generation step, the control unit generates, as a target blood vessel feature map, the map of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image. The target blood vessel feature map indicates the characteristics of the blood vessel distribution in the target blood vessel image. For example, the target blood vessel feature map makes it possible to see, on the image, the population's vessel existence probability within the region where the target blood vessel image shows vessels. As noted above, the correlation between the existence probability of blood vessels in each region of the blood vessel distribution probability map and the thickness of the blood vessels in the corresponding region of the population is high. Accordingly, the target blood vessel feature map makes it easy to recognize, for example, cases where a thick vessel exists in a region where the population is likely to have a thin vessel, or a thin vessel exists in a region where the population is likely to have a thick vessel. In this way, the target blood vessel feature map makes the characteristics of the blood vessels shown in the target blood vessel image easy to grasp.
 The target blood vessel image may be acquired from the fundus image to be analyzed (hereinafter, the "target fundus image"). For example, the target blood vessel image may be acquired by inputting the target fundus image into a mathematical model trained by a machine learning algorithm. In the feature map generation step, the target blood vessel feature map may be generated with the retinal blood vessel distribution probability map and the target blood vessel image in an aligned state. Any of the alignment methods described above (for example, alignment based on the disc position) may be used, and the alignment may be performed automatically or manually by an operator.
 The specific method of generating the target blood vessel feature map may also be chosen as appropriate. For example, the control unit may generate the target blood vessel feature map by extracting, from the retinal blood vessel distribution probability map, the region corresponding to the blood vessel region of the target blood vessel image. The control unit may also generate the target blood vessel feature map by masking the regions of the retinal blood vessel distribution probability map other than the blood vessel region of the target blood vessel image.
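The masking variant can be sketched as follows, assuming the probability map and the target image's vessel mask are already aligned arrays of the same shape:

```python
import numpy as np

def target_vessel_feature_map(prob_map, vessel_mask):
    """Keep the probability map only where the target image has a vessel;
    every other pixel is masked to zero."""
    return np.where(vessel_mask.astype(bool), prob_map, 0.0)

prob = np.array([[0.9, 0.2], [0.5, 0.1]])   # population existence probabilities
mask = np.array([[1, 0], [1, 0]])           # vessel region of the target image
fmap = target_vessel_feature_map(prob, mask)
# fmap == [[0.9, 0.0], [0.5, 0.0]]: probabilities survive only on vessel pixels
```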
 The control unit may further execute a target blood vessel image acquisition step and a blood vessel distribution histogram generation step. In the target blood vessel image acquisition step, the control unit acquires a target blood vessel image, which is the blood vessel image to be analyzed. In the blood vessel distribution histogram generation step, the control unit generates a blood vessel distribution histogram based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image. The blood vessel distribution histogram shows the number of pixels within the blood vessel region of the target blood vessel image as a function of the vessel existence probability indicated by the retinal blood vessel distribution probability map, and the characteristics of the blood vessel distribution of the target blood vessel image appear in it prominently. For example, suppose the range from the lowest to the highest vessel existence probability in the retinal blood vessel distribution probability map is divided into N segments (256 in the present disclosure, as one example). In this case, each pixel within the blood vessel region of the target blood vessel image belongs to one of N areas ordered by vessel existence probability (hereinafter, "existence probability areas"). The blood vessel distribution histogram thus appropriately captures, over the pixels within the blood vessel region of the target blood vessel image, the distribution of the number of pixels belonging to each of the N existence probability areas. The blood vessel distribution histogram therefore provides useful information that could not be obtained by previous analysis methods.
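The binning of vessel pixels into N existence probability areas can be sketched as follows. N = 4 is used here instead of 256 purely to keep the example small, and the probabilities are assumed normalized to [0, 1]:

```python
import numpy as np

def vessel_distribution_histogram(prob_map, vessel_mask, n_bins=256):
    """Histogram of the probability-map values found at the target
    image's vessel pixels, split into n_bins existence-probability areas."""
    values = prob_map[vessel_mask.astype(bool)]
    hist, _ = np.histogram(values, bins=n_bins, range=(0.0, 1.0))
    return hist

prob = np.array([[0.9, 0.2], [0.5, 0.1]])
mask = np.array([[1, 0], [1, 1]])
hist = vessel_distribution_histogram(prob, mask, n_bins=4)
# Vessel pixels carry probabilities 0.9, 0.5 and 0.1, so the bins
# [0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1.0] receive 1, 0, 1, 1 pixels.
```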
 The target blood vessel image may be acquired by a method similar to that described for generating the target blood vessel feature map. The information of the region of the retinal blood vessel distribution probability map corresponding to the blood vessel region of the target blood vessel image may be obtained with the probability map and the target blood vessel image in an aligned state, using any of the alignment methods described above.
 The control unit may further execute an aggregate histogram generation step of generating an aggregate histogram that aggregates the blood vessel distribution histograms generated for a plurality of target blood vessel images. In this case, the characteristics of the blood vessel distribution of the aggregated population are appropriately grasped from the aggregate histogram.
 In the aggregate histogram generation step, the control unit may aggregate the plurality of blood vessel distribution histograms after correcting for the influence of age-related changes in the distribution of vessel existence probability for each subject under analysis. In this case, the characteristics of the blood vessel distribution can be grasped from an aggregate histogram in which the influence of age-related change is suppressed.
 The specific method of correcting for the influence of age-related changes in the distribution of vessel existence probability may be chosen as appropriate. For example, the control unit may correct for age-related change by performing, for each existence probability area ordered by vessel existence probability, a regression analysis using a regression equation with age as the explanatory variable and the existence probability as the objective variable.
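One possible reading of this per-area regression correction is sketched below: for each existence probability area, a linear fit of the subjects' histogram values against age is computed, and each subject's value is shifted to a common reference age. The reference age and the use of each subject's per-area histogram value as the dependent variable are assumptions made for illustration, not details fixed by the disclosure:

```python
import numpy as np

def age_corrected_histograms(histograms, ages, reference_age=50.0):
    """Remove the linear age trend from each existence-probability area.

    histograms: (subjects, areas) array of per-subject histogram values.
    For each area, fit value = a*age + b across subjects, then shift
    every subject's value to what the fit predicts at reference_age.
    """
    H = np.asarray(histograms, dtype=float)
    ages = np.asarray(ages, dtype=float)
    corrected = np.empty_like(H)
    for j in range(H.shape[1]):
        a, _b = np.polyfit(ages, H[:, j], 1)   # slope, intercept for area j
        corrected[:, j] = H[:, j] - a * (ages - reference_age)
    return corrected
```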
 The control unit may further execute a difference histogram generation step of generating a difference histogram between the blood vessel distribution histogram generated for an individual target blood vessel image and the aggregate histogram generated in the aggregate histogram generation step. In this case, the characteristics of the blood vessel distribution of the individual analysis target are presented in comparison with the characteristics of the blood vessel distribution of the aggregate histogram's population. Therefore, the characteristics of the blood vessel distribution of each analysis target can be grasped even more easily.
 The control unit may generate age-group aggregate histograms by aggregating the plurality of blood vessel distribution histograms, generated for each of the plurality of target blood vessel images, for each age group to which the age of the subject to be analyzed belongs. With the age-group aggregate histograms, the characteristics of the blood vessel distribution of each age group can be appropriately grasped.
 The control unit may further execute a population-specific aggregate histogram generation step and a difference histogram generation step. In the population-specific aggregate histogram generation step, the control unit generates an aggregate histogram for each of a plurality of populations to which target blood vessel images belong, by aggregating the blood vessel distribution histograms generated for the plurality of target blood vessel images belonging to each population. In the difference histogram generation step, the control unit generates a difference histogram between the aggregate histograms generated for the respective populations. In this case, the characteristics of the blood vessel distribution of each population can be appropriately grasped from the difference histogram. For example, by defining the populations according to various conditions, such as the presence or absence of a disease in the subjects or the state of a disease, useful information that could not be obtained with previous analysis methods can be obtained.
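As a minimal sketch of these two steps (the normalization of each aggregate histogram before subtraction is an assumption added so that populations of different sizes remain comparable; the disclosure does not prescribe it):

```python
import numpy as np

def population_aggregate(histograms_by_group):
    """Aggregate the per-image histograms within each population
    (mean count per probability bin)."""
    return {g: np.mean(np.stack(hs), axis=0)
            for g, hs in histograms_by_group.items()}

def difference_histogram(agg_a, agg_b):
    """Bin-wise difference between two aggregate histograms, each first
    normalized to a relative frequency distribution."""
    a = np.asarray(agg_a, dtype=float)
    b = np.asarray(agg_b, dtype=float)
    return a / a.sum() - b / b.sum()
```

A positive bin in the result then indicates probability areas where the first population's vessels fall more often than the second population's.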
 The plurality of populations for which aggregate histograms are generated may include a population of diabetic patients who have not developed diabetic retinopathy. In this case, the characteristics of diabetic patients without diabetic retinopathy are clearly reflected in the difference histogram. Therefore, by comparing information on the blood vessel distribution histogram of an individual examined eye with the difference histogram, it may be possible to predict, for example, the likelihood that the subject has diabetes or the likelihood that diabetic retinopathy will develop.
 However, the technology exemplified in the present disclosure is also likely to yield useful information regarding diseases other than diabetes-related diseases. For example, useful information may be obtained regarding at least one of arteriosclerotic disease; retinal vascular occlusive diseases (branch retinal vein occlusion (BRVO), branch retinal artery occlusion (BRAO), central retinal vein occlusion (CRVO), and central retinal artery occlusion (CRAO)); retinal degenerative diseases such as retinitis pigmentosa; glaucoma; uveitis; refractive errors (myopia, hyperopia); and follow-up of retinopathy of prematurity. Furthermore, the retina allows the state of blood vessels in a living body to be observed non-invasively. Therefore, regardless of whether the examined eye itself has a disease, the retinal blood vessels of the examined eye often reflect the condition of the body outside the eye (for example, the condition of various organs, blood vessels, blood components, blood pressure, and the like). Accordingly, the technology exemplified in the present disclosure is also likely to yield useful information regarding the condition of the body outside the examined eye.
 Note that when taking the difference between the aggregate histogram of a population that satisfies a specific condition and the aggregate histogram of a comparison population, the comparison population can be set as appropriate. For example, the comparison population may include all subjects and examined eyes. Alternatively, the comparison population may be the population obtained by excluding the subjects and examined eyes of the population satisfying the specific condition from all subjects and examined eyes.
 The control unit may further execute a feature information generation step and a mathematical model construction step. In the feature information generation step, the control unit generates blood vessel distribution feature information indicating the characteristics of the blood vessel distribution of a specific blood vessel image (for example, at least one of the aforementioned blood vessel distribution histogram and the target blood vessel feature map) by processing the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the specific blood vessel image. In the mathematical model construction step, the control unit constructs a mathematical model that outputs information regarding a specific disease by training it with a machine learning algorithm, using a plurality of training data sets in which the blood vessel distribution feature information serves as the input training data and information indicating whether the subject from whom the input training data was acquired has the specific disease serves as the output training data.
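To make the data flow concrete, the following toy sketch uses a nearest-centroid classifier as a minimal stand-in for the machine-learning model (the disclosure contemplates machine-learning models generally and does not fix a particular algorithm; the data layout, the example values, and the classifier choice are all illustrative assumptions):

```python
import numpy as np

# Hypothetical layout: one row per subject, columns are the bins of the
# blood vessel distribution histogram (input training data); labels indicate
# presence (1) or absence (0) of the specific disease (output training data).
X_train = np.array([[30, 25, 20, 15], [28, 26, 22, 14],
                    [12, 18, 30, 30], [10, 20, 28, 32]], dtype=float)
y_train = np.array([0, 0, 1, 1])

# "Training": compute one centroid histogram per class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(histogram):
    """Return the class whose centroid is closest to the input histogram."""
    return min(centroids, key=lambda c: np.linalg.norm(histogram - centroids[c]))

# Inference for a new target subject's blood vessel distribution histogram.
label = predict(np.array([11, 19, 29, 31], dtype=float))
```

In an actual implementation, any trained classifier that maps the blood vessel distribution feature information to disease information (for example, a disease probability) could take the place of `predict`.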
 The blood vessel distribution feature information reflects the characteristics of the blood vessel distribution of the target blood vessel image. Furthermore, depending on the type of disease, the characteristics of the blood vessel distribution of the target blood vessel image often change according to the disease state of the subject to be analyzed (target subject) from whom the target blood vessel image was acquired. Therefore, by inputting the target subject's blood vessel distribution feature information into a mathematical model trained in advance with training data sets containing the blood vessel distribution feature information of a plurality of subjects, information regarding the specific disease is appropriately obtained.
 Note that a blood vessel distribution histogram may be used as the blood vessel distribution feature information for training the mathematical model (the input training data) and as the blood vessel distribution feature information input to the mathematical model. As described above, the blood vessel distribution histogram prominently shows the characteristics of the blood vessel distribution of the target blood vessel image. Therefore, using the blood vessel distribution histogram makes it easier to obtain information regarding the specific disease with higher accuracy.
 However, as described above, the characteristics of the blood vessel distribution of the target blood vessel image also appear appropriately in the target blood vessel feature map. Therefore, instead of or together with the blood vessel distribution histogram, the target blood vessel feature map can be used as the input training data and as the input data to the mathematical model.
 The mathematical model may be trained using the blood vessel distribution feature information as the input training data and information indicating whether the subject from whom the input training data was acquired has diabetes as the output training data. When the blood vessel distribution feature information of a target blood vessel image is input, the mathematical model may output information regarding the target subject's diabetes (for example, the probability of having diabetes). Research conducted by the inventors of the present application revealed that the blood vessel distribution feature information changes depending on whether the target subject has diabetes, regardless of whether diabetic retinopathy has developed. Therefore, by inputting the target subject's blood vessel distribution feature information into the mathematical model, information regarding the target subject's diabetes is obtained with high accuracy.
 However, information indicating the presence or absence of a disease other than a diabetes-related disease may be used as the output training data (for example, at least one of arteriosclerotic disease; retinal vascular occlusive diseases (branch retinal vein occlusion (BRVO), branch retinal artery occlusion (BRAO), central retinal vein occlusion (CRVO), and central retinal artery occlusion (CRAO)); retinal degenerative diseases such as retinitis pigmentosa; glaucoma; uveitis; refractive errors (myopia, hyperopia); and retinopathy of prematurity). Even in this case, information regarding the target subject's disease may be appropriately obtained.
 Furthermore, the blood vessel distribution feature information for training the mathematical model (the input training data) and the blood vessel distribution feature information input to the mathematical model may include both blood vessel distribution feature information for arteries and blood vessel distribution feature information for veins. In this case, information regarding the disease is more easily obtained with higher accuracy. However, it is also possible to use blood vessel distribution feature information in which arteries and veins are combined, or to use the blood vessel distribution feature information of only one of the arteries and veins.
 The input training data for training the mathematical model may include, in addition to the blood vessel distribution feature information, information on at least one of the age and sex of the subject from whom the blood vessel distribution feature information was acquired. The blood vessel distribution feature information often changes according to at least one of the subject's age and sex. Therefore, including age and/or sex information in the input training data makes it easier for the mathematical model to output appropriate information according to the target subject's age and sex. Note that when age and/or sex information is included in the input training data, it is desirable that, in addition to the blood vessel distribution feature information, information on at least one of the age and sex of the subject from whom the blood vessel distribution feature information was acquired also be input to the constructed mathematical model.
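One simple way to combine these inputs, sketched under the assumption of a flat feature vector (the 0/1 encoding of sex and the ordering of the components are illustrative choices, not specified by the disclosure), is:

```python
import numpy as np

def build_feature_vector(artery_hist, vein_hist, age, sex):
    """Concatenate the arterial and venous blood vessel distribution
    histograms with age and sex (sex encoded as 0/1) into one input
    vector for the mathematical model."""
    return np.concatenate([artery_hist, vein_hist, [age, sex]]).astype(float)
```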
 The control unit of the fundus image processing device of the second aspect exemplified in the present disclosure executes a fundus image acquisition step, a blood vessel image acquisition step, and a feature information generation step. In the fundus image acquisition step, the control unit acquires a fundus image to be analyzed, captured by a fundus image capturing device, that includes blood vessels of the fundus of the examined eye. In the blood vessel image acquisition step, the control unit acquires a target blood vessel image, which is a blood vessel image showing at least one of the arteries and veins included in the acquired fundus image. In the feature information generation step, the control unit generates blood vessel distribution feature information indicating the characteristics of the blood vessel distribution of the target blood vessel image by processing the information of the region, of a retinal blood vessel distribution probability map generated by adding a plurality of blood vessel images in an aligned state and indicating the distribution of the existence probability of blood vessels in the retina of the examined eye, that corresponds to the blood vessel region of the target blood vessel image. The retinal blood vessel distribution probability map appropriately shows the two-dimensional distribution of the existence probability of retinal blood vessels within the set (population) of examined eyes whose fundus images were captured. Furthermore, the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image appropriately reflects the characteristics of the blood vessel distribution of the target blood vessel image. Therefore, by generating the blood vessel distribution feature information, the characteristics of the blood vessel distribution of the target blood vessel image can be easily grasped.
 Note that in the second aspect, the fundus image processing device may acquire information on a retinal blood vessel distribution probability map generated in advance by, for example, a medical device manufacturer or a research facility, and execute the feature information generation step based on the acquired information. Alternatively, the fundus image processing device itself may generate the retinal blood vessel distribution probability map. A method similar to the method described in the first aspect can be used to generate the retinal blood vessel distribution probability map.
 In the feature information generation step, the control unit may generate, as a target blood vessel feature map, the map of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image. As described above, the target blood vessel feature map allows the characteristics of the blood vessel distribution of the target blood vessel image to be grasped at a glance. Therefore, diagnosis of the examined eye and the like is appropriately assisted.
 Note that the method of outputting the generated target blood vessel feature map can be selected as appropriate. For example, the control unit may output the target blood vessel feature map by displaying it on a display unit.
 In the feature information generation step, the control unit may generate, based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, a blood vessel distribution histogram that shows the number of pixels in the blood vessel region of the target blood vessel image according to the blood vessel existence probability indicated by the retinal blood vessel distribution probability map. As described above, the blood vessel distribution histogram prominently shows the characteristics of the blood vessel distribution of the target blood vessel image. Therefore, diagnosis of the examined eye and the like is appropriately assisted.
 The method of outputting the generated blood vessel distribution histogram can also be selected as appropriate. For example, the control unit may output the blood vessel distribution histogram by displaying it on a display unit. In this case, the control unit may display the blood vessel distribution histogram generated for the target blood vessel image and the aggregate histogram described in the first aspect on the display unit in a comparable state (for example, side by side). The user can then easily compare the characteristics of the blood vessel distribution of the aggregate histogram's population with the characteristics of the blood vessel distribution of the fundus to be analyzed. Note that in the second aspect, the fundus image processing device may acquire an aggregate histogram generated in advance by, for example, a medical device manufacturer or a research facility. As described above, the aggregate histogram is generated by aggregating the blood vessel distribution histograms generated for each of a plurality of blood vessel images. Each blood vessel distribution histogram is generated based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the respective blood vessel image, and shows the number of pixels in the blood vessel region of that image according to the blood vessel existence probability indicated by the retinal blood vessel distribution probability map.
 In the feature information generation step, the control unit may generate a blood vessel distribution histogram based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image. The control unit may then generate a difference histogram, which is the difference between the aggregate histogram obtained by aggregating the blood vessel distribution histograms generated for each of a plurality of blood vessel images and the blood vessel distribution histogram generated for the target blood vessel image. In this case, the characteristics of the blood vessel distribution of the individual analysis target are presented in comparison with the characteristics of the blood vessel distribution of the aggregate histogram's population, so they can be grasped even more easily. Note that, as described above, the fundus image processing device may acquire an aggregate histogram generated in advance by a medical device manufacturer, a research facility, or the like.
 The population of the aggregate histogram to be compared with the blood vessel distribution histogram of the analysis target (by comparative display, or by taking the difference) can be selected as appropriate. For example, the comparison population may include all subjects and examined eyes; in this case, based on the output information, the user can compare the characteristics of the blood vessel distribution of the blood vessel image to be analyzed with the average characteristics of the blood vessel distribution. Alternatively, the comparison population may be a population of subjects and examined eyes that satisfy a specific condition (for example, a population of diabetic patients who have not developed diabetic retinopathy); in this case, based on the output information, the user can appropriately judge whether the characteristics of the blood vessel distribution of the blood vessel image to be analyzed resemble the characteristics of the blood vessel distribution of the population satisfying the specific condition.
 The control unit may display the generated difference histogram as is on the display unit. The control unit may also output information based on the difference indicated by the difference histogram (that is, the difference between the characteristics of the blood vessel distribution of the analysis target and the characteristics of the blood vessel distribution of the aggregate histogram's population). For example, the control unit may quantify and output the difference indicated by the difference histogram, or may output, based on that difference, the degree of similarity between the characteristics of the blood vessel distribution of the analysis target and those of the aggregate histogram's population. In this case, the user can even more easily judge whether the characteristics of the blood vessel distribution of the blood vessel image to be analyzed resemble the characteristics of the blood vessel distribution of a population satisfying a specific condition.
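One way to quantify such a degree of similarity, sketched here with histogram intersection (the disclosure does not prescribe a particular metric, so this choice is an illustrative assumption), is:

```python
import numpy as np

def histogram_similarity(hist_a, hist_b):
    """Quantify how closely two distribution histograms match using
    histogram intersection after normalization: 1.0 means identical
    shape, 0.0 means completely disjoint distributions."""
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    return float(np.minimum(a / a.sum(), b / b.sum()).sum())
```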
 A mathematical model that outputs information regarding a specific disease may have been constructed by training with a machine learning algorithm, using a plurality of training data sets in which blood vessel distribution feature information serves as the input training data and information indicating whether the subject from whom the input training data was acquired has the specific disease serves as the output training data. The control unit may further execute a disease information acquisition step of inputting the blood vessel distribution feature information of the target blood vessel image generated in the feature information generation step into the mathematical model, thereby acquiring the information regarding the target subject's specific disease output by the mathematical model (for example, the probability of having the specific disease).
 As described above, the blood vessel distribution feature information reflects the characteristics of the blood vessel distribution of the target blood vessel image. Furthermore, depending on the type of disease, the characteristics of the blood vessel distribution of the target blood vessel image often change according to the disease state of the subject (target subject) from whom the target blood vessel image was acquired. Therefore, by inputting the target subject's blood vessel distribution feature information into a mathematical model trained in advance with training data sets containing the blood vessel distribution feature information of a plurality of subjects, information regarding the specific disease is appropriately obtained.
 As described above, at least one of the blood vessel distribution histogram and the target blood vessel feature map can be used as the blood vessel distribution feature information for training the mathematical model (the input training data) and as the blood vessel distribution feature information input to the mathematical model. Information indicating whether the subject from whom the input training data was acquired has diabetes may be used as the output training data, and the mathematical model may output information regarding the target subject's diabetes (for example, the probability of having diabetes) when the blood vessel distribution feature information of the target blood vessel image is input; in this case, information regarding the target subject's diabetes is obtained with high accuracy. However, as described above, information indicating the presence or absence of a disease other than a diabetes-related disease can also be used as the output training data. Furthermore, the blood vessel distribution feature information for training the mathematical model (the input training data) and the blood vessel distribution feature information input to the mathematical model may include both blood vessel distribution feature information for arteries and blood vessel distribution feature information for veins.
FIG. 1 is a block diagram showing the schematic configuration of the fundus image processing devices 1 and 21 and the fundus image capturing devices 11A and 11B.
FIG. 2 is a diagram showing an example of a fundus image 30 and blood vessel images 40A and 40B showing the blood vessels included in the fundus image 30.
FIG. 3 is a flowchart of the fundus image processing executed by the fundus image processing device 1 of the first embodiment.
FIG. 4 is a diagram showing an example of a retinal blood vessel distribution probability map 50A for arteries and a retinal blood vessel distribution probability map 50B for veins.
FIG. 5 is a diagram showing an example of a target blood vessel feature map 51A for arteries, a target blood vessel feature map 51B for veins, a blood vessel distribution histogram 52A for arteries, and a blood vessel distribution histogram 52B for veins.
FIG. 6 is a diagram showing an example of age-group aggregate histograms for arteries.
FIG. 7 is a diagram showing an example of age-group aggregate histograms for veins.
FIG. 8 is a difference histogram for arteries between the DM group and the control group.
FIG. 9 is a difference histogram for veins between the DM group and the control group.
FIG. 10 shows a map 53A in which the data of the difference histogram for the arteries of the DM group shown in FIG. 8 is superimposed on the fundus image 30, and a map 53B in which the data of the difference histogram for the veins of the DM group shown in FIG. 9 is superimposed on the fundus image 30.
FIG. 11 is a flowchart of the fundus image processing executed by the fundus image processing device 21 of the second embodiment.
(Device configuration)
One typical embodiment of the present disclosure will be described below with reference to the drawings. As shown in FIG. 1, this embodiment uses a fundus image processing device 1, a fundus image processing device 21, and fundus image photographing devices 11A and 11B. Based on a plurality of fundus images, the fundus image processing device 1 generates or constructs a retinal blood vessel distribution probability map, aggregate histograms, a mathematical model (described later in detail) that outputs information on a specific disease, and the like. A fundus image processing program for generating or constructing the retinal blood vessel distribution probability map, the aggregate histograms, the mathematical model, and the like is stored in, for example, the storage device 4 of the fundus image processing device 1. Based on the retinal blood vessel distribution probability map, the aggregate histograms, and the like generated by the fundus image processing device 1, the fundus image processing device 21 generates blood vessel distribution feature information indicating the characteristics of the blood vessel distribution of a blood vessel image to be analyzed (a target blood vessel image). Furthermore, by inputting blood vessel distribution information into the mathematical model constructed by the fundus image processing device 1, the fundus image processing device 21 can also acquire information on a specific disease. A fundus image processing program for generating the blood vessel distribution feature information and the like is stored in, for example, the storage device 24 of the fundus image processing device 21. Note that the fundus image processing device 21 may also generate the retinal blood vessel distribution probability map, the aggregate histograms, and the like, and the fundus image processing device 1 may also generate the blood vessel distribution feature information. The fundus image photographing devices 11A and 11B photograph fundus images of an eye to be examined.
As an example, personal computers (hereinafter "PCs") are used as the fundus image processing devices 1 and 21 of this embodiment. However, devices that can function as the fundus image processing devices 1 and 21 are not limited to PCs. For example, the fundus image photographing devices 11A and 11B, a server, or the like may function as the fundus image processing devices 1 and 21. When the fundus image photographing devices 11A and 11B function as the fundus image processing devices 1 and 21, they can photograph fundus images and, based on the photographed fundus images, generate at least one of the retinal blood vessel distribution probability map, the aggregate histograms, the blood vessel distribution feature information, and the like. In addition, the controllers of a plurality of devices (for example, the CPU of a PC and the CPU of a fundus image photographing device) may cooperate to execute the fundus image processing described later.
This embodiment also illustrates a case in which a CPU is used as an example of the controller that performs the various kinds of processing. However, it goes without saying that a controller other than a CPU may be used in at least some of the devices. For example, a GPU may be employed as the controller to speed up processing.
The fundus image processing device 1 will now be described. The fundus image processing device 1 is installed, for example, at a manufacturer that provides the fundus image processing device 21 or the fundus image processing program to users, or at a research facility (for example, a university hospital). The fundus image processing device 1 includes a control unit 2 that performs various control processes and a communication I/F 5. The control unit 2 includes a CPU 3, which is the controller in charge of control, and a storage device 4 capable of storing programs, data, and the like. The storage device 4 stores a fundus image processing program for executing the fundus image processing described later (see FIG. 3). The communication I/F 5 connects the fundus image processing device 1 to other devices (for example, the fundus image photographing device 11A and the fundus image processing device 21).
The fundus image processing device 1 is connected to an operation unit 7 and a display device 8. The operation unit 7 is operated by the user to input various instructions to the fundus image processing device 1; for example, at least one of a keyboard, a mouse, and a touch panel can be used as the operation unit 7. A microphone or the like for inputting instructions may be used together with or instead of the operation unit 7. The display device 8 displays various images; any device capable of displaying images (for example, at least one of a monitor, a display, and a projector) can be used as the display device 8.
The fundus image processing device 1 can acquire fundus image data (hereinafter sometimes simply called a "fundus image") from the fundus image photographing device 11A, for example, via at least one of wired communication, wireless communication, and a removable storage medium (for example, a USB memory).
The fundus image processing device 21 will now be described. The fundus image processing device 21 is installed, for example, at a facility that diagnoses or examines subjects (for example, a hospital or a health screening facility). The fundus image processing device 21 includes a control unit 22 that performs various control processes and a communication I/F 25. The control unit 22 includes a CPU 23, which is the controller in charge of control, and a storage device 24 capable of storing programs, data, and the like. The storage device 24 stores a fundus image processing program for executing the fundus image processing described later (see FIG. 11). The communication I/F 25 connects the fundus image processing device 21 to other devices (for example, the fundus image photographing device 11B and the fundus image processing device 1).
The fundus image processing device 21 is connected to an operation unit 27 and a display device 28. As with the operation unit 7 and the display device 8 described above, various devices can be used as the operation unit 27 and the display device 28.
The fundus image processing device 21 can acquire fundus images from the fundus image photographing device 11B, and can also acquire at least one of the retinal blood vessel distribution probability map, the aggregate histograms, and the like generated by the fundus image processing device 1. The fundus image processing device 21 may acquire the fundus images, the retinal blood vessel distribution probability map, and the like, for example, via at least one of wired communication, wireless communication, and a removable storage medium (for example, a USB memory).
The fundus image photographing devices 11 (11A, 11B) will now be described. Various devices that photograph an image of the fundus of the eye to be examined can be used as the fundus image photographing device 11. As an example, the fundus image photographing device 11 used in this embodiment is a fundus camera capable of photographing a two-dimensional color front image of the fundus using visible light. The blood vessel image acquisition processing described later is therefore appropriately performed based on a color fundus image. However, a device other than a fundus camera (for example, at least one of an OCT device and a scanning laser ophthalmoscope (SLO)) may be used as the fundus image photographing device. The fundus image may be a two-dimensional front image of the fundus photographed from the front of the eye to be examined, or a three-dimensional image of the fundus.
The fundus image photographing device 11 includes a control unit 12 (12A, 12B) that performs various control processes and a fundus image photographing unit 16 (16A, 16B). The control unit 12 includes a CPU 13 (13A, 13B), which is the controller in charge of control, and a storage device 14 (14A, 14B) capable of storing programs, data, and the like. The fundus image photographing unit 16 includes optical members and the like for photographing a fundus image of the eye to be examined. Needless to say, when the fundus image photographing device 11 executes at least part of the fundus image processing described later (see FIGS. 3 and 11), at least part of the fundus image processing program for executing that processing is stored in the storage device 14.
With reference to FIG. 2, an example of how the fundus image processing devices 1 and 21 of this embodiment acquire blood vessel images will be described. A blood vessel image is an image showing the blood vessels included in a fundus image. The fundus image processing devices 1 and 21 of this embodiment input a fundus image into a mathematical model trained by a machine learning algorithm, and thereby acquire a blood vessel image showing the blood vessels of the input fundus image. They likewise input the fundus image into a mathematical model trained by a machine learning algorithm to identify the position of the optic disc (hereinafter sometimes simply called the "disc") in the input fundus image (specifically, the position of the centroid of the disc). The mathematical model is trained in advance so that, when a fundus image is input, it outputs a blood vessel image and the disc position for the input fundus image. Note that the mathematical model that outputs the blood vessel image and the mathematical model that outputs the disc position may be constructed separately.
The mathematical model is constructed so as to output the blood vessel images and the disc position by being trained with a training dataset. The training dataset includes input-side data (input training data) and output-side data (output training data).
FIG. 2 shows an example of the fundus image 30 and the blood vessel images 40 (40A, 40B). In this embodiment, the fundus image 30, a two-dimensional color front image photographed by the fundus image photographing device (in this embodiment a fundus camera) 11A, serves as the input training data. The image region of the fundus image 30 used as input training data includes both the optic disc 31 and the macula 32 of the eye to be examined. The blood vessel images 40A and 40B, which show at least one of the arteries and the veins in the fundus image 30 used as input training data, and information indicating the disc position (in this embodiment, the centroid position G of the disc) serve as the output training data. In this embodiment, the output training data includes both the artery blood vessel image 40A and the vein blood vessel image 40B for the fundus image 30. Therefore, when the fundus image 30 is input, the constructed mathematical model can output the artery blood vessel image 40A and the vein blood vessel image 40B included in the input fundus image 30. The output training data (that is, the blood vessel images 40A and 40B and the information indicating the disc position) may be generated, for example, in response to instructions input by an operator who has examined the fundus image 30 serving as the input training data.
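As an illustrative aside (not part of the disclosure), one record of such a training dataset pairs a color fundus image with the two vessel masks and the disc centroid. The sketch below shows a possible in-memory representation; the class name, field names, and array shapes are assumptions made for the example, not taken from the source.

```python
import numpy as np
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrainingPair:
    """One training example; field names and shapes are illustrative."""
    fundus_image: np.ndarray        # H x W x 3 color fundus image (input side)
    artery_mask: np.ndarray         # H x W binary artery image (output side)
    vein_mask: np.ndarray           # H x W binary vein image (output side)
    disc_centroid: Tuple[int, int]  # (row, col) of the disc centroid G (output side)

# A toy 4 x 4 example with empty masks.
pair = TrainingPair(
    fundus_image=np.zeros((4, 4, 3), dtype=np.uint8),
    artery_mask=np.zeros((4, 4), dtype=np.uint8),
    vein_mask=np.zeros((4, 4), dtype=np.uint8),
    disc_centroid=(2, 2),
)
print(pair.fundus_image.shape, pair.disc_centroid)  # (4, 4, 3) (2, 2)
```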
(First embodiment)
With reference to FIGS. 3 to 10, the fundus image processing executed by the fundus image processing device 1 of the first embodiment will be described. As described above, the fundus image processing device 1 generates the retinal blood vessel distribution probability maps, the aggregate histograms, and the like based on the plurality of fundus images 30. Furthermore, by training a mathematical model with a machine learning algorithm, the fundus image processing device 1 constructs a mathematical model that outputs information on a specific disease. The fundus image processing illustrated in FIG. 3 is executed by the CPU 3 of the fundus image processing device 1 according to the fundus image processing program stored in the storage device 4.
As shown in FIG. 3, the CPU 3 acquires a plurality of fundus images 30 (see FIG. 2), which are two-dimensional color front images photographed by the fundus image photographing device (in this embodiment a fundus camera) 11A (S1).
Next, the CPU 3 acquires the blood vessel images 40A and 40B showing at least one of the arteries and the veins included in each fundus image 30 acquired in S1 (S2). In this embodiment, the CPU 3 inputs each fundus image 30 into the mathematical model trained by the machine learning algorithm, and acquires the blood vessel images 40A and 40B output by the model. Blood vessel images 40A and 40B showing the blood vessels of the fundus image 30 with high accuracy are therefore easily obtained. In this embodiment, the artery blood vessel image 40A and the vein blood vessel image 40B included in each fundus image 30 are acquired separately.
Note that processing may be performed to make the range of the fundus captured in the plurality of fundus images 30, or in the plurality of blood vessel images 40A and 40B, closer to uniform. For example, the width of the imaging range within each image, which varies with the imaging magnification at which the fundus image 30 was photographed, may be equalized according to that magnification. Alternatively, an image within a specific range may be extracted from the fundus image 30 or the blood vessel images 40A and 40B so that the range of the fundus captured in the images becomes closer to uniform. This equalization may be performed in response to instructions input by an operator (that is, manually), or automatically by the CPU 3.
The CPU 3 identifies the disc position (in this embodiment, the centroid position of the disc) in each of the blood vessel images 40A and 40B acquired in S2 (S3). In this embodiment, the CPU 3 inputs the fundus image 30 into the mathematical model trained by the machine learning algorithm, and acquires the disc position of the fundus image 30 output by the model. The disc position in the fundus image 30 and the blood vessel images 40A and 40B is therefore easily identified with high accuracy.
The CPU 3 adds the plurality of blood vessel images 40A and 40B acquired in S2 in an aligned state, thereby generating retinal blood vessel distribution probability maps 50A and 50B (see FIG. 4), which show the distribution of the existence probability of blood vessels in the retina of the eye to be examined (S4). The retinal blood vessel distribution probability maps 50A and 50B appropriately show the two-dimensional distribution of the existence probability of retinal blood vessels within the set (population) of eyes whose fundus images 30 were photographed, and therefore appropriately present information on blood vessels over a wide area of the fundus image. Note that the rough structure of the retinal blood vessels tends to be uniform regardless of the eye examined, so the correlation between the blood vessel existence probability in each region of the maps 50A and 50B and the blood vessel thickness in each region of the population is high. The maps 50A and 50B therefore also make it easy to grasp the state of blood vessel thickness region by region.
In S4, the CPU 3 generates the retinal blood vessel distribution probability maps 50A and 50B by adding and averaging the plurality of aligned blood vessel images 40A and 40B. The maps 50A and 50B can therefore be handled in the same units as the pixel values (for example, luminance values) of the individual blood vessel images 40A and 40B.
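The add-and-average operation of S4 can be sketched as follows. This is a minimal sketch, not the disclosed implementation: it assumes the vessel images have already been aligned and binarized (1 on a vessel pixel, 0 elsewhere), and the function name and the rescaling to the 8-bit range 0-255 are choices made for the example.

```python
import numpy as np

def vessel_probability_map(aligned_masks):
    """Average aligned binary vessel masks into an existence-probability map.

    aligned_masks: iterable of 2-D arrays of identical shape, 1 on a vessel
    pixel and 0 elsewhere.  Returns the per-pixel fraction of images in which
    a vessel was present, rescaled to 0-255 so the map can be handled in the
    same units as an 8-bit image.
    """
    stack = np.stack(list(aligned_masks)).astype(np.float64)
    prob = stack.mean(axis=0)               # existence probability in [0, 1]
    return np.round(prob * 255).astype(np.uint8)

# Three toy 2x2 masks: the top-left pixel is a vessel in all three images,
# the bottom-right pixel in only one of them.
masks = [np.array([[1, 0], [0, 1]]),
         np.array([[1, 1], [0, 0]]),
         np.array([[1, 0], [0, 0]])]
pmap = vessel_probability_map(masks)
print(int(pmap[0, 0]), int(pmap[1, 1]))  # 255 85
```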
In S4, the CPU 3 generates the arterial retinal blood vessel distribution probability map 50A by adding the plurality of artery blood vessel images 40A, and generates the venous retinal blood vessel distribution probability map 50B by adding the plurality of vein blood vessel images 40B. Depending on the subject's condition (for example, disease), the arteries and the veins of the fundus may change differently. Generating both the arterial map 50A and the venous map 50B therefore makes it easier to obtain more useful information.
In S4, the plurality of blood vessel images 40A and 40B are added with their positions aligned using, as a reference, the disc position in each blood vessel image identified in S3. The retinal blood vessels of the fundus spread outward from the optic disc, and this rough structure tends to be uniform regardless of the eye examined. Therefore, aligning the plurality of blood vessel images 40A and 40B with respect to the disc position when generating the maps 50A and 50B further improves the accuracy of the blood vessel existence probability distribution that the maps show.
In detail, in S4, the plurality of blood vessel images 40A and 40B are each aligned using, as a reference, the centroid position of the disc identified in S3. Because the images are aligned with respect to a single centroid point, the accuracy of the generated retinal blood vessel distribution probability maps 50A and 50B is further improved.
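The centroid-based alignment can be illustrated as a per-image translation that moves each image's disc centroid onto a common reference point. A minimal sketch under assumptions not stated in the source: integer-pixel shifts via `np.roll` (which wraps pixels around the border, acceptable only when vessel pixels stay away from the image edge, as in this toy example), with illustrative function and parameter names.

```python
import numpy as np

def align_to_disc(mask, disc_centroid, reference):
    """Shift a vessel mask so its disc centroid lands on `reference`.

    disc_centroid, reference: (row, col) coordinates.  The shift is rounded
    to whole pixels and applied with np.roll.
    """
    dr = int(round(reference[0] - disc_centroid[0]))
    dc = int(round(reference[1] - disc_centroid[1]))
    return np.roll(mask, shift=(dr, dc), axis=(0, 1))

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1, 1] = 1   # a vessel pixel; the disc centroid of this image is (1, 1)
aligned = align_to_disc(mask, disc_centroid=(1, 1), reference=(2, 2))
print(int(aligned[2, 2]))  # 1
```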
Note that the CPU 3 may align the plurality of blood vessel images 40A and 40B automatically in S4, which appropriately reduces the amount of work required of the operator to generate the maps 50A and 50B. Alternatively, the operator may input instructions for aligning the plurality of blood vessel images 40A and 40B into the fundus image processing device 1 via the operation unit 7, and the CPU 3 may align the images in accordance with those instructions. In this case as well, the retinal blood vessel distribution probability maps 50A and 50B are appropriately generated.
FIG. 4 shows an example of the arterial retinal blood vessel distribution probability map 50A and the venous retinal blood vessel distribution probability map 50B displayed on the display device 8. In this embodiment, when displaying the maps 50A and 50B on the display device 8, the CPU 3 expresses the luminance value of each pixel of the two-dimensional maps (in this embodiment, 256 luminance levels from 0 to 255) in color and shading (that is, displays the maps as heat maps). In the example shown in FIG. 4, the higher the luminance value of a pixel, the deeper the warm color, and the lower the luminance value, the deeper the cool color. Displaying the maps 50A and 50B as heat maps makes the two-dimensional distribution of the blood vessel existence probability even easier to grasp appropriately.
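The warm/cool heat-map rendering amounts to mapping each 8-bit luminance value to a color. A minimal sketch (not the disclosed rendering) using a simple linear blue-to-red ramp: low probability is blue (cool), high probability is red (warm); real displays would typically use a richer colormap.

```python
import numpy as np

def heatmap_rgb(prob_map):
    """Map an 8-bit probability map to RGB: low values cool (blue),
    high values warm (red), linearly interpolated."""
    t = prob_map.astype(np.float64) / 255.0       # normalize to [0, 1]
    rgb = np.empty(prob_map.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.round(255 * t)               # red grows with probability
    rgb[..., 1] = 0
    rgb[..., 2] = np.round(255 * (1.0 - t))       # blue fades with probability
    return rgb

pmap = np.array([[0, 128, 255]], dtype=np.uint8)
img = heatmap_rgb(pmap)
print(img[0, 0].tolist(), img[0, 2].tolist())  # [0, 0, 255] [255, 0, 0]
```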
The CPU 3 generates target blood vessel feature maps 51A and 51B (see FIG. 5), which are maps showing the characteristics of the blood vessel distribution in the blood vessel image to be analyzed (S5). In detail, the CPU 3 acquires target blood vessel images, that is, blood vessel images 40A and 40B to be analyzed. In this embodiment, an artery target blood vessel image and a vein target blood vessel image are acquired for the fundus to be analyzed. The target blood vessel images may be acquired by the same method as used in S2 described above, or by a different method. The CPU 3 generates, as the target blood vessel feature maps 51A and 51B, the portions of the retinal blood vessel distribution probability maps 50A and 50B that correspond to the blood vessel regions of the target blood vessel images. In this embodiment, the CPU 3 generates, as the arterial target blood vessel feature map 51A, the portion of the arterial map 50A corresponding to the blood vessel region of the artery target blood vessel image, and generates, as the venous target blood vessel feature map 51B, the portion of the venous map 50B corresponding to the blood vessel region of the vein target blood vessel image.
As shown in FIG. 5, the target blood vessel feature maps 51A and 51B show the characteristics of the blood vessel distribution in the target blood vessel images. For example, the maps 51A and 51B allow the population's blood vessel existence probability within the blood vessel regions of the target blood vessel image to be grasped appropriately on the image. Moreover, as described above, the correlation between the blood vessel existence probability in each region of the maps 50A and 50B and the blood vessel thickness in each region of the population is high. The maps 51A and 51B therefore make it easy to recognize, for example, that a thick blood vessel exists in a region where the population is likely to have a thin blood vessel, or that a thin blood vessel exists in a region where the population is likely to have a thick blood vessel. In this way, the target blood vessel feature maps 51A and 51B make it easy to grasp the characteristics of the blood vessels captured in the target blood vessel images.
In S5, the CPU 3 generates the target blood vessel feature maps 51A and 51B with the retinal blood vessel distribution probability maps 50A and 50B aligned with the target blood vessel images. Any of the various alignment methods described above (for example, using the disc position as a reference) can be adopted, and the alignment may be performed automatically or manually by an operator. Note that the CPU 3 may generate the maps 51A and 51B either by extracting, from the maps 50A and 50B, the regions corresponding to the blood vessel regions of the target blood vessel images, or by masking the regions of the maps 50A and 50B other than the blood vessel regions of the target blood vessel images.
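The masking variant of S5 can be sketched as keeping the probability-map values only where the (aligned) target vessel image has a vessel and zeroing everything else. A minimal sketch under the assumption that both arrays are already aligned and share the same shape; names are illustrative.

```python
import numpy as np

def target_vessel_feature_map(prob_map, target_mask):
    """Keep probability-map values only on the target image's vessel pixels;
    all other pixels are masked to zero."""
    return np.where(target_mask.astype(bool), prob_map, 0)

prob_map = np.array([[200, 50], [10, 120]], dtype=np.uint8)   # toy 2x2 probability map
target_mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)      # toy target vessel region
fmap = target_vessel_feature_map(prob_map, target_mask)
print(fmap.tolist())  # [[200, 0], [0, 120]]
```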
 The CPU 3 generates blood vessel distribution histograms 52A and 52B (see FIG. 5), which are histograms indicating the characteristics of the blood vessel distribution in the blood vessel images to be analyzed (S6). Specifically, as in S5, the CPU 3 acquires the target blood vessel images, i.e., the blood vessel images 40A and 40B to be analyzed. In this embodiment, an arterial target blood vessel image and a venous target blood vessel image are acquired for the fundus to be analyzed. Based on the information of the retinal blood vessel distribution probability maps 50A and 50B within the blood vessel regions of the target blood vessel image, the CPU 3 generates, as the blood vessel distribution histograms 52A and 52B, histograms of the number of pixels in those blood vessel regions, binned by the blood vessel existence probability indicated by the retinal blood vessel distribution probability maps 50A and 50B. More specifically, in this embodiment the range from the lowest to the highest blood vessel existence probability in the retinal blood vessel distribution probability maps 50A and 50B is divided into 256 levels corresponding to brightness values. Each pixel within the blood vessel regions of the target blood vessel image therefore belongs to one of 256 areas ordered by blood vessel existence probability (hereinafter, "existence probability areas"). The CPU 3 builds a histogram of the number of pixels belonging to each existence probability area. In this embodiment, the CPU 3 generates the arterial blood vessel distribution histogram 52A based on the information of the arterial retinal blood vessel distribution probability map 50A within the blood vessel regions of the arterial target blood vessel image, and generates the venous blood vessel distribution histogram 52B based on the information of the venous retinal blood vessel distribution probability map 50B within the blood vessel regions of the venous target blood vessel image.
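The 256-level binning described above can be sketched as follows. This is a simplified illustration: probabilities are assumed to be normalized to [0, 1], and the mapping of a probability value to one of the 256 existence probability areas is an assumption about one plausible implementation.

```python
import numpy as np

def vessel_distribution_histogram(prob_map, vessel_mask, n_bins=256):
    """Count vessel pixels per existence-probability bin.

    prob_map    : 2-D float array of existence probabilities in [0, 1]
    vessel_mask : 2-D bool array, True at vessel pixels of the target image
    Returns a length-256 histogram (the blood vessel distribution histogram).
    """
    probs = prob_map[vessel_mask]                        # probabilities at vessel pixels
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins, minlength=n_bins)           # pixel count per bin
```

Each vessel pixel of the aligned target image contributes one count to the bin of its probability value, exactly one bin per pixel, so the histogram totals the number of vessel pixels.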
 In the blood vessel distribution histograms 52A and 52B shown in FIG. 5, the horizontal axis is the blood vessel existence probability in the retinal blood vessel distribution probability maps 50A and 50B (that is, the plurality of existence probability areas), and the vertical axis is the number of pixels, among the pixels in the blood vessel regions of the target blood vessel image, that belong to each existence probability area. The characteristics of the blood vessel distribution in the target blood vessel image appear clearly in the blood vessel distribution histograms 52A and 52B: the distribution of pixel counts over the 256 existence probability areas is appropriately captured for the pixels within the blood vessel regions of the target blood vessel image. The blood vessel distribution histograms 52A and 52B are therefore also useful information that could not be obtained with past analysis methods.
 In S6, the CPU 3 generates the blood vessel distribution histograms 52A and 52B with the retinal blood vessel distribution probability maps 50A, 50B aligned to the target blood vessel image. Any of the alignment methods described above (for example, the method that uses the position of the optic disc as a reference) can be employed, and the alignment may be performed automatically or manually by an operator.
 The CPU 3 generates an aggregate histogram by aggregating, among the plurality of blood vessel distribution histograms 52A and 52B generated for the plurality of target blood vessel images in S6, the blood vessel distribution histograms 52A and 52B to be aggregated (S7). In this embodiment, the CPU 3 generates an arterial aggregate histogram by aggregating the blood vessel distribution histograms 52A of a plurality of arteries, and a venous aggregate histogram by aggregating the blood vessel distribution histograms 52B of a plurality of veins. In this embodiment, the CPU 3 generates the aggregate histogram by averaging the aggregated blood vessel distribution histograms 52A and 52B. The age-specific aggregate histograms described later (see FIGS. 6 and 7) are one example of an aggregate histogram. By setting the population to be aggregated as appropriate, the CPU 3 can generate various aggregate histograms. For example, as described later, populations may be set by age group, or populations defined by specific conditions such as disease may be set. The aggregate histogram makes it possible to appropriately grasp the characteristics of the blood vessel distribution of the aggregated population.
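The averaging in S7 can be sketched as follows; this assumes only that each subject's histogram has the same number of bins (256 in the embodiment).

```python
import numpy as np

def aggregate_histogram(histograms):
    """Average a set of per-subject blood vessel distribution histograms.

    histograms : iterable of equal-length 1-D arrays (one per subject/eye
                 in the chosen population, e.g. an age group or disease group)
    Returns the bin-wise mean, i.e. the population's aggregate histogram.
    """
    stacked = np.stack([np.asarray(h, dtype=float) for h in histograms])
    return stacked.mean(axis=0)  # per-bin average pixel count
```

The same routine serves for any population: the caller simply selects which subjects' histograms to pass in.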
 Aggregate histograms can be used in various ways. For example, the CPU 3 may display a generated aggregate histogram on the display device 8 so that the user can grasp the characteristics of the blood vessel distribution of the aggregated population. The CPU 3 also generates a difference histogram, which is the difference between the blood vessel distribution histograms 52A and 52B generated for an individual target blood vessel image and the aggregate histogram. This processing is described in detail for the fundus image processing of the second embodiment (see FIG. 11).
 The CPU 3 generates age-specific aggregate histograms (see FIGS. 6 and 7) (S8). Specifically, the CPU 3 aggregates the plurality of blood vessel distribution histograms 52A and 52B generated for the plurality of target blood vessel images in S6 for each age group to which the age of the subject to be analyzed belongs (in this embodiment, 20s, 30s, 40s, 50s, 60s, and 70s), and averages the aggregated results for each age group to generate the age-specific aggregate histograms. In this embodiment, the CPU 3 generates the arterial age-specific aggregate histograms (see FIG. 6) by aggregating the arterial blood vessel distribution histograms 52A by age group, and the venous age-specific aggregate histograms (see FIG. 7) by aggregating the venous blood vessel distribution histograms 52B by age group.
 In the aggregate histograms (age-specific aggregate histograms) shown in FIGS. 6 and 7, the horizontal axis is the blood vessel existence probability in the retinal blood vessel distribution probability maps 50A and 50B (that is, the plurality of existence probability areas), and the vertical axis is the average number of pixels belonging to each existence probability area among the pixels in the blood vessel regions of the target blood vessel image. As FIGS. 6 and 7 show, the age-specific aggregate histograms make it possible to appropriately grasp the characteristics of the blood vessel distribution by age group. The age-specific aggregate histograms shown in FIGS. 6 and 7 were generated without imposing conditions on the subjects to be aggregated. FIGS. 6 and 7 show that the older the age group (that is, the older the subject), the fewer the blood vessels in almost all existence probability areas, for both arteries and veins. In other words, there is very likely a strong correlation between a subject's advancing age and a decrease in the subject's fundus blood vessels.
 Therefore, when aggregating a plurality of blood vessel distribution histograms 52A, 52B generated for subjects of different ages (for example, when executing S7 described above or S9 described later), the CPU 3 aggregates the plurality of blood vessel distribution histograms 52A, 52B after correcting for the effect of age-related changes in the distribution of blood vessel existence probabilities for each subject to be analyzed. As one example, in this embodiment the CPU 3 corrects for the effect of age-related changes by performing, for each existence probability area ordered by blood vessel existence probability, a regression analysis using a regression equation with age as the explanatory variable and the existence probability as the objective variable. As a result, the characteristics of the blood vessel distribution can be grasped from an aggregate histogram in which the effect of age-related changes is suppressed. The correction for age-related changes in the distribution of blood vessel existence probabilities is applied to both the aggregation of the arterial blood vessel distribution histograms 52A and the aggregation of the venous blood vessel distribution histograms 52B.
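One possible form of this per-bin age correction is sketched below. The embodiment states only that a regression with age as the explanatory variable is performed per existence probability area; the linear fit and the re-centering on the mean age are assumptions for illustration.

```python
import numpy as np

def age_corrected_histograms(histograms, ages):
    """Remove a linear age trend from each histogram bin before aggregation.

    histograms : array-like of shape (n_subjects, n_bins)
    ages       : array-like of subject ages, length n_subjects
    Per bin, fits count = slope * age + intercept and subtracts the fitted
    age effect relative to the mean age (a sketch; the patent does not
    specify the regression form).
    """
    H = np.asarray(histograms, dtype=float)
    ages = np.asarray(ages, dtype=float)
    corrected = np.empty_like(H)
    mean_age = ages.mean()
    for b in range(H.shape[1]):               # one regression per bin
        slope, _intercept = np.polyfit(ages, H[:, b], 1)
        corrected[:, b] = H[:, b] - slope * (ages - mean_age)
    return corrected
```

After this correction, subjects of different ages can be pooled into one aggregate histogram with the age trend suppressed.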
 The CPU 3 generates a plurality of aggregate histograms, one per population (S9). Specifically, for each of a plurality of populations to which target blood vessel images belong, the CPU 3 aggregates, among the blood vessel distribution histograms 52A and 52B generated for the plurality of target blood vessel images in S6, those generated for the target blood vessel images belonging to that population, and averages the aggregated results per population. As a result, an aggregate histogram is generated for each population, so that the characteristics of the fundus blood vessel distribution of each population can be appropriately grasped. In S9, both the aggregation of the arterial blood vessel distribution histograms 52A and the aggregation of the venous blood vessel distribution histograms 52B are executed.
 The CPU 3 generates difference histograms (see FIGS. 8 and 9) indicating the differences between the plurality of aggregate histograms generated per population in S9 (S10). In S10, an arterial difference histogram and a venous difference histogram are generated. The difference histograms generated in S10 make it possible to appropriately grasp the characteristics of the blood vessel distribution of each population. For example, by defining populations according to various conditions, such as whether a subject has a disease or the state of that disease, useful information that could not be obtained with past analysis methods can be obtained.
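The difference histogram of S10 is simply a bin-wise subtraction, which can be sketched as follows (the only assumption is that the two aggregate histograms share the same 256 bins).

```python
import numpy as np

def difference_histogram(group_hist, reference_hist):
    """Bin-wise difference between a population's aggregate histogram and a
    reference aggregate histogram (e.g. the all-subjects/all-eyes population).
    Positive bins indicate more vessel pixels in the group than the reference.
    """
    return np.asarray(group_hist, dtype=float) - np.asarray(reference_hist, dtype=float)
```

The same subtraction also applies when comparing an individual subject's histogram against a population aggregate, as in the second embodiment.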
 FIGS. 8 and 9 are examples of difference histograms for two different populations. The difference histograms shown in FIGS. 8 and 9 were generated by setting two populations, a DM group and a control group, and then executing the per-population aggregate histogram generation (S9) and the difference histogram generation for each of the two aggregate histograms (S10). The DM group is a population of diabetic patients who have not developed diabetic retinopathy (n = 495). The control group is the population (n = 9965) obtained by excluding the subjects/eyes of the DM group, the comparison target, from all subjects/eyes (n = 10460). The aggregate histogram used as the comparison target when generating the difference histograms for the DM group and the control group is the aggregate histogram whose population is all subjects/eyes (n = 10460). FIG. 8 is the difference histogram for arteries, and FIG. 9 is the difference histogram for veins. FIG. 10 shows a map 53A in which the arterial difference histogram data for the DM group shown in FIG. 8 is superimposed on the fundus image 30, and a map 53B in which the venous difference histogram data for the DM group shown in FIG. 9 is superimposed on the fundus image 30.
 As shown in the difference histogram of FIG. 8 and the map 53A of FIG. 10, focusing on the arteries of the DM group (diabetic patients who have not developed diabetic retinopathy), brightness (pixel count) is increased relative to the control group in the regions around the major vessels where the blood vessel existence probability is inherently high (existence probability areas 48 to 84), but is decreased relative to the control group in the regions where the existence probability is low (existence probability areas 0 to 46). As shown in the difference histogram of FIG. 9 and the map 53B of FIG. 10, in the veins of the DM group the brightness (pixel count) is decreased relative to the control group in all regions, with a particularly large decrease in regions where retinal blood vessels are sparse, including the area around the fovea. Depending on whether the difference histogram between the blood vessel distribution histogram of the eye to be analyzed and a comparison aggregate histogram (for example, the aggregate histogram whose population is all subjects/eyes) approximates the DM-group difference histograms shown in FIGS. 8 and 9, it may be possible to predict the risk of diabetic retinopathy before its onset.
 The CPU 3 constructs a mathematical model that outputs information on a specific disease by training the model with a machine learning algorithm (S11). In S11, the mathematical model is trained with a plurality of training data sets, each containing input-side data (input training data) and output-side data (output training data). Specifically, the blood vessel distribution histograms 52A and 52B are used as the input training data. The blood vessel distribution histograms 52A and 52B are one example of blood vessel distribution characteristic information indicating the characteristics of the blood vessel distribution of a specific blood vessel image; such information is generated by processing the information of the retinal blood vessel distribution probability maps 50A and 50B in the regions corresponding to the blood vessel regions of the specific blood vessel image. The output training data is information indicating whether the subject from whom the input training data was acquired has a specific disease (in this embodiment, diabetes). The information indicating the presence or absence of the specific disease may be generated, for example, by an operator operating the operation unit 7. The mathematical model is trained so that, when the blood vessel distribution histograms 52A and 52B of a target subject are input, it outputs information on the specific disease for that subject (in this embodiment, the probability that the target subject has diabetes). A program implementing the mathematical model constructed by the fundus image processing device 1 is incorporated into the fundus image processing device 21.
 The blood vessel distribution histograms 52A and 52B reflect the characteristics of the blood vessel distribution of the target blood vessel image, and those characteristics change depending on whether the subject has diabetes. Therefore, by inputting a target subject's blood vessel distribution histograms 52A and 52B into a mathematical model trained with training data sets containing blood vessel distribution histograms 52A and 52B, the probability that the target subject has diabetes is appropriately obtained.
 In this embodiment, both the arterial blood vessel distribution histogram 52A and the venous blood vessel distribution histogram 52B are used as the input training data for training the mathematical model and as the blood vessel distribution histograms 52A and 52B input into the constructed model. This makes it easier to obtain disease information with higher accuracy.
 Furthermore, in this embodiment the input training data for training the mathematical model includes, in addition to the blood vessel distribution histograms 52A and 52B, the age and sex of the subject from whom the histograms were acquired. Blood vessel distribution characteristic information such as the blood vessel distribution histograms 52A and 52B often varies with at least one of the subject's age and sex, so including at least one of age and sex in the input training data makes it easier for the mathematical model to output appropriate information for the target subject's age and sex. The age and sex of the subject from whom the blood vessel distribution histograms 52A and 52B were acquired are also input into the constructed mathematical model together with the histograms.
 As one example, in this embodiment a neural network with five fully connected layers is trained on the training data sets with 5-fold cross-validation. Needless to say, the algorithm can be chosen as appropriate.
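The shape of such a model can be sketched as a forward pass. Everything beyond "five fully connected layers" is an assumption here: the layer widths, the ReLU activations, the sigmoid output, and the encoding of the inputs (256 arterial bins, 256 venous bins, age, and sex concatenated into one vector) are illustrative choices, and the weights below are random rather than trained.

```python
import numpy as np

def init_mlp(layer_sizes, rng):
    """Random weights for a fully connected network (untrained sketch)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict_diabetes_probability(params, artery_hist, vein_hist, age, sex):
    """Forward pass: both histograms plus age and sex in, a probability out."""
    x = np.concatenate([artery_hist, vein_hist, [age], [sex]]).astype(float)
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)           # ReLU on hidden layers (assumed)
    return 1.0 / (1.0 + np.exp(-x[0]))       # sigmoid -> probability (assumed)

rng = np.random.default_rng(0)
# 256 artery bins + 256 vein bins + age + sex = 514 inputs; 5 weight layers
params = init_mlp([514, 64, 32, 16, 8, 1], rng)
p = predict_diabetes_probability(params, np.zeros(256), np.zeros(256), 55, 1)
```

In the 5-fold cross-validation scheme, the data sets would be split into five parts, with each part used once for validation while the model is trained on the remaining four.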
 The inventors of the present application conducted a trial to evaluate the usefulness of the constructed mathematical model. In this trial, the inventors input the blood vessel distribution histograms, ages, and sexes of a plurality of subjects with diabetes (diabetic group) into the constructed mathematical model to obtain the probability that each subject has diabetes. Likewise, the inventors input the blood vessel distribution histograms, ages, and sexes of a plurality of subjects without diabetes (normal group) into the model to obtain the probability for each of those subjects. The inventors compared the probabilities obtained for the diabetic group with those obtained for the normal group using a t-test, one of the hypothesis testing methods.
 As a result, the mean probability obtained for the diabetic group was 0.57 ± 0.080 and the mean probability obtained for the normal group was 0.53 ± 0.075, a statistically significant difference.
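The group comparison described above can be sketched with a t statistic. The patent does not state which t-test variant was used, so Welch's unequal-variance form is assumed here for illustration; the significance decision would additionally require the degrees of freedom and a p-value.

```python
import numpy as np

def welch_t_statistic(a, b):
    """Welch's t statistic for two independent samples of probabilities
    (an assumed variant; the patent only says 't-test')."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)
```

A positive statistic indicates that the first group's mean predicted probability exceeds the second group's, as reported for the diabetic group versus the normal group.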
(Second embodiment)
 Referring to FIG. 11, the fundus image processing executed by the fundus image processing device 21 of the second embodiment will be described. As described above, the fundus image processing device 21 of the second embodiment acquires the retinal blood vessel distribution probability maps 50A, 50B and the aggregate histograms generated by the fundus image processing device 1 of the first embodiment. Based on the retinal blood vessel distribution probability maps 50A, 50B, the aggregate histograms, and the like, the fundus image processing device 21 of the second embodiment generates blood vessel distribution characteristic information indicating the characteristics of the blood vessel distribution of the blood vessel image to be analyzed (the target blood vessel image). In the second embodiment, the blood vessel distribution characteristic information generated includes, for example, the target blood vessel feature maps 51A and 51B (see FIG. 5), the blood vessel distribution histograms 52A and 52B (see FIG. 5), and difference histograms (see FIGS. 8 and 9). Furthermore, a program implementing the mathematical model constructed by the fundus image processing device 1 of the first embodiment is incorporated into the fundus image processing device 21 of the second embodiment. In the second embodiment, the blood vessel distribution histograms 52A and 52B generated from the fundus image to be analyzed are input into the mathematical model to obtain the probability that the subject to be analyzed has diabetes. The fundus image processing illustrated in FIG. 11 is executed by the CPU 23 of the fundus image processing device 21 according to the fundus image processing program stored in the storage device 24.
 First, the CPU 23 acquires, as the fundus image to be analyzed, a fundus image 30 (see FIG. 2), a two-dimensional color frontal image captured by the fundus image capturing device (in this embodiment, a fundus camera) 11B (S11). The CPU 23 acquires target blood vessel images indicating at least one of the arteries and veins included in the fundus image acquired in S11 (in this embodiment, an arterial target blood vessel image and a venous target blood vessel image) (S12). The CPU 23 identifies the position of the optic disc (in this embodiment, the position of the center of gravity of the optic disc) in the target blood vessel image (S13). For S11 to S13, the same processing as S1 to S3 of the first embodiment can be adopted.
 The CPU 23 generates target blood vessel feature maps 51A and 51B (see FIG. 5) indicating the characteristics of the blood vessel distribution in the target blood vessel image, and displays them on the display device 28 (S14). As described above, the target blood vessel feature maps 51A and 51B allow the characteristics of the blood vessel distribution in the target blood vessel image to be grasped at a glance, so diagnosis of the eye to be examined and the like is appropriately assisted. For generating the target blood vessel feature maps 51A and 51B in S14, the same processing as S5 of the first embodiment can be adopted.
 The CPU 23 generates blood vessel distribution histograms 52A and 52B (see FIG. 5), which are histograms indicating the characteristics of the blood vessel distribution in the target blood vessel image, and displays them on the display device 28 (S15). As described above, the characteristics of the blood vessel distribution in the target blood vessel image appear clearly in the blood vessel distribution histograms 52A and 52B, so diagnosis of the eye to be examined and the like is appropriately assisted. For generating the blood vessel distribution histograms 52A and 52B in S15, the same processing as S6 of the first embodiment can be adopted.
 In S15, the CPU 23 may display on the display device 28, in a comparable state (for example, side by side), the blood vessel distribution histograms 52A and 52B generated for the target blood vessel image and an aggregate histogram generated by the fundus image processing device 1 of the first embodiment (for example, the aggregate histogram of a population of subjects/eyes satisfying a specific condition, or the aggregate histogram of the population of all subjects/eyes). In this case, the user can easily compare the characteristics of the blood vessel distribution of the aggregate histogram's population with the characteristics of the blood vessel distribution of the fundus to be analyzed.
 The CPU 23 generates a difference histogram, which is the difference between the blood vessel distribution histograms 52A and 52B generated in S15 and an aggregate histogram generated by the fundus image processing device 1 of the first embodiment (for example, the aggregate histogram of a population of subjects/eyes satisfying a specific condition, or the aggregate histogram of the population of all subjects/eyes), and displays it on the display device 28 (S16). The difference histogram generated in S16 shows the characteristics of the blood vessel distribution of the individual analysis target in comparison with the characteristics of the blood vessel distribution in the aggregate histogram's population. The characteristics of the individual analysis target's blood vessel distribution are therefore even easier to grasp.
 The CPU 23 inputs the blood vessel distribution histograms 52A and 52B of the subject to be analyzed (the target subject), generated in S15, into the mathematical model constructed in S11 (see FIG. 3), and thereby obtains the information on the specific disease output by the model (in this embodiment, the probability that the target subject has diabetes) (S17). As described above, the mathematical model has been trained by a machine learning algorithm so that, when blood vessel distribution histograms 52A and 52B are input, it outputs the probability that the subject from whom they were acquired has diabetes. The model was trained in advance with a plurality of training data sets in which the blood vessel distribution histograms 52A and 52B are the input training data and information indicating whether the subject from whom the input training data was acquired has diabetes is the output training data. As described above, the blood vessel distribution histograms 52A and 52B reflect the characteristics of the blood vessel distribution of the target blood vessel image, and depending on the type of disease, those characteristics often change according to the disease state of the subject from whom the target blood vessel image was acquired. Therefore, by inputting the target subject's blood vessel distribution histograms 52A and 52B into a mathematical model trained in advance with training data sets containing the blood vessel distribution histograms 52A and 52B of a plurality of subjects, the probability that the target subject has diabetes is appropriately obtained.
 Note that in S17 of the present embodiment, the blood vessel distribution histogram 52A for arteries and the blood vessel distribution histogram 52B for veins are both input into the mathematical model. Further, in S17 of the present embodiment, information on the age and sex of the subject from whom the histograms 52A and 52B were acquired is input into the mathematical model together with the histograms. As a result, the probability of diabetes is more readily acquired with higher accuracy.
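The combined model input described above (arterial histogram, venous histogram, age, and sex) can be sketched as follows. Everything here is a hypothetical illustration: the patent does not specify the model class, histogram bin count, or feature encoding, so a simple logistic-regression stand-in over a concatenated feature vector is used.

```python
import numpy as np

# Hypothetical parameters: the patent does not fix the bin count or encoding.
N_BINS = 10

def make_feature_vector(artery_hist, vein_hist, age, sex):
    """Concatenate the arterial and venous vessel-distribution histograms
    with the subject's age (scaled) and sex (encoded 0/1), mirroring the
    combined inputs described for this step."""
    return np.concatenate([artery_hist, vein_hist, [age / 100.0, float(sex)]])

def predict_diabetes_probability(weights, bias, features):
    """A stand-in for the trained mathematical model: logistic regression
    mapping the feature vector to a probability in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(weights @ features + bias)))

rng = np.random.default_rng(0)
weights = rng.normal(size=2 * N_BINS + 2)  # would come from prior training
bias = 0.0
artery_hist = rng.random(N_BINS)           # stand-ins for histograms 52A, 52B
vein_hist = rng.random(N_BINS)

features = make_feature_vector(artery_hist, vein_hist, age=55, sex=1)
p_diabetes = predict_diabetes_probability(weights, bias, features)
print(0.0 <= p_diabetes <= 1.0)  # a valid probability
```

In practice the weights would be obtained by training on the labeled data sets described above rather than drawn at random.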
 The techniques disclosed in the above embodiments are merely examples. Accordingly, the techniques exemplified in the above embodiments may be modified, and only some of the plurality of processes exemplified in the above embodiments may be executed. For example, in the second embodiment, only one or two of the target blood vessel feature maps 51A and 51B, the blood vessel distribution histograms 52A and 52B, and the difference histogram may be output as the blood vessel distribution characteristic information. Further, in the second embodiment, only the process of generating the blood vessel distribution histograms 52A and 52B (S11 to S13 and S15) and the process of acquiring specific disease information using the mathematical model (S17) may be executed.
 The process of acquiring the fundus image in S1 of FIG. 3 is an example of the "fundus image acquisition step" of the first aspect. The process of acquiring the blood vessel image in S2 of FIG. 3 is an example of the "blood vessel image acquisition step" of the first aspect. The process of generating the retinal blood vessel distribution probability map in S4 of FIG. 3 is an example of the "probability map generation step" of the first aspect. The process of specifying the position of the optic disc in S3 of FIG. 3 is an example of the "disc position specifying step" of the first aspect. The process of acquiring the target blood vessel image in S5 and S6 of FIG. 3 is an example of the "target blood vessel image acquisition step" of the first aspect. The process of generating the target blood vessel feature map in S5 of FIG. 3 is an example of the "feature map generation step" of the first aspect. The process of generating the blood vessel distribution histogram in S6 of FIG. 3 is an example of the "blood vessel distribution histogram generation step" of the first aspect. The process of generating the aggregate histograms in S7 to S9 of FIG. 3 is an example of the "aggregate histogram generation step" of the first aspect. The process of generating the age-specific aggregate histogram in S8 of FIG. 3 is an example of the "age-specific aggregate histogram generation step" of the first aspect. The process of generating the aggregate histogram for each population in S9 of FIG. 3 is an example of the "population-specific aggregate histogram generation step" of the first aspect. The process of generating the difference histogram in S10 of FIG. 3 is an example of the "difference histogram generation step" of the first aspect. The process of acquiring the blood vessel distribution characteristic information in S5 and S6 of FIG. 3 is an example of the "characteristic information generation step." The process of constructing the mathematical model in S11 of FIG. 3 is an example of the "mathematical model construction step."
 The process of acquiring the fundus image to be analyzed in S11 of FIG. 11 is an example of the "fundus image acquisition step" of the second aspect. The process of acquiring the target blood vessel image in S12 of FIG. 11 is an example of the "blood vessel image acquisition step" of the second aspect. The process of generating the blood vessel distribution characteristic information in S14 to S16 of FIG. 11 is an example of the "characteristic information generation step" of the second aspect. The process of acquiring the disease information (in the above embodiment, the probability of having diabetes) in S17 of FIG. 11 is an example of the "disease information acquisition step."

Claims (19)

  1.  A fundus image processing device that processes a fundus image of an eye to be examined, wherein a control unit of the fundus image processing device executes:
     a fundus image acquisition step of acquiring a plurality of fundus images, each captured by a fundus image capturing device and including blood vessels in the fundus of the eye to be examined;
     a blood vessel image acquisition step of acquiring, for each of the acquired fundus images, a blood vessel image showing at least one of the arteries and veins included in that fundus image; and
     a probability map generation step of generating a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding together the plurality of blood vessel images acquired for the plurality of fundus images in a mutually aligned state.
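The probability map generation step of claim 1 can be sketched, under the simplifying assumptions that the blood vessel images are equal-size binary masks and are already mutually aligned, as a per-pixel average:

```python
import numpy as np

def retinal_vessel_probability_map(vessel_masks):
    """Add aligned binary vessel masks and normalize by the number of
    images, so each pixel holds the fraction of images in which a vessel
    was present, i.e. an empirical existence probability."""
    stack = np.stack(vessel_masks).astype(float)  # shape (n_images, H, W)
    return stack.mean(axis=0)

# Three toy 4x4 masks: a pixel marked in 2 of the 3 masks gets probability 2/3.
m1 = np.zeros((4, 4)); m1[1, :] = 1
m2 = np.zeros((4, 4)); m2[1, :] = 1
m3 = np.zeros((4, 4)); m3[2, :] = 1
pmap = retinal_vessel_probability_map([m1, m2, m3])
print(pmap[1, 0], pmap[2, 0], pmap[0, 0])  # approximately 2/3, 1/3, 0.0
```

Summing raw counts instead of averaging would give the same map up to a constant factor; the mean form keeps values interpretable as probabilities in [0, 1].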
  2.  The fundus image processing device according to claim 1, wherein the control unit:
     in the blood vessel image acquisition step, acquires an artery blood vessel image and a vein blood vessel image included in each of the fundus images; and
     in the probability map generation step, generates the retinal blood vessel distribution probability map for arteries by adding together the plurality of artery blood vessel images, and generates the retinal blood vessel distribution probability map for veins by adding together the plurality of vein blood vessel images.
  3.  The fundus image processing device according to claim 1, wherein the control unit:
     further executes a disc position specifying step of specifying the position of the optic disc in each blood vessel image acquired in the blood vessel image acquisition step; and
     in the probability map generation step, generates the retinal blood vessel distribution probability map by adding together the plurality of blood vessel images in a state where they are aligned with reference to the specified position of the optic disc.
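The disc-based alignment in claim 3 can be sketched as follows, assuming the optic disc position in each image has already been detected (given here as pixel coordinates) and that a simple integer translation suffices; a real implementation would also correct rotation and scale, and would pad rather than wrap at the borders:

```python
import numpy as np

def align_to_disc(mask, disc_yx, ref_yx):
    """Translate a binary vessel mask so that its detected optic-disc
    position lands on a common reference point. np.roll wraps around at
    the image border, which is acceptable only for this sketch."""
    dy = ref_yx[0] - disc_yx[0]
    dx = ref_yx[1] - disc_yx[1]
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

mask = np.zeros((5, 5))
mask[0, 0] = 1  # a vessel pixel located at the detected disc position
aligned = align_to_disc(mask, disc_yx=(0, 0), ref_yx=(2, 2))
print(int(aligned[2, 2]))  # the vessel pixel now sits at the reference point
```

After this translation, the per-pixel summation of the probability map generation step operates on mutually registered masks.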
  4.  The fundus image processing device according to claim 1, wherein the control unit further executes:
     a target blood vessel image acquisition step of acquiring a target blood vessel image, which is the blood vessel image to be analyzed; and
     a feature map generation step of generating, as a target blood vessel feature map, the map of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image.
  5.  The fundus image processing device according to claim 1, wherein the control unit further executes:
     a target blood vessel image acquisition step of acquiring a target blood vessel image, which is the blood vessel image to be analyzed; and
     a blood vessel distribution histogram generation step of generating, based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, a blood vessel distribution histogram that shows the number of pixels in the blood vessel region of the target blood vessel image binned according to the blood vessel existence probability indicated by the retinal blood vessel distribution probability map.
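The blood vessel distribution histogram of claim 5 can be sketched as binning the probability-map values sampled at the vessel pixels of the target image (the bin count is an assumption, not specified by the claim):

```python
import numpy as np

def vessel_distribution_histogram(prob_map, target_mask, n_bins=10):
    """Count the target image's vessel pixels according to the existence
    probability that the retinal vessel distribution probability map
    assigns at each pixel's location."""
    probs = prob_map[target_mask.astype(bool)]          # probabilities at vessel pixels
    counts, edges = np.histogram(probs, bins=n_bins, range=(0.0, 1.0))
    return counts, edges

prob_map = np.array([[0.05, 0.95],
                     [0.55, 0.95]])
target = np.array([[1, 1],
                   [0, 1]])  # three vessel pixels in the target image
counts, _ = vessel_distribution_histogram(prob_map, target)
print(counts.sum())  # every vessel pixel falls into exactly one bin
```

A vessel running where the population rarely has vessels contributes to low-probability bins, so the histogram shape summarizes how typical the subject's vessel course is.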
  6.  The fundus image processing device according to claim 5, wherein the control unit further executes an aggregate histogram generation step of generating an aggregate histogram by aggregating the plurality of blood vessel distribution histograms generated for a plurality of the target blood vessel images.
  7.  The fundus image processing device according to claim 6, wherein, in the aggregate histogram generation step, the control unit aggregates the plurality of blood vessel distribution histograms after correcting for the influence of changes in the distribution of blood vessel existence probability due to the age of each subject to be analyzed.
  8.  The fundus image processing device according to claim 6, wherein the control unit further executes a difference histogram generation step of generating a difference histogram between the blood vessel distribution histogram generated for an individual target blood vessel image and the aggregate histogram generated in the aggregate histogram generation step.
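The difference histogram of claim 8 can be sketched as subtracting the aggregate histogram from an individual's histogram; the normalization to relative frequencies is an assumption added here so that subjects with different total vessel-pixel counts remain comparable:

```python
import numpy as np

def difference_histogram(individual_counts, aggregate_counts):
    """Normalize both histograms to relative frequencies and subtract,
    so positive bins mark probability ranges where the individual has
    proportionally more vessel pixels than the reference population."""
    ind = individual_counts / individual_counts.sum()
    agg = aggregate_counts / aggregate_counts.sum()
    return ind - agg

individual = np.array([2.0, 3.0, 5.0])
aggregate = np.array([20.0, 30.0, 50.0])  # same relative shape -> zero difference
diff = difference_histogram(individual, aggregate)
print(diff)  # all zeros: this subject matches the population exactly
```

A nonzero result highlights the probability bins in which the subject's vessel distribution departs from the aggregated reference.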
  9.  The fundus image processing device according to claim 5, wherein the control unit further executes an age-specific aggregate histogram generation step of generating age-specific aggregate histograms by aggregating the plurality of blood vessel distribution histograms generated for the plurality of target blood vessel images for each age group to which the age of the subject to be analyzed belongs.
  10.  The fundus image processing device according to claim 5, wherein the control unit further executes:
     a population-specific aggregate histogram generation step of generating, for each of a plurality of populations to which the target blood vessel images belong, an aggregate histogram for that population by aggregating the plurality of blood vessel distribution histograms generated for the plurality of target blood vessel images belonging to that population; and
     a difference histogram generation step of generating a difference histogram between the plurality of aggregate histograms generated for the respective populations.
  11.  The fundus image processing device according to claim 10, wherein the plurality of populations for which the aggregate histograms are generated include a population of diabetic patients who have not developed diabetic retinopathy.
  12.  The fundus image processing device according to claim 1, wherein the control unit further executes:
     a characteristic information generation step of generating blood vessel distribution characteristic information, which shows the characteristics of the blood vessel distribution of a specific blood vessel image, by processing the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the specific blood vessel image; and
     a mathematical model construction step of constructing a mathematical model that outputs information regarding a specific disease by training it with a machine learning algorithm, using a plurality of training data sets in which the blood vessel distribution characteristic information serves as input training data and information indicating the presence or absence of the specific disease in the subject from whom the input training data was acquired serves as output training data.
  13.  A fundus image processing device that processes a fundus image of an eye to be examined, wherein a control unit of the fundus image processing device executes:
     a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device and including blood vessels in the fundus of the eye to be examined;
     a blood vessel image acquisition step of acquiring a target blood vessel image, which is a blood vessel image showing at least one of the arteries and veins included in the acquired fundus image; and
     a characteristic information generation step of generating blood vessel distribution characteristic information, which shows the characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region, corresponding to the blood vessel region of the target blood vessel image, of a retinal blood vessel distribution probability map that shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined and has been generated by adding together a plurality of blood vessel images in a mutually aligned state.
  14.  The fundus image processing device according to claim 13, wherein, in the characteristic information generation step, the control unit generates, as a target blood vessel feature map, the map of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image.
  15.  The fundus image processing device according to claim 13, wherein, in the characteristic information generation step, the control unit generates, based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, a blood vessel distribution histogram that shows the number of pixels in the blood vessel region of the target blood vessel image binned according to the blood vessel existence probability indicated by the retinal blood vessel distribution probability map.
  16.  The fundus image processing device according to claim 13, wherein, in the characteristic information generation step, the control unit:
     generates, based on the information of the region of the retinal blood vessel distribution probability map that corresponds to the blood vessel region of the target blood vessel image, a blood vessel distribution histogram that shows the number of pixels in the blood vessel region of the target blood vessel image binned according to the blood vessel existence probability indicated by the retinal blood vessel distribution probability map; and
     generates a difference histogram, which is the difference between an aggregate histogram obtained by aggregating the plurality of blood vessel distribution histograms generated for each of a plurality of blood vessel images and the blood vessel distribution histogram generated for the target blood vessel image.
  17.  The fundus image processing device according to claim 13, wherein a mathematical model that outputs information regarding a specific disease has been constructed by training with a machine learning algorithm using a plurality of training data sets in which the blood vessel distribution characteristic information serves as input training data and information indicating the presence or absence of the specific disease in the subject from whom the input training data was acquired serves as output training data, and
     the control unit further executes a disease information acquisition step of acquiring the information regarding the specific disease output by the mathematical model by inputting the blood vessel distribution characteristic information generated in the characteristic information generation step into the mathematical model.
  18.  A fundus image processing program executed by a fundus image processing device that processes a fundus image of an eye to be examined, the program, when executed by a control unit of the fundus image processing device, causing the fundus image processing device to execute:
     a fundus image acquisition step of acquiring a plurality of fundus images, each captured by a fundus image capturing device and including blood vessels in the fundus of the eye to be examined;
     a blood vessel image acquisition step of acquiring, for each of the acquired fundus images, a blood vessel image showing at least one of the arteries and veins included in that fundus image; and
     a probability map generation step of generating a retinal blood vessel distribution probability map, which shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined, by adding together the plurality of blood vessel images acquired for the plurality of fundus images in a mutually aligned state.
  19.  A fundus image processing program executed by a fundus image processing device that processes a fundus image of an eye to be examined, the program, when executed by a control unit of the fundus image processing device, causing the fundus image processing device to execute:
     a fundus image acquisition step of acquiring a fundus image to be analyzed, captured by a fundus image capturing device and including blood vessels in the fundus of the eye to be examined;
     a blood vessel image acquisition step of acquiring a target blood vessel image, which is a blood vessel image showing at least one of the arteries and veins included in the acquired fundus image; and
     a characteristic information generation step of generating blood vessel distribution characteristic information, which shows the characteristics of the blood vessel distribution of the target blood vessel image, by processing the information of the region, corresponding to the blood vessel region of the target blood vessel image, of a retinal blood vessel distribution probability map that shows the distribution of the existence probability of blood vessels in the retina of the eye to be examined and has been generated by adding together a plurality of blood vessel images in a mutually aligned state.
PCT/JP2023/031715 2022-09-16 2023-08-31 Ocular fundus image processing device and ocular fundus image processing program WO2024057942A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2022-148527 2022-09-16
JP2022148527 2022-09-16
JP2023022789 2023-02-16
JP2023-022789 2023-02-16

Publications (1)

Publication Number Publication Date
WO2024057942A1 (en) 2024-03-21

Family

ID=90275099

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/031715 WO2024057942A1 (en) 2022-09-16 2023-08-31 Ocular fundus image processing device and ocular fundus image processing program

Country Status (1)

Country Link
WO (1) WO2024057942A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100061601A1 (en) * 2008-04-25 2010-03-11 Michael Abramoff Optimal registration of multiple deformed images using a physical model of the imaging distortion
JP2017077413A (en) * 2015-10-21 2017-04-27 株式会社ニデック Ophthalmic analysis apparatus and ophthalmic analysis program
JP2019513449A (en) * 2016-03-31 2019-05-30 バイオ−ツリー システムズ, インコーポレイテッド Method of obtaining 3D retinal blood vessel morphology from optical coherence tomography images and method of analyzing them
JP2019103762A (en) * 2017-12-14 2019-06-27 株式会社ニデック Oct data analysis device, and oct data analysis program
WO2019203311A1 (en) * 2018-04-18 2019-10-24 株式会社ニコン Image processing method, program, and image processing device
US20200178794A1 (en) * 2017-06-20 2020-06-11 University Of Louisville Research Foundation, Inc. Segmentation of retinal blood vessels in optical coherence tomography angiography images



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23865298

Country of ref document: EP

Kind code of ref document: A1