EP3509475A1 - Retinal imager device and system with edge processing - Google Patents
Info
- Publication number
- EP3509475A1 (application EP17849536.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- retinal
- image data
- output data
- retinal image
- fundoscope
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/13—Ophthalmic microscopes
- A61B3/132—Ophthalmic microscopes in binocular arrangement
Definitions
- Certain embodiments of the invention relate generally to a retinal imager device and system with edge processing.
- a machine-vision enabled fundoscope for retinal analysis includes, but is not limited to, an optical lens arrangement; an image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data; computer readable memory; at least one communication interface; and an image processor communicably linked to the image sensor, the computer readable memory, and the at least one communication interface, the image processor programmed to execute operations including at least: obtain the retinal image data from the image sensor; generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data; and transmit the output data via the at least one communication interface.
- a process executed by a computer processor component of a fundoscope that includes an optical lens arrangement, an image sensor configured to convert detected light to retinal image data, and at least one communication interface, includes, but is not limited to, obtain the retinal image data from the image sensor; generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data; and transmit the output data via the at least one communication interface.
- a fundoscope includes, but is not limited to, means for obtaining retinal image data from an image sensor; means for generating output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data; and means for transmitting the output data via the at least one communication interface.
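The claimed obtain/analyze/transmit sequence can be illustrated with a minimal sketch. All names and the placeholder analysis below are assumptions for illustration; an actual device would run retinal pathology analysis rather than a simple intensity summary:

```python
import zlib

def analyze_retina(image_bytes):
    """Hypothetical stand-in for the analysis step; a real fundoscope
    would run vessel segmentation, pathology detection, etc. Here we
    only summarize the frame to illustrate the data reduction."""
    return {
        "num_bytes": len(image_bytes),
        "mean_intensity": sum(image_bytes) / len(image_bytes),
    }

def edge_process(image_bytes):
    """Obtain -> analyze -> produce compact output, as in the claims."""
    findings = analyze_retina(image_bytes)
    # Serialize and compress the findings; the payload handed to the
    # communication interface is far smaller than the raw image data,
    # i.e., it requires less bandwidth for transmission.
    payload = zlib.compress(repr(findings).encode())
    assert len(payload) < len(image_bytes)
    return payload

raw = bytes(range(256)) * 1000   # stand-in for full-resolution sensor data
out = edge_process(raw)
print(len(raw), len(out))
```

The essential property is only that the transmitted payload is a compact derivative of the full-resolution data, not the data itself.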
- FIGURE 1 is a perspective view of a retinal imager device with edge processing, in accordance with an embodiment
- FIGURE 2 is a side view of an arrangement usable within a retinal imager device with edge processing, in accordance with an embodiment
- FIGURE 3A is a zoom side view of anatomical structures of an eye positioned with a retinal imager device with edge processing, in accordance with an embodiment
- FIGURE 3B is an illustration of non-uniform illumination of the retina, in accordance with an embodiment
- FIGURE 4 is a component diagram of a retinal imager device with edge processing, in accordance with an embodiment.
- FIGURES 5-33 are block diagrams of processes implemented using a retinal imager device with edge processing, in accordance with various embodiments.
- Embodiments disclosed herein relate generally to an imaging device and system with edge processing. Specific details of certain embodiments are set forth in the following description and in FIGURES 1-33 to provide a thorough understanding of such embodiments.
- FIGURE 1 is a perspective view of a retinal imager device 100 or fundoscope with edge processing, in accordance with an embodiment.
- the retinal imager device 100 provides machine vision for healthcare that enables minimally obtrusive retinal monitoring with extremely high visual acuity.
- the retinal imager device 100 can perform rapid imaging of the retina with or without doctor or nurse supervision as and when needed and without requiring pupil dilation.
- Use contexts can include home, public, remote, health clinic, hospital, care facilities, outer space/space flights, or the like.
- the retinal imager device 100 can be usable/deployable on the International Space Station, Orion, or other crew spacecraft.
- One particular embodiment includes a standalone compact self-contained device 100 including a housing 102, eye pieces 104, a mount bracket 106, a visible light emitting diode 118 (e.g., red, white, etc.), and/or an infrared light emitting diode 116 and an infrared imager 112 for enabling manual or automated retinal focus.
- an optical lens arrangement 120 includes an image sensor 114 positioned with the optical lens arrangement 120 and configured to convert detected light to retinal image data; computer readable memory; at least one communication interface; and an image processor communicably linked to the image sensor 114, the computer readable memory, and the at least one communication interface, the image processor programmed to execute operations including at least: obtain the retinal image data from the image sensor 114; generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data; and transmit the output data via the at least one communication interface.
- the mount bracket 106 can be coupled or removably coupled to a support structure, such as a desk, table, wall, or other platform.
- the mount bracket 106 includes a z-axis track 108 and a y-axis track 110.
- the z-axis track 108 enables the housing 102 and the eyepieces 104 to move relative to a support structure along a z-axis (e.g., forward and aft).
- the y-axis track 110 enables the housing 102 to move relative to a support structure and relative to the eyepieces 104 along a y-axis (e.g., left and right).
- the housing 102 can move left and right between the eyepieces 104 to sample left and/or right eyes of a user.
- the housing 102 can further move forward and aft for user comfort or other adjustment.
- the retinal imager device 100 includes one or more of the following properties or characteristics: approximately 10 mm eye-relief, polarizing optics to reduce stray light, operation over 450 nm to 650 nm, a spot size of less than approximately seven microns at the imager, annular illumination to mitigate stray light, adjustable focus for better than -4D to 4D accommodation, and/or an infrared channel with an approximately 850 nm light source and infrared imager for imaging approximately 10 mm of an eye for boresight alignment of the visible channel.
- the retinal imager device 100 can assume a variety of forms and shapes and is not limited to the form illustrated in FIGURE 1.
- the retinal imager device 100 can be incorporated into a wall, table, desk, kiosk, computer, smartphone, laptop, virtual reality headset, augmented reality headset, handheld device, pole mounted device, or other structure that may integrate, include, expose, or conceal part, most, or all of the structure depicted in FIGURE 1.
- the housing 102, the eyepieces 104, and the mount bracket 106 may be integrated into a personal health kiosk that conceals all but the eyepieces 104 to enable positioning of left and right eyes of a user with respect to the retinal imager device 100.
- the retinal imager device 100 may omit the mount bracket 106 in favor of a non-movable mount bracket, a mount bracket that moves and pivots in additional directions (e.g., 360-degree rotation, tilt, y-axis movement, etc.), or in favor of integration with a structure (e.g., a special purpose table that includes the retinal imager device 100 integrated thereon).
- the housing 102 can include two housings with redundant components for each of the left and right portions of the eyepieces 104.
- the eyepieces 104 can include a single eye piece that is shared for left and right eyes of a user.
- the retinal imager device 100 is incorporated into or distributed between an eyebox, a laptop, monitor, phone, tablet, or computer that includes an interrogation signal device (e.g., tunable laser or infrared emitting device) and that includes a camera, which may be used to capture retinal imagery and/or detect eye position, rotation, pupil diameter, or vergence.
- the camera can comprise a co-aligned illumination device (e.g., red or infrared laser) and a plurality of high resolution cameras (e.g., 2-3).
- the display of the laptop or other device can auto-dim during imaging and output a visual indication or spot of focus for looking or staring at while the camera captures imagery of the retina or retinas of a user.
- An image processor coupled to the camera or cameras enables real-time on-board video acquisition, cropping, resizing, stitching, or other disclosed processing of imagery.
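The on-board cropping and resizing mentioned above can be sketched as extracting a region of interest and block-averaging it down. This is an illustrative sketch only; the function name and parameters are assumptions, not the patented implementation:

```python
import numpy as np

def crop_and_resize(frame, center, size, factor):
    """Crop a square region of interest around `center`, then downsample
    it by an integer `factor` using simple block averaging."""
    cy, cx = center
    half = size // 2
    roi = frame[cy - half:cy + half, cx - half:cx + half]
    h, w = roi.shape[0] // factor, roi.shape[1] // factor
    # Reshape into (h, factor, w, factor) blocks and average each block.
    return roi[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)  # synthetic frame
small = crop_and_resize(frame, center=(32, 32), size=32, factor=4)
print(small.shape)   # (8, 8)
```

Performing this reduction on-board keeps only a small fraction of the pixels in flight to the display or network.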
- FIGURE 2 is a side view of an arrangement 200 usable within a retinal imager device 100 or fundoscope with edge processing, in accordance with an embodiment.
- the arrangement 200 includes an imaging lens arrangement 202 aligned in a first axis, an illumination lens arrangement 204 aligned in a second axis that is perpendicular to the first axis, at least one polarizing splitter/combiner 206, an illumination LED 208 configured to emit light 209 for imaging, an image sensor 222 configured to convert detected light directed from a splitter 207 to retinal image data, and one or more masks 210 configured to obscure at least some of the light 209 of the illumination LED 208 prior to passing through the illumination lens arrangement 204, wherein the at least one polarizing splitter/combiner 206 is configured to redirect the light 209 passing through the illumination lens arrangement 204 aligned in the second axis into the imaging optical lens arrangement 202 aligned in the first axis to illuminate at least one portion of the retina 214.
- the imaging lens arrangement 202 is approximately 267 mm in length, and an eye of an individual is positionable approximately 13 mm from an end of the imaging lens arrangement 202.
- the arrangement 200 further includes an infrared LED 216 configured to emit infrared light 218 for positioning and/or focus determinations, a combiner 205, an infrared image sensor 226, and one or more infrared masks 220 configured to obscure at least some of the infrared light 218 of the infrared LED 216 prior to passing through the illumination lens arrangement 204, wherein the at least one polarizing splitter/combiner 206 is configured to redirect the infrared light 218 passing through the illumination lens arrangement 204 aligned in the second axis into the imaging optical lens arrangement 202 aligned in the first axis to illuminate at least one portion of the retina 214.
- the arrangement 200 further includes a microdisplay or is couplable to a computer, smartphone, laptop, or other personal device.
- the arrangement 200 operates as follows: the infrared LED 216 emits infrared light 218, which passes through one or more infrared masks 220, whereby at least some of the infrared light 218 is controllably blocked from further transmission.
- the infrared light 218 that passes by the one or more infrared masks 220 is directed into the illumination lens arrangement 204 via the combiner 205.
- the infrared light 218 then is directed into the imaging lens arrangement 202 via the polarizing splitter/combiner 206.
- the infrared light 218 then passes through the scattering elements of the eye 212 (e.g., of a person) before being reflected by the retina 214.
- the reflected infrared light 218 then returns through the imaging lens arrangement 202 and is detected by the infrared imager 226.
- the infrared light 218 detected by the infrared imager 226 is used to determine whether the retina is centered and/or focused.
- the illumination LED 208 then emits light 209 for imaging that passes through the one or more masks 210 that block at least some of the light 209.
- the light 209 that passes through the one or more masks 210 then passes through the illumination lens arrangement 204 where it is directed into the imaging lens arrangement 202 via the polarizing splitter/combiner 206.
- the light 209 then passes through the scattering elements of the eye 212 before being reflected by the retina 214.
- the reflected light 209 then passes back through the imaging lens arrangement 202 and is directed by the splitter 207 to the image sensor 222.
- Retinal image data captured by the image sensor 222 can be stored, validated, and/or processed as disclosed herein. This process can be repeated as needed or requested, such as for both eyes of a person or for multiple individuals.
- the arrangement 200 can be modified or substituted in whole or in part with one or more different arrangements to capture high resolution retinal imagery. For instance, any of the lenses, combination of lenses, position of lenses, shape of lenses, or the like may be modified as desired for a particular application. Also, the arrangement 200 may include at least one additional imaging lens arrangement; and at least one additional image sensor positioned with the at least one additional imaging lens arrangement and configured to convert detected light to additional retinal image data. In this embodiment, the imaging lens arrangement 202 and the at least one additional imaging lens arrangement can have at least partially overlapping fields of view for capturing segments of a particular retina.
- the imaging lens arrangement 202 and the at least one additional imaging lens arrangement may have substantially parallel fields of view for capturing segments of a particular retina or for simultaneous capture of image data associated with a second retina (e.g., both eyes sampled concurrently).
- the infrared LED 216 may be co-located with the illumination LED 208, the infrared LED 216 may be swapped in position with the illumination LED 208, or the infrared LED 216 and the illumination LED 208 may be positioned in alignment or differently with respect to the imaging lens arrangement 202.
- the image sensor 222 and the infrared imager 226 may be co-located and/or have their respective positions swapped or changed.
- the arrangement 200 can also be adapted or used for non-retinal, facial, body, eye, or other imagery purposes, such as for any other scientific, research, investigative, or learning purpose.
- FIGURE 3A is a zoom side view of anatomical structures of an eye 300 positioned with a retinal imager device with edge processing, in accordance with an embodiment.
- the eye 300 can be a left or right eye of an individual and is positioned with the arrangement 200.
- the eye 300 includes the cornea 302, the pupil 304, the lens 306, and the retina 214.
- the light rays of FIGURE 3A are simplified for illustration and clarity, but in essence the illumination light 209 from the illumination LED 208 enters and passes through the cornea 302, the pupil 304, and the lens 306 before being reflected by the retina 214 as imaging light 308.
- the illumination light 209 provides annular illumination input to the retina 214.
- the imaging light 308 is reflected back through the lens 306, the pupil 304, and the cornea 302 for capture by the image sensor 222 as retinal image data with a field of view of approximately forty-two degrees. Due to the positioning of the one or more masks 210, the illumination light 209 and the imaging light 308 have paths that do not intersect or minimally intersect within the scattering elements of the eye (e.g. the lens 306 and the cornea 302).
- the one or more masks 210 reduce stray light, but can result in non-uniform illumination of the retina that is compensated using one or more compensation program operations (FIGURE 3B).
- the one or more masks 210 (and/or the one or more infrared masks 220) can be moved to adjust the illumination light 209 distribution on the retina 214.
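One plausible form of such a compensation program operation is flat-field correction: dividing each pixel by a calibrated illumination profile and rescaling. This is a minimal sketch under that assumption, not the patented compensation method:

```python
import numpy as np

def flat_field_correct(frame, illumination_profile, eps=1e-6):
    """Compensate non-uniform retinal illumination by dividing each pixel
    by a calibrated illumination profile, then normalizing to [0, 1]."""
    corrected = frame / (illumination_profile + eps)
    return corrected / corrected.max()

# Synthetic example: a uniform retina imaged under a radially falling profile.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
profile = 1.0 - 0.5 * (r / r.max())   # brighter in the center
frame = 0.8 * profile                 # uniform scene modulated by the profile
corrected = flat_field_correct(frame, profile)
print(corrected.std())                # near zero: uniformity restored
```

In practice the profile would be measured once for a given mask position and reused across captures.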
- FIGURE 4 is a component diagram 400 of a retinal imager device 402 or fundoscope with edge processing, in accordance with an embodiment.
- the machine-vision enabled fundoscope 402 for retinal analysis includes, but is not limited to, an optical lens arrangement 404, an image sensor 408 positioned with the optical lens arrangement 404 and configured to convert detected light to retinal image data, computer readable memory 406, at least one communication interface 410, and an image processor 412 communicably linked to the image sensor 408, the computer readable memory 406, and the at least one communication interface 410, the image processor 412 programmed to execute operations including at least: obtain the retinal image data from the image sensor at 414, generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 416, and transmit the output data via the at least one communication interface at 418.
- the retinal imager device 402 or fundoscope can assume the form of the retinal imager device 100 or a different form.
- the optical lens arrangement 404 is arranged to focus light onto the image sensor 408 as discussed herein.
- the image sensor 408 is coupled via a high bandwidth link to the image processor 412.
- the image processor 412 is then coupled to the computer memory 406 and to the communication interface 410 for communication via a communication link having low bandwidth capability.
- the optical lens arrangement 404 can include any of the optical arrangements discussed herein, such as arrangement 200, illumination lens arrangement 204, and/or imaging lens arrangement 202, or another different optical arrangement, and is directed to a particular field of view associated with a human retina.
- the optical lens arrangement 404 can be stationary and/or movable, rotatable, pivotable, or slidable.
- the image sensor 408 includes a high pixel density imager enabling ultrahigh resolution retinal imaging.
- the image sensor 408 can include at least an eighteen- or twenty-megapixel sensor that provides around twenty gigabytes per second of image data, ten thousand pixels per square degree, and a resolution of at least approximately twenty microns.
- One particular example of the image sensor 408 is the SONY IMX230, which includes 5408 H x 4412 V pixels of 1.12 microns.
- the image sensor 408 is communicably linked with the image processor 412 via a high bandwidth communication link.
- the relatively high bandwidth communication link enables the image processor 412 to have real-time or near-real-time access to the ultra-high resolution imagery output by the image sensor 408 in the tens of Gbps range.
- An example of the high bandwidth communication link includes a MIPI-CSI to LEOPARD/INTRINSYC adaptor that provides data and/or power between the image processor 412 and the image sensor 408.
- the image processor 412 is communicably linked with the image sensor 408. Due to the high bandwidth communication link, the image processor 412 has full access to every pixel of the image sensor 408 in real-time or near-real-time. Using this access, the image processor 412 performs one or more operations on the full resolution retinal imagery prior to communication of any data via the communication interface 410 (e.g., "edge processing").
- Example operations for functions executed by the image processor 412 include, but are not limited to, obtain the retinal image data from the image sensor at 414, generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 416, and transmit the output data via the at least one communication interface at 418.
- image processor 412 includes a cellphone-class SOM, such as SNAPDRAGON SOM.
- image processor 412 can also be any general purpose computer processor, such as an INTEL or ATMEL computer processor, programmed or configured to perform special purpose operations as disclosed herein.
- the fundoscope 402 can include a plurality of the optical lens arrangement 404/image sensor 408/image processor 412 combinations linked to a hub processor via a backplane/hub circuit to leverage and distribute processing load.
- Each of the optical lens arrangements 404 can be directed to an overlapping field of view or a partial segment of the retina, such as to increase an overall resolution of the retinal image data.
- the communication interface 410 provides a relatively low bandwidth communication interface between the image processor 412 and a client, device, server, or cloud destination via a communication link on the order of Mbps. While the communication interface 410 may provide the highest wireless bandwidth available or feasible, such bandwidth is relatively low as compared to the high bandwidth communication between the image sensor 408 and the image processor 412 within the fundoscope 402. Thus, the image processor 412 does not necessarily transmit all available pixel data via the wireless communication interface 410, but instead uses edge processing on-board the fundoscope 402 to enable collection of the very high resolution retinal imagery and selection/reduction of that retinal imagery for transmission (or non-transmission) via the communication interface 410.
- the communication interface 410 can, in certain embodiments, be substituted with a wire-based network interface, such as ethernet, USB, and/or HDMI.
- One particular example of the communication interface 410 includes a cellular, WIFI, BLUETOOTH, satellite network, and/or websocket interface enabling communication over the internet with a client running JAVASCRIPT, HTML5, CANVAS GPU, and WEBGL.
- an HTML5 client with a zoom viewer application can connect to an ANDROID server video/camera application of the fundoscope 402 via WIFI to stream retinal imagery at approximately 720p.
- the computer memory 406 can include non-transitory computer storage memory and/or transitory computer memory.
- the computer memory 406 can store program instructions for configuring the image processor 412 and/or store raw retinal image data, processed retinal image data, derived alphanumeric text or binary data, or other similar information.
- Example operations and/or characteristics of the fundoscope 402 can include one or more of the following: enable user self-imaging in approximately twenty seconds to three minutes, enable manual or automated capture of retinal images without pupil dilation (non-mydriatic), provide automatic alignment, capture a wide-angle retinal image of approximately forty or more degrees, enable adjustable focus, enable multiple image captures of high resolution retinal imagery per session, enable display/review of captured retinal imagery, transmit high resolution retinal imagery in real-time, in batch, or at intervals using relatively low bandwidth communication links (e.g., 1-2 Mbps, such as from satellite to ground station), enable self-testing, perform automated image comparison or analysis of images, detect differences in retinal images such as between a current image and a baseline image to detect a health issue, reduce findings to text output, perform machine vision or on-board/in-situ/edge processing, enable remote viewing of high resolution imagery using standard relatively low bandwidth communication links (e.g., wireless or internet speeds), enable monitoring of patients remotely and as frequently as needed, detect diabetic retinopathy, macular degeneration, cardiovascular disease, glaucoma, malarial retinopathy, or Alzheimer's disease via on-site/on-board/edge processing, transmit a video preview of a zoom-able window to a client computer or device to enable browsing of high resolution retinal imagery, enable transmission of full resolution imagery to a client device or computer for the field of view and zoom level requested, and/or enable machine vision applications or third-party applications.
- FIGURE 5 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- process 500 is executed by a computer processor component 412 of a fundoscope 402 that includes an optical lens arrangement 404, an image sensor 408 configured to convert detected light to retinal image data, and at least one communication interface 410, the process including at least obtain the retinal image data from the image sensor at 502, generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504, and transmit the output data via the at least one communication interface at 506.
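The three steps of process 500 can be sketched as a minimal acquire/reduce/transmit loop. The callables and byte-string frames below are hypothetical stand-ins for the sensor readout, on-board analysis, and communication interface:

```python
def edge_process(read_sensor, analyze, transmit):
    """Sketch of steps 502/504/506: acquire full-resolution imagery,
    reduce it on-board, then send only the reduced output."""
    retinal_image = read_sensor()        # 502: obtain retinal image data
    output = analyze(retinal_image)      # 504: generate smaller output data
    # The output must require less bandwidth than the raw imagery.
    assert len(output) < len(retinal_image)
    transmit(output)                     # 506: send via the interface
    return output

sent = []
result = edge_process(lambda: b"\x00" * 1_000_000,  # fake 1 MB raw frame
                      lambda img: b"OK",            # fake analysis result
                      sent.append)
```

The point of the assertion is the bandwidth asymmetry: the link carries only what survives the analysis step, never the raw sensor stream.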
- the processor 412 can obtain ultra-high resolution retinal imagery from the image sensor 408 and can select a wide field of view and low zoom of the retinal imagery. Due to the very high resolution of the retinal image data, the processor 412 can decimate pixels within the selected field of view to reduce the image data to a still relatively high-resolution for transmission to a client device via the communication interface 410. The pixel decimation results in lower bandwidth requirements for transmission, but the transmitted retinal image data may still meet or exceed the resolution capabilities of a display screen of the client device.
- the processor 412 can obtain ultra-high resolution retinal imagery from the image sensor 408 and can select a narrow field of view and high zoom of the retinal imagery. Due to the very high resolution of the retinal image data, the processor 412 can decimate few to no pixels within the selected field of view and decimate many to all pixels outside the selected field of view to reduce the image data and maintain a high resolution and high acuity for transmission to a client device via the communication interface 410. The selective pixel decimation results in lower bandwidth requirements for transmission, but the transmitted retinal image data provides high acuity for the portion of the selected field of view on a display screen of the client device.
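The two selection modes above (wide view with uniform decimation, narrow view with full acuity) can be sketched in a few lines. The nested-list frames and sizes are toy stand-ins for the sensor output:

```python
def decimate(image, step):
    """Keep every `step`-th pixel in both axes (uniform pixel decimation)."""
    return [row[::step] for row in image[::step]]

def crop(image, top, left, height, width):
    """Extract a region of interest at full native resolution."""
    return [row[left:left + width] for row in image[top:top + height]]

full = [[(r, c) for c in range(100)] for r in range(100)]  # toy 100x100 frame
preview = decimate(full, 4)        # wide field of view, low zoom: 25x25
roi = crop(full, 40, 40, 20, 20)   # narrow field of view, high zoom: 20x20
```

Either path shrinks the transmitted payload while keeping the resolution that the requesting display can actually render.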
- the processor 412 can obtain ultra-high resolution retinal imagery from the image sensor 408 and compare the obtained retinal imagery to stored historical or baseline retinal imagery to detect one or more pathologies. In an event no pathologies are detected, the processor 412 can transmit no image data or, in certain embodiments, transmit a binary or alphanumeric text indication of a result of the analysis. The load on the communication interface 410 can thereby be reduced by avoiding image data transmission or transmitting data that requires only a few bytes per second.
- the processor 412 can obtain ultra-high resolution retinal imagery from the image sensor 408 and compare the obtained retinal imagery to stored historical or baseline retinal imagery to detect one or more pathologies.
- the processor 412 can transmit a selected field of view or portion of the retinal image data pertaining to the pathology or, in certain embodiments, transmit a binary or alphanumeric text indication of a result of the analysis.
- the load on the communication interface 410 can thereby be reduced by tailoring the image data for transmission or by transmitting data that requires only a few bytes per second.
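A minimal sketch of this baseline comparison, where the result of on-board analysis is a few bytes rather than image data (the flat pixel lists, change threshold, and message format are illustrative assumptions):

```python
def analyze_against_baseline(current, baseline, threshold=0.05):
    """Compare current imagery to a stored baseline and return a
    few-byte result instead of transmitting pixels."""
    changed = sum(1 for a, b in zip(current, baseline) if a != b)
    fraction = changed / len(current)
    if fraction < threshold:
        return b"OK"                            # nothing detected: ~2 bytes
    return ("CHANGE:%.3f" % fraction).encode()  # flag for follow-up imagery

print(analyze_against_baseline([1, 2, 3, 4], [1, 2, 3, 4]))  # b'OK'
```

When the flag indicates a change, a subsequent request could pull only the affected field of view rather than the full frame.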
- FIGURE 6 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the obtain the retinal image data from the image sensor at 502 includes one or more of obtain the retinal image data from the image sensor positioned with the optical arrangement at 602, obtain the retinal image data from the image sensor positioned with the optical arrangement that is movable along at least one of an x, y, or z axis at 604, obtain the retinal image data from the image sensor positioned with the optical arrangement that is rotatable and/or pivotable at 606, or obtain the retinal image data from the image sensor positioned with an optical arrangement that is perpendicular to an illumination lens arrangement at 608.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with the optical arrangement 404 at 602.
- the image sensor 408 can be positioned with the optical arrangement 404 as illustrated and described with respect to FIGURES 1 and/or 2. However, the image sensor 408 can be positioned in a common axis with the optical arrangement 404, a perpendicular axis with the optical arrangement 404, an obtuse or acute axis with the optical arrangement 404, or some other position relative to the optical arrangement 404.
- the image sensor 408 can move relative to the optical arrangement 404. Alternatively, one or more lenses of the optical arrangement 404 can move relative to the image sensor 408, such as for focusing light on the image sensor 408.
- the image sensor 408 can be removable, changeable, and/or replaceable, such as to enable use of image sensors 408 having a variety of characteristics, capabilities, or resolutions.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with the optical arrangement 404 that is movable along at least one of an x, y, or z axis at 604.
- the optical arrangement 404 can move in various directions in order, for example, to accommodate a position of an eye of a user.
- the optical arrangement 404 can be moved up, back, down, forward, left, or right to be in a position where an eyepiece coincides with a position of an eye of a particular user (e.g., automatic detection of eye position and movement of the optical arrangement or housing containing the optical arrangement to move the eyepiece to the eye position).
- the optical arrangement 404 can be moved to a particular position that corresponds to an average height, location, and/or position of an eye for various individuals.
- the optical arrangement 404 can be moved manually or automatically between eyes of an individual (e.g., left and right) during a sampling session, such that the individual maintains a constant position with respect to any eyepiece or eyebox during the sampling session.
- the optical arrangement 404 can move or a housing containing the optical arrangement 404 can move.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with the optical arrangement 404 that is rotatable and/or pivotable at 606.
- the optical arrangement 404 can rotate relative to a support structure, such as a table, post, or extension to enable retinal image sampling from different positions.
- the optical arrangement 404 can move along a curve, such as to track a head shape or eye position of a particular user. This can occur during retinal image sampling, such as to obtain different angles of image data while one or more eyes of an individual remain stationary.
- the rotation, pivoting, or movement of the optical arrangement 404 can be manual or automatic, such as through use of an electromagnetic motor.
- the optical arrangement 404 can rotate, pivot, or move or a housing containing the optical arrangement 404 can rotate, pivot, or move.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with an optical arrangement 404 that is perpendicular to an illumination lens arrangement at 608.
- FIGURE 2 illustrates an illumination lens arrangement 204 that is perpendicular to an imaging lens arrangement 202, whereby the illumination lens arrangement 204 directs illumination light 209 into the imaging lens arrangement 202 using the polarizing splitter/combiner 206.
- a path of the illumination light 209 can be controlled to reduce or eliminate intersection with a path of imaging light 308 within the scattering elements of the eye 212 as depicted in FIGURE 3A.
- the image sensor 408 can alternatively be positioned with an optical arrangement 404 that is other than perpendicular to an illumination lens arrangement.
- the optical arrangement 404 can be obtuse, orthogonal, acute, or movable relative to an illumination lens arrangement. In certain circumstances, the illumination lens arrangement is omitted.
- FIGURE 7 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the obtain the retinal image data from the image sensor at 502 includes, but is not limited to, obtain the retinal image data from the image sensor positioned with an optical arrangement that minimizes or eliminates illumination/reflection intersection within scattering elements of an eye at 702, obtain the retinal image data from the image sensor positioned with an optical arrangement that includes one or more masks at 704, obtain the retinal image data from the image sensor positioned with an optical arrangement that includes one or more movable masks at 706, or obtain the retinal image data from the image sensor of at least eighteen megapixels at 708.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with an optical arrangement 404 that minimizes or eliminates illumination/reflection intersection within scattering elements of an eye at 702.
- FIGURE 3A illustrates the scattering elements 212 of the eye, including the cornea 302 and the lens 306, which focus and/or scatter incoming light against the retina 214.
- Illumination light 209 is directed along a path through the scattering elements of the eye 212 and distributed against one or more portions of the retina 214. Some of the illumination light 209 is reflected as the imaging light 308 which passes along a path back through the scattering elements of the eye 212 for detection.
- the optical arrangement 404 is configured to minimize the interaction and/or interference of the illumination light 209 and the reflected imaging light 308 within or in an area proximate to the scattering elements of the eye 212.
- the image processor 412 obtains the retinal image data from the image sensor 408 positioned with an optical arrangement 404 that includes one or more masks at 704 or obtains the retinal image data from the image sensor 408 positioned with an optical arrangement 404 that includes one or more movable masks at 706.
- FIGURE 2 illustrates the one or more masks 210 positioned proximate to the illumination LED 208. Light 209 from the illumination LED 208 passes to and is at least partially obscured by the one or more masks 210 before passing through the illumination lens arrangement 204 and into the imaging lens arrangement 202. The light 209 is then directed to the retina 214.
- the position of the one or more masks 210 therefore affects a path of the light 209 from the illumination LED 208, the location of the light 209 within the scattering elements 212 of the eye, and ultimately an area of illumination at the retina 214.
- the one or more masks 210 includes anywhere from one to three or more masks 210.
- the one or more masks 210 can be positioned at one point along a path of the light 209 or at different points sequentially along a path of the light 209.
- the one or more masks 210 can be total or partial obscuring masks, such as masks that obscure a percentage of the total light 209, masks that polarize the light 209, or masks that filter the light 209.
- the one or more masks 210 are movable, such as manually or automatically, to adjust a path of the light 209 or an area of illumination on the retina 214.
- the one or more masks 210 can be automatically moved to illuminate various portions of the retina 214 and resultant retinal image data can be stitched together to establish a comprehensive retinal image view.
- FIGURE 8 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the obtain the retinal image data from the image sensor at 502 includes one or more of obtain the retinal image data from the image sensor of at least twenty megapixels at 802, obtain the retinal image data from the image sensor of at least ten thousand pixels per square degree at 804, obtain the retinal image data as static image data from the image sensor at 806, or obtain the retinal image data as video data from the image sensor at 808.
- the image processor 412 obtains the retinal image data from the image sensor 408 of at least eighteen megapixels at 708 or twenty megapixels at 802.
- the image sensor 408 provides ultra-high resolution imagery, which can range from approximately one megapixel to around twenty megapixels to a hundred or more megapixels.
- the image sensor 408 contains the highest number of pixels technologically/commercially available. The image sensor 408 therefore enables capture of retinal image data with an extremely high level of resolution and visual acuity.
- the image processor 412 has access to the full resolution retinal imagery captured by the image sensor 408 for analysis, field of view selection, focus selection, pixel decimation, resolution reduction, static object removal, unchanged object removal, or other operation illustrated or disclosed herein.
- the image processor 412 obtains the retinal image data from the image sensor 408 of at least ten thousand pixels per square degree at 804.
- the image sensor 408 provides ultra-high resolution imagery, which can range from approximately a thousand pixels per square degree to tens of thousands of pixels per square degree.
- the image sensor 408 contains the highest number of pixels technologically/commercially available.
- the pixel density varies or is non-uniform in distribution across the image sensor 408 to provide greater resolution for certain retinal areas as compared to other retinal areas. Note that the pixel density can be measured in square inches or square centimeters or by some other metric.
- the image processor 412 obtains the retinal image data as static image data from the image sensor 408 at 806.
- the image processor 412 can obtain one or more retinal images as static image data at one or more different times, triggered by a manual indication or automatic indication such as by control from a computer program.
- the static retinal image data can be associated with an entire field of view or of a select field of view of the retina.
- the static retinal image data can include a series of images each covering a portion of the retina, with illumination and/or masks changing between each of the images.
- the static retinal image data can include a sequence of images covering overlapping fields of view, which may be used for resolution enhancement and/or stitching.
- the static retinal image data can include retinal images for left and right eyes of an individual.
- FIGURE 9 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the image processor 412 obtains the retinal image data as video data from the image sensor 408 at approximately twenty frames per second at 902.
- the frame rate of the video data can be more or less than twenty frames per second depending upon a particular application. For instance, the frame rate can be slowed to approximately one frame per second or can be increased to approximately thirty or more frames per second.
- the frame rate can be adjustable based on user input or an application control.
- multiple frames from the video data are usable to generate an enhanced resolution static image by combining pixels from the multiple frames of video data.
- the image processor 412 processes the retinal image data on-board or at-the-edge with the fundoscope 402 prior to any transmission of the image data.
- the image processor 412 has high bandwidth access to full resolution imagery captured by the image sensor 408 to perform analysis, pathology detection, imagery comparisons, selective pixel decimation, selective pixel retention, static imagery removal, or other operations discussed herein.
- the output of the image processor 412 following any full-resolution processing operations can require less bandwidth and may be more timely transmittable via the communication interface 410.
- the image processor 412 obtains the retinal image data from the image sensor 408 and from at least one additional image sensor at 908.
- the at least one additional image sensor can be associated with an additional lens arrangement, whereby each of the image sensor 408 and the at least one additional image sensor capture image data associated with different segments of the retina, with overlapping portions of the retina, or with different retinas (e.g., left and right retinas of an individual sampled substantially concurrently or sequentially).
- the at least one additional image sensor can be an infrared image sensor configured to capture infrared image data, which is usable by the image processor 412 to perform functions such as focus and eye positioning or centering while avoiding an iris constriction response.
- FIGURE 10 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the obtain the retinal image data from the image sensor at 502 includes one or more of obtain the retinal image data from the image sensor and from at least one additional image sensor associated with at least a partially overlapping field of view at 1002, obtain the retinal image data from the image sensor and from at least one additional image sensor associated with a parallel field of view at 1004, obtain the retinal image data at a resolution of at least twenty microns at 1006, or obtain the retinal image data associated with approximately a 40 degree annular field of view at 1008.
- the image processor 412 obtains the retinal image data from the image sensor 408 and from at least one additional image sensor associated with at least a partially overlapping field of view at 1002 or from at least one additional image sensor associated with a parallel field of view at 1004.
- Each of the image sensors can capture ultra-high resolution imagery, which can be independently analyzed or combined by the image processor 412. For instance, one image sensor can capture left retina image data and another image sensor can capture right retina image data.
- Independent image processors can simultaneously process the respective left and right retina image data and perform functions and operations disclosed herein, such as retinal analysis, pathology detection, change detection, pixel decimation, pixel selection, unchanged pixel removal, or other operation. Concurrent processing of the left and right retina image data can reduce the duration of overall retinal analysis and testing.
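The concurrent left/right processing described above can be sketched with a thread pool; the per-eye analysis function and toy image data are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_eye(image):
    """Placeholder per-eye analysis (e.g., change detection, decimation)."""
    return sum(image)

left_img, right_img = [1, 2, 3], [4, 5, 6]
# Process both retinas concurrently to shorten the overall session.
with ThreadPoolExecutor(max_workers=2) as pool:
    left_result, right_result = pool.map(analyze_eye, [left_img, right_img])
```

In a multi-sensor build, each worker would instead be a dedicated image processor feeding a hub processor over the backplane.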
- the processor 412 obtains the retinal image data associated with approximately a forty-degree annular field of view at 1008.
- the optical lens arrangement 404 can include the imaging lens arrangement 202 illustrated in FIGURE 2, which provides for approximately a +/- 21.7 degree field of view from center. However, different fields of view are possible with different lens arrangements, from very narrow fields of view of approximately a few degrees to very broad fields of view of more than forty degrees.
- the optical arrangement can be configured to provide an adjustable, modifiable, or selectable field of view.
- the optical arrangement 404 can be replaceable with a different optical arrangement to achieve a different field of view.
- FIGURE 11 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the obtain the retinal image data from the image sensor at 502 includes one or more of obtain the retinal image data as multiple sequentially captured images of different, adjacent, overlapping, and/or at least partially overlapping areas of a retina and stitch the multiple sequentially captured images of the retina to create an overall view at 1102 and/or obtain the retinal image data as multiple at least partially overlapping images of a retina and combine the multiple images into high resolution retinal image data at 1104.
- the image processor 412 obtains the retinal image data as multiple sequentially captured images of different, adjacent, overlapping, and/or at least partially overlapping areas of a retina and stitches the multiple sequentially captured images of the retina to create an overall view at 1102.
- the image processor 412 can obtain from the image sensor 408 retinal image data of a left-bottom quadrant, a left-top quadrant, a right-top quadrant, and a right-bottom quadrant associated with a retina, each with approximately a five percent overlap with adjacent quadrant images.
- the image processor 412 can stitch the quadrant images together using the overlapping portions for positional alignment to create an overall composite image of the retina.
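A simplified version of this quadrant stitching, assuming the quadrants are already perfectly registered so the overlap can simply be dropped (real stitching would align on the overlapping content):

```python
def stitch_quadrants(tl, tr, bl, br, overlap):
    """Join four equally sized quadrant images into one composite,
    discarding the `overlap`-pixel seam from the second image in
    each direction."""
    top = [a + b[overlap:] for a, b in zip(tl, tr)]
    bottom = [a + b[overlap:] for a, b in zip(bl, br)]
    return top + bottom[overlap:]

q = [[0] * 4 for _ in range(4)]              # toy 4x4 quadrants
composite = stitch_quadrants(q, q, q, q, 1)  # 7x7 composite
```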
- the processor 412 can increase the pixel density for a particular retinal region of interest (e.g., a region that has changed or is exhibiting a particular pathology) while maintaining the pixel density for other areas.
- the processor 412 can initiate pixel density enhancements based on one or more trigger events in one or more obtained retinal images, such as detection of a potential problem area, in anticipation of that particular area being requested by a healthcare person.
- FIGURE 12 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data based on analysis of the retinal image data, the output data requiring approximately one tenth the bandwidth for transmission relative to the retinal image data at 1202, or generate output data based on analysis of the retinal image data, the output data requiring approximately 1 Mbps in bandwidth for transmission as compared to approximately 20 Gbps in bandwidth for transmission of the retinal image data at 1204.
- the image processor 412 generates output data based on analysis of the retinal image data, the output data requiring approximately one tenth the bandwidth for transmission relative to the retinal image data at 1202, or generates output data based on analysis of the retinal image data, the output data requiring approximately 1 Mbps in bandwidth for transmission as compared to approximately 20 Gbps in bandwidth for transmission of the retinal image data at 1204.
- the image processor 412 obtains ultra-high resolution imagery from the image sensor 408 for one or more instances in time (e.g., static imagery or video). The volume of raw retinal image data obtained can far exceed the communication bandwidth capabilities of the communication interface 410.
- the required bandwidth for communicating all of the raw retinal image data can be ten, twenty, or more times the amount of available bandwidth of the communication interface 410.
- the processor 412 overcomes this potential deficiency by performing operations on the ultra-high resolution retinal imagery at the fundoscope 402 level, which can be referred to as edge-processing, in-situ-processing, or on-board processing.
- By performing edge processing of the raw retinal image data, the image processor 412 has access to real-time or near-real-time imagery of ultra-high resolution and can generate output data that is reduced in size and/or tailored to a specific need or request.
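The scale of the mismatch between the nominal rates cited above can be made concrete; it also shows why naive downsampling alone would be drastic, motivating analysis-based reduction instead (the figures are the disclosure's nominal values, not measurements):

```python
# Nominal raw sensor rate vs. the wireless link.
raw_bps = 20e9               # ~20 Gbps of raw retinal image data
link_bps = 1e6               # ~1 Mbps communication interface
reduction = raw_bps / link_bps      # overall reduction factor required
per_axis = reduction ** 0.5         # ~141x decimation per axis, if uniform
print(int(reduction))  # 20000
```

A 20,000x reduction via uniform decimation alone would destroy diagnostic acuity, which is why the processor instead selects fields of view, drops unchanged regions, or emits text results.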
- FIGURE 13 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data including a reduced resolution version of the retinal image data for transmission at 1302 and/or generate output data including at least one of the following types of alterations of the retinal image data for transmission: size, pixel reduction, resolution, stitch, compress, color, overlap subtraction, static subtraction, and/or background subtraction at 1304.
- the image processor 412 generates output data including a reduced resolution version of the retinal image data for transmission at 1302.
- the image processor 412 obtains ultra-high resolution imagery from the image sensor 408, which includes a very large number of pixels.
- the raw retinal imagery may therefore have an overall resolution that far exceeds a screen resolution of a requesting device (e.g., twenty megapixels of raw retinal image data vs. a one-megapixel display screen). Therefore, the image processor 412 can reduce a resolution of the raw retinal image data to a still very high resolution that meets or exceeds a display screen resolution of a requesting device or an average display screen resolution.
- the image processor 412 generates output data including at least one of the following alterations of the retinal image data for transmission: size, pixel reduction, resolution, stitch, compress, color, overlap subtraction, static subtraction, and/or background subtraction at 1304.
- the image processor 412 need not transmit all of the raw retinal image data and can utilize various operations to reduce that raw retinal image data into highly useful data that is focused and targeted. For instance, the image processor 412 can reduce an overall area size of the retinal image data by decimating pixel data other than a particular region of possible interest.
- the image processor 412 can perform pixel decimation or pixel reduction within a selected area of interest to reduce a resolution to a still high resolution for a particular application (e.g., print, large high-definition monitor, mobile phone display, etc.).
- the image processor 412 can, in some embodiments, stitch together various retinal image segments to produce an overall retinal image before performing additional analysis or reduction operations of the overall retinal image.
- the image processor 412 can identify redundant or overlapping portions of the retinal image data that are requested by multiple users and transmit the redundant or overlapping portions of the retinal image data only once.
- the image processor 412 identifies areas of the retinal image data that have not changed since a previous transmission and then removes those areas from transmission, such that a server or client device gap-fills the omitted areas back into the retinal image data.
- the image processor can transmit a selected portion of the retinal image data at a first resolution and transmit an adjacent area or background portion of the retinal image data at a second resolution that is lower than the first resolution.
- the first resolution may be a high resolution relative to a screen display resolution and the second resolution may be a low resolution relative to the screen display.
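A two-tier transmission of this kind might be sketched as follows (the region-of-interest helper and its default background stride are hypothetical, for illustration only):

```python
def two_tier_encode(pixels, roi, bg_stride=4):
    """Keep full resolution inside the ROI; decimate the background by bg_stride.
    roi = (top, left, bottom, right) bounds (exclusive); pixels is a list of rows."""
    top, left, bottom, right = roi
    roi_pixels = [row[left:right] for row in pixels[top:bottom]]
    background = [row[::bg_stride] for row in pixels[::bg_stride]]
    return roi_pixels, background

frame = [[(y, x) for x in range(8)] for y in range(8)]
roi_px, bg_px = two_tier_encode(frame, (2, 2, 6, 6))
# ROI stays 4 x 4 at full resolution; the background drops to 2 x 2
```

Only the full-resolution region of interest plus a coarse surround is transmitted, rather than the entire frame at the first (high) resolution.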
- the image processor 412 can perform image compression on any image data prior to transmission.
- An example operation sequence of the image processor 412 illustrates how one or more of the foregoing techniques can be utilized by the image processor 412.
- the image processor 412 can obtain the ultra-high resolution retinal imagery from the image sensor 408 and select an overall field of view of substantially the entire area of the retinal imagery.
- the image processor can identify an area of change (e.g., due to a new manifestation of a pathology).
- the image processor 412 then performs pixel decimation uniformly across the retinal imagery to reduce the resolution of the retinal imagery to retain approximately 1/10th of the retinal image data.
- the image processor 412 then further reduces a resolution of the retinal imagery data corresponding to other than the area of change by another fifty percent.
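The two-stage reduction just described can be checked with simple arithmetic (the 20-megapixel frame size and 5% change area below are illustrative assumptions):

```python
def retained_pixels(total, change_fraction, uniform_keep=0.1, outside_keep=0.5):
    """Pixels surviving a uniform 1/10 decimation followed by a further fifty
    percent reduction everywhere except the area of change."""
    after_uniform = total * uniform_keep
    inside = after_uniform * change_fraction
    outside = after_uniform * (1.0 - change_fraction) * outside_keep
    return inside + outside

kept = retained_pixels(20_000_000, 0.05)  # roughly 1.05 million pixels survive
```

Roughly 95% of the raw data is eliminated before transmission while the area of change keeps ten times the pixel density of its surroundings.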
- the image processor 412 can detect and/or measure blood vessel growth, blood leakage, or fluid leakage in the macula area of the retina.
- the image processor 412 can detect and/or measure inflammatory markers such as narrower retinal arteriolar diameters or larger retinal venular diameters.
- the image processor 412 can detect and/or measure the optic disk, optic cup, and neuroretinal rim and calculate the cup-to-disk ratio and share of the neuroretinal rim.
- the image processor 412 can detect and/or measure vessel discoloration, retinal whitening, and hemorrhages or red lesions.
- the image processor 412 can detect and/or measure plaque deposits, venous blood column diameters, or thinning of a retinal nerve fiber layer. With respect to globe flattening and choroidal folds, the image processor 412 can detect and/or measure physical indentation, shape, compression, or displacement in the retina. With respect to papilledema, the image processor 412 can detect and/or measure swelling of the optic disk, engorged or tortuous retinal veins, or retinal hemorrhages around the optic disk. The image processor 412 can be configured to measure or detect any visually detectable parameter including any of the aforementioned or others. Furthermore, the image processor 412 can be configured to have any one or more parameters tied to any one or more potential pathologies.
- retinal images including for example optic disc edema, optic nerve sheath distension, optic disc protraction, cotton wool spots, macular holes, macular puckers, degenerative myopia, lattice degeneration, retinal tears, retinal detachment, retinal artery occlusion, branch retinal vein occlusion, central retinal vein occlusion, intraocular tumors, inherited retinal disorders, penetrating ocular trauma, pediatric and neonatal retinal disorders, cytomegalovirus (CMV) retinal infection, macular edema, uveitis, infectious retinitis, central serous retinopathy, retinoblastoma, endophthalmitis, hypertensive retinopathy, retinal hemorrhage, solar retinopathy, retinitis pigmentosa, or other optic nerve or ocular changes.
- FIGURE 15 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data including added contextual information based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1502, generate alphanumeric text output data based on analysis of the retinal image data, the alphanumeric text output data requiring less bandwidth for transmission than the retinal image data at 1504, or generate binary output data based on analysis of the retinal image data, the binary output data requiring less bandwidth for transmission than the retinal image data at 1506.
- the image processor 412 generates output data including added contextual information based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1502.
- the image processor 412 can add information to the retinal image data for transmission via the communication interface 410, such as date/time, subject first/last name, session ID of exam, a highlight indication of the problematic or pathological area (e.g., an arrow or circle added to the image to focus a clinician's attention), or additional historical image data (e.g., past retinal image data of a patient juxtaposed with current retinal image data of the patient to aid in comparisons).
- the contextual information generated by the image processor 412 can include text, image data, binary data, coordinate information, or the like.
- the contextual information can be transmitted with retinal image data, before or after retinal image data, or in lieu of retinal image data.
- the image processor 412 generates alphanumeric text output data based on analysis of the retinal image data, the alphanumeric text output data requiring less bandwidth for transmission than the retinal image data at 1504.
- the image processor 412 has access to the ultra-high resolution retinal imagery from the image sensor 408.
- the image processor 412 can perform image recognition with respect to the retinal imagery to determine a pathology or lack of pathology and generate alphanumeric text based on the same. For instance, the alphanumeric text can describe a detected pathology or indicate that there is no change since a previous analysis.
- the alphanumeric text can be a letter, a word, a phrase, or a paragraph, and can include numbers and/or symbols.
- the alphanumeric text can be transmitted by the image processor 412 via the communication interface 410, which may only require a few bytes per second in bandwidth as opposed to megabytes per second or gigabytes per second for the raw retinal image data.
- the image processor 412 can obtain the ultra-high resolution imagery from the image sensor 408 and perform image recognition to identify an increase in blood vessel growth, blood leakage, or fluid leakage in the macula area of the retina.
- the image processor 412 can then generate alphanumeric text such as "Subject John Q. Smith has some indications of macular degeneration in the left eye, including a ten percent increase in blood vessel growth, two instances of blood leakage and/or fluid leakage in the macula of the left retina."
- the image processor 412 can then transmit the alphanumeric text description via the communication interface, requiring only a few bytes per second for transmission, to enable a care provider to consider the same.
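A sketch of how structured detection results might be rendered as a few-byte text report (the field names and values are hypothetical, not part of the specification):

```python
def summarize(subject, eye, findings):
    """Render detection results as short alphanumeric text; empty findings -> 'no change'."""
    if not findings:
        return f"Subject {subject}: no change detected in the {eye} eye."
    details = "; ".join(f"{name}: {value}" for name, value in findings.items())
    return f"Subject {subject} has indications in the {eye} eye ({details})."

report = summarize("John Q. Smith", "left",
                   {"blood vessel growth": "+10%", "blood/fluid leakage sites": 2})
# The report is on the order of a hundred bytes, versus megabytes for a raw retinal frame.
```

The "no change" branch corresponds to the zero-indication case described below, where no retinal image data needs to be transmitted at all.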
- Retinal image data may be transmitted in response to a request for further information or can be discarded, such as in the event that the care provider is aware of the situation and doesn't need to further review the retinal imagery.
- the image processor 412 can perform image recognition or comparative analysis on the ultra-high resolution retinal imagery to determine that there is no change or potential pathology presented. The image processor 412 can then generate a zero indication and transmit the same via the communication interface 410 without requiring any transmission of retinal image data.
- FIGURE 16 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data through pixel decimation to maintain a constant resolution independent of a selected area and/or zoom level of the retinal image data at 1602, generate output data through pixel decimation to maintain a resolution independent of a selected area and/or zoom level of the retinal image data, the resolution being less than or equal to a resolution of a client device at 1604, or generate output data based on analysis of the retinal image data and compress the output data, the output data requiring less bandwidth for transmission than the retinal image data at 1606.
- the image processor 412 can decimate a large portion of the pixel data when a wide field of view is selected corresponding to substantially the entire retina. This is due to the selection including virtually all of the raw image data and pixels. However, the image processor 412 can decimate few to no pixels when a narrow or small field of view or high zoom level is selected corresponding to a small area of the retina (e.g., the optic nerve or macula area). This is due to the selection possibly including fewer pixels than the target output resolution (e.g., fewer than one to five megapixels). In this regard, the image processor 412 can maintain a very high acuity level for wide or low zoom selections through to very small or high zoom selections without substantial difference in the relatively low bandwidth requirement of the communication interface 410.
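This zoom-invariant behavior can be sketched by deriving the decimation stride from the selected area rather than from a fixed factor (the pixel widths below are illustrative assumptions):

```python
def stride_for_selection(selection_width_px, target_width_px):
    """Decimation stride that keeps the output width roughly constant across zoom levels."""
    return max(1, selection_width_px // target_width_px)

# Wide field (whole retina, 5000 px across) to a 1000-px output: heavy decimation.
wide = stride_for_selection(5000, 1000)   # stride 5: most pixels decimated
# Tight zoom (optic nerve area, 900 px across): few to no pixels decimated.
tight = stride_for_selection(900, 1000)   # stride 1: nothing decimated
```

Whatever the zoom level, the transmitted pixel count (and hence the bandwidth load) stays near the constant target resolution.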
- the image processor 412 generates output data through pixel decimation to maintain a resolution independent of a selected area and/or zoom level of the retinal image data, the resolution being less than or equal to a resolution of a client device at 1604.
- the image processor 412 can obtain metadata that indicates a type of requesting device or a screen resolution of the requesting device. Based on the metadata, the image processor 412 can adjust the desired resolution and pixel decimation amounts to provide the highest resolution retinal image data that can be accommodated by a particular device. Thus, for higher screen resolution devices or print applications, for example, the image processor 412 can adjust the decimation amount downward, such that fewer pixels are decimated and a higher resolution image is transmitted.
- the image processor 412 can adjust the decimation amount upward, such that more pixels are decimated and a lower resolution image is transmitted.
- the image processor 412 can adjust the decimation amounts in real-time for various user-requests to accommodate many different devices or applications of the retinal image data.
- the image processor 412 generates output data based on analysis of the retinal image data and compresses the output data, the output data requiring less bandwidth for transmission than the retinal image data at 1606.
- the image processor 412 can compress raw retinal image data or compress retinal image data post-reduction (e.g., pixel reduction, static object omission, unchanged area omission, etc.).
- the compressed or coded output data can be transmitted via the communication interface 410 with less bandwidth load. Examples of compression techniques performed by the image processor 412 include one or more of reducing color space, chroma subsampling, transform coding, fractal compression, run-length encoding, DPCM, entropy encoding, deflation, chain coding, or the like.
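Of the named techniques, run-length encoding is the simplest to sketch; a toy byte-level version is shown below (this is an illustration, not necessarily the codec such a device would use):

```python
def rle_encode(data):
    """Encode bytes as [value, run_length] pairs, capping runs at 255."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b and runs[-1][1] < 255:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand each run back into bytes."""
    return bytes(b for b, n in runs for _ in range(n))

sample = b"\x00" * 40 + b"\x7f\x7f\x01"   # flat background pixels compress well
encoded = rle_encode(sample)              # 43 bytes collapse to 3 runs
```

Large uniform regions, common after static or background subtraction, are exactly where run-length coding pays off.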
- FIGURE 17 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data including a portion of the retinal image data corresponding to an object or feature detected based on analysis of the retinal image data at 1702 or generate output data based on object or feature recognition in the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1704.
- the image processor 412 can obtain the ultra-high resolution retinal imagery from the image sensor 408 and perform image analysis to identify one or more plaque deposits possibly indicative of Alzheimer's disease.
- the image processor 412 can select an area of the retinal imagery including the plaque deposits plus approximately 10% beyond the plaque deposits.
- the non-selected area of the retinal imagery can be decimated and either stored or discarded while the selected area can undergo a pixel reduction and/or compression prior to transmission via the communication interface 410.
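Selecting a detection plus an approximately 10% margin might be sketched as a bounding-box expansion (the helper and coordinates are hypothetical):

```python
def expand_box(box, margin_frac, img_w, img_h):
    """Grow a (left, top, right, bottom) box by margin_frac of its size per side,
    clamped to the image bounds."""
    left, top, right, bottom = box
    mx = int((right - left) * margin_frac)
    my = int((bottom - top) * margin_frac)
    return (max(0, left - mx), max(0, top - my),
            min(img_w, right + mx), min(img_h, bottom + my))

# A detected plaque deposit at (100, 100)-(200, 200) in a 4000 x 4000 frame:
roi = expand_box((100, 100, 200, 200), 0.10, 4000, 4000)  # -> (90, 90, 210, 210)
```

Everything outside the expanded box can then be decimated, stored, or discarded as the passage describes.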
- FIGURE 18 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data based on event or action recognition in the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1802 or generate output data of a specified field of view within the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1804.
- the image processor 412 generates output data based on event or action recognition in the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1802.
- the image processor 412 obtains the ultra-high resolution imagery from the image sensor 408 and performs image analysis to identify an event or action, such as a change from a previous retinal image, a measurement beyond a threshold, a deviation from a specified standard, or other defined event or action.
- Upon detection of the event or action, the image processor 412 generates output data which may include the relevant portions of the image data and/or other data.
- Other data generated by the image processor 412 can include a program or function call, alphanumeric text, binary data, or other similar information or action.
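One way to sketch "a measurement beyond a threshold" as an event trigger (the measurement names and threshold values are hypothetical):

```python
def triggered_events(measurements, thresholds):
    """Names of measurements exceeding their configured thresholds."""
    return [name for name, value in measurements.items()
            if name in thresholds and value > thresholds[name]]

events = triggered_events({"cup_to_disk_ratio": 0.72, "vein_diameter_mm": 0.12},
                          {"cup_to_disk_ratio": 0.6})
# Only the exceeded measurement triggers output generation.
```

Each returned event name could then drive generation of image crops, alphanumeric text, or a function call as described above.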
- the image processor 412 generates output data of a specified field of view within the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1804.
- the image processor 412 obtains the ultra-high resolution retinal imagery from the image sensor 408, but in some cases, not all of the retinal imagery contains useful information. Accordingly, the image processor 412 can perform a reduction operation to eliminate or remove unneeded or non-useful information and retain a field-of-view or selection that contains needed or useful information.
- Fields of view can include quadrants, sections, segments, radiuses, user defined areas, user requested areas, or areas corresponding to particular features, objects, or events, for example. Fields of view generated by the image processor 412 can also be small, high zoom areas or large, low zoom areas.
- the image processor 412 can transmit a large field of view for substantially the entire retinas of both eyes via the communication interface 410 to a client device.
- a user at the client device can draw a box or pinch and zoom to a specified area of the retina within the large field of view.
- the client device can present the relatively low resolution specified area of the retina using data previously obtained and further request additional pixel data for the specified area.
- the image processor 412 can transmit, in response to the client request, additional pixel data, that may have previously been decimated, via the communication interface 410 to enhance the acuity and/or resolution of the specified area at the client device.
- FIGURE 19 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 504 includes one or more of generate output data of a specified zoom-level within the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data at 1902 or generate output data based on analysis of the retinal image data and based on a user request for at least one of the following: specified field of view, specified resolution, specified zoom-level, specified action or event, specified object or feature, and/or specified health issue, the output data requiring less bandwidth for transmission than the retinal image data at 1904.
- the image processor 412 can digitally generate a high-zoom of the optic nerve area of the retina by obtaining the ultra-high resolution retinal imagery, decimating all pixels outside the optic nerve area of the retinal imagery, and retaining most to all of the pixels within the optic nerve area of the retinal imagery.
- the image processor 412 can digitally generate a low-zoom of the entire retina by obtaining the ultra-high resolution retinal imagery and decimating a portion of the pixels uniformly across the entire retina of the retinal imagery (e.g., every other pixel is removed or a pattern of pixels is removed).
- the image processor 412 generates output data based on analysis of the retinal image data and based on a user request for at least one of the following: specified field of view, specified resolution, specified zoom-level, specified action or event, specified object or feature, and/or specified health issue, the output data requiring less bandwidth for transmission than the retinal image data at 1904.
- the image processor 412 can be configured to generate output data based on one or more user requests, which one or more user requests can be received via the communication interface 410.
- the image processor 412 generates output data based on analysis of the retinal image data and based on a program request for at least one of the following: specified field of view, specified resolution, specified zoom-level, specified action or event, specified object or feature, and/or specified health issue, the output data requiring less bandwidth for transmission than the retinal image data at 2002.
- the image processor 412 can receive one or more program requests from a remotely hosted or running application via the communication interface 410.
- the program request can specify a particular parameter that is executable by the image processor 412 against obtained raw high-resolution retinal imagery data to generate output data.
- the output data is then transmittable by the image processor 412 to the remote application or to another location (e.g., client or server device).
- the image processor 412 generates output data based on analysis of the retinal image data and based on a locally hosted application program request, the output data requiring less bandwidth for transmission than the retinal image data at 2004.
- the image processor 412 and the computer memory 406 are configurable to host applications, such as third-party applications, that perform one or more specified functions to generate specified output data.
- Various individuals or entities can create the applications for specialized purposes or research and upload the applications to the fundoscope 402 via the communication interface.
- the image processor 412 can execute the hosted application alone or in parallel with a plurality of different hosted applications to perform custom analysis and data generation of the ultra-high resolution retinal imagery obtained from the image sensor 408.
- the image processor 412 can analyze the full resolution retinal imagery and detect an instance of papilledema in the astronaut. Pertinent retinal imagery related to the papilledema can be obtained, reduced, and/or compressed before being transmitted via the communication interface 410 for the clinician.
- FIGURE 21 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the transmit the output data via the at least one communication interface at 506 includes one or more of transmit the output data via the at least one communication interface of at least one of the following types: WIFI, cellular, satellite, and/or internet at 2102, transmit the output data via the at least one communication interface that includes a bandwidth capability of approximately one tenth a capture rate of the retinal image data at 2104, or transmit at a first time the output data via the at least one communication interface, the output data requiring less bandwidth for transmission than the retinal image data and transmit at least some of the retinal image data at a second time corresponding to at least one of an interval time, batch time, and/or available bandwidth time at 2106.
- the image processor 412 transmits the output data via the at least one communication interface 410 of at least one of the following types: WIFI, cellular, satellite, and/or internet at 2102.
- the communication interface 410 can be wireless or wired (e.g., ethernet, telephone, coaxial cable, conductor, etc.). In instances of wireless communication, the communication interface 410 can include local, ZIGBEE, WIFI, BLUETOOTH, BLE, WIMAX, cellular, GSM, CDMA, HSPA, LTE, AWS, XLTE, VOLTE, satellite, infrared, microwave, broadcast radio, or any other type of electromagnetic or acoustic transmission.
- the fundoscope 402 can include multiple different types of communication interfaces 410 to accommodate different or simultaneous communications.
- the image processor 412 transmits the output data via the at least one communication interface 410 that includes a bandwidth capability of approximately one tenth a capture rate of the retinal image data at 2104.
- the image processor 412 can obtain ultra-high resolution imagery from the image sensor 408 at high data rates, such as ten, twenty, thirty, or more gigabytes per second.
- the communication interface 410 has bandwidth constraints that can be less, significantly less, or orders of magnitude less. For instance, the communication interface 410 can have a bandwidth limitation of approximately one to ten megabytes per second or one gigabyte per second or even as high as five to ten gigabytes per second. In any case, the image processor 412 can have access to more image data than can be timely transmitted via the communication interface 410.
- the fundoscope 402 can be used throughout a space voyage by astronauts to monitor for and detect retinal pathologies.
- the communication interface 410 may be a WIFI to microwave-based communication channel having a bandwidth constraint of approximately one to ten megabytes per second when the spacecraft passes over an Earth-based ground station.
- the image processor 412 can obtain retinal image data from the image sensor 408 and perform image analysis to detect one or more potential pathologies. Upon detection, the image processor 412 can immediately transmit via the communication interface 410 an ultra-low bandwidth text-based description of the detected pathology along with astronaut-identifying information. Upon detection of an increased signal strength, such as when positioned over the Earth-based ground station, the image processor 412 can transmit retinal imagery associated with the detected pathology.
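The two-tier transmission policy in this scenario, where a few-byte text alert goes out immediately and imagery waits for a high-bandwidth window such as a ground-station pass, might be sketched as a simple queue (the class and its interface are hypothetical):

```python
import collections

class TransmitQueue:
    """Send byte-sized text alerts immediately; defer imagery to a high-bandwidth window."""
    def __init__(self):
        self.deferred = collections.deque()
        self.sent = []

    def submit(self, payload, kind):
        if kind == "text":
            self.sent.append(payload)        # a few bytes: send at once
        else:
            self.deferred.append(payload)    # imagery: hold for the ground-station pass

    def on_high_bandwidth(self):
        """Called when signal strength rises, e.g. over an Earth-based ground station."""
        while self.deferred:
            self.sent.append(self.deferred.popleft())

q = TransmitQueue()
q.submit("papilledema indicated, astronaut A", "text")
q.submit("retinal-imagery-chunk-1", "image")
```

The text description reaches the clinician without waiting for the constrained channel to accommodate the associated imagery.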
- FIGURE 22 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the transmit the output data via the at least one communication interface at 506 includes one or more of transmit the output data via the at least one communication interface in response to detection of at least one health issue and otherwise not transmitting any data at 2202, transmit the output data via the at least one communication interface in response to detection of at least one object or feature and otherwise not transmitting any data at 2204, transmit the output data via the at least one communication interface to satisfy a client request at 2206, or transmit the output data as image data via the at least one communication interface at 2208.
- the image processor 412 transmits the output data via the at least one communication interface 410 to satisfy a client request at 2206.
- the image processor 412 can respond to one or more client requests received via the communication interface 410.
- the one or more client requests can include one or more of the following types: field of view, zoom-level, resolution, compression, pathologies to monitor, transmission trigger events, panning, or another similar request.
- the image processor 412 can respond to the request with a handshake, confirmation, or with the requested information in real-time, near-real time, delayed-time, scheduled-time, or periodic time.
- the image processor 412 transmits the output data as image data via the at least one communication interface 410 at 2208.
- the image processor 412 can be configured to transmit a variety of data forms, including image data.
- the image data can be transmitted by the image processor 412 in various forms and formats including any one or more of the following: raster, jpeg, jfif, jpeg 2000, exif, tiff, gif, bmp, png, ppm, pgm, pbm, pnm, webp, hdr, heif, bat, bpg, vector, cgm, gerber, svg, 2d vector, 3d vector, compound format, stereo format.
- the image processor 412 transmits the output data as alphanumeric or binary data via the at least one communication interface 410 at 2302.
- the image processor 412 can transmit binary or alphanumeric output data derived from or based on the retinal image data instead of or in addition to transmitting the retinal image data.
- the alphanumeric text can include words, phrases, paragraphs, artificial intelligence-generated statements, sentences, symbols, numbers, or the like.
- Binary data can include any of the following: on, off, high, low, 0, 1, yes, no, or other similar representations of binary values.
- the image processor 412 transmits the output data as image data via the at least one communication interface 410 without one or more of static pixels, previously transmitted pixels, or overlapping pixels, wherein the image data is gap filled at a remote server at 2304.
- the image processor 412 can transmit retinal image data that is then retained or stored at a remote location, such as a network location, server, or client device.
- the transmission by the image processor 412 can be in response to a client request, a program request, a scheduled transmission or can be accomplished during low bandwidth or low activity periods.
- the image processor 412 can obtain new retinal image data from the image sensor 408 and perform analysis to determine when any of the retinal image data has previously been transmitted.
- the image processor 412 can remove any identified previously transmitted retinal image data and retain only changed or non-previously transmitted retinal image data. The image processor 412 can then transmit the changed or non-previously transmitted retinal image data via the communication interface 410, such that the previously transmitted retinal image data is gap-filled, combined, or inserted to establish a composite retinal image prior to display or print output.
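A delta-plus-gap-fill exchange of this kind might be sketched with tile comparison on the device and patching on the server (the tile size and helpers are hypothetical):

```python
def changed_tiles(prev, curr, tile=2):
    """Device side: (row, col, block) for each tile differing from the prior frame."""
    deltas = []
    for r in range(0, len(curr), tile):
        for c in range(0, len(curr[0]), tile):
            block = [row[c:c + tile] for row in curr[r:r + tile]]
            if block != [row[c:c + tile] for row in prev[r:r + tile]]:
                deltas.append((r, c, block))
    return deltas

def gap_fill(prev, deltas):
    """Server side: patch received tiles into the stored frame to rebuild the composite."""
    out = [row[:] for row in prev]
    for r, c, block in deltas:
        for i, row in enumerate(block):
            out[r + i][c:c + len(row)] = row
    return out

prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][1] = 9                      # a single localized change, e.g. a hemorrhage
deltas = changed_tiles(prev, curr)  # only one of four tiles needs transmitting
```

Only the changed tile crosses the communication interface; the server reconstitutes the full composite frame from its stored copy.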
- the image processor 412 can obtain retinal image data from the image sensor 408 for John Q. Smith.
- the retinal image data includes no pathological indications or unusual biomarkers, deposits, or discolorations.
- a server device receives the retinal image data for John Q. Smith and stores it in memory.
- the image processor 412 obtains retinal image data from the image sensor 408 for John Q. Smith.
- the image processor 412 identifies one or more instances of hemorrhaging. Instead of transmitting all of the retinal image data, the image processor 412 decimates all unchanged pixels of the retinal image other than the area surrounding the hemorrhaging.
- the image processor 412 transmits the retinal image data corresponding to the hemorrhaging and the server gap-fills the previously transmitted retinal image data to recreate the composite retinal image data for John Q. Smith.
- the image processor 412 transmits the output data as image data of a specified area via the at least one communication interface 410 at 2306.
- the image processor 412 can determine the specified area from a client request, a program request, or can be determined in response to a detected pathology.
- Client requests for areas can be received via the communication interface 410 and include coordinates, vector values, raster image drawings, text, binary, or other data.
- Program requests can be provided manually or automatically by one or more programs that may be resident on the fundoscope 402 or on a remote computer, server, cloud, or client device.
- the program requests can similarly include coordinates, vector values, raster image drawings, text, binary, or other data.
- the program requests can be triggered in response to detected values, pathologies, indications, or measurements.
- the image processor 412 can obtain retinal image data and perform image analysis to detect an instance of a choroidal fold.
- An application program request can be generated automatically to obtain measurements, generate a textual description of the choroidal fold, and retain high-zoom level retinal image data pertaining to the choroidal fold for transmission via the communication interface 410 for a client device output.
- a client device can request retinal image data at a resolution of 1600 x 1200 pixels.
- the image processor 412 can apply the specified resolution to the pixel retention of the retinal image data non-uniformly such that the areas surrounding the optic nerve head, the fovea, the macula, and the venules and arterioles are reduced to 1600 x 1200 pixels.
- the image processor 412 can further reduce other areas of the retinal image data to less than 1600 x 1200, such as to 300 x 200 pixels.
- the image processor 412 can transmit the non-uniform resolution retinal image data to the client device at a first time and then follow up with full 1600 x 1200 retinal imagery at a later second time (e.g., immediately thereafter the first time).
- FIGURE 24 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the transmit the output data via the at least one communication interface at 506 includes one or more of transmit the output data as image data of a specified zoom level via the at least one communication interface at 2402, transmit the output data as image data of a specified object or feature via the at least one communication interface at 2404, or transmit the output data as image data including metadata via the at least one communication interface at 2406.
- the image processor 412 transmits the output data as image data of a specified zoom level via the at least one communication interface 410 at 2402.
- the image processor 412 can obtain a specified zoom level from a client request, program request, or in response to a detected parameter.
- the specified zoom level can be a percentage or level (e.g., 10% or 90% zoom, low or high-level zoom).
- the specified zoom level can include a specified area as well as a specified visual acuity for that particular area.
- the specified area can be defined by a default area, a selected area, a box, a focus center, an anatomical structure, or a pathological area.
- the image processor 412 can also generate a specified zoom level in anticipation of a client or program request and transmit at least some of the anticipated zoom level data prior to the client or program request to reduce future latency.
- the image processor 412 can respond to a client request and provide retinal image data corresponding to a low-zoom substantially entire field of view of the retina.
- the image processor 412 can also detect through image analysis an instance of a plaque or discoloration in the retinal image data.
- the image processor 412 can begin transmitting high-zoom level retinal image data corresponding to the plaque or discoloration prior to any user request in anticipation that a request for the zoom will be forthcoming. If and when a user request for high-zoom retinal image data corresponding to the plaque or discoloration is received, the image processor 412 can already have transmitted some or all of the retinal image data.
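The anticipatory transmission behavior can be sketched as a simple queue-planning step over image tiles: speculative high-zoom tiles for detected anomalies are queued before any user request, and a later request only transmits what has not already been sent. The tile identifiers and function name here are hypothetical illustrations.

```python
def plan_transmissions(anomaly_tiles, requested_tiles, already_sent=None):
    """Plan tile sends: anomaly tiles go out speculatively ahead of any
    request; when a request arrives, only tiles not already transmitted
    remain to be sent. Tiles are hashable identifiers, e.g. (row, col)
    grid coordinates.

    Returns (speculative_queue, remaining_on_request).
    """
    sent = set(already_sent or ())
    # Speculative sends: anticipated high-zoom tiles not yet transmitted.
    speculative = [t for t in anomaly_tiles if t not in sent]
    sent.update(speculative)
    # On request, skip anything the speculative phase already covered.
    remaining = [t for t in requested_tiles if t not in sent]
    return speculative, remaining
```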
- the image processor 412 transmits the output data as image data of a specified object or feature via the at least one communication interface 410 at 2404.
- the image processor 412 can receive an indication of a specified object or feature from a user request, a program request, or based on a detected pathology or variation in the retinal image data.
- the specified object or feature can be an anatomical feature, a biomarker, or an area corresponding to a detected pathology, change, or variation.
- the image processor 412 can select and transmit only the retinal image data associated with the specified object or feature or can transmit additional retinal image data. For instance, the image processor 412 can transmit retinal image data corresponding to an object or feature in addition to retinal image data corresponding to one or more other instances of the object or feature.
- the image processor 412 can receive a user request for retinal image data corresponding to a particular engorged arteriole.
- the image processor 412 can select and transmit the retinal image data corresponding to the particular engorged arteriole, but also select and transmit unrequested portions of the retinal image data.
- the unrequested portions of the retinal image data can be determined by the image processor 412 to relate to the requested portions, such as retinal image data corresponding to all engorged venules or arterioles.
- a client device can then receive the transmitted requested retinal image data and the related unrequested retinal image data for display.
- FIGURE 25 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the process 500 further includes an operation of receive a communication of a request at 2502.
- FIGURES 26-28 are block diagrams of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the receive a communication of a request at 2502 includes one or more of receive a communication of a request for at least one specified area or field of view at 2602, receive a communication of a request for at least one specified resolution at 2604, receive a communication of a request for at least one specified zoom level at 2606, receive a communication of a request for at least one specified object or feature at 2608, receive a communication of a request involving zooming at 2702, receive a communication of a request involving panning at 2704, receive a communication of a request for at least one specified action or event at 2706, receive a communication of a program request at 2708, or receive via the at least one communication interface a communication of a client request at 2802.
- the image processor 412 can receive via the at least one communication interface 410 a communication of a client request at 2802.
- the client request can be received directly or indirectly via a communication network from a client device.
- Client devices can include any one or more of a smartwatch, a smartphone, a mobile phone, a tablet device, a laptop device, a computer, a server, an augmented reality headset, a virtual reality headset, a game console, or a combination of the foregoing.
- the communication network can include a direct wire link, a direct wireless link, an indirect wire link, an indirect wireless link, the Internet, a local network, a wide area network, a virtual network, a cellular network, a satellite network, or a combination of the foregoing.
- the image processor 412 can receive from the client device a request for at least one specified area or field of view at 2602, at least one specified resolution at 2604, at least one specified zoom level at 2606, at least one specified object or feature at 2608, zooming at 2702, panning at 2704, or at least one specified action or event at 2706.
- the requests can be transmitted in audio, binary, or alphanumeric text form and can be generated from voice input, graphical selection, physical control movement, device movement or tilt, finger gesture, sensor input, or another source.
- a client device provides a user interface associated with one or more fundoscopes 402.
- a particular fundoscope can be selected from the one or more fundoscopes 402 to obtain retinal image data from that particular fundoscope 402.
- Retinal image data is obtained and displayed from the fundoscope 402 in real-time or near-real-time for a particular individual being analyzed.
- the retinal imagery data is output for display and can be interacted with through a combination of graphical user interface elements, input fields, gestures, and/or movements of the client device.
- the graphical user interface elements can include buttons or sliding bars, such as to enable control of zoom, pan, resolution, or other parameters.
- the input fields can enable text entry, such as a number value for a zoom level or a specific object to anchor the field of view.
- Gestures and device movement can be combined to enable functions, such as panning by movement of the client device, zooming by pinching opposing fingers on the touch screen, and/or switching between retinas of the particular individual by swiping a finger.
- Voice input can be accepted to communicate instructions to the particular individual with respect to particular actions, such as to move, shift, change eyes, stay still, or follow another instruction.
- the client device can also provide notifications and/or alerts regarding the availability of retinal image data or regarding potential detected pathologies, changes, or variations associated with retinal image data.
- the program can be a special-purpose program dedicated to obtaining, storing, analyzing, forwarding, or otherwise processing retinal image data for one or more individuals.
- the program can be part of another general purpose or specialized purpose application or system, such as an electronic medical records system, a health and physiology monitoring program, a home health system, or the like.
- Another application may request, store, transmit, and/or analyze retinal image data pertaining only to certain features (e.g., retinal imagery of plaques when present for control and non-control groups of individuals taking part in a study involving a particular Alzheimer's disease drug).
- Another application may request, store, transmit and/or analyze retinal image data of medium resolutions for all individuals without any person-identifying information (e.g., a medical school may want real-time imagery to present during an ophthalmology lecture during class).
- a variety of customized specific third-party applications can be developed and hosted on the fundoscope 402 for a variety of different entities to perform specific functions and generate different outputs based on the same retinal imagery data.
- FIGURE 29 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the process 500 further includes an operation of illuminate a retina at 2902.
- the optical lens arrangement 404 can include an illumination source, such as an incandescent light, an organic light emitting diode, a light emitting diode, a laser, or another light source or combination of light sources.
- FIGURE 30 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the illuminate a retina at 2902 includes one or more of illuminate a retina using a light source and at least one mask that minimizes illumination/reflection intersection within scattering elements of an eye at 3002, illuminate a retina using an infrared light source and the optical lens arrangement at 3004, illuminate a retina using a visible light source and the optical lens arrangement at 3006, or moving at least one mask to change an area of retinal illumination at 3008.
- the optical lens arrangement 404 illuminates a retina using a light source and at least one mask that minimizes illumination/reflection intersection within scattering elements of an eye at 3002.
- the optical lens arrangement includes a light source that is directed onto the retina and reflected for imaging. The intersection of the illumination light and the reflected light is minimized in the cornea and lens structures of the eye through use of one or more masks that block at least some of the illumination light.
- the masks can be constructed from any light obstructing material and may be partially or fully obstructive to light.
- the optical lens arrangement 404 illuminates a retina using an infrared light source at 3004.
- the infrared light source can include an infrared light emitting diode, an infrared organic light emitting diode, a laser, or another infrared light source.
- the infrared light is directed onto the retina via the optical lens arrangement and reflected for infrared imaging. Infrared light does not trigger the same iris constriction response and can therefore be used prior to visible imaging for eye positioning or repositioning, focus, or other operation where iris constriction is to be avoided or limited.
- the infrared light source can include one or more masks that at least partially obscure the infrared light to minimize the intersection of the illumination infrared light and reflected infrared light within the scattering elements of the eye (e.g., cornea and lens).
- the optical lens arrangement 404 illuminates a retina using a visible light source at 3006.
- the visible light source can include a light emitting diode, an organic light emitting diode, an incandescent light, a laser, or another visible light source.
- the visible light source is limited to a certain wavelength (e.g., white or red).
- the visible light source is directed via the optical lens arrangement 404 as illumination light onto the retina where it is reflected for retinal imaging.
- One or more masks are used to at least partially obscure the visible light to limit the intersection of the illumination light and the reflected light within the scattering elements of the eye (e.g., cornea and lens).
- Minimization can be less than a certain percentage, for example less than 1% or less than 5% or less than 10% or less than 25% interaction between the illumination light and the reflected light within the scattering elements of the eye.
- the visible light source is emitted for retinal imaging following focus and/or eye positioning performed using an infrared light source.
- the optical lens arrangement 404 moves at least one mask to change an area of retinal illumination at 3008.
- the use of at least one mask can limit the illumination on certain parts of the retina.
- the at least one mask is moved over the course of retinal imaging (e.g., smoothly or stepped over video retinal imagery capture or to different prespecified locations between static imagery capture).
- the captured retinal imagery over time or from different images can then be used to create a complete composite retinal image by retaining the portions with high acuity and stitching those retained portions together, for example.
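One possible sketch of the composite stitching step, assuming each captured frame's well-illuminated (high-acuity) band is known from the mask position at capture time; the band coordinates and names are illustrative assumptions.

```python
def stitch_composite(frames, lit_bands):
    """Build a composite retinal image from frames captured as a mask moves,
    keeping only the well-illuminated band of each frame.

    frames: list of equal-sized 2-D lists of pixel values.
    lit_bands: list of (row0, row1) row ranges lit in the matching frame.
    Rows never covered by a lit band remain None in the composite.
    """
    h, w = len(frames[0]), len(frames[0][0])
    composite = [[None] * w for _ in range(h)]
    for frame, (r0, r1) in zip(frames, lit_bands):
        # Retain only the high-acuity rows from this mask position.
        for r in range(r0, r1):
            composite[r] = list(frame[r])
    return composite
```

A production implementation would additionally register the frames against one another before copying pixels, since the eye moves between captures.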
- FIGURE 31 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the process 500 further includes an operation of perform analysis of the retinal image data at 3102.
- the image processor 412 can perform the analysis of the retinal image data in the course of performance of one or more operations illustrated or disclosed herein.
- the analysis can include one or more of image recognition, image comparison, feature extraction, object recognition, image segmentation, motion detection, image preprocessing, image enhancement, image classification, contrast stretching, noise filtering, histogram modification, or other similar operation.
- FIGURE 32 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the perform analysis of the retinal image data at 3102 includes one or more of obtain baseline retinal image data from the computer readable memory, compare the retinal image data to the baseline retinal image data, and identify at least one deviation between the retinal image data and the baseline retinal image data indicative of at least one health issue at 3202 or perform object or feature recognition analysis using the retinal image data to identify at least one health issue at 3204.
- the image processor 412 can identify a change or deviation between the retinal image data and the baseline retinal image data, which may be indicative of a health issue.
- Health issues have been illustrated and discussed herein and can include, for example, one or more of diabetic retinopathy, macular degeneration, cardiovascular disease, glaucoma, malarial retinopathy, Alzheimer's disease, globe flattening, papilledema, and/or choroidal folds.
- the image processor 412 can perform one or more of the operations illustrated and/or disclosed herein.
- the baseline retinal image data can be for a different individual or associated with a normal retina.
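A minimal sketch of the baseline comparison at 3202, assuming the current and baseline frames are already registered and equal in size; the intensity threshold and function name are illustrative assumptions, not values from the disclosure.

```python
def find_deviations(current, baseline, threshold=30):
    """Compare a current retinal frame to a baseline frame and return the
    coordinates of pixels whose intensity deviates by more than `threshold`.
    Clusters of deviating pixels could then be flagged for further analysis
    as potential changes indicative of a health issue.

    current, baseline: equal-sized 2-D lists of pixel intensities.
    """
    deviations = []
    for r, (row_cur, row_base) in enumerate(zip(current, baseline)):
        for c, (p_cur, p_base) in enumerate(zip(row_cur, row_base)):
            if abs(p_cur - p_base) > threshold:
                deviations.append((r, c))
    return deviations
```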
- the image processor 412 performs object or feature recognition analysis using the retinal image data to identify at least one health issue at 3204.
- the image processor 412 can perform object or feature recognition analysis with or without a corresponding image baseline comparison analysis.
- the object or feature recognition can include identifying anatomical structures, biomarkers, discolorations, measurements, shapes, contours, lines, or the like within any of the retinal image data.
- the objects or features can be associated with various potential health issues and used by the image processor 412 to identify a potential health issue or array of possible potential health issues.
- potential health issues have been disclosed and illustrated herein, but can include diabetic retinopathy, macular degeneration, cardiovascular disease, glaucoma, malarial retinopathy, Alzheimer's disease, globe flattening, papilledema, and/or choroidal folds.
- the image processor 412 can perform one or more operations as discussed and/or illustrated herein.
- FIGURE 33 is a block diagram of a process 500 implemented using a retinal imager device 400 with edge processing, in accordance with various embodiments.
- the process 500 further includes operations of receive a retinal image analysis application via the at least one communication interface at 3302 and implement the retinal image analysis application with respect to the retinal image data at 3304.
- the image processor 412 of the fundoscope 402 is not necessarily static in its configuration. Instead, the image processor 412 can be programmed to perform special purpose operations that change over time by receiving software applications via the communication interface 410 and deploying the software applications for specialized analysis and output of the retinal image data.
- the customization of the image processor 412 configuration enables modifications over time to any of the amount and timing of retinal image data collection, mask movement, illumination intensity or duration or wavelength, pixel decimation, pixel selection, object removal, unselected retinal imagery transmission, anticipated object or area transmissions, gap-filling, image analysis, data generation, data output, output data destination or timing, bandwidth usage, feature or object detection, event triggers, comparison or health issue detection algorithms, health issue focuses, retinal areas of interest, or the like. Entities such as companies, individuals, research institutions, scientific bodies, consumer groups, educational institutions, or the like can therefore develop specialized applications based on their respective needs and upload the specialized applications to the fundoscope 402 for implementation in parallel or series via the image processor 412.
- the applications can be updated, deleted, stopped, started, or otherwise controlled as needs change over time.
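The upload-and-run application model described above might be sketched as a simple registry in which each hosted application receives the same retinal image data and produces its own output record; the class and method names are hypothetical, not part of the disclosure.

```python
class FundoscopeAppHost:
    """Minimal sketch of hosting third-party analysis applications on the
    imager: apps are callables registered by name, each receiving the same
    retinal image data and producing its own output. Update, delete, stop,
    and start lifecycle operations map onto register/unregister calls.
    """
    def __init__(self):
        self._apps = {}

    def register(self, name, app):
        self._apps[name] = app

    def unregister(self, name):
        self._apps.pop(name, None)

    def run_all(self, retinal_image_data):
        # Every registered application sees the same captured data.
        return {name: app(retinal_image_data)
                for name, app in self._apps.items()}
```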
- a pharmaceutical company interested in understanding cardiovascular disease in a population of individuals ages 40-50 can develop an application that collects summary alphanumeric text data regarding age of patient and type of retinal markers indicative of cardiovascular disease detected.
- This application can be uploaded to the fundoscope 402 or an array of fundoscopes 402 used in cardiology clinics and hospital wards.
- the image processor 412 can execute the application during the normal course of retinal image data collection and document the requested data.
- the output data can be transmitted back to a computer destination for the pharmaceutical company to be used for research or commercialization decisions.
- the same fundoscope 402 can be performing one or more of the operations disclosed herein with respect to a specific patient for real-time or near-real-time health analysis or monitoring by a clinician and can be executing one or more other third-party applications for one or more different entities with different data outputs.
- the retinal imager 402 can be used in coordination with fluorescence to identify particular indications.
- fluorescent tagged proteins or fluorescent chemicals can be introduced into the eye globe via the sclera and vitreous humor (e.g., via an eye drop or needle).
- the fluorescent tagged proteins or fluorescent chemicals can be introduced via blood flow to the retina (e.g., capsule, pill, consumable, or IV injection).
- the fluorescent chemical or protein adheres to certain pathological indications of the retina and can be captured via illumination and imaging via the image sensor 408.
- the image processor 412 determines and detects the presence of the fluorescent tagged proteins or fluorescent chemicals and can generate output data as discussed and illustrated herein based on the same.
- curcumin has been shown to adhere to amyloid plaques and will fluoresce in response to the proper optical stimulation.
- optical stimulation of the retina or other near surface blood flows in conjunction with curcumin fluorescence can be an indicator of potential Alzheimer's disease.
- the image processor 412 can generate output data, such as high visual acuity retinal imagery of areas of the retina associated with the detected curcumin or such as a binary indication of potential Alzheimer's disease.
- This information can be used to perform diagnostic functions, such as determine driver awareness, alertness, drowsiness, sickness, drug use, alcohol use, energy, or health.
- the imager 402 can inform the activation of stimulation routines, such as via digital games, displays, body worn stimulators, audio devices, an illumination source, or the like.
- the imager 402 can monitor and/or detect responses to stimulation and make adjustments to the stimulation or initiate control of other devices or equipment based on the same. For example, the imager 402 can monitor dilation or pupil size of a driver's eyes.
- the imager 402 can signal an LED repeatedly or periodically while monitoring the dilation response.
- the imager 402 can obtain measurements of the dilation or pupil size of the driver's eyes from before, during, and after stimulation, and determine from this information, and optionally from other sensor inputs, whether the driver is suffering from or experiencing fatigue, whether the driver may have another health issue, or whether the driver is intoxicated or under the influence of drugs.
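The before/during/after dilation comparison could be sketched as follows; the 15% constriction floor and the function name are illustrative assumptions rather than thresholds from the disclosure, and a real assessment would combine this with other sensor inputs.

```python
def assess_pupil_response(before, during, after, min_constriction=0.15):
    """Compare mean pupil diameter (arbitrary units) before, during, and
    after a light stimulus. A typical response constricts noticeably under
    stimulation and recovers afterwards; a blunted response can be one
    input (among others) to a fatigue or impairment assessment.

    before, during, after: lists of diameter measurements per phase.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    b, d, a = mean(before), mean(during), mean(after)
    constriction = (b - d) / b  # fractional constriction under stimulus
    recovered = a >= d          # diameter rebounds once stimulus ends
    return {"constriction": constriction,
            "responsive": constriction >= min_constriction and recovered}
```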
- the imager 402 can signal a music player, roll a window down, adjust a seat position, slow the vehicle, set a limit on vehicle use (e.g., shut down after 30 miles), notify a third party, record the data, initiate a phone call, or other similar action to mitigate or address the fatigue.
- the imager 402 is configured to perform Eulerian video magnification in the context of retinal imagery, facial imagery, or body part imagery.
- the imager 402 captures one or more images or videos of the individual and magnifies one or more of color changes or movement within the one or more images or videos.
- the imager 402 can generate a video of the retina or face where the pulse, pulse strength, or pulse duration is detectable and/or measurable through magnification of the color changes.
- the imager 402 can generate a video of a neck or arm of an individual where pulse, pulse strength, or pulse duration is detectable and/or measurable through magnification of skin perturbances.
- the imager 402 can use pulse, pulse rate, pulse strength, or other information obtained through the Eulerian video magnification to identify instances of stress, anxiety, fatigue, attentiveness, illness, sickness, disease, or other health issue.
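A heavily simplified, one-dimensional sketch of the magnification idea: amplify small deviations of per-frame mean intensity from a baseline so that pulse-driven color changes become measurable. Real Eulerian video magnification temporally bandpass-filters per-pixel signals over spatial pyramids; the amplification factor and function name here are illustrative assumptions.

```python
def magnify_temporal_signal(frame_mean_intensities, alpha=20.0):
    """Amplify small per-frame deviations from the mean intensity baseline.

    frame_mean_intensities: one mean-intensity value per video frame.
    alpha: amplification factor applied to the residual signal.
    Returns the magnified intensity series, in which subtle periodic
    color changes (e.g., from pulse) are easier to detect and measure.
    """
    n = len(frame_mean_intensities)
    baseline = sum(frame_mean_intensities) / n
    # Amplify the residual around the baseline by alpha.
    return [baseline + alpha * (v - baseline) for v in frame_mean_intensities]
```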
- the imager 402 can signal or control one or more devices based on any identified or detected parameter or health issue, including signaling an alert, signaling for an additional parameter measurement, capturing imagery, generating imagery, transmitting imagery, controlling a medication dispenser, controlling a climate control device, controlling a vehicle, or the like.
- the imager 402 obtains retinal image data as video data from the image sensor 408.
- a machine-vision enabled fundoscope for retinal analysis comprising:
- an optical lens arrangement an image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data;
- optical lens arrangement comprises: an optical lens arrangement that is rotatable and/or pivotable.
- optical lens arrangement comprises: an imaging optical lens arrangement aligned in a first axis;
- an illumination lens arrangement aligned in a second axis that is perpendicular to the first axis; and at least one polarizing splitter/combiner.
- an illumination LED configured to emit light
- At least one mask configured to obscure at least some of the light of the illumination source to minimize illumination/reflection intersection within scattering elements of an eye.
- a light source configured to emit infrared light for positioning and/or focus determinations.
- an illumination light source configured to emit visible light for imaging.
- the fundoscope of clause 1, wherein the image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data comprises: an image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data at a resolution of at least twenty microns.
- the fundoscope of clause 1, wherein the image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data comprises: an image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data associated with approximately a 40 degree annular field of view.
- the fundoscope of clause 1, wherein the image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal image data comprises: an image sensor positioned with the optical lens arrangement and configured to convert detected light to retinal video data of at least twenty frames per second.
- At least one additional image sensor positioned with the at least one additional optical lens arrangement and configured to convert detected light to additional retinal image data.
- At least one additional image processor associated with each of the at least one additional image sensors
- the at least one communication interface includes at least one of the following types: WIFI, cellular, satellite, and/or internet.
- the fundoscope of clause 1, wherein the at least one communication interface includes a bandwidth capability of approximately one tenth of a capture rate of the retinal image data.
- the fundoscope of clause 1, wherein the at least one communication interface includes a websocket enabling communication with a browser client.
- the retinal image data as a plurality of sequentially captured images of different, adjacent, and/or at least partly overlapping parts of a retina
- the retinal image data as a plurality of at least partly overlapping images from the image sensor
- the output data requiring approximately 1 Mbps in bandwidth for transmission as compared to approximately 20 Gbps in bandwidth for transmission of the retinal image data.
- the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data comprises: generate output data including at least one of the following types of alterations of the retinal image data for transmission: size, pixel selection, pixel reduction, resolution reduction, pixel extraction, pixel decimation, static object removal, unchanged pixel removal, stitching, compression, color, overlap subtraction, static subtraction, and/or background subtraction.
- output data including added contextual information based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data.
- the fundoscope of clause 1, wherein the generate output data based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data comprises:
- output data based on analysis of the retinal image data and based on a user request for at least one of the following: specified field of view, specified resolution, specified zoom level, specified action or event, specified object or feature, and/or specified health issue, the output data requiring less bandwidth for transmission than the retinal image data.
- the output data as image data via the at least one communication interface without one or more of static pixels, previously transmitted pixels, or overlapping pixels, wherein the image data is gap filled at a remote server.
- the output data via the at least one communication interface, the output data requiring less bandwidth for transmission than the retinal image data; and transmit at least some of the retinal image data at a second time corresponding to at least one of an interval time, batch time, and/or available bandwidth time.
- a process executed by a computer processor component of a fundoscope that includes an optical lens arrangement, an image sensor configured to convert detected light to retinal image data, and at least one communication interface, the process comprising: obtain the retinal image data from the image sensor;
- the retinal image data from the image sensor positioned with the optical arrangement that is movable along at least one of an x, y, or z axis.
- the retinal image data from the image sensor positioned with an optical arrangement that is perpendicular to an illumination lens arrangement.
- the retinal image data from the image sensor positioned with an optical arrangement that minimizes or eliminates illumination/reflection intersection within scattering elements of an eye.
- the retinal image data from the image sensor positioned with an optical arrangement that includes one or more masks.
- the retinal image data from the image sensor positioned with an optical arrangement that includes one or more movable masks.
- the retinal image data from the image sensor of at least eighteen megapixels.
- the retinal image data from the image sensor of at least twenty megapixels.
- the retinal image data from the image sensor of at least ten thousand pixels per square degree.
- retinal image data as video data from the image sensor at approximately twenty frames per second.
- the retinal image data from the image sensor and from at least one additional image sensor associated with at least a partially overlapping field of view.
- the retinal image data from the image sensor and from at least one additional image sensor associated with a parallel field of view.
- the retinal image data at a resolution of at least twenty microns.
- the retinal image data as multiple high resolution images of adjacent, overlapping, and/or at least partially overlapping areas of a retina.
- the retinal image data as multiple sequentially captured images of different, adjacent, overlapping, and/or at least partially overlapping areas of a retina
- the retinal image data as multiple at least partially overlapping images of a retina
- the output data requiring approximately 1 Mbps in bandwidth for transmission as compared to approximately 20 Gbps in bandwidth for transmission of the retinal image data.
- output data including at least one of the following types of alterations of the retinal image data for transmission: size, pixel reduction, resolution, stitch, compress, color, overlap subtraction, static subtraction, and/or background subtraction.
- output data including an identification of at least one of the following health issues based on analysis of the retinal image data: diabetic retinopathy, macular degeneration, cardiovascular disease, glaucoma, malarial retinopathy, Alzheimer's disease, globe flattening, papilledema, and/or choroidal folds.
- output data including added contextual information based on analysis of the retinal image data, the output data requiring less bandwidth for transmission than the retinal image data.
- output data based on analysis of the retinal image data and based on a user request for at least one of the following: specified field of view, specified resolution, specified zoom level, specified action or event, specified object or feature, and/or specified health issue, the output data requiring less bandwidth for transmission than the retinal image data.
- the output data as image data via the at least one communication interface without one or more of static pixels, previously transmitted pixels, or overlapping pixels, wherein the image data is gap filled at a remote server.
- a fundoscope comprising: means for obtaining retinal image data from an image sensor;
Abstract
Description
Claims
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662384685P | 2016-09-07 | 2016-09-07 | |
US201662429302P | 2016-12-02 | 2016-12-02 | |
US201762522493P | 2017-06-20 | 2017-06-20 | |
US201762532247P | 2017-07-13 | 2017-07-13 | |
US201762537425P | 2017-07-26 | 2017-07-26 | |
PCT/US2017/050498 WO2018049041A1 (en) | 2016-09-07 | 2017-09-07 | Retinal imager device and system with edge processing |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3509475A1 true EP3509475A1 (en) | 2019-07-17 |
EP3509475A4 EP3509475A4 (en) | 2020-04-22 |
Family
ID=61562312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17849536.2A Withdrawn EP3509475A4 (en) | 2016-09-07 | 2017-09-07 | Retinal imager device and system with edge processing |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP3509475A4 (en) |
JP (1) | JP2019526416A (en) |
CN (1) | CN109963495A (en) |
WO (1) | WO2018049041A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11138732B2 (en) | 2018-12-21 | 2021-10-05 | Welch Allyn, Inc. | Assessment of fundus images |
US10993613B2 (en) | 2018-12-21 | 2021-05-04 | Welch Allyn, Inc. | Fundus image capturing |
EP3915046A4 (en) * | 2019-01-22 | 2022-11-02 | Adam Cogtech Ltd. | Detection of cognitive state of a driver |
KR102243332B1 (en) * | 2020-02-13 | 2021-04-22 | 학교법인 건국대학교 | Apparatus for purpil inspection using mobile terminal |
WO2022208903A1 (en) * | 2021-03-31 | 2022-10-06 | 株式会社ニデック | Oct device, oct data processing method, program, and storage medium |
CN113283298B (en) * | 2021-04-26 | 2023-01-03 | 西安交通大学 | Real-time behavior identification method based on time attention mechanism and double-current network |
WO2023229690A1 (en) * | 2022-05-24 | 2023-11-30 | Verily Life Sciences Llc | Pathology and/or eye-sided dependent illumination for retinal imaging |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2342095A1 (en) * | 2000-03-27 | 2001-09-27 | Symagery Microsystems Inc. | Image capture and processing accessory |
WO2006016366A2 (en) * | 2004-08-12 | 2006-02-16 | Elop Electro-Optical Industries Ltd. | Integrated retinal imager and method |
CN101296368B (en) * | 2007-04-28 | 2011-07-06 | 长春奥普光电技术股份有限公司 | Image recognition tracking apparatus of television tracking machine |
JP4857326B2 (en) * | 2008-11-19 | 2012-01-18 | キヤノン株式会社 | Ophthalmic equipment |
US10226174B2 (en) * | 2011-03-02 | 2019-03-12 | Brien Holden Vision Institute | Ocular fundus imaging systems, devices and methods |
TWI446890B (en) * | 2011-04-28 | 2014-08-01 | Crystalvue Medical Corp | Portable funduscopic image detecting apparatus |
US9137433B2 (en) * | 2011-09-19 | 2015-09-15 | Michael Mojaver | Super resolution binary imaging and tracking system |
US20150021228A1 (en) * | 2012-02-02 | 2015-01-22 | Visunex Medical Systems Co., Ltd. | Eye imaging apparatus and systems |
WO2015054672A1 (en) * | 2013-10-10 | 2015-04-16 | The Regents Of The University Of California | Ocular cellscope apparatus |
US10248856B2 (en) * | 2014-01-14 | 2019-04-02 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9723978B2 (en) * | 2014-03-31 | 2017-08-08 | Nidek Co., Ltd. | Fundus photography device |
MX2017003776A (en) * | 2014-09-24 | 2018-03-23 | Princeton Identity Inc | Control of wireless communication device capability in a mobile device with a biometric key. |
- 2017
- 2017-09-07 CN CN201780068689.3A patent/CN109963495A/en active Pending
- 2017-09-07 EP EP17849536.2A patent/EP3509475A4/en not_active Withdrawn
- 2017-09-07 WO PCT/US2017/050498 patent/WO2018049041A1/en unknown
- 2017-09-07 JP JP2019533312A patent/JP2019526416A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2019526416A (en) | 2019-09-19 |
EP3509475A4 (en) | 2020-04-22 |
CN109963495A (en) | 2019-07-02 |
WO2018049041A1 (en) | 2018-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180064335A1 (en) | Retinal imager device and system with edge processing | |
WO2018049041A1 (en) | Retinal imager device and system with edge processing | |
US11766172B2 (en) | Ophthalmic examination and disease management with multiple illumination modalities | |
US10915180B2 (en) | Systems and methods for monitoring a user's eye | |
US11808943B2 (en) | Imaging modification, display and visualization using augmented and virtual reality eyewear | |
CN110944571B (en) | System and method for improving ophthalmic imaging | |
CN109997174B (en) | Wearable spectrum inspection system | |
US20210290056A1 (en) | Systems and methods for capturing, annotating and sharing ophthalmic images obtained using a hand held computer | |
EP3157432B1 (en) | Evaluating clinician attention | |
US20150213725A1 (en) | Method, apparatus, and system for viewing multiple-slice medical images | |
KR20160015785A (en) | Apparatus and method for improving accuracy of contactless thermometer module | |
EP3721320B1 (en) | Communication methods and systems | |
US20230282080A1 (en) | Sound-based attentive state assessment | |
David et al. | What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? | |
KR101769506B1 (en) | Control mehthod for smart glasses using brain wave | |
KR101727155B1 (en) | Smart glasses using brain wave | |
KR20230020246A (en) | patient care methods and systems through artificial intelligence-based monitoring | |
KR20240001224U (en) | Home health care monitoring system using artificial intelligence based companion type robot | |
Cornelissen et al. | fMRI evidence for two distinct ventral cortical vision systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20190408 |
|
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) |
DAX | Request for extension of the european patent (deleted) |
A4 | Supplementary search report drawn up and despatched |
Effective date: 20200323 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: A61B 3/00 20060101ALI20200317BHEP
Ipc: A61B 3/14 20060101AFI20200317BHEP
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Effective date: 20200429 |