CN113768460A - Fundus image analysis system and method and electronic equipment - Google Patents

Fundus image analysis system and method and electronic equipment

Info

Publication number
CN113768460A
Authority
CN
China
Prior art keywords
fundus, arc-shaped, module, segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111059503.2A
Other languages
Chinese (zh)
Other versions
CN113768460B (en)
Inventor
杨志文
王欣
贺婉佶
姚轩
黄烨霖
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd, Beijing Airdoc Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202111059503.2A priority Critical patent/CN113768460B/en
Publication of CN113768460A publication Critical patent/CN113768460A/en
Application granted granted Critical
Publication of CN113768460B publication Critical patent/CN113768460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/0025 Operational features characterised by electronic signal processing, e.g. eye models
    • A61B 3/0033 Operational features characterised by user input arrangements
    • A61B 3/14 Arrangements specially adapted for eye photography
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Computational Linguistics (AREA)
  • Surgery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Embodiments of the invention provide a fundus image analysis system and method and an electronic device. The system comprises a feature extraction module, a fundus prediction module and a segmentation prediction module. The feature extraction module samples a fundus image to be analyzed to extract a fundus feature map. The fundus prediction module analyzes, from the fundus feature map, the fundus type corresponding to the fundus image, the fundus types comprising a normal fundus and a plurality of myopia-associated fundus types. The segmentation prediction module samples the fundus feature map to derive a segmentation prediction map for the fundus image, which indicates the class of each pixel in the image; the pixel classes comprise a background class, an optic disc class, a plurality of arc-shaped spot classes and a plurality of atrophy spot classes. Using machine learning, the invention addresses the problem that conventional myopic fundus classification offers only a coarse grouping related to pathological myopia, so that better diagnosis, treatment or daily eye-care advice can be given to the examined person.

Description

Fundus image analysis system and method and electronic equipment
Technical Field
The invention relates to fundus analysis for myopia, in particular to the technical field of fundus image analysis using neural network models, and more particularly to a fundus image analysis system, a fundus image analysis method and an electronic device.
Background
In recent years, the age of onset of myopia among Chinese adolescents has been falling. Adolescent vision problems have gradually become a major concern for many parents, schools and teachers, and adolescent vision prevention and control has been raised to the level of national strategy.
Myopia is often caused by poor eye-use habits and a heavy academic burden. Most adolescents who overuse their eyes develop an elongated eye axis, which produces corresponding image features on the fundus. For example, the common leopard-streak fundus results from retinal thinning caused by axial elongation, which exposes the choroid in the fundus image. In early myopia the fundus image changes only slightly: mild deformation of the eyeball pulls on the peripheral region of the optic disc and exposes different retinal tissue layers, forming fundus arc-shaped spots, and the different tissue layers show subtly different color and morphological features in the fundus image. A highly myopic fundus often shows more severe structural changes, such as diffuse atrophy or macular atrophy; these are fundus manifestations of high degrees of myopia. Owing to the shortage of specialized ophthalmologists in China, most vision-testing institutions are not equipped with fundus cameras and do not perform fundus examination during optometry and spectacle fitting, so some myopic fundus lesions are not found in time and correct guidance cannot be given. Therefore, in addition to the conventional eye-chart test and axial length measurement, retinal fundus photographs can be examined. Accurate monitoring of fundus image features under different myopia conditions can serve as an important link in adolescent vision prevention and control.
However, current myopic fundus analysis is often limited to identifying whether the fundus is abnormal and cannot give a more detailed analysis of the specific condition of a myopic fundus, so it is difficult for practitioners to provide better diagnosis, treatment or daily eye-care advice for myopic patients according to their specific condition.
Disclosure of Invention
Therefore, the present invention aims to overcome the above drawbacks of the prior art and to provide a fundus image analysis system, a fundus image analysis method and an electronic device.
The object of the invention is achieved by the following technical solutions:
according to a first aspect of the present invention, there is provided a fundus image analysis system comprising a feature extraction module, a fundus prediction module, and a segmentation prediction module, wherein the feature extraction module samples a fundus image to be analyzed to extract a fundus feature map; the fundus prediction module analyzes the corresponding fundus type of the fundus image according to the fundus characteristic map, wherein the fundus type comprises a normal fundus and a plurality of myopia-associated fundus; the segmentation prediction module samples the fundus feature map to analyze a corresponding segmentation prediction map of the fundus image, which indicates a class of each pixel in the fundus image, the classes of the pixels including a background pixel class, a optic disc pixel class, a plurality of arc spot pixel classes, and a plurality of atrophy spot pixel classes.
In some embodiments of the invention, the feature extraction module downsamples the fundus image a plurality of times to obtain the fundus feature map; the segmentation prediction module upsamples the fundus feature map a plurality of times to obtain a multi-channel segmentation map and analyzes the multi-channel segmentation map to obtain the segmentation prediction map.
In some embodiments of the invention, the system is trained as follows: training data comprising a plurality of fundus pictures with fundus category labels and pixel class labels is acquired; the system is trained with the training data, wherein a fundus classification sub-loss is computed from the output of the fundus prediction module and the fundus category labels, a fundus segmentation sub-loss is computed from the output of the segmentation prediction module and the pixel class labels, a total loss is computed from the fundus classification sub-loss and the fundus segmentation sub-loss, and gradient computation and parameter updates for the feature extraction module, the fundus prediction module and the segmentation prediction module are performed based on the total loss.
In some embodiments of the present invention, the system further includes an eye classification module and a quantitative analysis module. The eye classification module determines the eye category corresponding to the fundus image from the fundus feature map, the eye category being left eye or right eye; the quantitative analysis module performs quantitative analysis according to the fundus type and segmentation prediction map corresponding to the fundus image, or according to the fundus type, eye category and segmentation prediction map, to obtain a plurality of quantitative indices.
In some embodiments of the invention, the system is trained as follows: training data comprising a plurality of fundus pictures (the fundus pictures in the training data are themselves fundus images; the different name is used only to distinguish them), fundus category labels, eye category labels and pixel class labels is acquired; the system is trained with the training data, wherein a fundus classification sub-loss is computed from the output of the fundus prediction module and the fundus category labels, an eye classification sub-loss is computed from the output of the eye classification module and the eye category labels, a fundus segmentation sub-loss is computed from the output of the segmentation prediction module and the pixel class labels, a total loss is computed from the fundus classification sub-loss, the eye classification sub-loss and the fundus segmentation sub-loss, and gradient computation and parameter updates for the feature extraction module, the fundus prediction module, the segmentation prediction module and the eye classification module are performed based on the total loss.
In some embodiments of the invention, the total loss is calculated as follows:

L_all = α*L_seg + β*L_clf-1 + γ*L_clf-2

where L_all denotes the total loss, L_seg the fundus segmentation sub-loss, L_clf-1 the fundus classification sub-loss and L_clf-2 the eye classification sub-loss; α, β and γ are the weights of the fundus segmentation, fundus classification and eye classification sub-losses, respectively.
In some embodiments of the present invention, the plurality of quantitative indices includes a lesion number index and lesion area indices, and the quantitative analysis module is configured to:
grade, when the fundus type corresponding to the fundus image is any myopia-associated fundus, the degree of the fundus lesion corresponding to the fundus image according to at least one of the lesion number index and the lesion area indices, obtaining a grading index.
In some embodiments of the present invention, when the fundus type corresponding to the fundus image is any myopia-associated fundus, grading the degree of the fundus lesion corresponding to the fundus image according to at least one of the lesion number index and the lesion area indices may include:
determining a lesion value of the fundus lesion from at least one of the lesion number index and the lesion area indices, and determining the grade of the fundus lesion from the grading threshold interval in which the lesion value lies, wherein the grading thresholds used to construct the grading threshold intervals are obtained as follows:
randomly sampling part of the samples from a collected sample set covering multiple age groups, multiple regions and multiple degrees of myopia; determining a sampling interval according to the grading granularity and the total number of sampled samples; arranging the lesion values of all sampled samples in order of magnitude and taking values from the sorted sequence at the sampling interval to determine a plurality of grading thresholds for grading.
In some embodiments of the invention, the area indices comprise: the minimum lesion area, the maximum lesion area, the total lesion area, and the ratio of the total lesion area to the total optic disc area.
In some embodiments of the invention, when the fundus type corresponding to the fundus image is an arc-shaped spot fundus, the degree of the arc-shaped spot lesion corresponding to the fundus image is graded according to the ratio of the total arc-shaped spot lesion area to the total optic disc area.
In some embodiments of the present invention, the total arc-shaped spot lesion area is a weighted area: the temporal and nasal sides of the optic disc in the segmentation prediction map are determined from the eye category, the segmentation prediction map is divided into a plurality of sub-regions according to the temporal and nasal sides, and the arc-shaped spot areas in the segmentation prediction map are weighted and summed according to the region weight of each sub-region and the class weight of each arc-shaped spot pixel class to obtain the weighted area.
Preferably, the region weight of a sub-region relatively closer to the nasal side is greater than that of a sub-region relatively farther from the nasal side.
Preferably, the plurality of arc-shaped spot pixel classes includes pigmented arc-shaped spots, choroidal arc-shaped spots, mixed arc-shaped spots and scleral arc-shaped spots, with class weights increasing in that order (pigmented < choroidal < mixed < scleral); a sketch of this weighted-area computation follows.
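The following minimal Python sketch illustrates the weighted-area computation; the class ids, weight values and function names are illustrative assumptions, not taken from the patent:

    import numpy as np

    # Hypothetical pixel-class ids and class weights for the four arc-shaped
    # spot types; weights grow pigmented < choroidal < mixed < scleral.
    ARC_CLASS_WEIGHTS = {2: 0.5, 3: 1.0, 4: 1.5, 5: 2.0}

    def weighted_arc_area(pred_map, region_id, region_weights):
        """Weighted sum of arc-shaped spot pixel areas.

        pred_map       -- (H, W) per-pixel class ids (segmentation prediction map)
        region_id      -- (H, W) sub-region index per pixel (temporal ... nasal)
        region_weights -- dict: sub-region index -> weight (larger nearer the nasal side)
        """
        total = 0.0
        for cls, cls_w in ARC_CLASS_WEIGHTS.items():
            cls_mask = pred_map == cls
            for reg, reg_w in region_weights.items():
                total += cls_w * reg_w * np.count_nonzero(cls_mask & (region_id == reg))
        return total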
In some embodiments of the invention, the plurality of myopia-associated fundus types is a combination of categories among the arc-shaped spot fundus, diffuse atrophic fundus, macular atrophic fundus and leopard-streak fundus; the plurality of arc-shaped spot pixel classes is a combination of categories among pigmented, choroidal, mixed and scleral arc-shaped spots; the plurality of atrophy spot pixel classes includes diffuse atrophy and patchy atrophy.
According to a second aspect of the present invention, a fundus image analysis method implemented with the system of the first aspect is provided, comprising: acquiring a fundus image to be analyzed; sampling the fundus image with the feature extraction module to extract a fundus feature map; analyzing, with the fundus prediction module, the fundus type corresponding to the fundus image from the fundus feature map; sampling, with the segmentation prediction module, the fundus feature map to derive the segmentation prediction map indicating the class of each pixel in the fundus image; and outputting the analyzed fundus type and the segmentation prediction map.
In some embodiments of the invention, the system further comprises an eye classification module and a quantitative analysis module, and the method further comprises: analyzing, with the eye classification module, the eye category corresponding to the fundus image from the fundus feature map, the eye category being left eye or right eye; performing, with the quantitative analysis module, quantitative analysis according to the fundus type and segmentation prediction map corresponding to the fundus image, or according to the fundus type, eye category and segmentation prediction map; and outputting the plurality of quantitative indices.
For some detailed embodiments of the method, reference may be made to the foregoing system embodiments; they are not repeated here.
According to a third aspect of the present invention, an electronic device is provided, comprising: one or more processors; and a memory, wherein the memory stores executable instructions, and the one or more processors are configured to execute the executable instructions to implement the method of the second aspect.
Compared with the prior art, the invention has the advantages that:
1. the invention uses machine learning to solve the problem that conventional myopic fundus classification offers only a coarse grouping related to pathological myopia;
2. the invention integrates myopic fundus classification, eye classification and pixel-level segmentation into a unified end-to-end training framework, fully exploiting the advantages of each type of model;
3. for myopic fundus at different stages such as early myopia, high myopia and pathological myopia, the invention uses the segmentation prediction model to perform finer-grained quantitative grading of the atrophic fundus (corresponding to high and pathological myopia) and the arc-shaped spot fundus (corresponding to early myopia), with grading thresholds selected by randomly sampling and analyzing a whole-population sample;
4. the invention divides the fundus image into different regions according to the center of the optic disc region and the eye category, and assigns different weights to different types of arc-shaped spots in each region, thereby obtaining a quantitative index that reflects the arc-shaped spot condition more accurately and intuitively.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 shows schematic fundus images corresponding to various fundus lesion manifestations;
FIG. 2 is a block schematic diagram of a fundus image analysis system according to an embodiment of the present invention;
FIG. 3 is an exemplary implementation of a feature extraction module according to an embodiment of the invention;
FIG. 4 is another illustrative implementation of a feature extraction module in accordance with an embodiment of the invention;
FIG. 5 is a schematic diagram of a second layer of the feature extraction module of the implementation shown in FIG. 3, according to an embodiment of the invention;
FIG. 6 is a diagram of a U-shaped network structure formed by the feature extraction module and the segmentation prediction module according to an embodiment of the present invention;
FIG. 7 is a block schematic diagram of a fundus image analysis system according to another embodiment of the present invention;
FIG. 8 is a schematic illustration of the training loss of a fundus image analysis system according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of region division of the fundus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As mentioned in the Background section, current myopic fundus analysis is often limited to identifying whether the fundus is abnormal and cannot give a more detailed analysis of the specific condition of a myopic fundus, making it difficult for practitioners to provide better diagnosis, treatment or daily eye-care advice for myopic patients. To improve on this, the invention extracts a fundus feature map from the original fundus image, predicts from the feature map whether the image shows a normal fundus or a certain myopia-associated fundus, and performs multi-channel segmentation on the fundus feature map to obtain a segmentation prediction map from the multi-channel segmentation map, in which the predicted value of each pixel corresponds to one of a background pixel class, an optic disc pixel class, a plurality of arc-shaped spot pixel classes and a plurality of atrophy spot pixel classes. A doctor or optometrist can thus learn the myopic condition of the examined person's fundus and the distribution of the arc-shaped spot and atrophy spot pixel classes in the fundus image, and give better diagnosis, treatment or daily eye-care advice.
Before describing embodiments of the present invention in detail, some of the terms used therein will be explained as follows:
the fundus refers to the posterior region of the eyeball, including the retina, optic papilla, macula, and central retinal artery.
Optic Disc refers to the optic nerve head: a pale red discoid structure on the retina, about 1.5 mm in diameter and located about 3 mm to the nasal side of the macula lutea, called the optic disc for short.
The pigmented arc-shaped spot is an arc-shaped spot of black crescent shape formed on the fundus. In early myopia, slight traction along the eye axis causes the pigment epithelial cells at the temporal edge of the optic disc to gather, forming a black crescent arc; see FIG. 1a for a corresponding fundus image.
The choroidal arc-shaped spot is an arc-shaped spot formed on the fundus by choroidal exposure. In high and pathological myopia, because the eyeball stretches backwards and the sclera is expanded and dragged, the retinal pigment epithelium and choroid (Bruch's membrane) become detached from the temporal side of the optic disc and stop at some distance from it; the retinal pigment epithelium layer is missing in the detached region, exposing the underlying choroid, which appears as a gray crescent-shaped region in the fundus image; see FIG. 1b.
The mixed arc-shaped spot is an arc-shaped spot with cross-exposure of the choroid and sclera on the fundus. The affected zone shows alternating gray-white features in the fundus image; see FIG. 1c.
The scleral arc-shaped spot is an arc-shaped spot formed on the fundus by severe scleral exposure. The traction in mixed arc-shaped spots is relatively mild; if the traction is heavier, the choroid is also pulled away from the optic disc, both the retinal pigment epithelium and the choroid are missing in the detached region, and the underlying sclera is exposed, appearing as a distinctive white arc-shaped spot in the fundus image; see FIG. 1d.
Diffuse atrophy refers to atrophy spots corresponding to retinal pigment epithelium and choroidal pigment disorder in the temporal region of the optic disc. The disorder can form isolated or confluent yellow-white regions that are irregular in shape, small and widespread; see FIG. 1e.
Patchy atrophy refers to atrophy spots appearing as small, localized, isolated focal atrophic areas on the fundus, circular, white or yellowish white, with pigment clumps visible at the edges; see FIG. 1f.
The leopard-streak fundus refers to a fundus with leopard-skin-like texture. As the myopic fundus extends backwards, the retinal vessels become thinner and straighter after leaving the optic disc, and the choroidal vessels correspondingly thin and straighten or are markedly reduced. Meanwhile, owing to impaired nutrition of the pigment epithelial layer, superficial pigment disappears and the orange choroidal vessels become more exposed, presenting a leopard-skin texture; see FIG. 1g.
Macular atrophy refers to atrophy spots appearing as patchy atrophic areas in the macular region. In some late-stage pathological myopia, choroidal vascular occlusion may occur in the macular area, with single or multiple focal atrophic degenerations of the pigment epithelium and choriocapillaris accompanied by pigment migration. The fundus shows atrophic areas of various sizes and shapes distributed around the macula; see FIG. 1h.
The annular arc-shaped spot generally extends from the temporal region to the superotemporal and inferotemporal sides; in a small number of cases the arc-shaped spot extends into the nasal region, forming an annular change; see FIG. 1i.
According to an embodiment of the present invention, referring to fig. 2, a fundus image analysis system is provided, which includes a feature extraction module 1, a segmentation prediction module 2, and a fundus prediction module 3. The system can be installed in electronic equipment such as a computer or a server.
The feature extraction module 1, which may also be called the encoding module (Encoder), is a multi-layer neural network model that samples the fundus image to be analyzed to extract the fundus feature map. Preferably, it downsamples the fundus image a plurality of times: after each downsampling the resolution is halved and the number of feature channels gradually increases, yielding the corresponding fundus feature map. The image is thereby sampled into a high-level abstract semantic space, improving the expressive power of the model.
Embodiment 1 of the feature extraction module 1 is shown in FIG. 3: the module comprises five layers of sub-networks, and the fundus image is downsampled repeatedly through the first to fifth layers, the resolution gradually decreasing and the number of feature channels gradually increasing. The original 600 × 600 × 3 fundus image (length × width × number of feature channels; subsequent figures have the same meaning and are not repeated) is processed by the first layer into a 300 × 300 × 20 feature map 1, by the second layer into a 150 × 150 × 40 feature map 2, by the third layer into a 75 × 75 × 80 feature map 3, by the fourth layer into a 38 × 38 × 160 feature map 4, and by the fifth layer into a 19 × 19 × 320 feature map 5.
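These stage-wise shapes can be reproduced with a toy stack of stride-2 convolutions; this is only a shape sketch under assumed 3 × 3 kernels, not the patented architecture:

    import torch
    import torch.nn as nn

    channels = [3, 20, 40, 80, 160, 320]
    encoder = nn.ModuleList(
        nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1)
        for cin, cout in zip(channels, channels[1:])
    )

    x = torch.randn(1, 3, 600, 600)           # fundus image, N x C x H x W
    shapes = []
    for stage in encoder:
        x = stage(x)                          # halves resolution, grows channels
        shapes.append(tuple(x.shape[1:]))
    print(shapes)
    # [(20, 300, 300), (40, 150, 150), (80, 75, 75), (160, 38, 38), (320, 19, 19)]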
The number of sub-network layers of the feature extraction module 1, the size of the input fundus image and the sizes of the feature maps can be set as the practitioner requires. Embodiment 2 of the feature extraction module 1 is shown in FIG. 4: the module comprises four layers of sub-networks through which the fundus image is downsampled repeatedly, the resolution gradually decreasing and the number of feature channels gradually increasing. The original 400 × 400 × 3 fundus image is processed by the first layer into a 150 × 150 × 40 feature map 1, by the second layer into a 75 × 75 × 80 feature map 2, by the third layer into a 38 × 38 × 160 feature map 3, and by the fourth layer into a 19 × 19 × 320 feature map 4.
According to other embodiments of the present invention, the feature extraction module 1 can take a variety of alternative network structures, for example CNN convolution blocks, Transformer blocks based on the self-attention mechanism, the feature extraction part of networks such as U-Net, or combinations thereof.
In one embodiment, each layer of the sub-network of feature extraction module 1 may be built from CNN convolution blocks, i.e. any combination of batch normalization (BN) layers, convolution layers, pooling layers, activation layers and the like. Referring to FIG. 5 and taking the second-layer sub-network of Embodiment 1 (FIG. 3) as an example, it may comprise convolution block 1 and convolution block 2: feature map 1 is input to convolution block 1 and processed in turn by its batch normalization, convolution and activation layers, then input to convolution block 2 and processed in turn by its batch normalization, convolution, activation and average pooling layers to obtain feature map 2. Specifically, the 300 × 300 × 20 feature map 1 passes through the batch normalization layer of convolution block 1 to give a 300 × 300 × 20 feature map 1.1, through the convolution layer to give a 300 × 300 × 40 feature map 1.2, and through the activation layer to give a 300 × 300 × 40 feature map 1.3; feature map 1.3 is input to convolution block 2, whose batch normalization layer gives a 300 × 300 × 40 feature map 1.4, convolution layer a 300 × 300 × 40 feature map 1.5, activation layer a 300 × 300 × 40 feature map 1.6, and average pooling layer the 150 × 150 × 40 feature map 2. The convolution of block 1 is thus configured to double the number of feature channels, from 20 to 40, while the convolution of block 2 keeps the channel count unchanged; both activation layers use the ReLU activation function.
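A minimal PyTorch sketch of this second-layer sub-network follows, assuming 3 × 3 convolutions and 2 × 2 average pooling (kernel sizes are not specified in the text):

    import torch
    import torch.nn as nn

    def conv_block(cin, cout, pool=False):
        # BN -> conv -> ReLU ordering as in FIG. 5; average pooling only in block 2.
        layers = [nn.BatchNorm2d(cin),
                  nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        if pool:
            layers.append(nn.AvgPool2d(kernel_size=2))
        return nn.Sequential(*layers)

    second_layer = nn.Sequential(
        conv_block(20, 40),              # block 1: doubles channels, keeps 300 x 300
        conv_block(40, 40, pool=True),   # block 2: keeps channels, pools to 150 x 150
    )

    x = torch.randn(1, 20, 300, 300)     # feature map 1
    print(second_layer(x).shape)         # torch.Size([1, 40, 150, 150])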
In another embodiment, the feature extraction module 1 combines CNN convolution blocks with Transformer blocks based on the self-attention mechanism. For example, in the structure shown in FIG. 5, convolution block 1 is replaced by a Transformer block, which uses a multi-head attention layer instead of a convolution layer, with everything else unchanged.
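A sketch of that substitution is shown below; it assumes the feature map is flattened into a token sequence for the multi-head attention layer (in practice such a large map would usually be patchified or downsampled first, and a projection would be needed if the block must also change the channel count):

    import torch
    import torch.nn as nn

    class AttnBlock(nn.Module):
        """Replaces the convolution layer of block 1 with multi-head self-attention."""
        def __init__(self, dim, heads=4):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):
            n, c, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)   # (N, H*W, C) token sequence
            tokens = self.norm(tokens)
            out, _ = self.attn(tokens, tokens, tokens)
            return out.transpose(1, 2).reshape(n, c, h, w)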
It should be understood that the number of network layers, the number of convolution blocks per layer, and the batch normalization, convolution, activation and pooling layers in the above embodiments can be adjusted to the actual situation. In other words, the specific neural network structures above are given only as examples to aid understanding of the technical solution; within the scope of the invention, those skilled in the art can set or adjust the concrete structure as needed. For instance, the convolution of block 2 could be configured to increase the channel count while the convolution of block 1 leaves it unchanged; or block 1 could adjust the channels from 20 to 30 and block 2 from 30 to 40. Similarly, the activation function can be chosen as needed, e.g. the Mish activation function, and the sizes and channel counts of the input and output images or intermediate feature maps can likewise be set and adjusted.
According to an embodiment of the present invention, the feature extraction module 1 may also adopt existing structures, for example UNet++, UPerNet, or the encoder (downsampling) part of a fully convolutional network (FCN) without skip links.
The segmentation prediction module 2 in FIG. 2, which may also be called the decoding module (Decoder), is a multi-layer neural network model that samples the fundus feature map to derive the segmentation prediction map corresponding to the fundus image. Preferably, the segmentation prediction module upsamples the fundus feature map a plurality of times into a multi-channel segmentation map and analyzes it to obtain the segmentation prediction map. Optionally, before the second and subsequent upsampling steps, the upsampled feature map is superimposed on the feature map output by the corresponding layer of the feature extraction module 1. The resolution of the fundus feature map from the last downsampling is increased layer by layer by a factor of 2, finally giving a multi-channel segmentation map whose resolution equals that of the original input image and whose channel count equals the total number of classes (for example, 8) covering the background pixel class, the optic disc pixel class, the arc-shaped spot pixel classes and the atrophy spot pixel classes. A maximum (Argmax) operation then yields the pixel class at each pixel position, and the classes of all pixels form the segmentation prediction map.
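The final step is a plain channel-wise argmax; a sketch with the 8-class example from the text (the class-id order is an assumption):

    import torch

    logits = torch.randn(1, 8, 600, 600)   # multi-channel segmentation map, 8 classes
    pred_map = logits.argmax(dim=1)        # (1, 600, 600): per-pixel class ids, e.g.
                                           # 0 = background, 1 = optic disc,
                                           # 2-5 = arc-shaped spots, 6-7 = atrophy spots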
The network formed by the feature extraction module 1 and the segmentation prediction module 2 has a U-shaped structure; an exemplary embodiment is shown in FIG. 6. The 600 × 600 × 3 fundus image is processed (downsampled) by the five sub-network layers of feature extraction module 1, the resolution halving after each layer while the channel count grows, successively giving feature map 1 (300 × 300 × 20), feature map 2 (150 × 150 × 40), feature map 3 (75 × 75 × 80), feature map 4 (38 × 38 × 160) and feature map 5 (19 × 19 × 320); feature map 5 is the fundus feature map from the last downsampling. It is then processed (upsampled) by the five sub-network layers of segmentation prediction module 2, the resolution doubling at each layer while the channel count gradually decreases, successively giving feature map 6 (38 × 38 × 160), feature map 7 (75 × 75 × 80), feature map 8 (150 × 150 × 40), feature map 9 (300 × 300 × 20) and feature map 10 (600 × 600 × 20), after which a 1 × 1 convolution gives feature map 11 (600 × 600 × 8), the multi-channel segmentation map. In every layer of segmentation prediction module 2 except the first, the feature map of the same resolution from the corresponding layer of feature extraction module 1 is also obtained through a skip link and superimposed on the output of the previous layer before upsampling.
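One decoder stage and the final 1 × 1 convolution can be sketched as below; channel-wise concatenation is assumed for the "superimposed" skip link, though element-wise addition would also fit the description:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UpStage(nn.Module):
        def __init__(self, cin, cskip, cout):
            super().__init__()
            self.conv = nn.Conv2d(cin + cskip, cout, kernel_size=3, padding=1)

        def forward(self, x, skip, out_size):
            x = torch.cat([x, skip], dim=1)      # superimpose the skip link first
            x = F.interpolate(x, size=out_size,  # then upsample to the next resolution
                              mode='bilinear', align_corners=False)
            return torch.relu(self.conv(x))

    head = nn.Conv2d(20, 8, kernel_size=1)       # final 1 x 1 conv to 8 class channels

    # e.g. feature map 6 (38 x 38 x 160) + skip from feature map 4 -> feature map 7
    stage = UpStage(cin=160, cskip=160, cout=80)
    fm7 = stage(torch.randn(1, 160, 38, 38), torch.randn(1, 160, 38, 38), (75, 75))
    print(fm7.shape)                             # torch.Size([1, 80, 75, 75])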
The fundus prediction module 3 in FIG. 2 is a neural network model for the fundus-type prediction task. It comprises a fully connected layer or a Transformer layer, predicts the fundus type of the fundus image from the fundus feature map, and outputs one or more of the normal fundus and the multiple myopia-associated fundus types. The fundus type predicted by the fundus prediction module 3 is a preliminary classification of the progression of the myopic fundus.
According to one embodiment of the present invention, the multiple myopia-associated fundus types are a combination of categories among the arc-shaped spot fundus, diffuse atrophic fundus, patchy atrophic fundus and macular atrophic fundus. In addition, the leopard-streak fundus is also a common manifestation of the myopic fundus; it can be added so that the corresponding fundus semantic information is learned, improving system performance. Preferably, the multiple myopia-associated fundus types are a combination of categories among the arc-shaped spot fundus, leopard-streak fundus, diffuse atrophic fundus, patchy atrophic fundus and macular atrophic fundus; for example, all of these, or with one or more categories removed, e.g. removing the leopard-streak fundus leaves the arc-shaped spot fundus, diffuse atrophic fundus, patchy atrophic fundus and macular atrophic fundus. It will be appreciated that the practitioner may also add one or more categories beyond the myopia-associated fundus categories described above, as desired.

Since the arc-shaped spot, leopard-streak, diffuse atrophic, patchy atrophic and macular atrophic manifestations can coexist in a myopic fundus, the activation function of the fundus prediction module 3 is a Sigmoid activation function suited to multi-label classification: each class label is judged positive or negative against its own independent threshold, and all predicted classes are then aggregated. The normal fundus, however, is mutually exclusive with the myopia-associated fundus types; if the normal fundus is output, no myopia-associated type is output. According to the applicant's study of the training data, the fundus appearances of the subdivided pigmented, choroidal, mixed and scleral arc-shaped spots are similar when observed on the whole fundus, so it is difficult to train the fundus prediction module to distinguish the fundus types corresponding to these subdivisions, whereas the diffuse atrophic, patchy atrophic and macular atrophic fundus can be clearly distinguished at the whole-fundus level. Therefore, to classify more accurately, the fundus types corresponding to the several arc-shaped spots are unified into a single arc-shaped spot fundus, reducing the impact on model parameters, while diffuse atrophy, patchy atrophy and macular atrophy each correspond to their own fundus type, providing richer and more efficient semantic information.
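A sketch of such a multi-label head with per-class thresholds and the normal-fundus exclusion rule; the class names, ordering and threshold values are illustrative assumptions:

    import torch
    import torch.nn as nn

    FUNDUS_CLASSES = ['normal', 'arc_spot', 'leopard', 'diffuse_atrophy',
                      'patchy_atrophy', 'macular_atrophy']       # assumed order
    THRESHOLDS = torch.tensor([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])    # independent, tunable

    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(320, len(FUNDUS_CLASSES)))

    feat = torch.randn(1, 320, 19, 19)               # fundus feature map
    probs = torch.sigmoid(head(feat))[0]             # Sigmoid: multi-label activation
    positive = probs >= THRESHOLDS
    if positive[0]:                                  # 'normal' excludes every other type
        labels = ['normal']
    else:
        labels = [c for c, p in zip(FUNDUS_CLASSES[1:], positive[1:]) if p]
    print(labels)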
The following describes the training of the feature extraction module 1, segmentation prediction module 2 and fundus prediction module 3. In one example, 3814 fundus pictures were used in total, all from the real world, obtained by random sampling and covering various age groups, camera brands and degrees of myopia. Each picture was labeled by a professional ophthalmologist. The labels comprise:
fundus type labels: normal fundus, arc-shaped spot fundus, leopard-streak fundus, diffuse atrophic fundus, macular atrophic fundus;
pixel class labels (pixel-level segmentation labels): background, optic disc, pigmented arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy and patchy atrophy.
The training and test sets can be divided on the 8:2 principle: 3000 training samples were drawn at random from the 3814 pictures, and the remaining 814 were used as the test set.
The total loss calculated during training equals the weighted sum of the fundus classification sub-loss and the fundus segmentation sub-loss:

L_all = α*L_seg + β*L_clf-1

where L_all denotes the total loss, L_seg the fundus segmentation sub-loss and L_clf-1 the fundus classification sub-loss; α and β are the corresponding weights. The fundus segmentation sub-loss is typically a Dice loss, pixel-level cross entropy or the like, and the fundus classification sub-loss can be cross entropy or any classification loss; it is understood that α and β may be adjusted as appropriate. In this embodiment the system is trained with high-level semantic supervision at the whole-image level (the fundus and eye categories) together with pixel-level semantic supervision for every pixel of the image, so that the system learns knowledge of different aspects at different levels more fully and model performance improves.
The fundus image analysis described above mainly analyzes the possible lesions of the whole fundus and the class of each pixel, and can help a doctor or optometrist understand the myopic condition of the examined person's fundus and the distribution of the arc-shaped spot and atrophy spot pixel classes in the fundus image. However, because no fine-grained quantitative analysis is performed on the main lesion types of the myopic fundus, the severity of the corresponding conditions and the basis for prognosis remain unclear, making it inconvenient for doctors or optometrists to give better diagnosis, treatment or prevention advice to myopic patients. The above system can therefore be further improved.
According to an embodiment of the present invention, referring to fig. 7, the fundus image analysis system further includes an eye classification module 4 and a quantitative analysis module 5.
The eye classification module 4 is a neural network for predicting the eye category (left eye or right eye) of the fundus image. It comprises a fully connected layer or a Transformer layer, predicts the eye category from the fundus feature map, and outputs left eye or right eye. The eye category is used by the quantitative analysis module 5 to divide the fundus into regions around the optic disc center. Since the left-eye and right-eye classes cannot occur simultaneously, the module performs a multi-class classification task using a softmax activation function, and the class with the maximum probability (Argmax) is taken as the output.
Because the added eye classification module 4 is also a neural network model, it requires training. Referring to FIG. 8, the inputs to the system comprise the fundus image and three labels (fundus type label, eye category label and pixel class label), and the model outputs are the fundus type, eye category and pixel classes. The total loss calculated during training now equals the weighted sum of the fundus classification sub-loss, the fundus segmentation sub-loss and the eye classification sub-loss:

L_all = α*L_seg + β*L_clf-1 + γ*L_clf-2

where L_all denotes the total loss, L_seg the fundus segmentation sub-loss, L_clf-1 the fundus classification sub-loss and L_clf-2 the eye classification sub-loss, and α, β and γ are their respective weights. A loss calculation module can be provided in the system to compute the losses during training. The eye classification sub-loss may be cross entropy or any classification loss, and α, β, γ can be adjusted as appropriate; for example, α, β, γ may be set to 0.2, 1 and 0.4 respectively. The invention thus integrates fundus classification, eye classification and pixel-level segmentation of the background, optic disc and various lesions (pigmented, choroidal, scleral and mixed arc-shaped spots, diffuse atrophy and patchy atrophy) into a unified end-to-end training framework, fully exploiting the advantages of each type of model and improving the prediction accuracy of the system.
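A sketch of this joint objective, using pixel-level cross entropy for the segmentation sub-loss and the example weights 0.2, 1 and 0.4 (the concrete loss functions are assumptions within the options the text allows):

    import torch.nn as nn

    alpha, beta, gamma = 0.2, 1.0, 0.4        # example weights from the text

    seg_loss_fn = nn.CrossEntropyLoss()       # pixel-level cross entropy (Dice also possible)
    fundus_loss_fn = nn.BCEWithLogitsLoss()   # multi-label fundus classification
    eye_loss_fn = nn.CrossEntropyLoss()       # left/right eye, multi-class

    def total_loss(seg_logits, seg_target,        # (N, 8, H, W) vs (N, H, W)
                   fundus_logits, fundus_target,  # (N, 6) vs (N, 6) multi-hot floats
                   eye_logits, eye_target):       # (N, 2) vs (N,)
        l_seg = seg_loss_fn(seg_logits, seg_target)
        l_clf1 = fundus_loss_fn(fundus_logits, fundus_target)
        l_clf2 = eye_loss_fn(eye_logits, eye_target)
        return alpha * l_seg + beta * l_clf1 + gamma * l_clf2

    # Backpropagating this single scalar updates the feature extraction, segmentation
    # prediction, fundus prediction and eye classification modules jointly, end to end.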
The quantitative analysis module 5 can give a comprehensive evaluation based on the segmentation prediction map, the fundus type and the eye category. It post-processes the segmentation prediction map output by the segmentation prediction module 2, the fundus type output by the fundus prediction module 3 and the eye category output by the eye classification module 4, producing number indices, area indices, grading indices and combinations thereof. For myopic fundus at different stages such as early myopia, high myopia and pathological myopia, the invention uses the prediction of the segmentation prediction module to perform finer-grained quantitative grading of the atrophic fundus (corresponding to high and pathological myopia) and the arc-shaped spot fundus (corresponding to early myopia), with thresholds selected after random sampling and analysis over the whole population, enabling more accurate condition analysis and better diagnosis and treatment advice for myopic patients.
If the classification result predicted by the fundus prediction module 3 is a normal fundus, the quantitative analysis module 5 does not need to perform quantitative analysis.
If the classification result predicted by the fundus prediction module 3 includes atrophy spot types (diffuse atrophy, patchy atrophy or macular atrophy), the module quantitatively analyzes the corresponding segmentation classes (diffuse atrophy and patchy atrophy); the analysis covers the total number, total area, maximum area and minimum area of the lesions, the ratio of the total lesion area to the total optic disc area, and fine-grained quantitative grading. The area indices are numbers of pixels relative to the original image resolution; pixel areas can be converted to real physical areas (actual areas) according to the real physical fundus diameters corresponding to different camera brands.
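These count and area indices can be sketched with connected-component labelling; the class ids and the physical-area conversion factor are assumptions:

    import numpy as np
    from scipy import ndimage

    def lesion_indices(pred_map, lesion_cls, disc_cls=1, mm2_per_pixel=None):
        """Count/area indices for one atrophy class in a segmentation prediction map."""
        lesion_mask = pred_map == lesion_cls
        labeled, n = ndimage.label(lesion_mask)          # connected lesion components
        pixel_areas = np.asarray(ndimage.sum(lesion_mask, labeled,
                                             index=np.arange(1, n + 1)), dtype=float)
        disc_pixels = float(np.count_nonzero(pred_map == disc_cls))
        scale = mm2_per_pixel if mm2_per_pixel is not None else 1.0  # optional physical area
        return {'count': n,
                'total_area': pixel_areas.sum() * scale,
                'max_area': pixel_areas.max() * scale if n else 0.0,
                'min_area': pixel_areas.min() * scale if n else 0.0,
                'area_ratio': pixel_areas.sum() / max(disc_pixels, 1.0)}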
For example, if the fundus prediction module 3 predicts a diffusely atrophic fundus, the lesion value of that lesion type (diffuse atrophy) is determined according to one or more predetermined quantitative indices (e.g. total number, total area, maximum area, minimum area, the ratio of the total diffuse-atrophy area to the total optic disc area, or combinations thereof). For instance, the practitioner may take the ratio of the total diffuse-atrophy area to the total optic disc area as the lesion value for grading diffuse atrophy (it should be understood that a weighted sum of several quantitative indices could equally serve as the lesion value, if desired). Macular atrophy is in fact patchy atrophy occurring in the macular area; it is one kind of patchy atrophy but of higher severity, so the invention assigns it its own fundus type, and if it is present the corresponding fundus type is output to alert the doctor, allowing diagnosis and treatment advice to be given to the myopic patient more precisely. When both a patchy atrophic fundus and a macular atrophic fundus appear, however, the quantitative grading may treat the macular atrophy (macular atrophic patches) jointly with the patchy atrophy, and the severity of the macular atrophy can be further subdivided according to the grade.
According to one embodiment of the invention, fine-grained grading of atrophy spots requires determining the grading threshold intervals for the corresponding atrophy type, and the grading thresholds that bound these intervals are obtained as follows: part of the samples are randomly drawn from a collected sample set covering multiple age groups, multiple regions and multiple degrees of myopia; a sampling interval is determined from the grading granularity of the atrophy spots and the total number of drawn samples; all drawn samples are sorted by the quantitative index used for atrophy grading, and the grading thresholds dividing the intervals are sampled from the sorted sequence at the sampling interval. For the fine-grained quantitative grading of the segmentation classes (diffuse atrophy, patchy atrophy), taking ten-level grading of the patchy atrophy class as an example, the following method is adopted:
randomly sample 100,000 images matching the population distribution (for example, data covering various age groups, various regions, different myopia degrees and so on), calculate for each image the ratio of the total lesion area to the total optic disc area (both areas can be pixel areas or actual areas calculated from them, for example the ratio of the total pixel area of patch-atrophy lesions to the total pixel area of the optic disc), and sort the images from small to large;
at intervals of every 10,000 cases, sequentially intercept 9 ratios of the total patch-atrophy lesion area to the total optic disc area, and take them as the thresholds dividing ten grades (9 grading thresholds, constructing the grading threshold intervals of the 10 grades), used in the subsequent system inference and prediction stage to determine the fine-grained grade of patch atrophy. For example, if the 9 grading thresholds are 0.05318, 0.11904, 0.18408, 0.258, 0.32726, 0.40234, 0.50363, 0.6608 and 0.97272, the corresponding 10 grading threshold intervals are (0, 0.05318) for grade 1, [0.05318, 0.11904) for grade 2, [0.11904, 0.18408) for grade 3, [0.18408, 0.258) for grade 4, [0.258, 0.32726) for grade 5, [0.32726, 0.40234) for grade 6, [0.40234, 0.50363) for grade 7, [0.50363, 0.6608) for grade 8, [0.6608, 0.97272) for grade 9, and [0.97272, +∞) for grade 10;
the thresholds may be re-derived by random sampling again as needed.
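This equal-frequency threshold derivation can be sketched as follows; the function name is illustrative, and the 100,000-sample figure and ten-grade setting are the example values above.

import numpy as np

def derive_grading_thresholds(ratios, num_grades=10):
    # ratios: lesion-to-disc area ratio of each sampled image
    ordered = np.sort(np.asarray(ratios, dtype=float))
    step = len(ordered) // num_grades   # sampling interval, e.g. 10,000 for 100,000 samples
    # take every step-th value from the interior of the ordered sequence
    return [float(ordered[k * step]) for k in range(1, num_grades)]  # 9 thresholds for 10 grades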
In the fundus of a myopic patient, as the degree of myopia increases, traction on the optic disc gradually worsens: arc-shaped spots generally appear first in the temporal region of the optic disc, then gradually develop to the temporal upper and temporal lower regions, then to the nasal upper and nasal lower regions, and finally to the nasal region, even forming annular arc-shaped spots. That is, the severity of arc-shaped spots in the different regions around the optic disc, from low to high, is: temporal region; temporal upper and temporal lower regions; nasal upper and nasal lower regions; nasal region. Likewise, as myopia progresses and disc traction worsens, a pigment arc-shaped spot may appear first, followed by a choroidal arc-shaped spot, a mixed arc-shaped spot, and finally a scleral arc-shaped spot. That is, the severity of the arc-shaped spot categories from low to high is: pigment arc-shaped spot, choroidal arc-shaped spot, mixed arc-shaped spot, scleral arc-shaped spot.
In one embodiment, if the classification result of the fundus prediction module 3 includes an arc-shaped spot type (at least one of a pigment arc-shaped spot, a choroidal arc-shaped spot, a scleral arc-shaped spot and a mixed arc-shaped spot), the quantitative analysis module 5 performs a more accurate quantitative analysis of the arc-shaped spot types according to the predicted segmentation prediction map, including the following analyses:
taking the morphological center of the optic disc region in the segmentation prediction map as a reference, the fundus region is divided into a nasal side and a temporal side according to the left/right-eye classification result. For a left eye, the left side of the optic disc center is the nasal side and the right side is the temporal side; for a right eye, the right side of the optic disc center is the nasal side and the left side is the temporal side. It should be understood that the practitioner may also divide the fundus region into corresponding sub-regions by drawing a vertical line through the optic disc center and rotating it by different angles about the center. An exemplary division result is shown in fig. 9, which takes the left eye as an example and rotates the vertical line clockwise and counterclockwise by 45 degrees, dividing the entire fundus into six regions: temporal, temporal upper, temporal lower, nasal upper, nasal lower and nasal. Regions of finer granularity can be obtained by rotating through more angles; 45 degrees is used here for description;
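By way of illustration only, the six-region division just described (vertical line through the disc center rotated by ±45 degrees, mirrored for left and right eyes) might be implemented as below; the angular conventions and region names are assumptions of this sketch.

import numpy as np

def fundus_regions(h, w, disc_cy, disc_cx, eye="left"):
    # Angle of every pixel around the disc center; 0 deg points to the image
    # right and angles grow counter-clockwise (the image y axis points down).
    ys, xs = np.mgrid[0:h, 0:w]
    ang = np.degrees(np.arctan2(disc_cy - ys, xs - disc_cx)) % 360
    sectors = np.select(
        [(ang >= 45) & (ang < 90),    # upper, towards image right
         (ang >= 90) & (ang < 135),   # upper, towards image left
         (ang >= 135) & (ang < 225),  # image left
         (ang >= 225) & (ang < 270),  # lower, towards image left
         (ang >= 270) & (ang < 315)], # lower, towards image right
        [1, 2, 3, 4, 5],
        default=0)                    # sector 0: image right
    if eye == "left":   # left eye: right of the disc center is temporal
        names = ["temporal", "temporal_up", "nasal_up",
                 "nasal", "nasal_down", "temporal_down"]
    else:               # right eye: right of the disc center is nasal
        names = ["nasal", "nasal_up", "temporal_up",
                 "temporal", "temporal_down", "nasal_down"]
    return sectors, names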
in one embodiment, the areas of the arc-shaped spots are weighted and summed along two dimensions, namely the category of the arc-shaped spot and the sub-region in which it lies, using the following formula:
S = ∑i ∑j αi * βj * Sij
where αi denotes the region weight (also called area position weight) of the arc-shaped spot, with six sub-regions: temporal, temporal upper, temporal lower, nasal upper, nasal lower and nasal; according to the pathogenesis of arc-shaped spots, the position weights from small to large are: temporal < temporal upper, temporal lower < nasal upper, nasal lower < nasal. βj denotes the category weight of the arc-shaped spot, with four categories in total: pigment arc-shaped spot, choroidal arc-shaped spot, mixed arc-shaped spot and scleral arc-shaped spot; according to the pathogenesis of arc-shaped spots, the category weights from small to large are: pigment arc-shaped spot < choroidal arc-shaped spot < mixed arc-shaped spot < scleral arc-shaped spot. Sij denotes the pixel area, relative to the resolution of the original image, of a given arc-shaped spot category within a given sub-region; for example, if αi is taken as the temporal region and βj as the choroidal arc-shaped spot, then Sij denotes the pixel area of the choroidal arc-shaped spot in the temporal region. Optionally, in the invention α1, α2, α3, α4, α5, α6 correspond to the weights of the temporal, temporal upper, temporal lower, nasal upper, nasal lower and nasal regions (for example, taking the values 1, 1.2, 1.5 and 2 across the four severity tiers above); β1, β2, β3, β4 correspond to the weights of the pigment arc-shaped spot, the choroidal arc-shaped spot, the mixed arc-shaped spot and the scleral arc-shaped spot (for example, the values 0.5, 1, 1.5 and 2, respectively);
calculate the ratio of the total weighted arc-shaped spot lesion area to the total optic disc area based on the above formula;
perform fine-grained quantitative grading of the arc-shaped spots. Because the category characteristics and the regional distribution of arc-shaped spots affect the degree of fundus lesion differently, the method sets corresponding region weights and category weights when computing the weighted total area of the arc-shaped spots, making the grading quantization result of the arc-shaped spots more accurate.
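A minimal sketch of the weighted summation S = ∑i ∑j αi * βj * Sij follows, assuming the example weight values above (paired upper/lower sub-regions are assumed to share a tier value) and the pixel-class ids from the data-set description (2 = pigment, 3 = choroidal, 4 = scleral, 5 = mixed arc-shaped spot); `sectors` and `names` are the outputs of the region-division sketch above.

import numpy as np

# Region weights alpha (assumption: upper/lower pairs share a tier value).
ALPHA = {"temporal": 1.0, "temporal_up": 1.2, "temporal_down": 1.2,
         "nasal_up": 1.5, "nasal_down": 1.5, "nasal": 2.0}
# Category weights beta, keyed by pixel-class id:
# 2 = pigment (0.5), 3 = choroidal (1), 5 = mixed (1.5), 4 = scleral (2).
BETA = {2: 0.5, 3: 1.0, 5: 1.5, 4: 2.0}

def weighted_arc_spot_area(seg_map, sectors, names):
    total = 0.0
    for cls, beta in BETA.items():
        for idx, name in enumerate(names):
            s_ij = int(((seg_map == cls) & (sectors == idx)).sum())  # pixel area Sij
            total += ALPHA[name] * beta * s_ij
    return total  # divide by the disc pixel area to obtain the lesion value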
In one embodiment, the ratio of the total arc-shaped spot lesion area to the total optic disc area is used as the lesion value, and the grading thresholds needed to construct the grading threshold intervals for the arc-shaped spots are obtained as follows:
randomly sampling part of the samples from a collected sample set covering multiple age groups, multiple regions and multiple myopia degrees;
determining a sampling interval according to the grading granularity of the arc-shaped spots and the total number of sampled samples;
arranging all sampled samples in order of the ratio of the total arc-shaped spot lesion area to the total optic disc area, and sampling from the interior of the ordered sequence at the sampling interval to determine a plurality of thresholds dividing the grading threshold intervals for the arc-shaped spots.
For example, for ten grades, the following method is adopted:
randomly sampling 100,000 images matching the population distribution, calculating for each image the ratio of the total weighted arc-shaped spot area to the total optic disc area, and sorting from small to large;
at intervals of every 10,000 cases, sequentially intercepting 9 ratios, which serve as the grading thresholds dividing the ten grades;
in the system inference and prediction stage, these thresholds are used to determine the fine-grained grade of the arc-shaped spots (a lookup sketch follows);
the thresholds may be re-derived by random sampling again as needed.
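At inference time, mapping a lesion value to one of the ten grades is a simple interval lookup; the sketch below assumes the 9 ascending thresholds derived above and the half-open intervals of the earlier example.

import bisect

def fine_grained_grade(lesion_value, thresholds):
    # thresholds: 9 ascending grading thresholds; returns a grade in 1..10,
    # matching intervals (0, t1), [t1, t2), ..., [t9, +inf)
    return bisect.bisect_right(thresholds, lesion_value) + 1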
Regarding the fine-grained quantitative analysis of the arc-shaped spots, in one embodiment the module finally outputs the quantity index, the area index and the fine-grained grading index of the arc-shaped spots, for example the minimum lesion area, the maximum lesion area, the total lesion area, and the ratio of the total lesion area to the total optic disc area. Further, the total number, the total pixel area and the maximum pixel area of each arc-shaped spot category in each fundus sub-region can be output.
Therefore, by means of machine learning the system provided by the invention overcomes the limitation that traditional myopic-fundus classification offers only a coarse classification related to pathological myopia: the segmentation prediction module predicts multiple myopia-associated lesions, and the quantitative analysis module provides quantity, area and/or grading indices of those lesions, supplying detailed parameters to doctors or optometrists, facilitating the discovery of potential eye disease, and supporting diagnosis, treatment or eye-care advice for myopia patients.
According to an embodiment of the present invention, a fundus image analysis method is provided, which can be executed by an electronic device such as a computer or a server. The method analyzes a fundus image by means of the fundus image analysis system comprising the neural network described above; for details not repeated in the method embodiments, reference may be made to the system embodiments above.
In order to verify the effect of the present invention, the applicant also performed corresponding experiments, and the following are experimental descriptions:
1. description of data set
A total of 3,814 fundus images were used; all come from the real world, were obtained by random sampling, and cover all age groups, camera brand distributions and different myopia degrees.
The original fundus images were uniformly resized to a resolution of 600 × 600; each is a three-channel RGB color image in a format such as jpg, png or tif. Each picture was labeled by a professional ophthalmologist. The labeling content comprises:
fundus type label: normal fundus, arc-shaped spot fundus, leopard-streak fundus, diffuse atrophic fundus, patch atrophic fundus, macular area atrophic fundus; the input is a list of numbers in the range 0-5. For example, [0] indicates a normal fundus, and [1,2,3] indicates the co-existence of an arc-shaped spot fundus, a leopard-streak fundus and a diffuse atrophic fundus. Apart from the normal fundus category, multiple category labels may be present simultaneously.
Eye identification label: left eye or right eye; the input is a single-element list with only two possibilities, [0] or [1], representing the left eye and the right eye respectively.
Pixel category label (pixel-level segmentation label): there are the following segmentation categories: background, optic disc, pigment arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy and patch atrophy. The input format is a 600 × 600 single-channel array; each pixel position is an integer 0-7 indicating the segmentation category to which the pixel belongs, where 0, 1, 2, 3, 4, 5, 6 and 7 respectively represent background, optic disc, pigment arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy and patch atrophy.
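For illustration, one annotated sample under these conventions could be encoded as follows (all values are hypothetical):

import numpy as np

fundus_labels = [1, 2, 3]   # arc-shaped spot, leopard-streak and diffuse atrophic fundus present
eye_label = [0]             # left eye
pixel_labels = np.zeros((600, 600), dtype=np.uint8)  # one class id 0-7 per pixel
pixel_labels[280:320, 290:330] = 1                   # e.g. a patch of optic-disc pixels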
The training and test sets were divided according to an approximately 8:2 principle: 3,000 pictures were randomly sampled from the 3,814 as the training set, and the remaining 814 served as the test set.
2. Model architecture corresponding to system
In the fundus image analysis system, the feature extraction module and the segmentation prediction module were configured as shown in fig. 6, and the fundus prediction module and the eye classification module were each configured as a fully connected layer (fully connected network).
3. Summary of the training process
As shown in table 1 below, the model of the 97th epoch was finally adopted as the optimal model for myopic fundus segmentation and identification. The segmentation loss refers to the loss Lseg calculated from the segmentation prediction result; the classification loss refers to the weighted sum of Lclf-1 and Lclf-2 corresponding to the fundus type and eye type predictions; the total loss refers to Lall.
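The joint objective Lall = α*Lseg + β*Lclf-1 + γ*Lclf-2 can be sketched as below, assuming PyTorch, cross-entropy for the segmentation and eye heads, and a multi-label binary cross-entropy for the fundus-type head; the patent does not prescribe these particular component losses or weight values here.

import torch
import torch.nn.functional as F

def total_loss(seg_logits, seg_target,        # (N, 8, H, W) logits and (N, H, W) int64
               fundus_logits, fundus_target,  # (N, 6) logits and multi-hot floats
               eye_logits, eye_target,        # (N, 2) logits and (N,) int64
               alpha=1.0, beta=1.0, gamma=1.0):
    l_seg = F.cross_entropy(seg_logits, seg_target)                            # Lseg
    l_clf1 = F.binary_cross_entropy_with_logits(fundus_logits, fundus_target)  # Lclf-1
    l_clf2 = F.cross_entropy(eye_logits, eye_target)                           # Lclf-2
    return alpha * l_seg + beta * l_clf1 + gamma * l_clf2                      # Lall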
For classification prediction, the AUC (area under the sensitivity-specificity/ROC curve) of each category was evaluated; larger values are better;
for segmentation prediction, the IoU of each category (the ratio of the intersection to the union of the model-predicted lesion region and the ground-truth labeled lesion region) was used for evaluation; larger values are better;
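These two metrics can be computed as follows; a sketch assuming scikit-learn for per-category AUC and a plain per-category IoU over integer label maps.

import numpy as np
from sklearn.metrics import roc_auc_score

def per_class_iou(pred, target, num_classes=8):
    # pred/target: integer label maps of identical shape
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

def per_class_auc(y_true, y_score):
    # y_true: (N, C) multi-hot labels; y_score: (N, C) predicted probabilities
    return [roc_auc_score(y_true[:, c], y_score[:, c]) for c in range(y_true.shape[1])]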
the total loss of the model is smallest at the 97th training round (epoch), and the corresponding indices of each segmentation and classification category are relatively optimal. Therefore, the modules obtained in the 97th round of training were selected to form the fundus image analysis system and deployed into a corresponding fundus image analysis device.
TABLE 1
[Table 1 is reproduced as an image in the original publication; it lists, per training epoch, the segmentation loss, classification loss, total loss, and the per-category AUC and IoU.]
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A fundus image analysis system comprising a feature extraction module, a fundus prediction module, and a segmentation prediction module, wherein,
the feature extraction module samples the fundus image to be analyzed to extract a fundus feature map;
the fundus prediction module analyzes the fundus type corresponding to the fundus image according to the fundus feature map, wherein the fundus type comprises a normal fundus and a plurality of myopia-associated fundus;
the segmentation prediction module samples the fundus feature map to analyze a corresponding segmentation prediction map for a fundus image, which indicates a class of each pixel in the fundus image, including a background pixel class, a disc pixel class, a plurality of arc spot pixel classes, and a plurality of atrophy spot pixel classes.
2. The system of claim 1, wherein the feature extraction module downsamples the fundus image a plurality of times to obtain the fundus feature map; and the segmentation prediction module upsamples the fundus feature map a plurality of times to obtain a multi-channel segmentation map, and performs analysis based on the multi-channel segmentation map to obtain the segmentation prediction map.
3. A system according to claim 1 or 2, characterized in that the system is trained in the following way:
acquiring training data which comprises a plurality of fundus pictures, fundus category labels and pixel category labels;
and training the system by using training data, wherein the fundus classification sub-loss is calculated according to the output of the fundus prediction module and the fundus category label, the fundus segmentation sub-loss is calculated according to the output of the segmentation prediction module and the pixel category label, the total loss is calculated according to the fundus classification sub-loss and the fundus segmentation sub-loss, and gradient calculation and parameter updating are performed on the feature extraction module, the fundus prediction module and the segmentation prediction module based on the total loss.
4. The system of claim 1, further comprising an eye classification module and a quantitative analysis module, wherein,
the eye classification module determines an eye classification corresponding to the fundus image according to the fundus feature map, wherein the eye classification is a left eye or a right eye;
the quantitative analysis module carries out quantitative analysis according to the fundus type and the segmentation prediction chart corresponding to the fundus image, or carries out quantitative analysis according to the fundus type, the eye type and the segmentation prediction chart corresponding to the fundus image, so as to obtain various quantitative indexes.
5. The system of claim 4, wherein the system is trained in the following manner:
acquiring training data which comprises a plurality of fundus pictures, fundus category labels, eye category labels and pixel category labels;
the method comprises the steps of training a system by utilizing training data, wherein fundus classification sub-losses are calculated according to the output of a fundus prediction module and fundus category labels, eye classification sub-losses are calculated according to the output of the eye classification module and eye category labels, fundus segmentation sub-losses are calculated according to the output of a segmentation prediction module and pixel category labels, total losses are calculated according to the fundus classification sub-losses, the eye classification sub-losses and the fundus segmentation sub-losses, and gradient calculation and parameter updating are performed on a feature extraction module, the fundus prediction module, the segmentation prediction module and the eye classification module on the basis of the total losses.
6. The system of claim 5, wherein the total loss is calculated as follows:
Lall=α*Lseg+β*Lclf-1+γ*Lclf-2
where Lall denotes the total loss, Lseg denotes the fundus segmentation sub-loss, Lclf-1 denotes the fundus classification sub-loss, Lclf-2 denotes the eye classification sub-loss, α denotes the weight of the fundus segmentation sub-loss, β denotes the weight of the fundus classification sub-loss, and γ denotes the weight corresponding to the eye classification sub-loss.
7. The system of claim 6, wherein the plurality of quantitative indicators comprises a number indicator and an area indicator of a lesion, and wherein the quantitative analysis module is configured to:
when the fundus type corresponding to the fundus image is any myopia-associated fundus, the degree of fundus lesions corresponding to the fundus image is graded according to at least one quantitative index of the quantity index and the area index of the focus, and a grading index is obtained.
8. The system according to claim 7, wherein when the fundus type corresponding to the fundus image is any myopia-associated fundus, the grading of the degree of fundus lesion corresponding to the fundus image according to at least one of the quantitative index and the area index of the lesion comprises:
determining a lesion value of the fundus lesion according to at least one of the quantity index and the area index of the lesion, and determining the grade of the fundus lesion according to the grading threshold interval in which the lesion value lies, wherein the grading thresholds used to construct the grading threshold intervals are obtained as follows:
randomly sampling part of samples from a collected sample set containing a plurality of age groups, a plurality of regions and a plurality of myopia degrees;
determining a sampling interval according to the graded granularity and the number of all samples sampled;
arranging the lesion values corresponding to all sampled samples in order of magnitude, and sampling from the interior of the ordered sequence at the sampling interval to determine a plurality of grading thresholds for grading.
9. The system of claim 8, wherein the area indicator comprises: the minimum area of the lesion, the maximum area of the lesion, the total area of the lesion, and the ratio of the total area of the lesion to the total area of the optic disc.
10. The system according to claim 9, wherein when the fundus type corresponding to the fundus image is the fundus oculi with the arc-shaped macula, the degree of the arc-shaped macula lesion corresponding to the fundus image is graded according to the ratio of the total lesion area of the arc-shaped macula to the total optic disc area.
11. The system of claim 10, wherein the total lesion area of the arc-shaped spot is a weighted area, wherein the temporal side and the nasal side of the optic disc on the segmentation prediction map are determined according to the eye category and the segmentation prediction map is divided into a plurality of sub-areas according to the temporal side and the nasal side, and the weighted area is obtained by weighted summation of the area of the arc-shaped spot on the segmentation prediction map according to the area weight of each sub-area and the category weight of the plurality of arc-shaped spot pixel categories.
12. The system of claim 11, wherein the region weights of the subregions relatively closer to the nasal side are greater than the region weights of the subregions relatively further from the nasal side.
13. The system of claim 11, wherein the plurality of arc-shaped spot pixel categories comprise: a pigment arc-shaped spot, a choroidal arc-shaped spot, a mixed arc-shaped spot and a scleral arc-shaped spot, and the category weights corresponding to the pigment arc-shaped spot, the choroidal arc-shaped spot, the mixed arc-shaped spot and the scleral arc-shaped spot increase from small to large in that order.
14. The system according to any one of claims 1 to 13, wherein the plurality of myopia-associated fundus is a combination of categories among the arc-shaped spot fundus, the leopard-streak fundus, the diffuse atrophic fundus, the patch atrophic fundus and the macular area atrophic fundus; the plurality of arc-shaped spot pixel categories is a combination of categories among the pigment arc-shaped spot, the choroidal arc-shaped spot, the mixed arc-shaped spot and the scleral arc-shaped spot; and the plurality of atrophy spot pixel categories include diffuse atrophy and patch atrophy.
15. A fundus image analysis method based on the system of any one of claims 1 to 14, characterized in that said method comprises:
acquiring a fundus image to be analyzed;
sampling the fundus image by a feature extraction module to extract a fundus feature map;
analyzing the fundus type corresponding to the fundus image according to the fundus characteristic map by a fundus prediction module;
sampling, by a segmentation prediction module, the fundus feature map to analyze a corresponding segmentation prediction map for a fundus image, indicating a category for each pixel in the fundus image;
outputting the analyzed fundus type and segmentation prediction map.
16. A fundus image analysis method according to claim 15, said system further comprising an eye classification module and a quantitative analysis module, said method further comprising:
analyzing the eye classification corresponding to the fundus image by an eye classification module according to the fundus feature map, wherein the eye classification is a left eye or a right eye;
the quantitative analysis module carries out quantitative analysis according to the fundus type and the segmentation prediction map corresponding to the fundus image, or carries out quantitative analysis according to the fundus type, the eye type and the segmentation prediction map corresponding to the fundus image;
and outputting various quantization indexes.
17. A computer-readable storage medium, having embodied thereon a computer program, the computer program being executable by a processor to perform the steps of the method of claim 15 or 16.
18. An electronic device, comprising:
one or more processors; and
a memory, wherein the memory is to store executable instructions;
the one or more processors are configured to execute the executable instructions to implement the method of claim 15 or 16.
CN202111059503.2A 2021-09-10 2021-09-10 Fundus image analysis system, fundus image analysis method and electronic equipment Active CN113768460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111059503.2A CN113768460B (en) 2021-09-10 2021-09-10 Fundus image analysis system, fundus image analysis method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111059503.2A CN113768460B (en) 2021-09-10 2021-09-10 Fundus image analysis system, fundus image analysis method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113768460A true CN113768460A (en) 2021-12-10
CN113768460B CN113768460B (en) 2023-11-14

Family

ID=78842460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111059503.2A Active CN113768460B (en) 2021-09-10 2021-09-10 Fundus image analysis system, fundus image analysis method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113768460B (en)

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014031086A1 (en) * 2012-08-24 2014-02-27 Agency For Science, Technology And Research Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
US10413180B1 (en) * 2013-04-22 2019-09-17 VisionQuest Biomedical, LLC System and methods for automatic processing of digital retinal images in conjunction with an imaging device
CN105310645A (en) * 2014-06-18 2016-02-10 佳能株式会社 Image processing apparatus and image processing method
US20190096111A1 (en) * 2016-12-09 2019-03-28 Microsoft Technology Licensing, Llc Automatic generation of fundus drawings
US20180315193A1 (en) * 2017-04-27 2018-11-01 Retinopathy Answer Limited System and method for automated funduscopic image analysis
CN107292877A (en) * 2017-07-05 2017-10-24 北京至真互联网技术有限公司 A kind of right and left eyes recognition methods based on eye fundus image feature
US20170357879A1 (en) * 2017-08-01 2017-12-14 Retina-Ai Llc Systems and methods using weighted-ensemble supervised-learning for automatic detection of ophthalmic disease from images
CN107680684A (en) * 2017-10-12 2018-02-09 百度在线网络技术(北京)有限公司 For obtaining the method and device of information
CN108665447A (en) * 2018-04-20 2018-10-16 浙江大学 A kind of glaucoma image detecting method based on eye-ground photography deep learning
KR101953752B1 (en) * 2018-05-31 2019-06-17 주식회사 뷰노 Method for classifying and localizing images using deep neural network and apparatus using the same
CN109800789A (en) * 2018-12-18 2019-05-24 中国科学院深圳先进技术研究院 Diabetic retinopathy classification method and device based on figure network
CN110163839A (en) * 2019-04-02 2019-08-23 上海鹰瞳医疗科技有限公司 The recognition methods of leopard line shape eye fundus image, model training method and equipment
CN110236483A (en) * 2019-06-17 2019-09-17 杭州电子科技大学 A method of the diabetic retinopathy detection based on depth residual error network
CN110276356A (en) * 2019-06-18 2019-09-24 南京邮电大学 Eye fundus image aneurysms recognition methods based on R-CNN
CN110400289A (en) * 2019-06-26 2019-11-01 平安科技(深圳)有限公司 Eye fundus image recognition methods, device, equipment and storage medium
CN110570421A (en) * 2019-09-18 2019-12-13 上海鹰瞳医疗科技有限公司 multitask fundus image classification method and apparatus
CN110555845A (en) * 2019-09-27 2019-12-10 上海鹰瞳医疗科技有限公司 Fundus OCT image identification method and equipment
CN113011450A (en) * 2019-12-04 2021-06-22 深圳硅基智能科技有限公司 Training method, training device, recognition method and recognition system for glaucoma recognition
CN111046835A (en) * 2019-12-24 2020-04-21 杭州求是创新健康科技有限公司 Eyeground illumination multiple disease detection system based on regional feature set neural network
CN111144296A (en) * 2019-12-26 2020-05-12 湖南大学 Retina fundus picture classification method based on improved CNN model
CN112545452A (en) * 2020-12-07 2021-03-26 南京医科大学眼科医院 High myopia fundus lesion risk prediction method
CN112446875A (en) * 2020-12-11 2021-03-05 南京泰明生物科技有限公司 AMD grading system based on macular attention mechanism and uncertainty
CN113066066A (en) * 2021-03-30 2021-07-02 北京鹰瞳科技发展股份有限公司 Retinal abnormality analysis method and device
CN113177981A (en) * 2021-04-29 2021-07-27 中国科学院自动化研究所 Double-channel craniopharyngioma invasiveness classification and focus region segmentation system thereof
CN113222927A (en) * 2021-04-30 2021-08-06 汕头大学·香港中文大学联合汕头国际眼科中心 Automatic examination method for retinopathy additive lesion of premature infant

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NAGDEOTE, S. et al.: "Hybrid UNet Architecture based on Residual Learning of Fundus Images for Retinal Vessel Segmentation", Journal of Physics: Conference Series, vol. 2070, no. 1, page 012104 *
RONG, Yibiao: "Application of Convolutional Neural Networks in Ophthalmic Medical Images: Classification, Segmentation and Regression Analysis", China Doctoral Dissertations Full-text Database (Medicine & Health Sciences), no. 2021, pages 076-3 *
TANG, Jia: "Research on Automatic Grading of Myopic Maculopathy and Lesion Identification *** Based on Color Fundus Photographs", China Doctoral Dissertations Full-text Database (Medicine & Health Sciences), pages 9-27 *
LU, Ruyi: "Classification of Pathological Myopia and Segmentation of Chorioretinal Atrophy Based on Color Fundus Photographs", China Master's Theses Full-text Database (Medicine & Health Sciences), vol. 2013, pages 15-37 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612656A (en) * 2022-01-12 2022-06-10 山东师范大学 MRI image segmentation method and system based on improved ResU-Net neural network
CN114887232A (en) * 2022-07-15 2022-08-12 北京鹰瞳科技发展股份有限公司 Method for controlling red light irradiation of eye fundus and related product
CN114937307A (en) * 2022-07-19 2022-08-23 北京鹰瞳科技发展股份有限公司 Method for myopia prediction and related products
CN115424084A (en) * 2022-11-07 2022-12-02 浙江省人民医院 Fundus photo classification method and device based on class weighting network
CN116503405A (en) * 2023-06-28 2023-07-28 依未科技(北京)有限公司 Myopia fundus change visualization method and device, storage medium and electronic equipment
CN116503405B (en) * 2023-06-28 2023-10-13 依未科技(北京)有限公司 Myopia fundus change visualization method and device, storage medium and electronic equipment
CN117132777A (en) * 2023-10-26 2023-11-28 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium
CN117132777B (en) * 2023-10-26 2024-03-22 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113768460B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN113768460A (en) Fundus image analysis system and method and electronic equipment
Tan et al. Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network
Li et al. A large-scale database and a CNN model for attention-based glaucoma detection
Liao et al. Clinical interpretable deep learning model for glaucoma diagnosis
Cao et al. Hierarchical method for cataract grading based on retinal images using improved Haar wavelet
EP3659067B1 (en) Method of modifying a retina fundus image for a deep learning model
Kauppi et al. The diaretdb1 diabetic retinopathy database and evaluation protocol.
CN110197493A (en) Eye fundus image blood vessel segmentation method
Prentašić et al. Diabetic retinopathy image database (DRiDB): a new database for diabetic retinopathy screening programs research
Yang et al. Efficacy for differentiating nonglaucomatous versus glaucomatous optic neuropathy using deep learning systems
CN110013216B (en) Artificial intelligence cataract analysis system
Zhang et al. DeepUWF: an automated ultra-wide-field fundus screening system via deep learning
Sreng et al. Automated microaneurysms detection in fundus images using image segmentation
Kajan et al. Detection of diabetic retinopathy using pretrained deep neural networks
Firke et al. Convolutional neural network for diabetic retinopathy detection
Tobin Jr et al. Characterization of the optic disc in retinal imagery using a probabilistic approach
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Phridviraj et al. A bi-directional Long Short-Term Memory-based Diabetic Retinopathy detection model using retinal fundus images
Padilla-Pantoja et al. Etiology of macular edema defined by deep learning in optical coherence tomography scans
CN112652392A (en) Fundus anomaly prediction system based on deep neural network
Han et al. An automated framework for screening of glaucoma using cup-to-disc ratio and ISNT rule with a support vector machine
Velázquez-González et al. Detection and classification of non-proliferative diabetic retinopathy using a back-propagation neural network
Padmapriya et al. Image Processing Techniques in the Detection of Hemorrhages in Retinal Images (STARE & DRIVE)
Chalakkal Automatic Retinal Image Analysis to Triage Retinal Pathologies
Yadav et al. RD-Light-Net: Light Weight Network for Retinal Detachment Classification through Fundus Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant