WO2022160118A1 - OCT image classification method, system and device based on computer vision features - Google Patents

OCT image classification method, system and device based on computer vision features

Info

Publication number
WO2022160118A1
WO2022160118A1 (application PCT/CN2021/073936)
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
interest
oct
classification result
Prior art date
Application number
PCT/CN2021/073936
Other languages
English (en)
French (fr)
Inventor
相韶华
温华杰
肖志勇
赵建
Original Assignee
深圳技术大学 (Shenzhen Technology University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳技术大学 (Shenzhen Technology University)
Priority to PCT/CN2021/073936
Publication of WO2022160118A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/032Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Definitions

  • The invention belongs to the technical fields of computer vision and medical image processing, and in particular relates to an OCT image classification method, system, device, and storage medium based on computer vision features.
  • AMD: age-related macular degeneration;
  • DME: diabetic macular edema.
  • Optical coherence tomography (OCT) is a technique that images an object tomographically by measuring the intensity of its backscattered light. With its high resolution and non-contact, non-invasive characteristics, it is widely used as a clinical aid.
  • Early medical image classification relied on manual annotation: doctors observed large numbers of OCT transverse scan images, annotated them with text, and stored the annotations to determine the patient's disease type. This manual analysis is time-consuming and places certain demands on doctors' expertise, so an efficient and accurate automatic retinal OCT image classification method is needed to assist diagnosis.
  • At present, UNet is used for semantic segmentation of medical images; it can effectively segment lesion regions and its parameter count is relatively small, but semantic segmentation errors occur when OCT image noise is high.
  • Because retinal image structure is complex and not every ophthalmic disease shows obvious differences in OCT images, the accuracy of UNet-based semantic segmentation of retinal OCT lesions needs improvement. Given the high noise and irregularity of retinal OCT images, directly applying convolutional neural network recognition cannot effectively improve recognition accuracy.
  • Statistical methods have also been used to diagnose retinal OCT; they do not require large numbers of images for training, but their accuracy needs improvement and they cannot identify multiple diseases.
  • In view of the problems in the prior art, the present invention provides a method that can effectively remove the noise of OCT images and perform deep learning recognition, thereby improving the overall recognition accuracy and the confidence of the image's judgment result.
  • the technical scheme adopted in the present invention is:
  • the present invention provides an OCT image classification method based on computer vision features, the method comprising:
  • generating a first mask image from the OCT image, processing the first mask image, operating on the processed first mask image and the OCT image to obtain a first region of interest image in the OCT image; processing the first region of interest image, determining a foreground region in the OCT image, converting the foreground region into a second mask image, and processing the second mask image;
  • operating on the processed second mask image and the first region of interest image to extract a second region of interest image from the first region of interest image;
  • performing feature enhancement on the second region of interest image to obtain a third region of interest image;
  • classifying the third region of interest image with a convolutional neural network to obtain the classification result of the third region image, obtaining the confidence of the classification result, and displaying the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
  • the present invention provides an OCT image classification system based on computer vision features, the system comprising:
  • Determination module: used to generate a first mask image from the OCT image, process the first mask image, operate on the processed first mask image and the OCT image, and obtain the first region of interest image in the OCT image; process the first region of interest image, determine the foreground region in the OCT image, convert the foreground region into a second mask image, and process the second mask image;
  • Extraction module: used to operate on the processed second mask image and the first region of interest image and extract the second region of interest image from the first region of interest image;
  • Enhancement module: used to perform feature enhancement on the second region of interest image to obtain a third region of interest image;
  • Recognition module: used to classify the third region of interest image with a convolutional neural network, obtain the classification result of the third region image, obtain the confidence of the classification result, and display the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
  • The present invention also provides an OCT image classification device based on computer vision features, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, each step of the OCT image classification method based on computer vision features described in the first aspect is implemented.
  • The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each step of the OCT image classification method based on computer vision features described in the first aspect is implemented.
  • the present invention provides an OCT image classification method based on computer vision features.
  • The method includes: generating a first mask image from an OCT image, processing it, and operating on the processed first mask image and the OCT image to obtain a first region of interest image in the OCT image; processing the first region of interest image to determine a foreground region in the OCT image, converting the foreground region into a second mask image and processing it; operating on the processed second mask image and the first region of interest image to extract a second region of interest image; performing feature enhancement on the second region of interest image to obtain a third region of interest image; classifying the third region of interest image with a convolutional neural network to obtain its classification result, obtaining the confidence of the classification result, and displaying the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
  • The method adds a computer vision preprocessing stage that effectively removes OCT image noise and thus increases the signal-to-noise ratio, performs feature extraction and enhancement on sensitive regions, and finally applies deep learning recognition, improving both the overall recognition accuracy and the confidence of the image's judgment result.
  • FIG. 1 is a schematic flowchart of an OCT image classification method based on computer vision features of the present invention
  • FIG. 2 is a schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
  • FIG. 3 is another schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
  • FIG. 4 is another schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
  • FIG. 5 is another schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
  • FIG. 6 is a schematic diagram of program modules of the computer vision feature-based OCT image classification method of the present invention.
  • FIG. 1 is a schematic flowchart of an OCT image classification method based on computer vision features in an embodiment of the present application, and the method includes:
  • Step 101: Generate a first mask image from the OCT image, process the first mask image, operate on the processed first mask image and the OCT image, and obtain the first region of interest image in the OCT image; process the first region of interest image, determine the foreground region in the OCT image, convert the foreground region into a second mask image, and process the second mask image.
  • In this embodiment, a first mask image is generated from the pixels whose brightness value is less than 255 in the OCT image, and the first mask image is processed with a convolution kernel to remove its noise.
  • An operation is performed on the denoised first mask image and the OCT image to extract the first region of interest image of the OCT image; the first region of interest image is then denoised and its foreground and background are distinguished to obtain a second mask image.
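As an illustrative sketch of step 101, the mask generation and morphological denoising might look as follows; the patent does not name an implementation, so NumPy and SciPy are used here, the function name is hypothetical, and the 5×5 kernel is taken from the later detailed description:

```python
import numpy as np
from scipy import ndimage

def first_region_of_interest(oct_img: np.ndarray) -> np.ndarray:
    """Sketch of step 101: mask the pixels below full brightness, denoise
    the mask morphologically, and AND it with the original OCT image."""
    mask = oct_img < 255                               # first mask image
    k = np.ones((5, 5), dtype=bool)                    # 5x5 element (assumed from the detailed description)
    mask = ndimage.binary_opening(mask, structure=k)   # remove edge speckle
    mask = ndimage.binary_closing(mask, structure=k)   # restore interior integrity
    return np.where(mask, oct_img, 0)                  # AND-style masking
```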
  • Step 102 Perform operations on the processed second mask image and the first region of interest image to extract a second region of interest image in the first region of interest image.
  • In this embodiment, after the second mask image is denoised in step 101, an AND operation is performed with the first region of interest image to extract the second region of interest image from the first region of interest image.
  • Step 103 Perform feature enhancement processing on the second region of interest image to obtain a third region of interest image.
  • feature enhancement processing is performed on the second region of interest to obtain a third region of interest image with more obvious contrast, which improves the signal-to-noise ratio of the third region of interest image.
  • Step 104: Classify the third region of interest image with a convolutional neural network to obtain the classification result of the third region image, obtain the confidence level of the classification result, and display the final classification result on the OCT image according to that confidence level; the classification result includes the final classification result.
  • In this embodiment, the convolutional neural network predicts the image class: the image is input into the network to obtain multiple classification results and their confidence levels, and the classification result with the maximum confidence probability is displayed on the OCT image.
  • Convolutional neural networks can adopt different structures, such as AlexNet, VGG, Inception, and ResNet, etc., which are not limited here.
  • An embodiment of the present application provides an OCT image classification method based on computer vision features. The method includes: generating a first mask image from an OCT image, processing it, and operating on the processed first mask image and the OCT image to obtain the first region of interest image in the OCT image; processing the first region of interest image to determine the foreground region, converting the foreground region into a second mask image and processing it; operating on the processed second mask image and the first region of interest image to extract the second region of interest image; performing feature enhancement on the second region of interest image to obtain a third region of interest image; classifying the third region of interest image with a convolutional neural network to obtain its classification result, obtaining the confidence of the classification result, and displaying the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
  • The method adds a computer vision preprocessing stage that effectively removes OCT image noise and thus increases the signal-to-noise ratio, performs feature extraction and enhancement on sensitive regions, and finally applies deep learning recognition, improving both the overall recognition accuracy and the confidence of the image's judgment result.
  • Referring to FIG. 2, FIG. 2 is a schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Processing the first mask image and operating on the processed first mask image and the OCT image includes:
  • Step 201: Extract the peripheral contour of the processed first mask image and fill the elements within the contour;
  • Step 202: Use a convolution kernel to perform an erosion operation on the first mask image whose contour interior has been filled;
  • Step 203: Perform an AND operation on the eroded first mask image and the OCT image.
  • In this embodiment, a first mask image is generated from the pixels whose brightness value is less than 255 in the OCT image, as shown in formula (1). An opening operation is performed on the first mask image with a convolution kernel of size 5×5, followed by a closing operation, to remove noise at the edge of the first mask image while preserving the integrity of its interior.
  • The outer contour of the first mask image is then extracted and the elements within the contour are filled. A convolution kernel of size 5×5 is used to erode the first mask image, shrinking the edge range and reducing edge noise.
  • a first region of interest image in the OCT image is extracted by performing an AND operation on the processed first mask image and the OCT image.
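The contour filling and erosion of steps 201–203 can be sketched as below; `binary_fill_holes` is used as a simple stand-in for explicit contour extraction and interior filling, and the helper name is hypothetical:

```python
import numpy as np
from scipy import ndimage

def refine_first_mask(mask: np.ndarray) -> np.ndarray:
    """Sketch of steps 201-203: fill the interior of the mask's outer
    contour, then erode with a 5x5 kernel to shrink and denoise the edge."""
    filled = ndimage.binary_fill_holes(mask)            # fill elements inside the contour
    k = np.ones((5, 5), dtype=bool)                     # 5x5 erosion kernel
    return ndimage.binary_erosion(filled, structure=k)  # reduce the edge range
```

The refined mask would then be ANDed with the OCT image as in step 203.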
  • processing the first region of interest image to determine the foreground region in the OCT image includes:
  • a convolution kernel is used to perform an opening operation on the first region of interest image, and a median filtering process is performed on the first region of interest image after the opening operation.
  • In this embodiment, an opening operation is performed on the first region of interest image of the OCT image with a convolution kernel of size 3×3 to reduce its noise, and then a median filtering operation is performed to reduce salt-and-pepper noise.
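A minimal sketch of this denoising step, assuming a 3×3 grayscale opening and a 3×3 median filter (the median kernel size is not stated here, and the function name is hypothetical):

```python
import numpy as np
from scipy import ndimage

def denoise_first_roi(roi: np.ndarray) -> np.ndarray:
    """Sketch: grayscale opening with a 3x3 kernel, then median filtering
    to suppress salt-and-pepper noise."""
    opened = ndimage.grey_opening(roi, size=(3, 3))  # reduce general noise
    return ndimage.median_filter(opened, size=3)     # reduce salt-and-pepper noise
```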
  • To distinguish the foreground and background of the OCT image, a threshold p ∈ [0, 255] is set and stepped one by one from 0 to 255 so as to maximize formula (2), the between-class variance σ² = p₁(m₁ − v)² + p₂(m₂ − v)², and the p value corresponding to the maximum σ² is retained.
  • p₁ is the probability that an OCT image pixel brightness value is less than p;
  • m₁ is the mean value of the pixels whose brightness value is less than p;
  • p₂ is the probability that an OCT image pixel brightness value is greater than p;
  • m₂ is the mean value of the pixels whose brightness value is greater than p;
  • v is the mean value of all pixels in the OCT image.
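Given the variables above, formula (2) is consistent with the Otsu between-class variance; under that assumption, the brute-force search over p can be sketched as (function name hypothetical; pixels equal to p are counted in the upper class here):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Step p from 0 to 255 and keep the value maximizing the between-class
    variance sigma^2 = p1*(m1 - v)^2 + p2*(m2 - v)^2."""
    flat = img.ravel().astype(np.float64)
    v = flat.mean()                       # mean of all pixels
    best_p, best_var = 0, -1.0
    for p in range(256):
        lo = flat[flat < p]               # class with brightness < p
        hi = flat[flat >= p]              # class with brightness >= p
        if lo.size == 0 or hi.size == 0:
            continue
        p1, p2 = lo.size / flat.size, hi.size / flat.size
        m1, m2 = lo.mean(), hi.mean()
        var = p1 * (m1 - v) ** 2 + p2 * (m2 - v) ** 2
        if var > best_var:
            best_var, best_p = var, p
    return best_p
```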
  • Referring to FIG. 3, FIG. 3 is another schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Converting the foreground region into a second mask image and processing the second mask image includes:
  • Step 301: Convert the pixel brightness values of the foreground region to obtain the second mask image;
  • Step 302: Detect the contour of the second mask image, draw an approximate polygon around the contour, and fill the interior of the approximate polygon;
  • Step 303: Use a convolution kernel to perform median filtering on the filled approximate polygon in the second mask image;
  • Step 304: Perform a closing operation on the processed second mask image with a convolution kernel, and then perform an opening operation on the closed second mask image with a convolution kernel.
  • the luminance values of all pixels in the foreground area are converted according to formula (3) to obtain the second mask image.
  • Contour detection is performed on the second mask image, an approximate polygon is drawn for the contour, and the interior of the approximate polygon is filled.
  • The filled approximate polygons are median-filtered with a convolution kernel of size 21×21 to remove salt-and-pepper noise.
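Steps 301–304 can be sketched as follows; hole filling is substituted for the patent's approximate-polygon fill (which would normally use a polygon approximation routine), the 21×21 median comes from the text, and the 5×5 closing/opening kernel is an assumption since it is not stated here:

```python
import numpy as np
from scipy import ndimage

def second_mask(foreground: np.ndarray) -> np.ndarray:
    """Sketch of steps 301-304: binarize the foreground, fill the interior,
    median-filter with a 21x21 window, then close and open."""
    mask = foreground > 0                     # brightness conversion stand-in for formula (3)
    filled = ndimage.binary_fill_holes(mask)  # fill the contour interior
    smoothed = ndimage.median_filter(filled.astype(np.uint8), size=21) > 0
    k = np.ones((5, 5), dtype=bool)           # assumed kernel size
    closed = ndimage.binary_closing(smoothed, structure=k)
    return ndimage.binary_opening(closed, structure=k)
```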
  • Referring to FIG. 4, FIG. 4 is another schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Performing feature enhancement on the second region of interest image to obtain a third region of interest image includes:
  • Step 401: Count the probability of occurrence of each pixel gray value in the second region of interest image and draw a statistical histogram;
  • Step 402: Sort the pixel brightness values in the second region of interest image, and convert the second region of interest image with the remapped pixel brightness values into the third region of interest image.
  • the probability of occurrence of pixel gray values in the image of the second region of interest is counted, and a statistical histogram is drawn.
  • pᵢ is the probability that a pixel's gray value is i;
  • x is the image gray level;
  • t is the pixel gray value of the OCT image.
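The variables above suggest a histogram-equalization-style mapping; since the formula itself is not reproduced in this text, the standard cumulative-histogram mapping is assumed in this sketch, and the function name is hypothetical:

```python
import numpy as np

def equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Map each gray value t to (levels - 1) * sum_{i <= t} p_i, where p_i
    is the observed probability of gray value i."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size            # cumulative probabilities
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                             # remap every pixel
```

This stretches a low-contrast region of interest across the full gray range, matching the stated goal of a third region of interest image with more obvious contrast.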
  • Before classifying the third region of interest image with the convolutional neural network, the method includes: proportionally cropping the third region of interest image.
  • Referring to FIG. 5, FIG. 5 is another schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Obtaining the confidence level of the classification result and displaying the final classification result on the OCT image according to that confidence level includes:
  • Step 501: Obtain the confidence level of the classification result through a normalized exponential function;
  • Step 502: Screen the confidence levels of the classification results through non-maximum suppression to obtain the final classification result and its confidence.
  • In this embodiment, the cropped third region of interest image is input into the convolutional neural network to obtain multiple classification categories (classification results), and the confidence of each classification result is calculated with the normalized exponential function (softmax).
  • The maximum confidence probability of the classification results is selected by non-maximum suppression, and the final classification result with the highest confidence probability is displayed on the OCT image.
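A sketch of steps 501–502: the normalized exponential function (softmax) yields per-class confidences, and selecting the maximum plays the role of the non-maximum suppression described above; the function name is hypothetical:

```python
import numpy as np

def classify_confidence(logits: np.ndarray):
    """Softmax over the network's raw class scores, then keep the single
    highest-confidence class; over class scores this reduces to an argmax."""
    z = logits - logits.max()              # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()    # per-class confidence
    best = int(np.argmax(probs))           # suppress non-maximal classes
    return best, float(probs[best])
```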
  • FIG. 6 is a schematic diagram of the program modules of the OCT image classification system based on computer vision features in the embodiment of the present application.
  • the above-mentioned OCT image classification system based on computer vision features includes:
  • Determination module 601: used to generate a first mask image from the OCT image, process the first mask image, operate on the processed first mask image and the OCT image, and obtain the first region of interest image in the OCT image; process the first region of interest image, determine the foreground region in the OCT image, convert the foreground region into a second mask image, and process the second mask image;
  • Extraction module 602: configured to operate on the processed second mask image and the first region of interest image and extract the second region of interest image from the first region of interest image;
  • Enhancement module 603 for performing feature enhancement processing on the second region of interest image to obtain a third region of interest image;
  • Recognition module 604: used to classify the third region of interest image with a convolutional neural network, obtain the classification result of the third region image, obtain the confidence level of the classification result, and display the final classification result on the OCT image according to that confidence level; the classification result includes the final classification result.
  • The OCT image classification system based on computer vision features provided by the embodiments of the present application can: generate a first mask image from the OCT image, process it, and operate on the processed first mask image and the OCT image to obtain the first region of interest image in the OCT image; process the first region of interest image to determine the foreground region, convert the foreground region into a second mask image and process it; operate on the processed second mask image and the first region of interest image to extract the second region of interest image; perform feature enhancement on the second region of interest image to obtain a third region of interest image; classify the third region of interest image with a convolutional neural network to obtain its classification result, obtain the confidence of the classification result, and display the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
  • The system adds a computer vision preprocessing stage that effectively removes OCT image noise and thus increases the signal-to-noise ratio, performs feature extraction and enhancement on sensitive regions, and finally applies deep learning recognition, improving both the overall recognition accuracy and the confidence of the image's judgment result.
  • In addition, an embodiment of the present application also provides an OCT image classification device based on computer vision features, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, each step of the above computer-vision-feature-based OCT image classification method is implemented.
  • The embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each step of the above computer-vision-feature-based OCT image classification method is implemented.
  • Each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • The above-mentioned integrated modules may be implemented in the form of hardware or in the form of software function modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as independent products, can be stored in a computer-readable storage medium.
  • The technical solution of the present invention, in essence or in the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an OCT image classification method, system, device, and storage medium based on computer vision features. The method includes: generating a first mask image from an OCT image and operating on it with the OCT image to obtain a first region of interest image; processing the first region of interest image to determine the foreground, converting the foreground into a second mask image and processing it; operating on the processed second mask image and the first region of interest image to extract a second region of interest image; performing feature enhancement on the second region of interest image to obtain a third region of interest image; classifying the third region of interest image to obtain a classification result, obtaining the confidence of the classification result, and displaying the final result on the OCT image. The method adds a computer vision preprocessing stage that effectively removes OCT image noise, increasing the signal-to-noise ratio, performs feature extraction and enhancement on sensitive regions, and applies deep learning recognition, improving both the overall recognition accuracy and the confidence of the image's judgment result.

Description

OCT Image Classification Method, System and Device Based on Computer Vision Features
Technical Field
The present invention belongs to the technical fields of computer vision and medical image processing, and in particular relates to an OCT image classification method, system, device, and storage medium based on computer vision features.
Background Art
There is a depressed structure at the center of the macula, called the central fovea, which is where vision is sharpest. When lesions occur in the macular region, they often cause a severe decline in the patient's central vision, or even irreversible blindness. Common retinal diseases include age-related macular degeneration (AMD) and diabetic macular edema (DME).
In the clinical diagnosis of ophthalmic diseases, optical coherence tomography (OCT) is a technique that images an object tomographically by measuring the intensity of its backscattered light. With its high resolution and non-contact, non-invasive characteristics, it is widely used as a clinical aid. Early medical image classification relied on manual annotation: doctors observed large numbers of OCT transverse scan images, annotated them with text, and stored the annotations to determine the patient's disease type. This manual analysis is time-consuming and places certain demands on doctors' expertise, so an efficient and accurate automatic retinal OCT image classification method is needed to assist diagnosis.
At present, UNet is used for semantic segmentation of medical images; it can effectively segment the lesion regions of medical images and its parameter count is relatively small, but semantic segmentation errors occur when OCT image noise is high. First, because retinal image structure is complex and not every ophthalmic disease shows obvious differences in OCT images, the accuracy of UNet-based semantic segmentation of retinal OCT lesions needs improvement. Because retinal OCT images are noisy and irregular, directly applying convolutional neural network recognition cannot effectively improve recognition accuracy. Statistical methods have also been used for retinal OCT diagnosis; they do not require large numbers of images for training, but their accuracy needs improvement and they cannot identify multiple diseases.
Technical Problem
In view of the problems in the prior art, the present invention provides a method that can effectively remove the noise of OCT images and perform deep learning recognition, thereby improving the overall recognition accuracy and the confidence of the image's judgment result.
Technical Solution
To solve the above technical problems, the technical scheme adopted by the present invention is as follows.
In a first aspect, the present invention provides an OCT image classification method based on computer vision features, the method comprising:
generating a first mask image from an OCT image, processing the first mask image, operating on the processed first mask image and the OCT image to obtain a first region of interest image in the OCT image; processing the first region of interest image, determining a foreground region in the OCT image, converting the foreground region into a second mask image, and processing the second mask image;
operating on the processed second mask image and the first region of interest image to extract a second region of interest image from the first region of interest image;
performing feature enhancement on the second region of interest image to obtain a third region of interest image;
classifying the third region of interest image with a convolutional neural network to obtain the classification result of the third region image, obtaining the confidence of the classification result, and displaying the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
In a second aspect, the present invention provides an OCT image classification system based on computer vision features, the system comprising:
a determination module, used to generate a first mask image from the OCT image, process the first mask image, operate on the processed first mask image and the OCT image, and obtain the first region of interest image in the OCT image; process the first region of interest image, determine the foreground region in the OCT image, convert the foreground region into a second mask image, and process the second mask image;
an extraction module, used to operate on the processed second mask image and the first region of interest image and extract the second region of interest image from the first region of interest image;
an enhancement module, used to perform feature enhancement on the second region of interest image to obtain a third region of interest image;
a recognition module, used to classify the third region of interest image with a convolutional neural network, obtain the classification result of the third region image, obtain the confidence of the classification result, and display the final classification result on the OCT image according to that confidence, where the classification result includes the final classification result.
第三方面,本发明还提供了一种基于计算机视觉特征的OCT图像分类设备,包括存储器、处理器、以及存储在处理器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时,实现如上述第一方面所述的基于计算机视觉特征的OCT图像分类方法中的各个步骤。
第四方面,本发明还提供了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时,实现如上述第一方面所述的基于计算机视觉特征的OCT图像分类方法中的各个步骤。
Beneficial Effects
The present invention provides an OCT image classification method based on computer vision features, the method including: generating a first mask image from an OCT image, processing the first mask image, and combining the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image; processing the first region-of-interest image to determine a foreground region of the OCT image, converting the foreground region into a second mask image, and processing the second mask image; combining the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image; applying feature enhancement to the second region-of-interest image to obtain a third region-of-interest image; and classifying the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtaining the confidence of each classification result, and displaying the final classification result on the OCT image according to those confidences, the classification results including the final classification result. By adding a computer-vision preprocessing stage, the method effectively removes noise from the OCT image and thereby increases the signal-to-noise ratio; it extracts and enhances features of the sensitive regions before the final deep-learning recognition, which improves the overall recognition accuracy and the confidence of the image-level decision.
Brief Description of the Drawings
The specific structure of the present invention is described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of the OCT image classification method based on computer vision features of the present invention;
FIG. 2 is a schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
FIG. 3 is another schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
FIG. 4 is a further schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
FIG. 5 is a further schematic sub-flowchart of the OCT image classification method based on computer vision features of the present invention;
FIG. 6 is a schematic diagram of the program modules of the OCT image classification system based on computer vision features of the present invention.
Embodiments of the Invention
To explain in detail the technical content, structural features, objects, and effects of the present invention, a detailed description is given below in conjunction with embodiments and the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. The method includes:
Step 101: generate a first mask image from an OCT image, process the first mask image, and combine the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image; process the first region-of-interest image to determine a foreground region of the OCT image, convert the foreground region into a second mask image, and process the second mask image.
In this embodiment, the pixels of the OCT image whose brightness values are less than 255 are used to generate the first mask image, and the first mask image is filtered with a convolution kernel to remove its noise. The denoised first mask image is then combined with the OCT image to extract the first region-of-interest image of the OCT image; the first region-of-interest image is in turn denoised and separated into foreground and background to obtain the second mask image.
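The mask-and-extract step above can be sketched as follows. This is a minimal NumPy stand-in for the OpenCV-style operations the embodiment implies; the function names are illustrative, not from the patent:

```python
import numpy as np

def first_mask(oct_img: np.ndarray) -> np.ndarray:
    # Pixels below full brightness (255) form the mask; saturated pixels are dropped.
    return np.where(oct_img < 255, 255, 0).astype(np.uint8)

def apply_mask(oct_img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Bitwise AND with a 0/255 mask keeps the original pixel values inside the
    # mask and zeroes everything outside it (the first region of interest).
    return np.bitwise_and(oct_img, mask)

img = np.array([[10, 255], [200, 255]], dtype=np.uint8)
roi = apply_mask(img, first_mask(img))
```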
Step 102: combine the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image.
In this embodiment, after the second mask image has been denoised in step 101, an AND operation is performed between it and the first region-of-interest image to extract the second region-of-interest image.
Step 103: apply feature enhancement to the second region-of-interest image to obtain a third region-of-interest image.
In this embodiment, feature enhancement is applied to the second region-of-interest image to obtain a third region-of-interest image with more pronounced contrast, improving its signal-to-noise ratio.
Step 104: classify the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtain the confidence of each classification result, and display the final classification result on the OCT image according to those confidences, the classification results including the final classification result.
In this embodiment, a convolutional neural network predicts the image class: the image is fed into the network to obtain multiple classification results, the confidence of each result is computed, and the result with the highest confidence probability is displayed on the OCT image. The convolutional neural network may adopt different architectures, such as AlexNet, VGG, Inception, or ResNet, without limitation here.
An embodiment of the present application provides an OCT image classification method based on computer vision features, the method including: generating a first mask image from an OCT image, processing the first mask image, and combining the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image; processing the first region-of-interest image to determine a foreground region of the OCT image, converting the foreground region into a second mask image, and processing the second mask image; combining the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image; applying feature enhancement to the second region-of-interest image to obtain a third region-of-interest image; and classifying the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtaining the confidence of each classification result, and displaying the final classification result on the OCT image according to those confidences, the classification results including the final classification result. By adding a computer-vision preprocessing stage, the method effectively removes noise from the OCT image and thereby increases the signal-to-noise ratio; it extracts and enhances features of the sensitive regions before the final deep-learning recognition, which improves the overall recognition accuracy and the confidence of the image-level decision.
Further, referring to FIG. 2, FIG. 2 is a schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Processing the first mask image and combining the processed first mask image with the OCT image includes:
Step 201: extract the outer contour of the processed first mask image and fill the elements inside the outer contour;
Step 202: apply an erosion operation with a convolution kernel to the first mask image whose outer contour has been filled;
Step 203: perform an AND operation between the eroded first mask image and the OCT image.
In this embodiment, the pixels of the OCT image whose brightness values are less than 255 are used to generate the first mask image, as shown in formula (1), and a 5×5 convolution kernel is applied to the first mask image, first as an opening operation and then as a closing operation, to remove noise at the edges of the first mask image while preserving the integrity of the mask region.
mask(x, y) = 255, if I(x, y) < 255;  mask(x, y) = 0, otherwise        (1)
where I(x, y) is the brightness value of the OCT image at pixel (x, y).
The outer contour of the first mask image is extracted, and the elements inside the contour are filled. A 5×5 convolution kernel is then used to erode the first mask image, shrinking the edge region and reducing edge noise. The processed first mask image is ANDed with the OCT image to extract the first region-of-interest image of the OCT image.
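The erosion step can be illustrated with a self-contained NumPy sketch: eroding a binary mask with an all-ones structuring element. The embodiment presumably uses a library routine such as OpenCV's `erode`; this stand-in is an assumption, shown with a configurable kernel size:

```python
import numpy as np

def erode(mask: np.ndarray, k: int = 5) -> np.ndarray:
    """Binary erosion with a k x k all-ones structuring element.
    A pixel stays set only if every pixel in its k x k neighbourhood is set."""
    pad = k // 2
    padded = np.pad(mask > 0, pad, mode="constant", constant_values=False)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for dy in range(k):          # intersect all shifted copies of the mask
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return (out * 255).astype(np.uint8)
```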
Further, processing the first region-of-interest image to determine the foreground region of the OCT image includes:
applying an opening operation with a convolution kernel to the first region-of-interest image, and applying median filtering to the opened first region-of-interest image.
In this embodiment, a 3×3 convolution kernel is used to apply an opening operation to the first region-of-interest image of the OCT image, reducing noise inside the region, and median filtering is then performed to reduce salt-and-pepper noise.
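The median-filtering step can be sketched as a plain 3×3 median filter in NumPy. The embodiment's exact border handling is not specified, so edge pixels are simply left unchanged here:

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter (edge pixels kept as-is), used here to
    suppress salt-and-pepper noise after the opening operation."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out
```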
In this embodiment, to determine the foreground and background of the OCT image, a threshold p, p ∈ [0, 255], is set; p is tried at every value from 0 to 255, the value that maximizes formula (2) is chosen, and the p corresponding to the maximum σ² is retained.
σ² = p₁(m₁ − v)² + p₂(m₂ − v)²        (2)
where p₁ is the probability that a pixel of the OCT image has a brightness value less than p, and m₁ is the mean value of those pixels; p₂ is the probability that a pixel has a brightness value greater than p, and m₂ is the mean value of those pixels; v is the mean value of all pixels of the OCT image.
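This threshold search is the classical Otsu method. A direct NumPy implementation of the exhaustive search described above might look like this (pixels equal to p are assigned to the upper class here, a detail the text leaves open):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Exhaustive search for the threshold p in [0, 255] maximizing the
    between-class variance sigma^2 = p1*(m1 - v)**2 + p2*(m2 - v)**2 (eq. (2))."""
    pixels = img.ravel().astype(np.float64)
    v = pixels.mean()                      # mean of all pixels
    best_p, best_var = 0, -1.0
    for p in range(256):
        low = pixels[pixels < p]           # class below the threshold
        high = pixels[pixels >= p]         # class at or above the threshold
        if low.size == 0 or high.size == 0:
            continue                       # degenerate split, skip
        p1, p2 = low.size / pixels.size, high.size / pixels.size
        var = p1 * (low.mean() - v) ** 2 + p2 * (high.mean() - v) ** 2
        if var > best_var:
            best_p, best_var = p, var
    return best_p
```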
Further, referring to FIG. 3, FIG. 3 is another schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Converting the foreground region into the second mask image and processing the second mask image includes:
Step 301: convert the pixel brightness values of the foreground region to obtain the second mask image;
Step 302: detect the contours of the second mask image, draw approximating polygons around the contours, and fill the interiors of the approximating polygons;
Step 303: apply median filtering with a convolution kernel to the filled approximating polygons on the second mask image;
Step 304: apply a closing operation with a convolution kernel to the processed second mask image, and then apply an opening operation with a convolution kernel to the closed second mask image.
In this embodiment, all pixel brightness values of the foreground region are converted as shown in formula (3) to obtain the second mask image.
mask(x, y) = 255, if I(x, y) > p;  mask(x, y) = 0, otherwise        (3)
where p is the threshold determined by formula (2).
Contour detection is performed on the second mask image, approximating polygons are drawn around the contours, and the interiors of the approximating polygons are filled.
A 21×21 convolution kernel is used to apply median filtering to the filled approximating polygons, removing salt-and-pepper noise.
A 61×61 convolution kernel is used to apply a closing operation to the processed second mask image, and a 9×9 kernel is then used to apply an opening operation, removing part of the noise while reducing the loss of features inside the mask region.
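The closing and opening used on the second mask image compose dilation and erosion. Below is a small NumPy stand-in for what OpenCV's `morphologyEx` with `MORPH_CLOSE` and `MORPH_OPEN` would do (an assumption; border handling is simplified, and a small kernel is used for brevity):

```python
import numpy as np

def _morph(mask: np.ndarray, k: int, op: str) -> np.ndarray:
    # Erosion intersects all k x k shifts of the mask; dilation unions them.
    pad = k // 2
    padded = np.pad(mask > 0, pad, mode="constant", constant_values=(op == "erode"))
    h, w = mask.shape
    acc = np.ones((h, w), bool) if op == "erode" else np.zeros((h, w), bool)
    for dy in range(k):
        for dx in range(k):
            win = padded[dy:dy + h, dx:dx + w]
            acc = acc & win if op == "erode" else acc | win
    return (acc * 255).astype(np.uint8)

def closing(mask: np.ndarray, k: int) -> np.ndarray:
    """Dilation followed by erosion: fills small holes and gaps in the mask."""
    return _morph(_morph(mask, k, "dilate"), k, "erode")

def opening(mask: np.ndarray, k: int) -> np.ndarray:
    """Erosion followed by dilation: removes small isolated noise blobs."""
    return _morph(_morph(mask, k, "erode"), k, "dilate")
```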
Further, referring to FIG. 4, FIG. 4 is a further schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Applying feature enhancement to the second region-of-interest image to obtain the third region-of-interest image includes:
Step 401: compute the probability of occurrence of each pixel gray value in the second region-of-interest image and draw a statistical histogram;
Step 402: sort the pixel brightness values of the second region-of-interest image, and convert the second region-of-interest image with the sorted pixel brightness values into the third region-of-interest image.
In this embodiment, the probability of occurrence of each pixel gray value in the second region-of-interest image is computed and a statistical histogram is drawn.
All pixel brightness values of the second region-of-interest image are sorted in ascending order and converted as shown in formula (4); the converted image is the third region-of-interest image.
s(t) = (x − 1) · ∑_{i=0}^{t} p_i        (4)
where p_i is the probability that a pixel has gray value i, x is the number of gray levels of the image, and t is the pixel gray value of the OCT image.
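Formula (4) is standard histogram equalization. A compact NumPy sketch of the mapping, assuming 256 gray levels:

```python
import numpy as np

def equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization per eq. (4): gray value t is mapped to
    s(t) = (levels - 1) * sum_{i <= t} p_i, where p_i is the probability
    of gray value i in the image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    p = hist / img.size                       # per-level probabilities
    s = np.round((levels - 1) * np.cumsum(p)).astype(np.uint8)
    return s[img]                             # look up the new value per pixel
```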
Further, before classifying the third region-of-interest image with the convolutional neural network, the method includes:
cropping the third region-of-interest image to size while preserving its aspect ratio.
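The patent does not fix the exact proportional-cropping scheme. One common reading, rescaling so the shorter side matches the target and then center-cropping, can be sketched as follows (nearest-neighbour resampling; illustrative only, not the patent's stated procedure):

```python
import numpy as np

def center_crop_square(img: np.ndarray, size: int) -> np.ndarray:
    """Rescale so the shorter side equals `size` (preserving aspect ratio),
    then centre-crop to a size x size square for the CNN input."""
    h, w = img.shape[:2]
    scale = size / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbour resampling via index lookup tables.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    top, left = (nh - size) // 2, (nw - size) // 2
    return resized[top:top + size, left:left + size]
```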
Further, referring to FIG. 5, FIG. 5 is a further schematic sub-flowchart of the OCT image classification method based on computer vision features in an embodiment of the present application. Obtaining the confidence of each classification result and displaying the final classification result on the OCT image according to those confidences includes:
Step 501: obtain the confidence of each classification result with the normalized exponential (softmax) function;
Step 502: filter the confidences of the classification results with non-maximum suppression to obtain the final classification result and its confidence.
In this embodiment, the cropped third region-of-interest image is fed into the convolutional neural network to obtain multiple classification categories (classification results); the confidence probability of each category is computed with the normalized exponential function; the maximum confidence probability among the classification results is selected by non-maximum suppression; and the final classification result with the highest confidence probability is displayed on the OCT image.
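The softmax-and-selection step can be sketched as follows. The class labels are illustrative, and here non-maximum suppression over class confidences reduces to keeping the arg-max:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Normalized exponential function: converts CNN logits to confidences."""
    z = logits - logits.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def final_class(logits, labels):
    """Keep only the highest-confidence class (the role suppression plays here)."""
    conf = softmax(np.asarray(logits, dtype=np.float64))
    k = int(conf.argmax())
    return labels[k], float(conf[k])

label, c = final_class([2.0, 0.5, 0.1], ["AMD", "DME", "normal"])
```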
Further, an embodiment of the present application also provides an OCT image classification system 200 based on computer vision features. Referring to FIG. 6, FIG. 6 is a schematic diagram of the program modules of the OCT image classification system based on computer vision features in an embodiment of the present application. In this embodiment, the system includes:
a determination module 601, configured to generate a first mask image from an OCT image, process the first mask image, combine the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image, process the first region-of-interest image to determine a foreground region of the OCT image, convert the foreground region into a second mask image, and process the second mask image;
an extraction module 602, configured to combine the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image;
an enhancement module 603, configured to apply feature enhancement to the second region-of-interest image to obtain a third region-of-interest image;
a recognition module 604, configured to classify the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtain the confidence of each classification result, and display the final classification result on the OCT image according to those confidences, the classification results including the final classification result.
The OCT image classification system based on computer vision features provided by the embodiment of the present application can implement: generating a first mask image from an OCT image, processing the first mask image, and combining the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image; processing the first region-of-interest image to determine a foreground region of the OCT image, converting the foreground region into a second mask image, and processing the second mask image; combining the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image; applying feature enhancement to the second region-of-interest image to obtain a third region-of-interest image; and classifying the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtaining the confidence of each classification result, and displaying the final classification result on the OCT image according to those confidences, the classification results including the final classification result. The system adds a computer-vision preprocessing stage that effectively removes noise from the OCT image and thereby increases the signal-to-noise ratio; it extracts and enhances features of the sensitive regions before the final deep-learning recognition, which improves the overall recognition accuracy and the confidence of the image-level decision.
Further, an embodiment of the present application also provides an OCT image classification device based on computer vision features, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps of the above OCT image classification method based on computer vision features are implemented.
Further, an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the above OCT image classification method based on computer vision features are implemented.
The functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, for ease of description, each of the foregoing method embodiments is described as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. In addition, those skilled in the art should also understand that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
The foregoing is a description of the OCT image classification method, system, device, and storage medium based on computer vision features provided by the present invention. Those skilled in the art may make changes to the specific implementation and application scope according to the ideas of the embodiments of the present application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. An OCT image classification method based on computer vision features, characterized in that the method comprises:
    generating a first mask image from an OCT image, processing the first mask image, and combining the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image; processing the first region-of-interest image to determine a foreground region of the OCT image, converting the foreground region into a second mask image, and processing the second mask image;
    combining the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image;
    applying feature enhancement to the second region-of-interest image to obtain a third region-of-interest image;
    classifying the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtaining the confidence of each classification result, and displaying a final classification result on the OCT image according to the confidences of the classification results, the classification results including the final classification result.
  2. The method of claim 1, characterized in that processing the first mask image and combining the processed first mask image with the OCT image comprises:
    extracting the outer contour of the processed first mask image and filling the elements inside the outer contour;
    applying an erosion operation with a convolution kernel to the first mask image whose outer contour has been filled;
    performing an AND operation between the eroded first mask image and the OCT image.
  3. The method of claim 1, characterized in that processing the first region-of-interest image to determine the foreground region of the OCT image comprises:
    applying an opening operation with a convolution kernel to the first region-of-interest image, and applying median filtering to the opened first region-of-interest image.
  4. The method of claim 1, characterized in that converting the foreground region into the second mask image and processing the second mask image comprises:
    converting the pixel brightness values of the foreground region to obtain the second mask image;
    detecting the contours of the second mask image, drawing approximating polygons around the contours, and filling the interiors of the approximating polygons;
    applying median filtering with a convolution kernel to the filled approximating polygons on the second mask image;
    applying a closing operation with a convolution kernel to the processed second mask image, and then applying an opening operation with a convolution kernel to the closed second mask image.
  5. The method of claim 1, characterized in that applying feature enhancement to the second region-of-interest image to obtain the third region-of-interest image comprises:
    computing the probability of occurrence of each pixel gray value in the second region-of-interest image and drawing a statistical histogram;
    sorting the pixel brightness values of the second region-of-interest image, and converting the second region-of-interest image with the sorted pixel brightness values into the third region-of-interest image.
  6. The method of claim 1, characterized in that, before classifying the third region-of-interest image with the convolutional neural network, the method comprises:
    cropping the third region-of-interest image to size while preserving its aspect ratio.
  7. The method of claim 1, characterized in that obtaining the confidence of each classification result and displaying the final classification result on the OCT image according to the confidences comprises:
    obtaining the confidence of each classification result with the normalized exponential function;
    filtering the confidences of the classification results with non-maximum suppression to obtain the final classification result and its confidence.
  8. An OCT image classification system based on computer vision features, characterized in that the system comprises:
    a determination module, configured to generate a first mask image from an OCT image, process the first mask image, combine the processed first mask image with the OCT image to obtain a first region-of-interest image in the OCT image, process the first region-of-interest image to determine a foreground region of the OCT image, convert the foreground region into a second mask image, and process the second mask image;
    an extraction module, configured to combine the processed second mask image with the first region-of-interest image to extract a second region-of-interest image from the first region-of-interest image;
    an enhancement module, configured to apply feature enhancement to the second region-of-interest image to obtain a third region-of-interest image;
    a recognition module, configured to classify the third region-of-interest image with a convolutional neural network to obtain classification results for the third region-of-interest image, obtain the confidence of each classification result, and display a final classification result on the OCT image according to the confidences, the classification results including the final classification result.
  9. An OCT image classification device based on computer vision features, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the computer program, the steps of the OCT image classification method based on computer vision features according to any one of claims 1 to 7 are implemented.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the OCT image classification method based on computer vision features according to any one of claims 1 to 7 are implemented.
PCT/CN2021/073936 2021-01-27 2021-01-27 OCT image classification method, system, and device based on computer vision features WO2022160118A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/073936 WO2022160118A1 (zh) 2021-01-27 2021-01-27 OCT image classification method, system, and device based on computer vision features


Publications (1)

Publication Number Publication Date
WO2022160118A1 true WO2022160118A1 (zh) 2022-08-04

Family

ID=82654034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/073936 WO2022160118A1 (zh) 2021-01-27 2021-01-27 基于计算机视觉特征的oct图像分类方法及***、设备

Country Status (1)

Country Link
WO (1) WO2022160118A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392130A (zh) * 2017-07-13 2017-11-24 Xidian University Multispectral image classification method based on adaptive thresholding and a convolutional neural network
CN109493954A (zh) * 2018-12-20 2019-03-19 Guangdong University of Technology SD-OCT image retinopathy detection system based on category-discriminative localization
CN110428421A (zh) * 2019-04-02 2019-11-08 Shanghai Eaglevision Medical Technology Co., Ltd. Macular image region segmentation method and device
CN111369572A (zh) * 2020-02-28 2020-07-03 Tsinghua Shenzhen International Graduate School Weakly supervised semantic segmentation method and apparatus based on image inpainting


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YIBIAO RONG ET AL.: "Surrogate-Assisted Retinal OCT Image Classification Based on Convolutional Neural Networks", IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, vol. 23, no. 1, 31 January 2019 (2019-01-31), XP011695774, ISSN: 2168-2194, DOI: 10.1109/JBHI.2018.2795545 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21921738

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21921738

Country of ref document: EP

Kind code of ref document: A1