CN114092464A - OCT image processing method and device - Google Patents

OCT image processing method and device

Info

Publication number
CN114092464A
Authority
CN
China
Prior art keywords
target
image
pixel points
processing
scan image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111435331.4A
Other languages
Chinese (zh)
Other versions
CN114092464B (en)
Inventor
叶重荣
区初斌
安林
秦嘉
韦喜飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Weiren Medical Technology Co ltd
Weizhi Medical Technology Foshan Co ltd
Original Assignee
Guangdong Weiren Medical Technology Co ltd
Weizhi Medical Technology Foshan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Weiren Medical Technology Co ltd, Weizhi Medical Technology Foshan Co ltd
Priority to CN202111435331.4A
Publication of CN114092464A
Application granted
Publication of CN114092464B
Active (legal status)
Anticipated expiration (legal status)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G06T 11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10101 - Optical tomography; Optical coherence tomography [OCT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20076 - Probabilistic image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30041 - Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an OCT image processing method and device. The method comprises: acquiring a B-Scan image corresponding to a target feature; performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result; inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises a probability for each pixel point in a target region of the B-Scan image, the probability of each pixel point represents the likelihood that the pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target region is a region including the target feature; and determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities of all the pixel points. The method and device can thus perform image layering processing on the acquired B-Scan image in combination with a deep neural network model, which helps improve both the efficiency of image layering and the accuracy of the layering result.

Description

OCT image processing method and device
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an OCT image processing method and device.
Background
With the rapid development of computer and medical technology, OCT (Optical Coherence Tomography) has been widely applied in diagnostic equipment for fundus diseases, and is of great significance for the detection and study of fundus diseases and the compilation of teaching materials. OCT is a high-sensitivity, high-resolution, high-speed, non-invasive tomographic imaging modality that uses optical coherence to image the fundus: each individual scan is called an A-scan, and multiple adjacent consecutive A-scans combined together form a B-scan image. The B-scan image is the commonly seen OCT cross-sectional view (and can also be understood as the OCT image), and is the most important imaging mode of OCT in medical diagnosis.
In practical applications, the diagnosis of fundus diseases by diagnostic equipment generally depends on a target-feature layering result obtained by layering OCT images, such as a retinal layering result. However, practice shows that OCT image layering methods based on conventional techniques such as histogram-based, boundary-based, and region-based layering, while robust in algorithm performance, are slow in layering. Therefore, on the premise of preserving algorithm performance, providing an OCT image layering algorithm that improves layering efficiency is of great importance.
Disclosure of Invention
The invention aims to provide an OCT image processing method and device which, by incorporating a deep neural network model, can improve the layering efficiency of OCT images and the accuracy of the layering result on the premise of preserving the performance of the OCT image layering algorithm.
In order to solve the above technical problem, a first aspect of the present invention discloses a method for processing an OCT image, the method including:
acquiring a B-Scan image corresponding to the target feature;
performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises the probability corresponding to each pixel point corresponding to a target region of the B-Scan image, the probability corresponding to each pixel point is used for representing the possibility that each pixel point belongs to an interlayer boundary of two adjacent layers included in the initial layering result, and the target region is a region including the target feature;
and determining interlayer boundary information corresponding to the target features in the B-Scan image according to the probability corresponding to all the pixel points.
As an optional implementation manner, in the first aspect of the present invention, the performing, by a preset image processing algorithm, image layering processing on the B-Scan image to obtain an initial layering result includes:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating the positive gradient of the filtering image in the vertical direction of the image, and constructing and obtaining a first cost function according to the positive gradient;
determining a first minimum cost path from the left edge to the right edge of the filtering image according to a predetermined path algorithm and the first cost function to obtain a first hierarchical line;
determining a second minimum cost path from the left edge to the right edge of the filtering image according to the path algorithm and the first cost function to obtain a second hierarchical line;
calculating the negative gradient of the filtering image in the vertical direction of the image, and constructing according to the negative gradient to obtain a second cost function;
determining a search area, wherein the search area is the region below whichever of the first hierarchical line and the second hierarchical line is positioned lower;
determining a third minimum cost path from the left edge of the region to the right edge of the region of the search region according to the path algorithm and the second cost function, and performing smooth filtering operation on the third minimum cost path to obtain a third hierarchical line;
determining the first hierarchical line, the second hierarchical line and the third hierarchical line as an initial hierarchical result;
before determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second hierarchical line, the method further includes:
marking the first path as an unreachable path in the filtered image.
As an optional implementation manner, in the first aspect of the present invention, before the inputting the B-Scan image into a pre-trained deep neural network model and obtaining an output result, the method further includes:
determining a target region comprising the target feature in the B-Scan image;
wherein the determining the target region including the target feature in the B-Scan image comprises:
shifting upward, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line is positioned higher, to obtain a first boundary line;
shifting the third layering line downward by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining a region below the first boundary line and above the second boundary line as a target region including the target feature in the B-Scan image.
As an optional implementation manner, in the first aspect of the present invention, after the B-Scan image is input into a pre-trained deep neural network model and an output result is obtained, before determining interlayer boundary information corresponding to the target feature in the B-Scan image according to probabilities corresponding to all the pixel points, the method further includes:
judging whether a target pixel point which falls out of the target area exists in all the pixel points;
when the target pixel point falling out of the target area does not exist in all the pixel points, triggering and executing the operation of determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points;
and when the target pixel points falling outside the target area exist in all the pixel points, performing probability updating operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points, and triggering and executing the operation of determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
As an optional implementation manner, in the first aspect of the present invention, the determining whether there is a target pixel point that falls outside the target area among all the pixel points includes:
for each column of pixel points among all the pixel points, judging whether a target pixel point falling outside the target area exists in that column of pixel points;
and the performing a probability updating operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points falling outside the target area comprises:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in that column, multiplying the probability of each target pixel point falling outside the target area in that column by a preset value corresponding to the target pixel point to obtain a product, and updating the probability corresponding to the target pixel point according to the product.
As an optional implementation manner, in the first aspect of the present invention, the determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points includes:
for each column of pixel points in all the pixel points, performing normalization processing on the probability distribution of the column of pixel points to obtain the normalized probability distribution of the column of pixel points;
for each column of pixel points in all the pixel points, performing dot product operation on the normalized probability distribution of the column of pixel points and the row number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution result corresponding to each column of pixel points among all the pixel points.
As an alternative implementation, in the first aspect of the present invention, the deep neural network model is trained by:
acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
dividing the B-Scan image set to obtain a training set and a test set, wherein the training set is used for training a deep neural network model, and the test set is used for verifying the reliability of the trained deep neural network model;
executing target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation comprises at least one of up-and-down moving processing, left-and-right turning processing, up-and-down reversing processing and contrast adjusting processing;
inputting the processing result into a predetermined deep neural network model as input data to obtain an output result;
analyzing and calculating joint loss according to the output result, the B-Scan images included in the training set and the boundary information to obtain a joint loss value;
performing back propagation of the joint loss value in the deep neural network model, and performing iterative training for a preset number of cycles to obtain a trained deep neural network model;
wherein the test set is used for verifying the reliability of the trained deep neural network model.
As an alternative embodiment, in the first aspect of the present invention, the target feature is a retinal feature.
The second aspect of the present invention discloses an OCT image processing apparatus, comprising:
the acquisition module is used for acquiring a B-Scan image corresponding to the target feature;
the first processing module is used for executing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
a second processing module, configured to input the B-Scan image into a pre-trained deep neural network model to obtain an output result, where the output result includes a probability corresponding to each pixel point corresponding to a target region of the B-Scan image, the probability corresponding to each pixel point is used to indicate a possibility that each pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target region is a region including the target feature;
and the first determining module is used for determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
As an optional implementation manner, in the second aspect of the present invention, the first processing module includes:
the filtering submodule is used for performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtering image;
the function construction submodule is used for calculating the positive gradient of the filtering image in the vertical direction of the image and constructing a first cost function according to the positive gradient;
the first determining submodule is configured to determine a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function, so as to obtain a first hierarchical line;
the first determining submodule is further configured to determine a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function, so as to obtain a second hierarchical line;
the function construction submodule is also used for calculating the negative gradient of the filtering image in the vertical direction of the image and constructing a second cost function according to the negative gradient;
the second determining submodule is used for determining a search area, wherein the search area is the region below whichever of the first hierarchical line and the second hierarchical line is positioned lower;
the first determining submodule is further configured to determine a third minimum cost path from a left edge of the region to a right edge of the region in the search region according to the path algorithm and the second cost function, and perform a smoothing filtering operation on the third minimum cost path to obtain a third hierarchical line;
the second determining submodule is further configured to determine the first hierarchical line, the second hierarchical line, and the third hierarchical line as an initial hierarchical result;
and, the first processing module further comprises:
and the marking sub-module is used for marking the first path as an unreachable path in the filtered image before the first determining sub-module determines a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain a second hierarchical line.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the second determining module is used for determining a target area comprising the target feature in the B-Scan image before the second processing module inputs the B-Scan image into a pre-trained deep neural network model to obtain an output result;
the manner in which the second determining module determines the target region including the target feature in the B-Scan image is specifically:
shifting upward, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line is positioned higher, to obtain a first boundary line;
shifting the third layering line downward by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining a region below the first boundary line and above the second boundary line as a target region including the target feature in the B-Scan image.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the judging module is used for judging, after the second processing module inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result and before the first determining module determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points, whether a target pixel point falling outside the target area exists among all the pixel points; and, when it is judged that no target pixel point falling outside the target area exists among all the pixel points, for triggering the first determining module to perform the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
and the third processing module is used for executing probability updating operation on all the target pixel points falling outside the target area when the judging module judges that the target pixel points falling outside the target area exist in all the pixel points so as to update the probability corresponding to all the target pixel points, and triggering the first determining module to execute the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
As an optional implementation manner, in the second aspect of the present invention, the manner that the determining module determines whether there is a target pixel point that falls outside the target area among all the pixel points specifically includes:
for each column of pixel points among all the pixel points, judging whether a target pixel point falling outside the target area exists in that column of pixel points;
and the manner in which the third processing module performs the probability updating operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points falling outside the target area is specifically:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in that column, multiplying the probability of each target pixel point falling outside the target area in that column by a preset value corresponding to the target pixel point to obtain a product, and updating the probability corresponding to the target pixel point according to the product.
As an optional implementation manner, in the second aspect of the present invention, the manner of determining, by the first determining module, the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points specifically includes:
for each column of pixel points in all the pixel points, performing normalization processing on the probability distribution of the column of pixel points to obtain the normalized probability distribution of the column of pixel points;
for each column of pixel points in all the pixel points, performing dot product operation on the normalized probability distribution of the column of pixel points and the row number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution result corresponding to each column of pixel points among all the pixel points.
As an alternative embodiment, in the second aspect of the present invention, the deep neural network model is trained by:
acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
dividing the B-Scan image set to obtain a training set and a test set, wherein the training set is used for training a deep neural network model, and the test set is used for verifying the reliability of the trained deep neural network model;
executing target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation comprises at least one of up-and-down moving processing, left-and-right turning processing, up-and-down reversing processing and contrast adjusting processing;
inputting the processing result into a predetermined deep neural network model as input data to obtain an output result;
analyzing and calculating joint loss according to the output result, the B-Scan images included in the training set and the boundary information to obtain a joint loss value;
performing back propagation of the joint loss value in the deep neural network model, and performing iterative training for a preset number of cycles to obtain a trained deep neural network model;
wherein the test set is used for verifying the reliability of the trained deep neural network model.
As an alternative embodiment, in the second aspect of the present invention, the target feature is a retinal feature.
The third aspect of the present invention discloses another OCT image processing apparatus, including:
a memory storing executable program code;
a processor coupled with the memory;
an input interface and an output interface coupled to the processor;
the processor calls the executable program code stored in the memory to execute the processing method of the OCT image disclosed by the first aspect of the invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides an OCT image processing method and a device, and the method comprises the following steps: the method comprises the steps of obtaining a B-Scan image corresponding to target features, carrying out image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result, inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises the probability corresponding to each pixel point corresponding to a target area of the B-Scan image, the probability corresponding to each pixel point is used for expressing the possibility that each pixel point belongs to interlayer boundaries of two adjacent layers included in the initial layering result, the target area is an area including the target features, and interlayer boundary information corresponding to the target features in the B-Scan image is determined according to the probability corresponding to all the pixel points. Therefore, the B-Scan image including the target characteristics can be intelligently obtained by implementing the method, and the classification efficiency of the B-Scan image is favorably improved; and the interlayer boundary information corresponding to the target features in the B-Scan image can be intelligently determined by combining the deep neural network model and the initial layering result, so that the image layering efficiency and the accuracy of the image layering result can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of an OCT image processing method disclosed in the embodiments of the present invention;
FIG. 2 is a schematic flow chart of another OCT image processing method disclosed in the embodiments of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus for processing an OCT image according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another OCT image processing apparatus disclosed in the embodiments of the present invention;
fig. 5 is a schematic structural diagram of another OCT image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, article, or article that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention discloses an OCT image processing method and device, which can intelligently obtain a B-Scan image comprising target characteristics and is beneficial to improving the classification efficiency of the B-Scan image; and the interlayer boundary information corresponding to the target features in the B-Scan image can be intelligently determined by combining the deep neural network model and the initial layering result, so that the image layering efficiency and the accuracy of the image layering result are improved. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of an OCT image processing method according to an embodiment of the present invention. The OCT image processing method described in fig. 1 may be applied to the layering processing of retinal B-Scan images, and the layering result obtained by the method may be used in compiling medical teaching materials or as auxiliary material for retinal research, which is not limited in the embodiment of the present invention. As shown in fig. 1, the OCT image processing method may include the following operations:
101. Acquiring a B-Scan image corresponding to the target feature.
In the embodiment of the present invention, the target feature may include a human-eye retinal feature (hereinafter briefly referred to as a retinal feature); correspondingly, the B-Scan image may be a B-Scan image including a retinal feature obtained through OCT processing. The B-Scan image may be obtained by direct acquisition and processing with a device equipped with OCT scanning, or by retrieving a B-Scan image including a retinal feature stored in a system database, which is not limited in the embodiment of the present invention.
102. Performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result.
In the embodiment of the invention, when the B-Scan image includes the above retinal feature, the corresponding initial layering result is the layering result of the retinal tissue layers. The preset image processing algorithm includes an algorithm improved from traditional B-Scan image layering processing (for example, an improved minimum cost path algorithm based on a gradient cost map).
It should be noted that, in the OCT image processing method provided by the present invention, because the retina contains a large number of tissue layers, when image layering is performed on the retinal B-Scan image through the preset image processing algorithm, not all of the retinal tissue layers are layered; only the ILM (inner limiting membrane) layer, the ISOS (photoreceptor inner and outer segments) layer, and the BM (Bruch's membrane) layer, whose boundaries are more distinct than those of the other retinal tissue layers, are delineated. This improves the layering efficiency for the retinal tissue layers of the B-Scan image and the accuracy of the layering result on the premise of keeping the performance of the OCT image processing algorithm robust.
Therefore, in the embodiment of the invention, the data amount required to be calculated when the B-Scan image is processed is reduced by reducing the layering number of the retina tissue layer in the B-Scan image, so that the aims of improving the layering efficiency of the B-Scan image and improving the accuracy of the layering result are fulfilled.
103. Inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result.
In the embodiment of the invention, the output result comprises the probability corresponding to each pixel point corresponding to the target area of the B-Scan image, and the probability corresponding to each pixel point is used for expressing the possibility that each pixel point belongs to the interlayer boundary of some two adjacent layers included in the initial layering result; the target region is a region including a target feature.
Further, when the target feature is the above-mentioned retinal feature, the target region is a region including a retinal tissue layer; assuming that the initial layering result includes three tissue layers, and if the ILM layer, the ISOS layer, and the BM layer are sequentially arranged from top to bottom in the vertical direction of the image, the two adjacent layers include the ILM layer and the ISOS layer, and the ISOS layer and the BM layer, wherein the two adjacent layers refer to two adjacent tissue layers among the tissue layers obtained through algorithm or artificial division, and do not refer to two adjacent tissue layers among all the tissue layers included in the actual retina.
104. Determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In the embodiment of the present invention, when the target feature is the above-mentioned retinal feature, the probability corresponding to each pixel point refers to the probability that the pixel point belongs to the interlayer boundary of some two adjacent retinal layers; the determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points may specifically include the following steps:
for each column of pixel points in all pixel points, performing normalization processing on the probability distribution of the column of pixel points to obtain the normalized probability distribution of the column of pixel points;
for each column of pixel points in all the pixel points, performing dot product operation on the normalized probability distribution of the column of pixel points and the row number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution result corresponding to each column of pixel points among all the pixel points.
It should be noted that the function used for performing the normalization process on the probability distribution of the column of pixel points may include a Softmax function.
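To make the column-wise normalization and dot-product operation concrete, the following Python sketch computes a sub-pixel boundary row for every column of a probability map. It is a minimal illustration under assumed NumPy arrays and a hypothetical function name; the patent does not prescribe an implementation.

```python
import numpy as np

def boundary_from_probabilities(prob_map: np.ndarray) -> np.ndarray:
    """Column-wise soft-argmax over a (rows, cols) probability map.

    prob_map[r, c] is the likelihood that pixel (r, c) lies on the
    interlayer boundary; the return value is one row index per column.
    """
    rows, _ = prob_map.shape
    # Softmax normalization of each column's probability distribution.
    shifted = prob_map - prob_map.max(axis=0, keepdims=True)  # for stability
    exp = np.exp(shifted)
    norm = exp / exp.sum(axis=0, keepdims=True)
    # Dot product of each normalized column with the row-index
    # distribution gives the interlayer distribution result per column.
    row_indices = np.arange(rows, dtype=np.float64)
    return row_indices @ norm  # shape (cols,): expected boundary row
```

Collecting these per-column results over the whole image yields the interlayer boundary information described in step 104.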
Therefore, by implementing the OCT image processing method described in fig. 1, image layering can be performed in a targeted manner on the acquired B-Scan image including the target feature; by reducing the number of layered retinal tissue layers in the B-Scan image, the amount of data to be computed when processing the B-Scan image is reduced, improving the layering efficiency of the B-Scan image; moreover, interlayer boundary information corresponding to the target feature in the B-Scan image can be determined in combination with the pre-trained deep neural network model, further improving the accuracy of the layering result while improving the layering efficiency.
In an optional embodiment, the performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result may specifically include the following steps:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating the positive gradient of the filtered image in the vertical direction of the image, and constructing according to the positive gradient to obtain a first cost function;
determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and a first cost function to obtain a first hierarchical line;
determining a second minimum cost path from the left edge to the right edge of the filtered image according to a path algorithm and the first cost function to obtain a second hierarchical line;
calculating the negative gradient of the filtered image in the vertical direction of the image, and constructing according to the negative gradient to obtain a second cost function;
determining a search area, wherein the search area is the region below whichever of the first hierarchical line and the second hierarchical line is positioned lower;
determining a third minimum cost path from the left edge of the region to the right edge of the region of the search region according to a path algorithm and a second cost function, and performing smooth filtering operation on the third minimum cost path to obtain a third hierarchical line;
determining a first layering line, a second layering line and a third layering line as an initial layering result;
further, before determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function, and obtaining a second hierarchical line, the method further includes the following steps:
the first path is marked as an unreachable path in the filtered image.
In this optional embodiment, the predetermined path algorithm may be a minimum cost path algorithm improved on the basis of the Dijkstra or Bellman-Ford algorithm, which is not limited in the embodiment of the present invention. The functional expression of the first cost function may be Cost1 = a·exp(-G) or Cost1 = a·(-G), and the functional expression of the second cost function may be Cost2 = a·exp(-G) or Cost2 = a·(-G), where G is the corresponding gradient value.
In this optional embodiment, the first path is marked as an unreachable path in the filtered image after the first hierarchical line is obtained, so that when the minimum cost path is computed again through the predetermined path algorithm, it avoids the marked path and yields a second hierarchical line different from the first hierarchical line; two distinct hierarchical lines can thus be obtained.
In this optional embodiment, the filtering of the B-Scan image is performed through a preset filtering function, which may be a median filtering function or a mean filtering function; when the smoothing filtering operation is performed on the third minimum cost path, median filtering and mean-filter smoothing may in practice be applied to the coordinate points included in the third minimum cost path.
Therefore, this optional embodiment provides a minimum cost path algorithm by which the required first, second, and third hierarchical lines can be delineated in the B-Scan image; by limiting the number of retinal tissue layers layered in the B-Scan image, the amount of data to be computed is reduced, which improves the efficiency of the image layering algorithm and the accuracy of the layering result.
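For orientation, here is a minimal Python sketch of such a gradient-cost minimum cost path. It uses a simple left-to-right dynamic program restricted to moves of at most one row per column, which is a simplification of the improved Dijkstra/Bellman-Ford variant mentioned above; the function name, the step constraint, and the coefficient a are illustrative assumptions.

```python
import numpy as np

def layer_line_min_cost(image: np.ndarray, positive: bool = True,
                        a: float = 1.0) -> np.ndarray:
    """Trace one layering line as a left-to-right minimum cost path.

    Cost = a * exp(-G), where G is the vertical image gradient
    (positive or negative, matching the first/second cost functions).
    """
    grad = np.gradient(image.astype(np.float64), axis=0)
    g = grad if positive else -grad
    cost = a * np.exp(-g)

    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)   # accumulated path cost
    acc[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=np.int64)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)  # up/straight/down
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint on the right edge.
    path = np.empty(cols, dtype=np.int64)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path  # row index of the layering line in every column
```

Marking the first path as unreachable (for example, setting cost[path, np.arange(cols)] = np.inf before a second run) then yields a second, distinct layering line, as described above.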
In another optional embodiment, after the B-Scan image is input into the pre-trained deep neural network model to obtain an output result, and before the interlayer boundary information corresponding to the target feature in the B-Scan image is determined according to the probabilities corresponding to all the pixel points, the method may further include the following steps:
judging whether target pixel points falling outside the target area exist in all the pixel points;
when judging that no target pixel point falling outside the target area exists in all pixel points, triggering and executing the operation of determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points;
and when judging that target pixel points falling outside the target area exist in all the pixel points, performing probability updating operation on all the target pixel points falling outside the target area so as to update the probabilities corresponding to all the target pixel points, and triggering and executing operation of determining interlayer boundary information corresponding to target features in the B-Scan image according to the probabilities corresponding to all the pixel points.
In this optional embodiment, determining whether there is a target pixel point that falls outside the target region among all the pixel points includes:
and judging whether the coordinate value corresponding to each pixel point in all the pixel points exceeds the coordinate interval included in the target area.
Therefore, the optional embodiment can perform probability updating operation on all target pixel points falling outside the target area, and is beneficial to improving the accuracy of the determined interlayer boundary information when determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
In this optional embodiment, the determining whether there is a target pixel point that falls outside the target region among all the pixel points may further include:
and judging whether the probability corresponding to each pixel point among all the pixel points falls within a preset probability interval, where the preset probability interval is a predetermined interval of probabilities corresponding to pixel points falling inside the target area (for example the open interval (0.5, 1), which excludes both endpoint values).
Therefore, this optional embodiment provides another method for judging whether a pixel point falls inside the target area. Unlike the method that analyzes the coordinate values of each pixel point (including coordinate data on the x-axis, the y-axis, and even the z-axis), it analyzes only the probability values corresponding to the pixel points, which reduces the amount of data to be analyzed, broadens the available methods for judging whether a pixel point falls inside the target area, and improves the efficiency of obtaining the judgment result.
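A short Python sketch (hypothetical names; the interval follows the (0.5, 1) example above) shows how little computation this probability-based check needs compared with a coordinate-based one:

```python
import numpy as np

def outside_by_probability(prob_map: np.ndarray,
                           lo: float = 0.5, hi: float = 1.0) -> np.ndarray:
    """True where a pixel's probability falls outside the open interval
    (lo, hi), i.e. the pixel is judged to fall outside the target area."""
    return ~((prob_map > lo) & (prob_map < hi))
```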
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another OCT image processing method according to an embodiment of the present invention. The OCT image processing method described in fig. 2 may be applied to the layering processing of retinal B-Scan images, and the layering result obtained by the method may be used in compiling medical teaching materials or as auxiliary material for retinal research, which is not limited in the embodiment of the present invention. As shown in fig. 2, the OCT image processing method may include the following operations:
201. Acquiring a B-Scan image corresponding to the target feature.
202. Performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result.
203. Inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result.
204. Determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
In the embodiment of the present invention, for further description of steps 201 to 204, please refer to the detailed description of steps 101 to 104 in Embodiment One, which is not repeated here.
205. Determining a target region including the target feature in the B-Scan image.
In the embodiment of the present invention, before the B-Scan image is input into the pre-trained deep neural network model to obtain the output result, determining a target region including the target feature in the B-Scan image may specifically include the following steps:
shifting upward, by a first preset distance in the vertical direction of the image, whichever of the first layering line and the second layering line is positioned higher, to obtain a first boundary line;
shifting the third layering line downward by a second preset distance in the vertical direction of the image to obtain a second boundary line;
and determining a region below the first boundary line and above the second boundary line as a target region including the target feature in the B-Scan image.
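As an illustration of this region construction, the following Python sketch offsets the per-column row indices of the layering lines and returns a boolean mask of the target region. NumPy, the function name, and the default offsets are assumptions for illustration; the patent leaves the preset distances unspecified.

```python
import numpy as np

def target_region_mask(line1, line2, line3, rows, d1=10, d2=10):
    """Boolean mask of the target region for a (rows, cols) B-Scan.

    line1, line2, line3: per-column row indices of the three layering
    lines (a smaller row index is higher in the image); d1, d2 are the
    first and second preset distances (illustrative defaults).
    """
    upper = np.minimum(np.asarray(line1), np.asarray(line2)) - d1
    lower = np.asarray(line3) + d2
    r = np.arange(rows)[:, None]          # column vector of row indices
    # True below the first boundary line and above the second one.
    return (r >= upper[None, :]) & (r <= lower[None, :])
```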
Therefore, by implementing the OCT image processing method described in fig. 2, image layering can be performed on the acquired B-Scan image including the target feature, and by reducing the number of layering layers of the retinal tissue layer in the B-Scan image, the amount of data required to be calculated when the B-Scan image is processed is reduced, and the layering efficiency of the B-Scan image is improved; interlayer boundary information corresponding to target features in the B-Scan image can be determined by combining a pre-trained deep neural network model, so that the layering efficiency is improved, and meanwhile, the accuracy of a layering result is further improved; in addition, the target area comprising the target characteristics can be intelligently determined, so that the data volume required to be processed by an image processing algorithm is reduced when image processing operation is subsequently executed aiming at the target area, and the layering efficiency of the image is improved to a certain extent; meanwhile, after the target area including the target feature is determined, the interference of the redundant area not including the target feature on image layering processing is reduced, and the accuracy of the image layering result is improved.
In an optional embodiment, the determining whether there is a target pixel point falling outside the target region among all the pixel points includes:
for each column of pixel points among all the pixel points, judging whether a target pixel point falling outside the target area exists in that column of pixel points;
and performing the probability updating operation on all the target pixel points falling outside the target area to update the probabilities corresponding to all the target pixel points falling outside the target area comprises:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target area exists in that column, multiplying the probability of each target pixel point falling outside the target area in that column by a preset value corresponding to the target pixel point to obtain a product, and updating the probability corresponding to the target pixel point according to the product.
In this optional embodiment, the preset value may be a fixed coefficient ε with a very small value (e.g. 0.0001). Alternatively, the coefficient ε may be an attenuation value that varies with position. For example, take as reference the central line of the target region bounded by the first boundary line and the second boundary line, i.e. the line equidistant from both boundary lines, and take that line as the starting point, with the farthest reach defined as the first boundary line or the second boundary line; then the closer the pixel point corresponding to ε is to the first or second boundary line, the larger the value of ε, and the farther the pixel point is from the first or second boundary line, the smaller the value of ε.
Therefore, this optional embodiment provides a column-wise processing step for the pixel points: rather than processing pixel points one by one, they can be processed column by column, which improves the processing efficiency for the pixel points and, to a certain extent, the layering efficiency of the OCT image.
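To make the column-wise update concrete, here is a short Python sketch of the fixed-coefficient variant: every pixel outside the target region has its probability multiplied by a small ε, processed column by column. Names and the NumPy representation are assumptions; ε = 0.0001 follows the example value above.

```python
import numpy as np

def attenuate_outside_region(prob_map: np.ndarray,
                             region_mask: np.ndarray,
                             eps: float = 1e-4) -> np.ndarray:
    """Multiply out-of-region probabilities by a small fixed coefficient.

    prob_map and region_mask are (rows, cols) arrays; region_mask is
    True inside the target region.
    """
    updated = prob_map.copy()
    for c in range(prob_map.shape[1]):    # process column by column
        outside = ~region_mask[:, c]
        if outside.any():                 # target pixel points exist here
            updated[outside, c] *= eps
    return updated
```

A position-dependent ε, as described above, would replace the scalar eps with a per-pixel attenuation map built from each pixel's distance to the two boundary lines.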
In another alternative embodiment, the deep neural network model is trained by:
acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to target features and boundary information corresponding to the target features;
dividing a B-Scan image set to obtain a training set and a test set, wherein the training set is used for training the deep neural network model, and the test set is used for verifying the reliability of the trained deep neural network model;
executing target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation comprises at least one of up-and-down moving processing, left-and-right turning processing, up-and-down reversing processing and contrast adjusting processing;
inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
analyzing and calculating the joint loss according to the output result, the B-Scan image and the boundary information included in the training set to obtain a joint loss value;
performing back propagation of the joint loss value in the deep neural network model, and performing iterative training for a preset number of cycles to obtain a trained deep neural network model;
and the test set is used for verifying the reliability of the trained deep neural network model.
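As a sketch of the target processing operations listed above, the following Python function randomly applies vertical shifting, left-right flipping, up-down inversion, and contrast adjustment to one B-Scan image. The probabilities and parameter ranges are illustrative assumptions, and 8-bit intensities are assumed; in practice the labels and boundary information must be transformed identically to the image.

```python
import numpy as np

def augment_bscan(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly apply the four target processing operations to a B-Scan."""
    out = image.astype(np.float64)
    if rng.random() < 0.5:                        # up-and-down moving
        out = np.roll(out, int(rng.integers(-20, 21)), axis=0)
    if rng.random() < 0.5:                        # left-and-right flipping
        out = out[:, ::-1]
    if rng.random() < 0.5:                        # up-and-down inversion
        out = out[::-1, :]
    if rng.random() < 0.5:                        # contrast adjustment
        gain = rng.uniform(0.8, 1.2)
        out = np.clip((out - out.mean()) * gain + out.mean(), 0, 255)
    return out
```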
In this alternative embodiment, analyzing the calculated joint loss to obtain a joint loss value comprises:
calculating a label Dice loss L_label_dice according to first data (denoted M0) included in the output result and the annotated pixel-level labels, where the pixel-level labels are encoded in a preset encoding mode (one-hot);
calculating a label cross-entropy loss L_label_ce according to the first data (M0) and the pixel-level labels;
calculating a boundary cross-entropy loss L_bd_ce according to second data (denoted B0) included in the output result and the boundary information;
calculating a smoothing loss L_bd_l1 according to third data (denoted B2) obtained from the output result and the boundary information;
multiplying each of the label Dice loss L_label_dice, the label cross-entropy loss L_label_ce, the boundary cross-entropy loss L_bd_ce, and the smoothing loss L_bd_l1 by its corresponding coefficient, and summing all the products to obtain a numerical result for the joint loss, which serves as the joint loss value.
In practical application, the joint loss is calculated by the following formula:
L = λ_label_dice · L_label_dice + λ_label_ce · L_label_ce + λ_bd_ce · L_bd_ce + λ_bd_l1 · L_bd_l1
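A minimal PyTorch-style sketch of this joint loss follows, assuming M0 is a per-pixel class score map and B0/B2 are boundary predictions compared against boundary targets; all names, shapes, the Dice formulation, and the default coefficients are assumptions, since the patent does not fix an implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(m0_logits, labels_onehot, b0, b2, boundary,
               lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four losses described above.

    m0_logits: (N, C, H, W) per-pixel class scores (first data M0).
    labels_onehot: (N, C, H, W) one-hot pixel-level labels.
    b0, b2: boundary predictions (second/third data); boundary holds
    the matching targets (this representation is an assumption).
    """
    probs = m0_logits.softmax(dim=1)
    # Label Dice loss over classes.
    inter = (probs * labels_onehot).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + labels_onehot.sum(dim=(2, 3))
    l_dice = (1 - (2 * inter + 1e-6) / (denom + 1e-6)).mean()
    # Pixel-level cross entropy (targets as class indices).
    l_ce = F.cross_entropy(m0_logits, labels_onehot.argmax(dim=1))
    # Boundary cross entropy and smooth-L1 smoothing loss.
    l_bd_ce = F.binary_cross_entropy_with_logits(b0, boundary)
    l_bd_l1 = F.smooth_l1_loss(b2, boundary)
    return (lambdas[0] * l_dice + lambdas[1] * l_ce
            + lambdas[2] * l_bd_ce + lambdas[3] * l_bd_l1)
```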
Therefore, by using the trained deep neural network model and applying the related processing operations (including normalization and dot-product operations) column by column to the probability distribution of each column of pixel points, the interlayer boundary information corresponding to the target feature in the B-Scan image is determined, achieving the aims of improving the layering efficiency of the OCT image and the accuracy of the layering result.
Example three
Referring to fig. 3, fig. 3 is a schematic structural diagram of an OCT image processing apparatus according to an embodiment of the present invention. The OCT image processing apparatus may be an OCT image processing terminal, device, system, or server, where the OCT image processing server may be a local server, a remote server, or a cloud server; when the OCT image processing server is a non-cloud server, the non-cloud server may be communicatively connected to a cloud server, and the embodiment of the present invention is not limited thereto. As shown in fig. 3, the OCT image processing apparatus may include an acquisition module 301, a first processing module 302, a second processing module 303, and a first determination module 304, wherein:
an obtaining module 301, configured to obtain a B-Scan image corresponding to the target feature.
The first processing module 302 is configured to perform image layering processing on the B-Scan image acquired by the acquisition module 301 through a preset image processing algorithm to obtain an initial layering result.
The second processing module 303 is configured to input the B-Scan image acquired by the acquiring module 301 into a pre-trained deep neural network model to obtain an output result, where the output result includes a probability corresponding to each pixel point corresponding to a target region of the B-Scan image, the probability corresponding to each pixel point is used to indicate a possibility that each pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target region is a region including a target feature.
The first determining module 304 is configured to determine, according to the probabilities corresponding to all the pixel points obtained by the second processing module 303, interlayer boundary information corresponding to the target feature in the B-Scan image obtained by the obtaining module 301.
Therefore, by implementing the OCT image processing apparatus described in fig. 3, image layering can be performed on the acquired B-Scan image including the target feature in a targeted manner, and by reducing the number of layering layers of the retinal tissue layer in the B-Scan image, the amount of data required to be calculated when the B-Scan image is processed is reduced, and the layering efficiency of the B-Scan image is improved; and interlayer boundary information corresponding to target features in the B-Scan image can be determined by combining a pre-trained deep neural network model, so that the layering efficiency is improved, and the accuracy of a layering result is further improved.
In an alternative embodiment, as shown in fig. 4, the first processing module 302 may include a filtering sub-module 3021, a function constructing sub-module 3022, a first determining sub-module 3023, and a second determining sub-module 3024, wherein:
The filtering sub-module 3021 is configured to perform filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image.
The function constructing sub-module 3022 is configured to calculate a positive gradient of the filtered image obtained by the filtering sub-module 3021 in the image vertical direction, and construct a first cost function according to the positive gradient.
The first determining sub-module 3023 is configured to determine, according to a predetermined path algorithm and the first cost function obtained by the function constructing sub-module 3022, a first minimum cost path from the left edge to the right edge of the filtered image obtained by the filtering sub-module 3021, so as to obtain a first layering line.
The first determining sub-module 3023 is further configured to determine, according to the path algorithm and the first cost function obtained by the function constructing sub-module 3022, a second minimum cost path from the left edge to the right edge of the filtered image obtained by the filtering sub-module 3021, so as to obtain a second layering line.
The function constructing sub-module 3022 is further configured to calculate a negative gradient of the filtered image obtained by the filtering sub-module 3021 in the image vertical direction, and construct a second cost function according to the negative gradient.
The second determining sub-module 3024 is configured to determine a search area, where the search area is the area below whichever of the first layering line and the second layering line obtained by the first determining sub-module 3023 is positioned lower.
The first determining sub-module 3023 is further configured to determine, according to the path algorithm and the second cost function obtained by the function constructing sub-module 3022, a third minimum cost path from the left edge to the right edge of the search area determined by the second determining sub-module 3024, and perform a smoothing filtering operation on the third minimum cost path, so as to obtain a third layering line.
The second determining sub-module 3024 is further configured to determine the first layering line, the second layering line and the third layering line obtained by the first determining sub-module 3023 as the initial layering result.
Further, the first processing module 302 may further include a marking sub-module 3025, where:
The marking sub-module 3025 is configured to mark the first minimum cost path as an unreachable path in the filtered image obtained by the filtering sub-module 3021, before the first determining sub-module 3023 determines the second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain the second layering line.
Therefore, this optional embodiment provides a minimum cost path algorithm by which the required first layering line, second layering line and third layering line can be delineated in the B-Scan image, and by reducing the number of layering layers of the retinal tissue layer in the B-Scan image, the amount of data to be computed when the B-Scan image is processed is reduced, thereby improving the layering efficiency of the image layering algorithm and the accuracy of the layering result.
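The path algorithm itself is left predetermined but unspecified above; by way of a non-limiting illustration, a dynamic-programming minimum cost path that advances one column per step may be written in Python as follows, where the step set (at most one row up or down per column) and the mapping from gradient to cost are assumptions of the sketch:

    import numpy as np

    def min_cost_path(cost):
        # Minimum cost path from the left edge to the right edge of a (H, W)
        # cost map; each step moves one column right and at most one row up/down.
        H, W = cost.shape
        acc = np.empty((H, W))                    # accumulated cost
        back = np.zeros((H, W), dtype=np.int64)   # backpointers
        acc[:, 0] = cost[:, 0]
        for x in range(1, W):
            for y in range(H):
                lo, hi = max(0, y - 1), min(H, y + 2)
                k = int(np.argmin(acc[lo:hi, x - 1]))
                back[y, x] = lo + k
                acc[y, x] = cost[y, x] + acc[lo + k, x - 1]
        path = [int(np.argmin(acc[:, -1]))]       # cheapest endpoint on the right edge
        for x in range(W - 1, 0, -1):
            path.append(int(back[path[-1], x]))
        return np.array(path[::-1])               # path[x] = row of the line in column x

    # For the first cost function built from the positive vertical gradient,
    # one assumed mapping gives low cost where the gradient is strongly positive:
    # grad = np.gradient(filtered.astype(float), axis=0)
    # first_line = min_cost_path(grad.max() - grad)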
In another alternative embodiment, as shown in fig. 4, the OCT image processing apparatus further includes a second determining module 305, wherein:
and the second determining module 305 is configured to determine a target region including the target feature in the B-Scan image before the second processing module 303 inputs the acquired B-Scan image into the pre-trained deep neural network model to obtain an output result.
The manner in which the second determining module 305 determines the target region including the target feature in the B-Scan image specifically includes:
shifting, of the first layering line and the second layering line, the layering line positioned higher upwards by a first preset distance in the image vertical direction to obtain a first boundary line;
shifting the third layering line downwards by a second preset distance in the image vertical direction to obtain a second boundary line;
and determining the region below the first boundary line and above the second boundary line as the target region including the target feature in the B-Scan image.
Therefore, this optional embodiment can intelligently determine the target region including the target feature, so that when image processing operations are subsequently executed for the target region, the amount of data to be processed by the image processing algorithm is reduced, which improves the layering efficiency of the image to a certain extent; meanwhile, after the target region including the target feature is determined, the interference of redundant regions not including the target feature on the image layering processing is reduced, which improves the accuracy of the image layering result.
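By way of a non-limiting illustration, the first boundary line, the second boundary line and the resulting target region may be sketched as follows; representing each layering line as one row index per column and using fixed integers for the preset distances d1 and d2 are assumptions of the sketch:

    import numpy as np

    def target_region_mask(height, upper_line, third_line, d1=10, d2=10):
        # upper_line: per-column rows of the higher of the first/second layering
        # lines; third_line: per-column rows of the third layering line.
        top = np.clip(np.asarray(upper_line) - d1, 0, height - 1)     # first boundary line
        bottom = np.clip(np.asarray(third_line) + d2, 0, height - 1)  # second boundary line
        rows = np.arange(height)[:, None]                             # (H, 1) row indices
        return (rows >= top[None, :]) & (rows <= bottom[None, :])     # (H, W) region mask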
In yet another alternative embodiment, as shown in fig. 4, the processing apparatus for OCT images further includes a determining module 306 and a third processing module 307, wherein:
the determining module 306 is configured to: after the second processing module 303 inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result, and before the first determining module 304 determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points, determine whether any target pixel point among all the pixel points falls outside the target region; and when it is determined that no target pixel point falls outside the target region, trigger the first determining module 304 to perform the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
The third processing module 307 is configured to, when the determining module 306 determines that there is a target pixel point that falls outside the target region among all the pixel points, perform a probability updating operation on all the target pixel points that fall outside the target region to update probabilities corresponding to all the target pixel points, and trigger the first determining module 304 to perform an operation of determining interlayer boundary information corresponding to a target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
Therefore, this optional embodiment can perform the probability updating operation on all target pixel points falling outside the target region, which helps improve the accuracy of the determined interlayer boundary information when the interlayer boundary information corresponding to the target feature in the B-Scan image is determined according to the probabilities corresponding to all the pixel points.
In this optional embodiment, the manner in which the determining module 306 determines whether any target pixel point among all the pixel points falls outside the target region specifically includes:
judging, for each column of pixel points among all the pixel points, whether a target pixel point falling outside the target region exists in that column.
The manner in which the third processing module 307 performs the probability updating operation on all target pixel points falling outside the target region to update their corresponding probabilities specifically includes:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target region exists in that column, multiplying the probability corresponding to each such target pixel point by a preset numerical value corresponding to the target pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
Therefore, this optional embodiment provides column-wise processing of the pixel points: the pixel points can be processed one by one, or an entire column can be processed at a time, which improves the processing efficiency of the pixel points and thus the layering efficiency of the image to a certain extent; in addition, the probability updating operation reduces the chance that the probabilities corresponding to all the pixel points include abnormal probabilities when the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image is subsequently performed according to those probabilities.
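A minimal sketch of this column-wise probability update follows, assuming the probabilities form a (H, W) array and the preset numerical value is a single constant (the description above allows a per-pixel value):

    import numpy as np

    def update_out_of_region(prob, region_mask, preset=0.1):
        # Multiply the probability of every pixel outside the target region by a
        # preset value; 0.1 is an assumed constant, processing one column at a time.
        prob = prob.copy()
        for x in range(prob.shape[1]):
            outside = ~region_mask[:, x]
            if outside.any():                     # column has out-of-region pixels
                prob[outside, x] *= preset
        return prob

With a single constant, the loop is equivalent to the vectorized form prob[~region_mask] *= preset; the explicit loop mirrors the column-by-column description above.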
In yet another optional embodiment, the manner in which the first determining module 304 determines the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points specifically includes:
for each column of pixel points in all pixel points, performing normalization processing on the probability distribution of the column of pixel points to obtain the normalized probability distribution of the column of pixel points;
for each column of pixel points in all the pixel points, performing dot product operation on the normalized probability distribution of the column of pixel points and the row number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution result corresponding to each column of pixel points among all the pixel points, as illustrated by the sketch after this list.
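The normalization and dot product described above amount to a per-column soft argmax; a minimal sketch, assuming a (H, W) probability map for one interlayer boundary, follows:

    import numpy as np

    def interlayer_boundary(prob, eps=1e-12):
        # For each column: normalize the probability distribution, then take its
        # dot product with the row-number distribution; the result is the expected
        # row of the interlayer boundary in that column.
        H, W = prob.shape
        norm = prob / np.maximum(prob.sum(axis=0, keepdims=True), eps)
        rows = np.arange(H, dtype=float)[:, None]     # row-number distribution (H, 1)
        return (norm * rows).sum(axis=0)              # boundary row per column, shape (W,)

Because every row index is weighted by its normalized probability, the returned boundary position is continuous (sub-pixel), which tends to give a smoother layering line than a hard per-column argmax.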
And the deep neural network model is obtained by training in the following way:
acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to target features and boundary information corresponding to the target features;
dividing a B-Scan image set to obtain a training set and a test set, wherein the training set is used for training the deep neural network model, and the test set is used for verifying the reliability of the trained deep neural network model;
executing target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation comprises at least one of vertical shifting processing, horizontal flipping processing, vertical flipping processing and contrast adjustment processing;
inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
analyzing and calculating the joint loss according to the output result, the B-Scan image and the boundary information included in the training set to obtain a joint loss value;
performing back propagation on the combined loss value in the deep neural network model, and performing iterative training with a preset period length to obtain a trained deep neural network model;
and the test set is used for verifying the reliability of the trained deep neural network model.
Therefore, with the deep neural network model obtained through training, the related processing operations (including normalization processing and dot product operation) are performed column by column on the probability distribution corresponding to each column of pixel points, the interlayer boundary information corresponding to the target feature in the B-Scan image is determined, and the aims of improving the layering efficiency of the OCT image and improving the accuracy of the layering result are fulfilled.
EXAMPLE IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of another OCT image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the OCT image processing apparatus includes:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
further, an input interface 403 and an output interface 404 coupled to the processor 402 may be included;
the processor 402 calls the executable program code stored in the memory 401 to execute the steps in the OCT image processing method described in the first embodiment or the second embodiment of the present invention.
EXAMPLE V
An embodiment of the present invention discloses a computer program product, which includes a non-transitory computer storage medium storing a computer program, and the computer program is operable to cause a computer to execute the steps in the processing method of an OCT image described in the first embodiment or the second embodiment.
The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a computer storage medium, wherein the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other medium which can be used to carry or store data and which can be read by a computer.
Finally, it should be noted that the OCT image processing method and apparatus disclosed in the embodiments of the present invention are only preferred embodiments of the present invention and are only used for illustrating, not limiting, the technical solutions of the present invention; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of processing an OCT image, the method comprising:
acquiring a B-Scan image corresponding to the target feature;
performing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, wherein the output result comprises the probability corresponding to each pixel point corresponding to a target region of the B-Scan image, the probability corresponding to each pixel point is used for representing the possibility that each pixel point belongs to an interlayer boundary of two adjacent layers included in the initial layering result, and the target region is a region including the target feature;
and determining interlayer boundary information corresponding to the target features in the B-Scan image according to the probability corresponding to all the pixel points.
2. The OCT image processing method of claim 1, wherein the performing an image layering process on the B-Scan image by a preset image processing algorithm to obtain an initial layering result comprises:
performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
calculating a positive gradient of the filtered image in the image vertical direction, and constructing a first cost function according to the positive gradient;
determining a first minimum cost path from the left edge to the right edge of the filtered image according to a predetermined path algorithm and the first cost function, so as to obtain a first layering line;
determining a second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function, so as to obtain a second layering line;
calculating a negative gradient of the filtered image in the image vertical direction, and constructing a second cost function according to the negative gradient;
determining a search area, wherein the search area is the area below whichever of the first layering line and the second layering line is positioned lower;
determining a third minimum cost path from the left edge to the right edge of the search area according to the path algorithm and the second cost function, and performing a smoothing filtering operation on the third minimum cost path, so as to obtain a third layering line;
determining the first layering line, the second layering line and the third layering line as the initial layering result;
wherein before determining the second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain the second layering line, the method further comprises:
marking the first minimum cost path as an unreachable path in the filtered image.
3. The method for processing the OCT image of claim 2, wherein before inputting the B-Scan image into a pre-trained deep neural network model and obtaining an output result, the method further comprises:
determining a target region comprising the target feature in the B-Scan image;
wherein the determining the target region including the target feature in the B-Scan image comprises:
shifting, of the first layering line and the second layering line, the layering line positioned higher upwards by a first preset distance in the image vertical direction to obtain a first boundary line;
shifting the third layering line downwards by a second preset distance in the image vertical direction to obtain a second boundary line;
and determining the region below the first boundary line and above the second boundary line as the target region including the target feature in the B-Scan image.
4. The OCT image processing method of claim 3, wherein after the B-Scan image is input into the pre-trained deep neural network model and the output result is obtained, and before the interlayer boundary information corresponding to the target feature in the B-Scan image is determined according to the probabilities corresponding to all the pixel points, the method further comprises:
judging whether a target pixel point falling outside the target region exists among all the pixel points;
when no target pixel point falling outside the target region exists among all the pixel points, triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points;
and when target pixel points falling outside the target region exist among all the pixel points, performing a probability updating operation on all the target pixel points falling outside the target region to update the probabilities corresponding to all the target pixel points, and triggering execution of the operation of determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points.
5. The OCT image processing method of claim 4, wherein the judging whether a target pixel point falling outside the target region exists among all the pixel points comprises:
for each column of pixel points among all the pixel points, judging whether a target pixel point falling outside the target region exists in that column;
and the performing a probability updating operation on all the target pixel points falling outside the target region to update the probabilities corresponding to all the target pixel points falling outside the target region comprises:
for each column of pixel points among all the pixel points, if a target pixel point falling outside the target region exists in that column, multiplying the probability corresponding to each target pixel point falling outside the target region in that column by a preset numerical value corresponding to the target pixel point to obtain a product result, and updating the probability corresponding to the target pixel point according to the product result.
6. The method for processing the OCT image of claim 4 or 5, wherein the determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points comprises:
for each column of pixel points in all the pixel points, performing normalization processing on the probability distribution of the column of pixel points to obtain the normalized probability distribution of the column of pixel points;
for each column of pixel points in all the pixel points, performing dot product operation on the normalized probability distribution of the column of pixel points and the row number distribution corresponding to the column of pixel points to obtain an interlayer distribution result corresponding to the column of pixel points;
and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the interlayer distribution result corresponding to each column of pixel points among all the pixel points.
7. The method for processing the OCT image of any one of claims 1-6, wherein the deep neural network model is trained by:
acquiring a B-Scan image set comprising labeling information, wherein the labeling information corresponding to each B-Scan image in the B-Scan image set comprises label information corresponding to the target feature and boundary information corresponding to the target feature;
dividing the B-Scan image set to obtain a training set and a test set, wherein the training set is used for training a deep neural network model, and the test set is used for verifying the reliability of the trained deep neural network model;
executing target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation comprises at least one of vertical shifting processing, horizontal flipping processing, vertical flipping processing and contrast adjustment processing;
inputting the processing result into a predetermined deep neural network model as input data to obtain an output result;
analyzing and calculating joint loss according to the output result, the B-Scan images included in the training set and the boundary information to obtain a joint loss value;
carrying out back propagation on the combined loss value in the deep neural network model, and carrying out iterative training with a preset period length to obtain a trained deep neural network model;
wherein the test set is used for verifying the reliability of the trained deep neural network model.
8. The method of processing an OCT image according to any one of claims 1 to 7, wherein the target feature is a retinal feature.
9. An apparatus for processing an OCT image, the apparatus comprising:
the acquisition module is used for acquiring a B-Scan image corresponding to the target feature;
the first processing module is used for executing image layering processing on the B-Scan image through a preset image processing algorithm to obtain an initial layering result;
a second processing module, configured to input the B-Scan image into a pre-trained deep neural network model to obtain an output result, where the output result includes a probability corresponding to each pixel point corresponding to a target region of the B-Scan image, the probability corresponding to each pixel point is used to indicate a possibility that each pixel point belongs to an interlayer boundary between two adjacent layers included in the initial layering result, and the target region is a region including the target feature;
and the first determining module is used for determining interlayer boundary information corresponding to the target feature in the B-Scan image according to the probability corresponding to all the pixel points.
10. An apparatus for processing an OCT image, the apparatus comprising:
a memory storing executable program code;
a processor coupled with the memory;
an input interface and an output interface coupled to the processor;
the processor calls the executable program code stored in the memory to execute the method of processing the OCT image of any of claims 1-8.
CN202111435331.4A 2021-11-29 2021-11-29 OCT image processing method and device Active CN114092464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111435331.4A CN114092464B (en) 2021-11-29 2021-11-29 OCT image processing method and device


Publications (2)

Publication Number Publication Date
CN114092464A (en) 2022-02-25
CN114092464B CN114092464B (en) 2024-06-07

Family

ID=80305758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111435331.4A Active CN114092464B (en) 2021-11-29 2021-11-29 OCT image processing method and device

Country Status (1)

Country Link
CN (1) CN114092464B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105374028A (en) * 2015-10-12 2016-03-02 中国科学院上海光学精密机械研究所 Optical coherence tomography retina image layering method
US20170109883A1 (en) * 2015-10-19 2017-04-20 The Charles Stark Draper Laboratory, Inc. System and method for the segmentation of optical coherence tomography slices
CN110390650A (en) * 2019-07-23 2019-10-29 中南大学 OCT image denoising method based on intensive connection and generation confrontation network
CN111462160A (en) * 2019-01-18 2020-07-28 北京京东尚科信息技术有限公司 Image processing method, device and storage medium
CN112330638A (en) * 2020-11-09 2021-02-05 苏州大学 Horizontal registration and image enhancement method for retina OCT (optical coherence tomography) image
CN112700390A (en) * 2021-01-14 2021-04-23 汕头大学 Cataract OCT image repairing method and system based on machine learning


Also Published As

Publication number Publication date
CN114092464B (en) 2024-06-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant