CN110599491A - Priori information-based eye image segmentation method, device, equipment and medium

Priori information-based eye image segmentation method, device, equipment and medium

Info

Publication number
CN110599491A
CN110599491A
Authority
CN
China
Prior art keywords
image
segmentation
learning model
fundus
model
Prior art date
Legal status
Granted
Application number
CN201910833947.3A
Other languages
Chinese (zh)
Other versions
CN110599491B (en)
Inventor
Chen Sihong (陈思宏)
Current Assignee
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd filed Critical Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910833947.3A priority Critical patent/CN110599491B/en
Publication of CN110599491A publication Critical patent/CN110599491A/en
Application granted granted Critical
Publication of CN110599491B publication Critical patent/CN110599491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method, an apparatus, a device and a medium for segmenting an eye image based on prior information. The method comprises: acquiring a fundus image data set and calculating prior information from the fundus image data set; constructing a machine learning model and training it based on the prior information to obtain an image segmentation model; acquiring a target image to be segmented, the target image containing an eye; and inputting the target image into the image segmentation model to obtain the target image segmentation result output by the image segmentation model. Because prior information is introduced to train the image segmentation model, the segmentation result of the target image to be segmented has strong interpretability and high accuracy. A classification model can further be trained on the basis of the prior information and used to classify the segmentation result of the target image, so that a classification result with strong interpretability and high accuracy is obtained, which gives the scheme a wide application prospect.

Description

Priori information-based eye image segmentation method, device, equipment and medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a medium for segmenting an eye image based on prior information.
Background
In the prior art, there are two main schemes for eye image classification based on machine learning: the first performs eye image classification based on the deep learning model MRNet, and the second performs eye image classification based on a Recurrent Attention Convolutional Neural Network (RA-CNN).
As shown in fig. 1(a), which is a schematic structural diagram of the deep learning model MRNet: MRNet is based on the AlexNet convolutional neural network structure, takes the two-dimensional slices of a three-dimensional knee image as input, and aggregates the outputs of the convolutional layers through a max pooling operation so as to finally give the probability that the three-dimensional knee image points to a certain target object. However, the training process of MRNet does not introduce prior knowledge, so it cannot explain why the three-dimensional knee image receives that probability.
As shown in fig. 1(b), which is a schematic structural diagram of the Recurrent Attention Convolutional Neural Network, a classification technique developed for natural images. By introducing an attention branch, attention weights are applied during feature extraction; however, training of the attention branch is not constrained by prior knowledge, so there is a risk of errors in the attention area, and its accuracy still needs to be improved.
In conclusion, both eye image classification methods have difficulty exploiting prior knowledge, so the image classification results suffer from poor interpretability and poor directivity, which limits the application prospect of the fundus image segmentation results.
Disclosure of Invention
In order to solve the technical problem in the prior art that eye image classification does not use prior knowledge, so that the resulting image segmentation and classification results have poor interpretability, the embodiments of the present invention provide a method, an apparatus, a device and a medium for eye image segmentation based on prior information.
In one aspect, the present invention provides a method for segmenting an eye image based on prior information, the method comprising:
acquiring a fundus image data set, and calculating prior information according to the fundus image data set;
constructing a machine learning model, and training the machine learning model based on the prior information to obtain an image segmentation model;
acquiring a target image to be segmented, wherein the target image comprises eyes;
and inputting the target image into the image segmentation model to obtain a target image segmentation result output by the image segmentation model.
In another aspect, the present invention provides an eye image segmentation apparatus based on prior information, the apparatus comprising:
the fundus image data set acquisition module is used for acquiring a fundus image data set and calculating prior information according to the fundus image data set;
the machine learning model training module is used for constructing a machine learning model and training the machine learning model based on the prior information to obtain an image segmentation model;
the target image acquisition module is used for acquiring a target image to be segmented, wherein the target image comprises eyes;
and the segmentation module is used for inputting the target image into the image segmentation model so as to obtain a target image segmentation result output by the image segmentation model.
In another aspect, the present invention provides an eye image segmentation apparatus based on prior information, which is characterized in that the apparatus includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement an eye image segmentation method based on prior information.
In another aspect, the present invention provides a computer storage medium, wherein at least one instruction, at least one program, code set, or instruction set is stored in the storage medium, and the at least one instruction, at least one program, code set, or instruction set is loaded by a processor and executes a method for segmenting an eye image based on a priori information.
The invention provides a priori information-based eye image segmentation method, a priori information-based eye image segmentation device, a priori information-based eye image segmentation equipment and a priori information-based eye image segmentation medium. According to the image segmentation method, the priori information is introduced to train the image segmentation model, so that the segmentation result of the target image to be segmented has high interpretability and accuracy. Correspondingly, the embodiment of the invention can further train the classification model based on the prior information and classify the segmentation result of the target image to be segmented by using the classification model so as to obtain the classification result with strong interpretability and high accuracy. Different from the prior art, the embodiment of the invention introduces the prior knowledge to train the image segmentation model and the classification model, so that the image segmentation result and the classification result obtained by the embodiment of the invention have interpretability, and the scheme of the embodiment of the invention has wider application prospect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1(a) is a schematic structural diagram of a deep learning model MRNet provided by the present invention;
FIG. 1(b) is a schematic structural diagram of the Recurrent Attention Convolutional Neural Network provided by the present invention;
FIG. 2 is a schematic diagram of an implementation environment of a prior information-based eye image segmentation method provided by the present invention;
FIG. 3 is a flowchart of a method for segmenting an eye image based on prior information according to the present invention;
FIG. 4 is a flowchart of calculating prior information from the fundus image dataset provided by the present invention;
FIG. 5 is a schematic diagram of a machine learning model provided by the present invention;
FIG. 6 is a block diagram of a self-segmentation network architecture provided by the present invention;
FIG. 7 is a schematic diagram of the data processing logic of the self-segmentation network provided by the present invention;
FIG. 8 is a flow chart of a method for constructing a loss function according to the present invention;
FIG. 9 is a flow chart of constructing a center loss component provided by the present invention;
FIG. 10 is a schematic diagram of a joint learning model provided by the present invention and obtained by combining a machine learning model and a classification learning model;
FIG. 11 is a block diagram of an eye image segmentation apparatus based on prior information according to the present invention;
fig. 12 is a hardware structural diagram of an apparatus for implementing the method provided by the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages disclosed in the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the invention and are not intended to limit the embodiments of the invention.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention provides an eye image segmentation method based on prior information. First, the embodiment of the present invention discloses an implementation environment of the eye image segmentation method based on prior information in a possible embodiment.
Referring to fig. 2, the implementation environment includes: client 01, server 03.
The client 01 may include physical devices, and may also include software running in the physical devices, such as applications with an eye image segmentation function. The client 01 may be communicatively connected to the server 03 based on a Browser/Server (B/S) mode or a Client/Server (C/S) mode.
The client 01 may send a target image to be segmented to the server 03, where the target image includes an eye image, and the server 03 may train an image segmentation model based on preset prior information, output a segmentation result of the target image according to the image segmentation model, and transmit the segmentation result to the client 01. In a preferred embodiment, the server 03 may further classify the target image according to a segmentation result of the target image, specifically, may train an image classification model, obtain a classification result by inputting the segmentation result into the target classification model, and transmit the classification result to the client 01. For example, the classification result may be represented as a probability that the target image belongs to a target class.
The server 03 may comprise an independently operating server, or a distributed server, or a server cluster composed of a plurality of servers.
Referring to fig. 3, a flowchart of a method for segmenting an eye image based on prior information is shown, where the method may be implemented by a server in the implementation environment of fig. 2 as an execution subject, and the method may include:
s101, acquiring an eye fundus image data set, and calculating prior information according to the eye fundus image data set.
In one possible embodiment, the fundus image dataset may use the existing Fundus Image Registration (FIRE) retinal image dataset. Further, the retinal image data can also serve as the data source of the training set for subsequently training the machine learning model.
FIRE is a retinal fundus image dataset containing 129 retinal images, combined into 134 image pairs by different feature combinations, and it includes labels for the classification of the images and their corresponding target objects. These image pairs are divided into 3 classes according to their characteristics. The fundus images were acquired by a Nidek AFC-210 fundus camera with a resolution of 2912x2912 and a field of view of 40 degrees, and were collected from 39 patients at the Papageorgiou Hospital jointly with the Aristotle University of Thessaloniki.
Specifically, in the embodiment of the present invention, calculating prior information from the fundus image data set includes, as shown in fig. 4:
s1011, obtaining a first segmentation image set, a second segmentation image set and an eye ground pit central position coordinate point set according to the eye ground image data set, wherein the first segmentation image set comprises a plurality of eye ground optic disc segmentation images, and the second segmentation image set comprises a plurality of eye ground optic cup segmentation images.
And S1013, calculating a spatial probability distribution result of the fundus oculi optic disc position according to the first divided image set, calculating a spatial probability distribution result of the fundus oculi cup position according to the second divided image set, and calculating a spatial probability distribution result of the fundus fossa center position according to the fundus fossa center position coordinate set.
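As an illustration of step S1013, the sketch below shows one way such spatial priors could be computed; the helper names and the Gaussian smoothing of the fovea coordinates are assumptions rather than details fixed by the embodiment.

```python
# A minimal sketch (assumed helpers) of computing the spatial priors of
# step S1013 from aligned binary masks and fovea center coordinates.
import numpy as np

def spatial_probability_map(masks: np.ndarray) -> np.ndarray:
    """masks: (N, H, W) binary optic-disc or optic-cup segmentation masks.
    Returns an (H, W) map giving, per pixel, the fraction of images in
    which the structure covers that pixel."""
    return masks.astype(np.float64).mean(axis=0)

def fovea_probability_map(centers, shape, sigma=10.0):
    """centers: iterable of (row, col) fovea coordinates; sigma is an
    assumed smoothing width. Returns an (H, W) probability map obtained
    by Gaussian kernel density estimation over the coordinate points."""
    H, W = shape
    vv, uu = np.mgrid[0:H, 0:W]
    prior = np.zeros((H, W))
    for r, c in centers:
        prior += np.exp(-((vv - r) ** 2 + (uu - c) ** 2) / (2.0 * sigma ** 2))
    return prior / prior.sum()
```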
S103, constructing a machine learning model, and training the machine learning model based on the prior information to obtain an image segmentation model.
Specifically, as shown in fig. 5, the machine learning model includes a feature extractor, a self-segmentation network and a semantic component base, where the feature extractor may use a neural network trained in advance, while the self-segmentation network and the semantic component base are the training objects of the machine learning model. The images in the training set are fed to the feature extractor and the self-segmentation network respectively so as to train the self-segmentation network and the semantic component base. Specifically, the feature extractor performs feature extraction on the images in the training set to obtain a first feature atlas; the self-segmentation network segments the images in the training set to obtain a mask set of segmented images; the mask set of segmented images and the semantic component base are processed through a preset excitation function to obtain a second feature atlas; and the difference between the first feature atlas and the second feature atlas drives the first loss function of the machine learning model. The machine learning model is trained based on the first loss function, and the self-segmentation network in the trained machine learning model serves as the image segmentation model. The segmented image corresponding to an original input image is obtained by point-multiplying the mask set output by the self-segmentation network with the original input image.
In one possible embodiment, the excitation function is the linear rectification function (Rectified Linear Unit, ReLU), an activation function commonly used in the field of machine learning, which generally refers to the ramp function and its variants. In mathematical terms the linear rectification function is the ramp function f(x) = max(0, x); in the field of machine learning, linear rectification is used as the activation function of a machine learning model to produce a nonlinear output after a linear transformation.
In particular, the feature extractor may use a pre-trained VGG network. The VGG network is named after the Visual Geometry Group of the Department of Engineering Science, University of Oxford; it is a deep convolutional neural network developed by the Visual Geometry Group together with researchers from Google DeepMind. It comprises a series of convolutional network models beginning with VGG, from VGG16 to VGG19, which can be applied to face recognition, image classification and the like. The VGG network deepens the number of network layers while avoiding excessive parameters: all layers adopt small 3x3 convolution kernels, and the convolution stride is set to 1. The input to the VGG is a 224x224 RGB image; the RGB mean is computed over the training set images and subtracted before each image is passed into the network, which uses 3x3 or 1x1 convolution kernels with a fixed stride of 1. The VGG network has 3 fully connected layers, and VGG11 to VGG19 are obtained according to the total number of convolutional and fully connected layers: the smallest, VGG11, has 8 convolutional layers and 3 fully connected layers, while the largest, VGG19, has 16 convolutional layers and 3 fully connected layers. The embodiment of the present invention does not limit the specific variant of the VGG network; for example, any one of VGG16 to VGG19 may be used. When the VGG network is initialized in the embodiment of the invention, its weights may be pre-trained on natural images.
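As a concrete illustration, the sketch below loads such a pre-trained VGG16 backbone with torchvision; freezing the weights and truncating the network at its convolutional part are assumptions consistent with the description above, not requirements of the embodiment.

```python
# A minimal sketch of a frozen, ImageNet-pre-trained VGG16 feature extractor.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
feature_extractor = vgg.features.eval()      # convolutional backbone only
for p in feature_extractor.parameters():
    p.requires_grad = False                  # the extractor itself is not trained

x = torch.randn(1, 3, 224, 224)              # one 224x224 RGB image
with torch.no_grad():
    first_feature_maps = feature_extractor(x)
print(first_feature_maps.shape)              # torch.Size([1, 512, 7, 7])
```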
In order to obtain a better image segmentation effect, an embodiment of the present invention provides a feasible self-segmentation network structure. As shown in fig. 6, the self-segmentation network sequentially includes, along the data processing order, a first convolution layer, a first residual layer, a first max pooling layer, a second residual layer, a second max pooling layer, a residual combination layer, a first deconvolution layer, a second convolution layer, a second deconvolution layer, a convolution combination layer and a linear interpolation layer, where the residual combination layer comprises three consecutive residual layers and the convolution combination layer comprises two consecutive convolution layers.
Referring to fig. 7, a schematic diagram of the data processing logic of the self-segmentation network is shown, where each rectangle represents a feature map after the corresponding data processing step.
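The PyTorch sketch below follows the layer order of fig. 6 and the data flow of fig. 7; the channel width, the number of mask channels K and the softmax used to turn logits into masks are all assumptions, since the embodiment does not fix them.

```python
# A sketch of the self-segmentation network of FIG. 6; widths are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Residual(nn.Module):
    """Identity-mapped residual block (two 3x3 convolutions)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return F.relu(x + self.body(x))

class SelfSegmentationNet(nn.Module):
    def __init__(self, k_parts=4, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1),            # first convolution layer
            Residual(ch),                              # first residual layer
            nn.MaxPool2d(2),                           # first max pooling layer
            Residual(ch),                              # second residual layer
            nn.MaxPool2d(2),                           # second max pooling layer
            Residual(ch), Residual(ch), Residual(ch),  # residual combination layer
            nn.ConvTranspose2d(ch, ch, 2, stride=2),   # first deconvolution layer
            nn.Conv2d(ch, ch, 3, padding=1),           # second convolution layer
            nn.ConvTranspose2d(ch, ch, 2, stride=2),   # second deconvolution layer
            nn.Conv2d(ch, ch, 3, padding=1),           # convolution combination layer
            nn.Conv2d(ch, k_parts, 3, padding=1))
    def forward(self, x):
        logits = self.net(x)
        logits = F.interpolate(logits, size=x.shape[-2:],   # linear interpolation layer
                               mode='bilinear', align_corners=False)
        return torch.softmax(logits, dim=1)                 # K-channel mask set
```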
As shown in FIG. 5, the machine learning model includes three branches. The first branch is used for obtaining images in the training set and inputting them into the feature extractor, and a first feature atlas is obtained from the output of the feature extractor under a preset saliency constraint. Specifically, in this embodiment of the present invention, the first feature atlas may be represented as a C×H×W tensor, where C, H and W respectively represent the number of channels of the feature extractor and the length and width of the two-dimensional feature maps, and x denotes the channel index.
The second branch is used for acquiring images in the training set and inputting them into the self-segmentation network to obtain a mask set of segmented images; correspondingly, the generated mask set may be represented as a K×H×W tensor, where K, H and W respectively represent the number of channels of the self-segmentation network and the length and width of the masks, and y denotes the channel index.
The third branch is used for outputting a semantic component base matched with the segmentation image set, wherein the semantic component base generates K vectors, each of length C.
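The sketch below shows one way the three branches can meet: the K mask channels and the K semantic basis vectors are combined through the ReLU excitation function into a C-channel second feature atlas that is directly comparable with the extractor output; all tensor names and sizes are assumptions.

```python
# Combining the mask set R (K x H x W) with the semantic basis (K x C)
# through the ReLU excitation function to form the second feature atlas.
import torch
import torch.nn.functional as F

K, C, H, W = 4, 512, 7, 7
masks = torch.rand(K, H, W)     # mask set from the self-segmentation network
basis = torch.rand(K, C)        # K semantic component vectors of length C

second_feature_maps = F.relu(torch.einsum('khw,kc->chw', masks, basis))
print(second_feature_maps.shape)  # torch.Size([512, 7, 7])
```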
In order for the prior knowledge to shape the training of the machine learning model, so that the trained image segmentation model segments the target image under the guidance of the prior knowledge and yields a segmentation result with strong interpretability and high precision, the embodiment of the invention designs the first loss function of the training process so that it contains the guidance information of the prior knowledge.
In a possible embodiment, the present invention provides a first loss function constructing method, as shown in fig. 8, where the method includes:
s1, constructing a central loss component, wherein the central loss component points to the central loss generated in the second characteristic diagram set.
Specifically, the constructing the central loss component, as shown in fig. 9, includes:
and S11, acquiring the characteristic images corresponding to all channels in the second characteristic image set.
And S13, calculating the centroid position of each characteristic image.
Specifically, the centroid position is calculated as $(\hat{u}_y, \hat{v}_y) = \sum_{u,v}(u,v)\,R(y,u,v) \,/\, \sum_{u,v}R(y,u,v)$, where $y$ is the channel index and $R(y,u,v)$ represents the value at $(u,v)$ of the feature map of the $y$-th channel.
And S15, calculating the central loss component according to the centroid position of each characteristic image.
Specifically, the central loss component is calculated by the formula $L_{con} = \sum_{y}\sum_{u,v} R(y,u,v)\,\|(u,v)-(\hat{u}_y,\hat{v}_y)\|^2$, which penalizes responses that lie far from the centroid of their own channel.
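Under the formulas reconstructed above, steps S11 to S15 can be sketched as follows; the function name and the tensor layout are assumptions.

```python
# Centroid computation and center loss for a (K, H, W) feature map set.
import torch

def center_loss(R: torch.Tensor) -> torch.Tensor:
    K, H, W = R.shape
    vv, uu = torch.meshgrid(torch.arange(H, dtype=R.dtype),
                            torch.arange(W, dtype=R.dtype), indexing='ij')
    mass = R.sum(dim=(1, 2)).clamp_min(1e-8)       # total response per channel
    u_hat = (R * uu).sum(dim=(1, 2)) / mass        # centroid column per channel
    v_hat = (R * vv).sum(dim=(1, 2)) / mass        # centroid row per channel
    sq_dist = ((uu - u_hat.view(K, 1, 1)) ** 2 +
               (vv - v_hat.view(K, 1, 1)) ** 2)    # distance to own centroid
    return (R * sq_dist).sum()                     # penalize spatial spread
```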
And S3, constructing a semantic loss component, wherein the semantic loss component points to the loss generated by the difference between the feature images in the first feature image set and the corresponding feature images in the second feature image set.
Specifically, the semantic loss component is calculated by the formula $L_{SC} = \sum_{u,v}\| V(u,v) - \sum_{y} R(y,u,v)\,w_y \|^2$, where $V(u,v)$ points to the feature vectors of the first feature image set, $R(y,u,v)$ points to the feature images of the second feature image set, and $w_y$ points to the vectors generated by the semantic component base.
S5, constructing an orthogonal loss component, wherein the orthogonal loss component points to the degree of orthogonality of the vectors generated by the semantic component base.
Specifically, the orthogonal loss component is calculated by the formula $L_{on} = \|WW^{T}-I_{K}\|_{F}^{2}$, where $W$ is the $K \times C$ matrix formed by the $K$ vectors of length $C$ generated by the semantic component base, and $I_{K}$ is the identity matrix matched to $WW^{T}$.
And S7, constructing a prior loss component, wherein the prior loss component points to the degree of deviation of the second feature map set from the prior information.
In a possible embodiment, if the prior information includes the spatial probability distribution result of the optic disc position, the spatial probability distribution result of the optic cup position and the spatial probability distribution result of the fovea center position, the prior loss component may be represented by the mean square value of the distances between positions in the second feature map set and the optic disc position, the optic cup position and the fovea center position given by the prior information.
In another possible embodiment, the prior loss component is calculated based on the formula $L_{mse} = \frac{1}{K}\sum_{k}\|R_{k}-P_{k}\|^{2}$, where $R_{k}$ and $P_{k}$ respectively represent the position of a point in the second feature map set and the position of the corresponding maximum probability point determined from the prior information.
And S9, constructing a first loss function according to the central loss component, the semantic loss component, the orthogonal loss component and the prior loss component.
In particular, the first loss function may be constructed by weighting: $L_{1}=\lambda_{con}L_{con}+\lambda_{SC}L_{SC}+\lambda_{on}L_{on}+\lambda_{mse}L_{mse}$, where $\lambda_{con}$, $\lambda_{SC}$, $\lambda_{on}$ and $\lambda_{mse}$ are the weights.
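Under the same assumptions, the remaining loss components and the weighted first loss function can be sketched as follows; all lambda weights are hyperparameters the embodiment leaves unspecified.

```python
# Semantic, orthogonal and prior loss components, and their weighted sum.
import torch

def semantic_loss(V, R, Wmat):
    """V: (C,H,W) first feature maps; R: (K,H,W) masks; Wmat: (K,C) basis."""
    recon = torch.einsum('khw,kc->chw', R, Wmat)   # basis-weighted reconstruction
    return ((V - recon) ** 2).sum()

def orthogonal_loss(Wmat):
    """L_on = ||W W^T - I_K||_F^2 for the (K, C) semantic basis matrix."""
    K = Wmat.shape[0]
    return ((Wmat @ Wmat.T - torch.eye(K)) ** 2).sum()

def prior_loss(R_pos, P_pos):
    """Mean squared distance between channel positions R_k and the
    maximum-probability points P_k of the prior; both are (K, 2)."""
    return ((R_pos - P_pos) ** 2).sum(dim=1).mean()

def first_loss(l_con, l_sc, l_on, l_mse,
               lam_con=1.0, lam_sc=1.0, lam_on=1.0, lam_mse=1.0):
    return lam_con * l_con + lam_sc * l_sc + lam_on * l_on + lam_mse * l_mse
```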
After the structure of the machine learning model and the first loss function are determined, the machine learning model can be trained by the gradient descent method, and the image segmentation model obtained from the trained machine learning model achieves accurate segmentation of the target image.
In a preferred embodiment, the data set used to train the machine learning model, such as FIRE, may also include labels for the classification of the target object corresponding to each image. Accordingly, a classification learning model can be designed and trained jointly with the machine learning model, so that an image segmentation model and a classification model are obtained together.
In particular, the classification learning model may use the ResNet10 model, which is a deep residual network. The deep residual network introduces a residual network structure with skip connections, through which the network depth can be greatly increased and a better classification effect obtained. The residual network structure borrows the cross-layer link idea of highway networks, but its residual branches adopt identity mapping.
As shown in fig. 10, which is a schematic diagram of the joint learning model obtained by combining the machine learning model and the classification learning model: the joint learning model takes the images in the training set as input; a mask set of segmentation images is output by the self-segmentation network; the mask set is point-multiplied with the images in the training set to obtain the segmentation image set; and the segmentation image set, together with the labels of the corresponding training images, serves as the input of the classification learning model.
The loss component produced by the classification learning model may be characterized by a cross-entropy loss function, which may be expressed as $L_{c} = -\sum_{i} y_{i}\log \hat{y}_{i}$, where $y$ and $\hat{y}$ respectively point to the labels in the training set and the target-object outputs of the classification learning model.
Accordingly, the second loss function of the joint learning model can be constructed based on the first loss function by simply adding the loss component generated by the classification learning model and its weight, that is, $L_{2}=\lambda_{con}L_{con}+\lambda_{SC}L_{SC}+\lambda_{on}L_{on}+\lambda_{mse}L_{mse}+\lambda_{c}L_{c}$, where $\lambda_{c}$ is the weight of the loss generated by the classification learning model. The joint learning model is likewise trained using the gradient descent method, and the self-segmentation network and the classification learning model in the trained joint learning model are taken as the image segmentation model and the classification model respectively.
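A minimal sketch of the second loss function under these assumptions, adding the weighted cross-entropy term of the classification branch to the first loss:

```python
# Joint objective: first loss plus weighted classification cross-entropy.
import torch.nn.functional as F

def second_loss(l1, class_logits, labels, lam_c=1.0):
    return l1 + lam_c * F.cross_entropy(class_logits, labels)
```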
And S105, acquiring a target image to be segmented, wherein the target image comprises eyes.
And S107, inputting the target image into the image segmentation model to obtain a target image segmentation result output by the image segmentation model.
Specifically, the image segmentation model outputs a mask set of the segmented image, and the target segmentation result is obtained by point-multiplying this mask set with the target image.
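A minimal sketch of this inference step, assuming the trained image segmentation model returns a K-channel mask set for a batched input:

```python
# Applying the predicted mask set to the target image by point multiplication.
import torch

def segment(model, target_image):
    """target_image: (1, 3, H, W); returns a (K, 3, H, W) tensor holding the
    target image restricted to each of the K predicted regions."""
    with torch.no_grad():
        masks = model(target_image)               # (1, K, H, W) mask set
    return masks[0].unsqueeze(1) * target_image   # broadcasted point product
```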
In a preferred embodiment, the image segmentation model and the classification model may be obtained by constructing a joint learning model and training the joint learning model, and therefore, after obtaining the target segmentation result, the method may further include:
s109, inputting the target image segmentation result into a classification model to obtain a classification result output by the classification model, wherein the classification model is obtained by training a joint learning model based on the prior information, and the joint learning model comprises the machine learning model and the classification learning model.
The embodiment of the invention discloses an eye image segmentation method based on prior information, which has stronger interpretability and accuracy on a segmentation result of a target image to be segmented by introducing the prior information to train an image segmentation model. Correspondingly, the embodiment of the invention can further train the classification model based on the prior information and classify the segmentation result of the target image to be segmented by using the classification model so as to obtain the classification result with strong interpretability and high accuracy. Different from the prior art, the embodiment of the invention introduces the prior knowledge to train the image segmentation model and the classification model, so that the image segmentation result and the classification result obtained by the embodiment of the invention have interpretability, and the scheme of the embodiment of the invention has wider application prospect.
The embodiment of the invention also discloses a device for segmenting the eye image based on the prior information, which comprises the following components:
a fundus image dataset acquisition module 201 for acquiring a fundus image dataset from which a priori information is calculated.
And the machine learning model training module 203 is used for constructing a machine learning model, and training the machine learning model based on the prior information to obtain an image segmentation model.
The machine learning model includes a feature extractor, a self-segmentation network and a semantic component base, where the feature extractor may use a pre-trained neural network, while the self-segmentation network and the semantic component base are the training objects of the machine learning model. The images in the training set are fed to the feature extractor and the self-segmentation network respectively so as to train the self-segmentation network and the semantic component base. Specifically, the feature extractor performs feature extraction on the images in the training set to obtain a first feature atlas; the self-segmentation network segments the images in the training set to obtain a mask set of segmented images; the mask set of segmented images and the semantic component base are processed through the preset excitation function to obtain a second feature atlas; and the difference between the first feature atlas and the second feature atlas drives the first loss function of the machine learning model. The machine learning model is trained based on the first loss function, and the self-segmentation network in the trained machine learning model serves as the image segmentation model. The segmented image corresponding to an original input image is obtained by point-multiplying the mask set output by the self-segmentation network with the original input image.
A target image obtaining module 205, configured to obtain a target image to be segmented, where the target image includes an eye.
And a segmentation module 207, configured to input the target image into the image segmentation model to obtain a target image segmentation result output by the image segmentation model.
Further, the method can also comprise the following steps:
and a classification module 209, which inputs the target image segmentation result into a classification model to obtain a classification result output by the classification model, wherein the classification model is obtained by training a joint learning model based on the prior information, and the joint learning model comprises the machine learning model and a classification learning model.
The joint learning model takes the images in the training set as input; a mask set of segmentation images is output by the self-segmentation network; the mask set is point-multiplied with the images in the training set to obtain the segmentation image set; and the segmentation image set, together with the labels of the corresponding images in the training set, is used as the input of the classification learning model.
Specifically, the embodiment of the eye image segmentation apparatus and the embodiment of the eye image segmentation method based on prior information are based on the same inventive concept. For details, please refer to the method embodiment; they are not repeated here.
The embodiment of the invention also provides a computer storage medium, and the computer storage medium can store a plurality of instructions. The instructions may be adapted to be loaded by a processor and execute a method for eye image segmentation based on prior information according to an embodiment of the present invention, the method at least including the following steps:
a method of ocular image segmentation based on prior information, the method comprising:
acquiring a fundus image data set, and calculating prior information according to the fundus image data set;
constructing a machine learning model, and training the machine learning model based on the prior information to obtain an image segmentation model;
acquiring a target image to be segmented, wherein the target image comprises eyes;
and inputting the target image into the image segmentation model to obtain a target image segmentation result output by the image segmentation model.
In a preferred embodiment, further comprising:
inputting the target image segmentation result into a classification model to obtain a classification result output by the classification model, wherein the classification model is obtained by training a joint learning model based on the prior information, and the joint learning model comprises the machine learning model and the classification learning model.
In a preferred embodiment, the machine learning model includes three branches, a first branch is used for acquiring images in a training set, inputting the images into a feature extractor, and obtaining a first feature atlas according to the output of the feature extractor under a preset significance constraint condition;
the second branch is used for acquiring images in a training set and inputting the images into a self-segmentation network to obtain a mask set of segmented images;
the third branch is used for outputting a semantic component base matched with the segmentation image set;
and processing the mask set of the segmented image and the semantic component base through a preset excitation function to obtain a second feature map set, wherein a difference value obtained by the first feature map set and the second feature map set points to a first loss function of the machine learning model.
In a preferred embodiment, the method further comprises the step of constructing a first loss function, the constructing the first loss function comprising:
constructing a central loss component that points to a central loss generated in the second feature map set;
constructing a semantic loss component, wherein the semantic loss component points to the loss generated by the difference between the feature images in the first feature image set and the corresponding feature images in the second feature image set;
constructing an orthogonal loss component, wherein the orthogonal loss component points to the degree of orthogonality of the vectors generated by the semantic component base;
constructing a prior loss component, wherein the prior loss component points to the degree of deviation of the second feature map set from the prior information;
and constructing a first loss function according to the central loss component, the semantic loss component, the orthogonal loss component and the prior loss component.
In a preferred embodiment, the method further comprises a step of constructing a second loss function, and training the joint learning model based on the second loss function, wherein the constructing the second loss function comprises:
constructing a loss component generated by a classification learning model;
and constructing a second loss function according to the central loss component, the semantic loss component, the orthogonal loss component, the prior loss component and the loss component generated by the classification learning model.
In a preferred embodiment, said constructing the central loss component comprises:
acquiring feature images corresponding to all channels in the second feature image set;
calculating the centroid position of each feature image;
and calculating the central loss component according to the centroid position of each characteristic image.
In a preferred embodiment, said calculating a priori information from said fundus image dataset comprises:
obtaining a first segmentation image set, a second segmentation image set and a set of fovea center position coordinate points from the fundus image dataset, wherein the first segmentation image set comprises a plurality of optic disc segmentation images, and the second segmentation image set comprises a plurality of optic cup segmentation images;
and calculating the spatial probability distribution result of the optic disc position from the first segmentation image set, calculating the spatial probability distribution result of the optic cup position from the second segmentation image set, and calculating the spatial probability distribution result of the fovea center position from the set of fovea center position coordinate points.
Further, fig. 12 is a schematic hardware structure diagram of a device for implementing the method provided by the embodiment of the present invention; the device may participate in forming or containing the apparatus or system provided by the embodiment of the invention. As shown in fig. 12, the device 10 may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration and does not limit the structure of the electronic device. For example, device 10 may include more or fewer components than shown in fig. 12, or have a different configuration.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the device 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the method described in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the above-mentioned eye image segmentation method based on a priori information. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by the communication provider of the device 10. In one example, the transmission device 106 includes a network adapter (NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the device 10 (or mobile device).
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A method for segmenting an eye image based on prior information is characterized by comprising the following steps:
acquiring a fundus image data set, and calculating prior information according to the fundus image data set;
constructing a machine learning model, and training the machine learning model based on the prior information to obtain an image segmentation model;
acquiring a target image to be segmented, wherein the target image comprises eyes;
and inputting the target image into the image segmentation model to obtain a target image segmentation result output by the image segmentation model.
2. The method of claim 1, further comprising:
inputting the target image segmentation result into a classification model to obtain a classification result output by the classification model, wherein the classification model is obtained by training a joint learning model based on the prior information, and the joint learning model comprises the machine learning model and the classification learning model.
3. The method according to claim 1 or 2, characterized in that:
the machine learning model comprises three branches, wherein the first branch is used for acquiring images in a training set, inputting the images into a feature extractor, and obtaining a first feature atlas according to the output of the feature extractor under the preset significance constraint condition;
the second branch is used for acquiring images in a training set and inputting the images into a self-segmentation network to obtain a mask set of segmented images;
the third branch is used for outputting a semantic component base matched with the segmentation image set;
and processing the mask set of the segmented image and the semantic component base through a preset excitation function to obtain a second feature map set, wherein a difference value obtained by the first feature map set and the second feature map set points to a first loss function of the machine learning model.
4. The method of claim 3, further comprising the step of constructing a first loss function, said constructing a first loss function comprising:
constructing a central loss component that points to a central loss generated in the second feature map set;
constructing a semantic loss component, wherein the semantic loss component points to the loss generated by the difference between the feature images in the first feature image set and the corresponding feature images in the second feature image set;
constructing an orthogonal loss component, wherein the orthogonal loss component points to the degree of orthogonality of the vectors generated by the semantic component base;
constructing a prior loss component, wherein the prior loss component points to the degree of deviation of the second feature map set from the prior information;
and constructing a first loss function according to the central loss component, the semantic loss component, the orthogonal loss component and the prior loss component.
5. The method of claim 4, further comprising the step of constructing a second loss function, training a joint learning model based on the second loss function, the constructing a second loss function comprising:
constructing a loss component generated by a classification learning model;
and constructing a second loss function according to the central loss component, the semantic loss component, the orthogonal loss component, the prior loss component and the loss component generated by the classification learning model.
6. The method of claim 4 or 5, wherein said constructing a central loss component comprises:
acquiring feature images corresponding to all channels in the second feature image set;
calculating the centroid position of each feature image;
and calculating the central loss component according to the centroid position of each characteristic image.
7. The method according to claim 1 or 2, wherein said calculating a priori information from said fundus image dataset comprises:
obtaining a first segmentation image set, a second segmentation image set and a set of fovea center position coordinate points from the fundus image dataset, wherein the first segmentation image set comprises a plurality of optic disc segmentation images, and the second segmentation image set comprises a plurality of optic cup segmentation images;
and calculating the spatial probability distribution result of the optic disc position from the first segmentation image set, calculating the spatial probability distribution result of the optic cup position from the second segmentation image set, and calculating the spatial probability distribution result of the fovea center position from the set of fovea center position coordinate points.
8. An eye image segmentation apparatus based on prior information, the apparatus comprising:
the fundus image data set acquisition module is used for acquiring a fundus image data set and calculating prior information according to the fundus image data set;
the machine learning model training module is used for constructing a machine learning model and training the machine learning model based on the prior information to obtain an image segmentation model;
the target image acquisition module is used for acquiring a target image to be segmented, wherein the target image comprises eyes;
and the segmentation module is used for inputting the target image into the image segmentation model so as to obtain a target image segmentation result output by the image segmentation model.
9. A computer storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of a priori information based segmentation of an eye image according to any one of claims 1 to 7.
10. An a priori information based eye image segmentation apparatus, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, set of codes, or set of instructions, and wherein the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded by the processor and executes a priori information based eye image segmentation method according to any one of claims 1 to 7.
CN201910833947.3A 2019-09-04 2019-09-04 Priori information-based eye image segmentation method, apparatus, device and medium Active CN110599491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910833947.3A CN110599491B (en) 2019-09-04 2019-09-04 Priori information-based eye image segmentation method, apparatus, device and medium


Publications (2)

Publication Number Publication Date
CN110599491A true CN110599491A (en) 2019-12-20
CN110599491B CN110599491B (en) 2024-04-12

Family

ID=68857447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910833947.3A Active CN110599491B (en) 2019-09-04 2019-09-04 Priori information-based eye image segmentation method, apparatus, device and medium

Country Status (1)

Country Link
CN (1) CN110599491B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209854A (en) * 2020-01-06 2020-05-29 苏州科达科技股份有限公司 Method and device for recognizing unbelted driver and passenger and storage medium
CN111598904A (en) * 2020-05-21 2020-08-28 腾讯科技(深圳)有限公司 Image segmentation method, device, equipment and storage medium
CN112052721A (en) * 2020-07-16 2020-12-08 北京邮电大学 Wink oscillogram generation method, device and equipment based on deep learning
CN112287938A (en) * 2020-10-29 2021-01-29 苏州浪潮智能科技有限公司 Text segmentation method, system, device and medium
CN112434576A (en) * 2020-11-12 2021-03-02 合肥的卢深视科技有限公司 Face recognition method and system based on depth camera
CN112598686A (en) * 2021-03-03 2021-04-02 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN113658187A (en) * 2021-07-26 2021-11-16 南方科技大学 Medical image segmentation method and device combined with anatomy prior and storage medium
CN114742987A (en) * 2022-06-08 2022-07-12 苏州市洛肯电子科技有限公司 Automatic positioning control method and system for cutting of non-metallic materials
CN117132777A (en) * 2023-10-26 2023-11-28 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110299034A1 (en) * 2008-07-18 2011-12-08 Doheny Eye Institute Optical coherence tomography- based ophthalmic testing methods, devices and systems
US20120230564A1 (en) * 2009-11-16 2012-09-13 Jiang Liu Obtaining data for automatic glaucoma screening, and screening and diagnostic techniques and systems using the data
US20130004046A1 (en) * 2010-03-19 2013-01-03 Canon Kabushiki Kaisha Image processing apparatus, image processing system, image processing method, and image processing computer program
CN104318565A (en) * 2014-10-24 2015-01-28 中南大学 Interactive method for retinal vessel segmentation based on bidirectional region growing of constant-gradient distance
US20170270653A1 (en) * 2016-03-15 2017-09-21 International Business Machines Corporation Retinal image quality assessment, error identification and automatic quality correction
CN109598733A (en) * 2017-12-31 2019-04-09 南京航空航天大学 Retinal fundus images dividing method based on the full convolutional neural networks of depth
CN108717868A (en) * 2018-04-26 2018-10-30 博众精工科技股份有限公司 Glaucoma eye fundus image screening method based on deep learning and system
CN109145939A (en) * 2018-07-02 2019-01-04 南京师范大学 A kind of binary channels convolutional neural networks semantic segmentation method of Small object sensitivity
CN108921227A (en) * 2018-07-11 2018-11-30 广东技术师范学院 A kind of glaucoma medical image classification method based on capsule theory
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109635862A (en) * 2018-12-05 2019-04-16 合肥奥比斯科技有限公司 Retinopathy of prematurity plus lesion classification method
CN109658423A (en) * 2018-12-07 2019-04-19 中南大学 A kind of optic disk optic cup automatic division method of colour eyeground figure
CN109978893A (en) * 2019-03-26 2019-07-05 腾讯科技(深圳)有限公司 Training method, device, equipment and the storage medium of image, semantic segmentation network
CN109949302A (en) * 2019-03-27 2019-06-28 天津工业大学 Retinal feature Structural Techniques based on pixel
CN109886965A (en) * 2019-04-09 2019-06-14 山东师范大学 The layer of retina dividing method and system that a kind of level set and deep learning combine
CN110189327A (en) * 2019-04-15 2019-08-30 浙江工业大学 Eye ground blood vessel segmentation method based on structuring random forest encoder

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HANUNG ADI NUGROHO; WIDHIA K.Z. OKTOEBERZA; ASTRID ERASARI; AUGUSTINE UTAMI; CERWYN CAHYONO: "Segmentation of optic disc and optic cup in colour fundus images based on morphological reconstruction", 2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), 11 January 2018 (2018-01-11) *
LIU Zhenyu; WANG Miao: "Application of an Improved Region Growing Algorithm in Optic Cup Image Segmentation", Journal of Liaoning University (Natural Science Edition), no. 02, 15 May 2017 (2017-05-15) *
LIANG Liming; HUANG Chaolin; SHI Fei; WU Jian; JIANG Hongjiu; CHEN Xinjian: "Level Set Segmentation of Blood Vessels in Fundus Images Incorporating Shape Priors", Chinese Journal of Computers, no. 07, 25 November 2016 (2016-11-25) *
ZHENG Shan; FAN Huijie; TANG Yandong; WANG Yan: "Cup and Disc Segmentation in Fundus Images Based on a Multiphase Active Contour Model", Journal of Image and Graphics, no. 11, 16 November 2014 (2014-11-16), pages 1-2 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209854A (en) * 2020-01-06 2020-05-29 苏州科达科技股份有限公司 Method and device for recognizing unbelted driver and passenger and storage medium
CN111598904A (en) * 2020-05-21 2020-08-28 腾讯科技(深圳)有限公司 Image segmentation method, device, equipment and storage medium
CN112052721A (en) * 2020-07-16 2020-12-08 北京邮电大学 Wink oscillogram generation method, device and equipment based on deep learning
CN112287938B (en) * 2020-10-29 2022-12-06 苏州浪潮智能科技有限公司 Text segmentation method, system, device and medium
CN112287938A (en) * 2020-10-29 2021-01-29 苏州浪潮智能科技有限公司 Text segmentation method, system, device and medium
CN112434576A (en) * 2020-11-12 2021-03-02 合肥的卢深视科技有限公司 Face recognition method and system based on depth camera
CN112598686A (en) * 2021-03-03 2021-04-02 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
CN113658187A (en) * 2021-07-26 2021-11-16 南方科技大学 Medical image segmentation method and device combined with anatomy prior and storage medium
CN113658187B (en) * 2021-07-26 2024-03-29 南方科技大学 Medical image segmentation method, device and storage medium combined with anatomical priori
CN114742987A (en) * 2022-06-08 2022-07-12 苏州市洛肯电子科技有限公司 Automatic positioning control method and system for cutting of non-metallic materials
CN114742987B (en) * 2022-06-08 2022-09-27 苏州市洛肯电子科技有限公司 Automatic positioning control method and system for cutting of non-metallic materials
CN117132777A (en) * 2023-10-26 2023-11-28 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium
CN117132777B (en) * 2023-10-26 2024-03-22 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110599491B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN110599491B (en) Priori information-based eye image segmentation method, apparatus, device and medium
EP3828765A1 (en) Human body detection method and apparatus, computer device, and storage medium
CN108205655B (en) Key point prediction method and device, electronic equipment and storage medium
CN107358293B (en) Neural network training method and device
CN110473254A (en) A kind of position and orientation estimation method and device based on deep neural network
JP6850399B2 (en) Depth recovery method and equipment for monocular images, computer equipment
CN111581414B (en) Method, device, equipment and storage medium for identifying, classifying and searching clothes
CN110765882B (en) Video tag determination method, device, server and storage medium
CN112149634B (en) Training method, device, equipment and storage medium for image generator
CN110648289B (en) Image noise adding processing method and device
CN114611720B (en) Federal learning model training method, electronic device, and storage medium
CN110866469B (en) Facial five sense organs identification method, device, equipment and medium
US20220392201A1 (en) Image feature matching method and related apparatus, device and storage medium
CN111401318A (en) Action recognition method and device
CN111401193B (en) Method and device for acquiring expression recognition model, and expression recognition method and device
CN111124902A (en) Object operating method and device, computer-readable storage medium and electronic device
EP3888091B1 (en) Machine learning for protein binding sites
CN113240128B (en) Collaborative training method and device for data unbalance, electronic equipment and storage medium
CN111368860A (en) Relocation method and terminal equipment
CN111833391B (en) Image depth information estimation method and device
CN111339969B (en) Human body posture estimation method, device, equipment and storage medium
CN110276283B (en) Picture identification method, target identification model training method and device
CN110472537B (en) Self-adaptive identification method, device, equipment and medium
CN109242892B (en) Method and apparatus for determining the geometric transform relation between image
CN111292365B (en) Method, apparatus, electronic device and computer readable medium for generating depth map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant