CN113298742A - Multi-modal retinal image fusion method and system based on image registration

Multi-modal retinal image fusion method and system based on image registration

Info

Publication number
CN113298742A
CN113298742A (application CN202110554406.4A)
Authority
CN
China
Prior art keywords
feature
point set
image
feature point
source
Prior art date
Legal status
Pending
Application number
CN202110554406.4A
Other languages
Chinese (zh)
Inventor
余洪华
蔡宏民
但婷婷
刘宝怡
肖宇
方莹
Current Assignee
Guangdong General Hospital
Original Assignee
Guangdong General Hospital
Priority date
Filing date
Publication date
Application filed by Guangdong General Hospital filed Critical Guangdong General Hospital
Priority to CN202110554406.4A priority Critical patent/CN113298742A/en
Publication of CN113298742A publication Critical patent/CN113298742A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-modal retinal image fusion method and system based on image registration. The method comprises the following steps: obtaining a retinal image pair; preprocessing the pair to obtain a retinal edge image pair and extracting a feature point set from each retinal edge image; combining a plurality of feature descriptors to construct a multi-feature difference descriptor that guides feature extraction for the feature point sets; updating the spatial transformation of the source feature point set according to a correspondence matrix; performing image registration and obtaining the spatially transformed image with the help of the target image; and fusing the spatially transformed image with a preset reference image to obtain the fused image. The invention overcomes the shortcomings of insufficient registration precision and inaccurate spatial transformation in multi-modal retinal image fusion, and achieves registration with high precision and high stability. On the basis of high-precision image registration, effective multi-modal fundus image fusion is carried out, making it convenient for an ophthalmologist to observe changes in a lesion area.

Description

Multi-modal retinal image fusion method and system based on image registration
Technical Field
The invention relates to the field of image processing, in particular to a multi-modal retinal image fusion method and system based on image registration.
Background
Fundus retinal images are an important basis for diagnosing a variety of retinal diseases, including glaucoma and age-related macular degeneration. In addition, the fundus is the only vascular window of the human body that can be observed directly, and it reflects to some extent the hemodynamic changes of other organs throughout the body. Multi-modal retinal images typically contain important local structural and phase information of the retina. If retinal images of different modalities from the same patient can be fused, complementary information about the lesion can be provided to the doctor, giving a more comprehensive and clearer basis for diagnosis and treatment decisions.
Because the vascular network of the fundus image has a complicated structure, the fusion effect is degraded by low-quality retinal images, including uneven contrast and nonlinear intensity differences, as well as poor overlap caused by non-vascular, texture-free and pathological lesion areas. High-precision retinal image registration is therefore a key step in solving these problems. Existing retinal image registration methods fall broadly into two categories: region-based methods and feature-based methods. Feature-based methods typically include two steps: extracting features and constructing a transformation that estimates the correspondence between image features.
The most classical feature-based registration method is the thin-plate spline robust point matching (TPS-RPM) method, which uses soft assignment and deterministic annealing to estimate correspondences and to control the update of the TPS transformation, respectively. Building on the classical TPS-RPM method, later literature proposed a robust method based on global and local mixture distance (GLMDTPS) for the point set registration problem.
Although these methods have achieved some success in feature matching, point set registration and image registration, problems remain. First, because of the particularities of retinal images (repetitive structure, multi-modality), inaccurate feature extraction and description can produce a large number of false feature matches. Furthermore, methods that use no constraint, or only a single constraint, during the image spatial transformation can make the spatial transformation inaccurate. Both problems affect the robustness and accuracy of multi-modal retinal image fusion based on image registration.
In summary, existing retinal image registration methods generally suffer from two defects: inaccurate feature extraction and description lead to a large number of wrong feature matches, and the constraint scheme leads to an inaccurate control of the image transformation, both of which degrade the robustness and accuracy of multi-modal retinal image fusion. There is therefore a need for a multi-modal retinal image fusion method that can solve these problems.
Disclosure of Invention
Based on the problems in the prior art, the invention provides a multi-modal retinal image fusion method and system based on image registration. The specific scheme is as follows:
a method of multi-modal retinal image fusion based on image registration, comprising:
image input: acquiring a retina image pair comprising a source image and a target image;
image preprocessing: preprocessing the retinal image pair to obtain a retinal edge image pair, and extracting a feature point set from each retinal edge image pair, wherein the feature point set comprises a source feature point set extracted from the preprocessed source image and a target feature point set extracted from the preprocessed target image;
combining and extracting features: combining a plurality of feature descriptors to construct a multi-feature difference descriptor, and guiding feature extraction of the feature point set;
registration of the feature point set: evaluating the correspondence between the source feature point set and the target feature point set through the multi-feature difference descriptor to obtain a correspondence matrix, and updating the spatial transformation of the source feature point set according to the correspondence matrix;
image registration: realizing image registration according to the source feature point sets before and after the spatial transformation, and acquiring the image after the spatial transformation by combining the target image;
image fusion: and carrying out pixel-level fusion on the image after the space transformation and a preset reference image to obtain a fused image.
In a specific embodiment, the plurality of feature descriptors comprise an edge orientation histogram descriptor based on a SIFT-like scale space, a local geometric structure feature descriptor, and a global geometric structure feature descriptor;
the feature combining and extracting specifically comprises:
matching feature point sets on the retina edge image pair under the same scene and different spectrum conditions through the edge orientation histogram based on the SIFT-like scale space to obtain the scale difference of the source feature point set and the target feature point set; acquiring a local feature difference matrix of the source feature point set and the target feature point set through the local geometric structure feature descriptor; acquiring a global feature difference matrix of the source feature point set and the target feature point set through the global geometric structure feature descriptor; and constructing the multi-feature difference descriptor according to the scale difference, the local feature difference matrix and the global feature difference matrix.
In a specific embodiment, "matching, through the edge orientation histogram based on the SIFT-like scale space, the feature point sets on the retinal edge image pair taken of the same scene under different spectral conditions, and obtaining the scale difference between the source feature point set and the target feature point set" specifically includes:
detecting the feature point set based on a SIFT-like scale-space representation; describing the feature point set with the edge orientation histogram descriptor to obtain a feature description of the feature point set, wherein the edge orientation histogram contains spatial information from the contours near each feature point, a window of preset size centered on each feature point is used to describe the shape and contour of the retinal edge image pair, and scale-space invariance is maintained; and matching the feature points of the target feature point set with those of the source feature point set according to the feature description, judging the degree of difference between feature points with the Euclidean distance, to obtain the scale difference. In a specific embodiment, the edge orientation histogram descriptor based on the SIFT-like scale space improves the robustness of the feature description through the scale difference;
the difference in scale is defined as:
$$SD(X, Y) = scl_X - scl_Y, \qquad x < SD < y$$

where SD denotes the scale difference, scl is the scale space in which the feature point lies, X denotes the source feature point set, Y denotes the target feature point set, and x and y are defined as follows:
[expression available only as an image in the original: x and y are bounds derived from $\hat{SD}$, the peak of the SD histogram]
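As a concrete illustration of the Euclidean-distance matching of feature descriptions used in this embodiment, the following NumPy sketch pairs each target descriptor with its nearest source descriptor. It is not taken from the patent: the descriptor arrays are assumed inputs and the ratio-test threshold is an added, hypothetical filter.

```python
import numpy as np

def match_by_euclidean_distance(desc_src: np.ndarray, desc_tgt: np.ndarray,
                                ratio: float = 0.8):
    """Nearest-neighbor matching of feature descriptors by Euclidean distance.

    desc_src: (N, d) descriptors of the source feature points.
    desc_tgt: (M, d) descriptors of the target feature points.
    Returns a list of (target_index, source_index) pairs.
    """
    # Pairwise Euclidean distances between every target and source descriptor.
    dists = np.linalg.norm(desc_tgt[:, None, :] - desc_src[None, :, :], axis=2)
    matches = []
    for m in range(dists.shape[0]):
        order = np.argsort(dists[m])
        best = order[0]
        second = order[1] if dists.shape[1] > 1 else order[0]
        # Ratio test to discard ambiguous matches (an assumption, not specified by the patent).
        if dists[m, best] < ratio * dists[m, second]:
            matches.append((m, int(best)))
    return matches

# Toy usage with random descriptors.
rng = np.random.default_rng(0)
print(match_by_euclidean_distance(rng.normal(size=(5, 8)), rng.normal(size=(6, 8)))[:3])
```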
In one embodiment, assuming that the feature point set T includes J feature points, define $t_{ik}$ as the k-th nearest neighbor of feature point $t_i$; the local geometric structure feature descriptor of the i-th point $t_i$ of the point set T is defined as follows:

$$LGSF(t_i) = \sum_{k=1}^{K} \mu_{ik}\, v_{ik}$$

where K denotes the number of neighboring points, $v_{ik}$ denotes the vector from $t_i$ to $t_{ik}$, and $\mu_{ik}$ is a weight parameter used to control how strongly the neighbor vectors contribute to the local feature description; $\mu_{ik}$ is defined from the Euclidean norm of the neighbor vectors [its expression is available only as an image in the original].

Each element in the local feature difference matrix is defined by a formula [available only as an image in the original] in which $\Psi_{Lmn}$ denotes an element of the local feature difference matrix, m indexes a feature point in the target feature point set, n indexes a feature point in the source feature point set, the comparison is carried out through a Gaussian radial basis function, and Y denotes the target feature point set.
In a specific embodiment, the global feature difference is described by the Euclidean distance, and each element of the global feature difference matrix is defined as:

$$\Psi_{Gmn} = \| y_m - C(\rho_n, \Theta_n) \|$$

where $\Psi_{Gmn}$ denotes an element of the global feature difference matrix, m indexes a feature point in the target feature point set, n indexes a feature point in the source feature point set, and $C(\rho_n, \Theta_n)$ denotes the centroid of the spatially transformed Gaussian mixture model.
In one embodiment, the multi-feature difference descriptor is defined as follows:
$$MLF(T) = \Psi_G + SD(X, Y) + T_1 \Psi_L$$

where $T_1$ is an annealing parameter of the local feature descriptor, $\Psi_G$ denotes the global feature difference matrix, $\Psi_L$ denotes the local feature difference matrix, $SD(X, Y)$ denotes the scale difference, and $MLF(T)$ denotes the multi-feature difference descriptor.
In a specific embodiment, the feature point set registration specifically includes:
constructing a probability model on the source characteristic point set, and calculating a corresponding relation matrix between the source characteristic point set and the target characteristic point set through the multi-characteristic difference descriptor; based on the corresponding relation matrix, adjusting parameters of the probability model, and updating the spatial variation of the source feature point set based on global geometric structure constraint and local geometric structure constraint; and iterating the steps to enable the source characteristic point set to gradually approach the target characteristic point set until a preset matching relation is met.
In a specific embodiment, "constructing a probability model on the source feature point set, and calculating a correspondence matrix between the source feature point set and the target feature point set through the multi-feature difference descriptor" specifically includes:
constructing a Gaussian mixture model on the source feature point set; evaluating a correspondence by measuring the multi-feature difference descriptor between the source feature point set and the target feature point set by the Gaussian mixture model; converting the evaluation problem of the corresponding relation into the probability density of the Gaussian mixture model, and solving the probability density by using an approximate solution; based on the probability density, calculating the posterior probability of the Gaussian mixture model according to a Bayes rule to estimate the corresponding relation, and acquiring the corresponding relation matrix; and acquiring an updated source characteristic point set according to the corresponding relation matrix and the source characteristic point set before updating.
In a specific embodiment, "adjusting parameters of the probability model based on the correspondence matrix, and updating the spatial variation based on the global geometry constraint and the local geometry constraint" specifically includes:
updating spatial variation by adjusting model parameters of the Gaussian mixture model based on the corresponding relation matrix to obtain an optimal solution of the model parameters; adding the global geometric structure constraint, and maintaining the global stability of the source feature point set during space change; and adding the local geometric structure constraint based on the local geometric feature descriptor, and constraining the local deformation of the source feature point set by judging the local similarity between the feature point sets before and after the spatial transformation.
In a particular embodiment, the model parameters comprise a first optimization parameter $\sigma^2$ and a second optimization parameter $\Theta$; the process of obtaining the optimal solution of the model parameters comprises the following steps:
obtaining an optimal solution for the model parameters by minimizing a negative log-likelihood function of the probability density; solving a negative log-likelihood function of the probability density using an expectation-maximization algorithm, the expectation-maximization algorithm comprising calculating the posterior probability and a maximization expectation; according to the posterior probability, taking a negative log-likelihood function of the probability density as an energy function expected by the maximization; adding a global constraint term into the energy function based on the global geometric structure constraint to obtain a first constraint function; and adding a local constraint term into the first constraint function based on the local geometric structure constraint so as to obtain an optimal solution of the parameters of the Gaussian mixture model.
In one embodiment, the expression of the probability density is:
$$PDF(y_m) = \sum_{n=1}^{N+1} P(n)\, PDF(y_m \mid \rho_n)$$

where $PDF(y_m)$ denotes the probability density function of the Gaussian mixture model and $PDF(y_m \mid \rho_n)$ denotes the probability density of data point $y_m$ under the Gaussian component $\rho_n$. The mixture contains N+1 components: m indexes a feature point in the target feature point set, n a feature point in the source feature point set, M denotes the total number of feature points in the target set and N the total number in the source set; each Gaussian component $\rho_n$ corresponds to $x_n$ in the source feature point set X, with $P(n) = 1/a$, and an additional (N+1)-th component is used to eliminate the effect of redundant points, constrained by the parameter ω (0 < ω < 1);

the probability density of data point $y_m$ under a Gaussian component $\rho_n$ is as follows:

$$PDF(y_m \mid \rho_n) = \frac{1}{(2\pi\sigma^2)^{D/2}} \exp\!\left(-\frac{MLF(T)_{mn}}{2\sigma^2}\right)$$

where $MLF(T)$ denotes the multi-feature difference descriptor and $\sigma^2$ denotes the first optimization parameter; the correspondence matrix expression is as follows:

$$P_{old}(\rho_n \mid y_m) = \frac{P(n)\, PDF(y_m \mid \rho_n)}{\sum_{k=1}^{N+1} P(k)\, PDF(y_m \mid \rho_k)}$$

where $P_{old}(\rho_n \mid y_m)$ denotes the correspondence (posterior probability) of data point $y_m$ under Gaussian component $\rho_n$, n indexes the feature points in the source feature point set, and N denotes the total number of feature points in the source feature point set.
In a specific embodiment, the negative log-likelihood function expression of the probability density is:
$$E = -\sum_{m=1}^{M} \log \sum_{n=1}^{N+1} P(n)\, PDF(y_m \mid \rho_n)$$

where E denotes the negative log-likelihood function of the probability density, $\rho_n$ is a Gaussian component, $y_m$ a data point, $C(\rho_n, \Theta_n)$ denotes the centroid of the spatially transformed Gaussian mixture model, and $P(\cdot)$ denotes the correspondence;
taking a negative log-likelihood function of the probability density as the energy function of the maximization expectation, and expanding as follows:
$$Q(\Theta, \sigma^2) = \frac{1}{2\sigma^2}\sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)\,\big\| y_m - C(\rho_n, \Theta_n)\big\|^2 + \frac{N_P D}{2}\log \sigma^2, \qquad N_P = \sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)$$

where $P_{old}(\rho_n \mid y_m)$ denotes the correspondence matrix of data point $y_m$ under Gaussian component $\rho_n$, D is the dimension of the feature points, and $\Theta$ denotes the second optimization parameter. The spatial transformation of the source feature point set X is defined based on a Gaussian radial basis function:
$$C(X, W) = X + GW$$

where $C(X, W)$ denotes the spatial transformation of the source feature point set, W is an N×D weight matrix of the Gaussian kernel, and G is an N×N Gaussian kernel matrix obtained from the Gaussian radial basis function; an element of this matrix is expressed as:
$$g_{nm} = \exp\!\left(-\frac{\|x_n - x_m\|^2}{2\beta^2}\right)$$

where $g_{nm}$ is an element of the Gaussian kernel matrix, n and m index the position of the element in the matrix, x denotes a feature point in the source feature point set, and β denotes the width of the Gaussian radial basis function; W is an N×D weight matrix of the Gaussian kernel, the second optimization parameter $\Theta$ is converted into W, and the expression of the energy function is as follows:
$$Q(W, \sigma^2) = \frac{1}{2\sigma^2}\sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)\,\big\| y_m - (x_n + G(n,\cdot)\, W)\big\|^2 + \frac{N_P D}{2}\log \sigma^2$$

where $Q(W,\sigma^2)$ denotes the energy function and $P_{old}(\rho_n \mid y_m)$ denotes the correspondence matrix of data point $y_m$ under Gaussian component $\rho_n$.
In a specific embodiment, the global geometric structure constraint comprises adding a global geometric constraint term to the energy function, where the global geometric constraint term is a regularization operator whose expression is:

$$R = \frac{\lambda}{2}\,\mathrm{Trace}\big(W^{T} G W\big)$$

where the constant λ is a weight parameter that controls the global constraint strength, R denotes the regularization operator applied to the spatial transformation of the source feature point set, Trace(·) denotes the trace of a matrix, W is the N×D weight matrix of the Gaussian kernel, and G is the N×N Gaussian kernel matrix obtained from the Gaussian radial basis function;

adding the global geometric constraint term to the energy function gives a first energy function, whose expression is:

$$Q_G(W,\sigma^2) = Q(W,\sigma^2) + \frac{\lambda}{2}\,\mathrm{Trace}\big(W^{T} G W\big)$$

where $Q_G(W,\sigma^2)$ denotes the first energy function, $Q(W,\sigma^2)$ the energy function, and the constant λ is the weight parameter controlling the global constraint strength.
In a specific embodiment, the local geometry constraint term expression is as follows:
[the local geometric structure constraint term is available only as an image in the original; it compares the LGSF descriptors of the source feature set before and after the spatial transformation]

where η is a weight parameter controlling the local constraint strength; the local deformation of the source feature set X under the spatial transformation is constrained by judging the local similarity of X before and after the transformation. η is defined by a deterministic annealing technique [its formula is available only as an image in the original], where m is the maximum value of the preset weight parameter η, t is the current iteration number, and a constant c controls how quickly the constraint strength changes.
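The patent does not reproduce the annealing formula in text form; the sketch below is therefore only an illustrative deterministic-annealing schedule under the assumption of exponential decay, with the values of m, c and the iteration range chosen arbitrarily.

```python
import math

def local_constraint_weight(t: int, m: float = 10.0, c: float = 5.0) -> float:
    """Illustrative annealing schedule (assumed exponential decay, not the
    patent's exact formula): eta starts at its maximum m and weakens as the
    iteration number t grows, at a rate controlled by c."""
    return m * math.exp(-t / c)

# Example: eta over the first few iterations of the registration loop.
print([round(local_constraint_weight(t), 3) for t in range(0, 25, 5)])
```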
In a specific embodiment, a local geometric constraint term is added to the first energy function expression to obtain a second energy function, where the expression is:
$$Q_{GL}(W,\sigma^2) = Q_G(W,\sigma^2) + \eta\,\mathrm{Trace}\big(D^{T} D\big)$$
this results in a new formula:
[the expanded form of $Q_{GL}$ and the partial derivatives of each of its terms are available only as images in the original]; setting the partial derivatives to zero, the optimal solution of the model parameters is:
$$W = \big(G P_X G + 2\eta\sigma^2 G^{T} U^{T} U G\big)^{-1}\big(G P_0 Y - G P_X X\big)$$

$$\sigma^2 = \frac{1}{N_P D}\Big[\mathrm{Trace}\big(Y^{T} P_Y Y\big) - 2\,\mathrm{Trace}\big((P_0 Y)^{T}(X+GW)\big) + \mathrm{Trace}\big((X+GW)^{T} P_X (X+GW)\big)\Big]$$
where $P_X$ is an N×N matrix and $P_Y$ an M×M matrix; $P_X$ and $P_Y$ are diagonal matrices formed from the column vectors $P_0\mathbf{1}$ and $P_0^{T}\mathbf{1}$, respectively, and $\mathbf{1}$ is a column vector whose elements are all 1.
In a specific embodiment, "implementing image registration according to source feature point sets before and after spatial transformation, and acquiring a spatially transformed image in combination with the target image" specifically includes:
acquiring the spatially transformed source feature point set according to the correspondence matrix; constructing a control point set from the source feature point sets before and after the spatial transformation, thereby realizing image registration; and, based on a backward (inverse) mapping method, deducing the transformed image from the spatial transformation, taking the target image and the control point set as control points.
A multi-modal retinal image fusion system based on image registration, comprising:
an image input unit: for obtaining a retinal image pair comprising a source image and a target image;
an image preprocessing unit: for preprocessing the retinal image pair to obtain a retinal edge image pair, and extracting a feature point set from each retinal edge image pair, wherein the feature point set comprises a source feature point set extracted from the preprocessed source image and a target feature point set extracted from the preprocessed target image;
a feature combining and extracting unit: for combining a plurality of feature descriptors to construct a multi-feature difference descriptor and guiding feature extraction of the feature point set;
a feature point set registration unit: for evaluating the correspondence between the source feature point set and the target feature point set through the multi-feature difference descriptor, acquiring a correspondence matrix, and updating the spatial transformation of the source feature point set according to the correspondence matrix;
an image registration unit: for realizing image registration according to the source feature point sets before and after the spatial transformation, and obtaining the spatially transformed image in combination with the target image;
an image fusion unit: for performing pixel-level fusion of the spatially transformed image with a preset reference image to obtain the fused image.
In a particular embodiment, the feature point set registration unit particularly comprises,
a model construction unit: the system is used for constructing a Gaussian mixture model on the source feature point set, and calculating a corresponding relation matrix between the source feature point set and the target feature point set through the multi-feature difference descriptor;
a parameter adjusting unit: for adjusting the parameters of the Gaussian mixture model based on the correspondence matrix, and updating the spatial transformation of the source feature point set based on the global geometric structure constraint and the local geometric structure constraint;
an iteration unit: and the parameter adjusting unit is used for iterating the model building unit and the parameter adjusting unit to enable the source characteristic point set to gradually approach the target characteristic point set until a preset matching relation is met.
In a specific embodiment, the plurality of feature descriptors comprises an edge orientation histogram (EOH-SIFT) descriptor based on a SIFT-like scale space, a local geometric structure feature descriptor, and a global geometric structure feature descriptor;
the feature combining and extracting unit specifically includes:
a scale difference unit: for matching the feature point sets on the retinal edge image pair, taken of the same scene under different spectral conditions, through the edge orientation histogram based on the SIFT-like scale space, and obtaining the scale difference between the source feature point set and the target feature point set;
a local feature unit: for acquiring the local feature difference matrix of the source feature point set and the target feature point set through the local geometric structure feature descriptor;
a global feature unit: for acquiring the global feature difference matrix of the source feature point set and the target feature point set through the global geometric structure feature descriptor;
a difference construction unit: for constructing the multi-feature difference descriptor from the scale difference, the local feature difference matrix and the global feature difference matrix.
The invention has the following beneficial effects:
the invention provides a multi-mode retinal image fusion method and system based on image registration, aiming at the defects of insufficient matching precision and inaccurate spatial transformation in retinal image registration in the prior art. Based on the particularity of the retina image, a feature-based registration method is adopted to estimate the corresponding relation between image features, and accurate feature extraction and feature description are realized. In the image space transformation process, a constraint method combining local constraint and global constraint is adopted, so that the stability and the accuracy in the image transformation process are ensured. The method of the invention enables the best registration performance to be obtained and is in most cases superior to the most advanced methods at present. On the basis of high-precision image registration, effective multi-mode fundus image fusion is carried out, so that an ophthalmologist can conveniently observe the change of a pathological change area, and the diagnosis and treatment of relevant retinal diseases are further assisted.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flowchart of a multimodal retinal image fusion method according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of input data in embodiment 1 of the present invention;
FIG. 3 is a schematic view of a retinal image fusion process in example 1 of the present invention;
FIG. 4 is a schematic view of a retinal image fusion process in embodiment 1 of the present invention;
FIG. 5 is a schematic view of a retinal image fusion process in example 1 of the present invention;
FIG. 6 is a schematic diagram of a multimodal retinal image fusion method according to embodiment 1 of the present invention;
fig. 7 is a schematic diagram of a multimodal retinal image fusion system according to embodiment 2 of the present invention.
Reference numerals:
1-an image input unit; 2-an image pre-processing unit; 3-a feature combining and extracting unit; 4-a feature point set registration unit; 5-an image registration unit; 6-an image fusion unit; 31-a scale difference unit; 32-local feature cells; 33-global feature cells; 34-a difference building unit; 41-a model building unit; 42-a parameter adjustment unit; 43-iteration unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The multi-modal retinal image fusion method based on image registration specifically comprises the following steps: first, guided image filtering is used to enhance the edges of different types of retinal images. Second, edge maps and geometric features of the retinal image are combined using multi-feature descriptors to improve the description of the feature set and to exclude redundant points. The multi-feature guided model then provides accurate guidance for feature set registration, and feature point-based image registration ultimately provides accurate image registration. Finally, a multimodal retinal fundus image fusion is performed based on accurate image registration.
Example 1
The embodiment provides a multi-modal retinal image fusion method based on image registration, which is shown in the attached figures 1-6 of the specification. The process steps are as shown in the attached figure 1 of the specification, and the specific scheme is as follows:
s1, image input: acquiring a retina image pair comprising a source image and a target image;
s2, image preprocessing: preprocessing the retina image pair to obtain a retina edge image pair, and extracting a feature point set from each pair of edge image pairs, wherein the feature point set comprises a source feature point set extracted from a preprocessed source image and a target feature point set extracted from a preprocessed target image;
s3, feature combination and extraction: combining a plurality of feature descriptors to construct a multi-feature difference descriptor, and guiding feature extraction of the feature point set;
s4, feature point set registration: evaluating the corresponding relation between the source characteristic point set and the target characteristic point set through the multi-characteristic difference descriptor to obtain a corresponding relation matrix, and updating the spatial transformation of the source characteristic point set according to the corresponding relation matrix;
s5, image registration: realizing image registration according to the source feature point sets before and after the spatial transformation, and acquiring an image after the spatial transformation by combining a target image;
s6, image fusion: and carrying out pixel-level fusion on the image after the space transformation and a preset reference image to obtain a fused image.
Specifically, S1, retinal image pairs are acquired, each retinal image pair including a source image and a target image. The retinal image pair is shown in figure 2 of the specification.
S2, image preprocessing: edge processing is performed on the retinal image pair to obtain the retinal edge image pair, and feature point sets are extracted from the retinal edge image pair, comprising a source feature point set and a target feature point set. Specifically, the retinal image pair is subjected to edge processing to acquire the retinal edge image pair, and guided image filtering is used to enhance the edges of the different types of retinal images. The retinal image pair comprises a source image and a target image, so the edge image pair likewise comprises a source edge image and a target edge image. Feature points extracted from the source edge image form the source feature point set, and feature points extracted from the target edge image form the target feature point set. Let the source feature point set be $X = \{x_n\}_{n=1}^{N}$ and the target feature point set be $Y = \{y_m\}_{m=1}^{M}$. This embodiment completes image registration indirectly by registering the two feature point sets.
There are three main reasons and advantages for using edge maps in retinal image registration. 1. Edges are typically distributed throughout the image. 2. The edge information of the retinal image is stable and easy to retrieve. 3. The gradient direction of the retinal image intensity is ignored in the edge map; the edge map retains only the gradient magnitude of the image, because of its invariance across the multi-modal images. Unique and highly repeatable features are reliably extracted by employing the edge-driven EOH-SIFT feature transform algorithm to enhance contrast and eliminate noise from the image.
The image preprocessing steps are as follows: 1. The intensity histogram is equalized to a Gaussian distribution with mean $m_o = 124$ and variance $s_o = 58$, and the resulting image is denoised using guided filtering. 2. A Sobel filter is used to compute contrast and enhance the edge response of the image. 3. The contrast of the edges is enhanced again using contrast-limited adaptive histogram equalization. Preprocessing the retinal images in this way enhances contrast and suppresses noise, which enables a large number of true matches to be produced.
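A minimal preprocessing sketch along these lines is shown below; it is an illustration only, not the patent's exact implementation. It maps the intensity histogram toward a Gaussian target by quantile mapping, applies a Sobel edge response, and finishes with CLAHE. The guided-filter denoising step uses OpenCV's ximgproc guided filter, which requires opencv-contrib-python; the radius, eps, clipLimit and tile-size values are assumptions, and the stated "variance 58" is treated here as the variance of the target Gaussian.

```python
import cv2
import numpy as np
from scipy.stats import norm

def preprocess_retina(gray: np.ndarray) -> np.ndarray:
    """Rough preprocessing sketch: Gaussian histogram specification,
    guided-filter denoising, Sobel edge response, then CLAHE."""
    # 1. Histogram specification toward N(124, 58) via rank/quantile mapping.
    ranks = np.argsort(np.argsort(gray.ravel()))
    quantiles = (ranks + 0.5) / ranks.size
    matched = norm.ppf(quantiles, loc=124, scale=np.sqrt(58)).reshape(gray.shape)
    matched = np.clip(matched, 0, 255).astype(np.uint8)

    # Guided-filter denoising (needs opencv-contrib; radius and eps are assumed values).
    denoised = cv2.ximgproc.guidedFilter(matched, matched, 4, 50.0)

    # 2. Sobel gradient magnitude as the edge response.
    gx = cv2.Sobel(denoised, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(denoised, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

    # 3. Contrast-limited adaptive histogram equalization on the edge map.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(edges)

# Toy usage on a synthetic grayscale image.
demo = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
print(preprocess_retina(demo).shape)
```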
S3, feature combination and extraction: the three feature descriptors are combined into a multi-feature difference descriptor (MLF) that guides feature extraction. Edge maps and geometric features of the retinal images are combined through the multi-feature descriptor to improve the description of the feature sets and to exclude redundant points. The three feature descriptors comprise an edge orientation histogram (EOH-SIFT) descriptor based on a SIFT-like scale space, a global geometric structure feature, and a local geometric structure feature (LGSF). Combining the three feature descriptors into the multi-feature difference descriptor makes them complementary, which strengthens the descriptor's performance and improves the distinctiveness of each feature point.
Compared with SIFT, the EOH-SIFT descriptor uses the scale difference (SD) to improve the robustness of the feature description by discarding feature points that carry no information in sub-regions of the edge image, i.e. feature points in sub-regions containing only a small number of contours. Given a pair of feature sets X and Y, SD is defined as follows:

$$SD(X, Y) = scl_X - scl_Y, \qquad x < SD < y$$

where SD denotes the scale difference, scl is the scale space in which the feature point lies, X denotes the source feature point set and Y the target feature point set. Approximate values of x and y are obtained in two steps: (1) computing the histogram of all matched SDs; (2) extracting the peak $\hat{SD}$ of the SD histogram. The bounds x and y are then defined from $\hat{SD}$ [their defining expression is available only as an image in the original].
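The following NumPy sketch illustrates, under assumed inputs, how the scale differences of tentative matches could be histogrammed and the peak extracted; the variable names, the bin count and the symmetric acceptance window are not from the patent.

```python
import numpy as np

def scale_difference_bounds(scl_src: np.ndarray, scl_tgt: np.ndarray,
                            half_width: float = 1.0):
    """scl_src / scl_tgt: scale-space levels of tentatively matched source and
    target feature points (same length). Returns per-match SD values and an
    assumed acceptance interval (x, y) centered on the histogram peak."""
    sd = scl_src - scl_tgt                       # SD(X, Y) = scl_X - scl_Y per match
    hist, edges = np.histogram(sd, bins=20)      # histogram of all matched SDs
    peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    # The patent derives bounds x < SD < y from the peak; a symmetric window
    # around the peak is assumed here purely for illustration.
    return sd, (peak - half_width, peak + half_width)

rng = np.random.default_rng(1)
sd, (x, y) = scale_difference_bounds(rng.integers(0, 5, 100).astype(float),
                                     rng.integers(0, 5, 100).astype(float))
print(x, y, float(np.mean((sd > x) & (sd < y))))
```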
further, the local geometric feature descriptor (LGSF) refers to: assuming that the feature point set T contains J feature points, defining TikIs a characteristic point tiSet the ith point T of the point set TiThe local feature descriptor of (a) is defined as follows:
Figure RE-GDA0003135273850000121
where K denotes the number of neighboring points, vikRepresents from tiTo tikVector of (a), muikIs a weight parameter used for controlling the description strength of the local features by the adjacent point vectors. Since the vector features contain the spatial information of euclidean distance and direction, the present embodiment uses the sum vector to fully describe tiLocal characteristics of (1).
The performance of local geometry feature descriptors (LGSFs) is mainly affected by two factors: the number of neighboring points K and the weighting parameter μik. Due to a feature point tiThe local feature of (a) is mainly affected by its surrounding neighboring points, so the value of K should take into account the neighboring feature points around the point that may be affected. For example, in the case of two-dimensional point set registration, at least four directions (up, down, left, right) should be considered for better performance of the LGSF, i.e., the value of K should take a value greater than 4. The local geometry feature descriptor (LGSF) can be applied to any point set registration greater than or equal to two dimensions, and thus the value of K should depend on the dimensions of the particular application. In this embodiment, the LGSF is used to complete the retinal image registration, i.e. applied to the two-dimensional case, so the value of K is 5.
Vector vikToo long of (a) will have a large negative impact on the description performance of the local geometry feature descriptor (LGSF), so this implementation is very usefulExample uses Euclidean norm to define the weight mu of each vectorikAnd the following conditions are satisfied: the shorter the vector length, the greater the impact. Mu.sikIs defined as follows:
Figure RE-GDA0003135273850000122
the core idea of LGSF is that the local features of a feature point can be represented by the sum of vectors between the feature point and a specified number of neighboring feature points, and the contribution of these vectors to the LGSF descriptor is affected by its own length. Feature difference matrix Ψ corresponding to local feature descriptorLEach element of (a) can be obtained based on the following formula:
Figure RE-GDA0003135273850000123
therein, ΨLmnRepresenting a certain element in a feature difference matrix corresponding to the local feature descriptor, X representing a source feature point set, Y representing a target feature point set, m representing a feature point in the target feature point set, n representing a feature point in the source target point set,
Figure RE-GDA0003135273850000124
representing a gaussian radial basis function.
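A small NumPy sketch of such a neighbor-vector descriptor is given below. It is only an illustration: the weighting scheme (inverse-distance weights normalized to sum to one) is an assumption, since the patent shows the exact μ_{ik} formula only as an image.

```python
import numpy as np

def lgsf(points: np.ndarray, K: int = 5) -> np.ndarray:
    """Local geometric structure feature: for each point, the weighted sum of
    vectors to its K nearest neighbors (weights assumed inverse to length)."""
    J = points.shape[0]
    diffs = points[None, :, :] - points[:, None, :]          # (J, J, D) vectors t_i -> t_j
    dists = np.linalg.norm(diffs, axis=2)
    descriptors = np.zeros_like(points, dtype=float)
    for i in range(J):
        neighbors = np.argsort(dists[i])[1:K + 1]             # skip the point itself
        v = diffs[i, neighbors]                                # vectors t_i -> t_ik
        w = 1.0 / (np.linalg.norm(v, axis=1) + 1e-12)          # shorter vector, larger weight
        w /= w.sum()
        descriptors[i] = (w[:, None] * v).sum(axis=0)          # LGSF(t_i) = sum_k mu_ik * v_ik
    return descriptors

pts = np.random.default_rng(2).normal(size=(30, 2))
print(lgsf(pts).shape)  # (30, 2)
```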
Similarly, each element of the feature difference matrix $\Psi_G$ corresponding to the global feature descriptor can be obtained from the following formula:

$$\Psi_{Gmn} = \| y_m - C(\rho_n, \Theta_n) \|$$

where $\Psi_{Gmn}$ denotes an element of the feature difference matrix corresponding to the global feature descriptor, and $C(\rho_n, \Theta_n)$ is the centroid of the spatially transformed Gaussian mixture model, detailed in S4.
In this embodiment, three feature descriptors are combined into a multiple feature difference descriptor (MLF), the expression of which is:
$$MLF(T) = \Psi_G + SD(X, Y) + T_1 \Psi_L$$

where $T_1$ is an annealing parameter of the local feature descriptor, $\Psi_G$ denotes the feature difference matrix corresponding to the global feature descriptor, $\Psi_L$ the feature difference matrix corresponding to the local feature descriptor, and $SD(X, Y)$ the scale difference.
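A sketch of how the three difference terms could be combined into one guidance matrix is shown below; it assumes the global and local difference matrices and the scale-difference term have already been computed and broadcast to a common M×N shape, and the annealing parameter value is arbitrary.

```python
import numpy as np

def mlf_difference(psi_global: np.ndarray, psi_local: np.ndarray,
                   scale_diff: np.ndarray, t1: float = 0.5) -> np.ndarray:
    """Multi-feature difference: MLF = Psi_G + SD + T1 * Psi_L.
    All inputs are (M, N) matrices comparing target point m with source point n;
    t1 is the annealing parameter of the local term (value assumed)."""
    return psi_global + scale_diff + t1 * psi_local

M, N = 4, 3
rng = np.random.default_rng(3)
print(mlf_difference(rng.random((M, N)), rng.random((M, N)), rng.random((M, N))).shape)
```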
After two feature point sets are extracted from a pair of retinal images, a large number of redundant points generally exist. To solve the registration problems caused by redundant points, this embodiment provides a robust feature point set registration model guided by the multi-feature descriptor. Two feature point sets are given: the source feature point set $X = \{x_n\}_{n=1}^{N}$ extracted from the image to be registered, and the target feature point set $Y = \{y_m\}_{m=1}^{M}$ extracted from the target image. The feature point set registration model comprises two main steps: (i) correspondence evaluation: at each iteration, X and Y are described with the MLF proposed above in order to evaluate the correspondence between them; (ii) spatial transformation update: a non-rigid transformation of X is constructed from the correspondence evaluated in (i) to update the positions of the source point set. X is made to approach the target Y progressively by iterating steps (i) and (ii), so that corresponding feature points between the two point sets are matched.
S4, feature point set registration: and constructing a Gaussian mixture model on the source characteristic point set, evaluating the corresponding relation between the source characteristic point set and the target characteristic point set through a multi-characteristic difference descriptor, acquiring a corresponding relation matrix, and updating the spatial transformation of the source characteristic point set according to the corresponding relation matrix.
Specifically, the feature point set registration specifically includes: constructing a Gaussian mixture model on the source characteristic point set, and calculating a corresponding relation matrix between the source characteristic point set and the target characteristic point set through a multi-characteristic difference descriptor; adjusting parameters of the Gaussian mixture model based on the corresponding relation matrix, and updating space change based on global geometric structure constraint and local geometric structure constraint; and iterating the steps to enable the feature point set to gradually approach the target feature point set until a preset matching relation is met.
And describing the corresponding relation between the source characteristic point set and the target characteristic point set through the multi-characteristic difference descriptor during each iteration. After each spatial transformation, the source feature point set changes and gradually approaches the target feature point set. The present embodiment describes the correspondence between the changed source feature point set and target feature point set by MLF.
A Gaussian mixture model (GMM) quantizes an object with Gaussian probability density functions (normal distribution curves) and decomposes the object into a number of components, each modelled by such a function. The principle of building a Gaussian model for an image background is as follows: the grey-level histogram of an image reflects the frequency with which each grey level occurs and can be taken as an estimate of the probability density of the image grey levels. If the target region and the background region contained in the image differ substantially, and their grey levels are somewhat distinct, the grey-level histogram exhibits two peaks separated by a valley, one peak corresponding to the target and the other to the central grey level of the background.
The correspondence evaluation means that the correspondence is first estimated by measuring the MLF descriptors between the two feature point sets with a Gaussian mixture model (GMM); the correspondence evaluation problem is then converted into the probability density of the GMM and solved with an approximate solution. The MLF descriptor between a source feature point $x_n$ and a target feature point $y_m$ is treated as relating the centroid of the n-th Gaussian component to the m-th data point, so the resulting GMM probability density function is:

$$PDF(y_m) = \sum_{n=1}^{N+1} P(n)\, PDF(y_m \mid \rho_n)$$

where $PDF(y_m)$ denotes the probability density function of the Gaussian mixture model and $PDF(y_m \mid \rho_n)$ denotes the probability density of data point $y_m$ under the Gaussian component $\rho_n$. The mixture contains N+1 components: m indexes a feature point in the target feature point set, n a feature point in the source feature point set, M denotes the total number of feature points in the target set and N the total number in the source set; each Gaussian component $\rho_n$ corresponds to $x_n$ in the source feature point set X, with $P(n) = 1/a$, and an additional (N+1)-th component is used to eliminate the effect of redundant points, constrained by the parameter ω (0 < ω < 1).

The probability density of data point $y_m$ under a Gaussian component $\rho_n$ is as follows:

$$PDF(y_m \mid \rho_n) = \frac{1}{(2\pi\sigma^2)^{D/2}} \exp\!\left(-\frac{MLF(T)_{mn}}{2\sigma^2}\right)$$

where $MLF(T)$ denotes the multi-feature difference descriptor and $\sigma^2$ denotes the first optimization parameter.

After the MLF-guided PDF of the GMM has been obtained, the correspondence is estimated by calculating the posterior probability of the GMM according to the Bayes rule, with the GMM parameters taking the values of the previous iteration; the correspondence matrix expression is as follows:

$$P_{old}(\rho_n \mid y_m) = \frac{P(n)\, PDF(y_m \mid \rho_n)}{\sum_{k=1}^{N+1} P(k)\, PDF(y_m \mid \rho_k)}$$

This finally yields the one-to-many fuzzy correspondence $P_{old}$ of the MLF descriptor, recorded as a matrix $P_0$ of size N×M. The corresponding target point set $\hat{Y}$ is obtained at the same time from $P_0$ and Y [its expression is available only as an image in the original], where $\hat{Y}$ denotes the corresponding (updated) target feature point set and Y the target feature point set before the update. After the correspondence matrix between the two feature point sets has been obtained, the spatial transformation of the source feature point set X is updated on its basis.
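The E-step described above can be sketched as follows. This is an illustrative implementation assuming the standard mixture form with a uniform outlier component; the outlier weight, its constant density, and the use of the MLF matrix as the difference inside the exponential are assumptions consistent with the text but not literal reproductions of the patent's formulas.

```python
import numpy as np

def estimate_correspondence(mlf: np.ndarray, sigma2: float, omega: float = 0.1,
                            outlier_density: float = 1e-3) -> np.ndarray:
    """E-step sketch: posterior (correspondence) matrix from the MLF difference matrix.

    mlf:    (M, N) multi-feature differences between target point m and source point n.
    sigma2: current GMM variance (first optimization parameter).
    Returns P0 with shape (N, M): P0[n, m] = P_old(rho_n | y_m).
    """
    M, N = mlf.shape
    D = 2                                               # retinal feature points are 2-D
    gauss = np.exp(-mlf / (2.0 * sigma2)) / ((2.0 * np.pi * sigma2) ** (D / 2))
    numerator = (1.0 - omega) / N * gauss               # contribution of each Gaussian component
    denominator = numerator.sum(axis=1, keepdims=True) + omega * outlier_density
    posterior = numerator / denominator                 # (M, N) posterior responsibilities
    return posterior.T                                  # N x M, as in the patent's P0

P0 = estimate_correspondence(np.random.default_rng(4).random((6, 5)), sigma2=0.5)
print(P0.shape, float(P0.sum(axis=0).max()))
```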
Since the source feature point set is constructed as the GMM, the spatial transformation update is completed by obtaining the GMM parameters $\Theta$ and $\sigma^2$. The spatial transformation update of a non-rigid registration can be regarded as a parameter optimization process; in this embodiment, the optimal GMM parameters $\Theta$ and $\sigma^2$ are obtained by minimizing the negative log-likelihood function of the probability density, from which the optimal parameter matrix is further obtained.
The negative log-likelihood function of the probability density is:

$$E = -\sum_{m=1}^{M} \log \sum_{n=1}^{N+1} P(n)\, PDF(y_m \mid \rho_n)$$
the present invention employs an expectation-maximization (EM) algorithm to solve this optimization problem. The EM algorithm includes two steps of calculating the posterior probability (E step) and maximizing the expectation (M step). And according to the obtained posterior probability, taking the negative log-likelihood function of the probability density as an energy function, and expanding the function as follows:
$$Q(\Theta, \sigma^2) = \frac{1}{2\sigma^2}\sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)\,\big\| y_m - C(\rho_n, \Theta_n)\big\|^2 + \frac{N_P D}{2}\log \sigma^2, \qquad N_P = \sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)$$

The optimal GMM parameters $\Theta$ and $\sigma^2$ are obtained by minimizing the negative log-likelihood function of the probability density, $\sigma^2$ denoting the first optimization parameter and $\Theta$ the second; this fulfils the maximization of the expectation. In order to take partial derivatives of the energy function, the invention converts it into a pure matrix form. To this end, the spatial transformation C of the source feature point set X is first defined based on a Gaussian radial basis function (GRBF):

$$C(X, W) = X + GW$$
where W is an N×D weight matrix of the Gaussian kernel and G is an N×N Gaussian kernel matrix obtained from the Gaussian radial basis function, an element of which is expressed as:

$$g_{nm} = \exp\!\left(-\frac{\|x_n - x_m\|^2}{2\beta^2}\right)$$

(β denoting the width of the Gaussian radial basis function).
where n, m represent the position of the element in the matrix. W is an NxD weight matrix of the Gaussian kernel, and the optimization parameter theta is W. The expansion of the energy function is as follows:
$$Q(W, \sigma^2) = \frac{1}{2\sigma^2}\sum_{m=1}^{M}\sum_{n=1}^{N} P_{old}(\rho_n \mid y_m)\,\big\| y_m - (x_n + G(n,\cdot)\, W)\big\|^2 + \frac{N_P D}{2}\log \sigma^2$$
the energy function is represented in the form of a matrix to perform the partial derivative calculation:
$$Q(W,\sigma^2) = \frac{1}{2\sigma^2}\Big[\mathrm{Trace}\big(Y^{T} P_Y Y\big) - 2\,\mathrm{Trace}\big((P_0 Y)^{T}(X+GW)\big) + \mathrm{Trace}\big((X+GW)^{T} P_X (X+GW)\big)\Big] + \frac{N_P D}{2}\log \sigma^2$$
where Trace(·) denotes the trace of a matrix; the N×N matrix $P_X$ and the M×M matrix $P_Y$ are diagonal matrices formed from the column vectors $P_0\mathbf{1}$ and $P_0^{T}\mathbf{1}$, respectively, where $\mathbf{1}$ is a column vector whose elements are all 1. W is the N×D weight matrix of the Gaussian kernel, G is the N×N Gaussian kernel matrix obtained from the Gaussian radial basis function, X denotes the source feature point set and Y the target feature point set.
The optimal parameters are obtained by solving a function set of the following partial derivatives:
$$\frac{\partial Q}{\partial W} = 0, \qquad \frac{\partial Q}{\partial \sigma^2} = 0$$
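The non-rigid transform X + GW can be sketched directly in NumPy as below; the Gaussian-kernel width beta is an assumed free parameter, since the patent shows the kernel element formula only as an image.

```python
import numpy as np

def gaussian_kernel(X: np.ndarray, beta: float = 2.0) -> np.ndarray:
    """N x N Gaussian radial basis kernel with g_nm = exp(-||x_n - x_m||^2 / (2*beta^2)).
    beta is an assumed kernel width (not specified in text form in the patent)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-sq / (2.0 * beta ** 2))

def transform_points(X: np.ndarray, W: np.ndarray, beta: float = 2.0) -> np.ndarray:
    """Non-rigid spatial transformation of the source point set: C(X, W) = X + G W."""
    return X + gaussian_kernel(X, beta) @ W

X = np.random.default_rng(5).normal(size=(10, 2))
W = np.zeros((10, 2))          # zero weights leave the point set unchanged
print(np.allclose(transform_points(X, W), X))
```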
regarding global geometric constraint, global constraint terms are added in an energy function to maintain global structural stability of a feature point set during spatial transformation updating, namely a regularization operator is adopted
to update the non-rigid transformation while keeping the global structure stable. This embodiment writes this regularization operator as follows:

$$R = \frac{\lambda}{2}\,\mathrm{Trace}\big(W^{T} G W\big)$$

where λ is a weight parameter that controls the global constraint strength. Adding the global constraint term to the expansion of the energy function gives the first energy function; adding a local constraint term to the first energy function then gives the second energy function. The expression of the first energy function is:

$$Q_G(W,\sigma^2) = Q(W,\sigma^2) + \frac{\lambda}{2}\,\mathrm{Trace}\big(W^{T} G W\big)$$

where $Q_G(W,\sigma^2)$ denotes the first energy function, $Q(W,\sigma^2)$ the energy function, and the constant λ is the weight parameter controlling the global constraint strength. A new set of partial derivatives, $\partial Q_G/\partial W = 0$ and $\partial Q_G/\partial \sigma^2 = 0$, is then obtained [shown only as images in the original].
regarding the local set structure constraint, the present embodiment defines the local geometry constraint term based on the local feature descriptor LGSF defined in S3 as follows:
[the local geometric structure constraint term is available only as an image in the original; it compares the LGSF descriptors of the source feature set before and after the spatial transformation]

where η is a weight parameter controlling the local constraint strength. The core idea of the constraint is to restrict the local deformation of the source feature set X under the spatial transformation by judging the local similarity of X before and after the transformation. The value of the weight parameter η has an important influence on the local constraint term.

However, the strength required of the local constraint term differs between situations and between stages of the registration process, so the value strategy for η has a certain influence on the performance of the proposed method. To maximize the performance of the method, the strategy is designed around the constraint strength needed at different stages of registration: at the start of registration, the local structural constraint is applied to the spatial transformation at full strength, set according to the maximum value of the weight parameter η, and the strength of the local structural constraint is then gradually weakened as registration proceeds. A deterministic annealing technique is therefore used to define the weight parameter η [its formula is available only as an image in the original]: η decays from the preset maximum m as the current iteration number t grows, at a rate controlled by a constant c. Adding the local geometric structure constraint to the energy function that controls global geometric stability allows the method to maintain both global and local stability while updating the spatial transformation. The matrix form of the local structural constraint is given here:

$$L = \eta\,\mathrm{Trace}\big(D^{T} D\big), \qquad D = U_Y \hat{Y} - U_X (X + GW)$$

where I denotes an identity matrix used in forming $U_X$ and $U_Y$ [their definition is available only as an image in the original], $\hat{Y}$ denotes the target feature point set after the correspondence update, and Y the target feature point set before it.

Suppose the feature point set T includes J points, and point $t_{ik}$ is the k-th nearest neighbor of feature point $t_i$. H(T) is defined as a J×J matrix whose elements [defined only as an image in the original] make H(T) a sparse matrix holding the weights of the K nearest neighbors of each $t_i$. This definition strengthens the constraint term's maintenance of the local structural stability of the point set, by making the nearest neighbors of the point set before and after registration correspond to the same points. Adding the local geometric structure constraint term to the first energy function gives the second energy function, whose expression is:
$$Q_{GL}(W,\sigma^2) = Q_G(W,\sigma^2) + \eta\,\mathrm{Trace}\big(D^{T} D\big)$$
Expanding the second energy function and setting the partial derivative of each term to zero then gives the solution of the parameter optimization:
W = (GP_X G + 2ησ²GᵀUᵀUG)⁻¹(GP_0 Y − GP_0 Y)
together with the corresponding closed-form update for σ².
P_X is an N × N matrix and P_Y is an M × M matrix; P_X and P_Y are diagonal matrices formed from the column vectors obtained by multiplying the correspondence matrix P_0 by the all-ones vector, where 1 is a column vector whose elements are all 1.
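To make the local constraint and its annealed weight concrete, the sketch below builds the nearest-neighbour weight matrix H(T), the matrix D, and the penalty ηTrace(DᵀD). The uniform 1/K neighbour weights, the form U = I − H, and the exponential-decay schedule for η are assumptions made for illustration; the filing defines these quantities through formulas reproduced only as images.

```python
import numpy as np

def knn_weight_matrix(T, K=5):
    """J x J neighbour weight matrix H(T); uniform 1/K weights on the K nearest neighbours are an assumption."""
    J = T.shape[0]
    d2 = np.sum((T[:, None, :] - T[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)             # a point is not its own neighbour
    H = np.zeros((J, J))
    for i in range(J):
        nbrs = np.argsort(d2[i])[:K]         # indices of the K nearest neighbours of t_i
        H[i, nbrs] = 1.0 / K
    return H

def local_constraint(X, Y, G, W, eta, K=5):
    """Local geometric-structure penalty eta * Trace(D^T D), assuming U = I - H(.).

    X and Y are assumed here to contain the same number of points.
    """
    UX = np.eye(X.shape[0]) - knn_weight_matrix(X, K)
    UY = np.eye(Y.shape[0]) - knn_weight_matrix(Y, K)
    D = UY @ Y - UX @ (X + G @ W)
    return eta * np.trace(D.T @ D)

def eta_schedule(t, m=1.0, c=10.0):
    """Deterministic-annealing weight; exponential decay in the iteration count t is an assumption."""
    return m * np.exp(-t / c)
```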
S5, image registration: perform image registration based on a backward method, and obtain the transformed image from the mapping relation between the source feature point set and the target feature point set calculated in S4.
Specifically, the spatially transformed source feature point set is obtained from the mapping relation; combining it with the source point set X before transformation gives the pair of point sets used for registration, and image registration is then achieved. The invention adopts a backward method to complete the image transformation: the target image and the feature points on it are used as control points, and the transformed image is derived in reverse from the spatial transformation of the point set.
The image being transformed is regarded as a thin plate, and the spatial transformation is completed with a Thin-Plate Spline (TPS) transformation model. The TPS parameter matrix obtained from the control points has size (N + 3) × 3; it is built from an N × 3 block whose n-th row corresponds to the n-th control point, a 3 × 3 zero matrix O, and the TPS kernel matrix of size N × N.
The coordinates of a rectangular grid point set δ_t are obtained from the pixel-by-pixel indices of the rectangular image, the number of grid points being the number of all pixels of the image, Z = X(I_t) × Y(I_t). The grid δ_t is then fed into the TPS model as input, which yields a Z × N kernel evaluation matrix and a Z × 3 matrix whose z-th row corresponds to the z-th grid point; the first two columns of the result are taken as the coordinates of the transformed grid point set. The grid point set obtained through the TPS model thus represents the positions of the transformed image, and resampling is finally performed to obtain the content of the transformed image: sample pixels of the image to be registered are filled into the positions of the transformed image, and positions other than these coordinates are set to 0. In order to enhance the smoothness of the transformed image during resampling, the invention uses bicubic interpolation for the sample filling.
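As an illustration of this backward warping step, the sketch below fits a TPS mapping from target-image control points to source-image control points and then resamples the source image with bicubic interpolation. The kernel U(r) = r² log r, the unregularized linear solve, the (x, y) coordinate convention and all function names are standard TPS choices assumed here for illustration rather than the exact formulation of the filing.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def tps_kernel(r2):
    """TPS radial kernel U(r) = r^2 log r, written in terms of the squared distance r2."""
    return np.where(r2 > 0, 0.5 * r2 * np.log(r2 + 1e-12), 0.0)

def fit_tps(ctrl_dst, ctrl_src):
    """Fit TPS parameters mapping target-image control points (x, y) to source-image control points."""
    N = ctrl_dst.shape[0]
    r2 = np.sum((ctrl_dst[:, None, :] - ctrl_dst[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(r2)                              # N x N kernel matrix
    P = np.hstack([np.ones((N, 1)), ctrl_dst])      # N x 3 affine block
    A = np.zeros((N + 3, N + 3))
    A[:N, :N], A[:N, N:], A[N:, :N] = K, P, P.T     # 3 x 3 zero block stays zero
    b = np.zeros((N + 3, 2))
    b[:N] = ctrl_src
    return np.linalg.solve(A, b)                    # (N + 3) x 2 parameter matrix

def warp_image(src_img, ctrl_dst, ctrl_src, out_shape):
    """Backward warping: every output pixel is mapped to a source coordinate and resampled."""
    params = fit_tps(ctrl_dst, ctrl_src)
    ys, xs = np.mgrid[:out_shape[0], :out_shape[1]]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)    # Z x 2 grid points (x, y)
    r2 = np.sum((grid[:, None, :] - ctrl_dst[None, :, :]) ** 2, axis=-1)
    Phi = np.hstack([tps_kernel(r2), np.ones((grid.shape[0], 1)), grid])
    src_xy = Phi @ params                           # source coordinate of every grid point
    coords = [src_xy[:, 1].reshape(out_shape), src_xy[:, 0].reshape(out_shape)]   # (row, col)
    # order=3 gives the bicubic interpolation mentioned above; out-of-range samples are set to 0
    return map_coordinates(src_img, coords, order=3, mode='constant', cval=0.0)
```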
S6: obtain the transformed image, and perform pixel-level fusion of the transformed image with the reference image to obtain the fused image, thereby completing the multi-modal retinal image fusion.
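This step does not prescribe a specific fusion rule, so the sketch below uses a plain pixel-wise weighted average as a stand-in; the weight α, the zero-border handling and the function name are assumptions for illustration only.

```python
import numpy as np

def fuse_pixelwise(transformed, reference, alpha=0.5):
    """Pixel-level fusion of the registered (transformed) image with the reference image.

    A weighted average is an assumed stand-in for the fusion rule; positions filled with 0
    during resampling are taken from the reference image alone.
    """
    transformed = transformed.astype(float)
    reference = reference.astype(float)
    fused = alpha * transformed + (1.0 - alpha) * reference
    mask = transformed == 0                     # zero-filled positions from the backward warp
    fused[mask] = reference[mask]
    return fused
```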
First, guided image filtering is used to enhance the edges of the different types of retinal images. Second, the edge maps and geometric features of the retinal images are combined through the multi-feature descriptors to improve the description of the feature sets and to exclude redundant points. The multi-feature-guided model then provides accurate guidance for feature-set registration, and feature-point-based registration finally yields accurate image registration. On the basis of this accurate registration, multi-modal retinal fundus image fusion is performed. The processing of the retinal image pairs is shown in Figs. 3, 4 and 5 of the specification: for each set of results, the first row shows the source image and the target image, the second row shows the corresponding edge images, the third row shows the feature matching result, and the last row shows the transformed image on the left and, on the right, a 12 × 12 checkerboard that alternately displays blocks of the transformed image and the target image. The complete flow diagram is shown in Fig. 6 of the specification.
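The checkerboard display used in those figures can be reproduced with a few lines of array slicing; the sketch below assumes two same-sized images and the 12 × 12 grid mentioned above.

```python
import numpy as np

def checkerboard(img_a, img_b, blocks=12):
    """Alternate square blocks of two same-sized images, as in the 12 x 12 checkerboard figures."""
    h, w = img_a.shape[:2]
    bh, bw = h // blocks, w // blocks
    out = img_b.copy()
    for i in range(blocks):
        for j in range(blocks):
            if (i + j) % 2 == 0:                # even blocks come from the first image
                out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = \
                    img_a[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
    return out
```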
This embodiment provides a multi-modal retinal image fusion method and system based on image registration. On the basis of high-precision image registration, effective multi-modal fundus image fusion is carried out, which makes it convenient for ophthalmologists to observe changes in lesion areas and thereby assists the diagnosis and treatment of related retinal diseases.
Example 2
In this embodiment, on the basis of embodiment 1, the multimodal retinal image fusion method based on image registration proposed in embodiment 1 is modularized to form a multimodal retinal image fusion system based on image registration, and a schematic diagram of each module is shown in fig. 7 in the specification.
A multi-modal retinal image fusion system based on image registration comprises an image input unit 1, an image preprocessing unit 2, a feature combination and extraction unit 3, a feature point set registration unit 4, an image registration unit 5 and an image fusion unit 6 which are sequentially connected.
Image input unit 1: used to obtain retinal image pairs, each pair comprising a source image and a target image.
The image preprocessing unit 2: used to preprocess the retinal image pair, obtain the retinal edge image pair, and extract a feature point set from each retinal edge image pair, the feature point set comprising a source feature point set extracted from the preprocessed source image and a target feature point set extracted from the preprocessed target image. Preprocessing includes contrast enhancement, noise removal and the like, together with guided image filtering to enhance the edges of the different types of retinal images.
Feature combining and extracting unit 3: used to combine a plurality of feature descriptors to construct the multi-feature difference descriptor and to guide feature extraction for the feature point sets. The edge maps and geometric features of the retinal images are combined through the multi-feature descriptors to improve the description of the feature sets and to exclude redundant points.
The feature point set registration unit 4: used to evaluate the corresponding relation between the source feature point set and the target feature point set through the multi-feature difference descriptor, obtain the correspondence matrix, and update the spatial transformation of the source feature point set according to the correspondence matrix. The multi-feature-guided model provides accurate guidance for feature-set registration.
The image registration unit 5: used to realize image registration according to the source feature point sets before and after the spatial transformation and to obtain the spatially transformed image in combination with the target image. Feature-point-based image registration finally provides accurate image registration.
The image fusion unit 6: used to perform pixel-level fusion of the spatially transformed image with a preset reference image to obtain the fused image. Multi-modal retinal fundus image fusion is performed on the basis of the accurate image registration.
The feature point set registration unit 4 specifically includes a model construction unit 41, a parameter adjustment unit 42, and an iteration unit 43. Specifically:
The model construction unit 41: used to construct a Gaussian mixture model on the source feature point set and to calculate the correspondence matrix between the source feature point set and the target feature point set through the multi-feature difference descriptor.
The parameter adjusting unit 42: used to adjust the parameters of the Gaussian mixture model based on the correspondence matrix and to update the spatial transformation of the source feature point set based on the global geometric-structure constraint and the local geometric-structure constraint.
The iteration unit 43: used to iterate the model construction unit and the parameter adjusting unit so that the source feature point set gradually approaches the target feature point set until the preset matching relation is satisfied.
The plurality of feature descriptors comprises an edge orientation histogram descriptor based on a SIFT-like scale space (EOH-SIFT descriptor), a local geometric-structure feature descriptor, and a global geometric-structure feature descriptor. The feature combining and extracting unit 3 specifically includes a scale difference unit 31, a local feature unit 32, a global feature unit 33, and a difference construction unit 34. Specifically:
The scale difference unit 31: used to match the feature point sets on the retinal edge image pair, taken of the same scene under different spectral conditions, through the edge orientation histogram based on the SIFT-like scale space, and to obtain the scale difference between the source feature point set and the target feature point set;
the local feature unit 32: used to obtain the local feature difference matrix of the source feature point set and the target feature point set through the local geometric-structure feature descriptor;
the global feature unit 33: used to obtain the global feature difference matrix of the source feature point set and the target feature point set through the global geometric-structure feature descriptor;
the difference construction unit 34: used to construct the multi-feature difference descriptor from the scale difference, the local feature difference matrix and the global feature difference matrix. A minimal wiring of these units is sketched below.
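The class and method names below are hypothetical placeholders that merely stand in for units 1 to 6 described above; they are not an API defined by the filing.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RetinalFusionPipeline:
    """Hypothetical wiring of the units; each attribute stands in for one module described above."""
    preprocess: Callable          # unit 2: edge enhancement and feature point extraction
    build_descriptors: Callable   # unit 3: multi-feature difference descriptor construction
    register_points: Callable     # unit 4: GMM-based feature point set registration
    register_images: Callable     # unit 5: backward TPS image warping
    fuse: Callable                # unit 6: pixel-level fusion

    def run(self, source_img, target_img):
        src_pts, tgt_pts = self.preprocess(source_img, target_img)
        descriptors = self.build_descriptors(src_pts, tgt_pts)
        mapping = self.register_points(src_pts, tgt_pts, descriptors)
        warped = self.register_images(source_img, target_img, mapping)
        return self.fuse(warped, target_img)
```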
In this embodiment, a multi-modal retinal image fusion system based on image registration is provided on the basis of embodiment 1; the method of embodiment 1 is modularized into a concrete system, which gives it greater practicality.
In view of the prior art, the invention provides a multi-modal retinal image fusion method and system based on image registration. It solves the problems of insufficient matching precision and inaccurate image spatial transformation in existing retinal image registration; with the proposed method, optimal registration performance for multi-modal retinal image fusion can be obtained, and it is superior to the current state-of-the-art methods in most cases. On the basis of high-precision image registration, effective multi-modal fundus image fusion is carried out, which makes it convenient for ophthalmologists to observe changes in lesion areas and thereby assists the diagnosis and treatment of related retinal diseases.
It will be understood by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices, and optionally they may be implemented as program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or they may be separately fabricated as individual integrated-circuit modules, or a plurality of the modules or steps may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.

Claims (20)

1. A multi-modal retinal image fusion method based on image registration is characterized by comprising the following steps:
image input: acquiring a retina image pair comprising a source image and a target image;
image preprocessing: preprocessing the retina image pair to obtain a retina edge image pair, and extracting a feature point set from each pair of retina edge image pairs, wherein the feature point set comprises a source feature point set extracted from a preprocessed source image and a target feature point set extracted from a preprocessed target image;
combining and extracting features: combining a plurality of feature descriptors to construct a multi-feature difference descriptor, and guiding feature extraction of the feature point set;
registration of the feature point set: evaluating the corresponding relation between the source feature point set and the target feature point set through the multi-feature difference descriptor to obtain a corresponding relation matrix, and updating the spatial transformation of the source feature point set according to the corresponding relation matrix;
image registration: realizing image registration according to the source feature point sets before and after the spatial transformation, and acquiring the image after the spatial transformation by combining the target image;
image fusion: carrying out pixel-level fusion of the spatially transformed image with a preset reference image to obtain a fused image.
2. The method of claim 1, wherein the plurality of feature descriptors comprises an edge orientation histogram descriptor, a local geometry feature descriptor, and a global geometry feature descriptor based on a SIFT-like scale space;
the feature combining and extracting specifically comprises:
matching feature point sets on the retina edge image pair under the same scene and different spectrum conditions through the edge orientation histogram based on the SIFT-like scale space to obtain the scale difference of the source feature point set and the target feature point set;
acquiring a local feature difference matrix of the source feature point set and the target feature point set through the local geometric structure feature descriptor;
acquiring a global feature difference matrix of the source feature point set and the target feature point set through the global geometric structure feature descriptor;
and constructing the multi-feature difference descriptor according to the scale difference, the local feature difference matrix and the global feature difference matrix.
3. The method according to claim 2, wherein the step of matching, by the edge orientation histogram based on the SIFT-like scale space, the feature point sets on the retina edge image pair located in the same scene and under different spectral conditions to obtain the scale difference between the source feature point set and the target feature point set specifically comprises:
detecting the set of feature points based on a SIFT-like scale-space representation;
describing the feature point set through an edge orientation histogram descriptor, obtaining feature description of the feature point set, wherein the edge orientation histogram contains spatial information from a contour near each feature point, a window with a preset size is set by taking each feature point as a center to describe the shape and the contour of the retina edge image pair, and invariance of a scale space is maintained;
and matching the feature points in the target feature point set with the feature points in the source feature point set according to the feature description, and judging the difference degree between the feature points by using the Euclidean distance to obtain the scale difference.
4. The method as claimed in claim 3, wherein the edge orientation histogram descriptor based on SIFT-like scale space improves the robustness of feature description by scale difference;
the difference in scale is defined as:
SD(X, Y) = scl_x − scl_y,  (x < SD < y)
wherein SD represents the scale difference, scl is the scale-space level at which a feature point lies, X represents the source feature point set, Y represents the target feature point set, and the bounds x and y are taken from the peaks of the SD histogram.
5. The method of claim 4, wherein, assuming the feature point set T contains J feature points and t_ik is the k-th nearest-neighbour feature point of the feature point t_i, the local geometric-structure feature descriptor of the i-th point t_i of the point set T is defined as a weighted combination of its neighbour vectors, where K denotes the number of adjacent points, v_ik denotes the vector from t_i to t_ik, and μ_ik is a weight parameter used to control how strongly the adjacent-point vectors describe the local features;
each element Ψ_Lmn of the local feature difference matrix is defined through a Gaussian radial basis function, wherein Ψ_Lmn denotes an element of the local feature difference matrix, m denotes a feature point in the target feature point set, n denotes a feature point in the source feature point set, and Y denotes the target feature point set.
6. The method of claim 5, wherein the global feature difference is described by a Euclidean distance, and each element Ψ_Gmn of the global feature difference matrix is defined by the Euclidean distance between the corresponding target feature point and the centroid of the spatially transformed Gaussian mixture model, wherein Ψ_Gmn denotes an element of the global feature difference matrix, m denotes a feature point in the target feature point set, and n denotes a feature point in the source feature point set.
7. The method of claim 6, wherein the multi-feature difference descriptor is defined as follows:
MLF(T) = Ψ_G + SD(X, Y) + T₁Ψ_L
wherein T₁ is an annealing parameter of the local feature descriptor, Ψ_G denotes the global feature difference matrix, Ψ_L denotes the local feature difference matrix, SD(X, Y) denotes the scale difference, and MLF(T) denotes the multi-feature difference descriptor.
8. The method according to claim 2, wherein the feature point set registration specifically comprises:
constructing a probability model on the source characteristic point set, and calculating a corresponding relation matrix between the source characteristic point set and the target characteristic point set through the multi-characteristic difference descriptor;
based on the corresponding relation matrix, adjusting parameters of the probability model, and updating the spatial variation of the source feature point set based on global geometric structure constraint and local geometric structure constraint;
and iterating the steps to enable the source characteristic point set to gradually approach the target characteristic point set until a preset matching relation is met.
9. The method according to claim 8, wherein constructing a probability model on the source feature point set and calculating a correspondence matrix between the source feature point set and the target feature point set through the multi-feature difference descriptor specifically comprises:
constructing a Gaussian mixture model on the source feature point set;
evaluating a correspondence by measuring the multi-feature difference descriptor between the source feature point set and the target feature point set by the Gaussian mixture model;
converting the evaluation problem of the corresponding relation into the probability density of the Gaussian mixture model, and solving the probability density by using an approximate solution;
based on the probability density, calculating the posterior probability of the Gaussian mixture model according to a Bayes rule to estimate the corresponding relation, and acquiring the corresponding relation matrix;
and acquiring an updated source characteristic point set according to the corresponding relation matrix and the source characteristic point set before updating.
10. The method according to claim 9, wherein "adjusting parameters of the probabilistic model based on the correspondence matrix, updating spatial variations based on global geometry constraints and local geometry constraints" specifically comprises:
updating spatial variation by adjusting model parameters of the Gaussian mixture model based on the corresponding relation matrix to obtain an optimal solution of the model parameters;
adding the global geometric structure constraint, and maintaining the global stability of the source feature point set during space change;
and adding the local geometric structure constraint based on the local geometric feature descriptor, and constraining the local deformation of the source feature point set by judging the local similarity between the feature point sets before and after the spatial transformation.
11. The method of claim 9, wherein the model parameters comprise a first optimization parameter σ² and a second optimization parameter Θ;
the process of obtaining the optimal solution of the model parameters comprises the following steps:
obtaining an optimal solution for the model parameters by minimizing a negative log-likelihood function of the probability density;
solving a negative log-likelihood function of the probability density using an expectation-maximization algorithm, the expectation-maximization algorithm comprising calculating the posterior probability and a maximization expectation;
according to the posterior probability, taking a negative log-likelihood function of the probability density as an energy function expected by the maximization;
adding a global constraint term into the energy function based on the global geometric structure constraint to obtain a first constraint function;
and adding a local constraint term into the first constraint function based on the local geometric structure constraint so as to obtain an optimal solution of the parameters of the Gaussian mixture model.
12. The method of claim 11, wherein the probability density of the Gaussian mixture model over a data point y_m is formed from N + 1 Gaussian components, wherein m denotes a feature point in the target feature point set, n denotes a feature point in the source feature point set, M denotes the total number of feature points in the target feature point set, and N denotes the total number of feature points in the source feature point set; each of the first N Gaussian components corresponds to a point x_n of the source feature point set X and carries an equal prior probability P(n), and an additional (N + 1)-th Gaussian component is used to eliminate the effect of redundant points, its contribution being controlled by the parameter ω (0 < ω < 1);
the probability density of a single Gaussian component for the data point y_m is a Gaussian function of the multi-feature difference MLF(T) with variance σ², wherein MLF(T) denotes the multi-feature difference descriptor and σ² denotes the first optimization parameter;
the corresponding relation matrix is formed from the posterior probabilities of the Gaussian components for the data points y_m, wherein n denotes a feature point in the source feature point set and N denotes the total number of feature points in the source feature point set.
13. The method of claim 12, wherein E denotes the negative log-likelihood function of the probability density, y_m denotes a data point, the centroids of the spatially transformed Gaussian mixture model appear in the Gaussian components, and P(·) denotes the corresponding relation;
the negative log-likelihood function of the probability density is taken as the energy function of the maximization expectation and expanded term by term, wherein Θ denotes the second optimization parameter;
the spatial transformation of the source feature point set X is defined based on Gaussian radial basis functions as X + GW, wherein W is the weight matrix of the Gaussian kernel and G is an N × N Gaussian kernel matrix obtained from the Gaussian radial basis functions, whose element g_nm is determined by the source feature points, n and m denoting the position of the element in the matrix and x denoting a feature point in the source feature point set;
W is an N × D weight matrix of the Gaussian kernel, the second optimization parameter Θ is converted into W, and the energy function is written as Q(W, σ²), in which the corresponding relation matrix of the Gaussian components over the data points y_m appears.
14. The method of claim 13, wherein the global geometric-structure constraint comprises adding a global geometric constraint term to the energy function, the global geometric constraint term being a regularization operator R defined over the spatial transformation of the source feature point set, where the constant λ is a weight parameter that controls the global constraint strength, Trace(·) denotes the trace of a matrix, W is an N × D weighting matrix of the Gaussian kernel, and G is an N × N Gaussian kernel matrix obtained from the Gaussian radial basis functions;
adding the global geometric constraint term to the energy function gives the first energy function Q_G(W, σ²), wherein Q_G(W, σ²) denotes the first energy function, Q(W, σ²) denotes the energy function, and the constant λ is the weight parameter that controls the global constraint strength.
15. The method of claim 13, wherein in the local geometric-structure constraint term η is a weight parameter that controls the local constraint strength, and the local deformation of the source feature set X in the spatial transformation is constrained by judging the local similarity of X before and after the spatial transformation;
η is defined by a deterministic annealing technique as a decreasing function of the iteration count, where m is the maximum value of the preset weight parameter η, t is the current iteration number, and a constant c is used to control the speed at which the constraint strength changes.
16. The method of claim 15, wherein adding the local geometric constraint term to the first energy function gives a second energy function, whose expression is:
Q_GL(W, σ²) = Q_G(W, σ²) + ηTrace(DᵀD)
expanding the second energy function and differentiating each term gives the optimal solution of the model parameters:
W = (GP_X G + 2ησ²GᵀUᵀUG)⁻¹(GP_0 Y − GP_0 Y)
together with the corresponding closed-form update for σ², wherein P_X is an N × N matrix, P_Y is an M × M matrix, P_X and P_Y are diagonal matrices formed from the column vectors obtained from the correspondence matrix P_0 and the all-ones vector, and 1 is a column vector whose elements are all 1.
17. The method according to claim 12, wherein "realizing image registration according to the source feature point sets before and after spatial transformation, and acquiring the spatially transformed image in combination with the target image" specifically includes:
acquiring a source characteristic point set after spatial transformation according to the corresponding relation matrix;
constructing a control point set according to the source characteristic point set after the spatial transformation and the source characteristic point set before the spatial transformation, and realizing image registration;
and based on a reverse deduction method, reversely deducing the transformed image according to the spatial transformation by taking the target image and the control point set as control points.
18. A multi-modality retinal image fusion system based on image registration, comprising:
an image input unit: for obtaining a retinal image pair comprising a source image and a target image;
an image preprocessing unit: the system comprises a pre-processing unit, a processing unit and a display unit, wherein the pre-processing unit is used for pre-processing the retinal image pairs, acquiring retinal edge image pairs, and extracting a feature point set from each retinal edge image pair, wherein the feature point set comprises a source feature point set extracted from a pre-processed source image and a target feature point set extracted from a pre-processed target image;
a feature combining and extracting unit: the system is used for combining a plurality of feature descriptors to construct a multi-feature difference descriptor and guiding feature extraction of the feature point set;
a feature point set registration unit: the system comprises a multi-feature difference descriptor, a source feature point set and a target feature point set, wherein the multi-feature difference descriptor is used for evaluating the corresponding relation between the source feature point set and the target feature point set, acquiring a corresponding relation matrix, and updating the spatial transformation of the source feature point set according to the corresponding relation matrix;
an image registration unit: the image registration is realized according to the source feature point sets before and after the spatial transformation, and the image after the spatial transformation is obtained by combining the target image;
an image fusion unit: and the fusion image is obtained by fusing the image after the spatial transformation and a preset reference image at a pixel level.
19. The system according to claim 18, characterized in that the feature point set registration unit comprises in particular,
a model construction unit: the system is used for constructing a Gaussian mixture model on the source feature point set, and calculating a corresponding relation matrix between the source feature point set and the target feature point set through the multi-feature difference descriptor;
a parameter adjusting unit: the system comprises a correspondence matrix, a source feature point set and a Gaussian mixture model, wherein the correspondence matrix is used for adjusting parameters of the Gaussian mixture model and updating the spatial variation of the source feature point set based on global geometric structure constraint and local geometric structure constraint;
an iteration unit: and the parameter adjusting unit is used for iterating the model building unit and the parameter adjusting unit to enable the source characteristic point set to gradually approach the target characteristic point set until a preset matching relation is met.
20. The system of claim 18, wherein the plurality of feature descriptors comprises a SIFT-like scale space-based edge orientation histogram descriptor (EOH-SIFT descriptor), a local geometry feature descriptor, a global geometry feature descriptor;
the feature combining and extracting unit specifically includes:
a scale difference unit: the edge orientation histogram based on the SIFT scale space is used for matching feature point sets on the retina edge image pair in the same scene and under different spectrum conditions, and the scale difference of the source feature point set and the target feature point set is obtained;
local feature unit: the local feature difference matrix is used for acquiring the source feature point set and the target feature point set through the local geometric structure feature descriptor;
global feature unit: the global feature difference matrix is used for acquiring the source feature point set and the target feature point set through the global geometric structure feature descriptor;
a difference construction unit: for constructing the multi-feature difference descriptor from the scale difference, the local feature difference matrix and the global feature difference matrix.
CN202110554406.4A 2021-05-20 2021-05-20 Multi-modal retinal image fusion method and system based on image registration Pending CN113298742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110554406.4A CN113298742A (en) 2021-05-20 2021-05-20 Multi-modal retinal image fusion method and system based on image registration

Publications (1)

Publication Number Publication Date
CN113298742A true CN113298742A (en) 2021-08-24

Family

ID=77323371

Country Status (1)

Country Link
CN (1) CN113298742A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050078881A1 (en) * 2003-09-22 2005-04-14 Chenyang Xu Method and system for hybrid rigid registration based on joint correspondences between scale-invariant salient region features
US20140016830A1 (en) * 2012-07-13 2014-01-16 Seiko Epson Corporation Small Vein Image Recognition and Authorization Using Constrained Geometrical Matching and Weighted Voting Under Generic Tree Model
CN106548491A (en) * 2016-09-30 2017-03-29 深圳大学 A kind of method for registering images, its image interfusion method and its device
CN110544274A (en) * 2019-07-18 2019-12-06 山东师范大学 multispectral-based fundus image registration method and system
CN111260701A (en) * 2020-01-08 2020-06-09 华南理工大学 Multi-mode retina fundus image registration method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONGSHENG BI ET AL.: "Multiple Image Features-Based Retinal Image Registration Using Global and Local Geometric Structure Constraints", 《IEEE ACCESS》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294371A (en) * 2022-01-05 2022-11-04 山东建筑大学 Complementary feature reliable description and matching method based on deep learning
CN115294371B (en) * 2022-01-05 2023-10-13 山东建筑大学 Complementary feature reliable description and matching method based on deep learning
CN114862760A (en) * 2022-03-30 2022-08-05 中山大学中山眼科中心 Method and device for detecting retinopathy of prematurity
CN115690556A (en) * 2022-11-08 2023-02-03 河北北方学院附属第一医院 Image recognition method and system based on multi-modal iconography characteristics
CN116109852A (en) * 2023-04-13 2023-05-12 安徽大学 Quick and high-precision feature matching error elimination method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210824