CN111223069B - Image fusion method and system

Image fusion method and system

Info

Publication number
CN111223069B
Authority
CN
China
Prior art keywords
image
visible light
base layer
layer image
fluorescent
Prior art date
Legal status
Active
Application number
CN202010036038.XA
Other languages
Chinese (zh)
Other versions
CN111223069A
Inventor
Wang Huiquan (王慧泉)
Mao Run (毛润)
Jiang Bo (姜泊)
Niu Pingjuan (牛萍娟)
Current Assignee
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date
Filing date
Publication date
Application filed by Tianjin Polytechnic University
Priority: CN202010036038.XA, filed 2020-01-14
Publication of application CN111223069A
Application granted
Publication of granted patent CN111223069B
Legal status: Active

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G: Physics; G06: Computing; G06T: Image data processing or generation, in general)
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT] (under G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. SIFT or bags of words [BoW]; salient regional features)
    • G06T 2207/10064: Image acquisition modality: fluorescence image
    • G06T 2207/20032: Filtering details: median filtering
    • G06T 2207/20192: Image enhancement details: edge enhancement; edge preservation
    • G06T 2207/20221: Image combination: image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image fusion method and system. The method comprises the following steps: acquiring a source image, where the source image includes a fluorescence image and a visible light image; performing two-scale decomposition on the source image by Gaussian filtering to obtain a base layer image and a detail layer image of the source image; constructing a first weight map that highlights fluorescence information using a nonlinear function; fusing the base layer image corresponding to the fluorescence image with the base layer image corresponding to the visible light image to obtain the base layer image of the fused image; constructing a second weight map that enhances fluorescence information based on saliency detection; fusing the detail layer image corresponding to the fluorescence image with the detail layer image corresponding to the visible light image to obtain the detail layer image of the fused image; and reconstructing the fused image from its base layer image and detail layer image. The invention reduces the complexity of multi-scale algorithms and improves the efficiency of image fusion.

Description

Image fusion method and system
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image fusion method and system.
Background
Image fusion technology synthesizes two or more images of the same scene captured under different spectra and at different spatial detail. In a perceptual evaluation of different image fusion schemes, Toet et al. found that infrared images give the best target detection and recognition performance while visible light images give the best global scene perception; fusing the complementary information of visible light and infrared images into a new image therefore provides richer information, with important applications in research fields such as target recognition, multi-source information mining, medical imaging, and map matching.
Because multi-scale geometric analysis accords with human visual characteristics, the resulting fused images have good visual quality, and most mainstream fusion algorithms are based on multi-scale geometric analysis, such as pyramid fusion algorithms, discrete-wavelet-transform fusion algorithms, the curvelet transform, and the contourlet transform. However, traditional multi-scale image fusion methods operate in the frequency domain with high computational complexity and cannot meet real-time system requirements. Two-scale spatial-domain image fusion improves fusion efficiency, but existing two-scale methods tend to introduce noise when processing images.
Disclosure of Invention
The invention aims to provide an image fusion method and system, which are used for reducing the complexity of a multi-scale algorithm and improving the image fusion efficiency.
In order to achieve the above object, the present invention provides the following solutions:
an image fusion method, comprising:
acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
performing two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
constructing a first weight map highlighting fluorescence information using a nonlinear function;
fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of the fused image;
constructing a second weight map of enhanced fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
fusing the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image according to the second weight image to obtain a detail layer image of the fused image;
reconstructing the base layer image of the fusion image and the detail layer image of the fusion image to obtain the fusion image.
Optionally, the performing a two-scale decomposition on the source image by using a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image specifically includes:
using the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

performing two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation;
using the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

performing two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
Optionally, the constructing a first weight map with highlighted fluorescence information by using a nonlinear function specifically includes:
using the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

identifying target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image;
using the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

normalizing the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;
using the nonlinear function, adjusting the enhancement coefficient matrix by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function, and λ is the enhancement factor.
Optionally, the fusing, according to the first weight map, the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image to obtain a base layer image of the fused image specifically includes:
using the formula B_F = B_N*W_B + B_V*(1 - W_B), fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image to obtain the base layer image of the fused image; wherein B_N is the base layer image corresponding to the fluorescence image, B_V is the base layer image corresponding to the visible light image, W_B is the first weight map, and B_F is the base layer image of the fused image.
Optionally, the constructing a second weight map for enhancing fluorescence information based on saliency detection specifically includes:
using a median filter and a mean filter, using the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

constructing the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;
using the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

normalizing the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;
based on the enhancement coefficient matrix and the initial weight maps, constructing the second weight map (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix.
Optionally, the fusing, according to the second weight map, the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image to obtain a detail layer image of the fused image specifically includes:
using the formula

D_F = D_N*W̃_N + D_V*W̃_V

fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image to obtain the detail layer image D_F of the fused image; wherein D_N is the detail layer image corresponding to the fluorescence image, D_V is the detail layer image corresponding to the visible light image, W̃_N is the final weight map of the fluorescence image, and W̃_V is the final weight map of the visible light image.
The invention also provides an image fusion system, comprising:
the source image acquisition module is used for acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
the two-scale decomposition module is used for carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
the first weight map construction module is used for constructing a first weight map which highlights fluorescence information by using a nonlinear function;
the base layer image fusion module is used for fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight image to obtain a base layer image of the fused image;
the second weight map construction module is used for constructing a second weight map of enhanced fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
the detail layer image fusion module is used for fusing the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image according to the second weight image to obtain a detail layer image of the fused image;
and the reconstruction module is used for reconstructing the base layer image of the fusion image and the detail layer image of the fusion image to obtain the fusion image.
Optionally, the two-scale decomposition module specifically includes:
a fluorescence image two-scale decomposition unit, configured to use the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation;
a visible light image two-scale decomposition unit, configured to use the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
Optionally, the first weight map construction module specifically includes:
a target feature information identifying unit, configured to use the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

to identify target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image;
a first normalization unit, configured to use the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

to normalize the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;
a nonlinear adjustment unit, configured to use the nonlinear function to adjust the enhancement coefficient matrix by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function, and λ is the enhancement factor.
Optionally, the second weight map construction module specifically includes:
a visual saliency feature construction unit, configured to use a median filter and a mean filter with the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

to construct the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;
a second normalization unit, configured to use the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

to normalize the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;
a final weight map construction unit, configured to construct the second weight map according to the enhancement coefficient matrix and the initial weight maps (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method can effectively reserve the significance information of the source image, and simultaneously highlight the detail information of the fluorescent image, so that the accurate positioning and detail enhancement of the target object to be detected are realized, and a more comfortable visual effect is provided. The method has the advantages of obviously improving peak signal-to-noise ratio, mutual information, edge retention and visual information retention index, along with low fusion complexity and high fusion efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an image fusion method of the present invention;
FIG. 2 is a schematic diagram of an image fusion system according to the present invention;
FIG. 3 is a flow chart of an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 is a schematic flow chart of an image fusion method of the present invention. As shown in fig. 1, the image fusion method of the present invention includes the steps of:
step 100: a source image is acquired. The source image includes a fluorescence image and a visible light image to be fused.
Step 200: and carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image. The base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image. The specific process is as follows:
using the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

performing two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation.
Using the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

performing two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
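As an illustration, the following is a minimal Python sketch of this two-scale decomposition, assuming registered single-channel floating-point inputs. The helper name two_scale_decompose and the default sigma = 2.0 are illustrative assumptions (the patent does not fix r and σ here); scipy controls the window size r implicitly through its truncate parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_decompose(img: np.ndarray, sigma: float = 2.0):
    """Two-scale decomposition: base B = G(r, sigma) * I (Gaussian low-pass),
    detail D = I - B (the residual)."""
    img = img.astype(np.float64)
    base = gaussian_filter(img, sigma=sigma)  # base layer: large-scale information
    detail = img - base                       # detail layer: small-scale information
    return base, detail
```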
Step 300: a first weight map highlighting fluorescence information is constructed using a nonlinear function. The specific process is as follows:
using the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

identifying target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image.
Using the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

normalizing the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix.
Using the nonlinear function, the enhancement coefficient matrix is adjusted by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function with x ∈ [0, 1], and λ is the enhancement factor.
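A sketch of this weight-map construction follows. Three points are assumptions rather than the patent's own definitions: the feature image R is taken as |B_N| - |B_V| following the absolute-value bars in the text, min-max normalization is used for P, and the stand-in nonlinearity S_λ(x) = (1 + λ)x / (1 + λx) is substituted because the explicit form of S_λ appears only as an equation image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def first_weight_map(B_N, B_V, sigma=2.0, lam=0.5):
    # Fluorescence information feature image R from the two base layers.
    R = np.abs(B_N) - np.abs(B_V)
    # Enhancement coefficient matrix P (min-max normalization, assumed).
    P = (R - R.min()) / (R.max() - R.min() + 1e-12)
    # Assumed stand-in for the patent's nonlinear function S_lambda on [0, 1]:
    # monotone, fixes 0 and 1, and boosts mid-range values for lam > 0.
    S = (1.0 + lam) * P / (1.0 + lam * P)
    # W_B = G(r, sigma) * S_lambda(P): smooth the adjusted coefficients.
    return gaussian_filter(S, sigma=sigma), P
```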
Step 400: fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image according to the first weight map to obtain the base layer image of the fused image. Specifically, the formula B_F = B_N*W_B + B_V*(1 - W_B) is used to fuse the two base layer images; wherein B_N is the base layer image corresponding to the fluorescence image, B_V is the base layer image corresponding to the visible light image, W_B is the first weight map, and B_F is the base layer image of the fused image.
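In code this step reduces to a single pixelwise blend; a minimal sketch consistent with the formula above:

```python
def fuse_base_layers(B_N, B_V, W_B):
    # B_F = B_N * W_B + B_V * (1 - W_B), computed pixelwise.
    return B_N * W_B + B_V * (1.0 - W_B)
```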
Step 500: a second weight map of enhanced fluorescence information is constructed based on saliency detection. The second weight map includes the final weight map of the fluorescence image and the final weight map of the visible light image. The specific process is as follows:
using a median filter and a mean filter, using the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

constructing the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter. The filter radius of the mean filter is set to 31, and the filter radius of the median filter can be set to 3.
Using the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

normalizing the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image.
Based on the enhancement coefficient matrix and the initial weight maps, the second weight map is constructed (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix.
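The following sketch covers this step. It treats the quoted filter radii as scipy window sizes (an assumption), and because the combining formula for the final weight maps exists only as an equation image, it models the K/P enhancement as a clipped multiplicative boost that keeps the two maps complementary; that last choice, like the default K = 0.5, is purely hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def second_weight_maps(I_N, I_V, P, K=0.5, med_size=3, mean_size=31):
    # Visual saliency: absolute difference between median- and mean-filtered
    # versions of each source image (H = |MF(I) - AF(I)|).
    H_N = np.abs(median_filter(I_N, size=med_size) - uniform_filter(I_N, size=mean_size))
    H_V = np.abs(median_filter(I_V, size=med_size) - uniform_filter(I_V, size=mean_size))
    # Normalize the saliency features into the initial weight maps.
    W_N = H_N / (H_N + H_V + 1e-12)
    # Assumed enhancement of the fluorescence weight with K and P; the final
    # maps are kept complementary so they still sum to one.
    W_N_final = np.clip(W_N * (1.0 + K * P), 0.0, 1.0)
    W_V_final = 1.0 - W_N_final
    return W_N_final, W_V_final
```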
Step 600: fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain the detail layer image of the fused image. Specifically, the formula

D_F = D_N*W̃_N + D_V*W̃_V

is used to fuse the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image to obtain the detail layer image D_F of the fused image; wherein D_N is the detail layer image corresponding to the fluorescence image, D_V is the detail layer image corresponding to the visible light image, W̃_N is the final weight map of the fluorescence image, and W̃_V is the final weight map of the visible light image.
Step 700: reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image. Specifically, F = D_F + B_F is used to reconstruct the fused image F from the base layer image and the detail layer image of the fused image.
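Steps 600 and 700 reduce to a weighted sum followed by an addition, sketched as:

```python
def fuse_detail_and_reconstruct(D_N, D_V, W_N_final, W_V_final, B_F):
    # Detail-layer fusion: D_F = D_N * W_N~ + D_V * W_V~.
    D_F = D_N * W_N_final + D_V * W_V_final
    # Reconstruction: F = D_F + B_F.
    return D_F + B_F
```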
The image fusion method can effectively retain the saliency information of the source images while highlighting the detail information of the fluorescence image, realizing accurate positioning and detail enhancement of the target object to be detected and providing a more comfortable visual effect. The peak signal-to-noise ratio, mutual information, edge retention, and visual information fidelity indices are significantly improved, the fusion complexity is lower, and the fusion efficiency is significantly improved.
Fig. 2 is a schematic structural diagram of the image fusion system of the present invention. As shown in fig. 2, the image fusion system of the present invention includes the following structure:
a source image acquisition module 201, configured to acquire a source image; the source image includes a fluorescence image and a visible light image to be fused.
The two-scale decomposition module 202 is configured to perform two-scale decomposition on the source image by using a gaussian filtering method, so as to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image.
The first weight map construction module 203 is configured to construct a first weight map highlighting fluorescence information using a nonlinear function.
And the base layer image fusion module 204 is configured to fuse the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map, so as to obtain a base layer image of the fused image.
A second weight map construction module 205, configured to construct a second weight map of enhanced fluorescence information based on saliency detection; the second weight map includes the final weight map of the fluorescence image and the final weight map of the visible light image.
And a detail layer image fusion module 206, configured to fuse, according to the second weight map, the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image, so as to obtain a detail layer image of the fused image.
And the reconstruction module 207 is configured to reconstruct the base layer image of the fused image and the detail layer image of the fused image, so as to obtain a fused image.
As a specific embodiment, the two-scale decomposition module 202 in the image fusion system of the present invention specifically includes:
a fluorescence image two-scale decomposition unit, configured to use the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation.
A visible light image two-scale decomposition unit, configured to use the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
As a specific embodiment, the first weight map construction module 203 in the image fusion system of the present invention specifically includes:
a target feature information identifying unit, configured to use the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

to identify target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image.
A first normalization unit, configured to use the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

to normalize the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix.
A nonlinear adjustment unit, configured to use the nonlinear function to adjust the enhancement coefficient matrix by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function, and λ is the enhancement factor.
As a specific embodiment, the second weight map construction module 205 in the image fusion system of the present invention specifically includes:
a visual saliency feature construction unit, configured to use a median filter and a mean filter with the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

to construct the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter.
A second normalization unit, configured to use the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

to normalize the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image.
A final weight map construction unit, configured to construct the second weight map according to the enhancement coefficient matrix and the initial weight maps (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix.
A specific embodiment is provided below to further illustrate the embodiments of the present invention shown in fig. 1 and 2. FIG. 3 is a flow chart of an embodiment of the present invention. As shown in fig. 3, the present embodiment includes the steps of:
step1: and performing two-scale decomposition on the source image by using Gaussian filtering to obtain a fluorescent image, a base layer containing large-scale information in the visible light image and a detail layer image containing small-scale information. The source image, namely the input image, of the step is respectively an infrared image (namely a fluorescent image) and a visible light image which are acquired for the same scene, the visible light image can effectively display the background information of the target scene, the infrared image has the advantage of highlighting the target information, the image size is the same, and the input image is subjected to Gaussian filtering to perform two-scale decomposition on the source image, so that a base layer image containing large-scale information and a detail layer image containing small-scale information are obtained.
Step2: for the fusion rule of the base layer images, construct a weight map highlighting fluorescence information using a nonlinear function, enhancing the relative amount of fluorescence spectrum information by fine adjustment, so as to obtain the base layer image of the fused image. The specific steps are as follows:
b1: and identifying target characteristic information of the fluorescent base layer image by the base layer image after the source image is subjected to two-scale decomposition, and obtaining the fluorescent information characteristic image.
B2: and after obtaining the fluorescence information characteristic image, normalizing the characteristic image to obtain a coefficient enhancement matrix.
B3: and after obtaining the coefficient enhancement matrix, carrying out nonlinear function adjustment on the enhancement coefficient matrix to obtain an initial weight map of the base layer fusion.
B4: and weighting the base layer images of the fluorescent image and the visible light image through a base layer weight graph to obtain a fused base layer image.
Step3: for the fusion rule of the detail layer images, construct a fusion weight map of enhanced fluorescence information using saliency detection and the coefficient enhancement matrix, and obtain the detail layer image of the fused image through weighted fusion.
C1: construct saliency images from the source images using median filtering and mean filtering, obtaining the visual saliency features that guide fusion of the detail layer images produced by the two-scale decomposition.
C2: after obtaining the saliency images of the fluorescence image and the visible light image, construct the initial weight maps by normalization.
C3: after obtaining the initial weight maps, construct the enhanced weight map using the coefficient enhancement matrix and the initial weight maps.
C4: weight the detail layer images of the fluorescence image and the visible light image with the detail layer weight map to obtain the fused detail layer image.
Step4: obtain the fused image by reconstructing the fused base layer image and the fused detail layer image, as sketched end to end below.
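A hypothetical end-to-end driver built from the helper sketches above; image I/O and registration are omitted, and the inputs are assumed to be same-size, single-channel arrays:

```python
import numpy as np

def fuse_images(I_N: np.ndarray, I_V: np.ndarray) -> np.ndarray:
    """End-to-end two-scale fusion of a fluorescence image I_N and a visible
    light image I_V, following Steps 1-4 above."""
    I_N = I_N.astype(np.float64)
    I_V = I_V.astype(np.float64)
    # Step 1: two-scale decomposition of both source images.
    B_N, D_N = two_scale_decompose(I_N)
    B_V, D_V = two_scale_decompose(I_V)
    # Step 2: base-layer weight map and base-layer fusion.
    W_B, P = first_weight_map(B_N, B_V)
    B_F = fuse_base_layers(B_N, B_V, W_B)
    # Step 3: saliency-based final weight maps for the detail layers.
    W_N_final, W_V_final = second_weight_maps(I_N, I_V, P)
    # Step 4: detail-layer fusion and reconstruction of the fused image.
    return fuse_detail_and_reconstruct(D_N, D_V, W_N_final, W_V_final, B_F)
```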
Compared with other methods, the fused images of the invention contain more detail and higher definition, with a satisfactory running speed. The verification uses two groups of images as source images, and the results are measured with traditional image quality evaluation criteria: peak signal-to-noise ratio (PSNR), mutual information (MI), edge retention (Q^{AB/F}), visual information fidelity for fusion (VIFF), and information entropy (IE). The proposed method (PROPOSED) is compared with the dual-tree complex wavelet transform (DTCWT), discrete wavelet transform (DWT), fast filtering image fusion (FFIF), the Laplacian pyramid algorithm (LP), the non-subsampled contourlet transform (NSCT), the low-pass pyramid transform (RP), and two-scale fusion based on saliency detection (TSIFVS). The verification results are shown in Tables 1, 2 and 3: the invention effectively retains the saliency information of the source images while highlighting the detail information of the fluorescence image, realizing accurate positioning and detail enhancement of the target object to be detected and providing a more comfortable visual effect. The peak signal-to-noise ratio, mutual information, edge retention, and visual information fidelity indices are significantly improved, the fusion complexity is lower, and the fusion efficiency is significantly improved.
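As one example of these criteria, PSNR follows directly from the mean squared error; the sketch below computes it between a source image and the fused result, one common convention when no ground-truth reference exists (the default peak of 255 assumes 8-bit data):

```python
import numpy as np

def psnr(source: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
    # PSNR = 10 * log10(peak^2 / MSE); larger values indicate less distortion.
    mse = np.mean((source.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```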
Table 1: Evaluation of objective indicators for the first group of fused images (values given as an image in the original; not reproduced here)

Table 2: Evaluation of objective indicators for the second group of fused images (values given as an image in the original; not reproduced here)

Table 3: Running-time comparison of the fusion methods (values given as an image in the original; not reproduced here)
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The principles and embodiments of the present invention have been described herein with reference to specific examples, which are intended only to help understand the method of the present invention and its core ideas; modifications made by those of ordinary skill in the art in light of these teachings also fall within the scope of the invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (6)

1. An image fusion method, comprising:
acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
performing two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
the method for performing two-scale decomposition on the source image by adopting the Gaussian filtering method to obtain a base layer image and a detail layer image of the source image specifically comprises the following steps:
using the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

performing two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation;
using the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

performing two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image;
constructing a first weight map highlighting fluorescence information using a nonlinear function;
fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of the fused image;
constructing a second weight map of enhanced fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
the construction of the second weight map for enhancing fluorescence information based on saliency detection specifically comprises the following steps:
using a median filter and a mean filter, using the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

constructing the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;
using the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

normalizing the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;
based on the enhancement coefficient matrix and the initial weight maps, constructing the second weight map (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix;
fusing the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image according to the second weight image to obtain a detail layer image of the fused image;
reconstructing the base layer image of the fusion image and the detail layer image of the fusion image to obtain the fusion image.
2. The image fusion method according to claim 1, wherein the constructing the first weight map with the highlighted fluorescence information using the nonlinear function specifically includes:
using the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

identifying target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image;
using the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

normalizing the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;
using the nonlinear function, adjusting the enhancement coefficient matrix by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function, and λ is the enhancement factor.
3. The image fusion method according to claim 1, wherein the fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain the base layer image of the fused image specifically includes:
using the formula B_F = B_N*W_B + B_V*(1 - W_B), fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image to obtain the base layer image of the fused image; wherein B_N is the base layer image corresponding to the fluorescence image, B_V is the base layer image corresponding to the visible light image, W_B is the first weight map, and B_F is the base layer image of the fused image.
4. The image fusion method according to claim 1, wherein the fusing the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image according to the second weight map to obtain a detail layer image of the fused image specifically includes:
using the formula

D_F = D_N*W̃_N + D_V*W̃_V

fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image to obtain the detail layer image D_F of the fused image; wherein D_N is the detail layer image corresponding to the fluorescence image, D_V is the detail layer image corresponding to the visible light image, W̃_N is the final weight map of the fluorescence image, and W̃_V is the final weight map of the visible light image.
5. An image fusion system, comprising:
the source image acquisition module is used for acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
the two-scale decomposition module is used for carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
the two-scale decomposition module specifically comprises:
a fluorescence image two-scale decomposition unit, configured to use the formula

B_N = G(r, σ)*I_N,  D_N = I_N - B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and detail layer image corresponding to the fluorescence image; wherein I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filtering window, and σ is the standard deviation;
a visible light image two-scale decomposition unit, configured to use the formula

B_V = G(r, σ)*I_V,  D_V = I_V - B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and detail layer image corresponding to the visible light image; wherein I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image;
the first weight map construction module is used for constructing a first weight map which highlights fluorescence information by using a nonlinear function;
the base layer image fusion module is used for fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight image to obtain a base layer image of the fused image;
the second weight map construction module is used for constructing a second weight map of enhanced fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
the second weight map construction module specifically includes:
a visual saliency feature construction unit, configured to use a median filter and a mean filter with the formula

H_N = |MF(I_N) - AF(I_N)|,  H_V = |MF(I_V) - AF(I_V)|

to construct the visual saliency features of the fluorescence image and the visible light image; wherein H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;
a second normalization unit, configured to use the formula

W_N = H_N / (H_N + H_V),  W_V = H_V / (H_N + H_V)

to normalize the visual saliency features of the fluorescence image and of the visible light image to obtain the initial weight maps; wherein W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;
a final weight map construction unit, configured to construct the second weight map according to the enhancement coefficient matrix and the initial weight maps (the combining formula is given as an equation image in the original); wherein W̃_N is the final weight map of the fluorescence image, W̃_V is the final weight map of the visible light image, K is the fluorescence information enhancement coefficient, and P is the enhancement coefficient matrix;
the detail layer image fusion module is used for fusing the detail layer image corresponding to the fluorescent image and the detail layer image corresponding to the visible light image according to the second weight image to obtain a detail layer image of the fused image;
and the reconstruction module is used for reconstructing the base layer image of the fusion image and the detail layer image of the fusion image to obtain the fusion image.
6. The image fusion system of claim 5, wherein the first weight map construction module specifically comprises:
a target feature information identifying unit, configured to use the formula

R(x, y) = |B_N(x, y)| - |B_V(x, y)|

to identify target feature information in the base layer image corresponding to the fluorescence image to obtain the fluorescence information feature image R; wherein R(x, y) is the pixel value of the pixel point at (x, y) in the fluorescence information feature image, B_N(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the fluorescence image, and B_V(x, y) is the pixel value of the pixel point at (x, y) in the base layer image corresponding to the visible light image;
a first normalization unit, configured to use the formula

P(x, y) = (R(x, y) - min(R)) / (max(R) - min(R))

to normalize the fluorescence information feature image to obtain the enhancement coefficient matrix P; wherein P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;
a nonlinear adjustment unit, configured to use the nonlinear function to adjust the enhancement coefficient matrix by the formula W_B = G(r, σ)*S_λ(P) to obtain the first weight map W_B; wherein G(r, σ) is a Gaussian filter and S_λ is the nonlinear function (its explicit form is given as an equation image in the original), x is the argument of the nonlinear function, and λ is the enhancement factor.
CN202010036038.XA 2020-01-14 2020-01-14 Image fusion method and system Active CN111223069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010036038.XA CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010036038.XA CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Publications (2)

Publication Number Publication Date
CN111223069A CN111223069A (en) 2020-06-02
CN111223069B true CN111223069B (en) 2023-06-02

Family

ID=70829558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010036038.XA Active CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Country Status (1)

Country Link
CN (1) CN111223069B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652832B (en) * 2020-07-09 2023-05-12 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111815549A (en) * 2020-07-09 2020-10-23 湖南大学 Night vision image colorization method based on guided filtering image fusion
CN111968105A (en) * 2020-08-28 2020-11-20 南京诺源医疗器械有限公司 Method for detecting salient region in medical fluorescence imaging
CN112037216B (en) * 2020-09-09 2022-02-15 南京诺源医疗器械有限公司 Image fusion method for medical fluorescence imaging system
CN112200735A (en) * 2020-09-18 2021-01-08 安徽理工大学 Temperature identification method based on flame image and control method of low-concentration gas combustion system
CN112419212B (en) * 2020-10-15 2024-05-17 卡乐微视科技(云南)有限公司 Infrared and visible light image fusion method based on side window guide filtering
CN112801927B (en) * 2021-01-28 2022-07-19 中国地质大学(武汉) Infrared and visible light image fusion method based on three-scale decomposition
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN114283486B (en) * 2021-12-20 2022-10-28 北京百度网讯科技有限公司 Image processing method, model training method, image processing device, model training device, image recognition method, model training device, image recognition device and storage medium
CN115330624A (en) * 2022-08-17 2022-11-11 华伦医疗用品(深圳)有限公司 Method and device for acquiring fluorescence image and endoscope system
CN118229555B (en) * 2024-05-23 2024-07-23 山东威高集团医用高分子制品股份有限公司 Image fusion method, device, equipment and computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361281A (en) * 2016-08-31 2017-02-01 北京数字精准医疗科技有限公司 Fluorescent real-time imaging and fusing method and device
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
CN108052988A (en) * 2018-01-04 2018-05-18 常州工学院 Guiding conspicuousness image interfusion method based on wavelet transformation
CN109509164A (en) * 2018-09-28 2019-03-22 洛阳师范学院 A kind of Multisensor Image Fusion Scheme and system based on GDGF
CN109509163A (en) * 2018-09-28 2019-03-22 洛阳师范学院 A kind of multi-focus image fusing method and system based on FGF
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110490914A (en) * 2019-07-29 2019-11-22 广东工业大学 It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhiqiang Zhou et al., "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters", Information Fusion, 2016, pp. 15-26. *
Xu Danping et al., "Infrared and visible image fusion based on bilateral filtering and NSST", Computer Measurement & Control, Vol. 26, No. 4, 2018, pp. 201-204. *

Also Published As

Publication number Publication date
CN111223069A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN111223069B (en) Image fusion method and system
Zhang et al. Multifocus image fusion using the nonsubsampled contourlet transform
CN103854267B (en) A kind of image co-registration based on variation and fractional order differential and super-resolution implementation method
CN105139367A (en) Visible light polarization image fusion method based on non-subsampled shear wave
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN109447930B (en) Wavelet domain light field full-focusing image generation algorithm
CN106056564B (en) Edge clear image interfusion method based on joint sparse model
CN110097617B (en) Image fusion method based on convolutional neural network and significance weight
Mao et al. Multi-directional laplacian pyramid image fusion algorithm
Arivazhagan et al. A modified statistical approach for image fusion using wavelet transform
CN115330653A (en) Multi-source image fusion method based on side window filtering
CN104766290B (en) A kind of Pixel Information estimation fusion method based on quick NSCT
CN106384341B (en) A kind of passive image enchancing method of millimeter wave based on target polarized radiation characteristic
CN103400360A (en) Multi-source image fusing method based on Wedgelet and NSCT (Non Subsampled Contourlet Transform)
Jia et al. Research on the decomposition and fusion method for the infrared and visible images based on the guided image filtering and Gaussian filter
Ren et al. Fusion of infrared and visible images based on discrete cosine wavelet transform and high pass filter
Avcı et al. MFIF-DWT-CNN: Multi-focus ımage fusion based on discrete wavelet transform with deep convolutional neural network
Pang et al. Infrared and visible image fusion based on double fluid pyramids and multi-scale gradient residual block
Ren et al. Multiresolution fusion of Pan and MS images based on the Curvelet transform
Shinde et al. Analysisof Biomedical Image Using Wavelet Transform
CN114187210B (en) Multi-mode dense fog removing method based on visible light-far infrared image
Xiao et al. MOFA: A novel dataset for Multi-modal Image Fusion Applications
Budhiraja et al. Effect of pre-processing on MST based infrared and visible image fusion
CN114897751A (en) Infrared and visible light image perception fusion method based on multi-scale structural decomposition
Natarajan A review on underwater image enhancement techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant