CN113192049A - Visible light and infrared image fusion method based on LatLRR and Retinex enhancement - Google Patents
- Publication number
- CN113192049A (application CN202110535236.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- images
- detail
- retinex
- Prior art date
- Legal status
- Granted
Classifications
- G06T7/0002 (Image analysis: inspection of images, e.g. flaw detection)
- G06F17/16 (Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization)
- G06T5/50 (Image enhancement or restoration using two or more images, e.g. averaging or subtraction)
- G06T2207/20221 (Image fusion; image merging)
Abstract
The invention discloses a visible light and infrared image fusion method based on LatLRR and Retinex enhancement. The method comprises the following steps: preprocessing a visible light image and an infrared image; obtaining a base image and a plurality of detail images through multi-level latent low-rank decomposition; performing Retinex enhancement on the base images and then fusing them to obtain a fused base image; reconstructing the detail part of each level into a detail image and applying multi-visual-feature weighted fusion to each pair of detail images; and finally adding the fused base image and the fused multi-level detail images to obtain the final fusion image. The fusion result is rich in detail information, the scene targets are clearer, and the image contrast is improved. The proposed method better preserves the detail information of the source images while increasing the information content of the fused image, thereby improving fusion quality.
Description
Technical Field
The invention relates to the field of image fusion, in particular to a visible light and infrared image fusion method based on LatLRR and Retinex enhancement.
Background
Image fusion technology can combine a visible light image and an infrared image into a composite image better suited to human observation or computer vision tasks. Infrared and visible image fusion therefore has promising applications in fields such as military reconnaissance, medical diagnosis, and artificial intelligence.
At present there are many algorithms for fusing infrared and visible light images, which can be grouped into three classes: spatial-domain methods, transform-domain methods, and deep-learning methods. Spatial-domain fusion operates directly on the source images and is fast, but tends to lose detail information from the source images. Transform-domain methods include fusion based on multi-scale decomposition and on sparse representation. Multi-scale methods preserve source information effectively and yield prominent detail, but alter the structural information of the source images to some extent; sparse-representation methods fuse well but remain limited in their ability to capture global structure and in detail richness. Deep-learning methods require network training in advance before feature extraction and fusion can be performed; their fusion performance depends heavily on parameters and the computational cost is large. Recently, researchers have proposed a fusion method based on multi-level decomposition with latent low-rank representation (MDLatLRR), which extracts both global and local structure information from the source images; the fused images have rich detail, high definition, and good visibility. However, when MDLatLRR is applied to low-illumination images the result inherits the blur of the originals, and the fusion of the salient parts does not fully exploit the visually salient features of the detail images, so the fusion effect is poor.
Disclosure of Invention
The invention aims to provide a visible light and infrared image fusion method based on LatLRR and Retinex enhancement that addresses the shortcomings of current visible light and infrared fusion methods. The illumination of the base images is improved by a Retinex image enhancement method, and the fusion weights of the detail images are determined from several visually salient features of the detail images reconstructed from the salient parts, thereby improving the fusion effect.
The technical scheme adopted by the invention is as follows:
preprocess the visible light image I1 and the infrared image I2 into image data P(I1) and P(I2) suitable for low-rank decomposition, where P(Ir) (r = 1, 2) denotes the result of dividing image Ir into M sub-blocks of size n × n and rearranging each sub-block into one column of data;
apply k-level latent low-rank decomposition to the visible light image data P(I1) to obtain k detail components D_1^j (j = 1, …, k) and a base component B_1, and to the infrared image data P(I2) to obtain k detail components D_2^j and a base component B_2. Here D_r^j denotes the detail component of P(Ir) obtained at the j-th decomposition level (r = 1, 2), and each column of D_r^j is rearranged into an n × n matrix and the blocks reassembled into a full detail image;
apply multi-scale Retinex enhancement followed by contrast stretching to the base images B_1 and B_2, then fuse the enhanced and stretched base images to obtain the fused base image B_F;
Preferably, the image contrast stretching method may employ global linear stretching, piecewise linear stretching, exponential transformation stretching, logarithmic transformation stretching, or the like;
preferably, the basic image fusion strategy can adopt average fusion, weighted fusion, absolute value maximum fusion, least square fusion, weighted least square fusion and the like;
preferably, the weighted fusion method refers to a fusion method in which fusion weights of two images are determined first and then weighted summation is performed, and the fusion method includes pixel weighted fusion and regional energy weighted fusion.
Step 4: perform multi-visual-feature weighted fusion on the detail images.
Each pair of detail images D_1^j and D_2^j is fused by the multi-visual-feature weighted fusion method to obtain the fused detail image D_F^j(x, y), where (x, y) denotes the pixel coordinates and j = 1, …, k;
the multi-vision comprehensive weighting fusion method integrates three indexes of the definition AF, the corner significance EA and the local contrast LC of an image to calculate fusion weight. Preferably, the sharpness is computed using a modified laplacian energy sum, and the corner saliency is computed using a linear structure tensor matrix.
The local contrast of the image is calculated as follows:
where Ω_xy is the s × s region of image I centered at I_xy, Ī denotes the mean of the pixels I_pq in that region, β is a constant, and I_pq denotes a pixel point in the s × s region.
The fusion weight calculation formula is as follows:
where w_AF denotes the weight coefficient computed from sharpness, w_EA the weight coefficient computed from corner saliency, and w_LC the weight coefficient computed from local contrast; AF_1^j(x, y) and AF_2^j(x, y) denote the local sharpness of detail images D_1^j and D_2^j at coordinates (x, y); EA_1^j(x, y) and EA_2^j(x, y) their corner saliency; and LC_1^j(x, y) and LC_2^j(x, y) their local contrast.
Step 5: add the results of steps 3 and 4 to obtain the final fusion result F = B_F + Σ_{j=1…k} D_F^j.
further, the method for calculating the local definition of the image in step 4 specifically includes the following steps:
step 4a-1) calculates the approximate sum of laplacian energies for each position in the image I:
wherein N is1And M1Is a positive integer.
Further, the corner saliency in step 4 is calculated with the linear structure tensor method, as follows:
Step 4b-1) compute the linear structure tensor matrix at each position of image I:
where K_σ is a Gaussian kernel with standard deviation σ and ∗ denotes convolution.
Step 4b-2) compute the edge saliency.
First, compute the eigenvalues λ_xy1 and λ_xy2 of the linear structure tensor matrix J_σ(x, y):
then construct the edge saliency matrix M and the corner saliency matrix N:
M(x, y) = |λ_xy1 - λ_xy2|    (13)
N(x, y) = |λ_xy1 + λ_xy2|    (14)
Normalize M and N to obtain M̃ and Ñ, and combine them linearly to obtain the edge saliency:
where k ∈ [0, 1] is a constant.
The invention has the following beneficial effects:
1. Retinex enhancement is applied to the base images of the latent low-rank representation before they are fused, which improves the illumination of the base images and reduces the influence of blur in the original images on the fusion result;
2. The detail images are reconstructed first, and the fusion weights are then determined from several visually salient features of the detail images reconstructed from the salient parts, which better matches human visual observation and improves the fusion effect;
3. The proposed fusion method better preserves the detail information of the images while increasing their information content, thereby improving fusion quality.
Drawings
FIG. 1 is a flowchart of the infrared and visible image fusion method combining latent low-rank decomposition with Retinex enhancement according to the present invention;
FIG. 2 shows the infrared and visible images to be fused, where (a) is the visible image and (b) is the infrared image;
FIG. 3 shows the fused images for decomposition levels 1 to 4;
FIG. 4 shows the fusion quality indexes at different decomposition levels.
Detailed Description
The invention will be further elucidated and described with reference to the drawings and the detailed description. The drawings are for illustrative purposes only and are not to be construed as limiting the patent. The technical features of the embodiments of the present invention can be combined correspondingly without mutual conflict.
A preferred embodiment of the present invention will be described in detail below, and as shown in FIG. 1, the present invention comprises the following steps:
step 1: reading a visible light image I to be fused1And an infrared image I2Performing a preprocessing operation of rendering the image Ir(r 1,2) are equally divided into M subblocks of size n × n, and each image subblock is rearranged into a column of data to obtain a data matrix P (I) suitable for low-rank decomposition1) And P (I)2);
Step 2: apply multi-level latent low-rank decomposition to the data matrices P(I1) and P(I2) to obtain a base image and a plurality of detail images.
The k-level latent low-rank decomposition of the visible light image data P(I1) is implemented as follows:
Level 1: let X1 = P(I1). The latent low-rank decomposition of X1 yields the saliency matrix D1 = L·X1, where L is the saliency coefficient matrix learned by LatLRR; D1 is reconstructed into the level-1 saliency image, and the base matrix is B1 = X1 - D1.
The subsequent levels j (j ≥ 2) proceed in the same way: let Xj = B_{j-1}; the decomposition yields Dj = L·Xj and the base matrix Bj = Xj - Dj, where each column of Dj is rearranged into an n × n matrix and the blocks reassembled into the full saliency image.
In the same way, k-level latent low-rank decomposition of the infrared image data P(I2) yields k saliency images and a low-rank base image.
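Given a projection matrix L already learned offline by LatLRR (learning L itself requires solving the LatLRR optimization problem and is not shown here), the k-level decomposition above reduces to repeated matrix products. A minimal sketch under that assumption:

```python
import numpy as np

def multilevel_decompose(P, L, k):
    # P : (n*n, M) data matrix from the block-to-column preprocessing.
    # L : (n*n, n*n) saliency/projection matrix assumed precomputed by LatLRR.
    # Returns the k detail (saliency) matrices and the final base matrix.
    details = []
    X = P
    for _ in range(k):
        D = L @ X          # salient (detail) part at this level: Dj = L * Xj
        X = X - D          # base part Bj = Xj - Dj feeds the next level
        details.append(D)
    return details, X
```

By construction the base matrix plus all detail matrices exactly reassemble the input, which matches the additive reconstruction used in step 5.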
As shown in FIG. 2, step 3: fuse the base images after enhancement and stretching.
Apply multi-scale Retinex enhancement and then contrast stretching to the base images B_1 and B_2, and fuse the enhanced and stretched base images to obtain the fused base image B_F. The specific implementation is as follows:
3-1) Estimate the illumination component.
According to Retinex theory, an image can be decomposed into an illumination component and a reflection component; the core idea is to remove or reduce the illumination component while preserving the reflection characteristics as far as possible, thereby restoring the original appearance of the objects in the image. In this example, Gaussian filtering is used to estimate the illumination component of the base image; different scales give different illumination components, whose weighted average yields the illumination component I_Ref.
3-2) Linearly stretch the illumination component I_Ref.
To improve the contrast of the image, the illumination component I_Ref is stretched linearly. In this example the stretching is performed as follows:
I'(x, y) = (I_Ref(x, y) - Imin) / (Imax - Imin) × (MAX - MIN) + MIN,
where Imin and Imax are the minimum and maximum values of the illumination component, and MIN and MAX are the minimum and maximum values of the gray range to be stretched to.
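Steps 3-1 and 3-2 can be sketched with SciPy's Gaussian filter as below. The scales, the equal weights, and the log-domain Retinex formulation are common choices assumed for illustration rather than values specified by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_enhance(img, sigmas=(15, 80, 250), eps=1e-6):
    # Multi-scale Retinex: subtract the log of the Gaussian-smoothed
    # illumination estimate at several scales and average the results
    # (equal weights assumed here).
    img = img.astype(np.float64)
    r = np.zeros_like(img)
    for s in sigmas:
        r += np.log(img + eps) - np.log(gaussian_filter(img, s) + eps)
    return r / len(sigmas)

def linear_stretch(img, MIN=0.0, MAX=255.0):
    # Map [img.min(), img.max()] linearly onto the target range [MIN, MAX].
    imin, imax = img.min(), img.max()
    return (img - imin) / (imax - imin + 1e-12) * (MAX - MIN) + MIN
```

The stretch implements the linear mapping given above; the small epsilon in the denominator only guards against a constant input.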
3-3) Base image fusion.
The enhanced and stretched base images are fused by region energy weighted fusion to obtain the fused base image B_F. The region energy weighted fusion proceeds as follows:
(1) First compute, for each base image and with a sliding window, the region energy matrix centered at pixel (m, n):
E(m, n) = Σ_{x=-X…X} Σ_{y=-Y…Y} ω(x, y) · B(m + x, n + y)²,
where X and Y are the maximum row and column offsets within the region window (the window size is typically 3 × 3) and ω is the window coefficient. Moving the central pixel over the whole image yields the region energy matrix.
(2) Compute the region energy ratio matrix E_ratio from the region energies in (1), i.e. the ratio of the infrared region energy to the visible light region energy.
(3) Determine the fusion weight from the region energy ratio. Two thresholds th1 and th2 are set, with th1 < th2. If E_ratio < th1, the region energy of the infrared image is much smaller than that of the visible image and can be neglected: the infrared fusion weight is set to 0 and the visible weight to 1. If th1 < E_ratio < th2, the region energies of the two images are comparable, and the fusion weight is determined by the weighting formula. If E_ratio > th2, the region energy of the infrared image is much larger than that of the visible image: the visible weight is set to 0 and the infrared weight to 1.
(4) Finally, the base part of the fused image is obtained by weighted fusion of the base parts of the infrared and visible light images.
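The region energy weighted fusion might be sketched as follows. The 3 × 3 uniform window coefficients, the threshold values, and the middle-range weight E_ir / (E_vis + E_ir) are illustrative assumptions, since the patent leaves the exact middle-range formula to its figures:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def region_energy_fusion(B_vis, B_ir, th1=0.3, th2=3.0):
    # Region energy: sum of squared pixels in a 3x3 window
    # (uniform window coefficients assumed).
    E_vis = uniform_filter(B_vis ** 2, size=3) * 9
    E_ir = uniform_filter(B_ir ** 2, size=3) * 9
    ratio = E_ir / (E_vis + 1e-12)           # infrared-to-visible energy ratio
    # Thresholded weight for the infrared base image.
    w_ir = np.where(ratio < th1, 0.0,
           np.where(ratio > th2, 1.0,
                    E_ir / (E_vis + E_ir + 1e-12)))
    return (1.0 - w_ir) * B_vis + w_ir * B_ir
```

Where the infrared region energy vanishes the visible base image passes through unchanged, matching case (3) above.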
As shown in FIGS. 3 and 4, step 4: multi-visual-feature weighted fusion of each pair of detail images.
Each pair of detail images D_1^j and D_2^j is fused by the multi-visual-feature weighted fusion method to obtain the fused detail image D_F^j (j = 1, …, k), where (x, y) denotes the pixel coordinates. The method combines three indexes to compute the fusion weight: the sharpness AF, the corner saliency EA, and the local contrast LC of the image.
The sharpness is computed using the sum of modified Laplacian energy, as follows:
Step 4a-1) compute the approximate sum of Laplacian energies at each position of image I;
Step 4a-2) compute the sharpness map of the image,
where N1 and M1 are positive integers.
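A sketch of the sum-modified-Laplacian sharpness map; the wrap-around boundary handling via `np.roll` and the step size of 1 (N1 = M1 = 1) are simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, win=3):
    # Modified Laplacian at each pixel (step size 1 assumed):
    # |2I - I(left) - I(right)| + |2I - I(up) - I(down)|,
    # then summed over a win x win window (sum-modified-Laplacian).
    I = img.astype(np.float64)
    ml = (np.abs(2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0))
        + np.abs(2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)))
    return uniform_filter(ml, size=win) * win * win
```

A flat image has zero modified Laplacian everywhere, so its sharpness map is identically zero.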
The edge saliency is computed with the linear structure tensor matrix, as follows:
Step 4b-1) compute the linear structure tensor matrix at each position of image I,
where K_σ is a Gaussian kernel with standard deviation σ and ∗ denotes convolution.
Step 4b-2) compute the edge saliency.
Compute the eigenvalues λ_xy1 and λ_xy2 of J_σ(x, y), then construct the edge saliency matrix M and the corner saliency matrix N:
M(x, y) = |λ_xy1 - λ_xy2|
N(x, y) = |λ_xy1 + λ_xy2|
Normalize M and N to obtain M̃ and Ñ, and combine them linearly to obtain the edge saliency,
where k ∈ [0, 1] is a constant.
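A sketch of the structure-tensor-based edge saliency. The gradient operator, the value of σ, the max-normalization, and the combination k·M̃ + (1 - k)·Ñ are assumptions consistent with, but not dictated by, the text above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_saliency(img, sigma=1.0, k=0.5):
    I = img.astype(np.float64)
    Ix = np.gradient(I, axis=1)
    Iy = np.gradient(I, axis=0)
    # Linear structure tensor: Gaussian smoothing (K_sigma) of the
    # gradient outer-product entries.
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    # Closed-form eigenvalues of the 2x2 tensor at each pixel.
    disc = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
    l1 = 0.5 * (Jxx + Jyy + disc)
    l2 = 0.5 * (Jxx + Jyy - disc)
    M = np.abs(l1 - l2)                        # edge saliency matrix
    N = np.abs(l1 + l2)                        # corner saliency matrix
    norm = lambda A: A / (A.max() + 1e-12)     # normalise to [0, 1]
    return k * norm(M) + (1 - k) * norm(N)     # linear combination, k in [0, 1]
```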
The local contrast of the image is calculated according to the formula,
where Ω_xy is the s × s region of image I centered at I_xy, Ī denotes the mean of the pixels I_pq in that region, and β is a constant.
The fusion weight is then computed by combining the weight coefficients derived from the sharpness, corner saliency, and local contrast of the two detail images.
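Since the exact weight combination formula is not reproduced in this text, the following sketch assumes each cue contributes a simple ratio weight and the three are averaged; this is an illustrative form, not the patent's formula:

```python
import numpy as np

def detail_fusion_weight(af1, af2, ea1, ea2, lc1, lc2, eps=1e-12):
    # Per-pixel weight for the first detail image. Each visual cue
    # (sharpness AF, corner saliency EA, local contrast LC) gives a
    # ratio weight; the three are averaged (assumed combination).
    w_af = af1 / (af1 + af2 + eps)
    w_ea = ea1 / (ea1 + ea2 + eps)
    w_lc = lc1 / (lc1 + lc2 + eps)
    return (w_af + w_ea + w_lc) / 3.0

def fuse_details(D1, D2, w1):
    # Weighted sum of a pair of detail images.
    return w1 * D1 + (1.0 - w1) * D2
```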
Step 5: the final fusion result is F = B_F + Σ_{j=1…k} D_F^j.
the invention also provides a visible light and infrared image fusion system based on LatLRR and Retinex enhancement, which comprises an image preprocessing module, an image decomposition module, a basic image enhancement stretching fusion module, a detail image weighting fusion module and an image fusion module;
the image preprocessing module is specifically realized as follows: for visible light image I1And an infrared image I2Carrying out pretreatment: image data P (I) processed as a low rank decomposition1) And P (I)2) Said P (I)r) (r is 1,2) denotes the image IrDividing the image into M subblocks with the size of n multiplied by n, and rearranging each image subblock into a row of data;
the image decomposition module is specifically realized as follows: the multi-stage potential low-rank decomposition obtains a base image and a plurality of detail images, visible light image data P (I)1) K detail images are obtained through k-level potential low-rank decompositionAnd a base imageInfrared image data P (I)2) K detail images are obtained through k-level potential low-rank decomposition And a base imageThe above-mentionedRepresents P (I)r) Detail obtained after j-th decomposition, where r is 1, 2;show thatRearranging each row of data into an n multiplied by n matrix and then reconstructing the matrix into a whole detail diagram;
the basic image enhancement stretching fusion module is specifically realized as follows: for the basic imageAndperforming multi-scale Retinex enhancement, then performing contrast stretching, fusing the basic image after the enhancement stretching processing to obtain a fused basic image
The detail image weighting and fusing module is specifically realized as follows: for each pair of detail imagesAndfusing by a multi-vision comprehensive weighting fusion method to obtain a fused detail image
The image fusion module is specifically realized as follows: and adding the output of the basic image enhancement stretching fusion module and the output result of the basic image enhancement stretching fusion module to obtain a final fusion result.
Claims (7)
1. The visible light and infrared image fusion method based on LatLRR and Retinex enhancement is characterized by comprising the following steps:
step 1, preprocessing images to generate training data suitable for low-rank decomposition:
for visible light image I1 and infrared image I2, preprocessing them into image data P(I1) and P(I2) suitable for low-rank decomposition, where P(Ir) (r = 1, 2) denotes dividing image Ir into M sub-blocks of size n × n and rearranging each sub-block into a column of data;
step 2, obtaining a base image and a plurality of detail images by multi-level latent low-rank decomposition:
the visible light image data P(I1) yields k detail components D_1^j and a base component B_1 through k-level latent low-rank decomposition, and the infrared image data P(I2) yields k detail components D_2^j and a base component B_2; D_r^j denotes the detail component of P(Ir) obtained after the j-th level of decomposition (r = 1, 2), and each column of D_r^j is rearranged into an n × n matrix and reassembled into a full detail image;
step 3, fusing the base images after enhancement and stretching;
step 4, performing multi-visual-feature weighted fusion on the detail images;
and step 5, adding the results of steps 3 and 4 to obtain the final fusion result.
2. The LatLRR and Retinex enhanced visible and infrared image fusion method according to claim 1, characterized in that step 3 is implemented as follows:
for the base images B_1 and B_2, performing multi-scale Retinex enhancement and then contrast stretching, and fusing the enhanced and stretched base images to obtain the fused base image B_F;
The image contrast stretching method adopts global linear stretching, piecewise linear stretching, exponential transformation stretching or logarithmic transformation stretching;
the basic image fusion strategy adopts average fusion, weighted fusion, absolute value maximum fusion, least square fusion or weighted least square fusion;
the weighted fusion method is a fusion method which firstly determines fusion weights of two images and then performs weighted summation, and comprises pixel weighted fusion and regional energy weighted fusion.
3. The LatLRR and Retinex enhanced visible and infrared image fusion method according to claim 1 or 2, characterized in that step 4 is implemented as follows:
for each pair of detail images D_1^j and D_2^j, fusing by the multi-visual-feature weighted fusion method to obtain the fused detail image D_F^j.
4. The LatLRR and Retinex enhanced visible and infrared image fusion method of claim 3, wherein the multi-visual-feature weighted fusion method combines three indexes to compute the fusion weight: the image sharpness AF, the edge saliency EA, and the local contrast LC;
the local contrast of the image is calculated as follows:
where Ω_xy is the s × s region of image I centered at I_xy, Ī denotes the mean of the pixels I_pq in that region, β is a constant, and I_pq denotes a pixel point in the s × s region;
the fusion weight calculation formula is as follows:
where w_AF, w_EA, and w_LC denote the weight coefficients computed from sharpness, corner saliency, and local contrast respectively; AF_1^j(x, y) and AF_2^j(x, y) denote the local sharpness of detail images D_1^j and D_2^j at coordinates (x, y); EA_1^j(x, y) and EA_2^j(x, y) their corner saliency; and LC_1^j(x, y) and LC_2^j(x, y) their local contrast.
5. The LatLRR and Retinex enhanced visible light and infrared image fusion method according to claim 4, characterized in that the local sharpness of the image in step 4 is calculated as follows:
step 4a-1) compute the approximate sum of Laplacian energies at each position of image I;
step 4a-2) compute the sharpness map of the image,
where N1 and M1 are positive integers.
6. The LatLRR and Retinex enhanced visible light and infrared image fusion method according to claim 5, wherein the corner saliency in step 4 is calculated with the linear structure tensor method, specifically:
step 4b-1) compute the linear structure tensor matrix at each position of image I,
where K_σ is a Gaussian kernel with standard deviation σ and ∗ denotes convolution;
step 4b-2) compute the edge saliency:
first, compute the eigenvalues λ_xy1 and λ_xy2 of the linear structure tensor matrix J_σ(x, y);
secondly, construct the edge saliency matrix M and the corner saliency matrix N:
M(x, y) = |λ_xy1 - λ_xy2|    (12)
N(x, y) = |λ_xy1 + λ_xy2|    (13)
finally, normalize M and N to obtain M̃ and Ñ, and combine them linearly to obtain the edge saliency,
where k ∈ [0, 1] is a constant.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110535236.5A CN113192049B (en) | 2021-05-17 | 2021-05-17 | Visible light and infrared image fusion method based on LatLRR and Retinex enhancement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110535236.5A CN113192049B (en) | 2021-05-17 | 2021-05-17 | Visible light and infrared image fusion method based on LatLRR and Retinex enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113192049A true CN113192049A (en) | 2021-07-30 |
CN113192049B CN113192049B (en) | 2024-02-06 |
Family
ID=76982231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110535236.5A Active CN113192049B (en) | 2021-05-17 | 2021-05-17 | Visible light and infrared image fusion method based on LatLRR and Retinex enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113192049B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109539A (en) * | 2023-03-21 | 2023-05-12 | 智洋创新科技股份有限公司 | Infrared image texture information enhancement method and system based on generation of countermeasure network |
WO2023231094A1 (en) * | 2022-05-31 | 2023-12-07 | 广东海洋大学 | Underwater sea urchin image recognition model training method and device, and underwater sea urchin image recognition method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109345494A (en) * | 2018-09-11 | 2019-02-15 | 中国科学院长春光学精密机械与物理研究所 | Image interfusion method and device based on potential low-rank representation and structure tensor |
CN110148104A (en) * | 2019-05-14 | 2019-08-20 | 西安电子科技大学 | Infrared and visible light image fusion method based on significance analysis and low-rank representation |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109345494A (en) * | 2018-09-11 | 2019-02-15 | 中国科学院长春光学精密机械与物理研究所 | Image interfusion method and device based on potential low-rank representation and structure tensor |
CN110148104A (en) * | 2019-05-14 | 2019-08-20 | 西安电子科技大学 | Infrared and visible light image fusion method based on significance analysis and low-rank representation |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023231094A1 (en) * | 2022-05-31 | 2023-12-07 | 广东海洋大学 | Underwater sea urchin image recognition model training method and device, and underwater sea urchin image recognition method and device |
CN116109539A (en) * | 2023-03-21 | 2023-05-12 | 智洋创新科技股份有限公司 | Infrared image texture information enhancement method and system based on generation of countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN113192049B (en) | 2024-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875935B (en) | Natural image target material visual characteristic mapping method based on generation countermeasure network | |
Fan et al. | Integrating semantic segmentation and retinex model for low-light image enhancement | |
Hu et al. | Singular value decomposition and local near neighbors for face recognition under varying illumination | |
Yang et al. | An adaptive method for image dynamic range adjustment | |
CN107798661B (en) | Self-adaptive image enhancement method | |
CN109872285A (en) | A kind of Retinex low-luminance color image enchancing method based on variational methods | |
CA3137297C (en) | Adaptive convolutions in neural networks | |
CN110136075B (en) | Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle | |
CN112257766A (en) | Shadow recognition detection method under natural scene based on frequency domain filtering processing | |
Zong et al. | Local-CycleGAN: a general end-to-end network for visual enhancement in complex deep-water environment | |
CN113192049A (en) | Visible light and infrared image fusion method based on LatLRR and Retinex enhancement | |
Ding et al. | U 2 D 2 Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement | |
Lepcha et al. | A deep journey into image enhancement: A survey of current and emerging trends | |
CN113039576A (en) | Image enhancement system and method | |
Huo et al. | High‐dynamic range image generation from single low‐dynamic range image | |
Yuan et al. | Single image dehazing via NIN-DehazeNet | |
CN114549567A (en) | Disguised target image segmentation method based on omnibearing sensing | |
CN112509144A (en) | Face image processing method and device, electronic equipment and storage medium | |
CN116012255A (en) | Low-light image enhancement method for generating countermeasure network based on cyclic consistency | |
Wang et al. | Single Underwater Image Enhancement Based on $ L_ {P} $-Norm Decomposition | |
Lei et al. | Low-light image enhancement using the cell vibration model | |
Liu et al. | Low-light image enhancement network based on recursive network | |
Hou et al. | Reconstructing a high dynamic range image with a deeply unsupervised fusion model | |
Pu et al. | Fractional-order retinex for adaptive contrast enhancement of under-exposed traffic images | |
Singh et al. | Multiscale reflection component based weakly illuminated nighttime image enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |