CN112837254A - Image fusion method and device, terminal equipment and storage medium - Google Patents

Image fusion method and device, terminal equipment and storage medium Download PDF

Info

Publication number
CN112837254A
Authority
CN
China
Prior art keywords
image
fused
value
original color
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110209439.5A
Other languages
Chinese (zh)
Other versions
CN112837254B (en)
Inventor
许楚萍
符顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202110209439.5A priority Critical patent/CN112837254B/en
Publication of CN112837254A publication Critical patent/CN112837254A/en
Application granted granted Critical
Publication of CN112837254B publication Critical patent/CN112837254B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application is applicable to the technical field of image fusion, and provides an image fusion method, an image fusion device, a terminal device and a storage medium. In the embodiment of the application, at least two original color images to be fused are obtained; brightness channel fusion is performed on the at least two original color images by a preset Laplacian pyramid image fusion method to obtain the brightness value of a fused image; the brightness value of the fused image and the brightness values of the original color images are processed according to a preset adaptive fusion algorithm to obtain the chromaticity value of the fused image; and the fused image is determined from its brightness value and chromaticity value, thereby improving the overall accuracy of the fusion result.

Description

Image fusion method and device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of image fusion technologies, and in particular, to an image fusion method, an image fusion device, a terminal device, and a storage medium.
Background
With the development of society, image fusion technology is widely applied in daily life. The information captured by different images differs because of external factors such as position and illumination, so to examine a scene more comprehensively, the image information corresponding to the different images needs to be integrated to obtain a more accurate result. Performing image fusion on the acquired images therefore makes it possible to understand the targets in the images more accurately and supports subsequent related decisions. However, existing image fusion often produces a color cast, so that the color fidelity of the fusion is low and the accuracy of the obtained fusion result is low.
Disclosure of Invention
The embodiment of the application provides an image fusion method, an image fusion device, terminal equipment and a storage medium, and can solve the problem that the accuracy of a fusion result after image fusion is low.
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring at least two original color images to be fused;
performing brightness channel fusion on the at least two original color images by adopting a preset Laplace pyramid image fusion method to obtain a brightness value of a fused image;
processing the brightness value of the fused image and the brightness value of the original color image according to a preset self-adaptive fusion algorithm to obtain a chromatic value of the fused image;
and determining the fused image according to the brightness value of the fused image and the chromatic value of the fused image.
In a second aspect, an embodiment of the present application provides an image fusion apparatus, including:
the acquisition module is used for acquiring at least two original color images to be fused;
the brightness module is used for performing brightness channel fusion on the at least two original color images by adopting a preset Laplace pyramid image fusion method to obtain a brightness value of a fused image;
the chrominance module is used for processing the luminance value of the fused image and the luminance value of the original color image according to a preset self-adaptive fusion algorithm to obtain the chrominance value of the fused image;
and the fused image determining module is used for determining the fused image according to the brightness value of the fused image and the chromatic value of the fused image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any of the steps of the image fusion method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of any one of the image fusion methods.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute any one of the image fusion methods in the first aspect.
In the embodiment of the application, at least two original color images to be fused are obtained, and brightness channel fusion is performed on them by a preset Laplacian pyramid image fusion method, which reduces image stitching traces and yields the brightness value of the fused image. The brightness value of the fused image and the brightness values of the original color images are then processed according to a preset adaptive fusion algorithm, which reduces the color cast problem during fusion and yields the chromaticity value of the fused image. Finally, the fused image is determined from its brightness value and chromaticity value, thereby improving the overall accuracy of the fusion result.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of an image fusion method provided in an embodiment of the present application;
fig. 2 is a second flowchart of an image fusion method provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of laplacian pyramid decomposition provided in an embodiment of the present application;
fig. 4 is a third flowchart illustrating an image fusion method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of Gaussian pyramid decomposition provided in an embodiment of the present application;
FIG. 6 is a schematic flowchart of the original color image luminance channel fusion provided by the embodiment of the present application;
fig. 7 is a fourth flowchart illustrating an image fusion method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image fusion apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Fig. 1 is a schematic flowchart of an image fusion method in an embodiment of the present application, where an execution subject of the method may be a terminal device, and as shown in fig. 1, the image fusion method may include the following steps:
and S101, acquiring at least two original color images to be fused.
In this embodiment, since the information collected by different images changes due to external factors, it is necessary to perform image fusion on multiple images so as to obtain a more accurate result corresponding to the images.
In one embodiment, the terminal device can convert the at least two original color images into the same format to facilitate image fusion. When it detects that one of the at least two current original color images does not conform to a preset format, it performs format conversion on that image. The preset format here is the YUV format: when an original color image is detected not to be in YUV format, it is converted into YUV format.
And S102, performing brightness channel fusion on at least two original color images by adopting a preset Laplace pyramid image fusion method to obtain the brightness value of a fused image.
In this embodiment, the terminal device improves in advance the Laplacian pyramid image fusion method required for image brightness fusion, and then uses this preset method to perform brightness channel fusion on the at least two acquired original color images, obtaining the brightness value of the fused image. The Laplacian pyramid image fusion method, also called multi-resolution image fusion, is widely used in image fusion because it effectively reduces stitching traces. Each image participating in the fusion is decomposed into a multi-scale Laplacian pyramid image sequence, with low-resolution images on the upper layers and high-resolution images on the lower layers; each upper-layer image is 1/4 the size of the layer below it, and the layers are numbered 0, 1, 2, …, N. The Laplacian pyramids of all the images are fused layer by layer according to a certain rule to obtain a composite pyramid, which is then reconstructed by the inverse of the pyramid-generation process to obtain the fused image.
In one embodiment, as shown in fig. 2, step S102 includes:
step S201, calculating an initial weight map of the original color image.
In this embodiment, the initial weight map represents the weight of each position of the luminance channel of the original color image, and the calculation method may be set according to different scenes. In this embodiment, the weight Wk(c, d) of the initial weight map corresponding to the k-th original color image at the (c, d) coordinate is computed from the luminance value yk(c, d) ∈ [0, 255] by a formula that appears in the original publication only as an equation image. After the weight of each position in the initial weight map is calculated, weight normalization is carried out so that the K weights sum to 1 at every position:

Wk(c, d) ← Wk(c, d) / (W1(c, d) + W2(c, d) + … + WK(c, d))
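As an illustrative sketch only (not part of the original disclosure), the normalization step can be written in Python/NumPy as follows; since the per-position weight formula is not reproduced in the text, only the normalization across the K images is shown:

```python
import numpy as np

def normalize_weights(weights):
    """Normalize the K initial weight maps so they sum to 1 at every (c, d).

    weights: list of K arrays of shape (M, N), one W_k per original image.
    """
    stacked = np.stack(weights).astype(np.float64)  # shape (K, M, N)
    total = stacked.sum(axis=0)
    total[total == 0] = 1.0                         # guard all-zero pixels
    return [w / total for w in stacked]
```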
step S202, performing L-level preset Laplacian pyramid decomposition on a brightness channel of an original color image to obtain a first Laplacian component; l is a positive integer.
In this embodiment, the luminance channel of the original color image may be subjected to L-level improved Laplacian pyramid decomposition, so as to obtain the L + 1 Laplacian components of the image in luminance, i.e., the first Laplacian component, as illustrated in fig. 3. In fig. 3, lpYi0, lpYi1, lpYi2, …, lpYiL are the L + 1 luminance Laplacian components of the i-th original color image. For i = 1, 2, …, K, lpY10~lpY1L, lpY20~lpY2L, …, lpYK0~lpYKL are the L + 1 luminance Laplacian components of the K acquired original color images, where L is a positive integer.
In one embodiment, as shown in fig. 3 and 4, the step S202 includes:
step S401, performing downsampling on the original color image to obtain an original color image after downsampling.
In this embodiment, the downsampling factor may be set according to user requirements, for example 3-fold downsampling. Most existing Laplacian-based image fusion methods only improve the fusion rule to some extent while leaving the overall decomposition and fusion process unchanged, so their time consumption is not reduced noticeably; in this embodiment, 3-fold downsampling is therefore used for the downsampling operation on the original color image. Compared with the 2-fold downsampling of the traditional Laplacian image fusion method, 3-fold downsampling reduces the number of pyramid layers required to achieve the same fusion effect, thereby reducing memory occupation and time consumption and meeting the image fusion requirements of environments demanding high real-time performance.
Specifically, as shown in fig. 3, the luminance channel image Yi of the i-th original color image to be fused, i.e., gsYi0 in fig. 3, has size M × N. 1/3 downsampling is performed on this image to obtain the 3-fold downsampled original color image; if (p, q) are the coordinates of a luminance value in the 3-fold downsampled image, then:

Ydown(p, q) = Y1(3*p, 3*q)

where Ydown is the 3-fold downsampled original color image and Y1 is the original color image before the operation, e.g., gsYi0 in fig. 3.
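A minimal NumPy sketch of this 3-fold downsampling (illustrative, not the patent's code): keeping every third row and column implements Ydown(p, q) = Y1(3*p, 3*q).

```python
import numpy as np

def downsample3(img):
    """3-fold downsampling: Ydown(p, q) = Y(3*p, 3*q)."""
    return np.asarray(img)[::3, ::3]
```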
And S402, filtering the original color image subjected to the down-sampling operation to obtain a Gaussian image.
In this embodiment, the original color image after the downsampling operation is filtered through a symmetric Gaussian filter to obtain a Gaussian image. Compared with traditional Laplacian pyramid image fusion, the order of downsampling and filtering is swapped in this embodiment, which shortens the time consumed by the whole image fusion on the premise that the image effect is unchanged, providing a fast method of equal effect for image fusion. It will be appreciated that, if the downsampling and filtering processes are repeated, a series of Gaussian images can be obtained, each smaller than the previous one. When the convolution kernel ω corresponding to the Gaussian filter is 3 × 3, its values are given in the original publication only as an equation image.

Specifically, as shown in fig. 3, Gaussian filtering is applied to the image gsYi0 of size M × N after its 3-fold downsampling, obtaining the first-layer Gaussian image gsYi1 of size (M/3) × (N/3). If this process is repeated L times, a series of images gsYi0, gsYi1, gsYi2, …, gsYiL can be obtained, each 1/9 the size of the previous-layer Gaussian image.
And S403, performing upsampling operation on the Gaussian image to obtain the Gaussian image after the upsampling operation.
In this embodiment, the upsampling factor may be set according to user requirements, for example 3-fold upsampling, and in this embodiment 3-fold upsampling is used for the upsampling operation on the Gaussian image. Compared with bilinear interpolation, which requires at least 6 multiplications per computed pixel, the 3-fold upsampling designed here involves less computation because of its different principle, and therefore less time. Compared with nearest-neighbor interpolation, it does not simply take the nearest pixel as the result but performs a weighted summation over the 1 to 4 adjacent elements, which avoids the obvious blocking effect of nearest-neighbor interpolation and strikes a balance between time and effect.

Specifically, as shown in fig. 3, the Gaussian image gsYi1 is upsampled by a factor of 3 in this embodiment to obtain the 3-fold upsampled Gaussian image EXPAND(gsYi1), whose size is M × N.
In one embodiment, the step S403 includes:
and performing a kronecker inner product treatment on the Gaussian image according to a preset first parameter to obtain an amplified Gaussian image. And performing convolution processing on the amplified Gaussian image according to a preset second parameter to obtain the Gaussian image after the up-sampling operation.
In this embodiment, the Gaussian image is amplified by a preset multiple through the Kronecker product processing, and the amplified Gaussian image is then convolved, so that the Gaussian image after the upsampling operation is obtained:

Yup = (Y2 ⊗ B) ∗ A

where ∗ denotes convolution; ⊗ denotes the Kronecker product; A is the second parameter; B is the first parameter; Yup is the Gaussian image after the 3-fold upsampling operation, e.g., the Gaussian image EXPAND(gsYi1) in fig. 3; and Y2 is the Gaussian image before the operation, e.g., the layer-1 Gaussian image gsYi1 in fig. 3.
In this embodiment, the Gaussian image can be enlarged by three times using the first parameter B:

B = [1 0 0; 0 0 0; 0 0 0]

where taking the Kronecker product with B is equivalent to inserting two 0s between every two elements.
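The zero insertion can be written directly with NumPy's Kronecker product; in this sketch, B is reconstructed from the description above and the function name is illustrative:

```python
import numpy as np

# Reconstructed first parameter B: a 1 at the top-left of a 3x3 block, so the
# Kronecker product inserts two 0s between every two elements of the image.
B = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 0, 0]], dtype=np.int64)

def insert_zeros3(img):
    """Enlarge img threefold by zero insertion: (M, N) -> (3M, 3N)."""
    return np.kron(np.asarray(img, dtype=np.int64), B)
```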
The design concept of the convolution processing is as follows. The second parameter A is a 5 × 5 smoothing convolution kernel, and in one embodiment it may be replaced by A′. It will be appreciated that, since "/256" can be replaced in operation by a right shift of 8 bits, and a right-shift operation is faster than a division operation, the convolution kernel is designed on a base of 16 so that normalization can conveniently be done by a right shift of 8 bits. The one-dimensional convolution kernel is therefore set as [b a X a b], where a + b is set to X to satisfy symmetry and X − a is set to (X − b)/2 to smooth the interpolated pixel transition, giving [5 11 16 11 5]/16; this is then expanded into a two-dimensional convolution kernel, referred to as A′ after the substitution.
And step S404, subtracting the original color image and the Gaussian image after the up-sampling operation to obtain a Laplace component of the current layer.
In this embodiment, the Laplacian component of the current layer is obtained by subtracting the upsampled Gaussian image from the original color image. It can be understood that, to obtain the L + 1 Laplacian components of the original color image on the luminance channel, the upsampled image of the next layer is subtracted from the Gaussian pyramid image of each layer to obtain the Laplacian component of that layer, while the Laplacian component of the highest layer is obtained directly from the Gaussian image of the highest layer.
Specifically, the Laplacian component of Laplacian pyramid level 0 in fig. 3 is lpYi0 = gsYi0 − EXPAND(gsYi1). The process is repeated to obtain the Laplacian component corresponding to each layer, lpYiz = gsYiz − EXPAND(gsYi(z + 1)) (z = 1, …, L − 1), and the Laplacian component of the highest layer is lpYiL = gsYiL. By these means, the L + 1 Laplacian components of each original color image are obtained.
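Reusing the gaussian_pyramid and expand3 sketches above (illustrative; the cropping handles image sides that are not multiples of 3), the L + 1 first Laplacian components could be assembled as follows:

```python
import numpy as np

def laplacian_components(gs_pyr):
    """lpY_z = gsY_z - EXPAND(gsY_{z+1}) for z = 0..L-1; lpY_L = gsY_L."""
    lp = []
    for z in range(len(gs_pyr) - 1):
        up = expand3(np.rint(gs_pyr[z + 1]))  # fixed-point EXPAND from above
        h, w = gs_pyr[z].shape
        lp.append(gs_pyr[z] - up[:h, :w])     # crop: 3x expansion may overshoot
    lp.append(gs_pyr[-1])                     # highest layer kept directly
    return lp
```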
Step S203, performing L-level preset gaussian pyramid decomposition on the initial weight map to obtain a second laplacian component.
In this embodiment, the initial weight map is downsampled and then Gaussian filtered to obtain the Gaussian image of each layer, and this process is repeated L times to obtain the L + 1 components of the original color image in weight, i.e., the second Laplacian component, as shown in fig. 5. In fig. 5, gsWi0, gsWi1, gsWi2, …, gsWiL are the components of the initial weight map corresponding to the i-th original color image. For i = 1, 2, …, K, gsW10~gsW1L, gsW20~gsW2L, …, gsWK0~gsWKL are the L + 1 components of the initial weight maps corresponding to the K acquired original color images.
Specifically, as shown in fig. 5, the initial weight map Wi corresponding to the i-th original color image to be fused, i.e., gsWi0 in fig. 5, has size M × N. The image is 1/3 downsampled, and the 3-fold downsampled image is filtered with a symmetric Gaussian filter ω (e.g., 3 × 3), thereby obtaining a Gaussian image gsWi1 of size (M/3) × (N/3). If this process is repeated L times, a series of images gsWi0, gsWi1, gsWi2, …, gsWiL can be obtained, each 1/9 the size of the previous-layer Gaussian image.
And step S204, calculating the Laplace component of each layer after fusion according to the first Laplace component and the second Laplace component, and reconstructing to obtain an image after brightness fusion.
In this embodiment, the terminal device uses the L + 1 components gsW10~gsW1L, gsW20~gsW2L, …, gsWK0~gsWKL of each initial weight map to weight the corresponding L + 1 Laplacian components lpY10~lpY1L, lpY20~lpY2L, …, lpYK0~lpYKL of each original color image and performs weighted fusion, obtaining the fused Laplacian component of each layer and thereby determining the fused Laplacian pyramid. As shown in fig. 6, fig. 6 is a flow diagram of performing 3-level improved Laplacian pyramid decomposition on the luminance channels of K original color images, performing 3-level improved Gaussian pyramid decomposition on the weight maps corresponding to the original color images, and performing luminance channel fusion.
Specifically, the fused z-th layer Laplacian decomposition value at coordinates (x, y) is denoted lpF(z)(x, y):

lpF(z)(x, y) = Σk=1..K gsWkz(x, y) * lpYkz(x, y)
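A sketch of this per-layer weighted fusion, where lp_pyrs holds the K Laplacian pyramids and w_pyrs the K matching weight pyramids (both names are illustrative):

```python
def fuse_layers(lp_pyrs, w_pyrs):
    """lpF(z) = sum over k of gsW_kz * lpY_kz, computed layer by layer.

    lp_pyrs: K Laplacian pyramids (each a list of L+1 arrays)
    w_pyrs:  K weight pyramids with matching shapes
    """
    return [sum(w[z] * lp[z] for lp, w in zip(lp_pyrs, w_pyrs))
            for z in range(len(lp_pyrs[0]))]
```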
in this embodiment, when obtaining the laplacian component of each fused layer, the fused laplacian pyramid can be determined, and the fused laplacian pyramid is recurred from the top layer by layer from top to bottom according to a preset formula, and after the calculation is completed, the final fusion result Y (0) can be obtained, where Y (0) includes Y (l) and Y (z), and the preset formula is:
Figure BDA0002951769780000101
wherein EXPAND (-) above represents a 3-fold upsampling operation.
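The top-down recursion, again reusing the expand3 sketch above:

```python
import numpy as np

def reconstruct(lp_fused):
    """Y(L) = lpF(L); Y(z) = lpF(z) + EXPAND(Y(z+1)) for z = L-1 .. 0."""
    y = lp_fused[-1]
    for z in range(len(lp_fused) - 2, -1, -1):
        up = expand3(np.rint(y))
        h, w = lp_fused[z].shape
        y = lp_fused[z] + up[:h, :w]
    return y  # Y(0), the fused luminance channel
```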
And S103, processing the brightness value of the fused image and the brightness value of the original color image according to a preset self-adaptive fusion algorithm to obtain a chromatic value of the fused image.
In this embodiment, the terminal device determines the chroma value of the fused image by using a preset adaptive fusion algorithm according to the obtained brightness value of the fused image and the brightness value of the original color image, so that the color cast problem after the image fusion can be reduced.
In one embodiment, as shown in fig. 7, step S103 includes:
and step S701, acquiring the brightness value of the target point in the fusion image.
Step S702, a first reference point brightness value and a second reference point brightness value which are most adjacent to the target point brightness value are selected from the original color image.
In this embodiment, the terminal device obtains the luminance value of the target point in the fused image and, at the same position in the at least two acquired original color images to be fused, selects the two reference point luminance values nearest to it in luminance, i.e., the first reference point luminance value and the second reference point luminance value. The nearest neighbors refer to the two luminance values with the smallest difference from the luminance value of the target point.
Step S703 determines a first adaptive ratio according to a difference between the luminance value of the first reference point and the luminance value of the target point.
In this embodiment, the first adaptive proportion k1 is determined by:

k1 = |Yl(p, q) − Y(p, q)| / |Yl(p, q) − Yj(p, q)|

where Yl(p, q) is the first reference point luminance value, Y(p, q) is the luminance value of the target point, and Yj(p, q) is the second reference point luminance value.
Step S704, determining a second adaptive ratio according to the difference between the brightness value of the second reference point and the brightness value of the target point.
In this embodiment, the second adaptive proportion k2 is determined by:

k2 = |Yj(p, q) − Y(p, q)| / |Yl(p, q) − Yj(p, q)|
step S705, determining a chromaticity value of the target point according to the first adaptive proportion and the second adaptive proportion.
In this embodiment, because the color cast problem easily arises during image fusion, the terminal device determines the first adaptive proportion and the second adaptive proportion from the luminance values adjacent to the previously obtained luminance value of the fused image, and then determines the chromaticity value of the target point from these two proportions, so that the color cast problem after image fusion can be reduced.
In one embodiment, step S705 includes:
and acquiring a chromatic value corresponding to the first reference point and a chromatic value corresponding to the second reference point.
And when the sum of the first adaptive proportion and the second adaptive proportion is 1, calculating the chromatic value corresponding to the first reference point and the chromatic value corresponding to the second reference point by adopting a preset chromatic calculation formula, and determining the chromatic value of the target point.
In this embodiment, when the terminal device detects that the sum of the first adaptive proportion and the second adaptive proportion is 1, the luminance value of the target point lies between the luminance value of the first reference point and that of the second reference point; that is, the colors at both reference points are normal. In this case, the chromaticity value of the target point can be obtained by a weighted sum using the adaptive proportions: the preset chromaticity calculation formula is applied to the chromaticity values corresponding to the first and second reference points, so that the chromaticity value of the target point is obtained by adaptive interpolation of the chromaticity values of the original color images, and the color cast problem is reduced.
In one embodiment, the predetermined chromaticity calculation formula includes:
(uf, vf) = k2*(ul, vl) + k1*(uj, vj)

wherein k2 is the second adaptive proportion, k1 is the first adaptive proportion, (ul, vl) is the chromaticity value corresponding to the first reference point, (uj, vj) is the chromaticity value corresponding to the second reference point, and (uf, vf) is the chromaticity value of the target point.
When the sum of the first adaptive proportion and the second adaptive proportion is not 1, selecting a reference point corresponding to the adaptive proportion smaller than 1 from the first adaptive proportion and the second adaptive proportion, and determining the chromatic value corresponding to the reference point as the chromatic value of the target point.
In this embodiment, when the terminal device detects that the sum of the first adaptive proportion and the second adaptive proportion is not 1, the luminance value of the target point does not lie between the luminance values of the two reference points; that is, the color at one of the two reference points is abnormal. In this case, because the normal value is closer to the fused luminance value and the sum of the two adaptive proportions can only be greater than or equal to 1, the reference point whose adaptive proportion is smaller than 1, which is the point nearest to the luminance value of the target point, is selected, and its chromaticity value is determined as the chromaticity value of the target point. The chromaticity value of the target point is thus obtained by adaptive interpolation of the chromaticity values of the original color images, and the color cast problem is reduced.
Specifically, if the first adaptive proportion is smaller than 1, then:

(uf, vf) = (ul, vl)

and if the second adaptive proportion is smaller than 1, then:

(uf, vf) = (uj, vj)
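Collecting steps S701 to S705, a per-pixel sketch of the adaptive chroma fusion; the k1 and k2 formulas follow the reconstructions given above, and all names are illustrative:

```python
def fuse_chroma(y_f, y_l, y_j, uv_l, uv_j, eps=1e-9):
    """Adaptive chroma for one target point (p, q).

    y_f        : fused luminance Y(p, q)
    y_l, y_j   : nearest reference luminances Yl(p, q) and Yj(p, q)
    uv_l, uv_j : (u, v) chroma pairs of the two reference points
    """
    denom = max(abs(y_l - y_j), eps)   # guard equal reference luminances
    k1 = abs(y_l - y_f) / denom
    k2 = abs(y_j - y_f) / denom
    if abs(k1 + k2 - 1.0) < eps:       # target lies between the references
        return (k2 * uv_l[0] + k1 * uv_j[0],
                k2 * uv_l[1] + k1 * uv_j[1])
    # otherwise use the chroma of the reference whose proportion is < 1
    return uv_l if k1 < 1.0 else uv_j
```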
and step S104, determining a fused image according to the brightness value of the fused image and the chromatic value of the fused image.
In the present embodiment, the fused image is determined from the luminance value of the fused image and the chromaticity value of the fused image obtained above, which reduces the color cast problem of the image and improves the real-time performance of the image processing. The fused image can be obtained directly in YUV space from the fused luminance and chromaticity values and is a color image; color images in other spaces, such as RGB or Lab, can be obtained by formula conversion of the YUV color image.
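For completeness, a sketch of one such formula conversion, assuming full-range BT.601 YUV since the patent does not specify the variant:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert full-range BT.601 YUV planes (0..255) to an RGB uint8 image."""
    y, u, v = (np.asarray(c, dtype=np.float64) for c in (y, u, v))
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```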
In the embodiment of the application, at least two original color images to be fused are obtained, and brightness channel fusion is performed on them by a preset Laplacian pyramid image fusion method, which reduces image stitching traces and yields the brightness value of the fused image. The brightness value of the fused image and the brightness values of the original color images are then processed according to a preset adaptive fusion algorithm, which reduces the color cast problem during fusion and yields the chromaticity value of the fused image. Finally, the fused image is determined from its brightness value and chromaticity value, thereby improving the overall accuracy of the fusion result.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 8 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present application, corresponding to the image fusion method described above, and as shown in fig. 8, the image fusion apparatus may include:
an obtaining module 801, configured to obtain at least two original color images to be fused.
The brightness module 802 is configured to perform brightness channel fusion on at least two original color images by using a preset laplacian pyramid image fusion method to obtain a brightness value of a fused image.
And the chrominance module 803 is configured to process the luminance value of the fused image and the luminance value of the original color image according to a preset adaptive fusion algorithm to obtain a chrominance value of the fused image.
And a fused image determining module 804, configured to determine a fused image according to the brightness value of the fused image and the chromatic value of the fused image.
In one embodiment, the chrominance module 803 may include:
and the obtaining submodule is used for obtaining the brightness value of the target point in the fusion image.
And the selecting submodule is used for selecting a first reference point brightness value and a second reference point brightness value which are most adjacent to the target point brightness value from the original color image.
And the first proportion determining submodule is used for determining a first adaptive proportion according to the difference value of the brightness value of the first reference point and the brightness value of the target point.
And the second proportion determining submodule is used for determining a second self-adaptive proportion according to the difference value of the brightness value of the second reference point and the brightness value of the target point.
And the chromatic value determining submodule is used for determining the chromatic value of the target point according to the first adaptive proportion and the second adaptive proportion.
In one embodiment, the chrominance value determining sub-module may include:
and the acquisition unit is used for acquiring the chromatic value corresponding to the first reference point and the chromatic value corresponding to the second reference point.
And the calculating unit is used for calculating the chromatic value corresponding to the first reference point and the chromatic value corresponding to the second reference point by adopting a preset chromatic calculation formula when the sum of the first adaptive proportion and the second adaptive proportion is 1, and determining the chromatic value of the target point.
And the selecting unit is used for selecting a reference point corresponding to the adaptive proportion smaller than 1 from the first adaptive proportion and the second adaptive proportion when the sum of the first adaptive proportion and the second adaptive proportion is not 1, and determining the chromatic value corresponding to the reference point as the chromatic value of the target point.
In one embodiment, the calculating unit may include:
(uf, vf) = k2*(ul, vl) + k1*(uj, vj)

wherein k2 is the second adaptive proportion, k1 is the first adaptive proportion, (ul, vl) is the chromaticity value corresponding to the first reference point, (uj, vj) is the chromaticity value corresponding to the second reference point, and (uf, vf) is the chromaticity value of the target point.
In one embodiment, the brightness module 802 may include:
and the calculating submodule is used for calculating an initial weight map of the original color image.
The first decomposition submodule is used for carrying out L-level preset Laplacian pyramid decomposition on a brightness channel of the original color image to obtain a first Laplacian component; l is a positive integer.
And the second decomposition submodule is used for carrying out L-level preset Gaussian pyramid decomposition on the initial weight map to obtain a second Laplacian component.
And the reconstruction submodule is used for calculating the fused Laplace component of each layer according to the first Laplace component and the second Laplace component and reconstructing to obtain the brightness fused image.
In one embodiment, the first decomposition submodule may include:
and the down-sampling unit is used for performing down-sampling operation on the original color image to obtain the original color image after the down-sampling operation.
And the filtering unit is used for filtering the original color image after the down-sampling operation to obtain a Gaussian image.
And the up-sampling unit is used for performing up-sampling operation on the Gaussian image to obtain the Gaussian image after the up-sampling operation.
And the subtraction processing unit is used for carrying out subtraction processing on the original color image and the Gaussian image after the up-sampling operation to obtain the Laplace component of the current layer.
In one embodiment, the upsampling unit may include:
and the clo inner product processing subunit is used for carrying out clo inner product processing on the Gaussian image according to a preset first parameter to obtain an amplified Gaussian image.
And the convolution processing subunit is used for performing convolution processing on the amplified Gaussian image according to a preset second parameter to obtain the Gaussian image subjected to the up-sampling operation.
In the embodiment of the application, at least two original color images to be fused are obtained, and brightness channel fusion is performed on them by a preset Laplacian pyramid image fusion method, which reduces image stitching traces and yields the brightness value of the fused image. The brightness value of the fused image and the brightness values of the original color images are then processed according to a preset adaptive fusion algorithm, which reduces the color cast problem during fusion and yields the chromaticity value of the fused image. Finally, the fused image is determined from its brightness value and chromaticity value, thereby improving the overall accuracy of the fusion result.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the module described above may refer to corresponding processes in the foregoing system embodiments and method embodiments, and are not described herein again.
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown.
As shown in fig. 9, the terminal device 9 of this embodiment includes: at least one processor 900 (only one shown in fig. 9), a memory 901 coupled to the processor 900, and a computer program 902, such as an image fusion program, stored in the memory 901 and executable on the at least one processor 900. The processor 900 executes the computer program 902 to implement the steps in the image fusion method embodiments, such as the steps S101 to S104 shown in fig. 1. Alternatively, the processor 900 executes the computer program 902 to implement the functions of the modules in the device embodiments, such as the modules 801 to 804 shown in fig. 8.
Illustratively, the computer program 902 may be divided into one or more modules, and the one or more modules are stored in the memory 901 and executed by the processor 900 to complete the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program 902 in the terminal device 9. For example, the computer program 902 may be divided into an acquisition module 801, a luminance module 802, a chrominance module 803, and a fused image determination module 804, and the specific functions of the modules are as follows:
an obtaining module 801, configured to obtain at least two original color images to be fused;
the brightness module 802 is configured to perform brightness channel fusion on at least two original color images by using a preset laplacian pyramid image fusion method to obtain a brightness value of a fused image;
the chrominance module 803 is configured to process the luminance value of the fused image and the luminance value of the original color image according to a preset adaptive fusion algorithm to obtain a chrominance value of the fused image;
and a fused image determining module 804, configured to determine a fused image according to the brightness value of the fused image and the chromatic value of the fused image.
The terminal device 9 may include, but is not limited to, a processor 900 and a memory 901. Those skilled in the art will appreciate that fig. 9 is only an example of the terminal device 9, and does not constitute a limitation to the terminal device 9, and may include more or less components than those shown, or combine some components, or different components, such as an input/output device, a network access device, a bus, etc.
The processor 900 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The storage 901 may be an internal storage unit of the terminal device 9 in some embodiments, for example, a hard disk or a memory of the terminal device 9. In other embodiments, the memory 901 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the terminal device 9. Further, the memory 901 may include both an internal storage unit and an external storage device of the terminal device 9. The memory 901 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as a program code of the computer program. The above memory 901 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium, to instruct related hardware. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image fusion method, comprising:
acquiring at least two original color images to be fused;
performing brightness channel fusion on the at least two original color images by adopting a preset Laplace pyramid image fusion method to obtain a brightness value of a fused image;
processing the brightness value of the fused image and the brightness value of the original color image according to a preset self-adaptive fusion algorithm to obtain a chromatic value of the fused image;
and determining the fused image according to the brightness value of the fused image and the chromatic value of the fused image.
2. The image fusion method according to claim 1, wherein the processing the luminance values of the fused image and the luminance values of the original color image according to a preset adaptive fusion algorithm to obtain the chrominance values of the fused image comprises:
acquiring the brightness value of a target point in the fusion image;
selecting a first reference point brightness value and a second reference point brightness value which are most adjacent to the target point brightness value from the original color image;
determining a first adaptive proportion according to the difference value between the brightness value of the first reference point and the brightness value of the target point;
determining a second adaptive proportion according to the difference value between the brightness value of the second reference point and the brightness value of the target point;
and determining the chromatic value of the target point according to the first adaptive proportion and the second adaptive proportion.
3. The image fusion method of claim 2, wherein the determining the chromaticity value of the target point according to the first adaptive proportion and the second adaptive proportion includes:
acquiring a chromatic value corresponding to the first reference point and a chromatic value corresponding to the second reference point;
when the sum of the first adaptive proportion and the second adaptive proportion is 1, calculating a chromatic value corresponding to the first reference point and a chromatic value corresponding to the second reference point by adopting a preset chromatic calculation formula, and determining a chromatic value of the target point;
when the sum of the first adaptive proportion and the second adaptive proportion is not 1, selecting a reference point corresponding to the adaptive proportion smaller than 1 from the first adaptive proportion and the second adaptive proportion, and determining a chromatic value corresponding to the reference point as a chromatic value of the target point.
4. The image fusion method according to claim 3, wherein the predetermined chromaticity calculation formula includes:
(uf, vf) = k2*(ul, vl) + k1*(uj, vj)

wherein k2 is the second adaptive proportion, k1 is the first adaptive proportion, (ul, vl) is the chromaticity value corresponding to the first reference point, (uj, vj) is the chromaticity value corresponding to the second reference point, and (uf, vf) is the chromaticity value of the target point.
5. The image fusion method according to claim 1, wherein the performing luminance channel fusion on the at least two original color images by using a preset laplacian pyramid image fusion method comprises:
calculating an initial weight map of the original color image;
performing L-level preset Laplacian pyramid decomposition on a brightness channel of the original color image to obtain a first Laplacian component; l is a positive integer;
performing L-level preset Gaussian pyramid decomposition on the initial weight map to obtain a second Laplacian component;
and calculating the Laplace component of each layer after fusion according to the first Laplace component and the second Laplace component, and reconstructing to obtain a brightness fused image.
6. The image fusion method of claim 5, wherein the performing L-level predetermined Laplacian pyramid decomposition on the luminance channel of the original color image to obtain a first Laplacian component comprises:
carrying out down-sampling operation on the original color image to obtain the original color image after the down-sampling operation;
filtering the original color image subjected to the down-sampling operation to obtain a Gaussian image;
performing upsampling operation on the Gaussian image to obtain the Gaussian image after the upsampling operation;
and performing subtraction processing on the original color image and the Gaussian image subjected to the up-sampling operation to obtain a Laplace component of the current layer.
7. The image fusion method of claim 6, wherein the upsampling the Gaussian image to obtain the upsampled Gaussian image comprises:
performing Kronecker product processing on the Gaussian image according to a preset first parameter to obtain an amplified Gaussian image;
and performing convolution processing on the amplified Gaussian image according to a preset second parameter to obtain the Gaussian image after the up-sampling operation.
8. An image fusion apparatus, comprising:
the acquisition module is used for acquiring at least two original color images to be fused;
the brightness module is used for performing brightness channel fusion on the at least two original color images by adopting a preset Laplace pyramid image fusion method to obtain a brightness value of a fused image;
the chrominance module is used for processing the luminance value of the fused image and the luminance value of the original color image according to a preset self-adaptive fusion algorithm to obtain a chrominance value of the fused image;
and the fused image determining module is used for determining the fused image according to the brightness value of the fused image and the chromatic value of the fused image.
9. A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of an image fusion method according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of an image fusion method according to any one of claims 1 to 7.
CN202110209439.5A 2021-02-25 2021-02-25 Image fusion method and device, terminal equipment and storage medium Active CN112837254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110209439.5A CN112837254B (en) 2021-02-25 2021-02-25 Image fusion method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112837254A 2021-05-25
CN112837254B 2024-06-11

Family

ID=75933388

Family Applications (1)

Application number: CN202110209439.5A (CN112837254B, active)
Title: Image fusion method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112837254B (en)

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100296698A1 * 2009-05-25 2010-11-25 Visionatics Inc. Motion object detection method using adaptive background model and computer-readable storage medium
US20160300337A1 * 2015-04-08 2016-10-13 Tatung University Image fusion method and image processing apparatus
US20170132820A1 * 2015-11-11 2017-05-11 Leauto Intelligent Technology (Beijing) Co. Ltd Method and system for mitigating color mutation in image fusion
CN105635574A * 2015-12-29 2016-06-01 Xiaomi Technology Co., Ltd. Image processing method and device
US20170213330A1 * 2016-01-25 2017-07-27 Qualcomm Incorporated Unified multi-image fusion approach
WO2018000359A1 * 2016-06-30 2018-01-04 Beijing Shen Mindray Medical Electronics Technology Research Institute Co., Ltd. Method and system for enhancing ultrasound contrast images and ultrasound contrast imaging device
CN106204513A * 2016-08-15 2016-12-07 Xiamen Meitu Technology Co., Ltd. Image processing method, device and system
CN106446829A * 2016-09-22 2017-02-22 China Three Gorges University Hydroelectric generating set vibration signal noise reduction method based on SVD and VMD mode autocorrelation analysis
CN106550227A * 2016-10-27 2017-03-29 Chengdu Xiwei Technology Co., Ltd. Image saturation adjustment method and device
CN106875371A * 2017-02-09 2017-06-20 Julong Zhitong Technology Co., Ltd. Bayer-format-based image fusion method and device
CN107038695A * 2017-04-20 2017-08-11 Xiamen Meitu Technology Co., Ltd. Image fusion method and mobile device
CN107452034A * 2017-07-31 2017-12-08 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device
CN109712102A * 2017-10-25 2019-05-03 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion method, device and image capture device
CN111355886A * 2018-03-27 2020-06-30 Huawei Technologies Co., Ltd. Photographing method, photographing device and mobile terminal
CN108600721A * 2018-03-28 2018-09-28 Shenzhen China Star Optoelectronics Semiconductor Display Technology Co., Ltd. Color gamut mapping method and device
WO2020182230A2 * 2019-03-11 2020-09-17 Arashi Vision Inc. Image fusion method and portable terminal
CN110211077A * 2019-05-13 2019-09-06 Shangyu Science and Engineering Research Institute Co., Ltd. of Hangzhou Dianzi University Multi-exposure image fusion method based on higher-order singular value decomposition
CN110309781A * 2019-07-01 2019-10-08 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences Remote sensing recognition method for house damage based on multi-scale adaptive spectrum-texture fusion
CN111080568A * 2019-12-13 2020-04-28 Lanzhou Jiaotong University Tetrolet-transform-based near-infrared and color visible light image fusion algorithm
CN111147755A * 2020-01-02 2020-05-12 TP-Link Technologies Co., Ltd. Zoom processing method and device for dual cameras and terminal equipment
CN111238808A * 2020-02-04 2020-06-05 Shenyang Ligong University Compound fault diagnosis method for gearboxes based on empirical mode decomposition and improved variational mode decomposition
CN111292243A * 2020-03-09 2020-06-16 Sanya Zhitu Technology Co., Ltd. Projection seamless edge blending method and device
CN111504646A * 2020-06-05 2020-08-07 Hefei University of Technology Weak-signal fault feature classification method and system for early bearing failure
CN111953893A * 2020-06-30 2020-11-17 TP-Link Technologies Co., Ltd. High dynamic range image generation method, terminal device and storage medium
CN111986129A * 2020-06-30 2020-11-24 TP-Link Technologies Co., Ltd. HDR image generation method and device based on multi-shot image fusion, and storage medium
CN112036042A * 2020-09-02 2020-12-04 Harbin Engineering University Power equipment anomaly detection method and system based on variational mode decomposition
CN112179653A * 2020-09-07 2021-01-05 Shenhua Railway Equipment Co., Ltd. Rolling bearing vibration signal blind source separation method, device and computer equipment
CN112381743A * 2020-12-01 2021-02-19 Arashi Vision Inc. Image processing method, device, equipment and storage medium
CN112633368A * 2020-12-21 2021-04-09 Sichuan University Flat vibration motor defect detection system and method based on improved multi-granularity cascade forest

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RABIA ZAFAR et al.: "Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library", J. Imaging, vol. 6, no. 7, 2 July 2020 (2020-07-02), pages 60-72 *
SIMRANDEEP SINGH: "Review of Various Image Fusion Algorithms and Image Fusion Performance Metric", Archives of Computational Methods in Engineering, vol. 28, 19 February 2021 (2021-02-19), pages 3645-3659, XP037528567, DOI: 10.1007/s11831-020-09518-x *
ZHANG Hongying et al.: "A Single-Exposure HDR Image Generation Algorithm Based on Detail-Layer Separation", Acta Automatica Sinica, vol. 45, no. 11, 30 November 2019 (2019-11-30), pages 2159-2170 *
LI Chengli et al.: "Color Fusion of Night-Vision Images Based on Color Transfer and Target Enhancement", Laser & Infrared, vol. 46, no. 5, 31 May 2016 (2016-05-31), pages 607-611 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187213A * 2021-12-14 2022-03-15 Chengdu Weiguang Jidian Technology Co., Ltd. Image fusion method, device, equipment and storage medium
CN114494084A * 2022-04-14 2022-05-13 Guangdong OptoMedic Technology Co., Ltd. Image color homogenization method and device, electronic equipment and storage medium
CN114494084B * 2022-04-14 2022-07-26 Guangdong OptoMedic Technology Co., Ltd. Image color homogenization method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112837254B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
CN111275626B Video deblurring method, device and equipment based on blur degree
US6091851A (en) Efficient algorithm for color recovery from 8-bit to 24-bit color pixels
US8224085B2 (en) Noise reduced color image using panchromatic image
US7149355B2 (en) Image processing apparatus, image processing method, image processing program, and computer-readable record medium storing image processing program
CN111402258A (en) Image processing method, image processing device, storage medium and electronic equipment
KR20070068433A (en) Magnification and pinching of two-dimensional images
CN109903224A (en) Image-scaling method, device, computer equipment and storage medium
CN112997479B (en) Method, system and computer readable medium for processing images across a phase jump connection
US11854157B2 (en) Edge-aware upscaling for improved screen content quality
CN112837254A (en) Image fusion method and device, terminal equipment and storage medium
CN110430403B (en) Image processing method and device
CN109005368A (en) A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN113939845A (en) Method, system and computer readable medium for improving image color quality
CN110503704A Construction method and device for three components, and electronic equipment
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
CN117437118A (en) Image processing method and device and electronic equipment
CN112200719A (en) Image processing method, electronic device and readable storage medium
CN111724292B (en) Image processing method, device, equipment and computer readable medium
US20170206637A1 (en) Image correction apparatus and image correction method
CN116523787A (en) Low-illumination image enhancement method based on LAB space and multi-scale feature pyramid
US20230098437A1 (en) Reference-Based Super-Resolution for Image and Video Enhancement
CN115471417A (en) Image noise reduction processing method, apparatus, device, storage medium, and program product
CN115809959A (en) Image processing method and device
JP6525700B2 (en) Image processing apparatus, image processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant