CN111815732A - Method for coloring intermediate infrared image - Google Patents

Method for coloring a mid-infrared image

Info

Publication number: CN111815732A
Application number: CN202010724946.8A
Authority: CN (China)
Prior art keywords: image, infrared image, channel, column vectors, dictionary
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN111815732B (en)
Inventors: 王向宇 (Wang Xiangyu), 梁军利 (Liang Junli)
Current Assignee: Northwestern Polytechnical University
Original Assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University
Priority to CN202010724946.8A
Publication of CN111815732A
Application granted
Publication of CN111815732B
Current status: Expired - Fee Related


Classifications

    (all under G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 11/00 2D [Two Dimensional] image generation; G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 5/00 Image enhancement or restoration; G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis; G06T 7/10 Segmentation; Edge detection
    • G06T 7/00 Image analysis; G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality; G06T 2207/10024 Color image
    • G06T 2207/10 Image acquisition modality; G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details; G06T 2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for coloring mid-infrared images. First, a dictionary matching color images to mid-infrared images is constructed. Then, using this dictionary and the n-nearest-neighbor principle, the color-image patches that match each patch of the mid-infrared image to be colored are found and assembled into a color image, giving a preliminarily colored result. Finally, the preliminarily colored image is denoised with a total variation model to obtain the final result. The method distinguishes the main objects in the scene from the background well, and the coloring effect is natural.

Description

Method for coloring a mid-infrared image
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image coloring method.
Background
With the wide application of infrared imaging in the military field, image processing techniques for infrared images have developed rapidly. Coloring of infrared images is a technique that has emerged in recent years and is of great significance for infrared image segmentation, target recognition, and related tasks. Infrared imaging relies mainly on the thermal radiation of objects and can be roughly divided, according to radiation wavelength, into near-infrared, mid-infrared, and far-infrared imaging; the imaging process and the equipment used differ between bands. For near-infrared images, a visible-light color image and the corresponding near-infrared image can be acquired through the same lens by adding an optical filter; the two images then correspond at the pixel level, and conversion between them can be achieved by constructing a point-to-point mapping. Mid-infrared and far-infrared images, in contrast, can only be captured with a thermal imager, and the captured images have no pixel-level one-to-one correspondence with visible-light color images. In the prior art, coloring of mid-infrared images is mainly performed with neural networks. Judging from the colored results, objects in the scene are not colored well: the main objects cannot be effectively distinguished from the background, and false coloring occurs, that is, the color assigned to an object differs greatly from the color the object would likely have in reality and is perceptually very different.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a method for coloring mid-infrared images. First, a dictionary matching mid-infrared images to color images is constructed. Then, using this dictionary and the n-nearest-neighbor principle, the color-image patches that match each patch of the mid-infrared image to be colored are found and assembled into a color image, giving a preliminarily colored result. Finally, the preliminarily colored image is denoised with a total variation model to obtain the final result. The method distinguishes the main objects in the scene from the background well, and the coloring effect is natural.
The technical solution adopted by the invention comprises the following steps:
Step 1: establish a dictionary matching mid-infrared images to color images.
Step 1-1: select m color images of the same size and acquire the mid-infrared image corresponding to each color image; the acquired mid-infrared image has the same size as the color image. Each color image and its corresponding mid-infrared image form an image pair, giving m image pairs in total.
Step 1-2: split the color image of the k-th image pair (k = 1, …, m) into an R-channel image, a G-channel image, and a B-channel image according to its RGB values, and divide each channel image into p rectangular patches of size s1 × s2. Order the p patches of each channel image by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, again from left to right and from top to bottom, producing p column vectors per channel. Arrange the p column vectors of each channel into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches. The three channel images thus yield three two-dimensional matrices, denoted D_R^k, D_G^k, and D_B^k.
Divide the mid-infrared image of the k-th image pair into p rectangular patches of size s1 × s2 in the same way, order them from left to right and from top to bottom, unfold the pixels of each patch into a column vector, and arrange the p column vectors into a two-dimensional matrix in the same patch order; denote this matrix D_IR^k.
D_R^k, D_G^k, and D_B^k are defined as the R-channel dictionary, G-channel dictionary, and B-channel dictionary of the k-th image pair, respectively.
Step 1-3: apply the procedure of step 1-2 to all m image pairs to obtain the R-channel, G-channel, and B-channel dictionaries of each pair. Combine the dictionaries of the m image pairs channel by channel, concatenating them column-wise:

D_R = [D_R^1, D_R^2, …, D_R^m]
D_G = [D_G^1, D_G^2, …, D_G^m]
D_B = [D_B^1, D_B^2, …, D_B^m]
D_IR = [D_IR^1, D_IR^2, …, D_IR^m]

where D_R is the overall R-channel dictionary, D_G is the overall G-channel dictionary, and D_B is the overall B-channel dictionary. D_R, D_G, D_B, and D_IR are all two-dimensional matrices of the same size, and within each channel dictionary the column vector with a given column index corresponds to the column vector with the same index in D_IR.
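As a concrete illustration of steps 1-2 and 1-3, the minimal NumPy sketch below (the function names image_to_patch_columns and build_dictionaries are illustrative, not from the patent) cuts each channel image into s1 × s2 patches in left-to-right, top-to-bottom order, flattens every patch into a column vector, and concatenates the per-pair matrices column-wise into the overall dictionaries D_R, D_G, D_B, and D_IR.

```python
import numpy as np

def image_to_patch_columns(img, s1=10, s2=10):
    """Cut a single-channel image into s1 x s2 patches (left to right, top to
    bottom) and stack each patch, flattened row by row, as one column."""
    h, w = img.shape
    cols = []
    for r in range(0, h - s1 + 1, s1):
        for c in range(0, w - s2 + 1, s2):
            patch = img[r:r + s1, c:c + s2]
            cols.append(patch.reshape(-1))       # row-major flattening of the patch
    return np.stack(cols, axis=1)                # shape (s1*s2, p)

def build_dictionaries(color_images, ir_images, s1=10, s2=10):
    """Build the overall dictionaries from m matched (color, mid-infrared)
    image pairs by concatenating the per-pair patch matrices column-wise."""
    DR, DG, DB, DIR = [], [], [], []
    for rgb, ir in zip(color_images, ir_images):
        DR.append(image_to_patch_columns(rgb[:, :, 0], s1, s2))  # R channel
        DG.append(image_to_patch_columns(rgb[:, :, 1], s1, s2))  # G channel
        DB.append(image_to_patch_columns(rgb[:, :, 2], s1, s2))  # B channel
        DIR.append(image_to_patch_columns(ir, s1, s2))           # mid-infrared
    return np.hstack(DR), np.hstack(DG), np.hstack(DB), np.hstack(DIR)
```

Because the patches are extracted in the same order from every channel and from the mid-infrared image, a column index identifies corresponding patches across D_R, D_G, D_B, and D_IR, which is the correspondence the coloring step relies on.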
Step 2: color the mid-infrared image.
Step 2-1: divide the mid-infrared image to be colored into a rectangular patches of size s1 × s2 and order them by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, from left to right and from top to bottom, producing a column vectors. Arrange the a column vectors into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches; denote this matrix Y_IR, and denote the column vector generated by the i-th patch y_i^IR.
Step 2-2: using the n-nearest-neighbor principle, find the n column vectors of the matrix D_IR that are closest to y_i^IR, denoted d_1^IR, d_2^IR, …, d_n^IR. Using the overall R-channel, G-channel, and B-channel dictionaries, take the n column vectors of D_R, D_G, and D_B that have the same column indices as d_1^IR, …, d_n^IR; denote them d_1^R, …, d_n^R, d_1^G, …, d_n^G, and d_1^B, …, d_n^B, respectively. Form their weighted sums:

y_i^R = Σ_{j=1}^{n} w_j · d_j^R,  y_i^G = Σ_{j=1}^{n} w_j · d_j^G,  y_i^B = Σ_{j=1}^{n} w_j · d_j^B

where y_i^R, y_i^G, and y_i^B are the R-, G-, and B-channel patch column vectors corresponding to the column vector generated by the i-th patch of the mid-infrared image to be colored. Following the pixel order used in step 1-2 when unfolding each patch into a column vector (left to right, top to bottom), restore y_i^R, y_i^G, and y_i^B to R-, G-, and B-channel image patches.
Here w_j is the weight assigned to the j-th nearest-neighbor column vector; its computing formula is given in the original document only as an image and is not reproduced here.
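A sketch of step 2-2, assuming the dictionaries built by the previous sketch. The nearest-neighbor search follows the two-norm distance and sorting described in the preferred embodiment below; the weights, however, are an assumption (normalized inverse distances), because the patent's weight formula survives only as an image.

```python
import numpy as np

def colorize_patch(y_ir, D_IR, D_R, D_G, D_B, n=10, eps=1e-8):
    """Find the n columns of D_IR closest (two-norm) to the mid-infrared patch
    vector y_ir, then build the R, G and B patch vectors as weighted sums of
    the columns of D_R, D_G and D_B with the same column indices.

    NOTE: the normalized inverse-distance weights below are an assumed
    stand-in for the patent's weight formula, which is only given as an image."""
    dists = np.linalg.norm(D_IR - y_ir[:, None], axis=0)  # distance to every column
    idx = np.argsort(dists)[:n]                           # indices of the n nearest columns
    w = 1.0 / (dists[idx] + eps)
    w /= w.sum()                                          # weights sum to one
    y_r = D_R[:, idx] @ w
    y_g = D_G[:, idx] @ w
    y_b = D_B[:, idx] @ w
    return y_r, y_g, y_b
```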
Step 2-3: apply step 2-2 to every patch into which the mid-infrared image to be colored was divided, obtaining the R-, G-, and B-channel image patches corresponding to each mid-infrared patch. Stitch the R-, G-, and B-channel patches into three complete channel images, placing each patch at the position (left to right, top to bottom) it occupied when the mid-infrared image was divided in step 2-1, and stack the three channel images into a three-dimensional matrix. This yields the preliminarily colored color image Z.
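Step 2-3 is the inverse of the patch decomposition. The sketch below (again with illustrative names, reusing image_to_patch_columns and colorize_patch from the sketches above) reshapes the colorized patch vectors back into s1 × s2 blocks, places them at their original positions, and stacks the three channel images into the preliminary color image Z.

```python
import numpy as np

def columns_to_image(cols, h, w, s1=10, s2=10):
    """Inverse of image_to_patch_columns: place each column back as an
    s1 x s2 block, left to right, top to bottom."""
    img = np.zeros((h, w))
    k = 0
    for r in range(0, h - s1 + 1, s1):
        for c in range(0, w - s2 + 1, s2):
            img[r:r + s1, c:c + s2] = cols[:, k].reshape(s1, s2)
            k += 1
    return img

def colorize_image(ir_img, D_IR, D_R, D_G, D_B, n=10, s1=10, s2=10):
    """Colorize every patch of a mid-infrared image and assemble the
    preliminarily colored image Z of shape (H, W, 3)."""
    Y_IR = image_to_patch_columns(ir_img, s1, s2)
    channels = [[], [], []]                       # per-channel patch column vectors
    for i in range(Y_IR.shape[1]):
        for ch, vec in enumerate(colorize_patch(Y_IR[:, i], D_IR, D_R, D_G, D_B, n)):
            channels[ch].append(vec)
    h, w = ir_img.shape
    planes = [columns_to_image(np.stack(c, axis=1), h, w, s1, s2) for c in channels]
    return np.stack(planes, axis=-1)              # preliminary color image Z
```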
Step 3: denoise with a total variation model. An objective function is constructed (given in the original document as a formula image) in which X is the denoised color image obtained from coloring the mid-infrared image, ∇X is the differential of X, and λ is a penalty factor. A single gradient-descent iteration starting from the preliminarily colored image Z yields the final result, where d is the gradient-descent step size and ∇Z is the differential of Z; the update formula is likewise given in the original only as an image.
Preferably, the mid-infrared image corresponding to each color image in step 1-1 is acquired as follows: the same scene is photographed with an infrared camera and a visible-light camera, and an image pair consisting of a color image and a mid-infrared image is obtained after image undistortion, scaling, and alignment.
Preferably, in step 1-2, s1 = 10 and s2 = 10.
Preferably, n in the n-nearest-neighbor principle of step 2-2 equals 10.
Preferably, the n-nearest-neighbor procedure of step 2-2 is as follows: compute the distance between y_i^IR and every column vector of the matrix D_IR, the distance being the two-norm of the vector obtained by subtracting the two vectors; sort all computed distances in ascending order and take the column vectors of D_IR corresponding to the n smallest distances.
Preferably, in step 3, λ = 0 and d = 0.2.
Preferably, the differentiation in step 3 replaces the differential value with the differences between a pixel and the pixels in its eight-neighborhood.
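A minimal sketch of the denoising step under the preferred-embodiment choices (eight-neighborhood differences as the differential, λ = 0, step size d = 0.2). Because the patent's objective and update formulas survive only as images, the single smoothing step below, X = Z - d · diff(Z) applied per channel, is an assumed total-variation-style update, not the exact patented formula, and the function names are illustrative.

```python
import numpy as np

def eight_neighbor_diff(channel):
    """Sum of differences between each pixel and its eight neighbors, used as
    the discrete differential per the preferred embodiment (edge-padded)."""
    padded = np.pad(channel, 1, mode="edge")
    diff = np.zeros_like(channel, dtype=float)
    h, w = channel.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diff += channel - shifted
    return diff

def tv_denoise_one_step(Z, d=0.2, lam=0.0):
    """One gradient-descent-style smoothing step on the preliminary color
    image Z. ASSUMED update: X = Z - d * (diff(Z) + 2*lam*(X - Z)) per channel;
    with lam = 0 and X initialized at Z, the fidelity term drops out."""
    X = Z.astype(float).copy()
    for ch in range(Z.shape[-1]):
        grad = eight_neighbor_diff(X[..., ch]) + 2.0 * lam * (X[..., ch] - Z[..., ch])
        X[..., ch] -= d * grad
    return X
```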
Owing to the adoption of the above method for coloring mid-infrared images, the main objects in the image scene can be distinguished from the background after the mid-infrared image is colored, and the coloring effect is natural.
Drawings
FIG. 1 is a schematic diagram of a color image and the corresponding mid-infrared image;
FIG. 2 is the mid-infrared image to be colored with the method of the present invention;
FIG. 3 shows the three pairs of color and mid-infrared images selected for generating the dictionary when coloring FIG. 2;
FIG. 4 shows the R-, G-, and B-channel images produced for FIG. 2 by the method of the present invention;
FIG. 5 is the preliminarily colored image obtained after coloring FIG. 2 with the method of the present invention;
FIG. 6 shows the result of denoising the preliminarily colored image with the total variation model in the method of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
A method for coloring a mid-infrared image. The method comprises the following steps:
Step 1: establish a dictionary matching mid-infrared images to color images.
Step 1-1: select m color images of the same size and acquire the mid-infrared image corresponding to each color image; the acquired mid-infrared image has the same size as the color image. Each color image and its corresponding mid-infrared image form an image pair, giving m image pairs in total.
Step 1-2: split the color image of the k-th image pair (k = 1, …, m) into an R-channel image, a G-channel image, and a B-channel image according to its RGB values, and divide each channel image into p rectangular patches of size s1 × s2. Order the p patches of each channel image by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, again from left to right and from top to bottom, producing p column vectors per channel. Arrange the p column vectors of each channel into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches. The three channel images thus yield three two-dimensional matrices, denoted D_R^k, D_G^k, and D_B^k.
Divide the mid-infrared image of the k-th image pair into p rectangular patches of size s1 × s2 in the same way, order them from left to right and from top to bottom, unfold the pixels of each patch into a column vector, and arrange the p column vectors into a two-dimensional matrix in the same patch order; denote this matrix D_IR^k.
D_R^k, D_G^k, and D_B^k are defined as the R-channel dictionary, G-channel dictionary, and B-channel dictionary of the k-th image pair, respectively.
Step 1-3: apply the procedure of step 1-2 to all m image pairs to obtain the R-channel, G-channel, and B-channel dictionaries of each pair. Combine the dictionaries of the m image pairs channel by channel, concatenating them column-wise:

D_R = [D_R^1, D_R^2, …, D_R^m]
D_G = [D_G^1, D_G^2, …, D_G^m]
D_B = [D_B^1, D_B^2, …, D_B^m]
D_IR = [D_IR^1, D_IR^2, …, D_IR^m]

where D_R is the overall R-channel dictionary, D_G is the overall G-channel dictionary, and D_B is the overall B-channel dictionary. D_R, D_G, D_B, and D_IR are all two-dimensional matrices of the same size, and within each channel dictionary the column vector with a given column index corresponds to the column vector with the same index in D_IR.
Step 2: color the mid-infrared image.
Step 2-1: divide the mid-infrared image to be colored into a rectangular patches of size s1 × s2 and order them by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, from left to right and from top to bottom, producing a column vectors. Arrange the a column vectors into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches; denote this matrix Y_IR, and denote the column vector generated by the i-th patch y_i^IR.
Step 2-2: using the n-nearest-neighbor principle, find the n column vectors of the matrix D_IR that are closest to y_i^IR, denoted d_1^IR, d_2^IR, …, d_n^IR. Using the overall R-channel, G-channel, and B-channel dictionaries, take the n column vectors of D_R, D_G, and D_B that have the same column indices as d_1^IR, …, d_n^IR; denote them d_1^R, …, d_n^R, d_1^G, …, d_n^G, and d_1^B, …, d_n^B, respectively. Form their weighted sums:

y_i^R = Σ_{j=1}^{n} w_j · d_j^R,  y_i^G = Σ_{j=1}^{n} w_j · d_j^G,  y_i^B = Σ_{j=1}^{n} w_j · d_j^B

where y_i^R, y_i^G, and y_i^B are the R-, G-, and B-channel patch column vectors corresponding to the column vector generated by the i-th patch of the mid-infrared image to be colored. Following the pixel order used in step 1-2 when unfolding each patch into a column vector (left to right, top to bottom), restore y_i^R, y_i^G, and y_i^B to R-, G-, and B-channel image patches.
Here w_j is the weight assigned to the j-th nearest-neighbor column vector; its computing formula is given in the original document only as an image and is not reproduced here.
Step 2-3: apply step 2-2 to every patch into which the mid-infrared image to be colored was divided, obtaining the R-, G-, and B-channel image patches corresponding to each mid-infrared patch. Stitch the R-, G-, and B-channel patches into three complete channel images, placing each patch at the position (left to right, top to bottom) it occupied when the mid-infrared image was divided in step 2-1, and stack the three channel images into a three-dimensional matrix. This yields the preliminarily colored color image Z.
Step 3: denoise with a total variation model. An objective function is constructed (given in the original document as a formula image) in which X is the denoised color image obtained from coloring the mid-infrared image, ∇X is the differential of X, and λ is a penalty factor. A single gradient-descent iteration starting from the preliminarily colored image Z yields the final result, where d is the gradient-descent step size and ∇Z is the differential of Z; the update formula is likewise given in the original only as an image.
Example:
the embodiment relates to a method for acquiring a color image and a corresponding intermediate infrared image, which comprises the steps of adopting an infrared camera and a visible light camera to shoot and sample the same scene, and obtaining a series of image pairs matched with the color image and the intermediate infrared image after image distortion removal, zooming and alignment. Fig. 1 shows a pair of processed images, a color image on the left and a mid-infrared image on the right, and it can be seen that the position and size of the same object in different images are approximately the same. The selected mid-infrared image to be colored is shown in fig. 2.
1. According to the mid-infrared image to be colored in FIG. 2, three mid-infrared images whose content is similar to FIG. 2, together with their corresponding visible-light color images, are selected. FIG. 3 shows the three selected pairs, with the color images on the left and the mid-infrared images on the right. Since a color image has three channels, the same operation is carried out on each channel, and the dictionaries D_R, D_G, D_B, and D_IR are constructed from the three image pairs according to step 1. The patch size s1 × s2 is chosen as 10 × 10. Within each channel dictionary, the column vector with a given column index corresponds to the column vector with the same index in D_IR; this correspondence is the key to coloring the infrared image.
2. Color the mid-infrared image to be colored to obtain the preliminarily colored image.
The mid-infrared image to be colored is divided into patches of size 10 × 10; the column vector generated by the i-th patch is denoted y_i^IR. In D_IR, the 10 column vectors closest to y_i^IR are found and denoted d_1^IR, …, d_10^IR. Using the overall R-channel, G-channel, and B-channel dictionaries, the columns of D_R, D_G, and D_B with the same column indices are d_1^R, …, d_10^R, d_1^G, …, d_10^G, and d_1^B, …, d_10^B. These vectors are weighted and summed:

y_i^R = Σ_{j=1}^{10} w_j · d_j^R,  y_i^G = Σ_{j=1}^{10} w_j · d_j^G,  y_i^B = Σ_{j=1}^{10} w_j · d_j^B

giving the R-, G-, and B-channel image patches corresponding to the i-th patch. These steps are applied to every patch of the image to be colored; the resulting three-channel patch column vectors are converted back into 10 × 10 three-channel patches, which are stitched into R-, G-, and B-channel images, as shown in FIG. 4. The three channel images are then stacked into a three-dimensional matrix, giving the preliminarily colored image Z, as shown in FIG. 5.
3. Denoise the preliminarily colored image obtained in the previous step with the total variation model of step 3.
Taking λ = 0 and d = 0.2 and iterating only once gives the denoised image shown in FIG. 6.
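For illustration only, the following snippet runs the sketches above end to end with the parameters used in this example (three image pairs, 10 × 10 patches, n = 10, λ = 0, d = 0.2); the random arrays stand in for the real image pairs, and the function names are the illustrative ones introduced earlier, not code from the patent.

```python
import numpy as np

# Stand-in data: three matched (color, mid-infrared) pairs and one image to colorize.
color_imgs = [np.random.rand(100, 100, 3) for _ in range(3)]
ir_imgs = [np.random.rand(100, 100) for _ in range(3)]
ir_to_color = np.random.rand(100, 100)

D_R, D_G, D_B, D_IR = build_dictionaries(color_imgs, ir_imgs, s1=10, s2=10)
Z = colorize_image(ir_to_color, D_IR, D_R, D_G, D_B, n=10)   # preliminary coloring
X = tv_denoise_one_step(Z, d=0.2, lam=0.0)                   # denoised final result
```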
The simulation results show that the invention can color mid-infrared images so that objects and background are well separated, and the coloring effect is natural.

Claims (7)

1. A method for coloring a mid-infrared image, comprising the following steps:
Step 1: establish a dictionary matching mid-infrared images to color images.
Step 1-1: select m color images of the same size and acquire the mid-infrared image corresponding to each color image; the acquired mid-infrared image has the same size as the color image. Each color image and its corresponding mid-infrared image form an image pair, giving m image pairs in total.
Step 1-2: split the color image of the k-th image pair (k = 1, …, m) into an R-channel image, a G-channel image, and a B-channel image according to its RGB values, and divide each channel image into p rectangular patches of size s1 × s2. Order the p patches of each channel image by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, again from left to right and from top to bottom, producing p column vectors per channel. Arrange the p column vectors of each channel into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches. The three channel images thus yield three two-dimensional matrices, denoted D_R^k, D_G^k, and D_B^k.
Divide the mid-infrared image of the k-th image pair into p rectangular patches of size s1 × s2 in the same way, order them from left to right and from top to bottom, unfold the pixels of each patch into a column vector, and arrange the p column vectors into a two-dimensional matrix in the same patch order; denote this matrix D_IR^k.
D_R^k, D_G^k, and D_B^k are defined as the R-channel dictionary, G-channel dictionary, and B-channel dictionary of the k-th image pair, respectively.
Step 1-3: apply the procedure of step 1-2 to all m image pairs to obtain the R-channel, G-channel, and B-channel dictionaries of each pair. Combine the dictionaries of the m image pairs channel by channel, concatenating them column-wise:

D_R = [D_R^1, D_R^2, …, D_R^m]
D_G = [D_G^1, D_G^2, …, D_G^m]
D_B = [D_B^1, D_B^2, …, D_B^m]
D_IR = [D_IR^1, D_IR^2, …, D_IR^m]

where D_R is the overall R-channel dictionary, D_G is the overall G-channel dictionary, and D_B is the overall B-channel dictionary. D_R, D_G, D_B, and D_IR are all two-dimensional matrices of the same size, and within each channel dictionary the column vector with a given column index corresponds to the column vector with the same index in D_IR.
Step 2: color the mid-infrared image.
Step 2-1: divide the mid-infrared image to be colored into a rectangular patches of size s1 × s2 and order them by their position in the image, from left to right and from top to bottom. Unfold the pixels of each patch into a column vector, from left to right and from top to bottom, producing a column vectors. Arrange the a column vectors into a two-dimensional matrix whose columns, from left to right, follow the same order as the corresponding patches; denote this matrix Y_IR, and denote the column vector generated by the i-th patch y_i^IR.
Step 2-2: using the n-nearest-neighbor principle, find the n column vectors of the matrix D_IR that are closest to y_i^IR, denoted d_1^IR, d_2^IR, …, d_n^IR. Using the overall R-channel, G-channel, and B-channel dictionaries, take the n column vectors of D_R, D_G, and D_B that have the same column indices as d_1^IR, …, d_n^IR; denote them d_1^R, …, d_n^R, d_1^G, …, d_n^G, and d_1^B, …, d_n^B, respectively. Form their weighted sums:

y_i^R = Σ_{j=1}^{n} w_j · d_j^R,  y_i^G = Σ_{j=1}^{n} w_j · d_j^G,  y_i^B = Σ_{j=1}^{n} w_j · d_j^B

where y_i^R, y_i^G, and y_i^B are the R-, G-, and B-channel patch column vectors corresponding to the column vector generated by the i-th patch of the mid-infrared image to be colored. Following the pixel order used in step 1-2 when unfolding each patch into a column vector (left to right, top to bottom), restore y_i^R, y_i^G, and y_i^B to R-, G-, and B-channel image patches.
Here w_j is the weight assigned to the j-th nearest-neighbor column vector; its computing formula is given in the original document only as an image and is not reproduced here.
Step 2-3: apply step 2-2 to every patch into which the mid-infrared image to be colored was divided, obtaining the R-, G-, and B-channel image patches corresponding to each mid-infrared patch. Stitch the R-, G-, and B-channel patches into three complete channel images, placing each patch at the position (left to right, top to bottom) it occupied when the mid-infrared image was divided in step 2-1, and stack the three channel images into a three-dimensional matrix. This yields the preliminarily colored color image Z.
Step 3: denoise with a total variation model. An objective function is constructed (given in the original document as a formula image) in which X is the denoised color image obtained from coloring the mid-infrared image, ∇X is the differential of X, and λ is a penalty factor. A single gradient-descent iteration starting from the preliminarily colored image Z yields the final result, where d is the gradient-descent step size and ∇Z is the differential of Z; the update formula is likewise given in the original only as an image.
2. The method for coloring a mid-infrared image according to claim 1, wherein the mid-infrared image corresponding to each color image in step 1-1 is acquired as follows: the same scene is photographed with an infrared camera and a visible-light camera, and an image pair consisting of a color image and a mid-infrared image is obtained after image undistortion, scaling, and alignment.
3. The method for coloring a mid-infrared image according to claim 1, wherein in step 1-2, s1 = 10 and s2 = 10.
4. The method for coloring a mid-infrared image according to claim 1, wherein n in the n-nearest-neighbor principle of step 2-2 equals 10.
5. The method for coloring a mid-infrared image according to claim 1, wherein the n-nearest-neighbor procedure of step 2-2 is as follows: compute the distance between y_i^IR and every column vector of the matrix D_IR, the distance being the two-norm of the vector obtained by subtracting the two vectors; sort all computed distances in ascending order and take the column vectors of D_IR corresponding to the n smallest distances.
6. The method for coloring a mid-infrared image according to claim 1, wherein λ = 0 and d = 0.2 in step 3.
7. The method for coloring a mid-infrared image according to claim 1, wherein the differentiation in step 3 replaces the differential value with the differences between a pixel and the pixels in its eight-neighborhood.
CN202010724946.8A 2020-07-24 2020-07-24 Method for coloring intermediate infrared image Expired - Fee Related CN111815732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010724946.8A CN111815732B (en) 2020-07-24 2020-07-24 Method for coloring intermediate infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010724946.8A CN111815732B (en) 2020-07-24 2020-07-24 Method for coloring intermediate infrared image

Publications (2)

Publication Number Publication Date
CN111815732A true CN111815732A (en) 2020-10-23
CN111815732B CN111815732B (en) 2022-04-01

Family

ID=72861420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010724946.8A Expired - Fee Related CN111815732B (en) 2020-07-24 2020-07-24 Method for coloring intermediate infrared image

Country Status (1)

Country Link
CN (1) CN111815732B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140340515A1 (en) * 2013-05-14 2014-11-20 Panasonic Corporation Image processing method and system
US20180367744A1 (en) * 2015-12-14 2018-12-20 Sony Corporation Image sensor, image processing apparatus and method, and program
CN106548467A (en) * 2016-10-31 2017-03-29 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN108154494A (en) * 2017-12-25 2018-06-12 北京航空航天大学 A kind of image fusion system based on low-light and infrared sensor
CN108876723A (en) * 2018-06-25 2018-11-23 大连海事大学 A kind of construction method of the color background of gray scale target image
CN111080566A (en) * 2019-12-12 2020-04-28 太原科技大学 Visible light and infrared image fusion method based on structural group double-sparse learning
CN111369486A (en) * 2020-04-01 2020-07-03 浙江大华技术股份有限公司 Image fusion processing method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUAN, YIHUI等: "Objective quality evaluation of visible and infrared color fusion image", 《OPTICAL ENGINEERING》 *
张鹏辉: "红外、微光_可见光图像融合算法研究", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑(月刊)》 *
徐萌兮: "红外与可见光数字图像融合技术研究", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑(月刊)》 *
陈玉春: "多源图像融合算法研究", 《中国优秀博硕士学位论文全文数据库(博士)信息科技辑(月刊)》 *

Also Published As

Publication number Publication date
CN111815732B (en) 2022-04-01

Similar Documents

Publication Publication Date Title
Bavirisetti et al. Multi-sensor image fusion based on fourth order partial differential equations
CN110660088B (en) Image processing method and device
Yoon et al. Light-field image super-resolution using convolutional neural network
CN111145131A (en) Infrared and visible light image fusion method based on multi-scale generation type countermeasure network
CN101443817B (en) Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene
CN104867124B (en) Multispectral and panchromatic image fusion method based on the sparse Non-negative Matrix Factorization of antithesis
Hu et al. Convolutional sparse coding for RGB+ NIR imaging
Connah et al. Spectral edge image fusion: Theory and applications
CN111462128A (en) Pixel-level image segmentation system and method based on multi-modal spectral image
CN114757831A (en) High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
CN113486697B (en) Forest smoke and fire monitoring method based on space-based multimode image fusion
Takatani et al. One-shot hyperspectral imaging using faced reflectors
John et al. Analysis of various color space models on effective single image super resolution
Ulhaq et al. FACE: Fully automated context enhancement for night-time video sequences
CN115546505A (en) Unsupervised monocular image depth estimation method based on deep learning
Abdelhamed et al. Leveraging the availability of two cameras for illuminant estimation
Haq et al. Automated multi-sensor color video fusion for nighttime video surveillance
CN110335197A (en) Based on the intrinsic demosaicing methods of non local statistics
Abbas et al. Joint unmixing and demosaicing methods for snapshot spectral images
CN111815732B (en) Method for coloring intermediate infrared image
CN110580684A (en) image enhancement method based on black-white-color binocular camera
Yamaguchi et al. Image demosaicking via chrominance images with parallel convolutional neural networks
CN110992266B (en) Demosaicing method and demosaicing system based on multi-dimensional non-local statistical eigen
CN114529488A (en) Image fusion method, device and equipment and storage medium
CN114972625A (en) Hyperspectral point cloud generation method based on RGB spectrum super-resolution technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220401