CN112788249A - Image fusion method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112788249A
Authority
CN
China
Prior art keywords
image
exposure
infrared
images
frame
Prior art date
Legal status
Granted
Application number
CN202110076502.2A
Other languages
Chinese (zh)
Other versions
CN112788249B (en)
Inventor
范蒙
俞海
浦世亮
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110076502.2A
Publication of CN112788249A
Application granted
Publication of CN112788249B
Legal status: Active
Anticipated expiration

Classifications

    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N23/12 Cameras or camera modules comprising electronic image sensors, for generating image signals from different wavelengths with one sensor only
    • H04N5/33 Transforming infrared radiation
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G02B13/14 Optical objectives specially designed for use with infrared or ultraviolet radiation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image fusion method and apparatus, an electronic device, and a computer-readable storage medium. The method first obtains each frame image produced by at least two exposures within one image acquisition period; determines an infrared-sensitive luminance image based on one of the obtained frames; determines a visible light color image based on the remaining frames; and finally fuses the infrared-sensitive luminance image with the visible light color image to obtain a fused image. In this scheme, the frames obtained by the at least two exposures within one image acquisition period can all be captured by a single image sensor, so image acquisition and fusion can be completed by any device that has at least one image sensor, improving image quality under low illumination. The scheme therefore offers good device adaptability and is convenient to apply.

Description

Image fusion method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image acquisition technologies, and in particular, to an image fusion method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The fusion in the image fusion technique can be understood as: fusing a visible light image and a non-visible light image such as an infrared image to obtain a fused image; the fused image is a dual-band image, and compared with any one of a visible light image and a non-visible light image belonging to a single band, the fused image can embody more image information.
In the prior art, image fusion mainly refers to the light splitting fusion technique, whose basic flow is as follows: incident light is split into a visible light signal and a non-visible light signal by a light splitting device such as a light splitting prism; two sensors then generate a visible light image and a non-visible light image from the respective signals; finally, the two images are fused to obtain a fused image.
It can be understood that the light splitting fusion technique requires a device with two image sensors; a device with only one image sensor cannot complete the light splitting fusion process. The device adaptability of the prior-art light splitting fusion technique is therefore poor.
Disclosure of Invention
Embodiments of the present invention provide an image fusion method, an image fusion device, an electronic device, and a computer-readable storage medium, so as to improve device adaptability of an image fusion technology. The specific technical scheme is as follows:
to achieve the above object, in a first aspect, an embodiment of the present invention provides an image fusion method, where the method includes:
obtaining each frame image obtained by at least two exposures in one image acquisition period;
determining an infrared sensing brightness image based on one of the frame images;
determining a visible light color image based on the rest of the images in each frame of image;
and fusing the infrared sensing brightness image and the visible light color image to obtain a fused image.
Optionally, the method is applied to an image fusion device, and the frames of images are collected by the image fusion device;
the method further comprises the following steps:
performing infrared light supplement within exposure time corresponding to the first preset exposure in the image acquisition period;
the step of determining the infrared-sensing brightness image based on one of the frames of images comprises:
and determining an infrared sensing brightness image based on the image obtained by the first preset exposure.
Optionally, the exposure parameter corresponding to the first preset exposure is not greater than the target maximum value,
the exposure parameters are exposure duration and/or gain, and the target maximum value is the maximum value of the exposure parameters corresponding to the other exposures except the first preset exposure.
Optionally, the step of performing infrared light supplement within the exposure time corresponding to the first preset exposure in the image acquisition period includes:
according to the following control mode, carrying out infrared supplementary lighting within the exposure time corresponding to the first preset exposure in the image acquisition period:
the starting time of the infrared supplementary lighting is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared supplementary lighting is not later than the exposure ending time of the first preset exposure.
Optionally, when the number of exposures in the image acquisition period is greater than two, the first preset exposure is a first exposure or a last exposure of the at least two exposures.
Optionally, the step of determining the infrared-sensing brightness image based on one of the frame images includes:
and performing demosaicing processing on one of the frames of images, and generating an infrared-sensing brightness image by using the demosaiced frame of image.
Optionally, the number of the rest images in each frame image is 1,
the step of determining a visible light color image based on the remaining images of the frames of images includes:
and performing infrared removal processing on the rest images in each frame of image to obtain a visible light color image.
Optionally, the step of performing de-infrared processing on the remaining images in each frame of image to obtain a visible light color image includes:
under the condition that a target image contains an IR channel, interpolating the IR channel of the target image to generate the target image after interpolation processing, wherein the target image is the rest of images in each frame of image;
updating each pixel in the target image after interpolation processing according to the following mode to obtain a visible light color image:
if the R value of the pixel exists, the R value of the pixel is updated as follows: the difference between the R value of the pixel and the IR parameter value of the pixel; if the pixel has a G value, updating the G value of the pixel as follows: the difference between the G value of the pixel and the IR parameter value of the pixel; if the pixel has a B value, updating the B value of the pixel as follows: the difference between the B value of the pixel and the IR parameter value of the pixel; the IR parameter value of the pixel is the product of the IR value of the pixel and a preset correction value.
Optionally, the number of the other images in each frame of image is at least 2, and the exposure durations corresponding to the other images in each frame of image are different;
the step of determining a visible light color image based on the remaining images of the frames of images includes:
carrying out wide dynamic synthesis processing on the rest images in each frame of image to obtain a wide dynamic image;
and performing infrared removal processing on the wide dynamic image to obtain a visible light color image.
Optionally, the method is applied to an image fusion device, and the frames of images are collected by the image fusion device;
an optical filter is arranged on an optical lens of the image fusion device, and a spectral region filtered by the optical filter comprises [T1, T2]; wherein 600 nm ≤ T1 ≤ 800 nm, 750 nm ≤ T2 ≤ 1100 nm, and T1 < T2.
In a second aspect, an embodiment of the present invention provides an image fusion apparatus, where the apparatus includes:
the acquisition module is used for acquiring each frame of image obtained by at least twice exposure in one image acquisition period;
the first determining module is used for determining the infrared sensing brightness image based on one frame of the frames of images;
the second determining module is used for determining a visible light color image based on the rest images in each frame of image;
and the fusion module is used for fusing the infrared sensing brightness image and the visible light color image to obtain a fusion image.
Optionally, the apparatus is applied to an image fusion device, and each frame of image is acquired by the image fusion device;
the device further comprises:
the infrared light supplementing module is used for performing infrared light supplementing within exposure time corresponding to the first preset exposure in the image acquisition period;
the first determining module is specifically configured to:
and determining an infrared sensing brightness image based on the image obtained by the first preset exposure.
Optionally, the exposure parameter corresponding to the first preset exposure is not greater than the target maximum value,
the exposure parameters are exposure duration and/or gain, and the target maximum value is the maximum value of the exposure parameters corresponding to the other exposures except the first preset exposure.
Optionally, the infrared light supplement module is specifically configured to:
according to the following control mode, carrying out infrared supplementary lighting within the exposure time corresponding to the first preset exposure in the image acquisition period:
the starting time of the infrared supplementary lighting is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared supplementary lighting is not later than the exposure ending time of the first preset exposure.
Optionally, when the number of exposures in the image acquisition period is greater than two, the first preset exposure is a first exposure or a last exposure of the at least two exposures.
Optionally, the first determining module is specifically configured to:
and performing demosaicing processing on one of the frames of images, and generating an infrared-sensing brightness image by using the demosaiced frame of image.
Optionally, the number of the rest images in each frame image is 1,
the second determining module is specifically configured to:
and performing infrared removal processing on the rest images in each frame of image to obtain a visible light color image.
Optionally, the second determining module includes:
the interpolation submodule is used for interpolating the IR channel of the target image under the condition that the target image contains the IR channel to generate the target image after interpolation processing, wherein the target image is the rest of images in each frame of image;
and the updating submodule is used for updating each pixel in the target image after the interpolation processing according to the following mode to obtain a visible light color image:
if the R value of the pixel exists, the R value of the pixel is updated as follows: the difference between the R value of the pixel and the IR parameter value of the pixel; if the pixel has a G value, updating the G value of the pixel as follows: the difference between the G value of the pixel and the IR parameter value of the pixel; if the pixel has a B value, updating the B value of the pixel as follows: the difference between the B value of the pixel and the IR parameter value of the pixel; the IR parameter value of the pixel is the product of the IR value of the pixel and a preset correction value.
Optionally, the number of the other images in each frame of image is at least 2, and the exposure durations corresponding to the other images in each frame of image are different;
the second determining module includes:
the first processing submodule is used for carrying out wide dynamic synthesis processing on the rest images in each frame of image to obtain a wide dynamic image;
and the second processing submodule is used for carrying out infrared removal processing on the wide dynamic image to obtain a visible light color image.
Optionally, the apparatus is applied to an image fusion device, and each frame of image is acquired by the image fusion device;
an optical filter is arranged on an optical lens of the image fusion device, and a spectral region filtered by the optical filter comprises [T1, T2]; wherein 600 nm ≤ T1 ≤ 800 nm, 750 nm ≤ T2 ≤ 1100 nm, and T1 < T2.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory,
the memory is used for storing program codes;
and the processor is used for realizing the method steps of any image fusion method when the program codes stored in the memory are executed.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of any one of the image fusion methods described above.
As can be seen from the above, in the scheme provided by the embodiment of the present invention, each frame image obtained by at least two exposures within one image acquisition period is obtained first; an infrared-sensitive luminance image is determined based on one of the obtained frames; a visible light color image is determined based on the remaining frames; and finally the two are fused to obtain a fused image. Compared with the prior art, the frames obtained by the at least two exposures in one acquisition period can all be captured by a single image sensor, so image acquisition and fusion can be completed by any device with at least one image sensor, improving image quality under low illumination; the scheme therefore has good device adaptability and is convenient to apply. Moreover, a device that integrates acquisition and fusion using this scheme may contain only one sensor and needs no light splitting component, so its structure is simple and its cost is low.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an RGB-IR image sensor according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an image fusion method according to another embodiment of the present invention;
FIG. 4 is a graphical illustration of spectral response according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a relationship between exposure and infrared light supplement according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating another relationship between exposure and infrared fill-in light according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image fusion apparatus according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, technical terms related to the present document will be briefly described below.
An image acquisition period, as used herein, is the time period corresponding to the frames obtained by multiple exposures, and is usually short, for example 40 ms (milliseconds). Taking an image sensor as an example: the sensor generates one image from the incident light signal of each exposure, so multiple exposures yield multiple frames; if one fused image is obtained from those frames, the sum of the exposure times corresponding to the frames may constitute the image acquisition period.
In addition, for video capture, each video frame can be regarded as a fused image in the sense of this document; that is, each video frame is obtained by fusing several original frames imaged by the image sensor. In the video field, the image acquisition period may therefore be: the time from the exposure start of the first original frame corresponding to one video frame to the exposure start of the first original frame corresponding to the next video frame.
Visible light is electromagnetic radiation that can be perceived by the human eye; the visible spectrum has no precise boundary, but is generally taken as wavelengths of 400 nm to 760 nm (nanometers). Infrared light is electromagnetic radiation with wavelengths of 760 nm to 1 mm (millimeter) that is invisible to the human eye.
The visible light color image may be a color image that senses only visible light signals; such a color image responds only to the visible band.
The infrared-sensitive luminance image may be a luminance image that senses infrared light signals. It should be noted that it is not limited to sensing only infrared light signals; it may also sense light signals in other wavelength bands.
In order to solve the above mentioned problems in the background art, embodiments of the present invention provide an image fusion method, an image fusion device, an electronic device, and a computer-readable storage medium, so as to improve the device adaptability of the image fusion technology.
First, an image fusion method provided by an embodiment of the present invention is described in detail below.
The image fusion method provided by the embodiment of the invention can be applied to image fusion equipment, and the image fusion equipment can be equipment with an image acquisition function, such as a camera; in addition, it is reasonable that the image fusion device may also be a device that does not have an image capturing function but communicates with the image capturing device, and that can receive the image captured and transmitted by the image capturing device.
As shown in fig. 1, an image fusion method provided in an embodiment of the present invention includes:
s101: each frame image obtained by at least two exposures in one image acquisition period is obtained.
Regarding the obtaining form of each frame image in step S101, as mentioned above, in one case, the image fusion device may be a device having an image capturing function, and then step S101 may be: in one image acquisition period, each frame of image is acquired through at least two exposure acquisitions, that is, each frame of image acquired in step S101 is acquired by the image fusion device itself. For example, if the image fusion device is a camera that performs 3 exposures in one image capturing period, the camera captures 3 frames of images.
It should be noted that, in the embodiment of the present invention, the frame images obtained in one image acquisition period need not be imaged by a single image sensor within the image fusion device, for example in a device with two cameras. In this case, step S101 may be: receiving the frame images acquired by other equipment through at least two exposures in one image acquisition period.
As mentioned above, in another case, if the image fusion device is a device that does not have an image capturing function but communicates with another image capturing device, step S101 may be: and receiving each frame of image which is sent by the image acquisition equipment and acquired by at least two times of exposure in one image acquisition period. For example, a monitoring front-end camera acquires 3 frames of images in an image acquisition period, and sends the 3 frames of images to a monitoring rear-end image fusion device, that is, the monitoring rear-end image fusion device acquires the 3 frames of images.
The number of images in each obtained frame image is the same as the number of exposures in one image acquisition period, and each exposure can only obtain one frame of exposure image. The number of exposures corresponding to one image capturing period may be preset, for example, the preset number of exposures is 2, and the step S101 may specifically be to obtain two frames of images obtained through two exposures in one image capturing period.
It should be noted that, in the embodiment of the present invention, each obtained frame image is produced by one image sensor, which may be an ordinary image sensor; however, to ensure the obtained frames contain as large an infrared component as possible, the image sensor may be an RGB-IR image sensor, such as the OV4682 RGB-IR image sensor manufactured by the American semiconductor company OmniVision.
For example, assuming that the image fusion device is provided with an image acquisition unit to acquire the frame images, the image sensor used in the image acquisition unit may be an RGB-IR image sensor.
S102: and determining the infrared sensing brightness image based on one frame of the obtained frame images.
As an optional implementation manner of the embodiment of the present invention, one frame of image may be randomly selected from the obtained frames of images, and the selected image is used to determine the infrared-sensitive luminance image, for example, 3 frames of images are collected in one image collection period, and the image fusion device randomly selects the second frame of image to generate the infrared-sensitive luminance image; as another optional implementation manner of the embodiment of the present invention, a certain frame of image may also be preset to determine the infrared-sensing luminance image, for example, 3 frames of images are collected in one image collection period, and the collected last frame of image is preset to be an image used for determining the infrared-sensing luminance image.
In addition, the step of determining the infrared-sensing brightness image (S102) based on one of the obtained frame images may be to directly use one of the obtained frame images to generate the infrared-sensing brightness image, but the quality of the image captured by the image sensor is usually poor, which may result in poor quality of the directly generated infrared-sensing brightness image; therefore, in order to ensure the image quality of the infrared-sensitive luminance image, one of the obtained frame images may be subjected to image processing to obtain an image after the image processing, and then the image after the image processing is used to generate the infrared-sensitive luminance image.
As an optional manner of the embodiment of the present invention, in order to obtain a clear infrared luminance image with real image details, the image processing manner may be demosaicing, that is, one of the obtained frames of images may be demosaiced, and then the demosaiced frame of image is generated into the infrared luminance image. That is, the step of determining the infrared-sensitive luminance image (S102) based on one of the obtained frame images may include:
step X: one of the obtained frame images is demosaiced, and the demosaiced frame image is used for generating an infrared-sensitive brightness image.
As will be understood by those skilled in the art, in an image directly output by an image sensor, the channel signals are distributed in an interleaved mosaic. For an RGB-IR image sensor, as shown in fig. 2, the R (red), G (green), B (blue), and IR (infrared) channel signals are interleaved; when such a raw image is viewed enlarged, a mosaic pattern is visible and definition is poor, so demosaicing is required to generate an image with real detail.
For convenience of description, the selected frame is referred to here as the image to be processed. The step of demosaicing this frame and generating the infrared-sensitive luminance image from the demosaiced frame may include the following steps 1 and 2:
step 1: and respectively interpolating R, G, B and the IR channel of the image to be processed to obtain an R value, a G value, a B value and an IR value which respectively correspond to each pixel in the image to be processed.
Specifically, the interpolation method used for performing interpolation in step 1 may be a bilinear interpolation algorithm, a bicubic interpolation algorithm, or the like, and the interpolation algorithm used in this embodiment of the present invention is not limited.
Step 2: average the R, G, B, and IR channel values at each pixel of the interpolated image to be processed to obtain the demosaiced infrared-sensitive luminance image.
That is, step 2 yields an infrared-sensitive luminance image with the same resolution as the input that contains only luminance signals, where the luminance value of each pixel is the average of the corresponding channel values in the image to be processed. Taking an image from the RGB-IR image sensor as an example, the luminance value of each pixel is the average of that pixel's R, G, B, and IR channel values; for example, the luminance value at pixel coordinate (x, y) in the infrared-sensitive luminance image equals the average of the R, G, B, and IR channel values at pixel coordinate (x, y) in the image to be processed.
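As an illustration of steps 1 and 2, the following is a minimal numpy sketch, assuming a 2x2 RGB-IR mosaic layout and even image dimensions, and substituting nearest-neighbor replication for the bilinear or bicubic interpolation mentioned above; all names are illustrative and not from the patent:

```python
import numpy as np

# Assumed 2x2 RGB-IR mosaic layout (illustrative; real sensor layouts vary):
#   R  G
#   IR B
MOSAIC = {'R': (0, 0), 'G': (0, 1), 'IR': (1, 0), 'B': (1, 1)}

def interpolate_channel(raw, offset):
    """Expand one channel's sparse mosaic samples to full resolution by
    nearest-neighbor replication (a crude stand-in for the bilinear or
    bicubic interpolation the text mentions). Assumes even dimensions."""
    ys, xs = offset
    samples = raw[ys::2, xs::2].astype(np.float32)
    return np.kron(samples, np.ones((2, 2), dtype=np.float32))

def ir_luminance_image(raw):
    """Step 1: interpolate the R, G, B, and IR channels; step 2: average
    the four channel values at each pixel to get the infrared-sensitive
    luminance image at the input resolution."""
    planes = [interpolate_channel(raw, MOSAIC[c]) for c in ('R', 'G', 'B', 'IR')]
    return np.mean(planes, axis=0)
```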
Of course, the above step 1 and step 2 are only exemplary illustrations of the step X, and do not constitute specific limitations to the embodiments of the present invention, and those skilled in the art may complete the step X based on other specific technical means.
S103: based on the remaining images in each of the obtained frame images, a visible light color image is determined.
First, the remaining images are the images obtained by removing one frame of image used for determining the infrared-sensitive luminance image in step S102 from the obtained frame of images. For example, each of the obtained frame images includes images a to c, where image a is used to determine the infrared-sensitive luminance image, and images b and c are the rest of the images in step S103; for another example, each frame of image obtained as described above includes images d and e, where the image d is used to determine the infrared-sensitive luminance image, and the image e is the remaining image in step S103.
The visible light color image determined in step S103 is an image that does not include an infrared component, and therefore, it is necessary to perform de-infrared processing on the remaining images in each frame of the obtained image to obtain a visible light color image with a true color reduction degree. It is to be understood that the number of the remaining images involved in step S103 may be 1, or may be at least two, and therefore, in an embodiment of the present invention, in one case, when the number of the remaining images in each of the obtained frame images is 1, the step of determining the visible light color image (S103) based on the remaining images in each of the obtained frame images may include:
and performing infrared removal processing on the rest images in the obtained frame images to obtain visible light color images.
And after the rest images in the obtained frame images are subjected to infrared removal processing, removing infrared components in the images to obtain visible light color images. Of course, the method for performing de-infrared processing on an image may refer to the prior art, and embodiments of the present invention may not be limited to the specific implementation of the de-infrared processing.
As an alternative implementation manner in this case, as shown in fig. 3, on the basis of the embodiment of the method shown in fig. 1, the step of performing the de-infrared processing on the remaining images in each obtained frame image to obtain the visible light color image may include:
s1031: and under the condition that the target image contains the IR channel, interpolating the IR channel of the target image to generate the target image after interpolation, wherein the target image is the rest of the images in each frame of image.
For example, if the image sensor that images the target image is an RGB-IR image sensor, the target image includes an IR channel; it can be understood that after the IR channel of the target image is interpolated, each pixel in the interpolated target image corresponds to an IR value. Similarly, the interpolation method used in step S1031 may be a bilinear interpolation algorithm, a bicubic interpolation algorithm, or the like; the interpolation algorithm is not limited in the embodiment of the present invention.
S1032: and updating each pixel in the target image after the interpolation processing according to the following mode to obtain a visible light color image:
if the R value of the pixel exists, the R value of the pixel is updated as follows: the difference between the R value of the pixel and the IR parameter value of the pixel; if the pixel has a G value, updating the G value of the pixel as follows: the difference between the G value of the pixel and the IR parameter value of the pixel; if the pixel has a B value, updating the B value of the pixel as follows: the difference between the B value of the pixel and the IR parameter value of the pixel; the IR parameter value of the pixel is the product of the IR value of the pixel and a preset correction value.
The preset correction value may be any integer or decimal from 0 to 1024, and the specific value of the preset correction value may be set according to the actual situation. In general, the preset correction value may be set to 1, and step S1032 may specifically be: and updating each pixel in the target image after the interpolation processing according to the following mode to obtain a visible light color image: if the R value of the pixel exists, the R value of the pixel is updated as follows: the difference between the R value of the pixel and the IR value of the pixel; if the pixel has a G value, updating the G value of the pixel as follows: the difference between the G value of the pixel and the IR value of the pixel; if the pixel has a B value, updating the B value of the pixel as follows: the difference between the B value of the pixel and the IR value of the pixel. Of course, it will be understood by those skilled in the art that the value of the preset correction value is not limited thereto.
Specifically, after step S1031 is executed, each pixel of the target image corresponds to an IR value, but the image fusion device has not interpolated the R channel, G channel, and B channel of the target image, so that the pixel in the target image may correspond to only an IR value, or have an R value, a G value, or a B value in addition to an IR value.
For example, assuming that the preset correction value is 1, and the IR parameter value of the pixel is the IR value of the pixel, step S1032 can be understood as:
for each pixel in the target image after the interpolation processing, updating is carried out according to the following modes:
if the pixel has an R value, update the R value of the pixel to the difference between its R value and its IR value; if the pixel has a G value, update the G value of the pixel to the difference between its G value and its IR value; if the pixel has a B value, update the B value of the pixel to the difference between its B value and its IR value; of course, if only the IR value exists for the pixel, no update is done for that pixel.
At this time, a color image having only three channels of RGB can be obtained, which can be regarded as a visible light color image.
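A minimal sketch of the subtraction in step S1032 follows, assuming for simplicity that the R, G, B, and IR planes have already been interpolated to full resolution (the patent itself updates only the channel values actually present at each pixel) and taking the preset correction value as an argument; the zero clamp is our assumption, not from the patent:

```python
import numpy as np

def remove_infrared(r, g, b, ir, correction=1.0):
    """Subtract the per-pixel IR parameter value (the IR value times the
    preset correction value) from each of the R, G, and B planes.
    Clamping at zero avoids negative pixel values (our addition)."""
    ir_param = ir.astype(np.float32) * correction
    r_out = np.clip(r.astype(np.float32) - ir_param, 0.0, None)
    g_out = np.clip(g.astype(np.float32) - ir_param, 0.0, None)
    b_out = np.clip(b.astype(np.float32) - ir_param, 0.0, None)
    return r_out, g_out, b_out
```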
In addition, after the de-infrared processing, not every pixel of the processed image has all of the R, G, and B values; therefore, to improve image quality, the R, G, and B channels of the de-infrared image may be further interpolated so that every pixel has all three values, yielding the visible light color image.
Similarly, the interpolation methods used for interpolating the R channel, the G channel, and the B channel of the other images in each frame of image may be a bilinear interpolation algorithm, a bicubic interpolation algorithm, and the like, and the interpolation algorithm used in this embodiment of the present invention is not limited.
In addition, in addition to the implementation shown in fig. 3, other implementations provided in the prior art may also be used to perform the de-infrared processing on the other images described herein, and the embodiment of the present invention is not limited herein.
In another case, when the number of the remaining images in each of the obtained frame images is at least 2, the exposure time lengths corresponding to the remaining images in each of the obtained frame images are different.
It is to be understood that the remaining images must have different exposure durations so that wide dynamic synthesis can produce a wide dynamic image; specifically, a control unit may be provided in the image fusion device to control the exposure duration of each obtained frame. For example, suppose the image fusion device collects 3 frames and the first frame is used to determine the infrared-sensitive luminance image; the remaining two frames may then have exposure times of 32 ms for the second frame and 2 ms for the third frame.
In this case, the step of determining the visible light color image (S103) based on the remaining images in each of the obtained frame images may include the steps of a and b:
step a: and performing wide dynamic synthesis processing on the rest images in the obtained frame images to obtain wide dynamic images.
A high dynamic range (HDR) image is also known as a wide dynamic range image; compared with a low dynamic range image, it avoids local overexposure and preserves more image detail. Therefore, in the embodiment of the present invention, to obtain a visible light color image with more detail, the remaining frames may be combined by wide dynamic synthesis to obtain a wide dynamic image. The specific implementation of wide dynamic synthesis of multiple frames belongs to the prior art and is not detailed here.
Step b: and performing infrared removal processing on the wide dynamic image to obtain a visible light color image.
Similarly, the specific implementation manner of performing the de-infrared processing on the wide dynamic image may refer to the specific implementation manner of performing the de-infrared processing on the remaining image of one frame in the embodiment of the method shown in fig. 3, which is not described herein again in the embodiment of the present invention.
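Since the patent defers wide dynamic synthesis to the prior art, the following is only a naive exposure-fusion sketch of step a, assuming 8-bit input frames of the same scene; it is not the method the patent prescribes:

```python
import numpy as np

def wide_dynamic_synthesis(frames, exposure_times):
    """Merge differently exposed frames into one wide dynamic image:
    normalize each frame by its exposure time to a rough radiance
    estimate, then blend with weights favoring mid-range pixels."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros(frames[0].shape, dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = 1.0 - 2.0 * np.abs(x - 0.5)   # hat weight, peaks at mid-gray
        acc += w * (x / t)                # per-frame radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-6)   # linear-domain wide dynamic image
```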
S104: and fusing the determined infrared sensing brightness image and the visible light color image to obtain a fused image.
In the embodiment of the present invention, the implementation manner of fusing the infrared-sensing luminance image and the visible-light color image may be various, and as an implementation manner of the embodiment of the present invention, the step of fusing the determined infrared-sensing luminance image and the visible-light color image to obtain the fused image may include the following steps a1 to a 4:
step a 1: the luminance signal of each pixel in the visible light color image is calculated by the following formula:
Y=(R+G+B)/3;
in the formula, Y represents a luminance signal value of a pixel in the visible light color image, R represents an R channel value of a pixel corresponding to Y, G represents a G channel value of a pixel corresponding to Y, and B represents a B channel value of a pixel corresponding to Y.
Step a 2: for each pixel in the visible light color image, calculate the ratios of the pixel's R, G, and B channel values to its luminance signal value Y, i.e., K1 = R/Y, K2 = G/Y, and K3 = B/Y.
Step a 3: perform color noise reduction on the K1, K2, and K3 values of all pixels in the visible light color image, for example by Gaussian filtering, to obtain the denoised K1', K2', and K3' for each pixel.
Step a 4: and (3) fusing the brightness signal value Y 'of each pixel in the infrared sensing brightness image with K1', K2 'and K3' of the corresponding pixel in the visible light color image by adopting the following formula to obtain a fused image:
R’=K1’*Y’;
G’=K2’*Y’;
B’=K3’*Y’;
in the formula, R ', G ' and B ' respectively represent R channel value, G channel value and B channel value of pixels in the fused image; k1 ', K2 ' and K3 ' respectively represent K1, K2 and K3 of corresponding pixels in the visible light color image after the color noise reduction processing; y' represents the luminance signal value of the corresponding pixel in the infrared-sensitive luminance image.
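Steps a1 to a4 can be condensed into a short numpy-style sketch; the Gaussian color denoising of step a3 is passed in as an optional callable, and the epsilon guard against division by zero is our addition:

```python
def fuse_ratio(r, g, b, y_ir, smooth=None):
    """Steps a1-a4: Y = (R+G+B)/3; ratios K1 = R/Y, K2 = G/Y, K3 = B/Y;
    optional color denoising of the ratios; then R' = K1'*Y', etc.,
    where y_ir is the infrared-sensitive luminance image Y'."""
    eps = 1e-6                                                # guard against Y == 0
    y = (r + g + b) / 3.0                                     # step a1
    k1, k2, k3 = r / (y + eps), g / (y + eps), b / (y + eps)  # step a2
    if smooth is not None:                                    # step a3
        k1, k2, k3 = smooth(k1), smooth(k2), smooth(k3)
    return k1 * y_ir, k2 * y_ir, k3 * y_ir                    # step a4
```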
As another implementation manner of the embodiment of the present invention, the step of fusing the determined infrared-sensing luminance image and the visible color image to obtain a fused image may include the following steps b1 to b 4:
step b 1: the RGB color signals in the visible light color image are converted into YUV (a color coding standard) signals.
Of course, the specific implementation manner of converting RGB color signals into YUV signals belongs to the prior art, and the embodiment of the present invention is not described in detail herein.
Step b 2: extract the UV component, i.e., the color component, of the YUV signal.
Step b 3: carrying out color noise removal treatment on the extracted UV component, for example, carrying out Gaussian filtering noise reduction to obtain a treated UV component;
step b 4: combining the processed UV component with the brightness signal of the infrared-sensing brightness image to form a new YUV signal, wherein the image corresponding to the new YUV signal can be used as a final fusion image; or the new YUV signal may be converted into a new RGB signal, and an image corresponding to the new RGB signal may be used as a final fusion image.
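A compact sketch of steps b1 to b4 follows, using JPEG-style YCbCr coefficients as one common choice of RGB-YUV conversion (the patent does not fix a particular matrix); the chroma denoising of step b3 is again an optional callable:

```python
def fuse_yuv(r, g, b, y_ir, denoise=None):
    """Steps b1-b4: convert the visible color image to YUV, keep only
    its UV (chroma) part, optionally denoise it, then pair it with the
    infrared-sensitive luminance y_ir as the new Y and convert back."""
    u = -0.169 * r - 0.331 * g + 0.500 * b   # b1 + b2: chroma only
    v = 0.500 * r - 0.419 * g - 0.081 * b
    if denoise is not None:                  # b3: chroma noise removal
        u, v = denoise(u), denoise(v)
    r2 = y_ir + 1.402 * v                    # b4: new YUV back to RGB
    g2 = y_ir - 0.344 * u - 0.714 * v
    b2 = y_ir + 1.772 * u
    return r2, g2, b2
```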
In addition, similar to the implementation manner, the RGB color signals in the visible light color image may also be converted into HSV (color coding standard) signals for image fusion, and the embodiment of the present invention is not limited herein.
In addition, in order to ensure accurate restoration of the color after the infrared component is removed, thereby improving the image fusion quality, when the method is applied to an image fusion device, and the obtained images of each frame are acquired by the image fusion device;
an optical lens of the image fusion device may be provided with an optical filter, and the spectral region filtered out by the optical filter may include [T1, T2]; wherein 600 nm ≤ T1 ≤ 800 nm, 750 nm ≤ T2 ≤ 1100 nm, and T1 < T2.
Referring to fig. 4, it can be understood that R, G, B and IR channels have large response differences in the near infrared band (650nm to 1100nm), and in order to avoid the problem that the infrared component removal effect is poor due to large response differences of the channels in some spectral regions, an optical filter is disposed on the optical lens of the image fusion device to filter out the spectral regions with large response differences.
Specifically, the image fusion device may be provided with an image acquisition unit comprising an optical lens, an optical filter disposed on the optical lens, and an image sensor. The optical filter may be integrated on the optical lens by a coating process; it may be a band-stop filter or a lower-cost dual-bandpass filter. When a dual-bandpass filter is used, the filtered spectral region may further include [T3, +∞), where 850 nm ≤ T3 ≤ 1100 nm and T2 < T3.
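For reference, a small check of the stated filter constraints, using the corrected T1 < T2 condition; purely illustrative:

```python
def valid_filter_band(t1, t2, t3=None):
    """Check the constraints on the filtered spectral region [T1, T2]
    (and, for a dual-bandpass filter, [T3, +inf)). Values in nm."""
    ok = 600 <= t1 <= 800 and 750 <= t2 <= 1100 and t1 < t2
    if t3 is not None:
        ok = ok and 850 <= t3 <= 1100 and t2 < t3
    return ok

# Example: filter out 650-750 nm; the dual-bandpass variant also
# blocks everything beyond 900 nm:
assert valid_filter_band(650, 750)
assert valid_filter_band(650, 750, 900)
```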
Compared with the prior art, in the scheme provided by this embodiment, the frames obtained by at least two exposures in one image acquisition period can all be captured by a single image sensor, so image acquisition and fusion can be completed by any device with at least one image sensor, improving image quality under low illumination; the scheme therefore has good device adaptability and is convenient to apply. Moreover, a device that integrates acquisition and fusion using this scheme may contain only one sensor and needs no light splitting component, so its structure is simple and its cost is low.
In order to obtain a fused image with a high signal-to-noise ratio and a higher quality, as an optional implementation manner of the embodiment of the present invention, in a case that the method is applied to an image fusion device, and the obtained frame images are acquired by the image fusion device, on the basis of any one of the above method embodiments, the method may further include:
and performing infrared light supplement within exposure time corresponding to the first preset exposure in the image acquisition period.
The image fusion device generates an image by using an incident light signal captured by the optical lens in the process of primary exposure, wherein if infrared light supplement is not performed, the incident light signal captured by the optical lens only comprises an ambient incident light signal, and under the condition of infrared light supplement, the incident light signal captured by the optical lens comprises the ambient incident light signal and an infrared light supplement signal.
The image fusion device performs infrared supplementary lighting within the exposure time of the first preset exposure. Specifically, a control unit may be provided in the image fusion device to control the infrared fill light and the image acquisition unit so that the fill-light period of the infrared fill light falls within the exposure time of the preset exposure in the image acquisition unit.
It should be noted that, the image fusion device performs infrared light supplement within the first preset exposure time, which may increase the quality of the infrared luminance image, but if infrared light supplement is performed within other exposure times except the first preset exposure time in the image acquisition period, the difficulty in obtaining the visible light color image is increased.
Therefore, in order to improve the quality of the infrared-sensitive luminance image and not increase the difficulty in obtaining the visible light color image, as an optional implementation manner of the embodiment of the present invention, the step of performing infrared light supplement within the exposure time corresponding to the first preset exposure in the image acquisition period may include:
according to the following control mode, carrying out infrared light supplement within the exposure time corresponding to the first preset exposure in the image acquisition period:
the starting time of the infrared supplementary lighting is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared supplementary lighting is not later than the exposure ending time of the first preset exposure.
Illustratively, a control unit is arranged in the image fusion device; it turns the infrared fill light on at the start time of the first preset exposure in the image acquisition period and off at the end time of that exposure, so the fill light is fully synchronized with the first preset exposure: the fill light starts when the exposure starts and ends when the exposure ends.
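The timing rule can be expressed as a small helper; the lead/trail margins are hypothetical parameters for shrinking the window inward, with zero margins giving the fully synchronous case just described:

```python
def fill_light_window(exposure_start, exposure_end, lead=0.0, trail=0.0):
    """Compute an infrared fill-light on/off window satisfying the
    control rule: turn-on no earlier than the preset exposure's start,
    turn-off no later than its end. Times in milliseconds."""
    on = exposure_start + max(lead, 0.0)
    off = exposure_end - max(trail, 0.0)
    if on > off:
        raise ValueError("fill-light window is empty")
    return on, off

# Fully synchronous fill light for an exposure from t = 10 ms to 35 ms:
print(fill_light_window(10.0, 35.0))   # -> (10.0, 35.0)
```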
In the embodiment of the present invention, the fill-in light intensity of the infrared fill-in light may be set according to an actual situation, and the fill-in light intensity of the infrared fill-in light is not limited in the embodiment of the present invention. In addition, the exposure duration corresponding to the one-time exposure with the infrared supplementary lighting may be determined according to an actual supplementary lighting parameter, and the embodiment of the present invention also does not limit the exposure duration corresponding to the one-time exposure with the infrared supplementary lighting.
In addition, the wavelength band of the infrared light used for the infrared supplementary lighting may not be limited, but in order that the image sensor may obtain the maximum response, the embodiment of the present invention may use the infrared light with the wavelength of 850nm to 900nm for the infrared supplementary lighting.
In this case, the step of determining the infrared-sensitive luminance image (S102) based on one of the obtained frame images may include:
and determining the infrared-sensing brightness image based on the image obtained by the first preset exposure.
It is understood that, in the embodiment of the present invention, the image for determining the infrared-sensitive luminance image is obtained by exposure in the presence of infrared fill light, and the remaining images in the above-mentioned obtained frame images are obtained by exposure in the absence of infrared fill light.
Extending this to video capture: as described above, each video frame is a fused image in the embodiment of the present invention. Since video frames are captured continuously, the infrared fill light operates in a stroboscopic mode whose period equals the acquisition period of the video frames.
It can be understood that, the luminance of the image obtained by the first preset exposure is enhanced by the infrared fill light performed in the exposure process of the first preset exposure, so that in order to ensure that the luminance of the image obtained by the first preset exposure is kept within a proper luminance range, in the embodiment of the present invention, the exposure parameter corresponding to the first preset exposure may not be greater than the target maximum value, where the exposure parameter is the exposure duration and/or the gain, and the target maximum value is the maximum value of the exposure parameters corresponding to the other exposures except the first preset exposure.
Taking exposure duration as the exposure parameter: assume three exposures are performed in one image acquisition period and the image obtained by the third exposure is preset for generating the infrared-sensitive luminance image. Let the three exposure durations be x, y, and z milliseconds; then z must not exceed the larger of x and y, i.e., z ≤ max(x, y). For example, the three exposure durations may be 25 ms, 5 ms, and 10 ms.
Assume instead that two exposures are performed in one image acquisition period and the image obtained by the first exposure is preset for generating the infrared-sensitive luminance image. Let the two exposure durations be m and n milliseconds; then m ≤ n. For example, the two exposure durations may be 10 ms and 30 ms.
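A one-line check of this constraint, with the two numeric examples above:

```python
def preset_exposure_ok(preset_param, other_params):
    """The exposure parameter (duration and/or gain) of the fill-lit
    preset exposure must not exceed the maximum of the corresponding
    parameters of the remaining exposures in the same period."""
    return preset_param <= max(other_params)

# Three exposures of 25, 5, and 10 ms, the third one fill-lit:
assert preset_exposure_ok(10, [25, 5])
# Two exposures where the first (10 ms, fill-lit) precedes a 30 ms one:
assert preset_exposure_ok(10, [30])
```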
In addition, when the number of exposures in the image capturing period is greater than two, the first preset exposure may be a first exposure or a last exposure of the at least two exposures.
It can be understood that when the number of exposures in the image acquisition period is greater than two, at least 3 frames of images are obtained; of these, one frame is used for generating the infrared-sensing brightness image and the remaining frames are used for generating the visible light color image, so the remaining frames need to be images captured consecutively by the image sensor.
For example, a wide dynamic range image may first be generated from the remaining frames, and the visible light color image is then generated from that wide dynamic range image. Since generating a wide dynamic range image requires multiple consecutively acquired frames, the remaining frames here must be images acquired consecutively by the image sensor.
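The embodiment does not prescribe a particular wide dynamic synthesis algorithm. As one plausible stand-in, the sketch below blends differently exposed frames with an exposure-normalized weighted average; the 8-bit value range, the weighting scheme and all names are assumptions:

```python
import numpy as np

def wdr_synthesize(frames, exposures_ms):
    """Blend consecutively acquired, differently exposed frames into one
    wide dynamic range image. Each frame is scaled to a common reference
    exposure, and well-exposed (mid-range) pixels receive higher weight."""
    ref = max(exposures_ms)
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, t in zip(frames, exposures_ms):
        f = frame.astype(np.float64)
        # Weight falls to ~0 near under- and over-exposure (8-bit assumed).
        w = 1.0 - 2.0 * np.abs(f / 255.0 - 0.5) + 1e-6
        acc += w * f * (ref / t)  # normalize to the reference exposure
        wsum += w
    return acc / wsum
```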
For the present embodiment, as shown in fig. 5, one image acquisition cycle contains two exposures, that is, the double-shutter exposure in fig. 5: one odd exposure and the adjacent even exposure correspond to one image acquisition cycle, and the image obtained by the even exposure is used for determining the infrared-sensing brightness image. As can be seen from the brightness variation curve of the infrared lamp in the figure, the rising edge of the infrared fill light may be later than the start of the even exposure but not earlier; similarly, the falling edge may be earlier than the end of the even exposure but not later. In other words, the infrared fill light must not begin before or end after the even exposure. It can be understood that, during the continuous collection of video frames, infrared fill light is performed only during the even exposures, forming a stroboscopic fill light.
As shown in fig. 6, one image acquisition cycle contains 3 exposures, namely exposure A in fig. 6 and the adjacent exposures B and C, and the image obtained by exposure C is used for determining the infrared-sensing brightness image. As can be seen from the brightness variation curve of the infrared lamp in the figure, the rising edge of the infrared fill light may be later than the start of exposure C but not earlier; similarly, the falling edge may be earlier than the end of exposure C but not later. In other words, the infrared fill light must not begin before or end after exposure C. It can be understood that, during the continuous collection of video frames, infrared fill light is performed only during exposure C of each cycle, forming a stroboscopic fill light.
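A minimal sketch of this timing rule: the fill light's on and off times are confined to the window of the fill-lit exposure. The parameter names are illustrative assumptions:

```python
def fill_light_window(exp_start_ms, exp_end_ms,
                      rise_delay_ms=0.0, fall_advance_ms=0.0):
    """Return (on_ms, off_ms) for the infrared fill light: the rising edge
    may be later than, but never earlier than, the start of the fill-lit
    exposure, and the falling edge may be earlier than, but never later
    than, its end."""
    on = exp_start_ms + max(0.0, rise_delay_ms)
    off = exp_end_ms - max(0.0, fall_advance_ms)
    if not (exp_start_ms <= on < off <= exp_end_ms):
        raise ValueError("fill light must lie entirely within the exposure")
    return on, off

# Example: a 10 ms fill-lit exposure starting at t = 40 ms.
print(fill_light_window(40.0, 50.0, rise_delay_ms=0.5, fall_advance_ms=0.5))
```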
It can be understood that, in this embodiment, the image used for determining the infrared-sensing brightness image is obtained by exposure in the presence of infrared fill light, so the infrared-sensing brightness image is enhanced by the fill light and has a better signal-to-noise ratio; after it is fused with the visible light color image, a fused image of better quality can be obtained.
The following describes an embodiment of the invention by way of a specific example.
To show more clearly how the image fusion device obtains the fused image, the device is divided into several units in this example, and these units jointly complete the image fusion process; of course, this division of the image fusion device is only an exemplary illustration and does not limit the present invention.
As shown in fig. 7, the image fusion device may include an infrared fill light unit (such as a fill light lamp), a control unit, an image acquisition unit, a preprocessing unit and a fusion processing unit, where the preprocessing unit and the fusion processing unit together may be regarded as an image synthesis unit.
It should be noted that the control unit may send an exposure control signal to the image acquisition unit to control it to acquire multiple frames of images in one image acquisition period, and the exposure duration of each exposure may be controlled through the exposure control signal; in addition, the control unit may send a fill light control signal to the infrared fill light unit, ensuring that infrared fill light is performed within the preset single exposure period.
Specifically, the process of obtaining the fused image by the image fusion device is as follows:
The RGBIR image sensor in the image acquisition unit obtains images a, b and c through three consecutive exposures within one image acquisition period. During the third exposure, which produces image c, the infrared fill light unit performs infrared fill light, so image c is imaged from both the ambient incident light and the infrared fill light.
Then the preprocessing unit performs wide dynamic synthesis on images a and b to obtain a wide dynamic image, and performs infrared removal on the wide dynamic image to obtain the visible light color image. Meanwhile, the preprocessing unit demosaics image c and generates the infrared-sensing brightness image from the demosaiced image c.
Finally, the fusion processing unit obtains the visible light color image and the infrared-sensing brightness image from the preprocessing unit and fuses them to obtain the fused image.
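To make the data flow of this example concrete, the following is a minimal end-to-end sketch. Every function, constant and formula in it is an illustrative placeholder for the processing described above; the toy wide dynamic blend, luminance formula and fusion rule in particular are assumptions, not the embodiment's actual algorithms:

```python
import numpy as np

def remove_ir(rgb_ir, correction=1.0):
    """Infrared removal: subtract the corrected IR component from each
    color channel, leaving a visible light color image."""
    rgb, ir = rgb_ir[..., :3], rgb_ir[..., 3:4]
    return np.clip(rgb - correction * ir, 0.0, None)

def fuse(color_img, luma_img):
    """Fusion (illustrative): keep the chroma of the color image and
    replace its luminance with the infrared-sensing brightness image."""
    luma = color_img.mean(axis=-1, keepdims=True) + 1e-6
    return color_img / luma * luma_img[..., None]

# One acquisition period: a, b without fill light (10 ms / 30 ms), c with it.
rng = np.random.default_rng(0)
a = rng.random((4, 4, 4)) * 64    # short exposure, channels R, G, B, IR
b = rng.random((4, 4, 4)) * 192   # long exposure
c = rng.random((4, 4, 4)) * 255   # fill-lit exposure

wide = (a * (30.0 / 10.0) + b) / 2.0                     # toy wide dynamic blend
color = remove_ir(wide)                                  # visible light color image
luma = 0.5 * c[..., :3].mean(axis=-1) + 0.5 * c[..., 3]  # toy IR brightness
fused = fuse(color, luma)
print(fused.shape)  # (4, 4, 3)
```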
Corresponding to the embodiment of the method shown in fig. 1, an embodiment of the present invention further provides an image fusion apparatus, as shown in fig. 8, the apparatus includes:
an obtaining module 110, configured to obtain each frame of image obtained through at least two exposures in one image acquisition period;
a first determining module 120, configured to determine an infrared-sensing luminance image based on one of the frames of images;
a second determining module 130, configured to determine a visible light color image based on the remaining images in the frames of images;
and a fusion module 140, configured to fuse the infrared sensing brightness image and the visible light color image to obtain a fusion image.
As an optional implementation manner of the embodiment of the present invention, the apparatus is applied to an image fusion device, and each frame of image is acquired by the image fusion device;
the apparatus may further include:
an infrared fill light module, configured to perform infrared fill light within the exposure period corresponding to the first preset exposure in the image acquisition period;
the first determining module 120 may be specifically configured to:
and determining an infrared sensing brightness image based on the image obtained by the first preset exposure.
Specifically, the exposure parameter corresponding to the first preset exposure may be no greater than a target maximum value, where the exposure parameter is the exposure duration and/or the gain, and the target maximum value is the maximum of the exposure parameters corresponding to the exposures other than the first preset exposure.
As an optional implementation manner of the embodiment of the present invention, the infrared fill light module may be specifically configured to:
performing infrared fill light within the exposure period corresponding to the first preset exposure in the image acquisition period according to the following control mode:
the starting time of the infrared fill light is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared fill light is not later than the exposure ending time of the first preset exposure.
Specifically, when the number of exposures in the image acquisition period is greater than two, the first preset exposure may be the first exposure or the last exposure of the at least two exposures.
The first determining module 120 may be specifically configured to:
and performing demosaicing on one of the frames of images, and generating the infrared-sensing brightness image from the demosaiced frame of image.
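The embodiment leaves the demosaicing algorithm unspecified. As one plausible stand-in, the sketch below fills the missing samples of a single channel of an RGBIR mosaic by 3x3 neighborhood averaging; the function name, interface and averaging scheme are assumptions:

```python
import numpy as np

def demosaic_channel(mosaic, mask):
    """Fill missing samples of one channel (e.g. IR) of an RGBIR mosaic.

    mosaic -- raw sensor frame (H x W)
    mask   -- boolean (H x W), True where this channel was sampled
    """
    vals = np.where(mask, mosaic, 0.0).astype(np.float64)
    cnt = mask.astype(np.float64)
    pv, pc = np.pad(vals, 1), np.pad(cnt, 1)
    h, w = vals.shape
    # 3x3 box sums of values and of sample counts via shifted windows.
    sv = sum(pv[i:i + h, j:j + w] for i in range(3) for j in range(3))
    sc = sum(pc[i:i + h, j:j + w] for i in range(3) for j in range(3))
    # Keep measured samples; average neighbors where the channel is absent.
    return np.where(mask, vals, sv / np.maximum(sc, 1.0))
```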
As an optional implementation manner of the embodiment of the present invention, when the number of remaining images among the frames of images is 1, the second determining module 130 may be specifically configured to:
and performing infrared removal on the remaining image among the frames of images to obtain the visible light color image.
In this implementation, corresponding to the method embodiment shown in fig. 3, as shown in fig. 9, the second determining module 130 may include:
an interpolation sub-module 1301, configured to, when a target image contains an IR channel, interpolate the IR channel of the target image to obtain the interpolated target image, where the target image is the remaining image among the frames of images;
an updating sub-module 1302, configured to update each pixel of the interpolated target image in the following manner to obtain the visible light color image:
if the pixel has an R value, the R value of the pixel is updated to the difference between the R value of the pixel and the IR parameter value of the pixel; if the pixel has a G value, the G value of the pixel is updated to the difference between the G value of the pixel and the IR parameter value of the pixel; if the pixel has a B value, the B value of the pixel is updated to the difference between the B value of the pixel and the IR parameter value of the pixel. The IR parameter value of a pixel is the product of the IR value of the pixel and a preset correction value.
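This update rule maps directly onto array arithmetic. The sketch below assumes an H x W x 4 image with channels ordered R, G, B, IR and a scalar preset correction value; the layout and the default value are illustrative assumptions:

```python
import numpy as np

def remove_infrared(image_rgbir, correction=1.0):
    """Apply the per-pixel update described above: each of the R, G and B
    values is replaced by its difference from the pixel's IR parameter
    value, which is the IR value times the preset correction value."""
    img = image_rgbir.astype(np.float64)
    ir_param = img[..., 3] * correction        # IR parameter value per pixel
    rgb = img[..., :3] - ir_param[..., None]   # subtract from R, G and B
    return np.clip(rgb, 0.0, None)             # keep values non-negative
```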
As another optional implementation manner of the embodiment of the present invention, when the number of remaining images among the frames of images is at least 2 and the exposure durations corresponding to these remaining images differ, the second determining module 130 may include:
a first processing submodule, configured to perform wide dynamic synthesis on the remaining images among the frames of images to obtain a wide dynamic image;
and a second processing submodule, configured to perform infrared removal on the wide dynamic image to obtain the visible light color image.
Specifically, the device can be applied to image fusion equipment, and each frame of image is acquired by the image fusion equipment;
an optical filter can be arranged on the optical lens of the image fusion device, and the spectral region filtered out by the optical filter includes [T1, T2], where 600nm ≤ T1 ≤ 800nm, 750nm ≤ T2 ≤ 1100nm, and T1 < T2.
As can be seen from the above, in the scheme provided by this embodiment, the frames of images obtained through at least two exposures in one image acquisition period can all be collected by a single image sensor, so a device with only one image sensor can complete both the acquisition and the fusion of the images, improving image quality under low illumination; the scheme therefore has good adaptability and is convenient to apply. On the other hand, a device that integrates image acquisition and fusion by applying this scheme needs only one sensor and no light splitting device, so its structure is simple and its cost is low.
Corresponding to the method embodiment shown in fig. 1 or 3, the embodiment of the present invention further provides an electronic device, as shown in fig. 10, which includes a memory 210 and a processor 220.
The memory 210 is used for storing program codes;
the processor 220, when executing the program code stored in the memory 210, implements the following steps:
obtaining each frame image obtained by at least two exposures in one image acquisition period;
determining an infrared sensing brightness image based on one of the obtained frame images;
determining a visible light color image based on the rest of the obtained frame images;
and fusing the determined infrared sensing brightness image and the visible light color image to obtain a fused image.
For specific implementation and related explanation of each step of the method, reference may be made to the method embodiments shown in fig. 1 and fig. 3 and other method embodiments, which are not described herein again.
The memory may include a random access memory (RAM), or may include a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
As can be seen from the above, in the scheme provided by this embodiment, the frames of images obtained through at least two exposures in one image acquisition period can all be collected by a single image sensor, so a device with only one image sensor can complete both the acquisition and the fusion of the images, improving image quality under low illumination; the scheme therefore has good adaptability and is convenient to apply. On the other hand, a device that integrates image acquisition and fusion by applying this scheme needs only one sensor and no light splitting device, so its structure is simple and its cost is low.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute the image fusion method described in any one of the above embodiments.
As can be seen from the above, in the scheme provided by this embodiment, the frames of images obtained through at least two exposures in one image acquisition period can all be collected by a single image sensor, so a device with only one image sensor can complete both the acquisition and the fusion of the images, improving image quality under low illumination; the scheme therefore has good adaptability and is convenient to apply. On the other hand, a device that integrates image acquisition and fusion by applying this scheme needs only one sensor and no light splitting device, so its structure is simple and its cost is low.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, the electronic device, and the computer-readable storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and in relation to the description, reference may be made to some portions of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (15)

1. An image fusion method, characterized in that the method comprises:
obtaining two frames of images obtained through two exposures, wherein the two exposures are two adjacent exposures;
determining an infrared-sensing brightness image based on one of the two frames of images, wherein the one frame of image used for determining the infrared-sensing brightness image is obtained in the presence of infrared supplementary lighting, the exposure corresponding to the one frame of image is a first preset exposure, and the first preset exposure is the first exposure or the last exposure of the two exposures;
determining a visible light color image based on the other of the two frames of images, wherein the other frame of image used for determining the visible light color image is obtained in the absence of infrared supplementary lighting;
fusing the infrared-sensing brightness image and the visible light color image to obtain a fused image;
wherein the two frames of images are acquired by a camera;
the camera comprises an image acquisition unit, an infrared light supplement lamp and a processor, wherein the image acquisition unit comprises an optical lens, a double-peak optical filter and an image sensor, and the image sensor is a single sensor;
the processor is used for controlling exposure of the image sensor and controlling light supplement of the infrared light supplement lamp, and is also used for executing the obtaining of the two frames of images obtained through the two exposures; determining an infrared-sensing brightness image based on one of the two frames of images; determining a visible light color image based on the other of the two frames of images; and fusing the visible light color image and the infrared-sensing brightness image to obtain a fused image;
the image sensor is used for carrying out the two-time exposure according to the control of the processor, generating and outputting the two frames of images, wherein the control parameters of the two-time exposure are different;
the infrared light supplement lamp is used for performing infrared light supplement within the exposure time corresponding to the first preset exposure according to the control of the processor, wherein the starting time of the infrared light supplement is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared light supplement is not later than the exposure ending time of the first preset exposure.
2. The method of claim 1, wherein the control parameter for the two exposures comprises an exposure duration, and wherein the exposure durations for the two exposures are different.
3. The method according to claim 1, wherein ensuring that the starting time of the infrared supplementary lighting is not earlier than the exposure starting time of the first preset exposure and that the ending time of the infrared supplementary lighting is not later than the exposure ending time of the first preset exposure at least comprises:
and controlling to start the infrared supplementary lighting at the starting time of the first preset exposure, and controlling to stop the infrared supplementary lighting at the ending time of the first preset exposure.
4. The method of claim 1, wherein the image sensor is an RGBIR sensor.
5. The method as claimed in claim 1, wherein the wavelength band of the infrared fill-in light emitted by the infrared fill-in light lamp is within the range of 760nm to 1mm.
6. The method according to any one of claims 1 to 5,
the spectral region filtered out by the filter comprises [T1, T2], where 600nm ≤ T1 ≤ 800nm, 750nm ≤ T2 ≤ 1100nm, and T1 < T2.
7. The method of claim 6,
the spectral region filtered out by the filter further comprises [T3, +∞), where 850nm ≤ T3 ≤ 1100nm, and T2 ≤ T3.
8. The camera is characterized by comprising an image acquisition unit, an infrared light supplement lamp, a processor and a memory, wherein the image acquisition unit comprises an optical lens, a double-peak optical filter and an image sensor, and the image sensor is a single sensor;
the memory is used for storing program codes;
the processor is used for, when executing the program codes stored in the memory, controlling the exposure of the image sensor and controlling the supplementary lighting of the infrared supplementary lighting lamp, and is also used for obtaining the two frames of images obtained through the two exposures; determining an infrared-sensing brightness image based on one of the two frames of images; determining a visible light color image based on the other of the two frames of images; and fusing the visible light color image and the infrared-sensing brightness image to obtain a fused image; wherein the two exposures are two adjacent exposures; the one frame of image used for determining the infrared-sensing brightness image is obtained in the presence of infrared supplementary lighting, the other frame of image used for determining the visible light color image is obtained in the absence of infrared supplementary lighting, the exposure corresponding to the one frame of image is a first preset exposure, and the first preset exposure is the first exposure or the last exposure of the two exposures;
the image sensor is used for carrying out the two-time exposure according to the control of the processor, generating and outputting the two frames of images, wherein the control parameters of the two-time exposure are different;
and the infrared light supplement lamp is used for performing infrared light supplement within the exposure time corresponding to the first preset exposure according to the control of the processor, wherein the starting time of the infrared light supplement is not earlier than the exposure starting time of the first preset exposure, and the ending time of the infrared light supplement is not later than the exposure ending time of the first preset exposure.
9. The camera of claim 8, wherein the control parameters for the two exposures comprise exposure durations, the exposure durations for the two exposures being different.
10. The camera according to claim 8, wherein ensuring that the starting time of the infrared supplementary lighting is not earlier than the exposure starting time of the first preset exposure and that the ending time of the infrared supplementary lighting is not later than the exposure ending time of the first preset exposure at least comprises:
and controlling to start the infrared supplementary lighting at the starting time of the first preset exposure, and controlling to stop the infrared supplementary lighting at the ending time of the first preset exposure.
11. The camera of claim 8, wherein the image sensor is an RGBIR sensor.
12. The camera of claim 8, wherein the wavelength band of the infrared fill-in light emitted by the infrared fill-in light lamp is within the range of 760nm to 1mm.
13. The camera according to any one of claims 8 to 12,
the spectral region filtered out by the filter comprises [T1, T2], where 600nm ≤ T1 ≤ 800nm, 750nm ≤ T2 ≤ 1100nm, and T1 < T2.
14. The camera of claim 13,
the spectral region filtered out by the filter further comprises [T3, +∞), where 850nm ≤ T3 ≤ 1100nm, and T2 ≤ T3.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202110076502.2A 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium Active CN112788249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110076502.2A CN112788249B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110076502.2A CN112788249B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium
CN201711381018.0A CN109951646B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201711381018.0A Division CN109951646B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112788249A true CN112788249A (en) 2021-05-11
CN112788249B CN112788249B (en) 2022-12-06

Family

ID=66992522

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110076502.2A Active CN112788249B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium
CN201711381018.0A Active CN109951646B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201711381018.0A Active CN109951646B (en) 2017-12-20 2017-12-20 Image fusion method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (2) CN112788249B (en)
WO (1) WO2019119842A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110574367A (en) * 2019-07-31 2019-12-13 华为技术有限公司 Image sensor and image sensitization method
CN112399064B (en) * 2019-08-12 2023-05-23 浙江宇视科技有限公司 Double-light fusion snapshot method and camera
CN110602415B (en) * 2019-09-30 2021-09-07 杭州海康威视数字技术股份有限公司 Exposure control device, method and camera
CN113259546B (en) * 2020-02-11 2023-05-12 华为技术有限公司 Image acquisition device and image acquisition method
CN113271414B (en) * 2020-02-14 2022-11-18 上海海思技术有限公司 Image acquisition method and device
CN113940052B (en) * 2020-04-29 2023-01-20 华为技术有限公司 Camera and method for acquiring image
CN111383206B (en) * 2020-06-01 2020-09-29 浙江大华技术股份有限公司 Image processing method and device, electronic equipment and storage medium
CN114143443B (en) * 2020-09-04 2024-04-05 聚晶半导体股份有限公司 Dual-sensor imaging system and imaging method thereof
CN114374776B (en) * 2020-10-15 2023-06-23 华为技术有限公司 Camera and control method of camera
CN113114926B (en) * 2021-03-10 2022-11-25 杭州海康威视数字技术股份有限公司 Image processing method and device and camera
CN113112495B (en) * 2021-04-30 2024-02-23 浙江华感科技有限公司 Abnormal image processing method and device, thermal imaging equipment and storage medium
CN115314628B (en) * 2021-05-08 2024-03-01 杭州海康威视数字技术股份有限公司 Imaging method, imaging system and camera
CN115314629B (en) * 2021-05-08 2024-03-01 杭州海康威视数字技术股份有限公司 Imaging method, imaging system and camera
CN113489865A (en) * 2021-06-11 2021-10-08 浙江大华技术股份有限公司 Monocular camera and image processing system
CN113596357B (en) * 2021-07-29 2023-04-18 北京紫光展锐通信技术有限公司 Image signal processor, image signal processing device and method, chip and terminal equipment
US20230123736A1 (en) * 2021-10-14 2023-04-20 Redzone Robotics, Inc. Data translation and interoperability
CN113905185B (en) * 2021-10-27 2023-10-31 锐芯微电子股份有限公司 Image processing method and device
CN114157382B (en) * 2021-12-28 2024-02-09 中电海康集团有限公司 Time synchronization control system of light vision all-in-one machine
CN114500850B (en) * 2022-02-22 2024-01-19 锐芯微电子股份有限公司 Image processing method, device, system and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040256561A1 (en) * 2003-06-17 2004-12-23 Allyson Beuhler Wide band light sensing pixel array
CN103220534A (en) * 2012-01-20 2013-07-24 宏达国际电子股份有限公司 Image capturing device and method thereof
CN103686111A (en) * 2013-12-31 2014-03-26 上海富瀚微电子有限公司 Method and device for correcting color based on RGBIR (red, green and blue, infra red) image sensor
US20160065865A1 (en) * 2013-04-24 2016-03-03 Hitachi Maxell, Ltd. Imaging device and imaging system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010081010A2 (en) * 2009-01-09 2010-07-15 New York University Methods, computer-accessible medium and systems for facilitating dark flash photography
US8408821B2 (en) * 2010-10-12 2013-04-02 Omnivision Technologies, Inc. Visible and infrared dual mode imaging system
CN102982518A (en) * 2012-11-06 2013-03-20 扬州万方电子技术有限责任公司 Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
US9654704B2 (en) * 2013-03-15 2017-05-16 Infrared Integrated Systems, Ltd. Apparatus and method for multispectral imaging with three dimensional overlaying
KR20150021353A (en) * 2013-08-20 2015-03-02 삼성테크윈 주식회사 Image systhesis system and image synthesis method
CN104661008B (en) * 2013-11-18 2017-10-31 深圳中兴力维技术有限公司 The treating method and apparatus that color image quality is lifted under low light conditions
US9832394B2 (en) * 2014-03-31 2017-11-28 Google Technology Holdings LLC Adaptive low-light view modes
CN105263008B (en) * 2014-06-19 2018-03-16 深圳中兴力维技术有限公司 Color image quality method for improving and its device under low environment illumination
CN105243726B (en) * 2014-07-11 2018-03-30 威海新北洋荣鑫科技股份有限公司 The acquisition methods and device of digital image data
JP6264233B2 (en) * 2014-09-02 2018-01-24 株式会社Jvcケンウッド IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND CONTROL PROGRAM
JP6319449B2 (en) * 2014-09-18 2018-05-09 株式会社島津製作所 Imaging device
KR102388249B1 (en) * 2015-11-27 2022-04-20 엘지이노텍 주식회사 Camera module for taking picture using visible light or infrared ray
CN105611136B (en) * 2016-02-26 2019-04-23 联想(北京)有限公司 A kind of imaging sensor and electronic equipment
CN107438170B (en) * 2016-05-25 2020-01-17 杭州海康威视数字技术股份有限公司 Image fog penetration method and image acquisition equipment for realizing image fog penetration
CN106572289B (en) * 2016-10-21 2019-08-20 维沃移动通信有限公司 A kind of image processing method and mobile terminal of camera module

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040256561A1 (en) * 2003-06-17 2004-12-23 Allyson Beuhler Wide band light sensing pixel array
CN103220534A (en) * 2012-01-20 2013-07-24 宏达国际电子股份有限公司 Image capturing device and method thereof
US20160065865A1 (en) * 2013-04-24 2016-03-03 Hitachi Maxell, Ltd. Imaging device and imaging system
CN103686111A (en) * 2013-12-31 2014-03-26 上海富瀚微电子有限公司 Method and device for correcting color based on RGBIR (red, green and blue, infra red) image sensor

Also Published As

Publication number Publication date
CN109951646A (en) 2019-06-28
CN109951646B (en) 2021-01-15
CN112788249B (en) 2022-12-06
WO2019119842A1 (en) 2019-06-27

Similar Documents

Publication Publication Date Title
CN109951646B (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN110493506B (en) Image processing method and system
CN110493532B (en) Image processing method and system
EP3582494B1 (en) Multi-spectrum-based image fusion apparatus and method, and image sensor
JP5460173B2 (en) Image processing method, image processing apparatus, image processing program, and imaging apparatus
US8724928B2 (en) Using captured high and low resolution images
KR102266649B1 (en) Image processing method and device
US9581436B2 (en) Image processing device, image capturing apparatus, and image processing method
WO2017202061A1 (en) Image defogging method and image capture apparatus implementing image defogging
CN103546730A (en) Method for enhancing light sensitivities of images on basis of multiple cameras
CN102783135A (en) Method and apparatus for providing a high resolution image using low resolution
CN110493531B (en) Image processing method and system
US9071737B2 (en) Image processing based on moving lens with chromatic aberration and an image sensor having a color filter mosaic
JP2011135563A (en) Image capturing apparatus, and image processing method
US9148552B2 (en) Image processing apparatus, image pickup apparatus, non-transitory storage medium storing image processing program and image processing method
WO2017086155A1 (en) Image capturing device, image capturing method, and program
JP5541205B2 (en) Image processing apparatus, imaging apparatus, image processing program, and image processing method
JP4250506B2 (en) Image processing method, image processing apparatus, image processing program, and imaging system
JP2020187409A (en) Image recognition device, solid-state imaging device, and image recognition method
JPWO2016143139A1 (en) Image processing apparatus, image processing method, and program
JP7297406B2 (en) Control device, imaging device, control method and program
WO2018100662A1 (en) Image processing device, image capture device, image processing method, image processing program, and recording medium
JP4523629B2 (en) Imaging device
US20100303355A1 (en) Image processing apparatus, image processing method, and image processing program
JP6857006B2 (en) Imaging device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant