CN115829896A - Image fusion method and apparatus, and electronic device

Publication number: CN115829896A (China)
Application number: CN202211582779.3A
Original language: Chinese (zh)
Inventors: 王志超, 耿丹, 张志强
Applicant/Assignee: Hangzhou Micro Image Software Co., Ltd.
Legal status: Pending
Classification: Image Processing
Prior art keywords: image, target, shape, channel, target image

Abstract

Embodiments of the present application provide an image fusion method, an image fusion apparatus, and an electronic device, relating to the technical field of image fusion. The method includes: acquiring a first target image output by a main channel and a second target image output by a secondary channel, wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel; performing distortion correction on the second target image using a target mapping relationship to obtain a corrected second target image, wherein the target mapping relationship corrects the degree of distortion of the second target image to that of the first target image; and fusing the first target image with the corrected second target image to obtain a fused image. With this scheme, image fusion can be achieved while ensuring that the field of view of the designated channel's image is not lost.

Description

Image fusion method and apparatus, and electronic device
Technical Field
The present application relates to the field of image fusion technologies, and in particular to an image fusion method, an image fusion apparatus, and an electronic device.
Background
The information content of an image output by a single image acquisition channel is limited and generally cannot meet the requirements of practical applications; therefore, the images output by multiple image acquisition channels are typically fused to obtain a fused image with richer information. The image acquisition channels may include, among others, an infrared image acquisition channel, an ultraviolet image acquisition channel, or a visible light image acquisition channel.
Because the output image of each of the multiple image acquisition channels is distorted, in the related art every channel's output image must undergo distortion correction before image fusion. Distortion here refers to the deformation of the image formed of an object relative to the shape of the object itself.
Distortion correction, however, costs each image acquisition channel part of the field of view of its image. Users usually do not want the field of view of a designated channel's image to be lost, so how to achieve image fusion while ensuring that the field of view of the designated channel's image is not lost is an urgent problem to be solved.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image fusion method, an image fusion apparatus, and an electronic device, so as to achieve image fusion while ensuring that the field of view of a designated channel's image is not lost. The specific technical solution is as follows:
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring a first target image output by a main channel and a second target image output by a secondary channel, wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel;
performing distortion correction on the second target image using a target mapping relationship to obtain a corrected second target image, wherein the target mapping relationship is a mapping relationship for correcting the degree of distortion of the second target image to that of the first target image;
and fusing the first target image with the corrected second target image to obtain a fused image.
Optionally, the target mapping relationship is specifically a mapping relationship for correcting, into a target shape, the display shape within the second target image of the object represented by the second target image's image content;
wherein the target shape is the display shape within the first target image of the object represented by the first target image's image content.
Optionally, the target mapping relationship is a coordinate mapping relationship generated through a predetermined calibration procedure;
wherein the predetermined calibration procedure is a calibration performed based on the coordinate relationship, concerning a calibration object, between the sample images obtained when two image sensors capture the same calibration object;
wherein the two image sensors include the image sensor of the main channel and the image sensor of the secondary channel.
Optionally, the calibration process of the predetermined calibration procedure includes:
acquiring a second sample image, wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
and calibrating the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into a specified shape, to obtain the target mapping relationship;
wherein the specified shape is the display shape that the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object.
Optionally, if the image sensor of the main channel captures the calibration object, the display shape of the calibration object in the resulting first sample image is a pre-specified target figure shape;
and the actual shape of the calibration object is the shape obtained by projecting the target figure shape by means of optical back projection.
Optionally, the actual shape of the calibration object is determined by:
inputting the shape parameters of the target figure shape, specified optical parameters of the image sensor of the main channel, and object distance information corresponding to the main channel into optical simulation software, so that the optical simulation software optically back-projects the target figure shape having those shape parameters, based on the object distance information corresponding to the main channel and the specified optical parameters, to obtain the actual shape of the calibration object;
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the image content of any image output by the main channel exhibits relative to the corresponding captured object.
Optionally, the field of view of any image output by the main channel is smaller than the field of view of any image output by the secondary channel;
and fusing the first target image with the corrected second target image to obtain the fused image includes:
identifying a designated object region of the corrected second target image to obtain a candidate image corresponding to the designated object region;
determining an image to be fused corresponding to the candidate image, wherein the image to be fused is obtained by resizing the candidate image to the same size as the first target image;
and fusing corresponding pixels of the image to be fused and the first target image to obtain the fused image.
Optionally, the main channel is an infrared thermal imaging channel, and the secondary channel is a visible light image acquisition channel.
In a second aspect, an embodiment of the present application provides an image fusion apparatus, including:
an acquisition module, configured to acquire a first target image output by a main channel and a second target image output by a secondary channel, wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel;
a correction module, configured to perform distortion correction on the second target image using a target mapping relationship to obtain a corrected second target image, wherein the target mapping relationship is a mapping relationship for correcting the degree of distortion of the second target image to that of the first target image;
and a fusion module, configured to fuse the first target image with the corrected second target image to obtain a fused image.
Optionally, the target mapping relationship is specifically a mapping relationship for correcting, into a target shape, the display shape within the second target image of the object represented by the second target image's image content;
wherein the target shape is the display shape within the first target image of the object represented by the first target image's image content.
Optionally, the target mapping relationship is a coordinate mapping relationship generated through a predetermined calibration procedure;
wherein the predetermined calibration procedure is a calibration performed based on the coordinate relationship, concerning a calibration object, between the sample images obtained when two image sensors capture the same calibration object;
wherein the two image sensors include the image sensor of the main channel and the image sensor of the secondary channel.
Optionally, the calibration process of the predetermined calibration procedure includes:
acquiring a second sample image, wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
and calibrating the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into a specified shape, to obtain the target mapping relationship;
wherein the specified shape is the display shape that the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object.
Optionally, if the image sensor of the main channel captures the calibration object, the display shape of the calibration object in the resulting first sample image is a pre-specified target figure shape;
and the actual shape of the calibration object is the shape obtained by projecting the target figure shape by means of optical back projection.
Optionally, the actual shape of the calibration object is determined by:
inputting the shape parameters of the target figure shape, the specified optical parameters of the image sensor of the main channel, and the object distance information corresponding to the main channel into optical simulation software, so that the optical simulation software optically back-projects the target figure shape having those shape parameters, based on the object distance information corresponding to the main channel and the specified optical parameters, to obtain the actual shape of the calibration object;
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the image content of any image output by the main channel exhibits relative to the corresponding captured object.
Optionally, the field of view of any image output by the main channel is smaller than the field of view of any image output by the secondary channel;
and the fusion module is specifically configured to:
identify a designated object region of the corrected second target image to obtain a candidate image corresponding to the designated object region;
determine an image to be fused corresponding to the candidate image, wherein the image to be fused is obtained by resizing the candidate image to the same size as the first target image;
and fuse corresponding pixels of the image to be fused and the first target image to obtain the fused image.
Optionally, the main channel is an infrared thermal imaging channel, and the secondary channel is a visible light image acquisition channel.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
and a processor, configured to implement any of the image fusion methods described above when executing the program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by a processor, implements any of the image fusion methods described above.
In a fifth aspect, the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the image fusion methods described above.
The embodiments of the present application have the following beneficial effects:
With the image fusion method provided by the embodiments of the present application, a first target image output by the main channel and a second target image output by the secondary channel are first acquired; distortion correction is then performed on the second target image using the target mapping relationship, yielding a distortion-corrected second target image; and the first target image and the corrected second target image can then be fused to obtain the fused image. Compared with the related-art approach of distortion-correcting the images of all channels and then fusing them, this scheme performs no distortion correction on the first target image output by the main channel, and uses the target mapping relationship to distortion-correct only the second target image output by the secondary channel, so that the degree of distortion of the corrected second target image matches that of the first target image and the two images are fused without any loss of the field of view of the first target image output by the main channel. Image fusion is thus achieved while ensuring that the field of view of the designated channel's image is not lost.
Of course, not all of the advantages described above need to be achieved simultaneously by any one product or method implementing the present application.
Drawings
In order to describe the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can derive others from them.
Fig. 1 is a schematic flowchart of an image fusion method according to an embodiment of the present disclosure;
fig. 2 is another schematic flow chart of an image fusion method according to an embodiment of the present disclosure;
fig. 3a is a schematic view of the display shape of a calibration object in a second sample image according to an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a given shape provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic view of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings of those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the description herein fall within the scope of the present disclosure.
Image fusion refers to processing the image data about a same object captured by multiple image acquisition channels, by means of image processing, computer technology, and the like, so as to extract the beneficial information in each channel's image to the maximum extent and finally fuse it into a single high-quality image. The multiple image acquisition channels include, but are not limited to, a visible light image acquisition channel, a thermal imaging channel, an ultraviolet image acquisition channel, and so on.
In the related art, the output image of each of the multiple image acquisition channels must undergo distortion correction prior to image fusion.
When distortion correction is performed in the related art, the image output by each channel has to be corrected independently until it is free of distortion. This costs every image acquisition channel part of the field of view of its image, so image fusion cannot be achieved while keeping the field of view of a designated channel's image intact.
In view of the above, the present application provides an image fusion method, an image fusion apparatus, and an electronic device, which can achieve image fusion while ensuring that the field of view of the designated channel's image is not lost.
First, an image fusion method provided by the present application is described below.
The image fusion method provided by the present application can be applied to an electronic device. The electronic device may be an image acquisition device or a terminal device; the terminal device may be a mobile phone, a computer, or the like, and the present application does not limit the specific form of the electronic device. When the electronic device is an image acquisition device, the image acquisition device may have multiple image acquisition channels; it may fuse the images of those channels and output the fused image, i.e., the image the device presents to a user may be the fused image. When the electronic device is a terminal device, the terminal device may fuse images from multiple image acquisition channels of different acquisition devices, or of the same acquisition device, to obtain the fused image. The image fusion method provided by the present application can be applied to any scenario with an image fusion requirement, for example, fusing images of the same object captured by a visible light channel and a thermal imaging channel.
In addition, the execution subject of the image fusion method provided by the present application may be an image fusion apparatus. For example, the image fusion apparatus may be functional software running on a terminal device or an image acquisition device, such as functional software dedicated to image fusion. When the image fusion apparatus runs on a terminal device, it may fuse multi-channel images held by the terminal device; those images may be obtained from an image acquisition device or captured by the terminal device itself, which is not limited herein. When the image fusion apparatus runs on an image acquisition device, it may fuse the multi-channel images captured by that device.
The image fusion method provided by the application can comprise the following steps:
acquiring a first target image output by a main channel and a second target image output by a secondary channel, wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel;
performing distortion correction on the second target image using a target mapping relationship to obtain a corrected second target image, wherein the target mapping relationship is a mapping relationship for correcting the degree of distortion of the second target image to that of the first target image;
and fusing the first target image with the corrected second target image to obtain a fused image.
With the image fusion method provided by the embodiments of the present application, a first target image output by the main channel and a second target image output by the secondary channel are first acquired; distortion correction is then performed on the second target image using the target mapping relationship, yielding a distortion-corrected second target image; and the first target image and the corrected second target image can then be fused to obtain the fused image. Compared with the related-art approach of distortion-correcting the images of all channels and then fusing them, this scheme performs no distortion correction on the first target image output by the main channel, and uses the target mapping relationship to distortion-correct only the second target image output by the secondary channel, so that the degree of distortion of the corrected second target image matches that of the first target image and the two images are fused without any loss of the field of view of the first target image output by the main channel. Image fusion is thus achieved while ensuring that the field of view of the designated channel's image is not lost.
An image fusion method provided by the present application is exemplarily described below with reference to the accompanying drawings.
As shown in fig. 1, an image fusion method provided by the present application may include the following steps:
S101: acquiring a first target image output by the main channel and a second target image output by the secondary channel;
wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel.
The image fusion method provided by the present application fuses images captured of a same object by multiple image acquisition channels. To ensure that no field of view is lost from the designated channel's image, the image acquisition channel whose output image is to keep its field of view unchanged during fusion may be designated in advance as the main channel, with the channels other than the main channel as secondary channels; the number of secondary channels may be one or more, which is not limited herein.
For example, in one implementation, the main channel is an infrared thermal imaging channel and the secondary channel is a visible light image acquisition channel. The multiple image acquisition channels may include an infrared thermal imaging channel and a visible light image acquisition channel, in which case the main channel may be the infrared thermal imaging channel and the secondary channel may be the visible light image acquisition channel. The multiple image acquisition channels may further include an ultraviolet image acquisition channel, which may also serve as a secondary channel; of course, this is not limiting. In practical applications, any one image acquisition channel can be selected as the main channel according to requirements, with the remaining channels serving as secondary channels, which is not limited herein.
It can be understood that, for image fusion, the first target image output by the main channel and the second target image output by the secondary channel must be acquired first. When the scheme is applied to a terminal device, the first and second target images can be obtained from an image acquisition device; when the scheme is applied to an image acquisition device, the first target image output by the main channel and the second target image output by the secondary channel can be determined directly.
S102: performing distortion correction on the second target image using the target mapping relationship to obtain a corrected second target image;
wherein the target mapping relationship is a mapping relationship for correcting the degree of distortion of the second target image to that of the first target image.
It should be noted that an image captured by any image acquisition channel may be distorted, to varying degrees, relative to the captured object, and images captured of the same object by different channels have different degrees of distortion. To ensure that the first target image output by the main channel suffers no loss of field of view, distortion correction may be performed on the second target image output by the secondary channel with the first target image as the base, so that the degree of distortion of the corrected second target image matches that of the first target image. Distortion correction can therefore be performed on the second target image using the target mapping relationship. In addition, if there are multiple secondary channels, a target mapping relationship needs to be generated in advance for each secondary channel, so as to distortion-correct the second target image that channel outputs.
After the first and second target images are obtained, distortion correction can be performed on the second target image using the pre-generated target mapping relationship to obtain the corrected second target image. Moreover, since the target mapping relationship corrects the degree of distortion of the second target image to be the same as that of the first target image, the corrected second target image has the same degree of distortion as the first target image, and the two can subsequently be fused.
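As an illustration (not the patent's prescribed implementation), once the target mapping relationship has been materialized as per-pixel lookup tables, applying it amounts to a single image warp. The OpenCV-based sketch below assumes map_x and map_y give, for each pixel of the corrected image, the source coordinates in the uncorrected second target image:

```python
import cv2
import numpy as np

def correct_secondary(second_target: np.ndarray,
                      map_x: np.ndarray,
                      map_y: np.ndarray) -> np.ndarray:
    """Warp the secondary-channel image so that its distortion matches
    the main channel's. map_x/map_y are the target mapping relationship
    in lookup-table form (one source coordinate per output pixel)."""
    return cv2.remap(second_target,
                     map_x.astype(np.float32), map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```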
Illustratively, the target mapping relationship is specifically a mapping relationship for correcting, into a target shape, the display shape within the second target image of the object represented by the second target image's image content;
wherein the target shape is the display shape within the first target image of the object represented by the first target image's image content.
It should be noted that an image's degree of distortion can be characterized by the display shape, within the image, of the object its content represents. Before distortion correction, the display shape of the represented object in the second target image differs from the target shape in the first target image. When distortion correction is performed on the second target image, each pixel belonging to the represented object can be mapped using the target mapping relationship, and the shape formed by the mapped pixels is the same as the target shape in the first target image; that is, the degree of distortion of the distortion-corrected second target image matches that of the first target image.
For example, the object represented by the first target image's content may appear rectangular, while the object represented by the second target image's content may appear as some other shape. When distortion correction is performed on the second target image using the target mapping relationship, the display shape of the represented object in the second target image can be corrected to that rectangle, yielding the distortion-corrected second target image.
In addition, the target mapping relationship can be generated in advance and stored on the terminal device or image acquisition device, so that it can be retrieved quickly, facilitating the distortion correction of the second target image. The specific way of generating the target mapping relationship is described in detail in the following embodiments and is not repeated here.
The above description of the distortion correction process performed on the second target image is merely an example and should not be construed as limiting the present application.
S103: fusing the first target image with the corrected second target image to obtain a fused image;
After distortion correction is performed on the second target image, the resulting corrected second target image has the same degree of distortion as the first target image. At this point the first target image and the corrected second target image can be fused, yielding the fused image.
For example, in one implementation, the field of view of any image output by the main channel is smaller than the field of view of any image output by the secondary channel;
and fusing the first target image with the corrected second target image to obtain the fused image includes:
identifying a designated object region of the corrected second target image to obtain a candidate image corresponding to the designated object region;
determining an image to be fused corresponding to the candidate image, wherein the image to be fused is obtained by resizing the candidate image to the same size as the first target image;
and fusing corresponding pixels of the image to be fused and the first target image to obtain the fused image.
It can be understood that distortion correction of an image causes a loss of field of view; that is, after distortion correction, the corrected second target image loses part of its field of view. To ensure the corrected second target image still covers the area of the first target image, the field of view of any image output by the main channel is made smaller than that of any image output by the secondary channel. When fusing multi-channel images, this scheme guarantees that the main channel's field of view is not lost; taking into account factors such as the tolerances of the acquisition equipment, the secondary channel's field of view can be set larger than the main channel's, i.e., the field of view of any image output by the main channel is smaller than that of any image output by the secondary channel. Then, even though distortion correction costs the secondary channel part of its field of view, the region of the corrected secondary-channel image corresponding to the main channel's output image is not lost. During subsequent fusion, the surplus margin of the corrected secondary-channel image can be cropped away, and the image of the region corresponding to the main channel's image can be selected from the corrected secondary-channel image and fused with the main channel's image. In other words, the scheme achieves image fusion while ensuring that the field of view of the designated channel's image is not lost.
During image fusion, the designated object region in the corrected second target image can be identified to obtain the candidate image; the designated object region may be the region represented by the first target image. Because the secondary channel's field of view is larger than the main channel's, the output second target image may be larger than the first target image, and the candidate image's size may differ from the first target image's. Therefore an image to be fused corresponding to the candidate image is determined, and its pixels are fused with the corresponding pixels of the first target image to obtain the fused image. The designated object region may also be a region corresponding to a target object contained in the first target image; the target object may be, for example, a person or an object.
Since the field of view of any image output by the secondary channel is larger than that of any image output by the main channel, the second target image output by the secondary channel can contain the complete information content represented by the first target image, and after distortion correction the corrected second target image still contains it; that is, the image of the region corresponding to the first target image within the second target image is not lost. By determining an image to be fused of the same size as the first target image, the region corresponding to the first target image can be selected from the corrected second target image, making fusion of the first target image with the image to be fused more convenient.
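A minimal sketch of this fusion step, assuming the designated object region has already been located as a bounding box and that the two images share dtype and channel count; the equal-weight blend is an illustrative choice, not the patent's mandated fusion rule:

```python
import cv2
import numpy as np

def fuse(first_target: np.ndarray, corrected_second: np.ndarray,
         region: tuple, alpha: float = 0.5) -> np.ndarray:
    """region = (x, y, w, h): designated object region inside the
    corrected second target image. Crop it (the candidate image), resize
    it to the first target image's size (the image to be fused), then
    fuse corresponding pixels with a simple weighted blend."""
    x, y, w, h = region
    candidate = corrected_second[y:y + h, x:x + w]
    h1, w1 = first_target.shape[:2]
    to_fuse = cv2.resize(candidate, (w1, h1))
    # Pixel-wise weighted fusion; any per-pixel fusion rule could be
    # substituted here.
    return cv2.addWeighted(first_target, alpha, to_fuse, 1.0 - alpha, 0.0)
```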
In the technical solution of the present application, the acquisition, storage, use, processing, transmission, provision, and disclosure of users' personal information are all carried out with the users' authorization.
It should be noted that the first target image and the second target image in the present embodiment are from a public data set.
With the image fusion method provided by the embodiments of the present application, a first target image output by the main channel and a second target image output by the secondary channel are first acquired; distortion correction is then performed on the second target image using the target mapping relationship, yielding a distortion-corrected second target image; and the first target image and the corrected second target image can then be fused to obtain the fused image. Compared with the related-art approach of distortion-correcting the images of all channels and then fusing them, this scheme performs no distortion correction on the first target image output by the main channel, and uses the target mapping relationship to distortion-correct only the second target image output by the secondary channel, so that the degree of distortion of the corrected second target image matches that of the first target image and the two images are fused without any loss of the field of view of the first target image output by the main channel. Image fusion is thus achieved while ensuring that the field of view of the designated channel's image is not lost.
Optionally, in another embodiment provided by the present application, the target mapping relationship is a coordinate mapping relationship generated through a predetermined calibration procedure;
wherein the predetermined calibration procedure is a calibration performed based on the coordinate relationship, concerning a calibration object, between the sample images obtained when two image sensors capture the same calibration object;
wherein the two image sensors include the image sensor of the main channel and the image sensor of the secondary channel.
It can be understood that the target mapping relationship may be generated by calibration: the calibration is performed based on the coordinate relationship, concerning the calibration object, between the sample images obtained when the image sensors of the main and secondary channels each capture the same calibration object.
Optionally, the calibration process of the predetermined calibration procedure includes:
acquiring a second sample image, wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
and calibrating the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into a specified shape, to obtain the target mapping relationship;
wherein the specified shape is the display shape that the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object.
When the target mapping relationship is generated by calibration, the second sample image obtained when the secondary channel captures the calibration object may be acquired first, and the coordinate mapping relationship needed to correct the display shape of the calibration object in that image into the specified shape may be calibrated, yielding the target mapping relationship. The specified shape may be the display shape the calibration object would have in the first sample image if the main channel's sensor captured it. In this case, the calibrated coordinate mapping relationship corrects the display shape of the calibration object's image content in the secondary channel to its display shape in the main channel, and can therefore serve as the target mapping relationship that corrects the degree of distortion of the secondary channel's image to that of the main channel's image.
For example, as shown in fig. 3a, the display shape of the calibration object in the second sample image may be a pincushion shape, and as shown in fig. 3b, the specified shape may be a rectangle. When generating the target mapping relationship, the coordinate mapping relationship needed to correct the pincushion shape of fig. 3a into the rectangle of fig. 3b can be calibrated and used as the target mapping relationship.
It is to be understood that there are various ways of calibrating the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into the specified shape, and this is not limited herein.
For example, in one implementation, the display shape of the calibration object in the second sample image may be treated as the image shape acquired during camera calibration, and the pixel coordinates of the points of the specified shape may be treated as the physical coordinates of the calibration object. The pixel coordinates of each point of the calibration object's display shape in the second sample image are identified, and based on them, each point of that display shape is mapped to the corresponding point of the specified shape; the coordinate mapping relationship so calibrated is the target mapping relationship.
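One possible realization of this step, assuming corresponding control points have already been detected on the calibration object in the second sample image (src_pts) and paired with their ideal positions in the specified shape (dst_pts); the SciPy-based interpolation is an illustrative choice:

```python
import numpy as np
from scipy.interpolate import griddata

def build_target_mapping(src_pts: np.ndarray, dst_pts: np.ndarray,
                         out_size: tuple):
    """Densify sparse point correspondences into per-pixel lookup tables
    for cv2.remap. src_pts: (N, 2) pixel coordinates of the calibration
    object's points in the second sample image; dst_pts: (N, 2)
    coordinates of the same points in the specified shape; out_size:
    (height, width) of the corrected image."""
    h, w = out_size
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # For each corrected-image pixel, interpolate where to sample the
    # uncorrected second sample image.
    map_x = griddata(dst_pts, src_pts[:, 0], (grid_x, grid_y), method='linear')
    map_y = griddata(dst_pts, src_pts[:, 1], (grid_x, grid_y), method='linear')
    # Pixels outside the control-point hull get no value; send them to
    # the border instead of propagating NaNs into cv2.remap.
    map_x = np.nan_to_num(map_x, nan=-1.0).astype(np.float32)
    map_y = np.nan_to_num(map_y, nan=-1.0).astype(np.float32)
    return map_x, map_y
```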
In addition, it can be understood that camera calibration may be performed using Zhang's calibration method (the Zhang Zhengyou method): photograph the method's checkerboard with the camera from different angles to obtain a set of images; detect the feature points in the images to obtain their pixel coordinates, and compute their physical coordinates from the known checkerboard dimensions and the world-coordinate origin; solve the camera's intrinsic and extrinsic matrices from the relationship between the physical and pixel coordinates; solve the camera's distortion parameters; and refine the intrinsic matrix, the extrinsic matrix, and the distortion parameters with the Levenberg-Marquardt (L-M) algorithm. Of course, camera calibration may also be performed in other ways, which is not limited herein.
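For reference, a compact sketch of Zhang-style checkerboard calibration as wrapped by OpenCV; the board dimensions, square size, and file names are illustrative assumptions, and OpenCV performs the Levenberg-Marquardt refinement internally:

```python
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per checkerboard row and column
square = 0.025        # checkerboard square side, in meters

# Physical coordinates of the corners, with the world origin on the board.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in ["view0.png", "view1.png", "view2.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)      # physical coordinates
        img_points.append(corners)   # pixel coordinates

# Solves the intrinsic matrix, extrinsic matrices, and distortion
# parameters, refined by Levenberg-Marquardt optimization.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```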
The specified shape may be any regular shape, and any method that can determine the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into the specified shape is applicable to this embodiment, which is not limited herein. Because the specified shape can be set to any regular shape, calibrating the coordinate mapping relationship becomes more convenient and the target mapping relationship can be generated quickly, thereby improving the efficiency of subsequent image fusion.
Optionally, if the image sensor of the main channel captures the calibration object, the display shape of the calibration object in the resulting first sample image is a pre-specified target figure shape;
and the actual shape of the calibration object is the shape obtained by projecting the target figure shape by means of optical back projection.
It can be understood that, if the image sensor of the main channel captures the calibration object, the display shape of the calibration object in the resulting first sample image may be a pre-specified target figure shape, i.e., the shape the calibration object displays in the first sample image when the main channel's sensor captures it, for example a rectangle, a diamond, or the like.
The actual shape of the calibration object may be obtained by projecting the target figure shape by means of optical back projection; of course, the actual shape of the calibration object may also be determined in other ways.
Optionally, the actual shape of the calibration object is determined by:
inputting the shape parameters of the target figure shape, the specified optical parameters of the image sensor of the main channel, and the object distance information corresponding to the main channel into optical simulation software, so that the optical simulation software optically back-projects the target figure shape having those shape parameters, based on the object distance information corresponding to the main channel and the specified optical parameters, to obtain the actual shape of the calibration object;
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the image content of any image output by the main channel exhibits relative to the corresponding captured object.
The actual shape of the calibration object can thus be obtained by back projection through optical software simulation: the shape parameters of the target figure shape, the specified optical parameters of the main channel's image sensor, and the object distance information corresponding to the main channel are determined first, and the actual shape of the calibration object is then computed. The specified optical parameters may be, for example, lens and sensor parameters of the main channel, such as the lens focal length, the sensor pixel count, the pixel size, and the distortion curve.
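The patent leaves this step to optical simulation software. Purely as a simplified stand-in, the sketch below back-projects the target figure shape's points to the object plane under a pinhole model with a single radial distortion curve; K, inv_distort, and the model itself are assumptions, far coarser than a real lens simulation:

```python
import numpy as np

def back_project(points_px: np.ndarray, K: np.ndarray,
                 inv_distort, object_distance: float) -> np.ndarray:
    """Map pixel points of the target figure shape back to object-plane
    coordinates at the given object distance. inv_distort: a callable
    that undoes the lens's radial distortion curve in normalized image
    coordinates."""
    ones = np.ones((len(points_px), 1))
    pts = np.hstack([points_px.astype(np.float64), ones])
    rays = (np.linalg.inv(K) @ pts.T).T      # normalized (distorted) coords
    xy = inv_distort(rays[:, :2])            # undo the distortion curve
    return xy * object_distance              # actual shape on object plane
```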
In addition, it should be noted that the calibration process of the predetermined calibration procedure may alternatively include:
acquiring a first sample image and a second sample image, which are the images obtained when the image sensors of the main channel and the secondary channel, respectively, capture the calibration object, the actual shape of the calibration object being a preset shape;
and calibrating the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into its display shape in the first sample image, to obtain the target mapping relationship.
In this case, when generating the target mapping relationship, the display shape of the calibration object in the second sample image can be calibrated directly against its display shape in the first sample image to obtain the target mapping relationship. The first and second sample images are obtained by capturing the calibration object in advance with the main and secondary channels, the actual shape of the calibration object need not be determined by back projection, and the target mapping relationship can be generated quickly.
That is, the actual shape of the calibration object may be either a preset shape or a shape generated by optical back projection, which is not limited herein.
It should be noted that optical software simulation can determine the actual shape of the calibration object accurately, including parameter information such as its size and shape. After the actual shape of the calibration object is obtained, the secondary channel can capture the calibration object to obtain the second sample image and determine the calibration object's display shape therein, and the coordinate mapping relationship needed to correct that display shape into the specified shape can then be calibrated to obtain the target mapping relationship. Compared with presetting the actual shape of the calibration object, determining the target mapping relationship through optical software simulation gives a better result: the second target image is corrected more accurately in subsequent distortion correction, and the fused image obtained during fusion is of higher quality.
The image fusion method provided by the present application is further described below with reference to another embodiment.
As shown in fig. 2, an image fusion method provided by the present application may include the following steps:
S201: determining the main channel and acquiring the images of the main channel and the secondary channel. That is, the image acquisition channel whose output image is to keep its field of view unchanged during image fusion is designated in advance as the main channel, and the images of the main channel and the secondary channel are acquired; this corresponds to acquiring the first target image output by the main channel and the second target image output by the secondary channel.
S202: acquiring the distortion-correction mapping relationship, i.e., the mapping relationship used for distortion-correcting the secondary channel's image; this corresponds to acquiring the pre-generated target mapping relationship. To obtain it, the second sample image of the secondary channel may be acquired first, and the coordinate mapping relationship needed to correct the display shape of the calibration object in the second sample image into the specified shape is calibrated, yielding the distortion-correction mapping relationship; this corresponds to the way the target mapping relationship is determined.
S203: performing distortion correction on the secondary channel's image. That is, the secondary channel's image is distortion-corrected using the mapping relationship obtained in S202; this corresponds to performing distortion correction on the second target image using the target mapping relationship to obtain the corrected second target image.
S204: fusing the distortion-corrected secondary-channel image with the main channel's image. That is, each pixel of the distortion-corrected secondary-channel image is fused with the corresponding pixel of the main channel's image; this corresponds to fusing the first target image with the corrected second target image to obtain the fused image.
It should be noted that steps S201 to S204 parallel steps S101 to S103; their descriptions may be referred to one another and are not repeated here.
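Wiring the helpers sketched above into the S201-S204 flow, with random arrays standing in for real channel output and a degenerate identity mapping standing in for a real calibration (everything here is illustrative):

```python
import numpy as np

h, w = 480, 640
first = np.random.randint(0, 256, (h, w, 3), np.uint8)           # main channel
second = np.random.randint(0, 256, (2 * h, 2 * w, 3), np.uint8)  # secondary

# S202: an identity mapping built from a coarse control grid (a real
# calibration would supply distorted-to-specified point pairs instead).
ys, xs = np.mgrid[0:2 * h:8, 0:2 * w:8]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
map_x, map_y = build_target_mapping(pts, pts, (2 * h, 2 * w))

corrected = correct_secondary(second, map_x, map_y)              # S203
fused = fuse(first, corrected, region=(w // 2, h // 2, w, h))    # S204
```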
According to the image fusion method described above, the main channel serves as the base: no distortion correction is performed on the main channel's image, ensuring that the main channel's field of view is not lost; the secondary channel's image is distortion-corrected using the acquired distortion-correction mapping relationship so that its degree of distortion becomes the same as that of the main channel's image; and the images are finally fused. Image fusion is thus achieved without any loss of the main channel image's field of view.
In addition, for multi-channel image acquisition equipment with a large field of view, the field-of-view requirement of the user's main channel can be met preferentially, solving the prior-art problem that every channel must be distortion-corrected during image fusion. Any channel can be selected as the main channel as required, the main channel's image is kept unchanged, and distortion correction is applied only to the secondary channel's image so that it matches the main channel image's degree of distortion; reducing the number of distortion corrections also reduces the errors they introduce.
Based on the image fusion method described above, the present application further provides an image fusion apparatus. As shown in fig. 4, the apparatus includes:
an acquisition module 410, configured to acquire a first target image output by a main channel and a second target image output by a secondary channel, wherein the main channel is a pre-designated image acquisition channel, used for image fusion, whose output image keeps its field of view unchanged, and the secondary channel is an image acquisition channel other than the main channel;
a correction module 420, configured to perform distortion correction on the second target image using a target mapping relationship to obtain a corrected second target image, wherein the target mapping relationship is a mapping relationship for correcting the degree of distortion of the second target image to that of the first target image;
and a fusion module 430, configured to fuse the first target image with the corrected second target image to obtain a fused image.
With the image fusion apparatus provided by the embodiments of the present application, the first target image output by the main channel and the second target image output by the secondary channel are first acquired; distortion correction is performed on the second target image using the target mapping relationship, yielding a distortion-corrected second target image; and the first target image and the corrected second target image can then be fused to obtain the fused image. Compared with the related-art approach of distortion-correcting the images of all channels and then fusing them, this scheme performs no distortion correction on the first target image output by the main channel, and uses the target mapping relationship to distortion-correct only the second target image output by the secondary channel, so that the degree of distortion of the corrected second target image matches that of the first target image and the two images are fused without any loss of the field of view of the first target image output by the main channel. Image fusion is thus achieved while ensuring that the field of view of the designated channel's image is not lost.
Optionally, the target mapping relationship is specifically a mapping relationship for correcting, into a target shape, the display shape within the second target image of the object represented by the second target image's image content;
wherein the target shape is the display shape within the first target image of the object represented by the first target image's image content.
Optionally, the target mapping relationship is a coordinate mapping relationship generated through a predetermined calibration procedure;
wherein the predetermined calibration procedure is a calibration performed based on the coordinate relationship, concerning a calibration object, between the sample images obtained when two image sensors capture the same calibration object;
wherein the two image sensors include the image sensor of the main channel and the image sensor of the secondary channel.
Optionally, the predetermined calibration process includes:
acquiring a second sample image; wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
calibrating the display shape of the calibration object in the second sample image, and taking as the target mapping relationship the coordinate mapping relationship needed to correct that display shape into a specified shape (see the sketch below);
wherein the specified shape is the display shape the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object.
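A hedged sketch of this calibration step follows. The application does not fix the mapping's parametric form, so a homography estimated from matched calibration-object points is used here purely for illustration (a real system might need a denser, distortion-aware model); `estimate_target_mapping` is a hypothetical name:

```python
import cv2
import numpy as np

def estimate_target_mapping(secondary_pts, specified_pts, image_size):
    """Estimate remap tables that warp a secondary-channel image so the
    calibration object's display shape matches the specified shape."""
    # H maps corrected (specified-shape) coordinates to raw secondary-image
    # coordinates, which is the sampling direction cv2.remap expects.
    H, _ = cv2.findHomography(np.float32(specified_pts),
                              np.float32(secondary_pts), cv2.RANSAC)
    # Bake the mapping into dense tables: for each destination pixel, where
    # to sample in the uncorrected secondary image.
    w, h = image_size
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).reshape(-1, 1, 2)
    src = cv2.perspectiveTransform(grid, H).reshape(h, w, 2)
    return src[..., 0], src[..., 1]  # map_x, map_y for cv2.remap
```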
Optionally, when the image sensor of the main channel is used to capture the calibration object, the display shape of the calibration object in the obtained first sample image is a pre-specified target graphic shape;
and the actual shape of the calibration object is the shape obtained by projecting the target graphic shape by means of optical back projection.
Optionally, determining the actual shape of the calibration object includes:
inputting the shape parameters of the target graphic shape, the specified optical parameters of the main channel's image sensor, and the object distance information corresponding to the main channel into optical simulation software, so that the software optically back-projects the target graphic shape having those shape parameters, based on the object distance information and the specified optical parameters, to obtain the actual shape of the calibration object (a simplified stand-in for this simulation is sketched below);
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the content of any image output by the main channel exhibits relative to the corresponding captured object.
Optionally, the field angle of any image output by the main channel is smaller than the field angle of any image output by the secondary channel;
in this case, the fusion module is specifically configured to:
identify a designated object area in the corrected second target image to obtain a candidate image corresponding to that area;
determine an image to be fused corresponding to the candidate image; wherein the image to be fused is the candidate image resized to the same size as the first target image;
and fuse corresponding pixels of the image to be fused and the first target image to obtain the fused image (see the sketch below).
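The three steps just listed might look as follows. `detect_region` is a hypothetical placeholder for whatever designated-object detection the device uses, since the application leaves the detector unspecified:

```python
import cv2

def fuse_region(first_img, corrected_second_img, detect_region, alpha=0.5):
    # 1. Identify the designated object area and crop the candidate image.
    x, y, w, h = detect_region(corrected_second_img)  # assumed bounding box
    candidate = corrected_second_img[y:y + h, x:x + w]
    # 2. Resize the candidate image to the first target image's size.
    to_fuse = cv2.resize(candidate, (first_img.shape[1], first_img.shape[0]))
    # 3. Fuse corresponding pixels; a weighted average is one concrete choice
    #    (assumes both images share dtype and channel count).
    return cv2.addWeighted(first_img, alpha, to_fuse, 1.0 - alpha, 0.0)
```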
Optionally, the main channel is an infrared thermal imaging channel, and the secondary channel is a visible-light image acquisition channel.
An embodiment of the present application further provides an electronic device, as shown in fig. 5, including:
a memory 501 for storing a computer program;
the processor 502 is configured to implement the following steps when executing the program stored in the memory 501:
acquiring a first target image output by a main channel and a second target image output by a secondary channel; wherein the main channel is a pre-specified image acquisition channel whose output image keeps its field angle unchanged during image fusion, and the secondary channel is an image acquisition channel other than the main channel;
carrying out distortion correction processing on the second target image by using the target mapping relation to obtain a corrected second target image; wherein the target mapping relationship is a mapping relationship for correcting a distortion degree of the second target image to a distortion degree of the first target image;
and carrying out image fusion on the first target image and the corrected second target image to obtain a fused image.
The electronic device may further include a communication bus and/or a communication interface, and the processor 502, the communication interface, and the memory 501 complete communication with each other through the communication bus.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the image fusion methods described above.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the image fusion methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or other media (e.g., Solid State Disks (SSDs)), among others.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a correlated manner; identical or similar parts among the embodiments may be understood with reference to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the description of the method embodiment.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the scope of protection of the present application.

Claims (11)

1. An image fusion method, comprising:
acquiring a first target image output by a main channel and a second target image output by a secondary channel; wherein the main channel is a pre-specified image acquisition channel whose output image keeps its field angle unchanged during image fusion, and the secondary channel is an image acquisition channel other than the main channel;
carrying out distortion correction processing on the second target image by using the target mapping relation to obtain a corrected second target image; wherein the target mapping relationship is a mapping relationship for correcting a distortion degree of the second target image to a distortion degree of the first target image;
and carrying out image fusion on the first target image and the corrected second target image to obtain a fused image.
2. The method of claim 1,
the target mapping relationship is specifically: a mapping relationship that corrects the display shape, within the second target image, of the object represented by the second target image's image content into a target shape;
wherein the target shape is the display shape, within the first target image, of the object represented by the first target image's image content.
3. The method according to claim 1 or 2, wherein the target mapping relationship is a coordinate mapping relationship generated by a predetermined calibration process;
wherein the predetermined calibration process is: a calibration performed based on the coordinate relationships between the calibration object and the sample images obtained when two image sensors each capture the same calibration object;
wherein the two image sensors are the image sensor of the main channel and the image sensor of the secondary channel.
4. The method according to claim 3, wherein the predetermined calibration process includes:
acquiring a second sample image; wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
calibrating the display shape of the calibration object in the second sample image, and taking as the target mapping relationship the coordinate mapping relationship needed to correct that display shape into a specified shape;
wherein the specified shape is the display shape the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object.
5. The method according to claim 4, wherein, when the image sensor of the main channel is used to capture the calibration object, the display shape of the calibration object in the obtained first sample image is a pre-specified target graphic shape;
and the actual shape of the calibration object is the shape obtained by projecting the target graphic shape by means of optical back projection.
6. The method of claim 5, wherein determining the actual shape of the calibration object includes:
inputting the shape parameters of the target graphic shape, the specified optical parameters of the main channel's image sensor, and the object distance information corresponding to the main channel into optical simulation software, so that the software optically back-projects the target graphic shape having those shape parameters, based on the object distance information and the specified optical parameters, to obtain the actual shape of the calibration object;
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the content of any image output by the main channel exhibits relative to the corresponding captured object.
7. The method according to any one of claims 1 to 6, wherein the field angle of any image output by the main channel is smaller than the field angle of any image output by the secondary channel;
and performing image fusion on the first target image and the corrected second target image to obtain the fused image includes:
identifying a designated object area in the corrected second target image to obtain a candidate image corresponding to that area;
determining an image to be fused corresponding to the candidate image; wherein the image to be fused is the candidate image resized to the same size as the first target image;
and fusing corresponding pixels of the image to be fused and the first target image to obtain the fused image.
8. The method of any one of claims 1-6, wherein the main channel is an infrared thermal imaging channel and the secondary channel is a visible-light image acquisition channel.
9. An image fusion apparatus, comprising:
an acquisition module, configured to acquire a first target image output by a main channel and a second target image output by a secondary channel; wherein the main channel is a pre-specified image acquisition channel whose output image keeps its field angle unchanged during image fusion, and the secondary channel is an image acquisition channel other than the main channel;
the correction module is used for carrying out distortion correction processing on the second target image by utilizing the target mapping relation to obtain a corrected second target image; wherein the target mapping relationship is a mapping relationship for correcting a distortion degree of the second target image to a distortion degree of the first target image;
and the fusion module is used for carrying out image fusion on the first target image and the corrected second target image to obtain a fused image.
10. The apparatus according to claim 9, wherein the target mapping relationship is specifically: a mapping relationship that corrects the display shape, within the second target image, of the object represented by the second target image's image content into a target shape, the target shape being the display shape, within the first target image, of the object represented by the first target image's image content;
and/or,
the target mapping relationship is a coordinate mapping relationship generated by a predetermined calibration process;
wherein the predetermined calibration process is: a calibration performed based on the coordinate relationships between the calibration object and the sample images obtained when two image sensors each capture the same calibration object;
wherein the two image sensors are the image sensor of the main channel and the image sensor of the secondary channel;
and/or,
the predetermined calibration process includes the following steps:
acquiring a second sample image; wherein the second sample image is the image obtained when the image sensor of the secondary channel captures the calibration object;
calibrating the display shape of the calibration object in the second sample image, and taking as the target mapping relationship the coordinate mapping relationship needed to correct that display shape into a specified shape;
wherein the specified shape is the display shape the calibration object would have in the first sample image obtained if the image sensor of the main channel captured the calibration object;
and/or,
when the image sensor of the main channel is used to capture the calibration object, the display shape of the calibration object in the obtained first sample image is a pre-specified target graphic shape;
and the actual shape of the calibration object is the shape obtained by projecting the target graphic shape by means of optical back projection;
and/or,
determining the actual shape of the calibration object includes the following steps:
inputting the shape parameters of the target graphic shape, the specified optical parameters of the main channel's image sensor, and the object distance information corresponding to the main channel into optical simulation software, so that the software optically back-projects the target graphic shape having those shape parameters, based on the object distance information and the specified optical parameters, to obtain the actual shape of the calibration object;
wherein the specified optical parameters are the optical parameters correlated with the image distortion that the content of any image output by the main channel exhibits relative to the corresponding captured object;
and/or,
the field angle of any image output by the main channel is smaller than the field angle of any image output by the secondary channel;
and the fusion module is specifically configured to:
identify a designated object area in the corrected second target image to obtain a candidate image corresponding to that area;
determine an image to be fused corresponding to the candidate image; wherein the image to be fused is the candidate image resized to the same size as the first target image;
and fuse corresponding pixels of the image to be fused and the first target image to obtain the fused image;
and/or,
the main channel is an infrared thermal imaging channel, and the secondary channel is a visible-light image acquisition channel.
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the image fusion method according to any one of claims 1 to 8 when executing a program stored in a memory.
CN202211582779.3A 2022-12-09 2022-12-09 Image fusion method and device and electronic equipment Pending CN115829896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211582779.3A CN115829896A (en) 2022-12-09 2022-12-09 Image fusion method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211582779.3A CN115829896A (en) 2022-12-09 2022-12-09 Image fusion method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115829896A true CN115829896A (en) 2023-03-21

Family

ID=85546179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211582779.3A Pending CN115829896A (en) 2022-12-09 2022-12-09 Image fusion method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115829896A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117788532A (en) * 2023-12-26 2024-03-29 四川新视创伟超高清科技有限公司 Ultra-high definition double-light fusion registration method based on FPGA in security field


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination