CN113159229B - Image fusion method, electronic equipment and related products - Google Patents

Image fusion method, electronic equipment and related products

Info

Publication number
CN113159229B
CN113159229B (application CN202110548512.1A)
Authority
CN
China
Prior art keywords
image
visible light
infrared
determining
saliency map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110548512.1A
Other languages
Chinese (zh)
Other versions
CN113159229A (en)
Inventor
张跃强
郭宏希
陈文均
刘肖琳
李狄龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Qiantang Science and Technology Innovation Center
Original Assignee
Shenzhen University
Qiantang Science and Technology Innovation Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University, Qiantang Science and Technology Innovation Center filed Critical Shenzhen University
Priority to CN202110548512.1A priority Critical patent/CN113159229B/en
Publication of CN113159229A publication Critical patent/CN113159229A/en
Application granted granted Critical
Publication of CN113159229B publication Critical patent/CN113159229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image fusion method, electronic equipment and related products, wherein the method comprises the following steps: acquiring an infrared image through an infrared camera; the method comprises the steps that a visible light image is obtained through a visible light camera, and the infrared camera and the visible light camera correspond to the same shooting range; determining a first saliency map of the infrared image; determining a second saliency map of the visible light image; determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map; and carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image. The embodiment of the application can improve the image quality in the military detection task.

Description

Image fusion method, electronic equipment and related products
Technical Field
The application relates to the technical field of image processing, in particular to an image fusion method, electronic equipment and related products.
Background
In a military detection task, an inspector usually uses an image acquisition device to monitor a target scene in order to grasp the position and motion information of a target. However, a single visible light sensor cannot capture occluded or camouflaged targets in the scene, and its imaging is easily disturbed by dim light and smoke. For this reason, an infrared camera is also used in the target detection task to acquire the heat radiation information of the target and convert it into brightness information, which alleviates the degradation of image quality that the visible light camera suffers under occlusion, camouflage and dim light. However, infrared camera imaging has its own problems: the brightness distribution does not accord with the visual habits of the human eye, the resolution is low, and the ability to capture scene detail information is insufficient. Therefore, how to improve the image quality in the military detection task is a problem that needs to be solved.
Disclosure of Invention
The embodiment of the application provides an image fusion method, electronic equipment and related products, which can improve the image quality in military detection tasks.
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring an infrared image through an infrared camera;
the method comprises the steps that a visible light image is obtained through a visible light camera, and the infrared camera and the visible light camera correspond to the same shooting range;
determining a first saliency map of the infrared image;
determining a second saliency map of the visible light image;
determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
and carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image.
In a second aspect, an embodiment of the present application provides an image fusion apparatus, including: an acquisition unit, a determination unit and an image fusion unit, wherein,
the acquisition unit is used for acquiring an infrared image through the infrared camera; the method comprises the steps that a visible light camera is used for obtaining a visible light image, and the infrared camera and the visible light camera correspond to the same shooting range;
The determining unit is used for determining a first saliency map of the infrared image; determining a second saliency map of the visible light image; determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
the image fusion unit is used for carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
it can be seen that, according to the image fusion method, the electronic device and the related products described in the embodiments of the present application, an infrared image is obtained through an infrared camera, a visible light image is obtained through a visible light camera, the infrared camera and the visible light camera correspond to the same shooting range, a first saliency map of the infrared image is determined, a second saliency map of the visible light image is determined, a first weight of the infrared image and a second weight of the visible light image are determined according to the first saliency map and the second saliency map, and image fusion is performed on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image. Furthermore, by performing saliency detection on the images, the region of the investigation target in the image can be determined, and the infrared and visible light images are fused within that region during the fusion process, so that the loss of scene details caused by pixel-level fusion of the infrared and visible light images is reduced.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is a schematic flow chart of an image fusion method according to an embodiment of the present application;
FIG. 1B is a schematic illustration of a heatable checkerboard calibration plate for calibrating an infrared and visible light binocular system according to an embodiment of the present application;
FIG. 2 is a flowchart of another image fusion method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a functional unit composition block diagram of an image fusion apparatus according to an embodiment of the present application.
Detailed Description
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the list of steps or elements but may include, in one possible example, other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The electronic device according to the embodiment of the present application may include, but is not limited to: smart phones, tablet computers, smart robots, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to wireless modems, as well as various forms of user devices (UserEquipment, UE), mobile stations (MobileStation, MS), terminal devices (terminal devices), etc., without limitation, the electronic device may also be a server.
Referring to fig. 1A, fig. 1A is a schematic flow chart of an image fusion method according to an embodiment of the present application, as shown in the drawings, applied to an electronic device, the image fusion method includes:
101. and acquiring an infrared image through an infrared camera.
In the embodiment of the application, the infrared image may be any frame of image in the video shot by the infrared camera. In a specific implementation, the electronic device can shoot through the infrared camera, and then an infrared image can be obtained.
Optionally, the step 101 of acquiring an infrared image by using an infrared camera may include the following steps:
11. acquiring a target environmental temperature;
12. determining a first shooting parameter corresponding to the target environmental temperature according to a mapping relation between a preset environmental temperature and shooting parameters;
13. and shooting according to the first shooting parameters to obtain the infrared image.
In the embodiment of the present application, a mapping relationship between a preset ambient temperature and a shooting parameter may be stored in advance in the electronic device, and the shooting parameter may be at least one of the following: the focal length, the sensitivity, the infrared light brightness, the wavelength of the infrared light, the operating frequency of the infrared light, the transmitting power of the infrared light, the operating current of the infrared camera, the operating voltage of the infrared camera, the operating power of the infrared camera, and the like are not limited herein.
In a specific implementation, the electronic device can acquire the target environmental temperature, and determine a first shooting parameter corresponding to the target environmental temperature according to a mapping relation between the preset environmental temperature and the shooting parameter, so that shooting parameters suitable for the temperature can be obtained, and then shooting is performed according to the first shooting parameter to obtain an infrared image, thereby being beneficial to improving the image quality of the infrared image.
Optionally, the step 11 of obtaining the target ambient temperature may include the following steps:
111. acquiring a preview image through the infrared camera;
112. determining an average gray value of the preview image;
113. determining a reference temperature corresponding to the average gray value according to a mapping relation between a preset temperature and a gray value;
114. dividing the preview image into a plurality of areas, and determining the gray value of each area in the plurality of areas to obtain a plurality of gray values;
115. performing mean square error operation according to the gray values to obtain a target mean square error;
116. determining a target optimization factor corresponding to the target mean square error according to a mapping relation between a preset mean square error and the optimization factor;
117. and optimizing the reference temperature according to the target optimization factor to obtain the target environmental temperature.
In a specific implementation, a mapping relation between a preset temperature and a gray value and a mapping relation between a preset mean square error and an optimization factor can be stored in electronic equipment in advance.
Specifically, the electronic device may acquire the preview image through the infrared camera and determine the average gray value of the preview image; since infrared imaging is based on temperature, the average gray value reflects, to a certain extent, the overall temperature of the objects in the shooting scene. The electronic device may then determine the reference temperature corresponding to the average gray value according to the mapping relation between the preset temperature and the gray value. The electronic device may further divide the preview image into a plurality of regions, determine the gray value of each region to obtain a plurality of gray values, perform a mean square error operation on these gray values to obtain a target mean square error, and determine a target optimization factor corresponding to the target mean square error according to the mapping relation between the preset mean square error and the optimization factor, where the value range of the optimization factor may be -0.2 to 0.2. Considering that temperature changes between neighborhoods in the image also affect the actual temperature of the shooting scene, the degree of influence between neighborhoods is used to optimize the reference temperature, which helps to evaluate the scene temperature accurately. Finally, the reference temperature is optimized according to the target optimization factor to obtain the target environmental temperature, as follows:
Target ambient temperature = (1 + target optimization factor) × reference temperature
Further, the ambient temperature can be accurately estimated by the preview image.
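The following is a minimal sketch of this temperature-estimation step (steps 111 to 117). It assumes the two mapping relations are supplied as simple lookup helpers; the function names, the 4 × 4 region grid and the table contents are illustrative and are not taken from the patent.

```python
import numpy as np

def estimate_ambient_temperature(preview_gray, gray_to_temp, mse_to_factor, grid=(4, 4)):
    """Estimate the ambient temperature from an infrared preview image (steps 111-117).

    preview_gray  : 2-D array, infrared preview image
    gray_to_temp  : callable mapping an average gray value to a reference temperature
    mse_to_factor : callable mapping a mean square error to an optimization factor
                    (expected range roughly -0.2 .. 0.2)
    """
    # Steps 112-113: average gray value -> reference temperature
    reference_temp = gray_to_temp(float(preview_gray.mean()))

    # Step 114: split the preview into regions and take the mean gray of each region
    h, w = preview_gray.shape
    rows, cols = grid
    region_means = [
        preview_gray[i * h // rows:(i + 1) * h // rows,
                     j * w // cols:(j + 1) * w // cols].mean()
        for i in range(rows) for j in range(cols)
    ]

    # Step 115: mean square error over the region gray values
    target_mse = float(np.var(region_means))

    # Steps 116-117: optimization factor -> corrected ambient temperature
    factor = mse_to_factor(target_mse)
    return (1.0 + factor) * reference_temp
```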
102. And obtaining a visible light image through a visible light camera, wherein the infrared camera and the visible light camera correspond to the same shooting range.
In the embodiment of the application, the electronic equipment can comprise an infrared camera system and a visible camera system, and the infrared camera and the visible camera can be arranged on the infrared camera system and the visible camera system. The infrared camera and the visible light camera correspond to the same shooting range, namely calibration is carried out between the infrared camera and the visible light camera, and the shot preview pictures are registered.
Optionally, the step 102 of obtaining the visible light image by the visible light camera may include the following steps:
21. acquiring a target environment parameter;
22. determining a second shooting parameter corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and the shooting parameter;
23. and shooting according to the second shooting parameters to obtain the visible light image.
In a specific implementation, the environmental parameter may be at least one of: the ambient brightness, ambient color temperature, ambient humidity, weather, etc., are not limited herein, and the photographing parameters may be at least one of the following: sensitivity, exposure time, focal length, zoom parameter, and the like are not limited herein.
In a specific implementation, a mapping relation between a preset environmental parameter and a shooting parameter can be stored in the electronic device in advance, and then the electronic device can acquire a target environmental parameter, then a second shooting parameter corresponding to the target environmental parameter is determined according to the mapping relation between the preset environmental parameter and the shooting parameter, and shooting is performed according to the second shooting parameter to obtain a visible light image, so that a shooting image which is suitable for the environment can be obtained.
103. A first saliency map of the infrared image is determined.
In a specific implementation, since the infrared image has only gray information and has low resolution, the saliency detection of the infrared video sequence can be realized based on global contrast.
Optionally, the determining the first saliency map of the infrared image in the step 103 may include the following steps:
31. acquiring a histogram of the infrared image;
32. a first saliency map of the infrared image is determined from the histogram.
In a specific implementation, for a frame of the infrared video sequence, i.e. the infrared image, the saliency of each pixel is calculated as follows:
V(p) = |I_p - I_1| + |I_p - I_2| + ... + |I_p - I_N|
where p denotes a pixel position in the infrared image, I_p denotes the gray value at that position, N denotes the number of pixels in the infrared image, and V denotes the first saliency map. Furthermore, the above formula can be simplified using the image histogram distribution:
V(p) = Σ_j h_j · |I_p - j|, j = 0, 1, ..., L-1
where L denotes the number of gray levels in the image (which can be taken as 255) and h_j denotes the number of pixels with gray level j in the infrared image. After the saliency is calculated, the saliency map corresponding to the infrared image is obtained through normalization.
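A compact sketch of this histogram-accelerated global-contrast saliency, assuming an 8-bit infrared image; the final division is the normalization step mentioned above.

```python
import numpy as np

def infrared_saliency(ir_gray):
    """Global-contrast saliency computed via the image histogram (steps 31-32).

    V(p) = sum_j h_j * |I_p - j|, evaluated once per gray level instead of per pixel.
    """
    ir_gray = ir_gray.astype(np.uint8)
    hist = np.bincount(ir_gray.ravel(), minlength=256).astype(np.float64)  # h_j

    levels = np.arange(256, dtype=np.float64)
    # Saliency value of each possible gray level k: sum_j h_j * |k - j|
    lut = np.abs(levels[:, None] - levels[None, :]) @ hist

    saliency = lut[ir_gray]                       # look up the value per pixel
    return saliency / (saliency.max() + 1e-12)    # normalize to [0, 1]
```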
104. A second saliency map of the visible light image is determined.
In a specific implementation, image saliency is an important visual feature of an image and represents the degree of importance that human eyes attach to each region of the image. The electronic device may determine a second saliency map of the visible light image.
Optionally, the determining the second saliency map of the visible light image in step 104 may include the following steps:
41. acquiring color channel parameters of the visible light image;
42. determining a red-green color component, a blue-yellow color component, a brightness component and a motion component of the visible light image according to the color channel parameters;
43. determining a first reference expression of the visible light image according to the red-green color component, the blue-yellow color component, the luminance component, and the motion component;
44. simplifying the reference expression to obtain a simplified expression;
45. performing a quaternion Fourier transform on the simplified expression to obtain a frequency domain expression;
46. Extracting target phase information according to the frequency domain expression, and performing inverse Fourier transform on the target phase information to obtain a second reference expression;
47. and filtering the second reference expression to obtain the second saliency map.
In a specific implementation, in the embodiment of the present application, the color channel parameters may be the r, g and b color channel parameters. The visible light image can be any frame image in a visible light video sequence, and the saliency of the visible light video sequence is detected as follows: the acquired picture is represented by four components (red-green, blue-yellow, brightness and motion), and then a quaternion Fourier transform is performed on it to obtain the phase spectrum, where the formulas are as follows:
RG(t)=R(t)-G(t)
BY(t)=B(t)-Y(t)
M(t)=|I(t)-I(t-1)|
where t denotes the current frame number of the video sequence and r, g and b denote the three color channels of the input image; RG, BY, I and M obtained by calculation denote the two color components (the red-green color component and the blue-yellow color component), the luminance component and the motion component of the image respectively. The current frame image can then be represented by a quaternion, giving the first reference expression, specifically as follows:
q(t) = M(t) + RG(t)u_1 + BY(t)u_2 + I(t)u_3
where u_i (i = 1, 2, 3) satisfies u_1 ⊥ u_2, u_2 ⊥ u_3, u_1 ⊥ u_3 and u_3 = u_1·u_2.
Further, q(t) can be simplified into the following form, giving the simplified expression of q(t):
q(t) = f_1(t) + f_2(t)u_2
f_1(t) = M(t) + RG(t)u_1
f_2(t) = BY(t) + I(t)u_1
Performing the quaternion Fourier transform on q(t) gives the frequency domain expression, specifically as follows:
Q[u, v] = F_1[u, v] + F_2[u, v]u_2
Let Q(t) be the frequency domain representation of q(t); then Q(t) can be written in polar form:
Q(t) = ||Q(t)||e^(uφ(t))
where ||Q(t)|| is the amplitude spectrum part of the Fourier transform and φ(t) is the phase spectrum part. Setting ||Q(t)|| = 1 extracts the phase information of the spectrum of Q(t), i.e. the target phase information, and the second reference expression can be obtained by performing an inverse Fourier transform on this phase-only spectrum, specifically as follows:
q′ = ρ_0 + ρ_1·u_1 + ρ_2·u_2 + ρ_3·u_3
where ρ_i (i = 0, 1, 2, 3) denotes each component of the quaternion obtained by the inverse Fourier transform of the phase spectrum. Finally, the inverse Fourier transform result may be subjected to Gaussian filtering to obtain the second saliency map sM(t) (sM is an abbreviation for saliency map), specifically as follows:
sM(t) = g ∗ ||q′(t)||²
where g denotes a two-dimensional Gaussian filter kernel and ∗ denotes the convolution operation.
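A sketch of this phase-spectrum saliency computation, using the standard symplectic decomposition in which the quaternion Fourier transform is evaluated as two ordinary complex FFTs (one for f_1 = M + RG·u_1, one for f_2 = BY + I·u_1). The color-opponent definitions of R, G, B, Y and the Gaussian kernel width are assumptions where the patent text does not spell them out.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pqft_saliency(frame_rgb, prev_intensity=None, sigma=3.0):
    """Phase-spectrum-of-quaternion-FT saliency for one visible-light frame (steps 41-47).

    frame_rgb      : float array (H, W, 3) with channels in r, g, b order
    prev_intensity : intensity image of the previous frame (for the motion component M)
    """
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]

    # Color-opponent and intensity components (assumed standard PQFT definitions)
    R = r - (g + b) / 2
    G = g - (r + b) / 2
    B = b - (r + g) / 2
    Y = (r + g) / 2 - np.abs(r - g) / 2 - b
    I = (r + g + b) / 3

    RG = R - G                                    # red-green component
    BY = B - Y                                    # blue-yellow component
    M = np.abs(I - prev_intensity) if prev_intensity is not None else np.zeros_like(I)

    # q(t) = f1 + f2*u2 with f1 = M + RG*u1, f2 = BY + I*u1  ->  two complex FFTs
    F1 = np.fft.fft2(M + 1j * RG)
    F2 = np.fft.fft2(BY + 1j * I)

    # ||Q|| is the quaternion modulus; keep only the phase by normalizing both parts
    modulus = np.sqrt(np.abs(F1) ** 2 + np.abs(F2) ** 2) + 1e-12
    q1 = np.fft.ifft2(F1 / modulus)
    q2 = np.fft.ifft2(F2 / modulus)

    # sM(t) = g * ||q'(t)||^2, smoothed with a 2-D Gaussian kernel
    sal = np.abs(q1) ** 2 + np.abs(q2) ** 2
    return gaussian_filter(sal, sigma=sigma), I   # return I for the next frame's motion term
```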
105. And determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map.
In specific implementation, the image salience is an important visual feature in the image, and represents the importance degree of human eyes to each region of the image. Further, the electronic device may determine a first weight of the infrared image and a second weight of the visible light image based on the first saliency map and the second saliency map.
In a specific implementation, the step 105 of determining the first weight of the infrared image and the second weight of the visible light image according to the first saliency map and the second saliency map may include the following steps:
51. determining a first weight of the infrared image according to the following formula:
where W_1 is the first weight, V_1 is the first saliency map, and V_2 is the second saliency map;
52. determining a second weight of the visible light image according to the following formula:
W_2 = 1 - W_1
where W_2 is the second weight.
The method comprises the steps of detecting the saliency of an image, determining the region of a detection target in the image, fusing infrared and visible light images in the region in the fusion process, and reducing the loss of scene details caused by pixel-level fusion of the infrared and visible light images. The occupation ratio of the infrared image and the visible light image in fusion can be determined through the saliency map, and further, the quality of the fusion image can be improved.
106. And carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image.
In a specific implementation, the electronic device may perform image fusion on the infrared image and the visible light image according to the first weight and the second weight, that is, perform a weighting operation to obtain a fused image, which specifically includes:
Fusion image = first weight × infrared image + second weight × visible light image
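A sketch of steps 105 and 106, assuming both images are already registered and the saliency maps are normalized to a comparable range. The exact expression for the first weight appears in the patent only as a formula image; the sketch uses the natural normalization W_1 = V_1 / (V_1 + V_2), which is consistent with W_2 = 1 - W_1 but is an assumption rather than a quotation.

```python
import numpy as np

def fuse(ir_image, vis_image, ir_saliency, vis_saliency, eps=1e-12):
    """Saliency-weighted fusion of registered infrared and visible light images.

    ir_saliency / vis_saliency are the per-pixel saliency maps V1 and V2,
    assumed to share the same resolution and value range.
    """
    w1 = ir_saliency / (ir_saliency + vis_saliency + eps)   # assumed: W1 = V1 / (V1 + V2)
    w2 = 1.0 - w1                                           # W2 = 1 - W1

    ir = ir_image.astype(np.float64)
    vis = vis_image.astype(np.float64)
    if vis.ndim == 3:
        w1, w2 = w1[..., None], w2[..., None]               # broadcast over color channels
        if ir.ndim == 2:
            ir = ir[..., None]                              # gray IR replicated across channels

    fused = w1 * ir + w2 * vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```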
The image fusion method described in the embodiment of the application can realize image fusion between single images as well as between video sequences. Furthermore, by utilizing the imaging characteristics of the infrared camera and the visible light camera, the images acquired by the two cameras can be fused, the problems that a single infrared camera lacks clear detail and that a visible light camera is easily affected by fog and dim light can be overcome, and richer scene information can be obtained. In addition, on an embedded device, the visible light and infrared image acquisition and processing system must meet real-time and lightweight requirements: with limited computing resources, a processing speed of 25 frames per second needs to be achieved for a 1080p-resolution visible light image and a 640 × 512 infrared image.
In a specific implementation, after the infrared camera system and the visible camera system are calibrated and registered, the electronic equipment calculates the saliency of the infrared video and the visible video respectively, and fuses the infrared image and the visible image in a weighted mode according to the saliency map, so that the blocked and camouflaged targets in the scene are easier to observe and detect in the images, and the investigation capability of the single visible camera system is improved.
According to the embodiment of the application, the saliency information of the complementary source image is obtained through a saliency extraction algorithm. The video saliency extraction algorithm can extract color, brightness and motion information in an image, and the phase spectrum of the image spectrum after Fourier transformation can represent the position with smaller periodicity or homogeneity of the original image, so that the position of the candidate object is determined. The region of the candidate object in the infrared image is fused with the visible light image by utilizing the saliency information of the image, so that the scene detail distortion caused by the introduction of the non-target region of the infrared image by pixel-level fusion can be avoided.
The motion information in the video saliency is extracted by using an inter-frame difference method, the inter-frame difference is very sensitive to the motion target, the saliency extraction on the inter-frame difference result can enable the motion target to have higher weight in the fusion stage, and compared with a method for detecting the saliency by using a single image, the detection capability of the motion target can be effectively improved.
Of course, before step 101, the infrared camera and the visible light camera may be calibrated. In a specific implementation, as shown in fig. 1B, the infrared and visible light binocular system is calibrated using a heatable checkerboard calibration plate; the left image in fig. 1B is the calibration plate photographed by the infrared camera, and the right image is the same plate photographed by the visible light camera.
The imaging process of a monocular camera can be represented by the pinhole model, and the relationship between the pixel coordinate system of the monocular camera and the world coordinate system is:
s·[x_p, y_p, 1]^T = K·M·[X_w, Y_w, Z_w, 1]^T
where s denotes the scale factor from the world coordinate system to the image coordinate system, (x_p, y_p) denotes the position in the pixel coordinate system, (X_w, Y_w, Z_w) denotes the coordinates of a space point in the world coordinate system, (f_x, f_y) denotes the equivalent focal lengths of the camera in the x and y directions, γ denotes the degree of non-orthogonality between the x and y directions, (x_0, y_0) denotes the translation from the pixel coordinate system to the camera coordinate system, K denotes the camera intrinsic parameter matrix (composed of f_x, f_y, γ, x_0 and y_0), and M denotes the camera extrinsic parameter matrix. The intrinsic and extrinsic parameters of the camera can be obtained by the Zhang Zhengyou calibration method. Assuming the checkerboard plane Z_w = 0, the mapping from space to image can be reduced to:
s·[x_p, y_p, 1]^T = K·[r_1 r_2 t]·[X_w, Y_w, 1]^T
where r_1 and r_2 respectively denote the first two columns of the extrinsic rotation matrix. Let H = K·[r_1 r_2 t], i.e. H is the homography matrix from the world plane to the image plane; the mapping from space to image can then be expressed as:
s·[x_p, y_p, 1]^T = H·[X_w, Y_w, 1]^T
after the least square optimization method is used for solving the internal and external parameter matrixes of two cameras, assuming that the coordinate of a world coordinate system is p for any point in space w The coordinates of the left and right camera coordinate system are p l 、p r The following formula is satisfied:
the world coordinate system can be eliminated:
the transformation relation of the two-phase coordinate system can be obtained:
Because the binocular system is generally used in the field and the target is tens of meters away from it, the translation between the infrared camera and the visible light camera can be neglected, which yields an initial value for image registration.
Using this image registration initial value, fine image registration is then achieved with the scale-invariant feature transform (SIFT) key point detection method, the least squares matching method and the like, so that the fusion result has a better visual effect.
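A sketch of how such a rotation-only registration initial value can be applied with OpenCV, assuming the intrinsic matrices K_ir and K_vis and the relative rotation R have already been obtained from the checkerboard calibration; with the translation neglected, the infrared image is mapped into the visible-light view by the homography K_vis·R·K_ir⁻¹, which SIFT key points and least squares matching can then refine. All names here are illustrative.

```python
import cv2
import numpy as np

def coarse_register_ir_to_visible(ir_image, K_ir, K_vis, R, vis_size):
    """Warp the infrared image into the visible-light camera's image plane.

    K_ir, K_vis : 3x3 intrinsic matrices from the binocular checkerboard calibration
    R           : 3x3 rotation from the infrared to the visible camera coordinate system
    vis_size    : (width, height) of the visible-light image
    """
    # With the inter-camera translation neglected (distant scene), the mapping
    # between the two image planes reduces to the rotation-induced homography.
    H0 = K_vis @ R @ np.linalg.inv(K_ir)
    H0 = H0 / H0[2, 2]                               # normalize the homography
    ir_warped = cv2.warpPerspective(ir_image, H0, vis_size)
    return ir_warped, H0
```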
The specific implementation steps of the image fusion method described in the embodiment of the application are as follows:
1. calibrating an infrared and visible light binocular system: collecting a plurality of images containing checkerboards by using a binocular system, and extracting corner points of the checkerboards in the images;
2. calibrating an infrared camera and a visible light camera respectively by using checkerboard corner information in the infrared and visible light images to obtain internal parameters and external parameters of the infrared camera and the visible light camera;
3. calculating the conversion relation between the infrared camera and the visible light camera coordinate system by utilizing the position information of the angular points in the checkerboard image shot at the same moment;
4. obtaining a conversion relation of an image coordinate system according to the internal and external parameters of the infrared camera and the visible light camera and the conversion relation between the infrared camera coordinate system and the visible light camera coordinate system, and taking the conversion relation as an initial parameter of binocular system registration;
5. Performing fine registration on the initial registration parameters by using a least square matching method;
6. calculating the color, brightness and motion components RG, BY, I and M of the visible light color image;
7. calculating the significance of the visible light image;
8. calculating the histogram distribution of the infrared image, and obtaining a saliency map of the infrared image based on the histogram distribution;
9. for the obtained infrared and visible light saliency maps V_1 and V_2, calculating the weight W_1 of the infrared image in the final fusion;
the weight of the visible light image is W_2 = 1 - W_1; based on W_1 and W_2, image fusion is then carried out on the infrared image and the visible light image, as sketched below.
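Putting steps 5 to 9 together, the following is a sketch of the per-frame processing loop. It reuses the helper functions sketched earlier in this description (coarse_register_ir_to_visible, pqft_saliency, infrared_saliency and fuse); those names, like the RGB channel order assumed for the visible frame, are illustrative rather than part of the patent.

```python
def fuse_frame(ir_frame, vis_frame, K_ir, K_vis, R, prev_intensity=None):
    """One iteration of the registration + saliency + fusion pipeline (steps 5-9)."""
    h, w = vis_frame.shape[:2]

    # Steps 4-5: coarse registration from the calibration result
    # (SIFT / least squares fine registration is omitted in this sketch).
    ir_reg, _ = coarse_register_ir_to_visible(ir_frame, K_ir, K_vis, R, (w, h))

    # Steps 6-7: visible-light saliency from color, intensity and motion components.
    vis_sal, intensity = pqft_saliency(vis_frame.astype(float) / 255.0, prev_intensity)
    vis_sal = vis_sal / (vis_sal.max() + 1e-12)      # bring it to the same [0, 1] range

    # Step 8: infrared saliency from the histogram distribution.
    ir_sal = infrared_saliency(ir_reg)

    # Step 9: saliency-weighted fusion of the two registered images.
    fused = fuse(ir_reg, vis_frame, ir_sal, vis_sal)
    return fused, intensity
```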
According to the embodiment of the application, on one hand, the region of the investigation target in the image is determined by performing saliency detection on the image, and the infrared and visible light images in the region are fused in the fusion process, so that the loss of scene details caused by pixel-level fusion of the infrared and visible light images is reduced. In addition, the fusion method based on video saliency detection does not need to decompose images, reduces the use and calculation time of an operation memory, and can achieve better real-time performance on an embedded platform; on the other hand, on an embedded platform with limited computing resources, the saliency of the infrared video is computed, and the salient targets are fused without reducing scene details. The method based on video saliency fusion can provide candidate areas for the subsequent target detection algorithm, improves the calculation speed of the subsequent target detection system, and has the advantages of light weight, good instantaneity and strong expandability.
Optionally, before the step 101 of acquiring the infrared image by the infrared camera, the method may further include the following steps:
a1, performing binocular calibration on the infrared camera and the visible light camera by using a heatable checkerboard to obtain a calibration result;
a2, determining a perspective transformation matrix between the infrared camera and the visible light camera according to the calibration result;
the step 101 of acquiring an infrared image by using an infrared camera may be implemented as follows:
acquiring an infrared image through an infrared camera, and performing perspective transformation on the infrared image according to the perspective transformation matrix;
the step 102 of obtaining the visible light image by the visible light camera may be implemented as follows:
and obtaining a visible light image through a visible light camera, and performing perspective transformation on the visible light image according to the perspective transformation matrix.
In a specific implementation, the electronic device may perform binocular calibration on the infrared and visible light cameras by using a heatable checkerboard, the calibration method may be to perform calibration on the two cameras by using a Zhang Zhengyou calibration method to obtain internal and external parameters, then convert external parameter matrixes of the two cameras according to a corresponding relationship to obtain a perspective transformation matrix, obtain an infrared image by using the infrared camera, perform perspective transformation on the infrared image according to the perspective transformation matrix, obtain a visible light image by using the visible light camera, perform perspective transformation on the visible light image according to the perspective transformation matrix, and determine a salient map of the infrared image and the visible light image after perspective transformation on the basis.
It can be seen that, in the image fusion method described in the embodiment of the application, an infrared image is obtained through an infrared camera, a visible light image is obtained through a visible light camera, the infrared camera and the visible light camera correspond to the same shooting range, a first saliency map of the infrared image is determined, a second saliency map of the visible light image is determined, a first weight of the infrared image and a second weight of the visible light image are determined according to the first saliency map and the second saliency map, and the infrared image and the visible light image are subjected to image fusion according to the first weight and the second weight to obtain a fusion image. Furthermore, the region of the detection target in the image can be determined through saliency detection of the images, and the infrared and visible light images are fused within that region, so that the loss of scene details caused by pixel-level fusion of the infrared and visible light images is reduced.
In accordance with the embodiment shown in fig. 1A, please refer to fig. 2, fig. 2 is a schematic flow chart of an image fusion method according to an embodiment of the present application, which is applied to an electronic device, and the image fusion method includes:
201. And (3) performing binocular calibration on the infrared camera and the visible light camera by using the heatable checkerboard to obtain a calibration result.
202. And determining a perspective transformation matrix between the infrared camera and the visible light camera according to the calibration result.
203. And acquiring an infrared image through the infrared camera, and performing perspective transformation on the infrared image according to the perspective transformation matrix.
204. And obtaining a visible light image through the visible light camera, performing perspective transformation on the visible light image according to the perspective transformation matrix, and enabling the infrared camera and the visible light camera to correspond to the same shooting range.
205. A first saliency map of the infrared image is determined.
206. A second saliency map of the visible light image is determined.
207. And determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map.
208. And carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image.
The specific description of the steps 201 to 208 may refer to the corresponding steps of the image fusion method described in fig. 1A, and will not be repeated herein.
It can be seen that, in the image fusion method described in the embodiment of the present application, the heatable checkerboard is used to perform binocular calibration on the infrared camera and the visible light camera to obtain a calibration result, a perspective transformation matrix between the infrared camera and the visible light camera is determined according to the calibration result, an infrared image is obtained through the infrared camera and subjected to perspective transformation according to the perspective transformation matrix, a visible light image is obtained through the visible light camera and subjected to perspective transformation according to the perspective transformation matrix, and the infrared camera and the visible light camera correspond to the same shooting range. A first saliency map of the infrared image and a second saliency map of the visible light image are determined, a first weight of the infrared image and a second weight of the visible light image are determined according to the first saliency map and the second saliency map, and image fusion is performed on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image. Furthermore, the region of the detection target in the image can be determined by performing saliency detection on the images, and the infrared and visible light images are fused within that region during the fusion process, so that the loss of scene details caused by pixel-level fusion of the infrared and visible light images is reduced.
In accordance with the above embodiment, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in the drawing, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in the embodiment of the present application, the programs include instructions for executing the following steps:
acquiring an infrared image through an infrared camera;
the method comprises the steps that a visible light image is obtained through a visible light camera, and the infrared camera and the visible light camera correspond to the same shooting range;
determining a first saliency map of the infrared image;
determining a second saliency map of the visible light image;
determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
and carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image.
It can be seen that, in the electronic device described in the embodiment of the present application, an infrared image is obtained through an infrared camera, a visible light image is obtained through a visible light camera, the infrared camera and the visible light camera correspond to the same shooting range, a first saliency map of the infrared image is determined, a second saliency map of the visible light image is determined, a first weight of the infrared image and a second weight of the visible light image are determined according to the first saliency map and the second saliency map, and image fusion is performed on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image. Furthermore, the region of the investigation target in the image can be determined by performing saliency detection on the images, so that the loss of scene details caused by pixel-level fusion of the infrared and visible light images in that region is reduced.
Optionally, in said determining the first saliency map of the infrared image, the program comprises instructions for:
acquiring a histogram of the infrared image;
a first saliency map of the infrared image is determined from the histogram.
Optionally, in said determining the second saliency map of the visible light image, the program comprises instructions for:
acquiring color channel parameters of the visible light image;
determining a red-green color component, a blue-yellow color component, a brightness component and a motion component of the visible light image according to the color channel parameters;
determining a first reference expression of the visible light image according to the red-green color component, the blue-yellow color component, the luminance component, and the motion component;
simplifying the reference expression to obtain a simplified expression;
performing a quaternion Fourier transform on the simplified expression to obtain a frequency domain expression;
extracting target phase information according to the frequency domain expression, and performing inverse Fourier transform on the target phase information to obtain a second reference expression;
and filtering the second reference expression to obtain the second saliency map.
Optionally, in the determining the first weight of the infrared image and the second weight of the visible light image according to the first saliency map and the second saliency map, the program includes instructions for:
determining a first weight of the infrared image according to the following formula:
where W_1 is the first weight, V_1 is the first saliency map, and V_2 is the second saliency map;
determining a second weight of the visible light image according to the following formula:
W_2 = 1 - W_1
where W_2 is the second weight.
Optionally, before the acquiring the infrared image by the infrared camera, the program further includes instructions for:
performing binocular calibration on the infrared camera and the visible light camera by using a heatable checkerboard to obtain a calibration result;
determining a perspective transformation matrix between the infrared camera and the visible light camera according to the calibration result;
wherein, acquire infrared image through infrared camera, include:
acquiring an infrared image through an infrared camera, and performing perspective transformation on the infrared image according to the perspective transformation matrix;
the obtaining of the visible light image by the visible light camera comprises:
And obtaining a visible light image through a visible light camera, and performing perspective transformation on the visible light image according to the perspective transformation matrix.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It is to be understood that, in order to achieve the above-described functions, they comprise corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional units according to the method example, for example, each functional unit can be divided corresponding to each function, or two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 4 is a block diagram showing functional units of an image fusion apparatus 400 according to an embodiment of the present application, the apparatus 400 includes: an acquisition unit 401, a determination unit 402, and an image fusion unit 403, wherein,
the acquiring unit 401 is configured to acquire an infrared image through an infrared camera; the method comprises the steps that a visible light camera is used for obtaining a visible light image, and the infrared camera and the visible light camera correspond to the same shooting range;
the determining unit 402 is configured to determine a first saliency map of the infrared image; determining a second saliency map of the visible light image; determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
the image fusion unit 403 is configured to perform image fusion on the infrared image and the visible light image according to the first weight and the second weight, so as to obtain a fused image.
It can be seen that, in the image fusion device described in the embodiment of the application, an infrared image is obtained through an infrared camera, a visible light image is obtained through a visible light camera, the infrared camera and the visible light camera correspond to the same shooting range, a first saliency map of the infrared image is determined, a second saliency map of the visible light image is determined, a first weight of the infrared image and a second weight of the visible light image are determined according to the first saliency map and the second saliency map, and image fusion is performed on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image. Furthermore, the region of the detection target in the image can be determined through saliency detection on the images, which reduces the loss of scene details caused by pixel-level fusion of the infrared and visible light images. In addition, the fusion method based on video saliency detection does not need to decompose the images, which reduces memory usage and computation time and achieves good real-time performance on an embedded platform, thereby helping to improve image quality while guaranteeing real-time performance.
Optionally, in the determining the first saliency map of the infrared image, the determining unit 402 is specifically configured to:
acquiring a histogram of the infrared image;
a first saliency map of the infrared image is determined from the histogram.
Optionally, in the determining the second saliency map of the visible light image, the determining unit 402 is specifically configured to:
acquiring color channel parameters of the visible light image;
determining a red-green color component, a blue-yellow color component, a brightness component and a motion component of the visible light image according to the color channel parameters;
determining a first reference expression of the visible light image according to the red-green color component, the blue-yellow color component, the luminance component, and the motion component;
simplifying the reference expression to obtain a simplified expression;
performing a quaternion Fourier transform on the simplified expression to obtain a frequency domain expression;
extracting target phase information according to the frequency domain expression, and performing inverse Fourier transform on the target phase information to obtain a second reference expression;
and filtering the second reference expression to obtain the second saliency map.
Optionally, in the determining the first weight of the infrared image and the second weight of the visible light image according to the first saliency map and the second saliency map, the determining unit 402 is specifically configured to:
determining a first weight of the infrared image according to the following formula:
where W_1 is the first weight, V_1 is the first saliency map, and V_2 is the second saliency map;
determining a second weight of the visible light image according to the following formula:
W_2 = 1 - W_1
where W_2 is the second weight.
Optionally, before the acquiring the infrared image by the infrared camera, the apparatus 400 is further specifically configured to:
performing binocular calibration on the infrared camera and the visible light camera by using a heatable checkerboard to obtain a calibration result;
determining a perspective transformation matrix between the infrared camera and the visible light camera according to the calibration result;
wherein, acquire infrared image through infrared camera, include:
acquiring an infrared image through an infrared camera, and performing perspective transformation on the infrared image according to the perspective transformation matrix;
the obtaining of the visible light image by the visible light camera comprises:
And obtaining a visible light image through a visible light camera, and performing perspective transformation on the visible light image according to the perspective transformation matrix.
It may be understood that the functions of each program module of the image fusion apparatus of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the above method embodiments, where the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is only a division of logical functions, and there may be other ways of dividing them in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical or take other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described above in detail, and specific examples are used herein to explain the principles and implementations of the present application; the above examples are provided only to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the ideas of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of image fusion, the method comprising:
acquiring an infrared image through an infrared camera;
acquiring a visible light image through a visible light camera, wherein the infrared camera and the visible light camera correspond to the same shooting range;
determining a first saliency map of the infrared image;
determining a second saliency map of the visible light image;
determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
performing image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image;
wherein the acquiring of an infrared image through an infrared camera comprises:
acquiring a target environmental temperature;
determining a first shooting parameter corresponding to the target environmental temperature according to a mapping relation between a preset environmental temperature and shooting parameters;
shooting according to the first shooting parameters to obtain the infrared image;
wherein the obtaining of the target environmental temperature includes:
acquiring a preview image through the infrared camera;
determining an average gray value of the preview image;
determining a reference temperature corresponding to the average gray value according to a preset mapping relation between temperature and gray value;
dividing the preview image into a plurality of areas, and determining the gray value of each area in the plurality of areas to obtain a plurality of gray values;
performing mean square error operation according to the gray values to obtain a target mean square error;
determining a target optimization factor corresponding to the target mean square error according to a mapping relation between a preset mean square error and the optimization factor;
optimizing the reference temperature according to the target optimization factor to obtain the target environment temperature, wherein the target environment temperature is specifically: target environment temperature = (1 + target optimization factor) × reference temperature.
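Purely as an illustrative reading of this claim, the Python sketch below walks through the temperature estimation. The preset mappings are represented by hypothetical lookup callables temp_lut and factor_lut, the 4x4 region grid is an assumption, and taking the mean square error of the region gray values about the global average gray value is likewise an assumption.

import numpy as np

def ambient_temperature(preview_gray, temp_lut, factor_lut, grid=(4, 4)):
    # preview_gray: infrared preview frame; temp_lut / factor_lut: hypothetical callables
    # standing in for the preset gray-value->temperature and MSE->factor mappings.
    mean_gray = float(preview_gray.mean())
    ref_temp = temp_lut(mean_gray)                       # reference temperature from average gray value
    rows, cols = grid
    h, w = preview_gray.shape
    region_means = [preview_gray[i * h // rows:(i + 1) * h // rows,
                                 j * w // cols:(j + 1) * w // cols].mean()
                    for i in range(rows) for j in range(cols)]
    mse = float(np.mean((np.asarray(region_means) - mean_gray) ** 2))  # target mean square error
    k = factor_lut(mse)                                  # target optimization factor
    return (1.0 + k) * ref_temp                          # target environment temperature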
2. The method of claim 1, wherein the determining the first saliency map of the infrared image comprises:
acquiring a histogram of the infrared image;
determining a first saliency map of the infrared image from the histogram.
3. The method of claim 1, wherein the determining the second saliency map of the visible light image comprises:
acquiring color channel parameters of the visible light image;
determining a red-green color component, a blue-yellow color component, a brightness component and a motion component of the visible light image according to the color channel parameters;
determining a first reference expression of the visible light image according to the red-green color component, the blue-yellow color component, the brightness component, and the motion component;
simplifying the first reference expression to obtain a simplified expression;
performing quaternary Fourier transform on the simplified expression to obtain a frequency domain expression;
extracting target phase information according to the frequency domain expression, and performing inverse Fourier transform on the target phase information to obtain a second reference expression;
and filtering the second reference expression to obtain the second saliency map.
4. A method according to any one of claims 1-3, wherein said determining a first weight for the infrared image and a second weight for the visible image from the first saliency map and the second saliency map comprises:
determining a first weight of the infrared image according to the following formula:
wherein W1 is the first weight, V1 is the first saliency map, and V2 is the second saliency map;
determining a second weight of the visible light image according to the following formula:
W2 = 1 - W1
wherein W2 is the second weight.
5. A method according to any one of claims 1-3, wherein prior to said acquiring an infrared image by an infrared camera, the method further comprises:
performing dual-camera calibration of the infrared camera and the visible light camera by using a heatable checkerboard to obtain a calibration result;
determining a perspective transformation matrix between the infrared camera and the visible light camera according to the calibration result;
wherein the acquiring of an infrared image through an infrared camera comprises:
acquiring an infrared image through an infrared camera, and performing perspective transformation on the infrared image according to the perspective transformation matrix;
the obtaining of the visible light image by the visible light camera comprises:
obtaining a visible light image through a visible light camera, and performing perspective transformation on the visible light image according to the perspective transformation matrix.
6. An image fusion apparatus, the apparatus comprising: an acquisition unit, a determination unit and an image fusion unit, wherein,
the acquisition unit is used for acquiring an infrared image through the infrared camera and for obtaining a visible light image through a visible light camera, wherein the infrared camera and the visible light camera correspond to the same shooting range;
the determining unit is used for determining a first saliency map of the infrared image; determining a second saliency map of the visible light image; determining a first weight of the infrared image and a second weight of the visible light image according to the first saliency map and the second saliency map;
The image fusion unit is used for carrying out image fusion on the infrared image and the visible light image according to the first weight and the second weight to obtain a fusion image;
wherein the acquiring of an infrared image through the infrared camera comprises:
acquiring a target environmental temperature;
determining a first shooting parameter corresponding to the target environmental temperature according to a mapping relation between a preset environmental temperature and shooting parameters;
shooting according to the first shooting parameters to obtain the infrared image;
wherein the obtaining of the target environmental temperature includes:
acquiring a preview image through the infrared camera;
determining an average gray value of the preview image;
determining a reference temperature corresponding to the average gray value according to a preset mapping relation between temperature and gray value;
dividing the preview image into a plurality of areas, and determining the gray value of each area in the plurality of areas to obtain a plurality of gray values;
performing mean square error operation according to the gray values to obtain a target mean square error;
determining a target optimization factor corresponding to the target mean square error according to a mapping relation between a preset mean square error and the optimization factor;
optimizing the reference temperature according to the target optimization factor to obtain the target environment temperature, wherein the target environment temperature is specifically: target environment temperature = (1 + target optimization factor) × reference temperature.
7. The apparatus according to claim 6, wherein in said determining the first saliency map of the infrared image, the determining unit is specifically configured to:
acquiring a histogram of the infrared image;
determining a first saliency map of the infrared image from the histogram.
8. The apparatus according to claim 6, wherein in said determining the second saliency map of the visible light image, the determining unit is specifically configured to:
acquiring color channel parameters of the visible light image;
determining a red-green color component, a blue-yellow color component, a brightness component and a motion component of the visible light image according to the color channel parameters;
determining a first reference expression of the visible light image according to the red-green color component, the blue-yellow color component, the brightness component, and the motion component;
simplifying the first reference expression to obtain a simplified expression;
performing quaternary Fourier transform on the simplified expression to obtain a frequency domain expression;
Extracting target phase information according to the frequency domain expression, and performing inverse Fourier transform on the target phase information to obtain a second reference expression;
and filtering the second reference expression to obtain the second saliency map.
9. An electronic device, comprising a processor and a memory, wherein the memory is used for storing one or more programs configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method of any one of claims 1-5.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202110548512.1A 2021-05-19 2021-05-19 Image fusion method, electronic equipment and related products Active CN113159229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110548512.1A CN113159229B (en) 2021-05-19 2021-05-19 Image fusion method, electronic equipment and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110548512.1A CN113159229B (en) 2021-05-19 2021-05-19 Image fusion method, electronic equipment and related products

Publications (2)

Publication Number Publication Date
CN113159229A CN113159229A (en) 2021-07-23
CN113159229B true CN113159229B (en) 2023-11-07

Family

ID=76876727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110548512.1A Active CN113159229B (en) 2021-05-19 2021-05-19 Image fusion method, electronic equipment and related products

Country Status (1)

Country Link
CN (1) CN113159229B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113990044B (en) * 2021-11-19 2023-07-25 福建钰融科技有限公司 Waste liquid transportation safety early warning method, safety early warning device and related products
CN115311180A (en) * 2022-07-04 2022-11-08 优利德科技(中国)股份有限公司 Image fusion method and device based on edge features, user terminal and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700381A (en) * 2015-03-13 2015-06-10 中国电子科技集团公司第二十八研究所 Infrared and visible light image fusion method based on salient objects
CN108769550A (en) * 2018-05-16 2018-11-06 中国人民解放军军事科学院军事医学研究院 A kind of notable analysis system of image based on DSP and method
CN109584193A (en) * 2018-10-24 2019-04-05 航天时代飞鸿技术有限公司 A kind of unmanned plane based on target preextraction is infrared and visible light image fusion method
CN110490914A (en) * 2019-07-29 2019-11-22 广东工业大学 It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method
CN111062905A (en) * 2019-12-17 2020-04-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement
CN111080724A (en) * 2019-12-17 2020-04-28 大连理工大学 Infrared and visible light fusion method
WO2020107716A1 (en) * 2018-11-30 2020-06-04 长沙理工大学 Target image segmentation method and apparatus, and device
CN111652243A (en) * 2020-04-26 2020-09-11 中国人民解放***箭军工程大学 Infrared and visible light image fusion method based on significance fusion
CN111881853A (en) * 2020-07-31 2020-11-03 中北大学 Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN112115979A (en) * 2020-08-24 2020-12-22 深圳大学 Fusion method and device of infrared image and visible image
CN112215787A (en) * 2020-04-30 2021-01-12 温州大学智能锁具研究院 Infrared and visible light image fusion method based on significance analysis and adaptive filter

Also Published As

Publication number Publication date
CN113159229A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109376667B (en) Target detection method and device and electronic equipment
WO2020113408A1 (en) Image processing method and device, unmanned aerial vehicle, system, and storage medium
CN113992861B (en) Image processing method and image processing device
CN109474780B (en) Method and device for image processing
CN110660088A (en) Image processing method and device
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113159229B (en) Image fusion method, electronic equipment and related products
KR101664123B1 (en) Apparatus and method of creating high dynamic range image empty ghost image by using filtering
WO2023134103A1 (en) Image fusion method, device, and storage medium
CN113902657A (en) Image splicing method and device and electronic equipment
CN107704798A (en) Image weakening method, device, computer-readable recording medium and computer equipment
CN109712177A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN104412298B (en) Method and apparatus for changing image
CN114820405A (en) Image fusion method, device, equipment and computer readable storage medium
Choi et al. A method for fast multi-exposure image fusion
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN113628134B (en) Image noise reduction method and device, electronic equipment and storage medium
CN110392211A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN106469435B (en) Image processing method, device and equipment
CN111247558A (en) Image processing method, device, unmanned aerial vehicle, system and storage medium
CN116506732B (en) Image snapshot anti-shake method, device and system and computer equipment
Shah et al. Multimodal image/video fusion rule using generalized pixel significance based on statistical properties of the neighborhood
CN115631140A (en) Industrial robot image processing method based on image fusion
CN108269278B (en) Scene modeling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant