CN115457089A - Registration fusion method and device for visible light image and infrared image - Google Patents


Info

Publication number
CN115457089A
CN115457089A
Authority
CN
China
Prior art keywords
visible light
camera
image
infrared
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110645248.3A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
詹建明
孙福恭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kuang Chi Space Technology Co Ltd
Original Assignee
Shenzhen Kuang Chi Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kuang Chi Space Technology Co Ltd filed Critical Shenzhen Kuang Chi Space Technology Co Ltd
Priority to CN202110645248.3A priority Critical patent/CN115457089A/en
Priority to PCT/CN2022/095838 priority patent/WO2022257794A1/en
Publication of CN115457089A publication Critical patent/CN115457089A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a registration and fusion method and device for a visible light image and an infrared image, applied to equipment provided with a visible light camera and an infrared camera, wherein the optical axis of the infrared camera is perpendicular to the front view plane of the equipment. The method comprises the following steps: establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera, according to the spatial relative position of the visible light camera and the infrared camera, the conversion parameters, and the horizontal distance between the target object and the visible light camera; and registering and fusing the visible light image and the infrared image of the target object according to the registration model. With the established registration model, the data of the visible light image and the data of the infrared image in equipment with the double cameras can be fused into one image.

Description

Registration and fusion method and device for visible light image and infrared image
Technical Field
The invention relates to the field of image processing, in particular to a registration fusion method and device for a visible light image and an infrared image.
Background
At present there exist many infrared temperature-measurement and face detection and recognition devices. These are fixedly installed devices in which an infrared camera module and a visible light camera module are typically integrated together, with the relative position and relative angle of the two camera modules fixed. A dual-optical (infrared plus visible or white light) module design is commonly adopted: the central optical axes of the two camera modules are parallel and fixed, and the positions of the two module centers in the Z-axis direction are the same. In addition, either the centers of the two modules are at the same fixed height in the longitudinal (Y-axis) direction while being separated by a very small fixed distance in the transverse (X-axis) direction, or the centers have a fixed height offset in the longitudinal (Y-axis) direction while sharing the same fixed position in the transverse (X-axis) direction. Fixed infrared temperature-measurement and face detection and recognition equipment therefore has the following three characteristics: the central optical axes of the two camera modules are parallel; the zero coordinate positions of the two modules on the Z axis are the same; and the zero coordinate positions on the Y axis are the same, or the zero coordinate positions on the X axis are the same.
These characteristics provide advantages for fusing the data of the two different camera pictures. In product design and many practical solutions, however, the infrared camera module and the visible light camera module are not designed at the same position (the zero coordinate positions on all three of the X, Y and Z axes differ). In a wearable application scene, for example, the angle of the visible light camera or the infrared camera often needs to be adjusted for wearers of different heights and for different application scenes, and the two camera modules are usually not realized as a binocular module; as a result, the central optical axes of the two camera modules are not parallel (an included angle exists), and the zero coordinate positions of the spatial positions of the two camera modules differ on all three of the X, Y and Z axes. This differs markedly from the three characteristics of the fixed infrared temperature-measurement and face detection and recognition equipment above, and these differences bring great challenges to fusing the data of the two different camera pictures of wearable equipment.
Although existing equipment is provided with both an infrared camera and a visible light camera, registration and fusion of the two images is not realized: data analysis can be performed on one camera picture, but the equipment cannot automatically fuse the data of the visible light picture and the data of the infrared picture into one image.
Disclosure of Invention
The invention provides a registration and fusion method and a registration and fusion device for a visible light image and an infrared image, which at least solve the problem that equipment provided with double cameras in the related technology cannot automatically fuse the data of a visible light picture and the data of an infrared picture into one picture.
According to an aspect of the present invention, a registration and fusion method for a visible light image and an infrared image is provided, which is applied to a device configured with a visible light camera and an infrared camera, wherein an optical axis of the infrared camera is perpendicular to a front view plane of the device, and the method includes: establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera according to the spatial relative position of the visible light camera and the infrared camera, the conversion parameter and the horizontal distance between the target object and the visible light camera; and registering and fusing the visible light image and the infrared image of the target object according to the registration model.
In an exemplary embodiment, the registration model is:
[Registration model formula, rendered as an image in the original document]

wherein A, B, C and D are respectively the first, second, third and fourth conversion parameters; m, n and d are respectively the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y and Z axes; φ and γ are respectively the transverse included angle and the longitudinal intersection angle of the optical axes of the visible light camera and the infrared camera; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) is the visible image pixel coordinate; and (x_IR, y_IR) is the infrared image pixel coordinate.
In an exemplary embodiment, before the visible light image and the infrared image of the target object are registered and fused according to the registration model, a transverse included angle and a longitudinal intersection angle between an optical axis of a visible light camera and an optical axis of an infrared camera in the registration model are calibrated.
In an exemplary embodiment, calibrating the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model includes: selecting a first reference object with a first set distance from the horizontal distance of the visible light camera; simultaneously acquiring images of the first reference object through a visible light camera and an infrared camera, and acquiring coordinates of the same position of the first reference object in the visible light image and the infrared image; and according to the first set distance and the coordinates, calibrating a transverse included angle and a longitudinal intersection angle of an optical axis of the visible light camera and an optical axis of the infrared camera in the registration model.
In an exemplary embodiment, the transverse included angle φ and the longitudinal intersection angle γ of the optical axes of the two cameras in the registration model can be calibrated by the following formulas:

[Calibration formulas for φ and γ, rendered as images in the original document]

wherein L_C is the first set distance, (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image, and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
In an exemplary embodiment, calibrating the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model includes: selecting a second reference object with a horizontal distance from the visible light camera as a second set distance; adjusting the optical axis of the visible light camera so that the same position of the second reference object is located at a specific position of the visible light image and the infrared image; and according to the second set distance and the coordinate value of the specific position, marking a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
In an exemplary embodiment, the transverse included angle φ and the longitudinal intersection angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are calibrated by the following formula:

[Calibration formula, rendered as an image in the original document]

wherein L_C is the second set distance, the coordinates of the same position of the second reference object at the specific position in the visible light image are (0, 0), and the coordinates of the same position of the second reference object at the specific position in the infrared image are (0, 0);

or, alternatively,

[Calibration formula, rendered as an image in the original document]

wherein L_C is the second set distance, the coordinates of the same position of the second reference object at the specific position in the visible light image are (200, 100), and the coordinates of the same position of the second reference object at the specific position in the infrared image are (0, 0).
In an exemplary embodiment, after calibrating a transverse included angle and a longitudinal intersection angle between an optical axis of the visible light camera and an optical axis of the infrared camera in the registration model, the method further includes the following steps: substituting a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model, and establishing a height and width mapping model of the visible light image and the infrared image; and establishing a mapping relation between the height of the designated area of the target object in the multiple groups of visible light images and the horizontal distance from the target object to the visible light camera based on the multiple different horizontal distances from the target object to the visible light camera.
In an exemplary embodiment, the registration fusing the visible light image and the infrared image of the target object according to the registration model includes: acquiring a central position coordinate value of a designated area of the target object in a visible light image and the height and width of the designated area of the target object in the visible light image, and finding a horizontal distance value corresponding to the target object and the visible light camera in the height and width mapping model according to the height and width of the designated area of the target object in the visible light image; inputting the corresponding horizontal distance value and the central position coordinate value of the designated area of the target object into the registration model, and calculating to obtain the central position coordinate value of the designated area of the target object in the infrared image; inputting the height and the width of the designated area of the target object in the visible light image into the height and width mapping model, and calculating to obtain the height and the width of the designated area of the target object in the infrared image; and determining the designated area of the target object in the infrared image according to the center position coordinates of the designated area of the target object in the infrared image and the height and width of the designated area of the target object in the infrared image.
In an exemplary embodiment, after determining the designated area of the target object in the infrared image, the method further includes: acquiring a highest temperature value in a designated area of the target object in the infrared image; and marking the temperature value at a specified position of a specified area of the target object in the visible light image.
According to another aspect of the present invention, there is provided a device for registering and fusing a visible light image and an infrared image, which is located on a device equipped with a visible light camera and an infrared camera, wherein an optical axis of the infrared camera is perpendicular to a front view plane of the device, the device comprising: the registration model establishing module is used for establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera according to the spatial relative position of the visible light camera and the infrared camera, the conversion parameter and the horizontal distance between the target object and the visible light camera; and the image fusion module is used for registering and fusing the visible light image and the infrared image of the target object according to the registration model.
According to a further aspect of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps in the above-described method embodiments.
According to yet another aspect of the present invention, there is also provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the above-described method embodiments when executing the computer program.
In the embodiment of the present invention, a registration model of the coordinate positions of the visible light image of a target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera is established according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance between the target object and the visible light camera, and the visible light image and the infrared image of the target object are registered and fused according to this registration model, which solves the problem that equipment provided with a visible light camera and an infrared camera cannot fuse the data of the visible light image and the data of the infrared image into one picture.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method for registration fusion of visible light images and infrared images according to an embodiment of the invention;
FIG. 2 is a block diagram of a device for registering and fusing visible light images and infrared images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the horizontal position of a visible light camera and an infrared camera according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of the horizontal position of a visible light camera and an infrared camera according to another embodiment of the invention;
FIG. 5 is a flowchart of a method for registration and fusion of a visible light image and an infrared image according to a first embodiment of the invention;
fig. 6 is a flowchart of a registration fusion method of a visible light image and an infrared image according to a second embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
This embodiment provides a registration and fusion method for a visible light image and an infrared image, applied to equipment equipped with a visible light camera and an infrared camera. For example, the equipment is a helmet equipped with both cameras: one camera is installed directly in front of the helmet and the other is externally mounted on one side, so that when a user wears the helmet, both cameras shoot forward and obtain a visual angle close to the wearer's. However, because the optical axes of the two cameras cannot be guaranteed to be parallel, and the two cameras are separated by a certain distance in space, the images of the two cameras are difficult to bring into coincidence, and the temperature of a position cannot be calibrated directly on the visible light image. For this reason, the images need to be fused.
In one realizable example, the optical axis of the infrared camera is perpendicular to the front view plane of the equipment. Fig. 1 is a flowchart of a registration fusion method of a visible light image and an infrared image according to an embodiment of the present invention; as shown in fig. 1, the flow includes the following steps:
step S102, establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance between the target object and the visible light camera;
and S104, registering and fusing the visible light image and the infrared image of the target object according to the registration model.
In step S102 of this embodiment, a registration model may be established, for example, as follows:
[Registration model formula, rendered as an image in the original document]

wherein A, B, C and D are respectively the first, second, third and fourth conversion parameters; m, n and d are respectively the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y and Z axes; φ and γ are respectively the transverse included angle and the longitudinal intersection angle of the optical axes of the visible light camera and the infrared camera; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) is the visible image pixel coordinate; and (x_IR, y_IR) is the infrared image pixel coordinate.
In this embodiment, the registration model can be used to quickly register and fuse the coordinate positions of objects in the two camera pictures. From the registration model it can be seen that the non-zero terms of the registration model matrix are not fixed values but functions of the horizontal distance L between the object and the visible light camera, and they change as L changes. The non-zero terms therefore cannot be determined from a single picture acquired in a scene where a specific object stands at one particular horizontal distance from the visible light camera.
Before the registration model of this embodiment is applied, the unknown parameters in the registration model need to be calibrated, namely the transverse included angle φ and the longitudinal intersection angle γ between the two optical axes in the registration model of the visible light image and the infrared image. For example, in this embodiment the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera can be calibrated in the following two ways:
the first calibration method comprises the following steps:
1) Selecting a first reference object with a horizontal distance from the visible light camera as a first set distance;
2) Simultaneously acquiring images of the first reference object through a visible light camera and an infrared camera, and obtaining coordinates of the same position of the first reference object in the visible light image and the infrared image respectively;
3) And according to the first set distance and the coordinates of the same position of the first reference object in the visible light image and the infrared image, respectively, calibrating a transverse included angle and a longitudinal intersection angle of the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
For example, the transverse included angle φ and the longitudinal intersection angle γ of the optical axes of the two cameras in the registration model can be calibrated by the following formulas:

[Calibration formulas for φ and γ, rendered as images in the original document]

wherein L_C is the first set distance, (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image, and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
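Under the stated setup, the first calibration procedure can be sketched in Python. The patent's actual formulas survive only as images in the source, so the pinhole-style disparity-to-angle conversion and the pixel-to-metre scale factors below are illustrative assumptions, not the patented formulas.

```python
import math

def calibrate_axis_angles(l_c, vr_xy, ir_xy, m_per_px_x, m_per_px_y):
    """Estimate the transverse included angle (phi) and the longitudinal
    intersection angle (gamma) between the two optical axes from one
    reference object at the known horizontal distance l_c.

    Hypothetical model: the pixel disparity of the same physical point in
    the two images, converted to metres at distance l_c, is attributed
    entirely to the angle between the two optical axes."""
    dx = (vr_xy[0] - ir_xy[0]) * m_per_px_x  # transverse offset in metres
    dy = (vr_xy[1] - ir_xy[1]) * m_per_px_y  # longitudinal offset in metres
    phi = math.atan2(dx, l_c)
    gamma = math.atan2(dy, l_c)
    return phi, gamma
```

For instance, a reference point seen at (100, 50) in the visible image and (0, 0) in the infrared image, at l_c = 2 m with an assumed 1 mm per pixel, would give phi = atan(0.05), roughly 2.9 degrees.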
The second calibration method comprises the following steps:
1) Selecting a second reference object with a horizontal distance from the visible light camera as a second set distance;
2) Adjusting the optical axis of the visible light camera so that the same position of the second reference object is located at a specific position of the visible light image and the infrared image;
3) And according to the second set distance and the coordinate value of the specific position, marking a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
For example, assume that the coordinate values x_VR, y_VR, x_IR and y_IR of the specific position are all 0. The transverse included angle φ of the two optical axes and the longitudinal intersection angle γ of the two optical axes can then be calculated from the registration model of the visible light image and the infrared image:

[Formulas, rendered as images in the original document]

For another example, assume that the coordinate values x_IR and y_IR of the specific position are both 0, x_VR is 200, and y_VR is 100. The transverse included angle φ of the two optical axes and the longitudinal intersection angle γ of the two optical axes can again be calculated from the registration model of the visible light image and the infrared image:

[Formulas, rendered as images in the original document]
in this embodiment, after calibrating the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model, the method may further include the following steps: substituting a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model, and establishing a height and width mapping model of the visible light image and the infrared image; and establishing a mapping relation between the height of the designated area of the target object in the multiple groups of visible light images and the horizontal distance from the target object to the visible light camera based on the multiple different horizontal distances from the target object to the visible light camera.
For example, if the registration model of the present embodiment is applied to a thermometry scene of a human body, the following height and width mapping models of the visible light image and the infrared image are established:
[Height and width mapping model formula, rendered as an image in the original document]

wherein A, C, φ, γ, n and L are known, λ is a configuration parameter, and its range is 0.1 < λ ≤ 1. In this embodiment, by adjusting λ, interference caused by the background temperature of regions other than the human face can be avoided.
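The shrink effect of λ can be illustrated with a minimal sketch. The fixed ratio that the mapping model derives from A, C, φ, γ, n and L is collapsed here into a single `scale` argument, because the actual formula appears only as an image in the source.

```python
def face_box_in_infrared(h_vr, w_vr, scale, lam=0.8):
    """Map the face-region height/width from the visible image into the
    infrared image.

    scale: stand-in for the fixed ratio derived from A, C, phi, gamma,
    n and L (assumed, not the patented formula); lam: configuration
    parameter with 0.1 < lam <= 1 that shrinks the box so background
    pixels around the face do not disturb the temperature reading."""
    if not 0.1 < lam <= 1:
        raise ValueError("lam must satisfy 0.1 < lam <= 1")
    return lam * scale * h_vr, lam * scale * w_vr
```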
In this embodiment, step S104 may include: acquiring a central position coordinate value of a designated area of the target object in a visible light image and the height and width of the designated area of the target object in the visible light image, and finding a horizontal distance value corresponding to the target object and the visible light camera in the height and width mapping model according to the height and width of the designated area of the target object in the visible light image; inputting the corresponding horizontal distance value and the central position coordinate value of the designated area of the target object into the registration model, and calculating to obtain the central position coordinate value of the designated area of the target object in the infrared image; inputting the height and the width of the designated area of the target object in the visible light image into the height and width mapping model, and calculating to obtain the height and the width of the designated area of the target object in the infrared image; and determining the designated area of the target object in the infrared image according to the center position coordinates of the designated area of the target object in the infrared image and the height and width of the designated area of the target object in the infrared image.
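The lookup-then-map flow of step S104 can be sketched as below. The nearest-neighbour table lookup and the affine form of the registration model are illustrative assumptions; the real model is distance-dependent and given only as an image in the source.

```python
def estimate_distance(face_height_px, height_to_distance):
    """Return the horizontal distance whose calibrated face height is
    closest to the observed one. height_to_distance: hypothetical list
    of (face_height_px, distance_m) pairs built during calibration."""
    return min(height_to_distance, key=lambda hd: abs(hd[0] - face_height_px))[1]

def map_center_to_infrared(cx_vr, cy_vr, dist, model):
    """Apply a (hypothetical) distance-dependent affine registration
    model mapping the face-box centre in the visible image to infrared
    pixel coordinates. model(dist) returns (ax, bx, ay, by)."""
    ax, bx, ay, by = model(dist)
    return ax * cx_vr + bx, ay * cy_vr + by
```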
In this embodiment, after determining the designated area of the target object in the infrared image, the method may further include: acquiring the highest temperature value in the designated area of the target object in the infrared image; and marking the temperature value at the specified position of the specified area of the target object in the visible light image.
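A minimal sketch of that last step, assuming the infrared frame is available as a 2-D grid of per-pixel temperatures (the device's actual radiometric interface is not specified in the source):

```python
def max_temp_in_region(temp_map, box):
    """Highest temperature inside the face box of the infrared image.

    temp_map: 2-D list (rows of per-pixel temperatures, e.g. deg C);
    box: (x, y, w, h) in infrared pixel coordinates. The device would
    then mark this value at the specified position of the face region
    in the visible image (drawing itself omitted here)."""
    x, y, w, h = box
    return max(temp_map[r][c] for r in range(y, y + h) for c in range(x, x + w))
```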
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The embodiment also provides a registration and fusion device for visible light images and infrared images, which is used to implement the above embodiments and preferred implementations; details that have already been described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 2 is a block diagram of a registration and fusion apparatus for visible light images and infrared images, which is located on a device equipped with a visible light camera and an infrared camera, wherein an optical axis of the infrared camera is perpendicular to a front view plane of the device, as shown in fig. 2, and the apparatus includes a registration model building module 10 and an image fusion module 20.
The registration model establishing module 10 is configured to establish a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera, according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance between the target object and the visible light camera.
The image fusion module 20 is configured to register and fuse the visible light image and the infrared image of the target object according to the registration model.
It should be noted that the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in a plurality of processors.
In order to facilitate understanding of the technical solutions provided by the present invention, the following detailed description is made with reference to specific scenario embodiments.
Example 1
The embodiment provides a registration and fusion method of visible light and infrared images. The method is applied to equipment provided with a visible light camera and an infrared camera. Fig. 3 and 4 are schematic horizontal position views of a visible light camera and an infrared camera on the device according to the embodiment of the invention. Fig. 3 shows an apparatus area 1, an infrared camera 2, a visible light camera 3, an apparatus front view plane horizontal line 4, an infrared camera optical axis 5, a visible light camera optical axis 6, and a transverse angle 7 between the two optical axes.
As shown in fig. 3, the optical axis 5 of the infrared camera is perpendicular to the plane of the front view of the device, and the visible light camera 3 is located below the infrared camera 2, and the optical axis 6 thereof is not perpendicular to the plane of the front view of the device but intersects the optical axis 5 of the infrared camera.
In fig. 4, a device area 1, an infrared camera 2, a visible light camera 3, a device front view plane horizontal line 4, an infrared camera optical axis 5, a visible light camera optical axis 6, and a transverse included angle 7 of the two optical axes are shown. As shown in fig. 4, the infrared camera optical axis 5 is perpendicular to the device front view plane, the visible light camera 3 is located below the infrared camera 2, and the visible light camera optical axis 6 is not perpendicular to the device front view plane but intersects the infrared camera optical axis 5.
As shown in fig. 3 and 4, in the present embodiment, the optical axis of the infrared camera is perpendicular to the device front view plane, the optical axis of the visible light camera may not be perpendicular to the device front view plane, and the relative angle of the same object in two different camera pictures is 0 (i.e., the position of the same object in the pictures is not rotated).
The present embodiment will be described in detail below with reference to a scene in which a visible light camera and an infrared camera are used to measure temperature, but the technical solution provided by this embodiment may also be applied to other scenes requiring image fusion. As shown in fig. 5, the registration and fusion of the visible light image and the infrared image provided by this embodiment may include the following steps:
step S501, a registration model of the visible light image and the infrared image is established.
Specifically, in this step, the following registration model of the visible light image and the infrared image can be established according to the relative distances m, n and d of the spatial positions of the visible light camera and the infrared camera along the X, Y and Z axes, the display resolutions, the transverse included angle of the two optical axes (i.e., the angle between the ZOY planes of the two cameras), the longitudinal intersection angle of the two optical axes (i.e., the angle between the ZOX planes of the two cameras), and the horizontal distance L of the target object from the visible light camera:
(registration model formula; reproduced as an image in the original)

wherein A, B, C and D are conversion parameters; m, n and d are the relative distances between the two cameras along the X, Y and Z axes and are known constants; the transverse included angle and the longitudinal intersection angle γ of the two cameras' optical axes also enter the model; and the horizontal distance L between the target object and the visible light camera is the variable quantity. x_VR and y_VR are the pixel coordinate values of the visible light image, and x_IR and y_IR are the pixel coordinate values of the infrared image.
In this embodiment, the conversion parameters A, B, C and D can be calculated from the visible light camera's horizontal and vertical field angles, the infrared camera's horizontal and vertical field angles, and the display resolution parameters of the infrared and visible light cameras. For example:
(conversion parameter formulas; reproduced as an image in the original)
wherein w_VR is the horizontal display resolution of the visible light camera, h_VR the vertical display resolution of the visible light camera, w_IR the horizontal display resolution of the infrared camera, and h_IR the vertical display resolution of the infrared camera; α is the horizontal field angle of the visible light camera, β the vertical field angle of the visible light camera, θ the horizontal field angle of the infrared camera, and φ the vertical field angle of the infrared camera.
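The formulas for A, B, C and D are reproduced only as images in the original, so the following is a guessed reading rather than the patent's actual definition: it treats A and B as ratios of the two sensors' pixels-per-degree (so visible pixel coordinates can be rescaled to infrared pixels), and C and D as the infrared sensor's pixel densities used for angle-offset terms:

```python
def conversion_params(w_vr, h_vr, w_ir, h_ir, alpha, beta, theta, phi):
    """Assumed sketch of the conversion parameters; the patent's actual
    formulas are images and may differ.
    Field angles are in degrees, resolutions in pixels."""
    a = (w_ir / theta) / (w_vr / alpha)  # horizontal scale: visible px -> infrared px
    b = (h_ir / phi) / (h_vr / beta)     # vertical scale: visible px -> infrared px
    c = w_ir / theta                     # infrared horizontal pixels per degree
    d = h_ir / phi                       # infrared vertical pixels per degree
    return a, b, c, d
```

For instance, a 1920×1080 visible camera with a 60°×40° field of view and a 384×288 infrared camera with a 50°×37.5° field of view would give A = 0.24.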
Step S502, the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are calibrated. In this embodiment, the two angles can be calibrated by the following method, without manually adjusting the visible light camera and the infrared camera:
1) Select a target object at a horizontal distance L_C from the visible light camera. For example, L_C can be chosen in the range [0.5 m, 7 m];
2) The visible light camera and the infrared camera simultaneously acquire images of the target object (the target object can be a regular cuboid, a cube, or a part of a human body such as the facial features or worn glasses), and the coordinates (x_VR-C, y_VR-C) of the same position of the object in the visible light image and the coordinates (x_IR-C, y_IR-C) in the infrared image are found;
3) According to the registration model of the visible light image and the infrared image, the transverse included angle and the longitudinal intersection angle γ of the two optical axes can be calculated. (The calibration formulas are reproduced as images in the original.)
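Since the calibration formulas are images in the original, the following sketch substitutes an assumed linear-offset registration model to show the shape of the computation: one point correspondence at a known distance L_C determines both angles. Every formula here is a stand-in for illustration, not the patent's:

```python
import math

def calibrate_angles(corr_vr, corr_ir, L_c, A, B, C, D, m, n):
    """Assumed toy model (the patent's formulas are images): the infrared
    coordinate is the rescaled visible coordinate plus an angular offset,
        x_IR = A*x_VR + C*(psi - degrees(atan(m / L)))
        y_IR = B*y_VR + D*(gamma - degrees(atan(n / L)))
    where psi/gamma are the transverse/longitudinal axis angles in degrees
    and m, n are the camera baselines. One correspondence at the known
    distance L_c then fixes both angles."""
    x_vr, y_vr = corr_vr
    x_ir, y_ir = corr_ir
    psi = (x_ir - A * x_vr) / C + math.degrees(math.atan(m / L_c))
    gamma = (y_ir - B * y_vr) / D + math.degrees(math.atan(n / L_c))
    return psi, gamma
```

The point is only that, once the model is linear in the two angles, a single correspondence suffices and no manual camera adjustment is needed.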
step S503, establishing a height and width mapping model of the visible light image and the infrared image as follows:
(height and width mapping model formula; reproduced as an image in the original)

wherein A, C, the transverse included angle, γ, n and L are known, and λ is a configuration parameter in the range 0.1 < λ ≤ 1. By adjusting the size of λ, the interference caused by the background temperature of areas other than the human face can be completely avoided.
In this embodiment, the established height and width mapping model further allows the size proportions of an object to be fused rapidly. For example, given the distance L and the height H_VR-Face and width W_VR-Face of a face picture centered in the visible light image, H_ir_face and W_ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image, and then, from x_IR, y_IR, H_ir_face and W_ir_face, the temperature information of the corresponding face area in the infrared image can be acquired.
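The mapping model's formula is an image in the original; a plausible sketch, assumed here purely for illustration, rescales the visible face box by the pixel-density ratios (A horizontal, B vertical) and shrinks it by λ to exclude background around the face:

```python
def map_face_size(h_vr_face, w_vr_face, A, B, lam=0.8):
    """Assumed sketch of the height/width mapping model: the infrared face
    box is the visible box rescaled by the conversion parameters and shrunk
    by the configuration parameter lambda (0.1 < lambda <= 1).
    The patent's exact formula is an image and may differ."""
    if not (0.1 < lam <= 1):
        raise ValueError("lambda must satisfy 0.1 < lambda <= 1")
    h_ir_face = lam * B * h_vr_face  # vertical rescale
    w_ir_face = lam * A * w_vr_face  # horizontal rescale
    return h_ir_face, w_ir_face
```

A smaller λ trades coverage of the face for robustness against hot or cold background pixels at the box edges.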
Step S504, a mapping table between range intervals of different face heights in the visible light image and the horizontal distance L between the face and the visible light camera is established. For example, as shown in Table 1, the range of H_vr_face can be divided into several intervals, wherein H_k+1 > H_k > H_k-1 > ...... > H_2 > H_1, and L_1 > L_2 > L_3 > ...... > L_k-1 > L_k.
TABLE 1

H_vr_face          L
[H_1, H_2]         L_1
(H_2, H_3]         L_2
(H_3, H_4]         L_3
......             ......
(H_k-2, H_k-1]     L_k-2
(H_k-1, H_k]       L_k-1
(H_k, H_k+1]       L_k
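Table 1's interval lookup can be implemented directly with a bisection search over the interval boundaries; the boundary and distance values in the usage below are invented placeholders:

```python
import bisect

def build_distance_lookup(boundaries, distances):
    """boundaries: ascending face heights [H_1, ..., H_{k+1}] in pixels;
    distances: [L_1, ..., L_k], with L_1 the farthest distance (smallest
    faces). Returns a function mapping a measured face height to its
    distance, implementing Table 1's interval scheme: the first interval
    [H_1, H_2] is closed, the rest are half-open (H_i, H_{i+1}]."""
    assert len(boundaries) == len(distances) + 1

    def lookup(h_vr_face):
        if not (boundaries[0] <= h_vr_face <= boundaries[-1]):
            return None  # outside the calibrated range
        i = bisect.bisect_left(boundaries, h_vr_face) - 1
        return distances[max(i, 0)]

    return lookup
```

For example, `build_distance_lookup([20, 40, 60, 80], [5.0, 3.0, 1.5])` maps a 41-pixel face to 3.0 m.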
Step S505, the face detection module outputs one or more face center position coordinate values (x_VR, y_VR) in the visible light image, together with the height H_vr_face and width W_vr_face of each face frame picture. For each detected face picture, the distance value L corresponding to its height or width is found in the mapping table, and this distance value and the face center coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the center position coordinates (x_IR, y_IR) of the corresponding face picture in the infrared image, until the infrared-image face-center coordinates (x_IR, y_IR) of all currently detected faces have been calculated.
Step S506, the height and width of each detected face picture are input into the height and width mapping model of the visible light image and the infrared image, and the height H_ir_face and width W_ir_face of the corresponding face picture are calculated.
Step S507, according to the infrared-image face-picture center position coordinates (x_IR, y_IR), the face picture height H_ir_face and the width W_ir_face corresponding to each detected face, the corresponding infrared image area is determined; the highest temperature is taken from that area and recorded as the face temperature of the corresponding person, and each person's temperature value is marked around or inside the corresponding face frame in the visible light picture.
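Steps S505-S507 can be strung together as the following sketch; the three mapping callables are hypothetical stand-ins for the registration model, the height/width mapping model, and Table 1, so that only the control flow here reflects the patent:

```python
import numpy as np

def fuse_and_measure(faces_vr, ir_temps, to_ir_center, to_ir_size, height_to_dist):
    """For each face box (x_vr, y_vr, h_vr, w_vr) detected in the visible
    image: look up its distance, map its center and size into the infrared
    image, and read the maximum temperature there. Returns a list of
    (x_vr, y_vr, temperature) tuples for annotating the visible picture."""
    results = []
    for (x_vr, y_vr, h_vr, w_vr) in faces_vr:
        L = height_to_dist(h_vr)                  # step S505: Table 1 lookup
        x_ir, y_ir = to_ir_center(x_vr, y_vr, L)  # step S505: registration model
        h_ir, w_ir = to_ir_size(h_vr, w_vr)       # step S506: size mapping
        y0, y1 = int(y_ir - h_ir / 2), int(y_ir + h_ir / 2)  # step S507: ROI
        x0, x1 = int(x_ir - w_ir / 2), int(x_ir + w_ir / 2)
        roi = ir_temps[max(y0, 0):y1, max(x0, 0):x1]
        results.append((x_vr, y_vr, float(roi.max())))
    return results
```

Because each face is processed independently, the same loop handles the multi-person case described below without modification.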
Through the above steps, this embodiment solves the image registration and fusion problem in the case where the optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have both a transverse and a longitudinal intersection angle) and the two cameras' positions differ along the X, Y and Z axes, and further solves the interference with face temperature detection caused by abnormal ambient temperature around the face in temperature measurement applications.
When the method of this embodiment is applied to a temperature measurement scene and the positions and region data of multiple faces are detected in the visible light picture, the face regions corresponding to all detected faces in the infrared picture can be obtained quickly and accurately, and the face temperature data of those regions can then be obtained accurately. This completely eliminates the interference with face temperature detection caused by abnormal ambient temperature around the faces and greatly improves the efficiency and accuracy of face temperature detection.
Example 2
The embodiment provides another visible light and infrared image registration and fusion method, which can be applied to equipment provided with a visible light camera and an infrared camera. In the present embodiment, the spatial positional relationship between the visible light camera and the infrared camera can be seen in fig. 3 and 4.
As shown in fig. 3 and 4, in the present embodiment, the optical axis of the infrared camera is perpendicular to the device front view plane, the optical axis of the visible light camera may not be perpendicular to the device front view plane, and the relative angle of the same object in two different camera pictures is 0 (i.e., the position of the same object in the pictures is not rotated).
The present embodiment will be described in detail below with reference to a scene in which a visible light camera and an infrared camera are used to measure the temperature of multiple people, but the technical solution provided by this embodiment may also be applied to other scenes requiring image fusion. As shown in fig. 6, the shooting scene is a helmet equipped with a visible light camera and an infrared camera that capture the faces in front of the helmet; the method includes the following steps:
step S601, establishing a registration model of the visible light image and the infrared image.
Specifically, in this step, the following registration model of the visible light image and the infrared image can be established according to the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y and Z axes, the display resolutions, the horizontal and vertical field angles, the transverse included angle of the two optical axes (i.e., the angle between the ZOY planes of the two cameras), the longitudinal intersection angle of the two optical axes (i.e., the angle between the ZOX planes of the two cameras), and the horizontal distance L of the target object from the visible light camera:
(registration model formula; reproduced as an image in the original)

wherein A, B, C and D are conversion parameters; m, n and d are the relative distances between the two cameras along the X, Y and Z axes and are known constants; the transverse included angle and the longitudinal intersection angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera also enter the model; and the horizontal distance L between the target object and the visible light camera is the variable quantity. x_VR and y_VR are the pixel coordinate values of the visible light image, and x_IR and y_IR are the pixel coordinate values of the infrared image.
In this embodiment, the conversion parameters A, B, C and D may be obtained from the horizontal and vertical field angles of the visible light camera, the horizontal and vertical field angles of the infrared camera, and the display resolution parameters of the infrared camera and the visible light camera. For example:
(conversion parameter formulas; reproduced as an image in the original)
wherein w_VR is the horizontal display resolution of the visible light camera, h_VR the vertical display resolution of the visible light camera, w_IR the horizontal display resolution of the infrared camera, and h_IR the vertical display resolution of the infrared camera; α is the horizontal field angle of the visible light camera, β the vertical field angle of the visible light camera, θ the horizontal field angle of the infrared camera, and φ the vertical field angle of the infrared camera.
Step S602, calibrating a transverse included angle and a longitudinal intersection angle between an optical axis of a visible light camera and an optical axis of an infrared camera in the registration model. In this embodiment, another calibration method for a transverse included angle and a longitudinal intersection angle in a registration model is provided, and specifically, the method may include the following steps:
1) Without manually adjusting the visible light camera and the infrared camera, select an object at a horizontal distance L_C from the visible light camera. For example, L_C can be chosen in the range [0.3 m, 7 m];
2) Adjust the optical axis of the visible light camera so that the same position of the object or human body is located at a specific position in both the visible light picture and the infrared picture, the specific position being displayed with a marker (such as a crosshair or another marker image) on each picture, for example: x_IR and y_IR are both 0, x_VR is 200, and y_VR is 100;
3) The transverse included angle and the longitudinal intersection angle γ of the two optical axes can then be calculated according to the registration model of the visible light image and the infrared image. (The calibration formulas are reproduced as images in the original.)
The specific position in this embodiment can be chosen flexibly. For example, in another embodiment, x_VR, y_VR, x_IR and y_IR are all 0, and the transverse included angle and the longitudinal intersection angle γ of the two optical axes can likewise be calculated according to the registration model of the visible light image and the infrared image. (The formulas are reproduced as images in the original.)
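Under the zero-coordinate variant just described, an intuitive pinhole reading (assumed here for illustration; the patent's formulas are images) is that each axis angle equals the parallax angle subtended by the corresponding camera baseline at the calibration distance:

```python
import math

def calibrate_with_markers(L_c, m, n):
    """When the optical axis is adjusted so the same object point sits at
    pixel (0, 0) in both pictures, the registration model's pixel terms
    cancel, and (under the pinhole reading assumed here) the axis angles
    reduce to the parallax angles of the camera baselines m (horizontal)
    and n (vertical) at distance L_c. Angles are returned in degrees."""
    psi = math.degrees(math.atan(m / L_c))    # transverse included angle
    gamma = math.degrees(math.atan(n / L_c))  # longitudinal intersection angle
    return psi, gamma
```

This makes concrete why no manual camera adjustment beyond the marker alignment is needed: the angles follow directly from known baselines and one known distance.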
step S603, establishing a height and width mapping model of the visible light image and the infrared image as follows:
(height and width mapping model formula; reproduced as an image in the original)

wherein A, C, the transverse included angle, γ, n and L are known, and λ is a configuration parameter in the range 0.1 < λ ≤ 1. By adjusting the size of λ, the interference caused by the background temperature of areas other than the human face can be avoided.
In this embodiment, the established height and width mapping model further allows the size proportions of an object to be fused rapidly. For example, given the distance L and the height H_VR-Face and width W_VR-Face of a face picture centered in the visible light image, H_ir_face and W_ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image, and then, from x_IR, y_IR, H_ir_face and W_ir_face, the temperature information of the corresponding face area in the infrared image can be acquired.
Step S604, a mapping table between range intervals of different face heights in the visible light image and the horizontal distance L between the face and the visible light camera is established. For example, as shown in Table 1, the range of H_vr_face can be divided into several intervals, wherein H_k+1 > H_k > H_k-1 > ...... > H_2 > H_1, and L_1 > L_2 > L_3 > ...... > L_k-1 > L_k.
TABLE 1

H_vr_face          L
[H_1, H_2]         L_1
(H_2, H_3]         L_2
(H_3, H_4]         L_3
......             ......
(H_k-2, H_k-1]     L_k-2
(H_k-1, H_k]       L_k-1
(H_k, H_k+1]       L_k
Step S605, the face detection module outputs one or more face center position coordinate values (x_VR, y_VR) in the visible light image, together with the height H_vr_face and width W_vr_face of each face frame picture. For each detected face picture, the distance value L corresponding to its height or width is found in the mapping table, and this distance value and the face center coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the center position coordinates (x_IR, y_IR) of the corresponding face picture in the infrared image, until the infrared-image face-center coordinates (x_IR, y_IR) of all currently detected faces have been calculated.
Step S606, the height and width of each detected face picture are input into the height and width mapping model of the visible light image and the infrared image, and the height H_ir_face and width W_ir_face of the corresponding face picture are calculated.
Step S607, according to the infrared-image face-picture center position coordinates (x_IR, y_IR), the face picture height H_ir_face and the width W_ir_face corresponding to each detected face, the corresponding infrared image area is determined; the highest temperature is taken from that area and recorded as the face temperature of the corresponding person, and each person's temperature value is marked around or inside the corresponding face frame in the visible light picture.
Through the above steps, this embodiment solves the image registration and fusion problem in the case where the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have both a transverse and a longitudinal intersection angle) and the two cameras' positions differ along the X, Y and Z axes, and further solves the interference with face temperature detection caused by abnormal ambient temperature around the face in temperature measurement applications.
The technical solution provided by this embodiment solves the image registration and fusion problem in wearable devices where the central optical axes of the visible light camera and the infrared camera are not parallel and the two cameras' positions differ along the X, Y and Z axes, and solves the interference with face temperature detection caused by abnormal ambient temperature around the face in temperature measurement applications. When the positions and region data of multiple faces are detected in the visible light picture, the face regions corresponding to all detected faces in the infrared picture are obtained quickly and accurately, and the face temperature data of those regions are then obtained accurately; this completely resolves the interference with face temperature detection caused by abnormal ambient temperature around the faces and greatly improves the efficiency and accuracy of face temperature detection. In addition, the calibration process of the image registration and fusion model provided by this embodiment is simple and quick and avoids a large amount of complex computation; compared with image fusion algorithms that require more computing resources, the model provided by this embodiment needs fewer computing resources and is more efficient.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the steps of the methods in the above embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated separately as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A registration and fusion method of a visible light image and an infrared image, applied to a device provided with a visible light camera and an infrared camera, wherein an optical axis of the infrared camera is perpendicular to a front view plane of the device, characterized by comprising the following steps:
establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera according to the spatial relative position of the visible light camera and the infrared camera, the conversion parameter and the horizontal distance between the target object and the visible light camera;
and registering and fusing the visible light image and the infrared image of the target object according to the registration model.
2. The method of claim 1, wherein the registration model is:
(registration model formula; reproduced as an image in the original)

wherein A, B, C and D are respectively a first conversion parameter, a second conversion parameter, a third conversion parameter and a fourth conversion parameter; m, n and d are respectively the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y and Z axes; the transverse included angle and the longitudinal intersection angle γ of the two cameras' optical axes also enter the model; L is the horizontal distance of the target object from the visible light camera; (x_VR, y_VR) are the visible light image pixel coordinates and (x_IR, y_IR) are the infrared image pixel coordinates.
3. The method according to claim 2, wherein the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are obtained by the following steps:
selecting a first reference object with a first set distance from the horizontal distance of the visible light camera;
simultaneously acquiring images of the first reference object through a visible light camera and an infrared camera, and acquiring coordinates of the same position of the first reference object in the visible light image and the infrared image respectively;
and respectively calibrating a transverse included angle and a longitudinal intersection angle of the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model according to the first set distance and the coordinates of the same position of the first reference object in the visible light image and the infrared image.
4. The method of claim 3, wherein the transverse included angle and the longitudinal intersection angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are given by formulas reproduced as images in the original, wherein L_C is the first set distance, (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image, and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
5. The method according to claim 2, wherein the transverse included angle and the longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are obtained by the following steps:
selecting a second reference object with a horizontal distance from the visible light camera as a second set distance;
adjusting the optical axis of the visible light camera so that the same position of the second reference object is located at a specific position of the visible light image and the infrared image;
and calibrating, according to the second set distance and the coordinate values of the specific position, a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
6. The method of claim 5, wherein the transverse included angle and the longitudinal intersection angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are given by formulas reproduced as images in the original, wherein L_C is the second set distance, the coordinates of the same position of the second reference object at the specific position in the visible light image are (0, 0), and the coordinates of that position at the specific position in the infrared image are (0, 0);

or,

(alternative formulas, also reproduced as images in the original)

wherein L_C is the second set distance, the coordinates of the same position of the second reference object at the specific position in the visible light image are (200, 100), and the coordinates of that position at the specific position in the infrared image are (0, 0).
7. The method according to claim 3 or 5, after calibrating the transverse angle and the longitudinal angle of intersection between the optical axis of the visible-light camera and the optical axis of the infrared camera in the registration model, further comprising:
substituting a transverse included angle and a longitudinal intersection angle between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model to establish a height and width mapping model of the visible light image and the infrared image;
and establishing a mapping relation between the height of the designated area of the target object in the multiple groups of visible light images and the horizontal distance from the target object to the visible light camera based on the plurality of different horizontal distances from the target object to the visible light camera.
8. The method of claim 7, wherein registering and fusing the visible light image and the infrared image of the target object according to the registration model comprises:
acquiring a central position coordinate value of a designated area of the target object in a visible light image and the height and width of the designated area of the target object in the visible light image, and finding a horizontal distance value corresponding to the target object and the visible light camera in the height and width mapping model according to the height and width of the designated area of the target object in the visible light image;
inputting the corresponding horizontal distance value and the central position coordinate value of the designated area of the target object into the registration model, and calculating to obtain the central position coordinate value of the designated area of the target object in the infrared image;
inputting the height and the width of the designated area of the target object in the visible light image into the height and width mapping model, and calculating to obtain the height and the width of the designated area of the target object in the infrared image;
and determining the designated area of the target object in the infrared image according to the center position coordinates of the designated area of the target object in the infrared image and the height and width of the designated area of the target object in the infrared image.
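The registration steps in the claim above amount to: look up the target's horizontal distance from its region height in the visible light image, then map the region's center and size into infrared coordinates using parameters calibrated for that distance. A minimal sketch, in which the lookup table, the affine `model` callback, and all numeric values are hypothetical rather than the patent's actual registration model:

```python
# Hypothetical calibration table: region height in the visible image (px)
# paired with horizontal distance from target to visible light camera (m).
HEIGHT_TO_DISTANCE = [(400, 1.0), (200, 2.0), (100, 4.0), (50, 8.0)]

def lookup_distance(height_px):
    """Pick the calibrated distance whose recorded height is closest."""
    return min(HEIGHT_TO_DISTANCE, key=lambda hz: abs(hz[0] - height_px))[1]

def map_region_to_infrared(cx, cy, w, h, model):
    """Map a visible-image region (center cx, cy; size w, h) into the
    infrared image. `model(distance)` returns distance-dependent affine
    parameters (scale, dx, dy) from the registration model."""
    scale, dx, dy = model(lookup_distance(h))
    return cx * scale + dx, cy * scale + dy, w * scale, h * scale
```

With a constant model `lambda d: (0.5, 10.0, 5.0)`, a visible-image region centered at (200, 100) with size 80x100 maps to an infrared region centered at (110, 55) with size 40x50.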
9. The method of claim 8, further comprising, after determining the designated area of the target object in the infrared image:
acquiring the highest temperature value in the designated area of the target object in the infrared image;
and marking the temperature value at the specified position of the specified area of the target object in the visible light image.
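The temperature-marking step above can be sketched as: scan the mapped infrared region for its peak value and format a label to overlay on the visible light image. The thermal-matrix layout and label format below are assumptions for illustration:

```python
def peak_temperature_label(thermal, region):
    """Return the highest temperature inside a rectangular region of a
    thermal matrix (row-major list of rows, in degrees Celsius) together
    with the text label to overlay on the visible light image.
    `region` is (x0, y0, x1, y1), exclusive on the right and bottom."""
    x0, y0, x1, y1 = region
    t_max = max(thermal[y][x] for y in range(y0, y1) for x in range(x0, x1))
    return t_max, "%.1f degC" % t_max
```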
10. A registration and fusion device for a visible light image and an infrared image, the device being located on an apparatus equipped with a visible light camera and an infrared camera, wherein an optical axis of the infrared camera is perpendicular to a front view plane of the apparatus, characterized in that the registration and fusion device comprises:
the registration model establishing module is used for establishing a registration model of the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera according to the spatial relative position of the visible light camera and the infrared camera, the conversion parameter and the horizontal distance between the target object and the visible light camera;
and the image fusion module is used for registering and fusing the visible light image and the infrared image of the target object according to the registration model.
11. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the method as claimed in any one of claims 1 to 9.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method as claimed in any one of claims 1 to 9.
CN202110645248.3A 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image Pending CN115457089A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110645248.3A CN115457089A (en) 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image
PCT/CN2022/095838 WO2022257794A1 (en) 2021-06-08 2022-05-30 Method and apparatus for processing visible light image and infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110645248.3A CN115457089A (en) 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image

Publications (1)

Publication Number Publication Date
CN115457089A true CN115457089A (en) 2022-12-09

Family

ID=84294461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110645248.3A Pending CN115457089A (en) 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image

Country Status (1)

Country Link
CN (1) CN115457089A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541629A (en) * 2023-06-25 2024-02-09 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet
CN117541629B (en) * 2023-06-25 2024-06-11 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet

Similar Documents

Publication Publication Date Title
CN111210468B (en) Image depth information acquisition method and device
JP4224260B2 (en) Calibration apparatus, method, result diagnosis apparatus, and calibration chart
US9482515B2 (en) Stereoscopic measurement system and method
CN112232279B (en) Personnel interval detection method and device
US9454822B2 (en) Stereoscopic measurement system and method
CN106514651A (en) Measurement system and calibration method
US9182220B2 (en) Image photographing device and method for three-dimensional measurement
WO2017022033A1 (en) Image processing device, image processing method, and image processing program
JP2014013147A5 (en)
WO2022257794A1 (en) Method and apparatus for processing visible light image and infrared image
JP7151879B2 (en) Camera calibration device, camera calibration method, and program
US9286506B2 (en) Stereoscopic measurement system and method
US20190156511A1 (en) Region of interest image generating device
CN103198481A (en) Camera calibration method and achieving system of same
CN115457089A (en) Registration fusion method and device for visible light image and infrared image
CN115187612A (en) Plane area measuring method, device and system based on machine vision
Santana-Cedrés et al. Estimation of the lens distortion model by minimizing a line reprojection error
JPH09210649A (en) Three dimensional measurement device
US20240159621A1 (en) Calibration method of a portable electronic device
CN115457090A (en) Registration fusion method and device for visible light image and infrared image
CN112241984A (en) Binocular vision sensor calibration method and device, computer equipment and storage medium
JP2002288633A (en) Image processing device and its positional correction method
CN114862960A (en) Multi-camera calibrated image ground leveling method and device, electronic equipment and medium
EP2283314B1 (en) Stereoscopic measurement system and method
KR101239671B1 (en) Method and apparatus for correcting distortion of image by lens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination