WO2022252697A1 - Camera image fusion method and camera image fusion system - Google Patents

Camera image fusion method and camera image fusion system

Info

Publication number
WO2022252697A1
Authority
WO
WIPO (PCT)
Prior art keywords
image area
pixel
image
camera
pixels
Prior art date
Application number
PCT/CN2022/076719
Other languages
French (fr)
Chinese (zh)
Inventor
尹睿
徐敏
张卫
Original Assignee
上海集成电路制造创新中心有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海集成电路制造创新中心有限公司
Publication of WO2022252697A1 publication Critical patent/WO2022252697A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a camera image fusion method, comprising: enabling an imaging plane of a TOF camera and an imaging plane of an RGB camera to be parallel to each other or to be located in the same plane; photographing the same target to form a first image and a second image respectively; obtaining a degree of pixel matching between a first primary image area of the first image and a first secondary image area of the second image; comparing the degree of pixel matching to a preset degree-of-matching threshold to determine whether pixels in the first primary image area and pixels in the first secondary image area meet a fusion requirement; and if the degree of pixel matching is less than the degree-of-matching threshold, determining that the image fusion requirement is met, and adding pixel color information corresponding to the first secondary image area to the first primary image area. Thus, image fusion errors are reduced and the image fusion speed is increased. The present invention further provides a camera image fusion system for implementing the camera image fusion method.

Description

Camera image fusion method and camera image fusion system
Cross Reference
This application claims priority to Chinese patent application No. 202110596689.9, filed on May 31, 2021. The content of the above application is incorporated herein by reference.
Technical Field
The present invention relates to the technical field of camera image fusion, and in particular to a camera image fusion method and a camera image fusion system.
Technical Background
Computer vision is widely used in everyday life, but ordinary RGB cameras can capture only the color information of the field of view. RGB-D cameras provide corresponding depth information along with the usual RGB image, but depth computed from disparity contains errors, and relying on the computer to find a matching pixel for every pixel consumes enormous computing power.
Therefore, it is necessary to provide a new camera image fusion method and camera image fusion system to solve the above-mentioned problems in the prior art.
Summary of the Invention
The object of the present invention is to provide a camera image fusion method and a camera image fusion system that reduce errors and increase image fusion speed.
To achieve the above object, the camera image fusion method of the present invention includes the following steps:
S1: Adjust the positions of a TOF camera and an RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or lie in the same plane; the TOF camera and the RGB camera photograph the same target to form a first image and a second image, respectively;
S2: Obtain the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, where the relative position of the first primary image area on the first image is the same as the relative position of the first secondary image area on the second image;
S3: Compare the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement; if the pixel matching degree is less than the matching degree threshold, determine that the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement and then perform step S4; if the pixel matching degree is greater than or equal to the matching degree threshold, determine that they do not meet the image fusion requirement and then perform steps S2 and S3;
S4: Add the pixel color information corresponding to the first secondary image area to the first primary image area, and then perform steps S2 and S3 until all positions of the first image and the second image have been processed.
The beneficial effect of the camera image fusion method is that the pixel matching degree between the first primary image area of the first image and the first secondary image area of the second image is obtained and compared with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement; if the pixel matching degree is less than the matching degree threshold, the pixels are judged to meet the requirement. This reduces image fusion errors and increases image fusion speed.
Preferably, step S2 includes a pixel column difference calculation step: multiply the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first primary image area to obtain the pixel column difference. The beneficial effect is that it is convenient to relate the coordinates of the images captured by the two cameras.
Further preferably, step S2 also includes an image area acquisition step: select any position on the first image as the first primary image area, and then obtain the first secondary image area from the coordinate positions within the first primary image area and the pixel column difference. The beneficial effect is that the selected image areas are guaranteed to show the same position of the target as captured by the two cameras.
Further preferably, step S2 also includes a pixel gray-scale difference calculation step: calculate the difference between the pixel value at each coordinate position in the first primary image area and the pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference. The beneficial effect is that the gray-scale difference is convenient to compute.
Further preferably, step S2 also includes a pixel gray-scale difference calculation step: calculate the difference between the pixel value at each coordinate position in the first primary image area and the calibrated pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
Further preferably, step S2 also includes a pixel gray-scale difference summation step: add up the absolute values of the gray-scale differences to obtain the pixel matching degree. The beneficial effect is that the pixel matching degree is convenient to obtain.
Preferably, step S3 includes: upon determining that the pixels in the first primary image area and the pixels in the first secondary image area do not meet the image fusion requirement, adding black pixel information to the first primary image area.
Preferably, the camera image fusion method further includes a matching degree threshold calculation step.
The present invention also provides a camera image fusion system, including a TOF camera, an RGB camera, an adjustment unit, a pixel matching degree calculation unit, a judging unit, and a fusion unit. The TOF camera and the RGB camera are used to photograph the same target to form a first image and a second image, respectively; the adjustment unit is used to adjust the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or lie in the same plane; the pixel matching degree calculation unit is used to obtain the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, where the relative position of the first primary image area on the first image is the same as the relative position of the first secondary image area on the second image; the judging unit is used to compare the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement: if the pixel matching degree is less than the matching degree threshold, it determines that they meet the image fusion requirement, and if the pixel matching degree is greater than or equal to the matching degree threshold, it determines that they do not; the fusion unit is used to add the pixel color information corresponding to the first secondary image area to the first primary image area when the judging unit determines that the pixels in the two areas meet the image fusion requirement.
The beneficial effect of the camera image fusion system is that the pixel matching degree calculation unit obtains the pixel matching degree between the first primary image area of the first image and the first secondary image area of the second image, and the judging unit compares the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement; if the pixel matching degree is less than the matching degree threshold, the pixels are judged to meet the requirement. This reduces image fusion errors and increases image fusion speed.
Preferably, the pixel matching degree calculation unit includes a pixel column difference calculation module, which is used to multiply the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first primary image area to obtain the pixel column difference.
Further preferably, the pixel matching degree calculation unit also includes an image area acquisition module, which is used to select any position on the first image as the first primary image area and then obtain the first secondary image area from the coordinate positions within the first primary image area and the pixel column difference.
Further preferably, the pixel matching degree calculation unit also includes a first pixel gray-scale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first primary image area and the pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
Further preferably, the pixel matching degree calculation unit also includes a second pixel gray-scale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first primary image area and the calibrated pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
Description of the Drawings
Fig. 1 is a structural block diagram of the camera image fusion system of the present invention;
Fig. 2 is a flowchart of the camera image fusion method of the present invention.
Contents of the Invention
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention. Unless otherwise defined, the technical or scientific terms used herein shall have the ordinary meaning understood by persons of ordinary skill in the art to which the present invention belongs. As used herein, "comprising" and similar words mean that the elements or items preceding the word encompass the elements or items listed after the word and their equivalents, without excluding other elements or items.
To address the problems in the prior art, an embodiment of the present invention provides a camera image fusion system. Referring to Fig. 1, the camera image fusion system 100 includes a TOF camera 101, an RGB camera 102, an adjustment unit 103, a pixel matching degree calculation unit 104, a judging unit 105, and a fusion unit 106. The TOF camera 101 and the RGB camera 102 are used to photograph the same target to form a first image and a second image, respectively; the adjustment unit 103 is used to adjust the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera 101 and the imaging plane of the RGB camera 102 are parallel to each other or lie in the same plane; the pixel matching degree calculation unit 104 is used to obtain the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, where the relative position of the first primary image area on the first image is the same as the relative position of the first secondary image area on the second image; the judging unit 105 is used to compare the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement: if the pixel matching degree is less than the matching degree threshold, it determines that they meet the image fusion requirement, and if the pixel matching degree is greater than or equal to the matching degree threshold, it determines that they do not; the fusion unit 106 is used to add the pixel color information corresponding to the first secondary image area to the first primary image area when the judging unit determines that the pixels in the two areas meet the image fusion requirement.
In some embodiments, the pixel matching degree calculation unit includes a pixel column difference calculation module, which is used to multiply the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first primary image area to obtain the pixel column difference.
In some embodiments, the pixel matching degree calculation unit further includes an image area acquisition module, which is used to select any position on the first image as the first primary image area and then obtain the first secondary image area from the coordinate positions within the first primary image area and the pixel column difference.
In some embodiments, the pixel matching degree calculation unit further includes a first pixel gray-scale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first primary image area and the pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
In some embodiments, the pixel matching degree calculation unit further includes a second pixel gray-scale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first primary image area and the calibrated pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
Fig. 2 is a flowchart of the image fusion method in some embodiments of the present invention. Referring to Fig. 2, the image fusion method includes the following steps:
S1: Adjust the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or lie in the same plane; the TOF camera and the RGB camera photograph the same target to form a first image and a second image, respectively;
S2: Obtain the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, where the relative position of the first primary image area on the first image is the same as the relative position of the first secondary image area on the second image;
S3: Compare the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement; if the pixel matching degree is less than the matching degree threshold, determine that the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement and then perform step S4; if the pixel matching degree is greater than or equal to the matching degree threshold, determine that they do not meet the image fusion requirement and then perform steps S2 and S3;
S4: Add the pixel color information corresponding to the first secondary image area to the first primary image area, and then perform steps S2 and S3 until all positions of the first image and the second image have been processed. This reduces image fusion errors and increases image fusion speed.
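Purely as an illustration of the S2-S4 loop described above (not part of the original disclosure), the sketch below walks the first image in small areas and copies color from the second image where the matching degree passes the threshold. The window size, the [row, column] array layout with the pixel column difference applied to the column index, and the use of the TOF intensity image for gray-scale matching are all assumptions made for the example.

```python
import numpy as np

def fuse_images(tof_gray, tof_depth, rgb_img, baseline, focal, threshold, win=3):
    """Sketch of steps S2-S4: for each win x win area of the first (TOF) image,
    locate the corresponding area of the second (RGB) image via the pixel
    column difference d = B*f/Z, compute the pixel matching degree, and copy
    the RGB color information into areas that meet the fusion requirement."""
    rows, cols = tof_gray.shape
    fused = np.zeros((rows, cols, 3), dtype=np.uint8)   # unmatched areas stay black
    rgb_gray = rgb_img.mean(axis=2)
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            d = int(round(baseline * focal / tof_depth[r, c]))  # pixel column difference
            if c - d < 0 or c - d + win > cols:
                continue
            # Pixel matching degree: sum of absolute gray-scale differences (step S2).
            f = np.abs(tof_gray[r:r+win, c:c+win].astype(float)
                       - rgb_gray[r:r+win, c-d:c-d+win]).sum()
            if f < threshold:                            # step S3: requirement met
                fused[r:r+win, c:c+win] = rgb_img[r:r+win, c-d:c-d+win]  # step S4
    return fused
```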
In some specific embodiments, in step S1, epipolar rectification is performed on the TOF camera and the RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera lie in the same plane. Epipolar rectification is a well-known technique in the art and is not described in detail here.
In some embodiments, step S2 includes a pixel column difference calculation step: multiply the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first primary image area to obtain the pixel column difference. Specifically, the pixel column difference is calculated as d = (B × f) / Z, where d is the pixel column difference, B is the baseline length between the TOF camera and the RGB camera, f is the focal length of the TOF camera, and Z is the pixel depth value.
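As a quick numeric check of d = (B × f) / Z (the values below are made-up assumptions, not from the application):

```python
def pixel_column_difference(B, f, Z):
    """d = (B * f) / Z: baseline length B, TOF camera focal length f (in pixels),
    and pixel depth value Z of the first primary image area."""
    return (B * f) / Z

# Assumed values: a 50 mm baseline, a 600-pixel focal length, and a point
# 1500 mm away give a column shift of 20 pixels.
d = pixel_column_difference(50.0, 600.0, 1500.0)  # -> 20.0
```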
In some embodiments, step S2 further includes an image area acquisition step: select any position on the first image as the first primary image area, and then obtain the first secondary image area from the coordinate positions within the first primary image area and the pixel column difference.
In some embodiments, step S2 further includes a pixel gray-scale difference calculation step: calculate the difference between the pixel value at each coordinate position in the first primary image area and the pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
In some embodiments, step S2 further includes a pixel gray-scale difference calculation step: calculate the difference between the pixel value at each coordinate position in the first primary image area and the calibrated pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
In some embodiments, step S2 further includes a pixel gray-scale difference summation step: add up the absolute values of the gray-scale differences to obtain the pixel matching degree.
In some specific embodiments, the pixel matching degree is calculated as
f(x, y, d) = Σ |I_L(x+dx, y+dy) - I_R(x+dx-d, y+dy)|,
where the sum runs over the window offsets (dx, dy) covering the image area, f(x, y, d) is the pixel matching degree, x is the abscissa, y is the ordinate, d is the pixel column difference, I_L is the pixel value of a pixel in the first primary image area, and I_R is the pixel value of a pixel in the first secondary image area.
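A direct sketch of this sum of absolute differences (the 3 × 3 window and the requirement that the shifted window stay inside the image are assumptions of the example):

```python
def pixel_matching_degree(I_L, I_R, x, y, d, win=3):
    """f(x, y, d): sum over window offsets (dx, dy) of
    |I_L(x+dx, y+dy) - I_R(x+dx-d, y+dy)|.
    The caller must ensure x >= d so the shifted index stays in bounds."""
    total = 0.0
    for dx in range(win):
        for dy in range(win):
            total += abs(float(I_L[x + dx, y + dy]) - float(I_R[x + dx - d, y + dy]))
    return total
```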
In some other specific embodiments, the pixel matching degree is calculated as
f(x, y, d) = Σ |I_L(x+dx, y+dy) - g(I_R(x+dx, y+dy+d))|,
where the sum again runs over the window offsets (dx, dy), f(x, y, d) is the pixel matching degree, x is the abscissa, y is the ordinate, d is the pixel column difference, I_L is the pixel value of a pixel in the first primary image area, I_R is the pixel value of a pixel in the first secondary image area, and g() is the calibration function, representing the difference between the TOF camera's and the RGB camera's ability to perceive light of the same color.
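The calibrated variant only wraps the I_R sample in g(). Since the application does not specify the form of g, a lookup table stands in for it here (an assumption):

```python
import numpy as np

# Placeholder calibration table mapping RGB-camera gray levels (0-255) to the
# TOF camera's scale; identity values shown, real entries come from calibration.
g_lut = np.arange(256, dtype=np.float64)

def calibrated_pixel_matching_degree(I_L, I_R, x, y, d, win=3):
    """f(x, y, d) with calibration: sum of |I_L(x+dx, y+dy) - g(I_R(x+dx, y+dy+d))|.
    I_R must hold integer gray levels (e.g. uint8) so they can index g_lut."""
    total = 0.0
    for dx in range(win):
        for dy in range(win):
            total += abs(float(I_L[x + dx, y + dy]) - g_lut[I_R[x + dx, y + dy + d]])
    return total
```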
In some specific embodiments, the pixel coordinates of the first primary image area include a first point (1,3), a second point (1,4), a third point (1,5), a fourth point (2,3), a fifth point (2,4), a sixth point (2,5), a seventh point (3,3), an eighth point (3,4), and a ninth point (3,5). When d is 2, the gray-scale difference of the first point is I_L(x+dx, y+dy) - I_R(x+dx-d, y+dy) = I_L(3,3) - I_R(2,3) = 1 - 2 = -1. The gray-scale differences of the second through ninth points are calculated in the same way as that of the first point and are not described one by one here.
In some specific embodiments, the pixel coordinates of the first primary image area include a first point (1,1), a second point (1,2), a third point (1,3), a fourth point (2,1), a fifth point (2,2), a sixth point (2,3), a seventh point (3,1), an eighth point (3,2), and a ninth point (3,3). When d = 2, the pixel coordinates of the first secondary image area corresponding to the first primary image area include a first corresponding point (1,3), a second corresponding point (1,4), a third corresponding point (1,5), a fourth corresponding point (2,3), a fifth corresponding point (2,4), a sixth corresponding point (2,5), a seventh corresponding point (3,3), an eighth corresponding point (3,4), and a ninth corresponding point (3,5). The gray-scale difference between the first point and the first corresponding point is I_L(x+dx, y+dy) - g(I_R(x+dx, y+dy+d)) = I_L(1,1) - g(I_R(1,3)); the gray-scale differences between the remaining points and their corresponding points are calculated in the same way.
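The correspondence in this example is simply a shift of the column index by d, which a two-line check reproduces (values taken from the points listed above):

```python
d = 2
primary = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)]
secondary = [(i, j + d) for (i, j) in primary]
# -> [(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 3), (3, 4), (3, 5)]
```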
In some embodiments, step S3 includes: upon determining that the pixels in the first primary image area and the pixels in the first secondary image area do not meet the image fusion requirement, adding black pixel information to the first primary image area.
In some embodiments, the camera image fusion method further includes a matching degree threshold calculation step, in which the matching degree threshold tgresl is computed over the whole frame from the calibration function g() and the lighting consistency function h. Here h is the function calibrating the lighting consistency between different cameras, row_size is the camera's resolution in the V direction, and col_size is the camera's resolution in the H direction; for example, for a camera with a resolution of 1920 × 1080, row_size is 1080 and col_size is 1920. The indices i and j range over 0 to row_size and 0 to col_size, respectively, and g() is the calibration function, representing the difference between the TOF camera's and the RGB camera's ability to perceive light of the same color.
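The application states the variables of this step but gives the formula itself only as an image, so the sketch below is one plausible reading (an assumption): average the calibrated per-pixel gray-scale discrepancy over the whole row_size × col_size frame and map it through the lighting consistency function h.

```python
import numpy as np

def matching_degree_threshold(tof_gray, rgb_gray, g, h=lambda s: s):
    """Assumed form: tgresl = h( sum over (i, j) of |I_L(i, j) - g(I_R(i, j))|
    / (row_size * col_size) ), with i over the rows and j over the columns."""
    row_size, col_size = tof_gray.shape            # e.g. 1080 rows x 1920 columns
    diff = np.abs(tof_gray.astype(np.float64) - g(rgb_gray.astype(np.float64)))
    return h(diff.sum() / (row_size * col_size))

# Example with placeholder calibration and lighting functions (assumptions):
# tgresl = matching_degree_threshold(tof, rgb_g, g=lambda v: 0.98 * v + 1.0)
```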
Although the embodiments of the present invention have been described in detail above, it will be apparent to those skilled in the art that various modifications and variations can be made to these embodiments. However, it should be understood that such modifications and variations fall within the scope and spirit of the present invention as set forth in the claims. Furthermore, the invention described herein is capable of other embodiments and of being practiced or carried out in various ways.

Claims (13)

  1. A camera image fusion method, characterized by comprising the following steps:
    S1: adjusting the positions of a TOF camera and an RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or lie in the same plane, the TOF camera and the RGB camera photographing the same target to form a first image and a second image, respectively;
    S2: obtaining the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, the relative position of the first primary image area on the first image being the same as the relative position of the first secondary image area on the second image;
    S3: comparing the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement; if the pixel matching degree is less than the matching degree threshold, determining that the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement and then performing step S4; if the pixel matching degree is greater than or equal to the matching degree threshold, determining that the pixels in the first primary image area and the pixels in the first secondary image area do not meet the image fusion requirement and then performing steps S2 and S3;
    S4: adding the pixel color information corresponding to the first secondary image area to the first primary image area, and then performing steps S2 and S3 until all positions of the first image and the second image have been processed.
  2. The camera image fusion method according to claim 1, characterized in that step S2 comprises a pixel column difference calculation step: multiplying the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first primary image area to obtain the pixel column difference.
  3. The camera image fusion method according to claim 2, characterized in that step S2 further comprises an image area acquisition step: selecting any position on the first image as the first primary image area, and then obtaining the first secondary image area from the coordinate positions within the first primary image area and the pixel column difference.
  4. The camera image fusion method according to claim 3, characterized in that step S2 further comprises a pixel gray-scale difference calculation step: calculating the difference between the pixel value at each coordinate position in the first primary image area and the pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
  5. The camera image fusion method according to claim 3, characterized in that step S2 further comprises a pixel gray-scale difference calculation step: calculating the difference between the pixel value at each coordinate position in the first primary image area and the calibrated pixel value at the corresponding coordinate position in the first secondary image area to obtain the gray-scale difference.
  6. The camera image fusion method according to claim 4 or 3, characterized in that step S2 further comprises a pixel gray-scale difference summation step: adding up the absolute values of the gray-scale differences to obtain the pixel matching degree.
  7. The camera image fusion method according to claim 1, characterized in that step S3 comprises: upon determining that the pixels in the first primary image area and the pixels in the first secondary image area do not meet the image fusion requirement, adding black pixel information to the first primary image area.
  8. The camera image fusion method according to claim 1, characterized by further comprising a matching degree threshold calculation step.
  9. A camera image fusion system for implementing the camera image fusion method according to any one of claims 1 to 8, characterized by comprising a TOF camera, an RGB camera, an adjustment unit, a pixel matching degree calculation unit, a judging unit, and a fusion unit, wherein the TOF camera and the RGB camera are used to photograph the same target to form a first image and a second image, respectively; the adjustment unit is used to adjust the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or lie in the same plane; the pixel matching degree calculation unit is used to obtain the pixel matching degree between a first primary image area of the first image and a first secondary image area of the second image, the relative position of the first primary image area on the first image being the same as the relative position of the first secondary image area on the second image; the judging unit is used to compare the pixel matching degree with a preset matching degree threshold to determine whether the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement, determining that they meet the image fusion requirement if the pixel matching degree is less than the matching degree threshold, and determining that they do not meet the image fusion requirement if the pixel matching degree is greater than or equal to the matching degree threshold; and the fusion unit is used to add the pixel color information corresponding to the first secondary image area to the first primary image area when the judging unit determines that the pixels in the first primary image area and the pixels in the first secondary image area meet the image fusion requirement.
  10. The camera image fusion system according to claim 9, wherein the pixel matching degree calculation unit comprises a pixel column difference calculation module, which is used to multiply the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first main image area to obtain a pixel column difference.
  11. The camera image fusion system according to claim 10, wherein the pixel matching degree calculation unit further comprises an image area acquisition module, which is used to select an arbitrary position on the first image as the first main image area, and then acquire the first sub-image area according to the coordinate positions in the first main image area and the pixel column difference.
  12. The camera image fusion system according to claim 11, wherein the pixel matching degree calculation unit further comprises a first pixel grayscale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first main image area and the pixel value at the corresponding coordinate position in the first sub-image area, to obtain a grayscale difference.
  13. The camera image fusion system according to claim 11, wherein the pixel matching degree calculation unit further comprises a second pixel grayscale difference calculation module, which is used to calculate the difference between the pixel value at each coordinate position in the first main image area and the calibrated pixel value at the corresponding coordinate position in the first sub-image area, to obtain a grayscale difference.
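
The pixel column difference of claims 10 and 11 is the standard stereo disparity relation: column offset = baseline × focal length / depth. A minimal Python sketch of how such modules might look; the patent specifies no implementation, so every function and parameter name below is an illustrative assumption:

```python
import numpy as np

def pixel_column_difference(baseline, focal_length, depth):
    """Claim 10 (sketch): multiply the baseline length by the focal length
    to get the first process value, then divide by the pixel depth value."""
    first_process_value = baseline * focal_length   # e.g. metres * pixels
    return first_process_value / depth              # disparity in pixel columns

def acquire_sub_image_area(second_image, rows, cols, column_difference):
    """Claim 11 (sketch): shift the main-area column coordinates by the
    pixel column difference to locate the first sub-image area in the
    second (RGB) image. `rows` and `cols` are integer index arrays."""
    shift = int(round(column_difference))
    shifted_cols = np.clip(np.asarray(cols) + shift, 0,
                           second_image.shape[1] - 1)
    return second_image[np.asarray(rows), shifted_cols]
```

For example, with a 5 cm baseline, an 800-pixel focal length and a point 2 m away, the offset is 0.05 × 800 / 2 = 20 columns.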
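Claims 5, 6, 12 and 13 together describe a sum-of-absolute-differences score between the two areas, computed on raw or calibrated pixel values. A hedged sketch, assuming 8-bit grayscale numpy arrays and a caller-supplied calibration function (the calibration itself is left unspecified by the patent):

```python
import numpy as np

def pixel_matching_degree(main_area, sub_area, calibrate=None):
    """Claims 5/6 (sketch): per-coordinate grayscale differences, summed
    as absolute values to give the pixel matching degree (lower = better).

    `calibrate`, if supplied, stands in for the calibrated pixel values of
    claims 5 and 13; omitting it gives the plain difference of claim 12.
    """
    main = main_area.astype(np.int32)   # widen from uint8 to avoid underflow
    sub = sub_area.astype(np.int32)
    if calibrate is not None:
        sub = calibrate(sub)                  # hypothetical radiometric alignment
    grayscale_diff = main - sub               # pixel grayscale difference step
    return int(np.abs(grayscale_diff).sum())  # grayscale difference summation
```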
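Claims 7 and 9 then reduce the fusion decision to a single threshold comparison: colour is copied from the sub-image area on a match, and black pixel information is written otherwise. A sketch under the same assumptions, where `main_area_color` is a numpy view into the fused output image:

```python
def fuse_area(main_area_color, sub_area_color, matching_degree, threshold):
    """Claims 7/9 (sketch): matching_degree < threshold means the areas
    correspond, so the sub-area colour information is added to the main
    (TOF) area; otherwise black pixel information is added instead."""
    if matching_degree < threshold:
        main_area_color[...] = sub_area_color  # add colour information (claim 9)
    else:
        main_area_color[...] = 0               # add black pixels (claim 7)
    return main_area_color
```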
PCT/CN2022/076719 2021-05-31 2022-02-18 Camera image fusion method and camera image fusion system WO2022252697A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110596689.9 2021-05-31
CN202110596689.9A CN113379854B (en) 2021-05-31 2021-05-31 Camera image fusion method and camera image fusion system

Publications (1)

Publication Number Publication Date
WO2022252697A1 (en)

Family

ID=77574867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/076719 WO2022252697A1 (en) 2021-05-31 2022-02-18 Camera image fusion method and camera image fusion system

Country Status (2)

Country Link
CN (1) CN113379854B (en)
WO (1) WO2022252697A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379854B (en) * 2021-05-31 2022-12-06 上海集成电路制造创新中心有限公司 Camera image fusion method and camera image fusion system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447677A (en) * 2016-10-12 2017-02-22 广州视源电子科技股份有限公司 Image processing method and apparatus thereof
US20170316602A1 (en) * 2014-10-31 2017-11-02 Nokia Technologies Oy Method for alignment of low-quality noisy depth map to the high-resolution colour image
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium
CN109905691A (en) * 2017-12-08 2019-06-18 浙江舜宇智能光学技术有限公司 Depth image acquisition device and depth image acquisition system and its image processing method
CN113379854A (en) * 2021-05-31 2021-09-10 上海集成电路制造创新中心有限公司 Camera image fusion method and camera image fusion system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10917628B2 (en) * 2018-04-02 2021-02-09 Mediatek Inc. IR pattern characteristics for active stereo matching
CN112770100B (en) * 2020-12-31 2023-03-21 南昌欧菲光电技术有限公司 Image acquisition method, photographic device and computer readable storage medium


Also Published As

Publication number Publication date
CN113379854B (en) 2022-12-06
CN113379854A (en) 2021-09-10


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22814746; country of ref document: EP; kind code of ref document: A1.

NENP: non-entry into the national phase. Ref country code: DE.