CN113379854A - Camera image fusion method and camera image fusion system - Google Patents
- Publication number: CN113379854A (application CN202110596689.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- camera
- matching degree
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/70—Image analysis: determining position or orientation of objects or cameras
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/50—Image analysis: depth or shape recovery
- G06T7/90—Image analysis: determination of colour characteristics
- G06T2207/20221—Image fusion; image merging (indexing scheme G06T2207/00, special algorithmic details G06T2207/20, image combination G06T2207/20212)
Abstract
The invention provides a camera image fusion method in which the imaging plane of a TOF camera and the imaging plane of an RGB camera are made parallel to each other or coplanar, and the two cameras shoot the same target to form a first image and a second image respectively. The method obtains the pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, and compares it with a preset matching degree threshold to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement. If the pixel matching degree is smaller than the matching degree threshold, the requirement is judged to be met, and the pixel color information corresponding to the first sub-image area is added to the first main image area, which reduces the image fusion error and increases the image fusion speed. The invention also provides a camera image fusion system for implementing the camera image fusion method.
Description
Technical Field
The invention relates to the technical field of camera image fusion, in particular to a camera image fusion method and a camera image fusion system.
Background
Computer vision is widely applied in daily life, but the commonly used RGB camera can only acquire color information from the field of view. An RGB-D camera can provide an ordinary RGB image together with corresponding depth information, but the depth is computed from parallax, and the computer must search for the matching pixel of every pixel point, so the computational cost is enormous.
Therefore, there is a need to provide a novel camera image fusion method and camera image fusion system to solve the above-mentioned problems in the prior art.
Disclosure of Invention
The invention aims to provide a camera image fusion method and a camera image fusion system, which are used for reducing errors and improving the image fusion speed.
In order to achieve the above object, the camera image fusion method of the present invention includes the following steps:
s1: adjusting the positions of a TOF camera and an RGB camera so that an imaging plane of the TOF camera and an imaging plane of the RGB camera are parallel to each other or in the same plane, and the TOF camera and the RGB camera shoot the same target to form a first image and a second image respectively;
s2: acquiring a pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, wherein the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image;
s3: comparing the pixel matching degree with a preset matching degree threshold value to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement; if the pixel matching degree is smaller than the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, and then executing step S4; if the pixel matching degree is greater than or equal to the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area do not meet the image fusion requirement, and then performing step S2 and step S3 again;
s4: adding the pixel color information corresponding to the first sub-image area to the first main image area, and then performing steps S2 and S3 until all positions of the first image and the second image are processed.
The camera image fusion method has the following beneficial effects: if the pixel matching degree is smaller than the matching degree threshold, the pixels in the first main image area and the pixels in the first sub-image area are judged to meet the image fusion requirement, which reduces the image fusion error and improves the image fusion speed.
Preferably, the step S2 includes a pixel column difference calculating step of multiplying the baseline length of the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first main image area to obtain a pixel column difference. The beneficial effects are that: it is convenient to relate the coordinates of corresponding points in the images taken by the two cameras.
Further preferably, the step S2 further includes an image area acquiring step, selecting an arbitrary position on the first image as the first main image area, and then acquiring the first sub-image area according to the coordinate position in the first main image area and the pixel column difference. The beneficial effects are that: the selected image areas are ensured to correspond to the same position of the target shot by the two cameras.
Further preferably, the step S2 further includes a pixel gray scale difference calculating step of calculating a difference between a pixel value at each coordinate position in the first main image area and a pixel value at a corresponding coordinate position in the first sub-image area to obtain a gray scale difference. The beneficial effects are that: the gray level difference can be calculated conveniently.
Further preferably, the step S2 further includes a pixel gray scale difference calculating step, which calculates a difference between a pixel value at each coordinate position in the first main image area and a pixel value calibrated at a corresponding coordinate position in the first sub-image area, so as to obtain a gray scale difference.
Further preferably, the step S2 further includes a step of summing pixel gray scale differences, in which absolute values of the gray scale differences are added to obtain the pixel matching degree. The beneficial effects are that: the pixel matching degree is convenient to obtain.
Preferably, the step S3 includes adding black pixel information to the first main image area if it is determined that the pixels in the first main image area and the pixels in the first sub-image area do not meet the image fusion requirement.
Preferably, the camera image fusion method further includes a matching degree threshold calculation step.
The invention also provides a camera image fusion system, which comprises a TOF camera, an RGB camera, an adjusting unit, a pixel matching degree calculating unit, a judging unit and a fusion unit, wherein the TOF camera and the RGB camera are used for shooting the same target to respectively form a first image and a second image; the adjusting unit is used for adjusting the positions of the TOF camera and the RGB camera so that an imaging plane of the TOF camera and an imaging plane of the RGB camera are parallel to each other or are positioned in the same plane; the pixel matching degree calculating unit is used for acquiring the pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, and the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image; the judging unit is configured to compare the pixel matching degree with a preset matching degree threshold to judge whether pixels in the first main image area and pixels in the first sub-image area meet an image fusion requirement, judge that the pixels meet the image fusion requirement if the pixel matching degree is smaller than the matching degree threshold, and judge that the pixels do not meet the image fusion requirement if the pixel matching degree is greater than or equal to the matching degree threshold; the fusion unit is configured to add, when the judging unit judges that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, the color information of the pixels corresponding to the first sub-image area to the first main image area.
The camera image fusion system has the following beneficial effects: the pixel matching degree calculating unit acquires the pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, and the judging unit compares the pixel matching degree with a preset matching degree threshold to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement; if the pixel matching degree is smaller than the matching degree threshold, the pixels are judged to meet the image fusion requirement, so that the image fusion error is reduced and the image fusion speed is improved.
Preferably, the pixel matching degree calculation unit includes a pixel column difference calculation module, and the pixel column difference calculation module is configured to multiply the baseline length of the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then divide the first process value by the pixel depth value of the first main image area to obtain a pixel column difference.
Further preferably, the pixel matching degree calculation unit further includes an image area acquisition module, where the image area acquisition module is configured to select an arbitrary position on the first image as the first main image area, and then acquire the first sub-image area according to a coordinate position in the first main image area and the pixel column difference.
Further preferably, the pixel matching degree calculating unit further includes a first pixel gray difference calculating module, and the first pixel gray difference calculating module is configured to calculate a difference between a pixel value at each coordinate position in the first main image area and a pixel value at a corresponding coordinate position in the first sub-image area, so as to obtain a gray difference.
Further preferably, the pixel matching degree calculating unit further includes a second pixel gray scale difference calculating module, and the second pixel gray scale difference calculating module is configured to calculate a difference between a pixel value at each coordinate position in the first main image area and a pixel value calibrated at a corresponding coordinate position in the first sub-image area, so as to obtain a gray scale difference.
Drawings
FIG. 1 is a block diagram of a camera image fusion system according to the present invention;
fig. 2 is a flowchart of a camera image fusion method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and similar words are intended to mean that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
In order to solve the problems in the prior art, an embodiment of the present invention provides a camera image fusion system, and referring to fig. 1, the camera image fusion system 100 includes a TOF camera 101, an RGB camera 102, an adjusting unit 103, a pixel matching degree calculating unit 104, a judging unit 105, and a fusion unit 106, where the TOF camera 101 and the RGB camera 102 are used to shoot the same target to form a first image and a second image respectively; the adjusting unit 103 is used for adjusting the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera 101 and the imaging plane of the RGB camera 102 are parallel to each other or in the same plane; the pixel matching degree calculating unit 104 is configured to obtain a pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, where the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image; the judging unit 105 is configured to compare the pixel matching degree with a preset matching degree threshold to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, judge that the pixels meet the image fusion requirement if the pixel matching degree is smaller than the matching degree threshold, and judge that the pixels do not meet the image fusion requirement if the pixel matching degree is greater than or equal to the matching degree threshold; the fusion unit 106 is configured to add, when the judging unit judges that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, the color information of the pixels corresponding to the first sub-image area to the first main image area.
In some embodiments, the pixel matching degree calculation unit comprises a pixel column difference calculation module for multiplying the baseline length of the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first main image area to obtain a pixel column difference.
In some embodiments, the pixel matching degree calculation unit further includes an image area acquisition module, where the image area acquisition module is configured to select an arbitrary position on the first image as the first main image area, and then acquire the first sub-image area according to a coordinate position in the first main image area and the pixel column difference.
In some embodiments, the pixel matching degree calculating unit further includes a first pixel gray difference calculating module, and the first pixel gray difference calculating module is configured to calculate a difference between a pixel value at each coordinate position in the first main image area and a pixel value at a corresponding coordinate position in the first sub-image area to obtain a gray difference.
In some embodiments, the pixel matching degree calculating unit further includes a second pixel gray difference calculating module, and the second pixel gray difference calculating module is configured to calculate a difference between a pixel value at each coordinate position in the first main image area and a calibrated pixel value at a corresponding coordinate position in the first sub-image area, so as to obtain a gray difference.
FIG. 2 is a flow chart of an image fusion method in some embodiments of the invention. Referring to fig. 2, the image fusion method includes the steps of:
s1: adjusting the positions of a TOF camera and an RGB camera so that an imaging plane of the TOF camera and an imaging plane of the RGB camera are parallel to each other or in the same plane, and the TOF camera and the RGB camera shoot the same target to form a first image and a second image respectively;
s2: acquiring a pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, wherein the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image;
s3: comparing the pixel matching degree with a preset matching degree threshold value to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement; if the pixel matching degree is smaller than the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, and then executing step S4; if the pixel matching degree is greater than or equal to the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area do not meet the image fusion requirement, and then performing step S2 and step S3 again;
s4: adding the pixel color information corresponding to the first sub-image area to the first main image area, and then performing steps S2 and S3 until all positions of the first image and the second image are processed. The image fusion error is reduced, and the image fusion speed is improved.
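The loop through steps S2–S4 can be sketched in plain Python. This is an illustrative reading, not the patent's implementation: the 3 × 3 window size, the averaged luminance, the rounding of the column difference, and the black fallback outside the valid range are all assumptions introduced here:

```python
def fuse(depth, gray_tof, rgb, B, f, threshold, win=1):
    """Sketch of steps S2-S4 over every position of the first (TOF) image.

    depth, gray_tof: 2D lists from the TOF camera (first image)
    rgb:             2D list of (r, g, b) tuples from the RGB camera (second image)
    Positions that fail the matching test stay black, mirroring the
    black-pixel fallback described for step S3.
    """
    h, w = len(gray_tof), len(gray_tof[0])
    fused = [[(0, 0, 0)] * w for _ in range(h)]  # black by default
    gray_rgb = [[sum(px) / 3.0 for px in row] for row in rgb]  # crude luminance
    for y in range(win, h - win):
        for x in range(win, w - win):
            z = depth[y][x]
            if z <= 0:
                continue
            d = int(round(B * f / z))            # pixel column difference
            if x - d < win or x - d + win >= w:
                continue                         # sub-image area out of bounds
            # S2: pixel matching degree = sum of absolute gray differences
            score = sum(
                abs(gray_tof[y + dy][x + dx] - gray_rgb[y + dy][x + dx - d])
                for dy in range(-win, win + 1)
                for dx in range(-win, win + 1))
            # S3/S4: add the colour information when the match is good enough
            if score < threshold:
                fused[y][x] = rgb[y][x - d]
    return fused
```

With identical gray content in both views, every valid position passes the matching test and receives the RGB color shifted by the pixel column difference d = B × f / Z.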
In some embodiments, in step S1, epipolar line correction is performed on the TOF camera and the RGB camera, so that an imaging plane of the TOF camera and an imaging plane of the RGB camera are located in the same plane. The epipolar line correction is well known in the art and will not be described in detail herein.
In some embodiments, the step S2 includes a pixel column difference calculating step of multiplying the baseline length of the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first main image area to obtain a pixel column difference. Specifically, the pixel column difference is calculated as d = (B × f)/Z, where d represents the pixel column difference, B represents the baseline length of the TOF camera and the RGB camera, f is the focal length of the TOF camera, and Z represents the pixel depth value.
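A minimal Python sketch of this step (the unit conventions — baseline and depth in the same length unit, focal length in pixels — are assumptions for illustration, not stated in the patent):

```python
def pixel_column_difference(baseline, focal_length, depth):
    """Pixel column difference d = (B * f) / Z between rectified TOF and
    RGB views of a point at depth Z.

    baseline:     B, distance between the two camera centres
    focal_length: f, focal length of the TOF camera, in pixels
    depth:        Z, pixel depth value of the first main image area
    """
    if depth <= 0:
        raise ValueError("depth must be positive")
    first_process_value = baseline * focal_length  # B * f
    return first_process_value / depth             # (B * f) / Z
```

For example, a 50 mm baseline, a 400-pixel focal length and a 1000 mm depth give a 20-column shift; a farther point (larger Z) yields a smaller shift, matching the inverse relationship in the formula.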
In some embodiments, the step S2 further includes an image area acquiring step, selecting an arbitrary position on the first image as the first main image area, and then acquiring the first sub-image area according to the coordinate position in the first main image area and the pixel column difference.
In some embodiments, the step S2 further includes a pixel gray scale difference calculating step of calculating a difference between a pixel value at each coordinate position in the first main image area and a pixel value at a corresponding coordinate position in the first sub-image area to obtain a gray scale difference.
In some embodiments, the step S2 further includes a pixel gray scale difference calculating step, which calculates a difference between a pixel value at each coordinate position in the first main image area and a calibrated pixel value at a corresponding coordinate position in the first sub-image area to obtain a gray scale difference.
In some embodiments, the step S2 further includes a pixel gray difference summing step, in which absolute values of the gray differences are added to obtain the pixel matching degree.
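Combining the gray-difference and summing steps, the pixel matching degree reduces to a sum of absolute differences. A sketch assuming the two image areas are given as 2D lists of gray values (the list representation is an assumption for illustration):

```python
def pixel_matching_degree(main_area, sub_area):
    """Sum of absolute gray differences between the first main image
    area and the first sub-image area, per the summing step of S2."""
    if len(main_area) != len(sub_area):
        raise ValueError("areas must have the same shape")
    total = 0
    for row_main, row_sub in zip(main_area, sub_area):
        for p_main, p_sub in zip(row_main, row_sub):
            total += abs(p_main - p_sub)  # absolute gray scale difference
    return total
```

A matching degree of 0 means the two areas are identical; the comparison in step S3 then accepts the fusion whenever this sum stays below the preset threshold.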
In some embodiments, the pixel matching degree is calculated by the formula f(x, y, d) = Σ_dx Σ_dy |I_L(x + dx, y + dy) − I_R(x + dx − d, y + dy)|, where f(x, y, d) represents the pixel matching degree, x represents the abscissa, y represents the ordinate, d represents the pixel column difference, dx and dy range over the coordinate offsets within the image area, I_L represents the pixel value of a pixel point within the first main image area, and I_R represents the pixel value of a pixel point within the first sub-image area.
In still other embodiments, the pixel matching degree is calculated by the formula f(x, y, d) = Σ_dx Σ_dy |I_L(x + dx, y + dy) − g(I_R(x + dx, y + dy + d))|, where f(x, y, d) represents the pixel matching degree, x represents the abscissa, y represents the ordinate, d represents the pixel column difference, I_L represents the pixel value of a pixel point within the first main image area, I_R represents the pixel value of a pixel point within the first sub-image area, and g() represents a calibration function describing the difference in the perceptibility of the TOF camera and the RGB camera for light of the same color.
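The calibrated variant differs only in passing each sub-image pixel through the calibration function g() before differencing. A sketch in which g is assumed to be any callable mapping an RGB-camera gray value onto the TOF camera's response:

```python
def calibrated_matching_degree(main_area, sub_area, g):
    """Pixel matching degree using a calibration function g() that models
    the two cameras' differing perception of the same light."""
    total = 0
    for row_main, row_sub in zip(main_area, sub_area):
        for p_main, p_sub in zip(row_main, row_sub):
            total += abs(p_main - g(p_sub))  # calibrated gray difference
    return total
```

If g() perfectly models the response difference, identical scenes yield a matching degree of 0 even though the raw pixel values of the two cameras differ.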
In some embodiments, the pixel coordinates of the first main image region include a first point (1, 3), a second point (1, 4), a third point (1, 5), a fourth point (2, 3), a fifth point (2, 4), a sixth point (2, 5), a seventh point (3, 3), an eighth point (3, 4) and a ninth point (3, 5). When d = 2, the gray scale difference of the first point is I_L(x + dx, y + dy) − I_R(x + dx − d, y + dy) = I_L(3, 3) − I_R(2, 3) = 1 − 2 = −1; the gray scale differences of the second point through the ninth point are calculated in the same way as that of the first point, and are not described in detail herein.
In some embodiments, the pixel coordinates of the first main image area include a first point (1, 1), a second point (1, 2), a third point (1, 3), a fourth point (2, 1), a fifth point (2, 2), a sixth point (2, 3), a seventh point (3, 1), an eighth point (3, 2) and a ninth point (3, 3). When d = 2, the pixel point coordinates of the first sub-image area corresponding to the first main image area include a first corresponding point (1, 3), a second corresponding point (1, 4), a third corresponding point (1, 5), a fourth corresponding point (2, 3), a fifth corresponding point (2, 4), a sixth corresponding point (2, 5), a seventh corresponding point (3, 3), an eighth corresponding point (3, 4) and a ninth corresponding point (3, 5), and the gray difference between the first point and the first corresponding point is I_L(x + dx, y + dy) − g(I_R(x + dx, y + dy + d)) = I_L(1, 1) − g(I_R(1, 3)); the remaining points are calculated in the same manner as the gray difference between the first point and the first corresponding point.
In some embodiments, the step S3 includes adding black pixel information to the first main image area if it is determined that the pixels in the first main image area and the pixels in the first sub-image area do not meet the image fusion requirement.
In some embodiments, the camera image fusion method further comprises a matching degree threshold calculation step that computes the matching degree threshold tgrasl, where H is a function for calibrating the consistency of the lighting between different cameras, row_size is the resolution of the camera in the V direction, col_size is the resolution of the camera in the H direction (for example, if the resolution of the camera is 1920 × 1080, then row_size = 1080 and col_size = 1920), i runs from 0 to row_size, j runs from 0 to col_size, and g() represents a calibration function describing the difference in the sensing ability of the TOF camera and the RGB camera for light of the same color.
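Since the threshold expression is described only through its variables, the following sketch is a loudly-labeled assumption rather than the patent's formula: it reads tgrasl as a per-pixel average of the illumination-consistency function H accumulated over the full frame. Both the signature of H and the averaging are hypothetical:

```python
def matching_degree_threshold(H, row_size, col_size):
    """Hypothetical reading of the threshold step: average H(i, j) over a
    row_size x col_size frame (e.g. row_size=1080, col_size=1920 for a
    1920x1080 camera). H models lighting consistency between the cameras."""
    total = 0.0
    for i in range(row_size):      # V direction
        for j in range(col_size):  # H direction
            total += H(i, j)
    return total / (row_size * col_size)
```

Under this reading, a scene pair whose calibrated responses agree everywhere produces a small threshold, so only near-perfect window matches pass the test in step S3.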
Although the embodiments of the present invention have been described in detail hereinabove, it is apparent to those skilled in the art that various modifications and variations can be made to these embodiments. However, it is to be understood that such modifications and variations are within the scope and spirit of the present invention as set forth in the following claims. Moreover, the invention as described herein is capable of other embodiments and of being practiced or of being carried out in various ways.
Claims (13)
1. A camera image fusion method is characterized by comprising the following steps:
s1: adjusting the positions of a TOF camera and an RGB camera so that an imaging plane of the TOF camera and an imaging plane of the RGB camera are parallel to each other or in the same plane, and the TOF camera and the RGB camera shoot the same target to form a first image and a second image respectively;
s2: acquiring a pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, wherein the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image;
s3: comparing the pixel matching degree with a preset matching degree threshold value to judge whether the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement; if the pixel matching degree is smaller than the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, and then executing step S4; if the pixel matching degree is greater than or equal to the matching degree threshold value, judging that the pixels in the first main image area and the pixels in the first sub-image area do not meet the image fusion requirement, and then performing step S2 and step S3 again;
s4: adding the pixel color information corresponding to the first sub-image area to the first main image area, and then performing steps S2 and S3 until all positions of the first image and the second image are processed.
2. The camera image fusion method according to claim 1, characterized in that the step S2 includes a pixel column difference calculating step of multiplying the baseline length of the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first main image area to obtain a pixel column difference.
3. The camera image fusion method according to claim 2, wherein the step S2 further includes an image region acquisition step of selecting an arbitrary position on the first image as the first main image region, and then acquiring the first sub-image region according to the coordinate position in the first main image region and the pixel column difference.
4. The camera image fusion method according to claim 3, wherein the step S2 further comprises a pixel gray difference calculating step of calculating a difference between a pixel value at each coordinate position in the first main image region and a pixel value at a corresponding coordinate position in the first sub image region to obtain a gray difference.
5. The camera image fusion method according to claim 3, wherein the step S2 further comprises a pixel gray difference calculating step of calculating a difference between a pixel value at each coordinate position in the first main image region and a pixel value calibrated at a corresponding coordinate position in the first sub image region to obtain a gray difference.
6. The camera image fusion method according to claim 4 or 5, wherein the step S2 further comprises a pixel gray difference summing step of adding absolute values of the gray differences to obtain the pixel matching degree.
7. The camera image fusion method according to claim 1, wherein the step S3 includes adding black pixel information to the first main image region if it is determined that the pixels in the first main image region and the pixels in the first sub image region do not meet the image fusion requirement.
8. The camera image fusion method according to claim 1, further comprising a matching degree threshold calculation step.
9. A camera image fusion system for implementing the camera image fusion method according to any one of claims 1 to 8, comprising a TOF camera, an RGB camera, an adjusting unit, a pixel matching degree calculating unit, a judging unit and a fusion unit, wherein the TOF camera and the RGB camera are used for shooting the same target to form a first image and a second image respectively, the adjusting unit is used for adjusting the positions of the TOF camera and the RGB camera so that the imaging plane of the TOF camera and the imaging plane of the RGB camera are parallel to each other or in the same plane, the pixel matching degree calculating unit is used for obtaining the pixel matching degree between a first main image area of the first image and a first sub-image area of the second image, the relative position of the first main image area on the first image is the same as the relative position of the first sub-image area on the second image, the judging unit is configured to compare the pixel matching degree with a preset matching degree threshold to judge whether pixels in the first main image region and pixels in the first sub-image region meet an image fusion requirement, judge that the pixels in the first main image region and the pixels in the first sub-image region meet the image fusion requirement if the pixel matching degree is smaller than the matching degree threshold, and judge that the pixels in the first main image region and the pixels in the first sub-image region do not meet the image fusion requirement if the pixel matching degree is greater than or equal to the matching degree threshold; the fusion unit is configured to add, when the determination unit determines that the pixels in the first main image area and the pixels in the first sub-image area meet the image fusion requirement, the color information of the pixels corresponding to the first sub-image area to the first main image area.
10. The camera image fusion system according to claim 9, wherein the pixel matching degree calculation unit comprises a pixel column difference calculation module for multiplying the baseline length between the TOF camera and the RGB camera by the focal length of the TOF camera to obtain a first process value, and then dividing the first process value by the pixel depth value of the first main image area to obtain a pixel column difference.
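Claim 10's pixel column difference is the standard stereo disparity relation for parallel cameras: disparity = (baseline × focal length) / depth. A minimal sketch, with the function and parameter names chosen here for illustration (baseline and depth must share the same unit, e.g. millimetres; focal length is in pixels):

```python
def pixel_column_difference(baseline, focal_length_px, depth):
    """Column offset (in pixels) between corresponding pixels of the two cameras."""
    first_process_value = baseline * focal_length_px  # claim 10's intermediate product
    return first_process_value / depth
```

For example, a 50 mm baseline, a 500 px focal length, and a 1000 mm pixel depth give a column difference of 25 pixels.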
11. The camera image fusion system according to claim 10, wherein the pixel matching degree calculation unit further includes an image area acquisition module, and the image area acquisition module is configured to select an arbitrary position on the first image as the first main image area, and then acquire the first sub-image area according to a coordinate position in the first main image area and the pixel column difference.
12. The camera image fusion system according to claim 11, wherein the pixel matching degree calculation unit further comprises a first pixel gray difference calculation module, and the first pixel gray difference calculation module is configured to calculate a difference between a pixel value at each coordinate position in the first main image region and a pixel value at a corresponding coordinate position in the first sub-image region to obtain a gray difference.
13. The camera image fusion system according to claim 11, wherein the pixel matching degree calculation unit further comprises a second pixel gray difference calculation module, and the second pixel gray difference calculation module is configured to calculate a difference between a pixel value at each coordinate position in the first main image region and a calibrated pixel value at a corresponding coordinate position in the first sub-image region to obtain a gray difference.
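The gray-difference computation of claims 12 and 13 can be sketched as below. The claims do not specify how per-pixel differences are aggregated into a single matching degree, so the mean used in `matching_degree` is one plausible choice, and both function names are assumptions for illustration:

```python
import numpy as np

def gray_difference(main_region, sub_region):
    # per-pixel absolute gray-value difference between corresponding
    # coordinate positions of the two regions (int32 avoids uint8 wraparound)
    return np.abs(main_region.astype(np.int32) - sub_region.astype(np.int32))

def matching_degree(main_region, sub_region):
    # one plausible aggregate: the mean of the per-pixel gray differences
    return float(gray_difference(main_region, sub_region).mean())
```

For claim 13, `sub_region` would hold calibrated pixel values rather than raw ones; the arithmetic is otherwise identical.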
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110596689.9A CN113379854B (en) | 2021-05-31 | 2021-05-31 | Camera image fusion method and camera image fusion system |
PCT/CN2022/076719 WO2022252697A1 (en) | 2021-05-31 | 2022-02-18 | Camera image fusion method and camera image fusion system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110596689.9A CN113379854B (en) | 2021-05-31 | 2021-05-31 | Camera image fusion method and camera image fusion system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379854A true CN113379854A (en) | 2021-09-10 |
CN113379854B CN113379854B (en) | 2022-12-06 |
Family
ID=77574867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110596689.9A Active CN113379854B (en) | 2021-05-31 | 2021-05-31 | Camera image fusion method and camera image fusion system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113379854B (en) |
WO (1) | WO2022252697A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022252697A1 (en) * | 2021-05-31 | 2022-12-08 | 上海集成电路制造创新中心有限公司 | Camera image fusion method and camera image fusion system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109816619A (en) * | 2019-01-28 | 2019-05-28 | 努比亚技术有限公司 | Image interfusion method, device, terminal and computer readable storage medium |
CN109905691A (en) * | 2017-12-08 | 2019-06-18 | 浙江舜宇智能光学技术有限公司 | Depth image acquisition device and depth image acquisition system and its image processing method |
US20190306489A1 (en) * | 2018-04-02 | 2019-10-03 | Mediatek Inc. | Method And Apparatus Of Depth Fusion |
CN112770100A (en) * | 2020-12-31 | 2021-05-07 | 南昌欧菲光电技术有限公司 | Image acquisition method, photographic device and computer readable storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2532003A (en) * | 2014-10-31 | 2016-05-11 | Nokia Technologies Oy | Method for alignment of low-quality noisy depth map to the high-resolution colour image |
CN106447677A (en) * | 2016-10-12 | 2017-02-22 | 广州视源电子科技股份有限公司 | Image processing method and apparatus thereof |
CN113379854B (en) * | 2021-05-31 | 2022-12-06 | 上海集成电路制造创新中心有限公司 | Camera image fusion method and camera image fusion system |
- 2021-05-31 CN CN202110596689.9A patent/CN113379854B/en active Active
- 2022-02-18 WO PCT/CN2022/076719 patent/WO2022252697A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2022252697A1 (en) | 2022-12-08 |
CN113379854B (en) | 2022-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8199202B2 (en) | Image processing device, storage medium storing image processing program, and image pickup apparatus | |
US8620070B2 (en) | Corresponding image processing method for compensating colour | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
EP1087336A2 (en) | Apparatus and method for stereoscopic image processing | |
CN112200203B (en) | Matching method of weak correlation speckle images in oblique field of view | |
CN106651897B (en) | Parallax correction method based on super-pixel segmentation | |
US20120263386A1 (en) | Apparatus and method for refining a value of a similarity measure | |
CN114693760A (en) | Image correction method, device and system and electronic equipment | |
CN208254424U (en) | A kind of laser blind hole depth detection system | |
CN115082450A (en) | Pavement crack detection method and system based on deep learning network | |
CN113379854B (en) | Camera image fusion method and camera image fusion system | |
JP6942566B2 (en) | Information processing equipment, information processing methods and computer programs | |
CN111383254A (en) | Depth information acquisition method and system and terminal equipment | |
CN112419427A (en) | Method for improving time-of-flight camera accuracy | |
US20050286059A1 (en) | Attitude and position measurement of objects using image processing processes | |
CN117152330A (en) | Point cloud 3D model mapping method and device based on deep learning | |
CN113723432B (en) | Intelligent identification and positioning tracking method and system based on deep learning | |
CN115953456A (en) | Binocular vision-based vehicle overall dimension dynamic measurement method | |
CN115546312A (en) | Method and device for correcting external parameters of camera | |
CN110766740B (en) | Real-time high-precision binocular range finding system and method based on pedestrian tracking | |
CN112464727A (en) | Self-adaptive face recognition method based on light field camera | |
CN106780324B (en) | Edge joint correction method for orthoimage mosaic | |
CN114792288B (en) | Curved screen image gray scale correction method and related device | |
Shan et al. | Research on 3D pose measurement algorithm based on binocular vision | |
CN116468799A (en) | Real-time camera attitude estimation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||