CN117455768A - Three-eye camera image stitching method - Google Patents

Three-eye camera image stitching method

Info

Publication number
CN117455768A
CN117455768A (application CN202311801023.8A)
Authority
CN
China
Prior art keywords
image
images
splicing
matching
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311801023.8A
Other languages
Chinese (zh)
Inventor
刘杰
文彪
欧阳伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Maizhe Technology Co ltd
Original Assignee
Shenzhen Maizhe Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Maizhe Technology Co ltd filed Critical Shenzhen Maizhe Technology Co ltd
Priority to CN202311801023.8A priority Critical patent/CN117455768A/en
Publication of CN117455768A publication Critical patent/CN117455768A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-eye (trinocular) camera image stitching method in the technical field of computer vision, comprising the following steps: S1, acquiring and cropping the first frame from the three-eye camera; S2, feature extraction; S3, feature registration; S4, mismatch removal; S5, mapping SIFT feature coordinates back to the original images; S6, generating stitching parameters; S7, image stitching; S8, image fusion. With the cameras in fixed positions, the method acquires video and obtains perspective transformation matrices by extracting and registering local feature points of the first frame, stitching the three camera views in real time. Stitching seams between adjacent images are reduced, brightness transitions are made uniform, and a seamless stitched image of higher resolution is produced.

Description

Three-eye camera image stitching method
Technical Field
The invention relates to the technical field of computer vision, in particular to a three-eye camera image stitching method.
Background
Image stitching combines multiple images with overlapping regions into a seamless panorama or a single high-resolution image. It involves two key stages: image registration and image fusion. The fusion stage is not especially time-consuming, and the traditional methods differ little in effect, so that stage is generally mature. Image registration is the core of the whole pipeline and directly determines the success rate and running speed of a stitching algorithm, so registration has been a research focus for many years. Current image registration algorithms fall broadly into two categories: frequency-domain methods (phase correlation) and spatial-domain methods.
Phase correlation method: the method is firstly proposed by Kuglin and Hines in 1975, and proves that under the condition of pure two-dimensional translation, the splicing precision can reach 1 pixel, and the method is widely used in the fields of registration of aerial photographs and satellite remote sensing images and the like. The method carries out fast Fourier transformation on the spliced images, transforms the two images to be registered into a frequency domain, and then directly calculates a translation vector between the two images through a cross power spectrum of the two images, thereby realizing the registration of the images. It has become one of the most promising image registration algorithms since it has the characteristics of simplicity and accuracy.
Spatial-domain methods: these divide into feature-based and region-based methods. A feature-based method first detects feature points (such as boundary points and corners) in the two images, establishes correspondences between them, and then derives the transformation between the images from those correspondences. Because it does not use the image gray levels directly, it is insensitive to illumination changes, but it depends heavily on the accuracy of the feature correspondences. The idea is intuitive, and most image registration algorithms fall into this category. Matching algorithms split into two families: direct methods and search methods. Direct methods are mainly transformation-optimization methods: a transformation model between the two images to be stitched is established first, and its parameters are computed by nonlinear iterative minimization, which determines the registration position. These algorithms are effective and converge quickly, but they need a good initial estimate to satisfy the convergence requirements; a poor initial estimate can cause stitching to fail. Search methods take some feature in one image as a reference and search for the best registration position in the other; common variants are ratio matching, block matching, and grid matching. Ratio matching takes some of the pixels in two adjacent columns of one image's overlap region and uses their ratios as a template to search for the best match in the other image.
Its computational cost is small, but its accuracy is low. Block matching takes a block in the overlap region of one image as a template and searches the other image for the most similar block; it is more accurate but computationally heavy. Grid matching reduces the cost of block matching: a coarse match first moves one step horizontally or vertically at a time and records the best position, then a fine match around that position halves the step each iteration, repeating until the step reaches 0. This reduces the computation relative to the other two methods, but the cost is still large in practice, and if the coarse step is too large, a large coarse-matching error can make the fine match impossible.
Phase correlation generally requires a fairly large overlap (typically about 50% between the images to be registered); with a small overlap the translation vector is easily misestimated and registration becomes difficult. Region-based methods take a block in the overlap region of one image as a template and search the other image for the most similar block; they are accurate but computationally expensive and slow. In existing stitching technology, feature point extraction and matching is another module on which successful stitching depends. After feature points are extracted from two images of the same target, a large number of very similar points are obtained that cannot be put into one-to-one correspondence: pseudo-matches exist, i.e. some feature points present in one image have no counterpart in the other. A suitable search strategy and similarity measure must therefore be chosen during feature matching to ensure the efficiency and accuracy of feature point matching, and even after registration is complete, discontinuities in intensity or color across the image overlap leave visible seams in the stitched result. For these reasons, we propose a three-eye camera image stitching method to solve the above problems.
Disclosure of Invention
This section outlines some aspects of embodiments of the invention and briefly introduces some preferred embodiments. Some simplifications or omissions may be made here, in the description, and in the title; they are not intended to limit the scope of the invention.
The invention therefore aims to provide a three-eye camera image stitching method that addresses the problems of existing stitching: after feature extraction, points cannot be put into one-to-one correspondence and pseudo-matches remain (feature points present in one image but absent from the matched image), so that after registration is complete, discontinuities in intensity or color across the overlap leave visible seams in the stitched images.
To solve these technical problems, the invention adopts the following scheme: a three-eye camera image stitching method comprising the steps of,
S1, acquiring and cropping the first frame from the three-eye camera, wherein the images from the two side cameras are cropped to the 1/3 adjacent to the middle camera's view, and the middle camera uses the complete image;
S2, feature extraction, wherein SIFT features are extracted from each of the three images of S1 using the OpenCV library;
S3, feature registration, wherein the SIFT features of the middle image are registered against the left and right images using the BBF (best-bin-first) algorithm;
S4, mismatch removal, wherein the RANSAC algorithm is first used to remove mismatches, and abnormal matches are then removed using the distance relations between the remaining matched points;
S5, mapping the SIFT feature coordinates back to the images cropped in S1, restoring the feature point coordinates of S4 to coordinates in the original images according to the cropping parameters of S1;
S6, generating stitching parameters, computing the perspective transformation matrices between the left and right images and the middle image from the feature point coordinates of S5;
S7, image stitching, wherein the left and right camera images are each stitched to the middle camera image with the OpenCV library according to the perspective transformation matrices computed in S6;
S8, image fusion, wherein overlapping pixels are fused using a weighted average of color values over the seam region.
Optionally, removing abnormal matches by the distance relation in S4 comprises computing the distances between matched points within each image, computing the scale ratio between the images, computing in a loop the distance from each matched point to the other matched points in the same image, and removing any matched point whose inter-point distances are consistent with the scale ratio in fewer than 30% of cases across the two images.
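The distance-consistency check can be sketched as below. This is one reading of the description: the consistency tolerance and the treatment of the 30% threshold are interpretations, and the function name is illustrative.

```python
import numpy as np

def filter_by_distance_consistency(pts_a, pts_b, scale, tol=0.1, min_ratio=0.3):
    """Keep match i only if the distances from point i to the other matched
    points scale between the two images consistently with `scale` in at
    least `min_ratio` (30%) of pairings; otherwise discard it."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    keep = []
    for i in range(len(pts_a)):
        da = np.linalg.norm(pts_a - pts_a[i], axis=1)   # distances in image A
        db = np.linalg.norm(pts_b - pts_b[i], axis=1)   # distances in image B
        mask = da > 1e-9                                # skip the self-pairing
        consistent = np.abs(db[mask] / da[mask] - scale) < tol * scale
        if consistent.mean() >= min_ratio:
            keep.append(i)
    return keep
```

A geometric check of this kind is cheap (O(n²) on an already RANSAC-filtered set) and catches residual outliers whose descriptors matched but whose positions are inconsistent with the global scale.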
Optionally, in S6 the translation (Δx, Δy) between the images is computed from the filtered matched points, and the transformation matrix of the stitched image is computed with OpenCV's built-in findHomography function.
Optionally, in S7 the left and right images are warped with OpenCV's warpPerspective function according to their transformation matrices, and are then stitched to the middle image using the computed translation.
In summary, the present invention includes at least one of the following beneficial effects:
the method uses the SIFT algorithm for feature extraction from the video frames; after feature points are extracted, the BBF algorithm performs feature registration, the RANSAC algorithm removes mismatches, and the consistency of distances between matched feature points removes further mismatches, after which the perspective transformation matrices between the images are computed. This meets the real-time requirements of video stitching and improves on SIFT-based frame stitching and video acquisition: with the cameras in fixed positions, the perspective transformation matrices are obtained by extracting and registering local feature points of the first frame only, the three camera views are stitched in real time, stitching seams between adjacent images are reduced, brightness transitions are made uniform, and a seamless stitched image of high resolution is formed.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for their description are briefly introduced below. The drawings show only some embodiments of the invention; a person skilled in the art could derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of an algorithm of the present invention.
Detailed Description
The present invention will be described in further detail with reference to fig. 1.
Referring to fig. 1, the invention discloses a three-eye camera image stitching method comprising the steps of,
S1, acquiring and cropping the first frame from the three-eye camera, wherein the images from the two side cameras are cropped to the 1/3 adjacent to the middle camera's view, and the middle camera uses the complete image;
S2, feature extraction, wherein SIFT features are extracted from each of the three images of S1 using the OpenCV library;
S3, feature registration, wherein the SIFT features of the middle image are registered against the left and right images using the BBF algorithm;
S4, mismatch removal, wherein the RANSAC algorithm first removes mismatches and abnormal matches are then removed using the distance relations between the remaining matched points; this comprises computing the distances between matched points within each image, computing the scale ratio between the images, computing in a loop the distance from each matched point to the other matched points in the same image, and removing any matched point whose inter-point distances are consistent with the scale ratio in fewer than 30% of cases across the two images;
S5, mapping the SIFT feature coordinates back to the images cropped in S1, restoring the feature point coordinates of S4 to coordinates in the original images according to the cropping parameters of S1;
S6, generating stitching parameters: the perspective transformation matrices between the left and right images and the middle image are computed from the feature point coordinates of S5, the translation (Δx, Δy) between the images is computed from the filtered matched points, and the transformation matrix of the stitched image is computed with OpenCV's built-in findHomography function;
S7, image stitching: according to the perspective transformation matrices computed in S6, the left and right images are each warped with OpenCV's warpPerspective function and then stitched to the middle camera image using the computed translation;
S8, image fusion, wherein overlapping pixels are fused using a weighted average of color values over the seam region.
Whether to continue stitching is then decided according to external conditions, with external intervention possible. In general, before large-scale use, the stitching parameters are computed once — the stage in fig. 1 before the decision whether to end stitching — and subsequent frames are then stitched in a loop using those parameters.
The above embodiments do not limit the scope of the present invention: all equivalent changes in structure, shape, and principle of the invention fall within its scope of protection.

Claims (4)

1. A three-eye camera image stitching method, characterized in that it comprises the steps of,
S1, acquiring and cropping the first frame from the three-eye camera, wherein the images from the two side cameras are cropped to the 1/3 adjacent to the middle camera's view, and the middle camera uses the complete image;
S2, feature extraction, wherein SIFT features are extracted from each of the three images of S1 using the OpenCV library;
S3, feature registration, wherein the SIFT features of the middle image are registered against the left and right images using the BBF algorithm;
S4, mismatch removal, wherein the RANSAC algorithm is first used to remove mismatches, and abnormal matches are then removed using the distance relations between the remaining matched points;
S5, mapping the SIFT feature coordinates back to the images cropped in S1, restoring the feature point coordinates of S4 to coordinates in the original images according to the cropping parameters of S1;
S6, generating stitching parameters, computing the perspective transformation matrices between the left and right images and the middle image from the feature point coordinates of S5;
S7, image stitching, wherein the left and right camera images are each stitched to the middle camera image with the OpenCV library according to the perspective transformation matrices computed in S6;
S8, image fusion, wherein overlapping pixels are fused using a weighted average of color values over the seam region.
2. The three-eye camera image stitching method according to claim 1, characterized in that: removing abnormal matches by the distance relation in S4 comprises computing the distances between matched points within each image, computing the scale ratio between the images, computing in a loop the distance from each matched point to the other matched points in the same image, and removing any matched point whose inter-point distances are consistent with the scale ratio in fewer than 30% of cases across the two images.
3. The three-eye camera image stitching method according to claim 1, characterized in that: in S6 the translation (Δx, Δy) between the images is computed from the filtered matched points, and the transformation matrix of the stitched image is computed with OpenCV's findHomography function.
4. The three-eye camera image stitching method according to claim 1, characterized in that: in S7 the left and right images are warped with OpenCV's warpPerspective function according to their transformation matrices, and are then stitched to the middle image using the computed translation.
CN202311801023.8A 2023-12-26 2023-12-26 Three-eye camera image stitching method Pending CN117455768A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311801023.8A CN117455768A (en) 2023-12-26 2023-12-26 Three-eye camera image stitching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311801023.8A CN117455768A (en) 2023-12-26 2023-12-26 Three-eye camera image stitching method

Publications (1)

Publication Number Publication Date
CN117455768A true CN117455768A (en) 2024-01-26

Family

ID=89595210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311801023.8A Pending CN117455768A (en) 2023-12-26 2023-12-26 Three-eye camera image stitching method

Country Status (1)

Country Link
CN (1) CN117455768A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104535047A (en) * 2014-09-19 2015-04-22 燕山大学 Multi-agent target tracking global positioning system and method based on video stitching
CN107918927A (en) * 2017-11-30 2018-04-17 武汉理工大学 A kind of matching strategy fusion and the fast image splicing method of low error
CN111047510A (en) * 2019-12-17 2020-04-21 大连理工大学 Large-field-angle image real-time splicing method based on calibration
CN115546021A (en) * 2022-05-13 2022-12-30 冶金自动化研究设计院有限公司 Multi-camera image splicing method applied to cold bed shunting scene detection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104535047A (en) * 2014-09-19 2015-04-22 燕山大学 Multi-agent target tracking global positioning system and method based on video stitching
CN107918927A (en) * 2017-11-30 2018-04-17 武汉理工大学 A kind of matching strategy fusion and the fast image splicing method of low error
CN111047510A (en) * 2019-12-17 2020-04-21 大连理工大学 Large-field-angle image real-time splicing method based on calibration
CN115546021A (en) * 2022-05-13 2022-12-30 冶金自动化研究设计院有限公司 Multi-camera image splicing method applied to cold bed shunting scene detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾新贵 et al., "Image registration algorithm based on the Harris operator and orientation field", Computer Applications (计算机应用), 31 December 2016 (2016-12-31), pages 146-148 *

Similar Documents

Publication Publication Date Title
CN111968129A (en) Instant positioning and map construction system and method with semantic perception
CN110689008A (en) Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN111079545A (en) Three-dimensional target detection method and system based on image restoration
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN111553939B (en) Image registration algorithm of multi-view camera
US8867826B2 (en) Disparity estimation for misaligned stereo image pairs
CN110516639B (en) Real-time figure three-dimensional position calculation method based on video stream natural scene
CN113658337A (en) Multi-mode odometer method based on rut lines
CN113538569A (en) Weak texture object pose estimation method and system
CN113255449A (en) Real-time matching method of binocular video images
KR20050063991A (en) Image matching method and apparatus using image pyramid
CN111047513B (en) Robust image alignment method and device for cylindrical panorama stitching
CN117455768A (en) Three-eye camera image stitching method
CN113674407B (en) Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image
CN112700504B (en) Parallax measurement method of multi-view telecentric camera
CN115456870A (en) Multi-image splicing method based on external parameter estimation
Kitt et al. Trinocular optical flow estimation for intelligent vehicle applications
CN110059651B (en) Real-time tracking and registering method for camera
CN109242910B (en) Monocular camera self-calibration method based on any known plane shape
Lin et al. An Improved ICP with Heuristic Initial Pose for Point Cloud Alignment
Bai Overview of image mosaic technology by computer vision and digital image processing
CN113487487B (en) Super-resolution reconstruction method and system for heterogeneous stereo image
Gao et al. MC-NeRF: Muti-Camera Neural Radiance Fields for Muti-Camera Image Acquisition Systems
Zheng et al. Study of binocular parallax estimation algorithms with different focal lengths

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination