CN110223222B - Image stitching method, image stitching device, and computer-readable storage medium - Google Patents

Publication number
CN110223222B
CN110223222B (application number CN201810175736.0A)
Authority
CN
China
Prior art keywords
image
grid
feature point
homography matrix
stitching
Prior art date
Legal status
Active
Application number
CN201810175736.0A
Other languages
Chinese (zh)
Other versions
CN110223222A (en)
Inventor
王艺伟
刘丽艳
王炜
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Filing date
Publication date
Application filed by Ricoh Co Ltd
Priority to CN201810175736.0A
Publication of CN110223222A
Application granted
Publication of CN110223222B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image stitching method, an image stitching device, and a computer-readable storage medium, where the images to be stitched include at least a first image and a second image. The image stitching method comprises the following steps: performing feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs, each feature point matching pair comprising a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions and calculating a first homography matrix for each first stitching region from the feature point matching pairs; dividing the first image into a plurality of first grids and calculating a first grid homography matrix for each first grid of the first image from at least one of the first homography matrices of the at least two first stitching regions; and performing a coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the result with the second image to form a stitched image.

Description

Image stitching method, image stitching device, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image stitching method, an image stitching device, and a computer readable storage medium.
Background
Image stitching is the technique of combining multiple images with overlapping regions (captured at different times, from different viewpoints, or by different sensors) into a single large, seamless, high-resolution image, and it is an active research topic in graphics and machine vision.
In the prior art, image stitching algorithms often solve the stitching problem by means of grid optimization, and may also apply processing such as shape preservation to obtain a more natural stitching result. Specifically, existing image stitching techniques generally first perform feature point detection and matching, then estimate a homography matrix from the matching result, and finally complete the stitching of the gridded image according to that homography matrix. However, the way the homography matrix is estimated often limits the accuracy of the final stitched image and degrades the stitching effect.
Therefore, an image stitching method that can further improve stitching accuracy is needed.
Disclosure of Invention
In order to solve the above technical problem, according to one aspect of the present invention, there is provided an image stitching method, where the images include at least a first image and a second image, the method comprising: performing feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs, each feature point matching pair comprising a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions, and calculating a first homography matrix for each first stitching region from the feature point matching pairs; dividing the first image into a plurality of first grids, and calculating a first grid homography matrix for each first grid of the first image from at least one of the first homography matrices of the at least two first stitching regions; and performing a coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the result with the second image to form a stitched image.
According to another aspect of the present invention, there is provided an image stitching apparatus, where the images include at least a first image and a second image, the apparatus comprising: a matching unit configured to perform feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs, each comprising a first feature point of the first image and a second feature point of the second image; a matrix calculation unit configured to divide the first image into at least two first stitching regions and calculate a first homography matrix for each first stitching region from the feature point matching pairs; a grid dividing unit configured to divide the first image into a plurality of first grids and calculate a first grid homography matrix for each first grid of the first image from at least one of the first homography matrices of the at least two first stitching regions; and a coordinate transformation unit configured to perform a coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix and combine the result with the second image to form a stitched image.
According to another aspect of the present invention, there is provided an image stitching apparatus, where the images include at least a first image and a second image, the apparatus comprising a processor and a memory in which computer program instructions are stored, the computer program instructions, when executed by the processor, causing the processor to perform the steps of: performing feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs, each comprising a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions, and calculating a first homography matrix for each first stitching region from the feature point matching pairs; dividing the first image into a plurality of first grids, and calculating a first grid homography matrix for each first grid of the first image from at least one of the first homography matrices of the at least two first stitching regions; and performing a coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the result with the second image to form a stitched image.
According to another aspect of the present invention, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, perform the following image stitching steps, where the images include at least a first image and a second image: performing feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs, each comprising a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions, and calculating a first homography matrix for each first stitching region from the feature point matching pairs; dividing the first image into a plurality of first grids, and calculating a first grid homography matrix for each first grid of the first image from at least one of the first homography matrices of the at least two first stitching regions; and performing a coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the result with the second image to form a stitched image.
With the image stitching method, image stitching device, and computer-readable storage medium described above, different first homography matrices are obtained for the first stitching regions into which the first image is divided, and the gridded first image is coordinate-transformed and stitched according to these different matrices, which improves the accuracy of the resulting stitched image and the stitching effect.
Drawings
The above and other objects, features, and advantages of the present invention will become more apparent by describing in detail embodiments thereof with reference to the attached drawings.
FIG. 1 shows a flow chart of a method of image stitching according to one embodiment of the invention;
FIG. 2(a) shows a schematic diagram of a first image according to an embodiment of the invention; FIG. 2(b) shows a schematic diagram of a second image; FIG. 2(c) shows a schematic diagram of the stitched image obtained by stitching the first image and the second image;
FIG. 3 shows a schematic diagram of feature point matching pairs according to one embodiment of the invention;
FIG. 4 illustrates first feature points in a first image and the corresponding number distribution histogram along the stitching edge direction, according to one embodiment of the present invention;
FIG. 5 illustrates a schematic division of a first stitching region in accordance with an embodiment of the present invention;
FIG. 6 shows a schematic diagram of stitched images of a first image and a second image implementing an image stitching method according to one embodiment of the invention;
FIG. 7 illustrates a schematic diagram of a stitched image after alignment correction according to one embodiment of the invention;
FIG. 8 shows a block diagram of an image stitching device according to one embodiment of the present invention;
FIG. 9 shows a block diagram of an image stitching device according to an embodiment of the invention.
Detailed Description
An image stitching method, an image stitching apparatus, and a computer-readable storage medium according to embodiments of the present invention will be described below with reference to the accompanying drawings, in which like reference numerals refer to like elements throughout. It should be understood that the embodiments described herein are merely illustrative and should not be construed as limiting the scope of the invention.
In the embodiments of the invention, to address the insufficient stitching accuracy caused by the prior-art way of estimating a single homography matrix, the image to be stitched is divided into regions during stitching, a separate homography matrix is obtained for each region, and coordinate transformation and stitching are performed according to these matrices, thereby improving stitching accuracy and the stitching effect.
An image stitching method according to an embodiment of the present invention will now be described with reference to fig. 1, which shows a flow chart of the image stitching method 100. The images to be stitched include at least a first image and a second image.
As shown in fig. 1, in step S101, a first image to be stitched and a second image are subjected to feature point detection and matching, and a plurality of feature point matching pairs are obtained, where each feature point matching pair includes a first feature point of the first image and a second feature point of the second image.
In an embodiment of the present invention, the first image and the second image are to be stitched into a stitched image covering a larger area. The two images share an overlapping portion, and the positions of that portion within each image are not limited here. For example, fig. 2(a) shows a schematic view of a first image according to an example of the present invention, fig. 2(b) shows a second image, and fig. 2(c) shows the stitched image obtained by stitching them. In the example of figs. 2(a)-2(c), the overlapping portions lie on the right side of the first image and the left side of the second image, so the left half of the stitched image derives substantially from the first image and the right half from the second image, yielding an image of larger extent. In another example, the overlapping portion may lie on the lower side of the first image and the upper side of the second image, in which case the two images are stitched vertically, with the upper half of the stitched image derived substantially from the first image and the lower half from the second image. In yet another example, the overlapping portions may lie anywhere in the two images and be rotated relative to each other, in which case at least one of the images is rotated by some angle to form the stitched image.
The above descriptions are only examples, and in practical application, any overlapping manner of the first image and the second image may be adopted.
In this step, feature point detection and matching are performed on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs. Specifically, the first image and the second image are first acquired. In one example of the present invention, they may be captured by a photographing unit mounted on a carrier (for example a mobile robot, a smart car, or an unmanned aerial vehicle); the photographing unit may be a monocular camera or video camera, or of course a binocular or multi-view one, without limitation. The first image and the second image may be acquired at different times, at different positions, or within different viewing angle ranges, as long as they share an overlapping portion.
After the first image and the second image are acquired, feature point detection can be performed on each of them using a preset detection method. In the embodiments of the present invention, the preset detection method may be any of various feature point detectors, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), Harris corners, or ORB (Oriented FAST and Rotated BRIEF). After detection, the feature points of the two images may optionally be described, for example using gray-scale features, gradient features, disparity information, or other feature description methods.
Finally, the feature points detected in the first image and those detected in the second image can be matched to obtain a plurality of feature point matching pairs. Optionally, grid-based motion statistics (GMS) may be used for the matching. During matching, incorrect, unmatchable feature points can be identified and eliminated, leaving only correct feature points and their matching pairs, which improves matching stability. After matching, each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image. The first feature points come from the feature points previously detected in the first image and may be all or only part of them; likewise, the second feature points may be all or only part of the feature points detected in the second image. The first feature points and the second feature points correspond one to one, each correspondence forming a feature point matching pair. In one example of the embodiment, four or more matching pairs may be acquired. Fig. 3 is a schematic diagram of feature point matching pairs obtained by detecting feature points with ORB and matching them with GMS, with the first image on the left and the second image on the right. The combination of ORB detection and GMS matching is more robust to image rotation and scale change, reducing matching errors.
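The detection-and-matching step can be illustrated with a small sketch. The patent's example pipeline uses ORB detection and GMS matching; as a simplified stand-in, the following pure-numpy code shows descriptor matching with a cross-check (mutual nearest neighbours) over binary descriptors, which rejects unmatchable points much as the text describes. The function name and the toy descriptors are illustrative assumptions, not code from the patent.

```python
import numpy as np

def match_descriptors(desc1, desc2):
    """Brute-force Hamming matching of binary descriptors with a
    cross-check: keep only mutual nearest neighbours."""
    # Hamming distance between every descriptor pair (rows = points).
    d = (desc1[:, None, :] != desc2[None, :, :]).sum(axis=2)
    nn12 = d.argmin(axis=1)  # best match in image 2 for each point of image 1
    nn21 = d.argmin(axis=0)  # best match in image 1 for each point of image 2
    # A pair (i, j) survives only if i and j choose each other.
    return [(i, int(j)) for i, j in enumerate(nn12) if nn21[j] == i]

# Toy 32-bit binary descriptors; image 2 re-observes points 2, 0 and 4.
rng = np.random.default_rng(0)
desc_a = rng.integers(0, 2, size=(5, 32))
desc_b = desc_a[[2, 0, 4]]
pairs = match_descriptors(desc_a, desc_b)
```

With this seed the surviving pairs are (0, 1), (2, 0), and (4, 2): points 1 and 3 of the first image have no true counterpart and are discarded by the cross-check, in the same spirit as the elimination of wrong matches above.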
In step S102, the first image is divided into at least two first stitching regions, and a first homography matrix is calculated for each first stitching region from the feature point matching pairs.
In this step, the first image may optionally be divided into at least two first stitching regions according to at least part of the first feature points in the first image. The division may be based on grouping and clustering some or all of the first feature points.
In one example, the first feature points may be grouped according to their number distribution along the stitching edge direction and then clustered with a clustering algorithm. Specifically, a number distribution histogram of at least part of the first feature points along the stitching edge direction of the first image may first be calculated. Fig. 4 shows first feature points in a first image according to an embodiment of the invention and the corresponding number distribution histogram along the stitching edge direction; the first image in fig. 4 corresponds to the first image on the left of fig. 3. As can be seen in fig. 4, the upper half of the first image has a relatively large number of first feature points, while the lower half has fewer. The number of first stitching regions may then be determined from this histogram. Specifically, as shown in fig. 4, the histogram may be fitted, for example by Gaussian fitting, and parameters such as the derivative, second derivative, peak value, or half-width at each point on the fitted curve may be calculated; the number of first stitching regions to divide, i.e. the number of first feature point groups, is determined from these parameters. For example, when grouping the first feature points according to the Gaussian fitting curve, a value k may be preset, for example to 1, and changes to k may then be decided according to the proportion of first feature points in different regions and/or the distance between adjacent peaks on the curve. As can be seen in fig. 4, the Gaussian fitting curve has several peaks and several valleys.
For the Gaussian fitting curve shown in the figure, the distance between adjacent peaks can be considered: when this distance exceeds a preset threshold and the proportion of first feature points in each region divided at that distance also meets a preset condition, k is increased by 1. In fig. 4, distance A and the other inter-peak distances do not exceed the preset threshold, while distance B does. On this basis, two feature point grouping regions can be divided at the valley between the two peaks at the ends of distance B. It is then judged whether the proportion of first feature points in each of the two regions meets the preset condition. For example, if the proportion in each region exceeds 10% of the total number of first feature points participating in the statistics, k is increased by 1 to become 2. That is, the number of first feature point groups is 2, and the first image is correspondingly divided into 2 first stitching regions. Once the number of first stitching regions is determined, the dividing boundary between them may be further determined from the regions of the feature point groups, and the first image partitioned into the first stitching regions according to the determined number and boundary. For example, following the region division on the histogram described above, the first feature points may be clustered into 2 groups with a clustering algorithm (for example the k-means algorithm with k = 2), and the corresponding upper and lower first stitching regions divided according to the clustered first feature points, as shown in fig. 5. Fig. 5 illustrates the division of the first stitching regions in the example of fig. 4 according to one embodiment of the present invention. Each first stitching region may include all or a majority (for example a predetermined proportion, such as 90%) of the corresponding group of first feature points. Optionally, during clustering, the data center of each group of first feature points may also be obtained.
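The grouping just described can be sketched as follows. This is not the patent's histogram-plus-Gaussian-fitting procedure; it is a minimal 1-D k-means (k = 2) over the feature point coordinates along the stitching edge, which yields the same kind of output: a group label per feature point and a data center per group. All names and the toy coordinates are illustrative assumptions.

```python
import numpy as np

def split_into_groups(ys, k=2, iters=20, seed=0):
    """1-D k-means over feature point coordinates along the stitching
    edge; returns a group label per point and each group's data center."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(ys, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each coordinate to its nearest center, then update.
        labels = np.abs(ys[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([ys[labels == c].mean() if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers

# Toy coordinates: a dense group near the top, a sparser one below.
ys = np.array([10, 12, 15, 18, 20, 22, 180, 190, 200], dtype=float)
labels, centers = split_into_groups(ys)
```

The two data centers converge to roughly 16.2 and 190, matching the visual split in the histogram example; these data centers are the quantities later used for the distance-based weighting of the grid homographies.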
The feature point grouping/clustering and region division shown in figs. 4 and 5 are only examples; any other grouping and division scheme may be adopted in practical applications of the embodiments. For example, in another example, the first stitching regions may also be divided according to the texture, color, shading, and/or spatial distribution of objects in the first image. In addition, the first feature points used for the division may be all or only part of the first feature points in the first image, and correspondingly the divided first stitching regions may occupy all of the first image or only part of it. Alternatively, the first feature points and the divided first stitching regions may be located in the portion of the first image that overlaps the second image. Other numbers of feature point groups and stitching regions are also possible, without limitation; for example, in yet another example, the first feature points may be divided into three groups (upper, middle, and lower) with three corresponding first stitching regions. The boundary between two adjacent first stitching regions also need not be parallel to the stitching edge direction of the first image.
After the first image is divided into the first stitching regions, a first homography matrix may be calculated for each region from the feature point matching pairs. For example, for the upper and lower first stitching regions shown in fig. 5, the corresponding upper and lower first homography matrices, say H_top and H_bottom, may be calculated. In the calculation, the homography matrix of each first stitching region may optionally be estimated using, for example, random sample consensus (RANSAC). Estimating separate homography matrices for the sub-regions of the first image avoids a problem of the prior art, in which a single homography matrix tends to be estimated from the relatively dense data while other valid but sparse data are ignored; this improves the accuracy and validity of the homography calculation and hence the stitching accuracy. Of course, how the first homography matrices are calculated depends on the number and division of the first stitching regions; for example, when there are three regions (upper, middle, and lower) along the stitching edge direction of the first image, the first homography matrices may include H_top, H_mid, and H_bottom. No limitation is intended here.
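To make the per-region estimation concrete, the following sketch implements the standard direct linear transform (DLT) for a homography from at least four matched pairs; in the pipeline described above, such an estimate would typically be wrapped in a RANSAC loop and run once per first stitching region. This is a textbook method offered as an illustration, not code from the patent, and the test data are synthetic.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT): least-squares homography mapping
    src -> dst from at least four matched point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The homography is the right singular vector of A with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so that H[2, 2] == 1

# Synthetic check: project region points through a known homography
# and verify that it is recovered from the matches alone.
H_true = np.array([[1.0, 0.1, 5.0],
                   [0.0, 1.2, -3.0],
                   [0.001, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 60]], float)
h = np.c_[src, np.ones(len(src))] @ H_true.T
dst = h[:, :2] / h[:, 2:]
H_est = estimate_homography(src, dst)
```

On noiseless matches, as here, the DLT recovers the homography exactly up to numerical precision; RANSAC adds robustness when some matching pairs are outliers.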
In step S103, the first image is divided into a plurality of first grids, and a first grid homography matrix of each first grid of the first image is calculated according to at least one of the first homography matrices of the at least two first stitching regions.
In this step, the first image is first divided into a plurality of first grids; specifically, the first image may be gridded using various image gridding methods, according to edge features, feature points, or other information of the image. A first grid homography matrix may then be calculated for each first grid from the first homography matrices of the first stitching regions. Optionally, when calculating the first grid homography matrix of one first grid, the distances between that grid and each first stitching region may be computed first, for example the distance along the stitching edge direction between the centroid of the grid and the data center of the first feature point group of each region. A weight for the first homography matrix of each region may then be derived from these distances (for example using an alpha weight distribution method), and the first grid homography matrix of the grid computed from the weights and the first homography matrices. In this way, a corresponding first grid homography matrix can be obtained for every first grid partitioned in the first image.
For example, in the examples of figs. 4 and 5, the first homography matrices H_top and H_bottom of the upper and lower first stitching regions may be used to calculate each first grid homography matrix. Specifically, the distances in the vertical direction between the centroid of a first grid and the data centers of the first feature point groups of the upper and lower stitching regions, say d_top and d_bottom, are calculated first. The weights W_top and W_bottom of the first homography matrices H_top and H_bottom for that grid are then determined from these distances, and the first grid homography matrix is computed as H = W_top x H_top + W_bottom x H_bottom. When determining the weights W_top and W_bottom, in one example: when the centroid of the first grid lies above the data center C_top of the upper first stitching region, W_top may be 1 and W_bottom may be 0; when the centroid lies below the data center C_bottom of the lower first stitching region, W_top may be 0 and W_bottom may be 1; otherwise, W_bottom = d_top / (d_top + d_bottom) and W_top = 1 - W_bottom. In another example, a positive margin θ may be used: when the centroid lies above (C_top + θ), W_top may be 1 and W_bottom may be 0; when the centroid lies below (C_bottom - θ), W_top may be 0 and W_bottom may be 1; otherwise, W_bottom = d_top / (d_top + d_bottom) and W_top = 1 - W_bottom.
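The weighting rule described above can be sketched as follows, under the stated assumption that the coordinate runs along the stitching edge (image y-axis, growing downwards, so "above" means a smaller coordinate). The function name and arguments are illustrative.

```python
import numpy as np

def grid_homography(cy, c_top, c_bottom, H_top, H_bottom):
    """Blend the upper/lower region homographies for one grid cell whose
    centroid lies at coordinate cy along the stitching edge direction."""
    if cy <= c_top:            # centroid above the upper data center
        w_top, w_bottom = 1.0, 0.0
    elif cy >= c_bottom:       # centroid below the lower data center
        w_top, w_bottom = 0.0, 1.0
    else:                      # in between: linear alpha blend
        d_top, d_bottom = cy - c_top, c_bottom - cy
        w_bottom = d_top / (d_top + d_bottom)
        w_top = 1.0 - w_bottom
    return w_top * H_top + w_bottom * H_bottom

# Halfway between the two data centers, both homographies
# contribute equally.
H_top, H_bottom = np.eye(3), 2.0 * np.eye(3)
H_mid = grid_homography(150.0, 100.0, 200.0, H_top, H_bottom)
```

Here H_mid equals 1.5 times the identity; the margin-θ variant simply shifts the two boundary tests by θ.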
The above calculation manner of the first grid homography matrix is merely an example, and in practical application of the embodiment of the present invention, any calculation and weight expression manner of the homography matrix may be adopted to determine the first grid homography matrix in consideration of specific application situations, which is not limited herein.
In step S104, each first grid in the first image is subjected to coordinate transformation according to its corresponding first grid homography matrix, and is combined with the second image to form a stitched image.
In this step, each first grid may be subjected to coordinate transformation using its corresponding first grid homography matrix and stitched with the second image (either the original image or a corresponding deformation of it), so that the overlapping portions of the first image and the second image coincide with each other to form a stitched image. Specifically, one or more corner points of each first grid may be multiplied by the first grid homography matrix used for coordinate transformation, to obtain the coordinates of those corner points in the stitched image; then, on the basis of the calculated corner-point coordinates, the corresponding coordinates in the stitched image of the other pixel points in the first grid may be obtained using an interpolation algorithm; finally, the stitched image is filled in according to the coordinate correspondence of each pixel point, so as to be stitched with the original image or the corresponding deformation of the second image.
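A minimal sketch of the corner-transform step, assuming a 3 × 3 grid homography H and the corner coordinates as an N × 2 array (the interior pixels would then be filled by interpolating between the mapped corners):

```python
import numpy as np

def warp_corners(corners, H):
    # Map N x 2 grid corner points into stitched-image coordinates.
    pts = np.hstack([corners, np.ones((len(corners), 1))])  # to homogeneous
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]                   # back to Cartesian
```

The division by the third homogeneous coordinate is what distinguishes a general homography from an affine map.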
In the embodiment of the invention, the first grids divided in the first image may occupy all or only a part of the first image. When the first grids occupy a part of the first image, the other parts of the first image may be grid-divided and coordinate-transformed using other matrix transformation modes, such as similarity transformation, transition transformation and the like, and filled into the stitched image.
In one example of an embodiment of the present invention, the original image of the second image may be padded into the stitched image to stitch with the coordinate transformed first image. In this example, each pixel or each grid in the second image may also be considered to be multiplied by one identity matrix, respectively, and filled into the stitched image.
In another example of the embodiment of the present invention, homography matrix calculation and coordinate transformation similar to those applied to the first image may also be performed on the second image, and the coordinate-transformed second image filled into the stitched image, so as to obtain a more accurate stitching effect. The image stitching method of the embodiment of the invention then further comprises the following steps: dividing the second image into at least two second stitching regions, and calculating a second homography matrix of each second stitching region according to the feature point matching; and dividing the second image into a plurality of second grids, and calculating a second grid homography matrix of each second grid of the second image according to at least one of the second homography matrices of the at least two second stitching regions. Optionally, the second image may be divided into at least two second stitching regions according to the relationship between the first stitching regions divided from the first image and the feature point matching pairs. Specifically, after the first stitching regions divided from the first image are obtained, a grouping of the second feature points in the second image corresponding to each group of first feature points may be obtained from the groups of first feature points included in the first stitching regions, combined with the feature point matching pairs obtained in step S101, and the dividing boundary of the second stitching regions may be determined according to the grouped feature points in the second image. The second image may then be partitioned into a plurality of second stitching regions according to the determined groupings of second feature points and the dividing boundary.
One or more of the second stitching regions may include all or a majority (e.g., a predetermined proportion, such as 90%) of the corresponding group of second feature points. In one example, the number of second stitching regions in the second image may be the same as the number of first stitching regions, and the second feature points included in each second stitching region may correspond substantially one-to-one to the first feature points in the corresponding first stitching region.
The above-mentioned dividing manner of the second stitching regions is only an example, and any other feature point grouping and region dividing manner may be adopted in practical applications of the embodiment of the present invention. The division of the second stitching regions of the second image may also be performed independently, unaffected by the division result of the first stitching regions; a specific division manner may be similar to that of the first stitching regions and is not described here again. For example, in another example, the second stitching regions in the second image may also be partitioned according to texture, color, shading, and/or the spatial distribution of objects, etc., of the second image. In addition, the second feature points used for dividing the second stitching regions may be all or only a part of the second feature points in the second image; accordingly, the divided second stitching regions may occupy all of the second image or only a part of it. Alternatively, the second stitching regions may be located at the portion of the second image overlapping the first image. In addition, in practical applications of the embodiment of the present invention, other numbers of second stitching regions may be divided, which is not limited herein. Alternatively, the boundary between two adjacent second stitching regions may not be parallel to the stitching edge direction of the second image.
After the second image is divided into the plurality of second stitching regions, a second homography matrix may be calculated for each second stitching region based on the feature point matching. For example, when there are two second stitching regions, the corresponding two second homography matrices, such as H'_top and H'_bottom, can be calculated. In the calculation, optionally, the homography matrix for each second stitching region may be estimated using, for example, random sample consensus (Random Sample Consensus, RANSAC). Estimating homography matrices for sub-regions of the second image in this way avoids the prior-art problem that a single homography matrix tends to be estimated from the relatively dense data, so that other valid but sparse data are easily ignored; it thus improves the accuracy and effectiveness of the homography matrix calculation and the accuracy of image stitching. Of course, the calculation manner of the second homography matrices depends on the division manner and number of the second stitching regions; for example, when there are three second stitching regions, upper, middle and lower along the stitching edge direction of the second image, the second homography matrices may include H'_top, H'_mid and H'_bottom, which is not limited herein.
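The per-region estimation could be sketched as below. This is a simplified numpy stand-in for a library routine such as OpenCV's findHomography with the RANSAC flag; the iteration count, inlier threshold, and function names are illustrative, not from the source:

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct linear transform from >= 4 point correspondences (N x 2 arrays).
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    # Keep the 4-point model with the most inliers over random samples.
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best_h, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        proj = (H @ src_h.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
        if inliers > best_inliers:
            best_inliers, best_h = inliers, H
    return best_h
```

Running this once per stitching region, on only the feature point matching pairs falling in that region, is what keeps sparse regions from being drowned out by dense ones.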
After the homography matrices corresponding to the second stitching regions in the second image are obtained, the second image may be divided into a plurality of second grids; specifically, the second image may be gridded using any of various image gridding methods, according to edge features, feature points or other information of the image. The second grid homography matrix of each second grid in the second image may then be calculated from one or more of the aforementioned second homography matrices of the second stitching regions. Optionally, when calculating the second grid homography matrix of one of the second grids, the distances between that second grid and each of the second stitching regions may first be calculated, for example, the distances along the stitching edge direction between the centroid of the second grid and the data center of the group of second feature points in each second stitching region. Then, the weight, for that second grid, of the second homography matrix corresponding to each second stitching region may be obtained from the calculated distances (for example, an alpha weight distribution method may be adopted), and the second grid homography matrix of the second grid may be calculated from these weights and the corresponding second homography matrices; the specific weight calculation may be similar to that of the first grid homography matrices and is not described here again. In this manner, a corresponding second grid homography matrix may be calculated for each second grid partitioned in the second image.
The above calculation manner of the second grid homography matrix is merely an example, and in practical application of the embodiment of the present invention, any calculation and weight expression manner of the homography matrix may be used to determine the second grid homography matrix in consideration of specific application situations, which is not limited herein.
Accordingly, after the second meshing and the second grid homography matrix calculation are performed on the second image, combining the second image to form the stitched image may further include: performing coordinate transformation on each second grid in the second image according to its corresponding second grid homography matrix, and combining it with the coordinate-transformed first image to form the stitched image. Specifically, one or more corner points of each second grid may be multiplied by the second grid homography matrix used for coordinate transformation, to obtain the coordinates of those corner points in the stitched image; then, on the basis of the calculated corner-point coordinates, the corresponding coordinates in the stitched image of the other pixel points in the second grid may be obtained using an interpolation algorithm; finally, the stitched image is filled in according to the coordinate correspondence of each pixel point, so as to be stitched with the first image after coordinate transformation.
In the embodiment of the present invention, the second grids divided in the second image may occupy all or only a part of the second image. When the second grids occupy a part of the second image, the other parts of the second image may be grid-divided and coordinate-transformed using other matrix transformation modes, such as similarity transformation, transition transformation and the like, and filled into the stitched image.
Fig. 6 shows a schematic diagram of a stitched image for performing an image stitching method on the first image and the second image shown in fig. 3, according to an embodiment of the present invention. It can be seen that the transition near the spliced edge of the spliced image shown in fig. 6 is natural, and the splicing effect is accurate.
However, as can be seen from the content of the dashed box in fig. 6 and its enlarged view, an image stitched by the stitching method according to the embodiment of the present invention may exhibit a certain directional distortion, so that an object that should be approximately vertical in the image is rotated and offset by an angle. Thus, in one example of the present invention, the image stitching method may further comprise: correcting the directional distortion of the stitched image. For example, the directional distortion of the stitched image may be corrected using direction consistency. Specifically, line detection may first be performed in the first image, the second image and the stitched image respectively, to obtain line detection results. Optionally, the line detection results may be filtered and denoised. Then, according to the previously obtained feature point matching pairs and/or the correspondence of grids between the first image and the second image before stitching, the correspondence between the detected lines can be obtained, and the included angle ω between a line in the stitched image and the corresponding line in the first image and/or the second image can be calculated. Finally, a direction correction, such as a rotation, is applied to the line in the stitched image according to the included angle ω, so that the line in the stitched image comes as close as possible to the angular direction the line had in the first image and/or the second image before the directional distortion was produced. Fig. 7 is a schematic diagram of the stitched image after the area in the dashed box in fig. 6 has undergone the direction consistency correction; it can be seen that the straight line in the solid box of the stitched image shown in fig. 7 has undergone a direction change and is closer to the direction in the original second image on the right side of fig. 3.
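For a single detected line segment, the angle computation and rotational correction just described might look like the following sketch; the representation of a line by its two endpoints and the choice to rotate about the midpoint are illustrative assumptions:

```python
import numpy as np

def line_angle(p0, p1):
    # Orientation of the segment p0 -> p1, in radians.
    return np.arctan2(p1[1] - p0[1], p1[0] - p0[0])

def correct_segment(p0, p1, ref_p0, ref_p1):
    # Rotate the stitched-image segment (p0, p1) about its midpoint by the
    # included angle omega between it and the corresponding reference
    # segment from the source image.
    omega = line_angle(ref_p0, ref_p1) - line_angle(p0, p1)
    c, s = np.cos(omega), np.sin(omega)
    rot = np.array([[c, -s], [s, c]])
    mid = (np.asarray(p0, float) + np.asarray(p1, float)) / 2.0
    return [rot @ (np.asarray(p, float) - mid) + mid for p in (p0, p1)]
```

In practice the correction would be propagated to the surrounding grid vertices rather than applied to the line pixels alone, so that the rest of the stitched image deforms consistently.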
In another example of the embodiment of the present invention, optionally, image fusion may be performed on the spliced edge of the spliced image, so as to eliminate the discontinuity of the brightness or illumination of the image as much as possible. For example, the gradual change treatment may be performed on the spliced edge of the spliced image, so that the brightness, illumination, etc. of the images at both sides of the spliced edge are as uniform as possible.
Although the embodiment of the image stitching method above is described only for stitching two images, the method of the above embodiment of the present invention is also applicable to stitching three or more images; the specific implementation is similar to the foregoing image stitching process and is not repeated here. In addition, when stitching three or more images, all the images may be coordinate-transformed simultaneously to obtain the stitched image in one pass, or two or more adjacent images may be processed in stages and the final stitched image obtained step by step, which is not limited herein.
According to the image stitching method provided by the embodiment of the invention, separate first homography matrices can be calculated for the plurality of first stitching regions into which the first image is divided, and the gridded first image can be coordinate-transformed and stitched according to these different first homography matrices, thereby improving the accuracy of the acquired stitched image and the stitching effect.
Next, an image stitching apparatus according to an embodiment of the present invention is described with reference to fig. 8. Fig. 8 shows a block diagram of an image stitching apparatus 800 according to an embodiment of the present invention. The images to be stitched include at least a first image and a second image. As shown in fig. 8, the image stitching apparatus 800 includes a matching unit 810, a matrix calculation unit 820, a mesh division unit 830, and a coordinate transformation unit 840. In addition to these units, the apparatus 800 may include other components; however, since these components are not related to the contents of the embodiments of the present invention, their illustration and description are omitted here. Further, since the specific details of the following operations performed by the image stitching apparatus 800 according to an embodiment of the present invention are the same as those described above with reference to fig. 1 and 3 to 7, repeated description of the same details is omitted here to avoid redundancy.
The matching unit 810 of the image stitching apparatus 800 in fig. 8 is configured to perform feature point detection and matching on a first image to be stitched and a second image, and obtain a plurality of feature point matching pairs, where each of the feature point matching pairs includes a first feature point of the first image and a second feature point of the second image.
In an embodiment of the present invention, it is desirable to stitch the first image and the second image into a larger range of stitched images. Wherein the first image and the second image may have overlapping portions with each other, and positions of the overlapping portions in the first image and the second image, respectively, are not limited herein. For example, the overlapping portions of the first image and the second image may be located to the right of the first image and the left of the second image, respectively, such that the left half of the stitched image is substantially derived from the first image and the right half is substantially derived from the second image, the stitched image being a larger range of images. In another example of the invention, it is also possible to have the overlapping portion located on the lower side of the first image and on the upper side of the second image, respectively, in which case the first image and the second image will be stitched up and down, the stitched image having an upper half substantially derived from the first image and a lower half substantially derived from the second image. In yet another example of the present invention, the overlapping portions of the first image and the second image may be located at any position in the first image and the second image, respectively, and have a certain rotation angle with respect to each other, so that the spliced image will also be formed by rotating at least one of the first image and the second image by a certain angle. The above descriptions are only examples, and in practical application, any overlapping manner of the first image and the second image may be adopted.
The matching unit 810 performs feature point detection and matching on the first image and the second image to be stitched to obtain a plurality of feature point matching pairs. Specifically, first, the first image and the second image may be acquired separately. In one example of the present invention, the first image and the second image may be images acquired by a photographing unit provided on an object (e.g., a mobile robot, a smart car, an unmanned aerial vehicle, etc.), where the photographing unit may be a monocular camera or a video camera, and of course may be a binocular or a multi-view camera or a video camera, which is not limited herein. The acquired first image and second image may be acquired at different times, at different positions, or within a certain viewing angle range, respectively, as long as they have a certain overlapping portion therebetween.
After the first image and the second image are acquired, the matching unit 810 may perform feature point detection on the first image and the second image respectively, based on a preset feature point detection manner. In the embodiment of the present invention, the preset feature point detection manner may include various feature point detection methods, such as scale-invariant feature transform (Scale Invariant Feature Transform, SIFT) features, speeded-up robust features (Speeded Up Robust Features, SURF), Harris corner points, and the like, and may also be the ORB (Oriented FAST and Rotated BRIEF) feature point detection method. After the feature points are detected, the feature points detected in the first image and the second image may optionally be described; for example, various feature description methods such as gray-scale features, gradient features, parallax information and the like may be used to describe the feature points in the first image and the second image.
Finally, the matching unit 810 may match the feature points detected in the first image with the feature points detected in the second image to obtain a plurality of feature point matching pairs. Optionally, the matching unit 810 may use grid-based motion statistics (Grid-based Motion Statistics, GMS) for feature point matching. During matching, the correctness of each match can be judged, so that erroneous feature points that cannot be matched are eliminated and only correct feature points and the corresponding feature point matching pairs remain, which improves the stability of the matching. After the matching is finished, each feature point matching pair acquired by the matching unit 810 includes a first feature point of the first image and a second feature point of the second image, where all the first feature points of the first image come from the feature points previously detected in the first image and may be all or a part of them; likewise, the second feature points of the second image may be all or part of the feature points detected in the second image. All first feature points in the first image and all second feature points in the second image are in one-to-one correspondence, each pair forming a feature point matching pair. In one example of the embodiment of the present invention, four or more feature point matching pairs may be acquired. Fig. 3 is a schematic diagram of feature point matching pairs obtained by performing feature point detection with ORB and feature point matching with GMS in an embodiment of the present invention, where the left side is the first image and the right side is the second image. In the example shown in fig. 3, using ORB to detect the feature points and GMS to match them gives stronger robustness to image rotation and scale transformation, reducing the number of erroneously matched feature points.
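As an illustration of the underlying brute-force matching stage (the GMS grid consistency check itself is more involved), a mutual-nearest-neighbour match of ORB-style binary descriptors might be sketched like this; the function name and the cross-check criterion are illustrative assumptions, not the patented method:

```python
import numpy as np

def hamming_match(desc1, desc2):
    # Pairwise Hamming distance between binary descriptors via XOR + popcount.
    xor = desc1[:, None, :] ^ desc2[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    fwd = dist.argmin(axis=1)  # best match in image 2 for each point of image 1
    bwd = dist.argmin(axis=0)  # best match in image 1 for each point of image 2
    # keep only mutual nearest neighbours (cross-check)
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```

ORB descriptors are binary (typically 32 bytes per point), which is why Hamming distance rather than Euclidean distance is the natural comparison here.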
The matrix calculating unit 820 divides the first image into at least two first stitching regions, and calculates a first homography matrix of each first stitching region according to the feature point matching.
Alternatively, the matrix calculating unit 820 may divide the first image into at least two first stitching regions according to at least part of the first feature points in the first image. The manner of dividing the first stitching region into the first image may be based on grouping and clustering some or all of the first feature points in the first image.
In one example, the matrix calculation unit 820 may group the first feature points according to the number of the first feature points along the spliced edge direction and cluster the first feature points using a clustering algorithm. Specifically, a first feature point number distribution histogram of at least part of the first feature points in the first image along the stitching edge direction of the first image may be first calculated. Fig. 4 shows a schematic view of a first feature point in a first image according to an embodiment of the invention and a corresponding number distribution histogram along the direction of the stitching edge, the first image in fig. 4 corresponding to the first image on the left in fig. 3. As can be seen in fig. 4, the upper half of the first image has a relatively large number of first feature points, while the lower half has a smaller number of first feature points. Subsequently, the number of the first stitching regions may be determined according to the first feature point number distribution histogram. Specifically, as shown in fig. 4, the histogram of the number distribution of the first feature points may be fitted, for example, by using gaussian fitting, and parameters such as a derivative, a second derivative, a peak value, or a half-width of each point on the fitted curve are calculated, and the number of the divided first splicing regions, that is, the number of the first feature point groups, is determined according to the parameters. For example, when the first feature points are grouped according to the gaussian fitting curve, a k value, for example, 1, may be preset, and then the change of the k value is determined according to the number proportion occupied by the first feature points in different regions and/or the distance between adjacent peaks on the curve. As can be seen in fig. 4, the gaussian fitting curve has a plurality of peaks and a plurality of valleys. 
For the gaussian fitting curve shown in the figure, the distance between adjacent peaks can be considered: when this distance is greater than a preset threshold and the number proportions occupied by the first feature points in the regions divided according to it also meet a preset condition, the k value can be increased by 1. In fig. 4, the distance A and the other inter-peak distances do not exceed the preset distance, while the distance B exceeds it. On this basis, two feature point grouping regions can be divided at the trough position between the two peaks at the two ends of the distance B. Then, it is judged whether the number proportion occupied by the first feature points of each of the two regions meets the preset condition. For example, when it is determined that the number proportion occupied by the first feature points of each region is greater than 10% of the total number of first feature points participating in the statistics, the k value may be increased by 1 to become 2. That is, the number of groupings of the first feature points may be 2, and the corresponding number of first stitching regions divided from the first image may also be 2. When the number of the first stitching regions has been determined, the dividing boundary of the first stitching regions may further be determined according to the regions of the feature point groups. The first image may then be partitioned into a plurality of first stitching regions according to the determined number of first stitching regions and the dividing boundary. For example, with reference to the aforementioned region division on the histogram, the first feature points may be clustered into 2 groups using a clustering algorithm (for example, the k-means algorithm with k = 2), and the corresponding upper and lower first stitching regions divided according to the clustered first feature points, as shown in fig. 5. Fig. 5 illustrates the division of the first stitching regions in the example of fig. 4, according to one embodiment of the present invention. One or more of the first stitching regions may include all or a majority (e.g., a predetermined proportion, such as 90%) of the corresponding group of first feature points. Optionally, in the process of clustering the first feature points, the data center of each group of first feature points may also be acquired.
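The clustering step can be sketched with a one-dimensional k-means over the feature-point coordinates along the stitching-edge direction, k having been determined from the histogram analysis above; the initialisation scheme and iteration count are illustrative choices:

```python
import numpy as np

def group_feature_points(ys, k=2, iters=20):
    # One-dimensional k-means over feature-point coordinates along the
    # stitching-edge direction; returns a label per point and the data
    # center of each group.
    ys = np.asarray(ys, dtype=float)
    centers = np.linspace(ys.min(), ys.max(), k)  # simple initialisation
    for _ in range(iters):
        labels = np.abs(ys[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = ys[labels == j].mean()
    return labels, centers
```

The returned centers serve as the per-group "data centers" used later when weighting the grid homography matrices.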
The first feature point grouping and clustering manner and the first splicing region dividing manner shown in fig. 4 and 5 are only examples, and any other feature point grouping and region dividing manner may be adopted by the matrix calculating unit 820 in the practical application of the embodiment of the present invention. For example, in another example, the matrix calculation unit 820 may further divide the plurality of first stitching regions in the first image according to texture, color, shading, and/or spatial distribution of objects, etc. of the first image. In addition, alternatively, the first feature points used for dividing the first stitching region may be all of the first feature points in the first image, or may be a part of the first feature points, and accordingly, the divided first stitching region may also occupy all of the first image, or only occupy a part thereof. Alternatively, the first feature point and the divided first stitching region may be located in a portion of the first image that overlaps the second image. In addition, in practical application of the embodiment of the present invention, other numbers of first feature point groups and first splicing regions may be divided, which is not limited herein. For example, in still another example of the embodiment of the present invention, the first feature points may be further divided into three groups of upper, middle and lower, and the first splicing regions may be divided into corresponding three first splicing regions of upper, middle and lower. Alternatively, the boundary between two adjacent first stitching regions may not be parallel to the stitching edge direction of the first images.
After dividing the first image into a plurality of first stitching regions, the matrix calculation unit 820 may calculate a first homography matrix for each first stitching region based on the feature point matching pairs. For example, the corresponding upper and lower first homography matrices, e.g. H_top and H_bottom, may be calculated for the upper and lower first stitching regions shown in fig. 5. In the calculation, optionally, the homography matrix for each first stitching region may be estimated using, for example, random sample consensus (Random Sample Consensus, RANSAC). Estimating homography matrices for sub-regions of the first image in this way avoids the prior-art problem that a single homography matrix tends to be estimated from the relatively dense data, so that other valid but sparse data are easily ignored; it thus improves the accuracy and effectiveness of the homography matrix calculation and the accuracy of image stitching. Of course, the calculation manner of the first homography matrices depends on the division manner and number of the first stitching regions; for example, when there are three first stitching regions, upper, middle and lower along the stitching edge direction of the first image, the first homography matrices may include H_top, H_mid and H_bottom, which is not limited herein.
The mesh dividing unit 830 may divide the first image into a plurality of first meshes and calculate a first mesh homography matrix of each first mesh of the first image according to at least one of the first homography matrices of the at least two first stitching regions.
The mesh dividing unit 830 may first divide the first image into a plurality of first grids; specifically, the first image may be gridded using any of various image gridding methods, according to edge features, feature points or other information of the image. Subsequently, the mesh dividing unit 830 may calculate the first grid homography matrix of each first grid in the first image according to the aforementioned first homography matrices corresponding to the first stitching regions. Optionally, when calculating the first grid homography matrix of one of the first grids, the distances between that first grid and each of the first stitching regions may first be calculated, for example, the distances along the stitching edge direction between the centroid of the first grid and the data center of the group of first feature points in each first stitching region. Then, the weight, for that first grid, of the first homography matrix corresponding to each first stitching region may be obtained from the calculated distances (for example, an alpha weight distribution method may be adopted), and the first grid homography matrix of the first grid may be calculated from these weights and the corresponding first homography matrices. In this manner, a corresponding first grid homography matrix may be calculated for each first grid partitioned in the first image.
For example, in the examples of Figs. 4 and 5, the grid dividing unit 830 may use the first homography matrices H_top and H_bottom of the upper and lower first stitching regions to calculate each first grid homography matrix. Specifically, the distances in the vertical direction between the centroid of one of the first grids and the data centers of the groups of first feature points in the upper and lower stitching regions, e.g., d_top and d_bottom respectively, may first be calculated. Then, the weights W_top and W_bottom of the first homography matrices H_top and H_bottom corresponding to the first grid may be determined according to the calculated distances. Finally, the first grid homography matrix of the first grid may be calculated from H_top and H_bottom and their respective weights W_top and W_bottom as H = W_top × H_top + W_bottom × H_bottom. In calculating the weights W_top and W_bottom, in one example, when the centroid of the first grid is higher than the data center C_top corresponding to the upper first stitching region, W_top may be 1 and W_bottom may be 0; when the centroid of the first grid is lower than the data center C_bottom corresponding to the lower first stitching region, W_top may be 0 and W_bottom may be 1; in other cases, W_bottom = d_top / (d_top + d_bottom) and W_top = 1 − W_bottom. In another example, when the centroid of the first grid is higher than (C_top + θ) (θ being positive), W_top may be 1 and W_bottom may be 0; when the centroid of the first grid is lower than (C_bottom − θ), W_top may be 0 and W_bottom may be 1; in other cases, W_bottom = d_top / (d_top + d_bottom) and W_top = 1 − W_bottom.
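The alpha weighting scheme of the first example above can be sketched as follows. It assumes image coordinates that grow downward (so "higher than C_top" means a smaller vertical coordinate); all names are illustrative:

```python
import numpy as np

def grid_weights(cy, c_top, c_bottom):
    """Alpha weights (W_top, W_bottom) for one grid.

    cy: vertical coordinate of the grid centroid; c_top, c_bottom: vertical
    coordinates of the data centers of the upper and lower stitching regions.
    """
    if cy <= c_top:                      # centroid above the upper data center
        return 1.0, 0.0
    if cy >= c_bottom:                   # centroid below the lower data center
        return 0.0, 1.0
    d_top = cy - c_top                   # distance to the upper data center
    d_bottom = c_bottom - cy             # distance to the lower data center
    w_bottom = d_top / (d_top + d_bottom)
    return 1.0 - w_bottom, w_bottom

def grid_homography(H_top, H_bottom, cy, c_top, c_bottom):
    """Per-grid homography H = W_top * H_top + W_bottom * H_bottom."""
    w_top, w_bottom = grid_weights(cy, c_top, c_bottom)
    return w_top * np.asarray(H_top, float) + w_bottom * np.asarray(H_bottom, float)
```

Between the two data centers the weights interpolate linearly, reproducing W_bottom = d_top / (d_top + d_bottom); the second example with the margin θ only shifts the two thresholds.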
The above calculation manner of the first grid homography matrix is merely an example; in practical applications of the embodiment of the present invention, the grid dividing unit 830 may determine the first grid homography matrices using any homography matrix calculation and weight expression manner suited to the specific application, and no limitation is imposed herein.
The coordinate transformation unit 840 may perform coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combine the result with the second image to form a stitched image.
The coordinate transformation unit 840 may perform coordinate transformation on each first grid using its corresponding first grid homography matrix, and stitch the result with the second image (either in its original form or correspondingly deformed), so that the overlapping portions of the first image and the second image coincide and a stitched image is formed. Specifically, one or more corner points of each first grid may be multiplied by the first grid homography matrix used for coordinate transformation, to obtain the coordinates of those corner points in the stitched image; then, on the basis of the calculated coordinates of the corner points in the stitched image, the other pixel points in the first grid may be processed with an interpolation algorithm to obtain their corresponding coordinates in the stitched image; finally, the stitched image may be filled in according to the coordinate correspondence of each pixel point, so as to stitch with the second image in its original or correspondingly deformed form.
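The corner-then-interpolate procedure just described might look as follows: the grid's corner points are warped with its grid homography matrix, and the warped position of an interior point is obtained by bilinear interpolation between the warped corners. The (u, v) parameterization of the grid interior is an illustrative choice, not specified in the patent:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    out = (np.asarray(H, dtype=float) @ hom.T).T
    return out[:, :2] / out[:, 2:3]

def warp_grid_interior(H, corners, u, v):
    """Bilinearly interpolate the warped position of an interior grid point.

    corners: the grid's four corners in order TL, TR, BR, BL;
    (u, v) in [0, 1]^2 parameterize the point inside the grid.
    """
    tl, tr, br, bl = warp_points(H, corners)
    top = (1 - u) * tl + u * tr          # interpolate along the top edge
    bottom = (1 - u) * bl + u * br       # interpolate along the bottom edge
    return (1 - v) * top + v * bottom    # interpolate between the two edges
```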
In the embodiment of the invention, the first grids divided in the first image may occupy all of the first image or only a part of it. When the first grids occupy a part of the first image, the other parts of the first image may be grid-divided and coordinate-transformed using other matrix transformation manners, such as similarity transformation or transition transformation, and filled into the stitched image.
In one example of the embodiment of the present invention, the coordinate transformation unit 840 may fill the second image, in its original form, into the stitched image so as to stitch with the coordinate-transformed first image. In this example, each pixel or each grid in the second image may be regarded as being multiplied by an identity matrix before being filled into the stitched image.
In another example of the embodiment of the present invention, the coordinate transformation unit 840 may further perform, on the second image, homography matrix calculation and coordinate transformation similar to those described above for the first image, and fill the coordinate-transformed second image into the stitched image, so as to obtain a more accurate image stitching effect. On this basis, the aforementioned matrix calculation unit 820 may further divide the second image into at least two second stitching regions and calculate a second homography matrix for each second stitching region according to the feature point matching pairs; the aforementioned grid dividing unit 830 may divide the second image into a plurality of second grids and calculate a second grid homography matrix for each second grid of the second image according to at least one of the second homography matrices of the at least two second stitching regions.
Optionally, the matrix calculation unit 820 may divide the second image into at least two second stitching regions according to the first stitching regions divided from the first image and the relationship given by the feature point matching pairs. Specifically, after the first stitching regions divided from the first image are obtained, a grouping of the second feature points in the second image corresponding to each group of first feature points may be obtained according to the groups of first feature points contained in the first stitching regions and the feature point matching pairs obtained by the matching unit 810, and the division boundaries of the second stitching regions may be determined according to the grouped feature points in the second image. The second image may then be partitioned into a plurality of second stitching regions according to the determined groupings of second feature points and the division boundaries. One or more of the second stitching regions may include all or a majority (e.g., a predetermined proportion, such as 90%) of the corresponding group of second feature points. In one example, the number of second stitching regions in the second image may be the same as the number of first stitching regions, and the second feature points included in each second stitching region may correspond substantially one-to-one to the first feature points in the corresponding first stitching region.
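Carrying the first image's region grouping over to the second feature points via the matching pairs, as described above, can be sketched as follows; the index-based bookkeeping is an illustrative choice:

```python
def group_second_feature_points(first_region_groups, matching_pairs):
    """Carry the region grouping of first feature points over to the second image.

    first_region_groups: one collection of first-feature-point indices per
        first stitching region; matching_pairs: (first_idx, second_idx) pairs.
    Returns, per region, the sorted indices of the matched second feature points.
    """
    second_of = dict(matching_pairs)  # first feature point -> second feature point
    return [sorted(second_of[i] for i in group if i in second_of)
            for group in first_region_groups]
```

Feature points of the first image that have no match simply contribute nothing to the second image's grouping.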
The above division manner of the second stitching regions is merely an example; in practical applications of the embodiment of the present invention, the matrix calculation unit 820 may also use any other feature point grouping and region division manners. The division of the second stitching regions of the second image may also be performed independently, unaffected by the division result of the first stitching regions; the specific division manner may be similar to that of the first stitching regions and is not repeated here. For example, in another example, the second stitching regions in the second image may be divided according to the texture, color, shading, and/or spatial distribution of objects in the second image. In addition, the second feature points used for dividing the second stitching regions may be all of the second feature points in the second image or only a part of them; accordingly, the divided second stitching regions may occupy all of the second image or only a part of it. Optionally, the second stitching regions may be located in the portion of the second image overlapping the first image. Moreover, in practical applications of the embodiment of the present invention, other numbers of second stitching regions may be divided, and no limitation is imposed herein. Optionally, the boundary between two adjacent second stitching regions need not be parallel to the stitching edge direction of the second image.
After dividing the second image into a plurality of second stitching regions, the matrix calculation unit 820 may calculate a second homography matrix for each second stitching region according to the feature point matching pairs. For example, when there are two second stitching regions, the two corresponding second homography matrices, e.g., H'_top and H'_bottom, may be calculated. In the calculation, optionally, the homography matrix for each second stitching region may be estimated using, for example, random sample consensus (RANSAC). Estimating homography matrices separately for sub-regions of the second image likewise avoids the problem in the prior art that homography matrix estimation always tends to be dominated by relatively dense data, so that other valid but sparse data are easily ignored, thereby improving the accuracy and effectiveness of the homography matrix calculation and the accuracy of image stitching. Of course, the calculation of the second homography matrices depends on the division manner and the number of the second stitching regions; for example, when the second stitching regions are three regions (upper, middle, and lower) along the stitching edge direction of the second image, the second homography matrices may include H'_top, H'_mid, and H'_bottom. No limitation is imposed in this regard.
After acquiring the homography matrices corresponding to the second stitching regions in the second image, the grid dividing unit 830 may divide the second image into a plurality of second grids; in particular, it may grid the second image using various image gridding methods according to edge features, feature points, or other information of the image. The second grid homography matrix of each second grid in the second image may then be calculated from one or more of the aforementioned second homography matrices of the second stitching regions. Optionally, when calculating the second grid homography matrix of one of the second grids, the distances between that second grid and each second stitching region may first be calculated; for example, the distance along the stitching edge direction between the centroid of the second grid and the data center of the group of second feature points in each second stitching region may be calculated. Then, a weight for the second homography matrix of each second stitching region with respect to that second grid may be obtained according to the calculated distances (for example, an alpha weight distribution method may be adopted), and the second grid homography matrix of the second grid may be calculated from the weights and the second homography matrices; the specific weight calculation may be similar to that of the first grid homography matrix weights described above and is not repeated here. In this way, a corresponding second grid homography matrix may be calculated for each second grid partitioned in the second image.
The above calculation manner of the second grid homography matrix is merely an example; in practical applications of the embodiment of the present invention, any homography matrix calculation and weight expression manner suited to the specific application may be used to determine the second grid homography matrices, and no limitation is imposed herein.
Accordingly, after the second grid division and second grid homography matrix calculation are performed on the second image, the coordinate transformation unit 840 may perform coordinate transformation on each second grid in the second image according to its corresponding second grid homography matrix, and combine the result with the coordinate-transformed first image to form a stitched image. Specifically, the coordinate transformation unit 840 may multiply one or more corner points of each second grid by its second grid homography matrix to obtain their coordinates in the stitched image; then, on the basis of the calculated coordinates of the corner points in the stitched image, the other pixel points in the second grid may be processed with an interpolation algorithm to obtain their corresponding coordinates in the stitched image; finally, the stitched image may be filled in according to the coordinate correspondence of each pixel point, so as to stitch with the coordinate-transformed first image.
In the embodiment of the present invention, the second grids divided in the second image may occupy all of the second image or only a part of it. When the second grids occupy a part of the second image, the other parts of the second image may be grid-divided and coordinate-transformed using other matrix transformation manners, such as similarity transformation or transition transformation, and filled into the stitched image.
Fig. 6 shows a schematic diagram of a stitched image obtained by performing the image stitching method on the first image and the second image shown in Fig. 3, according to an embodiment of the present invention. It can be seen that the transition near the stitching edge of the stitched image shown in Fig. 6 is natural, and the stitching effect is accurate.
However, as can be seen from the content of the dashed box in Fig. 6 and its enlarged view, an image stitched by the stitching method according to the embodiment of the present invention may exhibit a certain directional distortion, so that an object that should be approximately vertical in the image is rotated and offset by an angle. Therefore, in one example of the present invention, the coordinate transformation unit 840 may also correct the directional distortion of the stitched image, for example using directional consistency. Specifically, straight line detection may first be performed in the first image, the second image, and the stitched image, respectively, to obtain straight line detection results. Optionally, the straight line detection results may be filtered to remove noise. Then, according to the correspondence between the previously obtained feature point matching pairs and/or the grid correspondence between the first image and the second image before stitching, the correspondence between the detected straight lines may be obtained, and the included angle ω between a straight line detected in the stitched image and the corresponding straight line in the first image and/or the second image may be calculated. Finally, a direction correction, such as a rotation, is applied to the straight line in the stitched image according to the included angle ω, so that the straight line in the stitched image comes as close as possible to the angular direction it had in the first image and/or the second image before the directional distortion arose. Fig. 7 is a schematic diagram of the stitched image after the region in the dashed box of Fig. 6 has undergone the direction consistency correction, according to an embodiment of the present invention. It can be seen that the straight line in the solid box of the stitched image shown in Fig. 7 has undergone a direction change and is closer to its direction in the original second image on the right side of Fig. 3.
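The correction step, rotating a detected straight line in the stitched image by the included angle ω toward its orientation in the corresponding original image, might be sketched as follows. Rotating about the segment midpoint is an illustrative choice, since the patent does not fix the rotation center:

```python
import numpy as np

def line_angle(p0, p1):
    """Orientation (radians) of the segment from p0 to p1."""
    (x0, y0), (x1, y1) = p0, p1
    return float(np.arctan2(y1 - y0, x1 - x0))

def correct_line_direction(stitched_seg, original_seg):
    """Rotate a segment of the stitched image about its midpoint by the included
    angle omega so its orientation matches the corresponding original segment."""
    omega = line_angle(*original_seg) - line_angle(*stitched_seg)
    seg = np.asarray(stitched_seg, dtype=float)
    mid = seg.mean(axis=0)
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[c, -s], [s, c]])      # 2D rotation by omega
    return (seg - mid) @ R.T + mid
```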
In another example of the embodiment of the present invention, optionally, the coordinate transformation unit 840 may further perform image fusion on the stitching edges of the stitched image, so as to eliminate discontinuities in image brightness or illumination as far as possible. For example, a gradual-change treatment may be applied to the stitching edge of the stitched image, so that the brightness, illumination, etc. of the images on both sides of the stitching edge are as uniform as possible.
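A simple form of such a gradual-change treatment is a linear alpha ramp across the overlap around the stitching edge. The following sketch blends two grayscale strips this way; the one-dimensional horizontal overlap is an illustrative simplification of the general case:

```python
import numpy as np

def blend_seam(left, right, overlap):
    """Linearly fuse two grayscale strips whose last/first `overlap` columns
    coincide, so brightness changes gradually across the stitching edge."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h = left.shape[0]
    w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w))
    out[:, :left.shape[1]] = left
    out[:, w - right.shape[1]:] = right
    alpha = np.linspace(1.0, 0.0, overlap)   # weight of the left image
    L = left[:, -overlap:]
    R = right[:, :overlap]
    out[:, left.shape[1] - overlap:left.shape[1]] = alpha * L + (1 - alpha) * R
    return out
```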
Although the image stitching device according to the above embodiment of the present invention has been described only for stitching two images, the above method according to the embodiment of the present invention is also applicable to the stitching of three or more images; the specific implementation is similar to the foregoing image stitching process and is not repeated here. In addition, when stitching three or more images, all images may be coordinate-transformed simultaneously to obtain the stitched image in one pass, or two or more adjacent images may be processed in stages to finally obtain the stitched image, and no limitation is imposed herein.
With the image stitching device provided by the embodiment of the present invention, different first homography matrices can be formed according to the plurality of first stitching regions into which the first image is divided, and the gridded first image is coordinate-transformed and stitched according to these different first homography matrices, so that the accuracy of the obtained stitched image is improved and the stitching effect is enhanced.
Next, an image stitching apparatus according to an embodiment of the present invention is described with reference to fig. 9. Fig. 9 shows a block diagram of an image stitching device 900 according to an embodiment of the present invention. The image at least comprises a first image and a second image. As shown in fig. 9, the apparatus 900 may be a computer or a server.
As shown in Fig. 9, the image stitching device 900 includes one or more processors 910 and a memory 920; in addition, the image stitching device 900 may include a stereoscopic camera having multiple panoramic cameras, an output device (not shown), and the like, which may be interconnected by a bus system and/or other forms of connection mechanism. It should be noted that the components and structure of the image stitching device 900 shown in Fig. 9 are exemplary only and not limiting; the image stitching device 900 may have other components and structures as desired.
The processor 910 may be a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities, and may use computer program instructions stored in the memory 920 to perform desired functions, which may include: performing feature point detection and matching on a first image and a second image to be stitched to obtain a plurality of feature point matching pairs, wherein each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions, and calculating a first homography matrix for each first stitching region according to the feature point matching pairs; dividing the first image into a plurality of first grids, and calculating a first grid homography matrix for each first grid of the first image according to at least one of the first homography matrices of the at least two first stitching regions; and performing coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the second image to form a stitched image.
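The patent does not prescribe a particular feature detector or matcher for the first step. Under the common assumption that each feature point carries a descriptor vector (as with detectors such as SIFT or ORB), the matching step could be sketched as nearest-neighbour matching with a ratio test; the function name and the ratio value are illustrative:

```python
import numpy as np

def match_feature_points(desc1, desc2, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test, producing the
    (first_idx, second_idx) feature point matching pairs used by the pipeline."""
    d1 = np.asarray(desc1, dtype=float)
    d2 = np.asarray(desc2, dtype=float)
    pairs = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dists)
        # Accept only if the best match is clearly better than the runner-up.
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            pairs.append((i, int(order[0])))
    return pairs
```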
The memory 920 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored thereon, which the processor 910 may execute to implement the functions of the image stitching device of the embodiments of the present invention described above and/or other desired functions, and/or to perform the image stitching method according to embodiments of the present invention. Various applications and various data may also be stored in the computer-readable storage medium.
A computer-readable storage medium according to an embodiment of the present invention is described below, having stored thereon computer program instructions which, when executed by a processor, implement the following image stitching steps, wherein the images comprise at least a first image and a second image: performing feature point detection and matching on a first image and a second image to be stitched to obtain a plurality of feature point matching pairs, wherein each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image; dividing the first image into at least two first stitching regions, and calculating a first homography matrix for each first stitching region according to the feature point matching pairs; dividing the first image into a plurality of first grids, and calculating a first grid homography matrix for each first grid of the first image according to at least one of the first homography matrices of the at least two first stitching regions; and performing coordinate transformation on each first grid in the first image according to its corresponding first grid homography matrix, and combining the second image to form a stitched image.
The present invention has thus been explained in detail using the above embodiments; however, it should be clear to a person skilled in the art that the present invention is not limited to the embodiments explained above. The invention may be implemented in corrected or modified forms without departing from the scope of the invention as defined by the claims. Accordingly, the description in this specification is intended to be illustrative only and does not limit the invention in any way.

Claims (8)

1. An image stitching method, the image comprising at least a first image and a second image, the method comprising:
performing feature point detection and matching on a first image to be spliced and a second image to obtain a plurality of feature point matching pairs, wherein each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image;
dividing the first image into at least two first splicing areas, and respectively calculating a first homography matrix of each first splicing area according to the feature point matching pairs;
dividing the first image into a plurality of first grids, and calculating a first grid homography matrix of each first grid of the first image according to at least one of the first homography matrices of the at least two first splicing areas;
carrying out coordinate transformation on each first grid in the first image according to the corresponding first grid homography matrix, and combining the second image to form a spliced image;
wherein the dividing the first image into at least two first stitching regions includes:
dividing the first image into at least two first splicing areas according to at least part of the first feature points in the first image, wherein the method specifically comprises the following steps:
calculating a first feature point number distribution histogram of at least part of first feature points in the first image along the splicing edge direction of the first image;
determining the number of the first splicing areas according to the first feature point number distribution histogram;
dividing the first image into the first stitching region according to the determined number.
2. The method of claim 1, wherein,
the method further comprises the steps of: dividing the second image into at least two second splicing areas, and respectively calculating a second homography matrix of each second splicing area according to the feature point matching pairs; dividing the second image into a plurality of second grids, and calculating a second grid homography matrix of each second grid of the second image according to at least one of the second homography matrices of the at least two second splicing areas;
the combining the second image to form a spliced image further comprises: carrying out coordinate transformation on each second grid in the second image according to the corresponding second grid homography matrix, and combining the first image after coordinate transformation to form a spliced image.
3. The method of claim 2, wherein the dividing the second image into at least two second stitching regions comprises:
and dividing the second image into at least two second splicing areas according to the relation between the first splicing areas divided by the first image and the characteristic point matching pairs.
4. The method of claim 1, wherein the computing a first grid homography matrix for each first grid of the first image from the first homography matrix comprises:
calculating the distance between one first grid and each first splicing area;
acquiring the weight of a first homography matrix corresponding to each first splicing area of the first grid according to the distance;
and calculating a first grid homography matrix of the first grid according to the weight and the first homography matrix.
5. The method of claim 2, wherein the computing a second grid homography matrix for each second grid of the second image from the second homography matrix comprises:
calculating the distance between one second grid and each second splicing area;
acquiring the weight of a second homography matrix corresponding to each second splicing area of the second grid according to the distance;
and calculating a second grid homography matrix of the second grid according to the weight and the second homography matrix.
6. An image stitching device, the image comprising at least a first image and a second image, the device comprising:
the matching unit is configured to detect and match the feature points of the first image to be spliced with the feature points of the second image to obtain a plurality of feature point matching pairs, wherein each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image;
the matrix calculation unit is configured to divide the first image into at least two first splicing areas and respectively calculate a first homography matrix of each first splicing area according to the feature point matching pairs;
the grid dividing unit is configured to divide the first image into a plurality of first grids and calculate a first grid homography matrix of each first grid of the first image according to at least one of the first homography matrices of the at least two first splicing areas;
the coordinate transformation unit is configured to transform the coordinates of each first grid in the first image according to the corresponding first grid homography matrix and combine the second image to form a spliced image;
the matrix calculation unit divides the first image into at least two first splicing areas according to at least part of the first feature points in the first image, and is specifically configured to:
calculating a first feature point number distribution histogram of at least part of first feature points in the first image along the splicing edge direction of the first image;
determining the number of the first splicing areas according to the first feature point number distribution histogram;
dividing the first image into the first stitching region according to the determined number.
7. An image stitching device, the image comprising at least a first image and a second image, the device comprising:
a processor;
and a memory in which computer program instructions are stored,
wherein the computer program instructions, when executed by the processor, cause the processor to perform the steps of:
performing feature point detection and matching on a first image to be spliced and a second image to obtain a plurality of feature point matching pairs, wherein each feature point matching pair comprises a first feature point of the first image and a second feature point of the second image;
dividing the first image into at least two first splicing areas, and respectively calculating a first homography matrix of each first splicing area according to the feature point matching pairs;
dividing the first image into a plurality of first grids, and calculating a first grid homography matrix of each first grid of the first image according to at least one of the first homography matrices of the at least two first splicing areas;
carrying out coordinate transformation on each first grid in the first image according to the corresponding first grid homography matrix, and combining the second image to form a spliced image;
wherein the dividing the first image into at least two first stitching regions includes:
dividing the first image into at least two first splicing areas according to at least part of the first feature points in the first image, wherein the method specifically comprises the following steps:
calculating a first feature point number distribution histogram of at least part of first feature points in the first image along the splicing edge direction of the first image;
determining the number of the first splicing areas according to the first feature point number distribution histogram;
dividing the first image into the first stitching region according to the determined number.
8. A computer readable storage medium having stored thereon computer program instructions which when executed by a processor perform the following image stitching steps, wherein the images comprise at least a first image and a second image:
performing feature point detection and matching on the first image and the second image to be spliced to obtain a plurality of feature point matching pairs, wherein each of the feature point matching pairs includes a first feature point of the first image and a second feature point of the second image;
dividing the first image into at least two first splicing areas, and respectively calculating a first homography matrix of each first splicing area according to the feature point matching pairs;
dividing the first image into a plurality of first grids, and calculating a first grid homography matrix of each first grid of the first image according to at least one of the first homography matrices of the at least two first splicing areas;
carrying out coordinate transformation on each first grid in the first image according to the corresponding first grid homography matrix, and combining the second image to form a spliced image;
wherein the dividing the first image into at least two first stitching regions includes:
dividing the first image into at least two first splicing areas according to at least part of the first feature points in the first image, wherein the method specifically comprises the following steps:
calculating a first feature point number distribution histogram of at least part of first feature points in the first image along the splicing edge direction of the first image;
determining the number of the first splicing areas according to the first feature point number distribution histogram;
dividing the first image into the first stitching region according to the determined number.
CN201810175736.0A 2018-03-02 2018-03-02 Image stitching method, image stitching device, and computer-readable storage medium Active CN110223222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810175736.0A CN110223222B (en) 2018-03-02 2018-03-02 Image stitching method, image stitching device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810175736.0A CN110223222B (en) 2018-03-02 2018-03-02 Image stitching method, image stitching device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110223222A CN110223222A (en) 2019-09-10
CN110223222B true CN110223222B (en) 2023-12-05

Family

ID=67822002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810175736.0A Active CN110223222B (en) 2018-03-02 2018-03-02 Image stitching method, image stitching device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110223222B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675437B (en) * 2019-09-24 2023-03-28 Chongqing University of Posts and Telecommunications Image matching method based on improved GMS-ORB features and storage medium
CN110781903B (en) * 2019-10-12 2022-04-01 China University of Geosciences (Wuhan) Unmanned aerial vehicle image stitching method based on grid optimization and global similarity constraint
CN112686806B (en) * 2021-01-08 2023-03-24 Tencent Technology (Shenzhen) Co., Ltd. Image stitching method and device, electronic equipment and storage medium
CN113052900A (en) * 2021-04-23 2021-06-29 Shenzhen SenseTime Technology Co., Ltd. Position determination method and device, electronic equipment and storage medium
CN113450253B (en) * 2021-05-20 2022-05-20 Beijing Chengshi Wanglin Information Technology Co., Ltd. Image processing method and device, electronic equipment and computer-readable storage medium
CN113253968B (en) * 2021-06-01 2021-11-02 Colorlight Cloud Technology Co., Ltd. Abnormal slice image judgment method and device for special-shaped LED display screen
CN113610710A (en) * 2021-07-30 2021-11-05 Guangzhou WeRide Technology Co., Ltd. Vehicle image stitching method and device, computer equipment and storage medium
CN116704046B (en) * 2023-08-01 2023-11-10 Beijing Jijia Technology Co., Ltd. Cross-camera image matching method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10547825B2 (en) * 2014-09-22 2020-01-28 Samsung Electronics Company, Ltd. Transmission of three-dimensional video
CN105678687A (en) * 2015-12-29 2016-06-15 Tianjin University Content-based stereo image stitching method
CN107240067A (en) * 2017-05-11 2017-10-10 Tongji University Automatic sequence-image stitching method based on three-dimensional reconstruction
CN107665479A (en) * 2017-09-05 2018-02-06 Ping An Technology (Shenzhen) Co., Ltd. Feature extraction method, panorama stitching method and device, equipment, and computer-readable storage medium therefor

Also Published As

Publication number Publication date
CN110223222A (en) 2019-09-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant