CN109840884B - Image stitching method and device and electronic equipment - Google Patents


Info

Publication number
CN109840884B
CN109840884B (granted from application CN201711229638.2A)
Authority
CN
China
Prior art keywords
matrix
point matching
feature point
homography matrix
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711229638.2A
Other languages
Chinese (zh)
Other versions
CN109840884A (en)
Inventor
王舸
邹纯稳
姚佳宝
王莉
谢小燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201711229638.2A
Publication of CN109840884A
Application granted
Publication of CN109840884B
Legal status: Active


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image stitching method, an image stitching device and electronic equipment. In the method, feature point matching pairs between two images to be stitched are determined; the internal reference matrices and external reference information of the two cameras that shot the two images to be stitched are acquired; a first homography matrix between the two images to be stitched is calculated based on the acquired internal reference matrices and external reference information; erroneous feature point matching pairs are removed from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs; and the two images to be stitched are stitched into a stitched image according to the first feature point matching pairs and the first homography matrix. In the invention, the first homography matrix is determined according to the properties of the cameras, so the erroneous matching point pairs can be removed regardless of whether they are many or few, thereby achieving the aim of improving the stitching quality of the stitched image.

Description

Image stitching method, device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image stitching method, an image stitching device, and an electronic device.
Background
Currently, there is a need to stitch two images that have a stitching requirement into one stitched image. A related image stitching technique comprises the following steps: extracting feature points from each of the two images to be stitched, matching the extracted feature points to obtain feature point matching pairs between the two images to be stitched, removing erroneous feature point matching pairs from the obtained feature point matching pairs by a random sample consensus method to obtain the remaining feature point matching pairs, and stitching the two images to be stitched into a stitched image based on the remaining feature point matching pairs.
Since mismatches occur when matching the extracted feature points, a large number of erroneous feature point matching pairs may also exist. For example, when repeated texture areas exist in the images to be stitched, a large number of erroneous feature point matching pairs will be generated during feature point matching. As shown in fig. 1, assume that a repeated texture region exists in image 1 to be stitched, that feature point A in the repeated texture region of image 1 and feature point A1 in image 2 to be stitched are matched feature points, and that feature point B in the repeated texture region of image 1 and feature point B1 in image 2 are matched feature points. Because feature point A and feature point B are located in the repeated texture region, feature point A is similar to feature point B in texture, so that when feature point matching is carried out, feature point B is mistakenly taken as the feature point matching A1 and feature point A is taken as the feature point matching B1; at this time, a large number of erroneous feature point matching pairs will be produced.
Because the random sample consensus method of removing erroneous feature point matching pairs is only suitable for cases where there are few erroneous feature point matching pairs, if the number of erroneous feature point matching pairs is large, the random sample consensus method is likely to remove correct feature point matching pairs and retain erroneous ones, which seriously affects the stitching quality of the image. An image stitching method is therefore needed to improve the stitching quality.
Disclosure of Invention
The embodiment of the invention aims to provide an image stitching method, an image stitching device and electronic equipment so as to improve stitching quality. The specific technical scheme is as follows:
a method of image stitching, the method comprising:
determining characteristic point matching pairs between two images to be spliced;
acquiring an internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
and splicing the two images to be spliced into a spliced image according to the first feature point matching pair and the first homography matrix.
Optionally, the step of calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information includes:
determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external parameter information;
according to a preset selection rule, selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information;
And calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
Optionally, the step of selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information according to a preset selection rule includes:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
the step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining rotation matrixes of the two cameras according to the gesture information of the two cameras;
and calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
Optionally, the step of selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information according to a preset selection rule includes:
When the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
before the step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, the method further comprises:
acquiring object distances of the two cameras, wherein the object distances of the two cameras are the same;
the step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining a translation amount matrix between the two images to be spliced according to the optical center coordinates of the two cameras, one of the acquired internal reference matrixes and the object distances of the two cameras, wherein the translation amount matrix represents the translation amount between the matched feature points of the two images to be spliced;
determining a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
Optionally, the step of removing the error feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair includes:
for each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
calculating the distance between the other characteristic point and the projection point;
judging whether the distance is smaller than a preset distance threshold value or not;
if yes, determining the feature point matching pair as a first feature point matching pair;
if not, the feature point matching pair is removed as an error feature point matching pair.
Optionally, before the step of removing the erroneous feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair, the method further includes:
removing the error feature point matching pair from the feature point matching pair to obtain a second feature point matching pair;
calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
Judging whether the similarity condition is met between the first homography matrix and the third homography matrix;
and if so, executing the step of removing the error characteristic point matching pair from the characteristic point matching pair according to the first homography matrix to obtain a first characteristic point matching pair.
Optionally, the step of determining whether the similarity condition is satisfied between the first homography matrix and the third homography matrix includes:
judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
An image stitching device, the device comprising:
the determining module is used for determining characteristic point matching pairs between two images to be spliced;
the acquisition module is used for acquiring the internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
the first homography matrix calculation module is used for calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
The first feature point matching pair determining module is used for removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
and the splicing module is used for splicing the two images to be spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix.
Optionally, the first homography matrix calculation module includes:
a relation determining unit for determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the obtained external parameter information;
the parameter determining unit is used for selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information;
and the first calculation unit is used for calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
Optionally, the parameter determination unit is specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
The first computing unit includes:
a rotation matrix determining subunit, configured to determine rotation matrices of the two cameras according to pose information of the two cameras;
and the first calculating subunit is used for calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
Optionally, the parameter determination unit is specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
the apparatus further comprises:
the object distance acquisition module is used for acquiring object distances of the two cameras before calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, wherein the object distances of the two cameras are the same;
the first computing unit includes:
a translation amount matrix determining subunit, configured to determine a translation amount matrix between the two images to be stitched according to the optical center coordinates of the two cameras, one of the acquired internal reference matrices, and the object distances of the two cameras, where the translation amount matrix characterizes a translation amount between feature points matched with the two images to be stitched;
A second homography matrix determining subunit, configured to determine a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and the second calculation subunit is used for calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
Optionally, the first feature point matching pair determining module includes:
the projection point determining unit is used for aiming at each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
a distance calculation unit configured to calculate a distance between the other feature point and the projection point;
the judging unit is used for judging whether the distance is smaller than a preset distance threshold value, if yes, the determining unit is triggered, and if no, the removing unit is triggered;
the determining unit is used for determining the feature point matching pair as a first feature point matching pair;
the removing unit is used for removing the characteristic point matching pair as an error characteristic point matching pair.
Optionally, the apparatus further includes:
the second feature point matching pair determining module is used for removing the error feature point matching pair from the feature point matching pair before the error feature point matching pair is removed from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair, so as to obtain a second feature point matching pair;
the third homography matrix determining module is used for calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
the judging module is used for judging whether the similarity condition is met between the first homography matrix and the third homography matrix, and if so, the first feature point matching pair determining module is triggered.
Optionally, the judging module is specifically configured to:
judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
An electronic device comprising a processor and a memory, wherein,
A memory for storing a computer program;
a processor for performing any of the method steps described above when executing a computer program stored on a memory.
A computer readable storage medium having stored therein a computer program which when executed by a processor performs any of the method steps described above.
In the embodiment of the invention, the feature point matching pair between two images to be spliced is determined; acquiring an internal reference matrix and external reference information of two cameras for shooting two images to be spliced; calculating a first homography matrix between two images to be spliced based on the acquired internal reference matrix and external reference information; removing the error feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair; and splicing the two images to be spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix. In the embodiment of the invention, the first homography matrix is calculated according to the internal reference matrix and the external reference information which characterize the properties of the camera, and the wrong characteristic point matching pairs are removed according to the first homography matrix, and because the first homography matrix is determined according to the properties of the camera, the wrong matching point pairs can be removed no matter how many or how few, so that the aim of improving the splicing quality of spliced images is achieved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram illustrating conventional feature point matching;
fig. 2 is a schematic flow chart of a first image stitching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first flowchart for calculating a first homography matrix between two images to be stitched according to an embodiment of the present invention;
fig. 4 is a second flowchart of calculating a first homography matrix between two images to be spliced according to an embodiment of the present invention;
FIG. 5 is a first schematic diagram of the optical center relationship and the optical axis relationship of two cameras according to an embodiment of the present invention;
FIG. 6 is a third flowchart of calculating a first homography matrix between two images to be stitched according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram of the optical center relationship and the optical axis relationship of two cameras according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart of obtaining a first feature point matching pair according to an embodiment of the present invention;
fig. 9 is a second flowchart of an image stitching method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an image stitching device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the related technical problems, the embodiment of the invention provides an image stitching method, an image stitching device and electronic equipment.
The following first describes an image stitching method provided by the embodiment of the present invention.
As shown in fig. 2, an image stitching method provided by an embodiment of the present invention may include:
s101: and determining a characteristic point matching pair between the two images to be spliced.
When image stitching is performed, feature points are first extracted from the two images to be stitched. Feature point descriptors are generally adopted when extracting the feature points, for example: SIFT (Scale-Invariant Feature Transform), or the SURF (Speeded-Up Robust Features) algorithm, which is an improvement on the basis of SIFT.
After extracting the feature points, matching the extracted feature points, namely searching for the corresponding relation of the feature points between the two images to be spliced, wherein the corresponding relation represents the corresponding relation of the feature points corresponding to the same position in the two images to be spliced, and the corresponding relation can be used for image registration.
For example, the nearest-neighbor method can be adopted to search, for each feature point in one image to be stitched, the nearest-neighbor feature point in the other image to be stitched. Here, the nearest neighbor means that, in the ideal case, feature points of the same part of the scene in the two images have the same feature description vector, so that the distance between the feature points is the smallest. Of course, the method of matching the feature points is not limited thereto, and any other feature matching algorithm may be employed.
After the corresponding relation between each characteristic point in one image to be spliced and each characteristic point in the other image to be spliced is found, the characteristic point matching pair between the two images to be spliced can be obtained.
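As an illustration only, the feature extraction and nearest-neighbor matching described above could be sketched as follows. This is a minimal Python/OpenCV sketch; the SIFT detector, the ratio test and names such as match_feature_points are illustrative assumptions, not the patent's prescribed implementation.

```python
import cv2

def match_feature_points(img1, img2, ratio=0.75):
    """Extract SIFT feature points from both images and match them by nearest neighbor."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matcher with L2 norm; keep the two nearest neighbors per descriptor
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)

    pairs = []
    for candidates in knn:
        if len(candidates) < 2:
            continue
        m, n = candidates
        # Accept a match only when its nearest neighbor is clearly better than the second one
        if m.distance < ratio * n.distance:
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs  # list of ((x1, y1), (x2, y2)) feature point matching pairs
```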
S102: and acquiring the internal reference matrix and external reference information of two cameras for shooting two images to be spliced.
The camera coordinate system is used for representing the position of the object in the three-dimensional space, the image coordinate system is used for representing the pixel position of the object in the two-dimensional image, and the function of the internal reference matrix of the camera is to perform linear change between the two coordinate systems.
The rotation and translation of the camera belong to the external parameter information of the camera, which is used to describe the motion of the camera in a static scene. Therefore, in image stitching, the external parameter information is needed to solve the relative motion between the two images to be stitched, so that the two images to be stitched can be placed in the same coordinate system for stitching.
From the above, since the internal reference matrix and the external reference information of the camera can represent the properties of the camera, in order to obtain the homography matrix between the two images to be spliced, after obtaining the feature point matching pair between the two images to be spliced, the internal reference matrix and the external reference information of the two cameras capturing the two images to be spliced are obtained, that is, the two internal reference matrices and the two external reference information are obtained.
Note that the internal reference matrix of the camera is generally:

A = [ fx  0   cx ]
    [ 0   fy  cy ]
    [ 0   0   1  ]

wherein fx and fy are respectively the focal lengths of the camera in the x direction and the y direction, and cx and cy are respectively the principal point coordinates of the camera in the x direction and the y direction, where the intersection of the optical axis direction of the camera with the imaging plane is called the principal point.
The camera's extrinsic information may include one or more of the following parameters: optical center coordinates of the camera, an optical axis direction, pose information of the camera, and the like, wherein the pose information of the camera may include the optical axis direction of the camera.
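For reference, the internal reference matrix described above can be assembled directly from the focal lengths and the principal point. A minimal sketch (the numerical values are placeholders, not taken from the patent):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Build the 3x3 internal reference (intrinsic) matrix of a camera."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Illustrative example values only
A1 = intrinsic_matrix(1200.0, 1200.0, 960.0, 540.0)
```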
S103: based on the acquired internal reference matrix and external reference information, a first homography matrix between two images to be spliced is calculated.
After the internal reference matrix and the external reference information of the two cameras are acquired, a first homography matrix between the two images to be spliced is calculated based on the acquired internal reference matrix and external reference information.
Because of the attitude relation and the position relation of the two cameras when shooting images, the two shot images to be spliced can be influenced, and the spliced images are further influenced. Therefore, in calculating the first homography matrix, the pose relationship and the position relationship of the two cameras need to be considered, referring to fig. 3, step S103 may include:
s1031: and determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external parameter information.
Since the position of the camera can be determined by the optical center position of the camera and the posture of the camera can be determined by the optical axis direction of the camera, after the internal reference matrix and the external reference information of the two cameras are acquired, the posture relationship and the relative position relationship of the two cameras can be determined according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external reference information.
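A small sketch of this classification step, assuming the optical axes are given as direction vectors and the optical centers as 3D coordinates; the threshold values and the function name are assumptions for illustration:

```python
import numpy as np

def camera_relations(center1, center2, axis1, axis2,
                     center_dist_threshold=0.1, parallel_tol=1e-3):
    """Return (axes_parallel, centers_close): the posture relation and the relative position relation."""
    d1 = np.asarray(axis1, dtype=float)
    d2 = np.asarray(axis2, dtype=float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    # Optical axes are treated as parallel when their cross product is (almost) zero
    axes_parallel = np.linalg.norm(np.cross(d1, d2)) < parallel_tol
    # Compare the distance between the optical centers with the preset threshold
    center_distance = np.linalg.norm(np.asarray(center1, float) - np.asarray(center2, float))
    return axes_parallel, center_distance < center_dist_threshold
```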
S1032: and selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information according to a preset selection rule.
Because the attitude relationship and the position relationship of the two cameras when shooting images can influence the two shot images to be spliced, the external parameter information used by the two cameras for calculating the homography matrix under different attitude relationships and different relative position relationships is also different.
Therefore, after determining the posture relationship and the relative position relationship of the two cameras, the parameter of the external parameter corresponding to the posture relationship and the relative position relationship needs to be selected from the obtained external parameter information according to a preset selection rule.
S1033: and calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
After the external parameter corresponding to the posture relation and the relative position relation is selected, a first homography matrix between two images to be spliced can be calculated according to the acquired internal parameter matrix and the external parameter.
Thus, based on the gesture relation and the relative position relation of the two cameras, a first homography matrix between the two images to be spliced is calculated.
In one implementation of the embodiment of the present invention, referring to fig. 4, step S1032 may include:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between the optical centers of the two cameras is smaller than a preset optical center distance threshold, the posture information of the two cameras is selected from the obtained external parameter information.
When the posture relationship is that the optical axis directions of the two cameras are not parallel, and the relative position relationship is that the distance between the optical centers of the two cameras is smaller than the preset optical center distance threshold, as shown in fig. 5, the fact that the distance between the two optical centers is smaller than the preset optical center distance threshold indicates that the two cameras were close together when shooting the two images to be stitched, so the two captured images share more of the same scene; the fact that the two optical axis directions are not parallel indicates that a rotation relationship exists between the two cameras, and because the two cameras are close together, no translation relationship exists between them.
It should be noted that, the cameras capturing two images to be stitched may be the same camera, for example: it is reasonable that the camera shoots one image to be spliced at a certain position, and shoots another image to be spliced after rotating the camera by a certain angle.
Since there is a rotational relationship between the two cameras, it is necessary to select pose information of the two cameras from the acquired external parameter information. The attitude information of the camera comprises a pitch angle, a yaw angle and a roll angle of the camera.
In fig. 4, step S1033 may include:
s10331: determining rotation matrixes of the two cameras according to the gesture information of the two cameras;
after the gesture information is selected, the rotation matrix of the two cameras can be determined according to the gesture information of the two cameras.
Since the pose information of the camera includes the pitch angle, yaw angle and roll angle of the camera, and the roll angle of the camera is generally 0° in practical applications, the rotation matrix of the camera can be determined by the following formula:

Ri = Rx(α) · Rz(β)

wherein Rz is the rotation matrix corresponding to the yaw angle, Rx is the rotation matrix corresponding to the pitch angle, α is the pitch angle of camera i, β is the yaw angle of camera i, and Ri is the rotation matrix of camera i.
S10332: and calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
After the rotation matrixes of the two cameras are obtained, a first homography matrix between the two images to be spliced can be calculated according to the determined rotation matrix and the acquired internal reference matrix.
A first homography matrix between the two images to be stitched is determined by the following formula:

H1 = A2 · R2^-1 · R1 · A1^-1

wherein H1 is the first homography matrix between the two images to be stitched, A1 and A2 are respectively the internal reference matrices of the two cameras, and R1 and R2 are respectively the rotation matrices of the two cameras.
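A compact sketch of this rotation-only case, assuming roll = 0 and the Rx/Rz angle conventions implied above (the exact conventions used in the patent may differ):

```python
import numpy as np

def rotation_matrix(pitch, yaw):
    """Ri = Rx(pitch) * Rz(yaw), with the roll angle assumed to be 0 (angles in radians)."""
    ca, sa = np.cos(pitch), np.sin(pitch)
    cb, sb = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  ca, -sa],
                   [0.0,  sa,  ca]])
    Rz = np.array([[ cb, -sb, 0.0],
                   [ sb,  cb, 0.0],
                   [0.0, 0.0, 1.0]])
    return Rx @ Rz

def homography_from_rotation(A1, A2, R1, R2):
    """H1 = A2 * R2^-1 * R1 * A1^-1 for two cameras that differ only by rotation."""
    return A2 @ np.linalg.inv(R2) @ R1 @ np.linalg.inv(A1)
```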
In another implementation manner of the embodiment of the present invention, as shown in fig. 6, step S1032 may include:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the obtained external parameter information.
When the posture relationship is that the optical axis directions of the two cameras are parallel, and the relative position relationship is that the distance between the two optical centers is not smaller than the preset optical center distance threshold, as shown in fig. 7, the fact that the distance between the two optical centers is not smaller than the preset optical center distance threshold indicates that the two cameras were far apart when shooting the two images to be stitched, so the two captured images share less of the same scene; the fact that the two optical axis directions are parallel indicates that there is no rotation relationship between the two cameras, but because the two cameras are far apart, a translation relationship exists between them.
Since the two cameras have a translational relationship, it is necessary to select the optical center coordinates of the two cameras from the acquired external parameter information.
Prior to step S1033 in fig. 6, the method may further include:
S1033A: the object distances of the two cameras are obtained, wherein the object distances of the two cameras are the same.
After the optical center coordinates of the two cameras are selected from the acquired external parameter information, in order to determine a second homography matrix between the two images to be spliced, a translation amount matrix between the two images to be spliced needs to be determined first, and in order to determine the translation amount matrix, object distances of the two cameras need to be acquired, wherein the object distances of the two cameras are the same.
It should be noted that, the object distance here refers to a distance between an object and a camera when two images to be stitched achieve the best stitching quality during stitching, and the object distance of the two cameras may be preset, for example, may be set according to a working environment of the cameras and a clearest imaging distance.
Step S1033 in fig. 6 may include:
s10333: and determining a translation amount matrix between the two images to be spliced according to the optical center coordinates of the two cameras, one of the acquired internal reference matrixes and the object distances of the two cameras, wherein the translation amount matrix represents the translation amount between the matched characteristic points of the two images to be spliced.
After the object distances of the two cameras are obtained, a translation matrix between the two images to be spliced can be determined according to the optical center coordinates of the two cameras, one of the obtained internal reference matrices and the object distances of the two cameras.
One of the acquired internal reference matrices may be either of the two internal reference matrices of the two cameras. For example, if the acquired internal reference matrices are A1 and A2, the one used may be A1, or it may be A2.
By way of example, the translation amount matrix between the two images to be stitched is determined from A2, one of the acquired internal reference matrices; O1 and O2, respectively the optical center coordinates of the two cameras; and S, the object distance of the two cameras. The elements of the translation amount matrix between the two images to be stitched are denoted t1 to t3.
Thus, a translation matrix representing the translation between the matched characteristic points of the two images to be spliced is obtained.
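The exact formula for the translation amount matrix is not reproduced in this text. One plausible formulation consistent with the quantities listed above (the baseline between the optical centers, projected through one intrinsic matrix and scaled by the object distance S) is sketched below; this is an assumption for illustration only, not necessarily the patent's formula:

```python
import numpy as np

def translation_amounts(A2, O1, O2, S):
    """Assumed sketch: approximate pixel-level translation t = A2 * (O1 - O2) / S
    for a scene at object distance S (illustrative only)."""
    t = A2 @ (np.asarray(O1, dtype=float) - np.asarray(O2, dtype=float)) / float(S)
    return t  # elements t1, t2, t3 of the translation amount matrix
```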
S10334: and determining a second homography matrix between the two images to be spliced according to the acquired internal reference matrix.
Because the two cameras have a translation relationship, and the translation relationship affects the relationship between the two images to be stitched when they are stitched, after determining the translation amount matrix between the two images to be stitched, a translation reference needs to be determined; this reference is the second homography matrix of the two images to be stitched under the condition that no translation relationship exists.
Therefore, after determining the translation matrix between the two images to be spliced, a second homography matrix between the two images to be spliced can be determined according to the acquired internal reference matrix.
Illustratively, the second homography matrix between the two images to be stitched is determined from the acquired internal reference matrices, wherein H2 is the second homography matrix between the two images to be stitched, A1 and A2 are respectively the internal reference matrices of the two cameras, and h0 to h8 are the elements of the second homography matrix.
S10335: and calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
After the second homography matrix is determined, the determined second homography matrix is used as a translation reference, and the first homography matrix between the two images to be spliced is calculated according to the translation quantity matrix.
Illustratively, the first homography matrix between the two images to be stitched is determined from the translation amount matrix and the second homography matrix, wherein H1 is the first homography matrix between the two images to be stitched, h0 to h8 are the elements of the second homography matrix, and t1 to t3 are the elements of the translation amount matrix between the two images to be stitched.
The elements h2 and h5 in the third column of the second homography matrix are the elements that characterize the translation relationship, and therefore the first homography matrix can be obtained by adding the third column of the second homography matrix and the first column of the translation amount matrix.
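A short sketch of this combination step, which simply adds the translation amounts to the third column of the second homography matrix (how H2 itself is formed from the intrinsic matrices is not reproduced here):

```python
import numpy as np

def combine_translation(H2, t):
    """Add the translation amounts (t1, t2, t3) to the third column of the second
    homography matrix to obtain the first homography matrix."""
    H1 = np.asarray(H2, dtype=float).copy()
    H1[:, 2] += np.asarray(t, dtype=float).ravel()
    return H1
```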
S104: removing erroneous feature point matching pairs from the feature point matching pairs according to the first homography matrix, and obtaining a first characteristic point matching pair.
Because there is often an erroneous feature point matching pair in the feature point matching pair, after the first homography matrix is determined, the erroneous feature point matching pair may be removed from the feature point matching pair according to the first homography matrix, so as to obtain the first feature point matching pair.
When removing the erroneous feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain the first feature point matching pairs, the first feature point matching pairs may be obtained by projecting the feature points through the first homography matrix. Referring to fig. 8, step S104 may include:
S1041: for each pair of feature point matching pairs, projecting one feature point in the feature point matching pair, through the first homography matrix, onto the image to be stitched where the other feature point in the feature point matching pair is located, so as to obtain the projected point.
Since the homography matrix is used to characterize the correspondence of feature points between two images to be stitched, in order to remove erroneous feature point matching pairs, for each pair of feature point matching pairs, one feature point in the feature point matching pair can be projected to an image to be spliced where the other feature point in the feature point matching pair is located through a first homography matrix, so that a projected point after projection is obtained.
Specifically, the projection point coordinates are calculated by the following formula:

s · [x2', y2', 1]^T = H1 · [x1, y1, 1]^T

wherein s is the normalization coefficient of the homography transformation, x2' and y2' are the coordinates of the projection point, x1 and y1 are the coordinates of one feature point in the feature point matching pair, and H1 is the first homography matrix.
S1042: the distance between the other feature point and the projection point is calculated.
After the projection point is obtained, in order to determine whether the feature point matching pair is an erroneous feature point matching pair, a distance between the other feature point in the feature point matching pair and the projection point needs to be determined.
The distance between the other feature point and the projection point is calculated by the following formula:

dist = sqrt((x2 - x2')^2 + (y2 - y2')^2)

wherein dist is the distance between the other feature point and the projection point, and x2 and y2 are the coordinates of the other feature point in the feature point matching pair.
S1043: whether the distance is smaller than the preset distance threshold is determined, if yes, step S1044 is performed, and if no, step S1045 is performed.
After the distance between the other feature point and the projection point is calculated, whether the feature point matching pair is an error feature point matching pair or not is determined in a distance judging mode.
S1044: and determining the feature point matching pair as a first feature point matching pair.
When the distance is smaller than the preset distance threshold, it indicates that the projection point and the other feature point are relatively close, that is, the feature point matching pair is a correct feature point matching pair; at this time, the feature point matching pair is determined to be a first feature point matching pair.
S1045: the feature point matching pair is removed as an erroneous feature point matching pair.
And when the distance is not smaller than the preset distance threshold value, the distance between the projection point and the other characteristic point is far, namely the characteristic point matching pair is an incorrect characteristic point matching pair, and at the moment, the characteristic point matching pair is removed as the incorrect characteristic point matching pair.
Thus, the erroneous feature point matching pair is removed from the feature point matching pair, resulting in a first feature point matching pair.
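Steps S1041 to S1045 could be sketched as follows; the distance threshold of 3 pixels and the variable names are illustrative assumptions:

```python
import numpy as np

def filter_matches(pairs, H1, dist_threshold=3.0):
    """Keep a feature point matching pair only if projecting one point through H1
    lands close to the other point; otherwise remove it as an erroneous pair."""
    first_pairs = []
    for (x1, y1), (x2, y2) in pairs:
        p = H1 @ np.array([x1, y1, 1.0])
        x2p, y2p = p[0] / p[2], p[1] / p[2]        # divide by the normalization coefficient s
        dist = np.hypot(x2 - x2p, y2 - y2p)        # distance between the other point and the projection point
        if dist < dist_threshold:
            first_pairs.append(((x1, y1), (x2, y2)))   # first feature point matching pair
    return first_pairs
```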
S105: and splicing the two images to be spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix.
After the first characteristic point matching pair is obtained, the two images to be spliced are spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix.
For each image to be spliced, a projection image corresponding to the image to be spliced is generated according to the first characteristic point matching pair and the first homography matrix, and the two projection images are spliced to generate a spliced image.
It should be noted that, when a plurality of images to be spliced are spliced, the image splicing method provided by the embodiment of the invention can be adopted for each two images to be spliced, and then the images are spliced in pairs, so as to obtain a spliced image.
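As a rough illustration of step S105, one image can be warped into the other image's coordinate system with the first homography matrix and the two images overlaid; the patent's projection-image generation and blending details are not reproduced here, so the canvas size and overlay strategy below are simplifying assumptions:

```python
import cv2

def stitch(img1, img2, H1):
    """Warp img1 into img2's frame with H1 and overlay img2 on the result (simplified sketch)."""
    h2, w2 = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H1, (w2 * 2, h2))   # extra width for the warped image
    canvas[0:h2, 0:w2] = img2                              # naive overlay, no blending
    return canvas
```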
Because the intrinsic and extrinsic information of the camera characterizes the nature of the camera, the invention solves the problem that the correctness of the characteristic point matching result is difficult to judge in the related technology by utilizing the intrinsic and extrinsic information of the camera on the basis of the scheme of registering images based on characteristic points and splicing images, and can generate a correct first characteristic point matching pair based on a first homography matrix, thereby improving the success rate of automatic image registration and spliced image generation.
In the embodiment of the invention, the feature point matching pair between two images to be spliced is determined; acquiring an internal reference matrix and external reference information of two cameras for shooting two images to be spliced; calculating a first homography matrix between two images to be spliced based on the acquired internal reference matrix and external reference information; removing the error feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair; and splicing the two images to be spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix. In the embodiment of the invention, the first homography matrix is calculated according to the internal reference matrix and the external reference information which characterize the properties of the camera, and the wrong characteristic point matching pairs are removed according to the first homography matrix, and because the first homography matrix is determined according to the properties of the camera, the wrong matching point pairs can be removed no matter how many or how few, so that the aim of improving the splicing quality of spliced images is achieved.
On the basis of the method shown in fig. 1, as shown in fig. 9, the image stitching method provided by the embodiment of the present invention may further include, before step S104:
S104A: removing erroneous feature point matching pairs from the feature point matching pairs, and obtaining a second characteristic point matching pair.
After determining the feature point matching pair between the two images to be spliced, since there is usually an erroneous feature point matching pair in the feature point matching pair, it is necessary to remove the erroneous feature point matching pair from the feature point matching pair, and obtain a second feature point matching pair.
For example, the error feature point matching pair may be removed by a random sampling coincidence method to obtain the second feature point matching pair, and of course, the method of removing the error feature point matching pair is not limited to this, and any existing removal method may be adopted.
S104B: and calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair.
After the second feature point matching pair is obtained, in order to splice the two images to be spliced, a third homography matrix between the two images to be spliced is calculated according to the second feature point matching pair.
Illustratively, a third homography matrix between two images to be stitched is calculated by the following formula:
M = H3 · N

wherein H3 is the third homography matrix between the two images to be stitched, M is the coordinates of one feature point in the second feature point matching pair, and N is the coordinates of the other feature point in the second feature point matching pair.
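As a sketch of steps S104A and S104B, OpenCV's findHomography with RANSAC can simultaneously reject mismatched pairs and fit a homography from the remaining pairs; treating that fitted matrix as the third homography matrix and the surviving pairs as the second feature point matching pairs is an illustrative assumption, not the patent's mandated procedure:

```python
import cv2
import numpy as np

def third_homography(pairs, ransac_thresh=3.0):
    """Remove erroneous matches by RANSAC and fit the third homography matrix H3
    from the remaining (second) feature point matching pairs."""
    src = np.float32([p1 for p1, _ in pairs])
    dst = np.float32([p2 for _, p2 in pairs])
    H3, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    inliers = mask.ravel().astype(bool)
    second_pairs = [pair for pair, keep in zip(pairs, inliers) if keep]
    return H3, second_pairs
```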
It should be noted that steps S104A and S104B may be performed before step S102, and the order of execution of steps S104A and S104B is not limited herein, as long as steps S104A and S104B are performed before step S104.
S104C: and judging whether the similarity condition is satisfied between the first homography matrix and the third homography matrix, and if so, executing step S104.
The first homography matrix is determined according to the internal reference matrix and the external reference information of the camera, and can be used as a reference of the homography matrix between two images to be spliced because the internal reference matrix and the external reference information of the camera can represent the property of the camera.
After determining the third homography matrix, whether the third homography matrix is accurate or not may be determined by judging whether a similarity condition is satisfied between the first homography matrix and the third homography matrix.
When the similarity condition is satisfied between the first homography matrix and the third homography matrix, it is indicated that the third homography matrix is not accurate enough, and in this case, in order to improve the stitching quality, two images to be stitched need to be stitched based on the first homography matrix to be stitched into a stitched image, that is, step S104 is executed.
The method comprises the steps of generating a first homography matrix by utilizing internal and external parameter information of a camera, generating a third homography matrix by utilizing a second feature point matching pair obtained by removing an error feature point matching pair from a feature point matching pair, determining the accuracy of the third homography matrix by judging whether a similarity condition is met between the first homography matrix and the third homography matrix, and if the third homography matrix is inaccurate, indicating that the second feature point matching pair is wrong, at the moment, removing the error feature point matching pair from the feature point matching pair again by utilizing the first homography matrix to obtain a correct first feature point matching pair, and then splicing two images to be spliced into a spliced image based on the first homography matrix and the first feature point matching pair, so that the quality of the spliced image is guaranteed, and the aim of improving the splicing quality is fulfilled.
In one implementation of the present invention, step S104C may include:
judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold value or not;
and/or,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
Since the matrix similarity can most represent whether the two matrices are similar, whether the two matrices are similar or not can be determined by judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold value, and whether the third homography matrix is accurate or not can be further determined.
Illustratively, the matrix similarity between the first homography matrix and the third homography matrix is calculated from the vector forms of the two matrices:

H_all_i = [h0  h1  h2  h3  h4  h5  h6  h7  h8]

wherein H_all_i is the vector form of the i-th homography matrix, sim1 is the matrix similarity between the first homography matrix and the third homography matrix, H_all_1 is the vector form of the first homography matrix, H_all_3 is the vector form of the third homography matrix, and h0 to h8 are the elements contained in the i-th homography matrix.
When the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold, the third homography matrix is not accurate enough.
Because the first two columns of elements in the homography matrix are in standard rotation perspective relation, the third column represents translation relation, the rotation perspective relation is generally in angle relation, namely the magnitude of the elements representing the rotation perspective relation is generally 0-1, and the translation relation is generally in distance relation, namely the magnitude of the elements representing the translation relation is generally greater than 1, when the matrix similarity of the two homography matrices is calculated, the similarity is greatly influenced by the elements representing the translation relation, and therefore, whether the two homography matrices are similar or not cannot be accurately indicated only by obtaining the matrix similarity.
Therefore, the element similarity between the elements representing the rotation perspective relationship of the first homography matrix and the third homography matrix can be further calculated to determine whether the third homography matrix is accurate.
Illustratively, the element similarity between the elements representing the rotational perspective relationship in the first homography matrix and the third homography matrix is calculated from:

H_r_i = [h0  h1  h3  h4  h6  h7]

wherein sim2 is the element similarity between the elements representing the rotational perspective relationship in the first homography matrix and the third homography matrix, H_r_1 is the vector form of the elements representing the rotational perspective relationship in the first homography matrix, H_r_3 is the vector form of those elements in the third homography matrix, H_r_i is the vector form of those elements in the i-th homography matrix, and h0, h1, h3, h4, h6 and h7 are the elements representing the rotational perspective relationship in the i-th homography matrix.
When the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value, the third homography matrix is not accurate enough. The first similarity threshold value and the second similarity threshold value may be the same or different, and are not limited in any way.
Therefore, whether the first homography matrix is similar to the third homography matrix is further accurately determined through a mode of calculating the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix.
Of course, it is also possible to calculate only the element similarity between the elements of the first homography matrix and the elements of the third homography matrix representing the rotation perspective relationship to determine whether the third homography matrix is accurate.
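The extract above does not reproduce the exact sim1/sim2 formulas, so the sketch below assumes a cosine similarity between the vectorized matrices; the measure and the thresholds are illustrative assumptions only:

```python
import numpy as np

def cosine_similarity(u, v):
    """Assumed similarity measure between two homography matrices in vector form."""
    u = np.asarray(u, dtype=float).ravel()
    v = np.asarray(v, dtype=float).ravel()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def homography_similarities(H1, H3):
    """Return sim1 (full-matrix similarity) and sim2 (rotation-perspective elements only);
    the patent compares these values against preset similarity thresholds."""
    rot_persp = [0, 1, 3, 4, 6, 7]                      # indices of h0 h1 h3 h4 h6 h7
    sim1 = cosine_similarity(H1, H3)
    sim2 = cosine_similarity(np.ravel(H1)[rot_persp], np.ravel(H3)[rot_persp])
    return sim1, sim2
```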
In addition, for the same stitching situation, where the same stitching situation means that when the two images to be stitched are shot the two cameras are the same, the positions and postures of the two cameras are the same, and the object distances of the two cameras are the same, the obtained first homography matrix can be used as a default homography matrix, and the two images to be stitched can be stitched directly using the default homography matrix.
It should be noted that, when it is determined that the element similarity between the elements representing the rotation perspective relationship in the first homography matrix and the third homography matrix is not smaller than the second preset similarity threshold, it is sufficient to indicate that the third homography matrix is accurate, and in this case, in order to reduce the calculation amount, the two images to be stitched may be stitched together based on the third homography matrix to form a stitched image.
With respect to the above method embodiment, the embodiment of the present invention further provides an image stitching device, as shown in fig. 10, where the device may include:
a determining module 201, configured to determine a feature point matching pair between two images to be stitched;
the acquisition module 202 is configured to acquire an internal reference matrix and external reference information of two cameras that capture the two images to be stitched;
the first homography matrix calculation module 203 is configured to calculate a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
a first feature point matching pair determining module 204, configured to remove an erroneous feature point matching pair from the feature point matching pair according to the first homography matrix, to obtain a first feature point matching pair;
and the stitching module 205 is configured to stitch the two images to be stitched into a stitched image according to the first feature point matching pair and the first homography matrix.
In the embodiment of the invention, feature point matching pairs between the two images to be spliced are determined; an internal reference matrix and external reference information of the two cameras that shoot the two images to be spliced are acquired; a first homography matrix between the two images to be spliced is calculated based on the acquired internal reference matrix and external reference information; erroneous feature point matching pairs are removed from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs; and the two images to be spliced are spliced into a spliced image according to the first feature point matching pairs and the first homography matrix. Because the first homography matrix is calculated from the internal reference matrix and the external reference information, which characterize the properties of the cameras themselves, the erroneous matching point pairs can be removed regardless of whether they are many or few, thereby improving the splicing quality of the spliced image.
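To make the cooperation of these modules concrete, the following is a rough Python/OpenCV sketch of the overall flow. It is not the patented implementation: compute_first_homography is a hypothetical helper standing in for the calibration-based computation described below, and the feature detector, matcher and thresholds are illustrative choices only.

import cv2
import numpy as np

def stitch(img1, img2, K1, K2, extrinsics, reproj_thresh=3.0):
    # 1. Determine feature point matching pairs between the two images.
    detector = cv2.ORB_create(4000)
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2-3. Compute the first homography from the intrinsic matrices and the
    #      extrinsic information (hypothetical helper, not defined here).
    H1 = compute_first_homography(K1, K2, extrinsics)

    # 4. Remove erroneous matches: keep pairs whose reprojection error under H1
    #    is below the distance threshold.
    proj = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H1).reshape(-1, 2)
    keep = np.linalg.norm(proj - pts2, axis=1) < reproj_thresh
    pts1, pts2 = pts1[keep], pts2[keep]

    # 5. Stitch according to the retained matches and the first homography; here
    #    the homography is refined on the inliers before warping (one possible way).
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (2 * w, h))
    canvas[:h, :w] = img2
    return canvas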
In one implementation manner of the embodiment of the present invention, the first homography matrix calculation module 203 may include:
a relation determining unit for determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the obtained external parameter information;
a parameter determining unit, configured to select, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information;
and the first calculation unit is used for calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
In an implementation manner of the embodiment of the present invention, the parameter determining unit may be specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between the optical centers of the two cameras is smaller than a preset optical center distance threshold, the posture information of the two cameras is selected from the acquired external parameter information;
the first computing unit may include:
a rotation matrix determining subunit, configured to determine rotation matrices of the two cameras according to pose information of the two cameras;
And the first calculating subunit is used for calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
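For this non-parallel-axis, small-baseline case, the homography induced by a pure rotation between the two views is the classical K2·R·K1^-1 construction. The brief sketch below assumes R1 and R2 are world-to-camera rotation matrices recovered from the posture information, which is an assumed representation.

import numpy as np

def rotation_induced_homography(K1, K2, R1, R2):
    # Relative rotation from camera 1 to camera 2 (world-to-camera convention assumed).
    R_rel = R2 @ R1.T
    # Homography induced by a pure rotation: x2 ~ K2 * R_rel * K1^-1 * x1.
    H1 = K2 @ R_rel @ np.linalg.inv(K1)
    return H1 / H1[2, 2]  # normalize so the bottom-right element is 1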
In an implementation manner of the embodiment of the present invention, the parameter determining unit may be specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
the apparatus may further include:
the object distance acquisition module is used for acquiring object distances of the two cameras before calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, wherein the object distances of the two cameras are the same;
the first computing unit may include:
a translation amount matrix determining subunit, configured to determine a translation amount matrix between the two images to be stitched according to the optical center coordinates of the two cameras, one of the acquired internal reference matrices, and the object distances of the two cameras, where the translation amount matrix characterizes a translation amount between feature points matched with the two images to be stitched;
A second homography matrix determining subunit, configured to determine a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and the second calculation subunit is used for calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
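For this parallel-axis, large-baseline case, one plausible realization (an illustrative assumption, not the patent's exact derivation) of the translation amount matrix and the second homography is the plane-induced model H1 = K2·(I + t·n^T/d)·K1^-1, in which K2·K1^-1 plays the role of the second homography matrix and the remaining term carries the translation amount.

import numpy as np

def translation_induced_homography(K1, K2, C1, C2, object_distance):
    # Offset between the optical centers, expressed in the shared camera
    # orientation; the sign convention (C1 - C2) is assumed here.
    t = np.asarray(C1, dtype=float) - np.asarray(C2, dtype=float)
    n = np.array([0.0, 0.0, 1.0])  # fronto-parallel scene plane at the object distance
    # Second homography: the part determined by the intrinsic matrices alone.
    H2 = K2 @ np.linalg.inv(K1)
    # Translation amount term induced by the baseline and the object distance.
    T = K2 @ np.outer(t, n) @ np.linalg.inv(K1) / object_distance
    H1 = H2 + T
    return H1 / H1[2, 2]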
In one implementation manner of the embodiment of the present invention, the first feature point matching pair determining module 204 may include:
the projection point determining unit is used for aiming at each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
a distance calculation unit configured to calculate a distance between the other feature point and the projection point;
the judging unit is used for judging whether the distance is smaller than a preset distance threshold value, if yes, the determining unit is triggered, and if no, the removing unit is triggered;
the determining unit is used for determining the feature point matching pair as a first feature point matching pair;
the removing unit is used for removing the characteristic point matching pair as an error characteristic point matching pair.
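The three units above amount to a simple per-pair loop; a sketch follows, in which the distance threshold value is illustrative.

import numpy as np

def filter_matching_pairs(pairs, H1, dist_thresh=3.0):
    # pairs: iterable of ((x1, y1), (x2, y2)) feature point matching pairs.
    first_pairs, erroneous_pairs = [], []
    for p1, p2 in pairs:
        # Projection point determining: project p1 into the other image via H1.
        v = H1 @ np.array([p1[0], p1[1], 1.0])
        proj = v[:2] / v[2]
        # Distance calculation: distance between the other feature point and the projection.
        dist = np.linalg.norm(proj - np.asarray(p2, dtype=float))
        # Judging: keep the pair as a first feature point matching pair if the
        # distance is below the preset threshold; otherwise remove it as erroneous.
        (first_pairs if dist < dist_thresh else erroneous_pairs).append((p1, p2))
    return first_pairs, erroneous_pairs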
In an implementation manner of the embodiment of the present invention, the apparatus may further include:
a second feature point matching pair determining module, configured to, before the erroneous feature point matching pair is removed from the feature point matching pair according to the first homography matrix to obtain the first feature point matching pair, remove the erroneous feature point matching pair from the feature point matching pair to obtain a second feature point matching pair;
the third homography matrix determining module is used for calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
and the judging module is configured to judge whether a similarity condition is satisfied between the first homography matrix and the third homography matrix, and if so, trigger the first feature point matching pair determining module 204.
In an implementation manner of the embodiment of the present invention, the determining module may be specifically configured to:
judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or the number of the groups of groups,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
The embodiment of the invention also provides an electronic device, as shown in fig. 11, including: a processor 1101 and a memory 1102, wherein the memory 1102 is used for storing a computer program; the processor 1101 is configured to execute a program stored on the memory 1102, and implement the following steps:
determining characteristic point matching pairs between two images to be spliced;
acquiring an internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
and splicing the two images to be spliced into a spliced image according to the first feature point matching pair and the first homography matrix.
In the embodiment of the invention, feature point matching pairs between the two images to be spliced are determined; an internal reference matrix and external reference information of the two cameras that shoot the two images to be spliced are acquired; a first homography matrix between the two images to be spliced is calculated based on the acquired internal reference matrix and external reference information; erroneous feature point matching pairs are removed from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs; and the two images to be spliced are spliced into a spliced image according to the first feature point matching pairs and the first homography matrix. Because the first homography matrix is calculated from the internal reference matrix and the external reference information, which characterize the properties of the cameras themselves, the erroneous matching point pairs can be removed regardless of whether they are many or few, thereby improving the splicing quality of the spliced image.
In an implementation manner of the embodiment of the present invention, the step of calculating the first homography matrix between the two images to be stitched based on the acquired internal reference matrix and external reference information may include:
determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external parameter information;
according to a preset selection rule, selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information;
and calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
In an implementation manner of the embodiment of the present invention, the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information may include:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
The step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining rotation matrices of the two cameras according to the posture information of the two cameras;
and calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
In an implementation manner of the embodiment of the present invention, the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information may include:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
before the step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, the method may further include:
acquiring object distances of the two cameras, wherein the object distances of the two cameras are the same;
The step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter may include:
determining a translation amount matrix between the two images to be spliced according to the optical center coordinates of the two cameras, one of the acquired internal reference matrixes and the object distances of the two cameras, wherein the translation amount matrix represents the translation amount between the matched feature points of the two images to be spliced;
determining a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
In an implementation manner of the embodiment of the present invention, the step of removing the erroneous feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair may include:
for each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to the image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix, so as to obtain a projected point;
Calculating the distance between the other characteristic point and the projection point;
judging whether the distance is smaller than a preset distance threshold value or not;
if yes, determining the feature point matching pair as a first feature point matching pair;
if not, the feature point matching pair is removed as an error feature point matching pair.
In an implementation manner of the embodiment of the present invention, before the step of removing the erroneous feature point matching pair from the feature point matching pair according to the first homography matrix to obtain the first feature point matching pair, the method may further include:
removing the error feature point matching pair from the feature point matching pair to obtain a second feature point matching pair;
calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
judging whether the similarity condition is met between the first homography matrix and the third homography matrix;
and if so, executing the step of removing the error characteristic point matching pair from the characteristic point matching pair according to the first homography matrix to obtain a first characteristic point matching pair.
In an implementation manner of the embodiment of the present invention, the step of determining whether the similarity condition is satisfied between the first homography matrix and the third homography matrix may include:
Judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or the number of the groups of groups,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
Note that, the Memory mentioned in the above electronic device may include a random access Memory (Random Access Memory, RAM) or a Non-Volatile Memory (NVM), for example, at least one magnetic disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps are realized:
determining characteristic point matching pairs between two images to be spliced;
acquiring an internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
and splicing the two images to be spliced into a spliced image according to the first feature point matching pair and the first homography matrix.
In the embodiment of the invention, feature point matching pairs between the two images to be spliced are determined; an internal reference matrix and external reference information of the two cameras that shoot the two images to be spliced are acquired; a first homography matrix between the two images to be spliced is calculated based on the acquired internal reference matrix and external reference information; erroneous feature point matching pairs are removed from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs; and the two images to be spliced are spliced into a spliced image according to the first feature point matching pairs and the first homography matrix. Because the first homography matrix is calculated from the internal reference matrix and the external reference information, which characterize the properties of the cameras themselves, the erroneous matching point pairs can be removed regardless of whether they are many or few, thereby improving the splicing quality of the spliced image.
In an implementation manner of the embodiment of the present invention, the step of calculating the first homography matrix between the two images to be stitched based on the acquired internal reference matrix and external reference information may include:
determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external parameter information;
according to a preset selection rule, selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information;
and calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
In an implementation manner of the embodiment of the present invention, the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information may include:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
The step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining rotation matrices of the two cameras according to the posture information of the two cameras;
and calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
In an implementation manner of the embodiment of the present invention, the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information may include:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
before the step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, the method may further include:
acquiring object distances of the two cameras, wherein the object distances of the two cameras are the same;
The step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter may include:
determining a translation amount matrix between the two images to be spliced according to the optical center coordinates of the two cameras, one of the acquired internal reference matrixes and the object distances of the two cameras, wherein the translation amount matrix represents the translation amount between the matched feature points of the two images to be spliced;
determining a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
In an implementation manner of the embodiment of the present invention, the step of removing the erroneous feature point matching pair from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair may include:
for each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
Calculating the distance between the other characteristic point and the projection point;
judging whether the distance is smaller than a preset distance threshold value or not;
if yes, determining the feature point matching pair as a first feature point matching pair;
if not, the feature point matching pair is removed as an error feature point matching pair.
In an implementation manner of the embodiment of the present invention, before the step of removing the erroneous feature point matching pair from the feature point matching pair according to the first homography matrix to obtain the first feature point matching pair, the method may further include:
removing the error feature point matching pair from the feature point matching pair to obtain a second feature point matching pair;
calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
judging whether the similarity condition is met between the first homography matrix and the third homography matrix;
and if so, executing the step of removing the error characteristic point matching pair from the characteristic point matching pair according to the first homography matrix to obtain a first characteristic point matching pair.
In an implementation manner of the embodiment of the present invention, the step of determining whether the similarity condition is satisfied between the first homography matrix and the third homography matrix may include:
Judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or the number of the groups of groups,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner, and for identical or similar parts of the embodiments, reference may be made to one another; each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (13)

1. A method of image stitching, the method comprising:
determining characteristic point matching pairs between two images to be spliced;
acquiring an internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
Splicing the two images to be spliced into a spliced image according to the first feature point matching pair and the first homography matrix;
the step of calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information comprises the following steps:
determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the acquired external parameter information; according to a preset selection rule, selecting the external parameter corresponding to the posture relation and the relative position relation from the obtained external parameter information; and calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
2. The method according to claim 1, wherein the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information includes:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
The step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining rotation matrices of the two cameras according to the posture information of the two cameras;
and calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
3. The method according to claim 1, wherein the step of selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information comprises:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
before the step of calculating the first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, the method further comprises:
acquiring object distances of the two cameras, wherein the object distances of the two cameras are the same;
The step of calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter comprises the following steps:
determining a translation amount matrix between the two images to be spliced based on the optical center coordinates of the two cameras, one of the acquired internal reference matrices, and the object distances of the two cameras, wherein the translation amount matrix represents the translation amount between the matched feature points of the two images to be spliced;
determining a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
4. The method of claim 1, wherein the step of removing erroneous feature point matching pairs from the feature point matching pairs based on the first homography matrix, results in first feature point matching pairs, comprises:
for each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
Calculating the distance between the other characteristic point and the projection point;
judging whether the distance is smaller than a preset distance threshold value or not;
if yes, determining the feature point matching pair as a first feature point matching pair;
if not, the feature point matching pair is removed as an error feature point matching pair.
5. The method of claim 1, wherein prior to the step of removing erroneous feature point matching pairs from the feature point matching pairs according to the first homography matrix, the method further comprises:
removing erroneous feature point matching pairs from the feature point matching pairs to obtain a second feature point matching pair;
calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
judging whether the similarity condition is met between the first homography matrix and the third homography matrix;
and if so, executing the step of removing the error characteristic point matching pair from the characteristic point matching pair according to the first homography matrix to obtain a first characteristic point matching pair.
6. The method of claim 5, wherein the step of determining whether a similarity condition is satisfied between the first homography matrix and the third homography matrix comprises:
Judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or the number of the groups of groups,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
7. An image stitching device, the device comprising:
the determining module is used for determining characteristic point matching pairs between two images to be spliced;
the acquisition module is used for acquiring the internal reference matrix and external reference information of two cameras for shooting the two images to be spliced;
the first homography matrix calculation module is used for calculating a first homography matrix between the two images to be spliced based on the acquired internal reference matrix and external reference information;
the first feature point matching pair determining module is used for removing error feature point matching pairs from the feature point matching pairs according to the first homography matrix to obtain first feature point matching pairs;
the splicing module is used for splicing the two images to be spliced into a spliced image according to the first characteristic point matching pair and the first homography matrix;
The first homography matrix calculation module includes:
a relation determining unit for determining the posture relation and the relative position relation of the two cameras according to the optical center coordinates and the optical axis direction of the two cameras in the obtained external parameter information;
the parameter determining unit is used for selecting, according to a preset selection rule, the external parameter corresponding to the posture relation and the relative position relation from the acquired external parameter information;
and the first calculation unit is used for calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter.
8. The device according to claim 7, wherein the parameter determination unit is specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are not parallel, and the relative position relation is that the distance between optical centers of the two cameras is smaller than a preset optical center distance threshold, posture information of the two cameras is selected from the acquired external parameter information;
the first computing unit includes:
a rotation matrix determining subunit, configured to determine rotation matrices of the two cameras according to pose information of the two cameras;
And the first calculating subunit is used for calculating a first homography matrix between the two images to be spliced according to the determined rotation matrix and the acquired internal reference matrix.
9. The device according to claim 7, wherein the parameter determination unit is specifically configured to:
when the posture relation is that the optical axis directions of the two cameras are parallel, and the relative position relation is that the distance between the optical centers of the two cameras is not smaller than a preset optical center distance threshold, the optical center coordinates of the two cameras are selected from the acquired external parameter information;
the apparatus further comprises:
the object distance acquisition module is used for acquiring object distances of the two cameras before calculating a first homography matrix between the two images to be spliced according to the acquired internal parameter matrix and the external parameter, wherein the object distances of the two cameras are the same;
the first computing unit includes:
a translation amount matrix determining subunit, configured to determine a translation amount matrix between the two images to be stitched according to the optical center coordinates of the two cameras, one of the acquired internal reference matrices, and the object distances of the two cameras, where the translation amount matrix characterizes a translation amount between feature points matched with the two images to be stitched;
A second homography matrix determining subunit, configured to determine a second homography matrix between the two images to be spliced according to the acquired internal reference matrix;
and the second calculation subunit is used for calculating a first homography matrix between the two images to be spliced according to the translation quantity matrix and the second homography matrix.
10. The apparatus of claim 7, wherein the first feature point matching pair determination module comprises:
the projection point determining unit is used for aiming at each pair of feature point matching pairs, projecting one feature point in the feature point matching pair to an image to be spliced where the other feature point in the feature point matching pair is located through the first homography matrix to obtain projected points;
a distance calculation unit configured to calculate a distance between the other feature point and the projection point;
the judging unit is used for judging whether the distance is smaller than a preset distance threshold value, if yes, the determining unit is triggered, and if no, the removing unit is triggered;
the determining unit is used for determining the feature point matching pair as a first feature point matching pair;
the removing unit is used for removing the characteristic point matching pair as an error characteristic point matching pair.
11. The apparatus of claim 7, wherein the apparatus further comprises:
the second feature point matching pair determining module is used for removing the error feature point matching pair from the feature point matching pair before the error feature point matching pair is removed from the feature point matching pair according to the first homography matrix to obtain a first feature point matching pair, so as to obtain a second feature point matching pair;
the third homography matrix determining module is used for calculating a third homography matrix between the two images to be spliced according to the second characteristic point matching pair;
the judging module is used for judging whether the similarity condition is met between the first homography matrix and the third homography matrix, and if so, the first feature point matching pair determining module is triggered.
12. The apparatus of claim 11, wherein the determining module is specifically configured to:
judging whether the matrix similarity between the first homography matrix and the third homography matrix is smaller than a first preset similarity threshold;
and/or the number of the groups of groups,
and judging whether the element similarity between the elements representing the rotation perspective relation in the first homography matrix and the third homography matrix is smaller than a second preset similarity threshold value.
13. An electronic device comprising a processor and a memory, wherein,
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-6 when executing a computer program stored on a memory.
CN201711229638.2A 2017-11-29 2017-11-29 Image stitching method and device and electronic equipment Active CN109840884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711229638.2A CN109840884B (en) 2017-11-29 2017-11-29 Image stitching method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711229638.2A CN109840884B (en) 2017-11-29 2017-11-29 Image stitching method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109840884A CN109840884A (en) 2019-06-04
CN109840884B (en) 2023-10-10

Family

ID=66882368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711229638.2A Active CN109840884B (en) 2017-11-29 2017-11-29 Image stitching method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109840884B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335315B (en) * 2019-06-27 2021-11-02 Oppo广东移动通信有限公司 Image processing method and device and computer readable storage medium
CN110717936B (en) * 2019-10-15 2023-04-28 哈尔滨工业大学 Image stitching method based on camera attitude estimation
CN111047510B (en) 2019-12-17 2023-02-14 大连理工大学 Large-field-angle image real-time splicing method based on calibration
CN111583118B (en) * 2020-05-13 2023-09-29 创新奇智(北京)科技有限公司 Image stitching method and device, storage medium and electronic equipment
CN112308777A (en) * 2020-10-16 2021-02-02 易思维(杭州)科技有限公司 Rapid image splicing method for plane and plane-like parts
CN112581369A (en) * 2020-12-24 2021-03-30 ***股份有限公司 Image splicing method and device
CN112734862A (en) * 2021-02-10 2021-04-30 北京华捷艾米科技有限公司 Depth image processing method and device, computer readable medium and equipment
CN112950481B (en) * 2021-04-22 2022-12-06 上海大学 Water bloom shielding image data collection method based on image mosaic network
CN113222878B (en) * 2021-06-04 2023-09-05 杭州海康威视数字技术股份有限公司 Image stitching method
CN115829833B (en) * 2022-08-02 2024-04-26 爱芯元智半导体(上海)有限公司 Image generation method and mobile device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710932A (en) * 2009-12-21 2010-05-19 深圳华为通信技术有限公司 Image stitching method and device
CN107316275A (en) * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary

Also Published As

Publication number Publication date
CN109840884A (en) 2019-06-04

Similar Documents

Publication Publication Date Title
CN109840884B (en) Image stitching method and device and electronic equipment
US10559090B2 (en) Method and apparatus for calculating dual-camera relative position, and device
EP3028252B1 (en) Rolling sequential bundle adjustment
CN111457886B (en) Distance determination method, device and system
CN105934757B (en) A kind of method and apparatus of the key point for detecting the first image and the incorrect incidence relation between the key point of the second image
JPH09212648A (en) Moving image processing method
CN110288511B (en) Minimum error splicing method and device based on double camera images and electronic equipment
EP3093822B1 (en) Displaying a target object imaged in a moving picture
CN113222878B (en) Image stitching method
CN111445537B (en) Calibration method and system of camera
KR20120023052A (en) Method and device for determining shape congruence in three dimensions
KR20240089161A (en) Filming measurement methods, devices, instruments and storage media
US20220405968A1 (en) Method, apparatus and system for image processing
CN111818260B (en) Automatic focusing method and device and electronic equipment
CN111028296A (en) Method, device, equipment and storage device for estimating focal length value of dome camera
CN116188599A (en) Calibration plate generation method, camera calibration method, device, equipment and calibration plate
CN109242894B (en) Image alignment method and system based on mobile least square method
CN111951211A (en) Target detection method and device and computer readable storage medium
CN113920196A (en) Visual positioning method and device and computer equipment
CN102110292B (en) Zoom lens calibration method and device in virtual sports
CN112991411A (en) Image registration method and apparatus, and storage medium
CN111814869B (en) Method and device for synchronous positioning and mapping, electronic equipment and storage medium
CN112819901B (en) Infrared camera self-calibration method based on image edge information
CN115278071B (en) Image processing method, device, electronic equipment and readable storage medium
CN116051634A (en) Visual positioning method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant