CN113033590A - Image feature matching method and device, image processing equipment and storage medium

Publication number: CN113033590A
Application number: CN201911357846.XA
Authority: CN (China)
Prior art keywords: image, feature, point, rotation angle, initial point
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 龙学雄, 易雨亭, 李建禹
Assignee: Hangzhou Hikrobot Technology Co Ltd
Application filed by Hangzhou Hikrobot Technology Co Ltd; priority to CN201911357846.XA

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/26 - Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes


Abstract

The application discloses an image feature matching method and device, an image processing device, and a storage medium, belonging to the technical field of image processing. According to the method and device, most wrongly matched initial point pairs are screened out of a plurality of initial point pairs according to the rotation angle of the image acquisition device and the direction angle difference between the direction angles of the two feature points included in each initial point pair, yielding the feature point pairs that represent the same physical point. The scheme exploits the principle that, when the optical axis of the camera is approximately perpendicular to the observation plane, the direction angle difference is approximately consistent with the rotation angle, and can thereby eliminate mismatches that are difficult for other feature matching methods to remove. In addition, this screening of feature point pairs is not affected by the sparsity of the feature points in the image or by the inlier rate, so its robustness is high.

Description

Image feature matching method and device, image processing equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image feature matching method and apparatus, an image processing device, and a storage medium.
Background
With the development of image processing technology, feature matching is applied more and more widely, for example in unmanned aerial vehicle (UAV) image stitching and Automatic Guided Vehicle (AGV) map construction, where it can be applied to process the acquired images. Here, feature matching means matching feature points in multiple images to obtain feature point pairs that represent the same physical point.
In the related art, when feature matching is performed on multiple images, a brute-force matching method is adopted first, that is, feature points are matched according to the similarity of their descriptors to obtain a plurality of initial point pairs; a descriptor can be computed with an operator that encodes and describes the local information of a feature point. The plurality of initial point pairs is then screened to obtain the feature point pairs. For screening the initial point pairs, commonly used methods include bidirectional matching, the GMS (Grid-based Motion Statistics) method, the RANSAC (Random Sample Consensus) method, and the like.
Bidirectional matching requires two passes of brute-force matching, which is time-consuming. The GMS method relies on the neighborhood relations among the initial point pairs for screening, so it is difficult to apply when feature points are sparse. The RANSAC method builds a model from a small randomly selected subset of initial point pairs and refines it iteratively with the remaining pairs to screen them; when the proportion of initial point pairs that actually represent the same physical point is low, the number of iterations is high and the method is time-consuming. Therefore, the image feature matching methods commonly used in the related art remain time-consuming, and their robustness needs to be improved.
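For concreteness, the brute-force baseline described above can be sketched with OpenCV; the library, the ORB operator, and all parameter values here are illustrative assumptions, not part of the disclosure.

```python
import cv2

# Load the two images to be matched (hypothetical file names).
img1 = cv2.imread("first.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("second.png", cv2.IMREAD_GRAYSCALE)

# Detect feature points and compute their descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching on descriptor similarity (Hamming distance for ORB).
# Every match returned here is an "initial point pair" in the sense of this
# document; it still has to be screened before it can be trusted.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
initial_pairs = bf.match(des1, des2)
```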
Disclosure of Invention
The application provides an image feature matching method, an image feature matching apparatus, an image processing device, and a storage medium, which can solve the problems of time consumption and low robustness of feature matching in the related art. The technical scheme is as follows:
in one aspect, an image feature matching method is provided, and the method includes:
acquiring a plurality of initial point pairs, where each initial point pair in the plurality of initial point pairs includes two feature points, one of the two feature points being a feature point in a first image and the other being a feature point in a second image;
determining a rotation angle of an image acquisition device within a first period, the first period being the period between the acquisition instant of the first image and the acquisition instant of the second image;
and determining, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, the feature point pairs among the plurality of initial point pairs that represent the same physical point.
Optionally, the obtaining a plurality of initial point pairs includes:
determining a descriptor of each feature point in the first image according to the main direction of each feature point in the first image;
determining a descriptor of each feature point in the second image according to the main direction of each feature point in the second image;
and determining the plurality of initial point pairs according to the descriptor of each feature point in the first image and the descriptor of each feature point in the second image.
Optionally, the determining a rotation angle of the image acquisition device within the first period comprises:
receiving motion data acquired by the image acquisition device through a motion sensor included in the image acquisition device in the first time period;
estimating the rotation angle of the image acquisition device within the first period of time from the motion data.
Optionally, the determining a rotation angle of the image acquisition device within the first period comprises:
estimating the rotation angle of the image capturing apparatus in the first period of time from the direction angles of the two feature points included in each of the plurality of initial point pairs.
Optionally, the determining, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, of the feature point pairs among the plurality of initial point pairs that represent the same physical point includes:
determining the feature point pairs from the plurality of initial point pairs according to at least one of a descriptor distance and a composite distance between the two feature points included in each initial point pair, together with the rotation angle and the direction angle difference corresponding to each initial point pair.
Optionally, the determining the feature point pairs from the plurality of initial point pairs according to at least one of the descriptor distance and the composite distance, together with the rotation angle and the direction angle difference, includes:
determining the difference between the direction angle difference corresponding to each initial point pair and the rotation angle, to obtain the rotation angle difference corresponding to each initial point pair;
acquiring, from the plurality of initial point pairs, the initial point pairs whose corresponding rotation angle difference is smaller than an angle threshold, and taking the acquired initial point pairs as a plurality of first candidate point pairs;
acquiring, from the plurality of first candidate point pairs, the first candidate point pairs whose descriptor distance between the two included feature points is smaller than a descriptor distance threshold, and taking the acquired first candidate point pairs as a plurality of second candidate point pairs;
and acquiring, from the plurality of second candidate point pairs, the second candidate point pairs whose composite distance between the two included feature points is smaller than a composite distance threshold, and taking the acquired second candidate point pairs as the feature point pairs.
Optionally, before the acquiring, from the plurality of second candidate point pairs, of the second candidate point pairs whose composite distance between the two included feature points is smaller than the composite distance threshold and taking them as the feature point pairs, the method further includes:
acquiring a first weight and a second weight, where the first weight is the weight corresponding to the rotation angle difference and the second weight is the weight corresponding to the descriptor distance;
and determining the composite distance between the two feature points included in each second candidate point pair according to the first weight, the rotation angle difference corresponding to that second candidate point pair, the second weight, and the descriptor distance between the two feature points included in that second candidate point pair.
In another aspect, an image feature matching apparatus is provided, the apparatus including:
an obtaining module, configured to obtain a plurality of initial point pairs, where each of the plurality of initial point pairs includes two feature points, one of the two feature points is a feature point in a first image, and the other feature point is a feature point in a second image;
a first determination module for determining a rotation angle of an image acquisition device within a first period, the first period being a period between an acquisition instant of the first image and an acquisition instant of the second image;
and a second determining module, configured to determine, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, the feature point pairs among the plurality of initial point pairs that represent the same physical point.
Optionally, the obtaining module includes:
a first determining unit, configured to determine a descriptor of each feature point in the first image according to a main direction of each feature point in the first image;
a second determining unit, configured to determine a descriptor of each feature point in the second image according to the main direction of each feature point in the second image;
a third determining unit, configured to determine the plurality of initial point pairs according to the descriptor of each feature point in the first image and the descriptor of each feature point in the second image.
Optionally, the first determining module includes:
a receiving unit, configured to receive motion data acquired by the image acquisition device through a motion sensor included in the image acquisition device within the first period;
a first estimation unit for estimating the rotation angle of the image capture device within the first period based on the motion data.
Optionally, the first determining module includes:
a second estimation unit, configured to estimate the rotation angle of the image acquisition device in the first period based on the direction angles of the two feature points included in each of the plurality of initial point pairs.
Optionally, the second determining module includes:
a fourth determining unit, configured to determine the feature point pairs from the plurality of initial point pairs according to at least one of the descriptor distance and the composite distance between the two feature points included in each initial point pair, together with the rotation angle and the direction angle difference corresponding to each initial point pair.
Optionally, the fourth determining unit includes:
a first determining subunit, configured to determine the difference between the direction angle difference corresponding to each initial point pair and the rotation angle, to obtain the rotation angle difference corresponding to each initial point pair;
a second determining subunit, configured to acquire, from the plurality of initial point pairs, the initial point pairs whose corresponding rotation angle difference is smaller than an angle threshold, and take the acquired initial point pairs as a plurality of first candidate point pairs;
a third determining subunit, configured to acquire, from the plurality of first candidate point pairs, the first candidate point pairs whose descriptor distance between the two included feature points is smaller than a descriptor distance threshold, and take the acquired first candidate point pairs as a plurality of second candidate point pairs;
and a fourth determining subunit, configured to acquire, from the plurality of second candidate point pairs, the second candidate point pairs whose composite distance between the two included feature points is smaller than a composite distance threshold, and take the acquired second candidate point pairs as the feature point pairs.
Optionally, the fourth determining unit further includes:
an acquiring subunit, configured to acquire a first weight and a second weight, where the first weight is the weight corresponding to the rotation angle difference and the second weight is the weight corresponding to the descriptor distance;
and a fifth determining subunit, configured to determine the composite distance between the two feature points included in each second candidate point pair according to the first weight, the rotation angle difference corresponding to that second candidate point pair, the second weight, and the descriptor distance between the two feature points included in that second candidate point pair.
In another aspect, an image processing device is provided. The image processing device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is used to store a computer program, and the processor is used to execute the program stored in the memory to implement the steps of the image feature matching method described above.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the image feature matching method described above.
In another aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of the image feature matching method described above.
The technical scheme provided by the application can at least bring the following beneficial effects:
in the present application, most of the mismatched initial point pairs may be screened out of the plurality of initial point pairs according to the rotation angle of the image capturing device and the direction angle difference between the direction angles of the two feature points included in each initial point pair, so as to obtain the feature point pairs representing the same physical point. The scheme exploits the principle that, when the optical axis of the camera is approximately perpendicular to the observation plane, the direction angle difference is approximately consistent with the rotation angle, and can thereby eliminate mismatches that are difficult for other feature matching methods to remove. In addition, this screening of feature point pairs is not affected by the sparsity of the feature points in the image or by the inlier rate, so its robustness is high.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram for determining the direction angle of a feature point A according to an embodiment of the present application;
fig. 2 is a schematic diagram for determining the direction angle of a feature point B according to an embodiment of the present application;
fig. 3 is a schematic diagram of an implementation environment of an image feature matching method provided in an embodiment of the present application;
fig. 4 is a flowchart of an image feature matching method provided in an embodiment of the present application;
FIG. 5 is a flow chart of another image feature matching method provided in the embodiments of the present application;
fig. 6 is a schematic structural diagram of an image feature matching apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, some terms referred to in the embodiments of the present application are explained to facilitate understanding.
Observation plane: a plane perpendicular to the optical axis of the camera mounted on the image acquisition device. For example, when the image acquisition device is an unmanned aerial vehicle, the observation plane may be a plane parallel to the lower surface of the drone; when the image acquisition device is an AGV, the observation plane may be a plane parallel to, or coincident with, the bearing surface of the AGV.
Rotation angle: the angle through which the image acquisition device rotates, in a plane parallel to the observation plane, while it moves.
Feature point: a point with salient features in a local area of the image.
Feature matching: matching the feature points on different images to obtain feature point pairs representing the same physical point.
Initial point pair: a matched point pair obtained by preliminary matching of the feature points on different images.
Feature point pair: a matched point pair, obtained by screening the initial point pairs, that can be used to represent the same physical point.
Inlier rate: the ratio of the feature point pairs obtained through feature matching to the initial point pairs.
Main direction: the stable direction of a feature point within its local region; the main direction gives the feature rotational invariance.
Direction angle: the included angle between the main direction and a reference direction. Illustratively, referring to fig. 1 and 2, the u-axis and the v-axis are the two coordinate axes of the pixel coordinate system of an image, and the positive u-axis direction may be taken as the reference direction. Fig. 1 shows a first captured image, on which feature point A lies. Assuming the main direction of feature point A is the vector $\vec{d}_A$, the direction angle of feature point A is the angle $\theta_A$ between $\vec{d}_A$ and the positive u-axis. Fig. 2 shows a second image; feature point B on the second image and feature point A shown in fig. 1 may represent the same physical point. Assuming the main direction of feature point B is the vector $\vec{d}_B$, the direction angle of feature point B is the angle $\theta_B$ between $\vec{d}_B$ and the positive u-axis.
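In implementation terms, with the positive u-axis as the reference direction, the direction angle can be read directly off the main-direction vector; a minimal sketch, where the component names du and dv are hypothetical:

```python
import math

def direction_angle(du, dv):
    """Angle in [0, 360) degrees between the main-direction vector (du, dv)
    and the positive u-axis of the pixel coordinate system."""
    return math.degrees(math.atan2(dv, du)) % 360.0
```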
Before explaining the image feature matching method provided by the embodiment of the present application in detail, an application scenario and an implementation environment provided by the embodiment of the present application are introduced.
With the development of image processing technology, feature matching is more and more widely applied to the image processing technology, for example, in unmanned aerial vehicle image splicing and AGV map construction, feature matching can be applied to process images acquired by image acquisition equipment.
For example, when scanning the ground with an unmanned aerial vehicle to obtain ground images, the drone generally moves in a plane parallel to the ground, that is, the optical axis of the camera carried on the drone may remain perpendicular to the ground. In this case, the angle through which the drone rotates in that plane between the acquisition of two images may be referred to as the rotation angle. The direction angle difference between the direction angles of the two feature points included in a feature point pair corresponding to the two images is approximately equal to the rotation angle; in other words, the change in the direction angles of the two feature points is approximately equal to the change in the rotation direction of the image capturing device. In this case, the image processing device may screen matched feature point pairs according to the image feature matching method provided in the embodiments of the present application, and after obtaining the feature point pairs that can be used to represent the same physical point, may perform image stitching or other processing on the acquired images.
For another example, when an AGV is used to capture images looking down or up, the optical axis of the camera mounted on the AGV is also generally perpendicular to the observation plane. In this case, the angle of rotation produced by the AGV's movement is also approximately equal to the direction angle difference between the direction angles of the two feature points included in a feature point pair. The image processing device may likewise screen matched feature point pairs according to the image feature matching method provided in the embodiments of the present application, and after obtaining the feature point pairs that can be used to represent the same physical point, may stitch the acquired images to construct a map, and so on.
As is apparent from the foregoing description, when the optical axis of the camera mounted on the image acquisition device is perpendicular to the observation plane, the change in the direction angle of a feature point approximately coincides with the rotation of the image acquisition device. The principle is explained next.
The direction vector of the main direction of a feature point on an image can be represented by the pixel coordinates of two pixel points. These two pixel points have corresponding world coordinates in three-dimensional space, with the projection relations:

$$z_1 [u_1, v_1, 1]^T = K X_1 \tag{1}$$

$$z_2 [u_2, v_2, 1]^T = K X_2 \tag{2}$$

$$\mathrm{vec}_1 = [u_2 - u_1, \; v_2 - v_1]^T \tag{3}$$

where K is the internal parameter matrix of the image acquisition device; $X_1 = [x_1, y_1, z_1]^T$ and $X_2 = [x_2, y_2, z_2]^T$ respectively represent the world coordinates of the two pixel points corresponding to the main direction of a feature point on the first image; $[u_1, v_1, 1]$ and $[u_2, v_2, 1]$ respectively represent the normalized pixel coordinates of the two pixel points; $z_1$ and $z_2$ respectively represent the coordinates of the two pixel points on the z-axis of the camera coordinate system; and the vector $\mathrm{vec}_1$ is the direction vector of the main direction.

When the image acquisition device moves, the new projection relations are:

$$z'_1 [u'_1, v'_1, 1]^T = K (R X_1 + T) \tag{4}$$

$$z'_2 [u'_2, v'_2, 1]^T = K (R X_2 + T) \tag{5}$$

$$\mathrm{vec}_2 = [u'_2 - u'_1, \; v'_2 - v'_1]^T \tag{6}$$

where R and T are external parameters of the image acquisition device, R being a rotation matrix and T a translation vector; $RX_1 + T$ and $RX_2 + T$ respectively represent the world coordinates of the two aforementioned pixel points after the movement of the image acquisition device; $[u'_1, v'_1, 1]$ and $[u'_2, v'_2, 1]$ respectively represent the pixel coordinates of the two pixel points after the movement; and the vector $\mathrm{vec}_2$ is the direction vector of the main direction after the movement.

Since the optical axis of the camera mounted on the image acquisition device is perpendicular to the observation plane, $z_1 = z_2 = z'_1 = z'_2 = z$. Let

$$\bar{K} = \begin{bmatrix} f_x & 0 \\ 0 & f_y \end{bmatrix}, \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

At this time, the direction vector of formula (3) can be converted into formula (7), and the direction vector of formula (6) can be converted into formula (8):

$$\mathrm{vec}_1 = \frac{1}{z} \bar{K} \begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix} \tag{7}$$

$$\mathrm{vec}_2 = \frac{1}{z} \bar{K} R(\theta) \begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix} \tag{8}$$

From formulas (7) and (8), formula (9) can be derived:

$$\mathrm{vec}_2 = \bar{K} R(\theta) \bar{K}^{-1} \, \mathrm{vec}_1 \tag{9}$$

where $\theta$ denotes the rotation angle of the image acquisition device, $0° \leq \theta < 360°$.

When the image acquisition device rotates about the z-axis, that is, about the optical axis of the camera, and the aspect ratio of the image acquired by the image acquisition device is consistent with that of the real object, $f_x$ and $f_y$ in the internal parameter K are equal and $c_x$ and $c_y$ are 0. In this case, the derivation result shown in formula (10) can be obtained:

$$\mathrm{vec}_2 = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \mathrm{vec}_1 = R(\theta)\, \mathrm{vec}_1 \tag{10}$$

In formula (10), $\mathrm{vec}_1$ is the direction vector of the main direction of a feature point on the first image, $\mathrm{vec}_2$ is the direction vector of the main direction of the feature point on the second image, and $R(\theta)$ represents the rotational change of the direction vector of the main direction, where $\theta$ is the rotation angle of the image acquisition device. It can be seen from formula (10) that the rotation of the direction vector of the main direction approximately coincides with the rotation angle of the image acquisition device. Since the direction angle is determined by the included angle between the main direction and the reference direction, the rotation of the main direction represents the change of the direction angle; therefore, the change in the direction angle of a feature point is approximately equal to the rotation angle of the image acquisition device.
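The derivation can be checked numerically: project two world points with a pinhole model in which fx = fy and cx = cy = 0, rotate them about the optical axis, and compare the pixel-space direction vector before and after. A sketch under exactly those assumptions (all values hypothetical):

```python
import numpy as np

fx = fy = 500.0            # fx == fy, cx == cy == 0, as required by formula (10)
z = 2.0                    # both points lie on the same depth plane
theta = np.radians(30.0)   # camera rotation about the optical axis

def project(X):
    # Pinhole projection with K = diag(fx, fy, 1) and principal point at 0.
    return np.array([fx * X[0] / X[2], fy * X[1] / X[2]])

X1 = np.array([0.1, 0.2, z])
X2 = np.array([0.4, 0.5, z])
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

vec1 = project(X2) - project(X1)            # direction vector before the motion
vec2 = project(Rz @ X2) - project(Rz @ X1)  # direction vector after the rotation

ang1 = np.degrees(np.arctan2(vec1[1], vec1[0]))
ang2 = np.degrees(np.arctan2(vec2[1], vec2[0]))
print((ang2 - ang1) % 360.0)                # prints ~30.0, the rotation angle
```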
It should be noted that, when the angle by which the image capturing apparatus is moved exceeds 360 degrees, the rotation angle of the image capturing apparatus is the remainder obtained by dividing the actual rotated angle by 360 degrees.
It should be noted that the above description is only two possible application scenarios given in the embodiments of the present application. For scenes in which the optical axes of other cameras are perpendicular to the observation plane, the method provided by the embodiment of the application can be used for image feature matching. In addition, in the application scenario described above, the optical axis of the camera in the image capturing device is perpendicular to the observation plane, in this embodiment of the application, the optical axis of the camera may not be strictly perpendicular to the observation plane, that is, some deviation is allowed to exist, and when the deviation is within a certain range, the feature point pairs may also be screened according to the image matching method provided in this embodiment of the application.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an implementation environment in accordance with an example embodiment. The implementation environment includes an image processing device 101 and an image capture device 102, and the image processing device 101 may be communicatively connected to the image capture device 102. The communication connection may be a wired or wireless connection, which is not limited in this application.
The image processing device 101 may be a notebook computer, a desktop computer, a tablet computer, a mobile phone, a wearable device, an in-vehicle smart terminal, a smart television, or the like; it may also be a server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in the embodiments of the present application. The image capturing device 102 may be any device with an image capturing function, such as an unmanned aerial vehicle or an AGV with an image capturing function. The image capturing device 102 may include a motion sensor that captures motion data, such as odometry distance, motion acceleration, motion angular velocity, and attitude, while the image capturing device 102 is in motion.
The image capturing device 102 is configured to capture an image and send the captured image to the image processing device 101, and the image capturing device 102 may also be configured to capture motion data of itself and send the captured motion data to the image processing device 101. The image processing device 101 is configured to receive the image and the motion data acquired by the image acquisition device 102, and process the received image according to the image feature matching method provided in the embodiment of the present application.
It should be noted that the image processing apparatus 101 and the image capturing apparatus 102 may be two apparatuses or may be one apparatus. For example, assuming that the image capturing device 102 is an unmanned aerial vehicle, the unmanned aerial vehicle can process the captured image, that is, the unmanned aerial vehicle can also be used as the image processing device 101, in this case, the image processing device 101 and the image capturing device 102 are one device. Assuming that the image capturing device 102 is an unmanned aerial vehicle, the unmanned aerial vehicle sends the captured image to a server, and the server can process the received image, that is, the server is the image processing device 101, in this case, the image processing device 101 and the image capturing device 102 are two devices.
It will be understood by those skilled in the art that the above-described image processing apparatus and image capturing apparatus are merely exemplary, and other existing or future image processing apparatus or image capturing apparatus, as may be suitable for use in the present application, are also intended to be encompassed within the scope of the present application and are hereby incorporated by reference.
Next, the image feature matching method provided in the embodiment of the present application is explained in detail.
Fig. 4 is a flowchart of an image feature matching method provided in an embodiment of the present application, which may be applied to an image processing apparatus. Referring to fig. 4, the method includes the following steps.
Step 401: a plurality of initial point pairs are obtained, each of the plurality of initial point pairs including two feature points, one of the two feature points being a feature point in the first image, and the other feature point being a feature point in the second image.
In this embodiment of the application, the image capturing device may capture an image, and the image processing device may receive the first image and the second image captured by the image capturing device, acquire feature points in the two images, and determine a plurality of initial point pairs matched in the two images.
It should be noted that, in the embodiment of the present application, there are two cases of the method for determining a plurality of initial point pairs, and these two cases will be described below.
In the first case, the image capturing device may include a motion sensor, and the motion sensor may capture motion data of the image capturing device while it captures images; the motion data may include odometry distance, motion acceleration, motion angular velocity, attitude, and the like. In this case, after the image capturing device captures one image, the position of each pixel point in the next image can be estimated according to the position of each pixel point in the current image and the motion acceleration, motion angular velocity, attitude, and other motion data; that is, the pixel points representing actual physical points can be tracked across images. Based on this, the image capturing device may first capture the first image and, after capturing the second image, send the first image and the second image to the image processing device. The image processing device can estimate the position of each pixel point in the second image according to the motion data and the position of each pixel point in the first image, determine the matching pixel point of each pixel point on the second image according to the estimated position, and take each pixel point in the first image together with its matching pixel point in the second image as one initial point pair, thereby obtaining a plurality of initial point pairs. The motion sensor may be an odometer, a GPS positioner, or the like.
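One way to realize this tracking-based matching is pyramidal Lucas-Kanade optical flow; the disclosure only describes predicting positions from motion data in general terms, so the routine below is a hedged approximation using OpenCV:

```python
import cv2
import numpy as np

def track_initial_pairs(img1, img2, points1):
    """Track feature points from the first grayscale image into the second.

    points1: Nx1x2 float32 array of pixel positions in the first image.
    Returns a list of (p1, p2) initial point pairs."""
    points2, status, _err = cv2.calcOpticalFlowPyrLK(img1, img2, points1, None)
    ok = status.ravel() == 1
    # Each successfully tracked (p1, p2) is one initial point pair.
    return list(zip(points1[ok].reshape(-1, 2), points2[ok].reshape(-1, 2)))
```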
In the second case, after the image capturing device sends the captured first image and second image to the image processing device, the image processing device may extract feature points on the first image and second image by calculating descriptors of pixel points. A plurality of initial point pairs is then determined based on the descriptor distance of each feature point on the first image from each feature point on the second image.
In the embodiment of the present application, the image processing apparatus may determine a descriptor of each feature point in the first image according to the principal direction of each feature point in the first image, determine a descriptor of each feature point in the second image according to the principal direction of each feature point in the second image, and determine a plurality of initial point pairs according to the descriptor of each feature point in the first image and the descriptor of each feature point in the second image.
The image processing device may first perform feature extraction on each pixel point in the first image and the second image, and determine a pixel point having a significant feature as a feature point on the corresponding image. For example, pixel points with significant features on the contour and the edge are used as feature points. Then, the image processing device may determine a principal direction of each feature point on the first image and the second image, the principal direction may provide rotation invariance to the feature, and a descriptor of each feature point on the corresponding image may be calculated from the principal direction of each feature point on the first image and the second image, and the descriptor may be used to represent the feature of the feature point.
The method for determining the main direction of the feature point may be to calculate a gradient direction and a gradient amplitude of the gray value of a neighborhood pixel point of the feature point, obtain a gradient direction histogram according to the gradient direction and the gradient amplitude, and determine the main direction of the feature point according to the gradient direction histogram. The neighborhood pixels may be pixels in a circular region or a rectangular region with the feature point as the center.
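A compact sketch of the histogram procedure just described, computed over a square neighborhood; the patch radius and the 36-bin resolution are illustrative assumptions (36 bins is, for example, what SIFT uses):

```python
import numpy as np

def main_direction(gray, x, y, radius=8, nbins=36):
    """Dominant gradient orientation (degrees) around the feature point (x, y)
    of a grayscale image; (x, y) is assumed far enough from the border."""
    patch = gray[y - radius:y + radius + 1,
                 x - radius:x + radius + 1].astype(np.float64)
    gy, gx = np.gradient(patch)                       # gradients along v and u
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, edges = np.histogram(orientation, bins=nbins,
                               range=(0.0, 360.0), weights=magnitude)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])      # bin center as main direction
```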
After determining the dominant direction of the feature points, the reference direction of the image may be rotated to the dominant direction to compute a descriptor of the corresponding feature point, so that the features represented by the descriptor may have rotational invariance. For example, the descriptors of the respective feature points are calculated by rotating the u-axis of the pixel coordinate system to coincide with the principal direction.
It should be noted that, in the embodiment of the present application, the descriptor may be calculated by an operator such as SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), or ORB, or may be calculated by another operator, which is not limited in the embodiment of the present application.
After determining the descriptor of each feature point on the first image and in the second image, for each feature point in the first image, the image processing apparatus may obtain, from the plurality of feature points included in the second image, a matching feature point having a minimum distance to the descriptor of the corresponding feature point according to the descriptor of the corresponding feature point and the descriptor of each feature point in the second image, and use each feature point in the first image and the obtained matching feature point of the corresponding feature point as one initial point pair to obtain the plurality of initial point pairs.
Note that when calculating the descriptor distance, the Euclidean distance, the Hamming distance, or the like may be used. For example, the SIFT and SURF operators may use Euclidean distances, and the ORB operator may use Hamming distances.
Illustratively, assume that the operator used for the descriptors is SIFT, the Euclidean distance is used as the descriptor distance, the descriptor distance threshold is $dist_{th}$, and the two feature points included in one initial point pair are feature point B and feature point C. If the descriptor of feature point B is $(x_1, y_1)$ and the descriptor of feature point C is $(x_2, y_2)$, the Euclidean distance between the descriptors of the two feature points is

$$dist = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$
Step 402: the rotation angle of the image acquisition device within a first period of time is determined, the first period of time being the period of time between the acquisition instant of the first image and the acquisition instant of the second image.
As can be seen from the foregoing description of the application scenarios, in the embodiments of the present application, the optical axis of the camera carried on the image capturing device may be perpendicular to the observation plane, for example when a drone performs ground-scanning image stitching or an AGV performs downward-looking or upward-looking positioning and navigation. The movement of the image capturing device in the first period then refers to rotation in a plane parallel to the observation plane, i.e., from the moment of acquiring the first image to the moment of acquiring the second image, the image acquisition device rotates in a plane parallel to the observation plane. Based on this, the image processing device may determine the rotation angle of the image capturing device within the first period, i.e., the amount of change in the rotation direction of the image capturing device within the first period.
As can be seen from the foregoing, the image capturing device may or may not include a motion sensor, and the motion sensor may capture motion data such as motion acceleration, motion angular velocity, and posture. On this basis, the image processing device may also determine the rotation angle of the image acquisition device within the first time period in two cases, which will be described next.
In the first case, the image processing device may receive motion data captured by the image capturing device, through a motion sensor included in the image capturing device, during the first period, and may then estimate the rotation angle of the image capturing device in the first period based on the motion data.
In this embodiment, the image capturing device may capture motion data in the first time period through a motion sensor, and the motion data may include a posture, for example, the motion sensor may include an inertial sensor, where the inertial sensor may be a gyroscope, the gyroscope may capture a posture of the image capturing device at each time, and the image processing device may determine the rotation angle of the image capturing device in the first time period according to the posture of the image capturing device at the time of capturing the first image and the posture of the image capturing device at the time of capturing the second image.
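Assuming the gyroscope (or an attitude filter built on it) yields a yaw reading about the optical axis at each acquisition instant, the rotation angle over the first period is just the wrapped difference of the two readings, consistent with the remainder-of-360-degrees convention noted earlier; variable names are hypothetical:

```python
def rotation_angle_from_yaw(yaw_first_deg, yaw_second_deg):
    """Rotation of the image acquisition device between the acquisition
    instants of the first and second images, wrapped into [0, 360)."""
    return (yaw_second_deg - yaw_first_deg) % 360.0
```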
In the second case, the image processing apparatus may estimate the rotation angle of the image pickup apparatus in the first period based on the direction angles of the two feature points included in each of the plurality of pairs of initial points.
In the embodiment of the present application, when the rotation angle cannot be determined by the image capturing apparatus, or in other possible cases, the image processing apparatus may estimate the rotation angle of the image capturing apparatus in the first period based on the direction angles of the two feature points included in each of the plurality of initial point pairs.
It should be noted that, when the optical axis of the camera mounted on the image capturing device is perpendicular to the observation plane, since the two main directions of the two feature points included in the feature point pair representing the same object point are determined by the same neighborhood pixel point, the rotation angles of the two main directions will be equal to the rotation angle of the image capturing device, that is, for the two feature points included in the feature point pair representing the same object point, the direction angle difference between the direction angles of the two feature points will be equal to the rotation angle of the image capturing device. Based on this, the image processing apparatus may use, according to the obtained main directions of the two feature points included in each initial point pair of the plurality of initial point pairs, an included angle between the main direction of each feature point and the reference direction as a direction angle of the corresponding feature point, and may further estimate a rotation angle of the image capturing apparatus in the first period according to the direction angle.
In this case, the image processing apparatus may first screen out, from the plurality of initial point pairs, the initial point pairs whose direction angles rotate consistently, using the RANSAC method; calculate the direction angle difference between the direction angles of the two feature points included in each screened initial point pair; and estimate the rotation angle from the direction angle differences corresponding to the screened initial point pairs, for example by taking the average of the plurality of direction angle differences as the estimated rotation angle.
Optionally, because the proportion of mismatched initial point pairs among the plurality of initial point pairs may be high and the RANSAC method may therefore be time-consuming, the image processing apparatus may skip the RANSAC screening and instead directly calculate the direction angle difference between the direction angles of the two feature points in each of the plurality of initial point pairs, and take the average or the median of all the direction angle differences. The statistic obtained in this way can represent the direction angle difference of the initial point pairs whose direction angles rotate consistently, and may be used as the rotation angle of the image capturing apparatus in the first period.
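The statistic described above can be sketched as follows. The text proposes a plain mean or median of the direction angle differences; the sketch uses a circular mean instead (a named substitution, so that differences near 0 and 360 degrees do not split the average):

```python
import numpy as np

def estimate_rotation(angles1_deg, angles2_deg):
    """Estimate the device rotation angle from the direction angles of the two
    feature points of every initial point pair (two equal-length arrays)."""
    diffs = np.radians((np.asarray(angles2_deg) - np.asarray(angles1_deg)) % 360.0)
    # Average unit vectors rather than raw angles to respect wraparound.
    mean = np.arctan2(np.sin(diffs).mean(), np.cos(diffs).mean())
    return np.degrees(mean) % 360.0
```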
It should be noted that, in the embodiments of the present application, when the image acquisition device includes the motion sensor, the step of determining the rotation angle of the image capturing device in the first period may also be performed before the step of acquiring the plurality of initial point pairs, which is not limited by the embodiments of the present application.
Step 403: determine, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, the feature point pairs among the plurality of initial point pairs that represent the same physical point.
As can be seen from the foregoing description, when the optical axis of the camera of the image capturing device is perpendicular to the observation plane, the direction angle difference between the direction angles of two feature points on the first image and the second image that represent the same physical point is approximately equal to the rotation angle of the image capturing device. Based on this, the image processing apparatus may screen the plurality of initial point pairs according to the rotation angle and the direction angle differences, and determine the feature point pairs that may be used to represent the same physical point.
In one possible implementation, the image processing apparatus may determine, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, the difference between the direction angle difference corresponding to each initial point pair and the rotation angle, to obtain the rotation angle difference corresponding to each initial point pair. Then, the image processing apparatus may acquire, from the plurality of initial point pairs, the initial point pairs whose corresponding rotation angle difference is smaller than an angle threshold, and take the acquired initial point pairs as the feature point pairs representing the same physical points.
In the embodiment of the present application, in the case where the image processing apparatus can determine a plurality of initial point pairs by tracking matching, the image processing apparatus can determine the principal directions of two feature points included in each of the plurality of initial point pairs based on the foregoing description about the determination of the principal directions, and then determine the direction angles of the two feature points included in each of the feature point pairs. In the case where the image processing apparatus cannot determine a plurality of initial point pairs by tracking matching, since the image processing apparatus has already determined the principal direction of each feature point in the image at the time of calculating the descriptor of the feature point in determining the plurality of initial point pairs, the direction angle of two feature points included in each initial point pair can be determined directly from the previously determined principal direction.
After determining the direction angles of the two feature points included in each initial point pair, the image processing apparatus may calculate the direction angle difference between those direction angles, that is, the direction angle difference corresponding to each initial point pair. The image processing apparatus may then use the difference between the direction angle difference and the rotation angle corresponding to each initial point pair as the rotation angle difference corresponding to that initial point pair; thereafter it may retain the initial point pairs whose rotation angle difference is smaller than the angle threshold, reject the initial point pairs whose rotation angle difference is larger than or equal to the angle threshold, and take the retained initial point pairs as the feature point pairs.
It should be noted that the angle threshold may be a radian value greater than 0, such as 1 radian; the direction angle difference may be a radian value greater than or equal to 0, and the rotation angle difference may be a radian value greater than or equal to 0. In the embodiments of the present application, the rotation angle difference $\Delta\theta_d$ can be calculated according to formula (11):

$$\Delta\theta_d = \big|\,|\theta_2 - \theta_1| - \Delta\theta\,\big| \tag{11}$$

where $\theta_1$ and $\theta_2$ are the direction angles of the two feature points included in one initial point pair, and $\Delta\theta$ is the rotation angle of the image acquisition device in the first period.

Illustratively, still taking fig. 1 and 2 as an example, assume that the angle threshold is $\theta_{th}$ ($\theta_{th} > 0$), the rotation angle of the image acquisition device in the first period is $\Delta\theta$, and feature point A on the first image and feature point B on the second image form one initial point pair, with direction angles $\theta_A$ and $\theta_B$ respectively. The direction angle difference between $\theta_A$ and $\theta_B$ is $|\theta_B - \theta_A| = \Delta\theta_{AB}$, and the rotation angle difference corresponding to the two feature points is $|\Delta\theta_{AB} - \Delta\theta| = \Delta\theta_d$, where $|\cdot|$ denotes the absolute value. When $\Delta\theta_d < \theta_{th}$, feature point A and feature point B may be kept as one feature point pair; when $\Delta\theta_d \geq \theta_{th}$, the two feature points may be rejected.
In this way, according to the rotation angle and the direction angle difference corresponding to each initial point pair, the initial point pairs whose direction angle change deviates from the rotation angle by more than the angle threshold can be regarded as mismatches and removed; that is, mismatched point pairs that are difficult for other feature matching methods to eliminate can be rejected.
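Formula (11) translates directly into a screening predicate. A sketch, with a hypothetical default threshold; the final wrap-to-shortest-distance step is an added safeguard beyond the plain absolute values of formula (11):

```python
def keep_pair(theta1_deg, theta2_deg, rotation_deg, angle_threshold_deg=5.0):
    """Keep an initial point pair when its rotation angle difference,
    computed as in formula (11), is below the angle threshold."""
    diff = abs(theta2_deg - theta1_deg)          # direction angle difference
    delta = abs(diff - rotation_deg) % 360.0     # rotation angle difference
    delta = min(delta, 360.0 - delta)            # shortest angular distance
    return delta < angle_threshold_deg
```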
In another possible implementation, the image processing apparatus may determine the feature point pairs from the plurality of initial point pairs according to at least one of the descriptor distance and the composite distance between the two feature points included in each initial point pair, together with the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair. In this possible implementation, there may be multiple ways of determining the feature point pairs from the plurality of initial point pairs, three of which are described in detail below.
In a first implementation, the image processing apparatus may determine the feature point pairs from the plurality of initial point pairs based on the rotation angle, a direction angle difference between direction angles of two feature points included in each of the initial point pairs, and a descriptor distance between two feature points included in each of the initial point pairs.
Since the descriptor may represent the feature of a feature point, the image processing apparatus may filter the initial point pairs by combining the rotation angle difference and the descriptor distance. The image processing apparatus may first determine a plurality of first candidate point pairs from the plurality of initial point pairs according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair. Then, the image processing apparatus determines the feature point pairs from the plurality of first candidate point pairs according to the descriptor distance between the two feature points included in each first candidate point pair. That is, the image processing apparatus may first remove mismatched initial point pairs, i.e., two feature points that cannot be used to represent the same physical point, according to the rotation angle and the direction angle difference corresponding to each initial point pair, and then perform further screening according to the descriptor distance.
In this implementation, the image processing apparatus may first acquire, from the plurality of initial point pairs, initial point pairs whose corresponding rotation angle difference values are smaller than the angle threshold value, according to the angle threshold value, and take the acquired initial point pairs as the plurality of first candidate point pairs.
It should be noted that the image processing apparatus may determine the plurality of first candidate point pairs according to equation (11) with reference to the foregoing description about the filtering according to the angle threshold, which is not described herein again.
The image processing apparatus may determine a descriptor distance between two feature points included in each of the plurality of first candidate point pairs, after acquiring the plurality of first candidate point pairs, according to descriptors of the two feature points, and then acquire, from the plurality of first candidate point pairs, a first candidate point pair in which the descriptor distance between the two feature points included is smaller than a descriptor distance threshold, and take the acquired first candidate point pair as the feature point pair.
The descriptor distance threshold may be preset according to actual conditions. As can be seen from the foregoing, the descriptor may be calculated through SIFT, SURF, or ORB, and the Euclidean distance, the Hamming distance, or the like may be used when calculating the descriptor distance. For example, the SIFT and SURF operators use Euclidean distances, and the ORB operator uses Hamming distances.
Illustratively, assume that the descriptor distance threshold is dist_th, the two feature points included in one first candidate point pair are feature point B and feature point C, and the descriptor distance between the descriptor of feature point B and the descriptor of feature point C is dist_12. When dist_12 < dist_th, feature point B and feature point C may be taken as feature points representing the same physical point; when dist_12 ≥ dist_th, feature point B and feature point C may be eliminated.
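The first implementation can be sketched in Python as follows. This is an illustrative sketch under assumed data structures (each pair carries the two direction angles in degrees and the two float descriptors); it is not the patent's reference code:

```python
import numpy as np

def wrap_angle(deg):
    # Wrap an angle difference into [-180, 180) degrees.
    return (deg + 180.0) % 360.0 - 180.0

def screen_pairs(pairs, rotation_angle, angle_th, dist_th):
    """Stage 1: keep pairs whose rotation angle difference (the gap between
    the pair's direction angle difference and the camera rotation, as in
    formula (11)) is below the angle threshold.
    Stage 2: keep pairs whose descriptor distance (Euclidean here, assuming
    SIFT/SURF-style float descriptors) is below the descriptor threshold.
    `pairs` is assumed to be a list of (theta1, theta2, desc1, desc2) tuples."""
    first_candidates = [
        p for p in pairs
        if abs(wrap_angle((p[1] - p[0]) - rotation_angle)) < angle_th
    ]
    return [
        p for p in first_candidates
        if np.linalg.norm(np.asarray(p[2]) - np.asarray(p[3])) < dist_th
    ]
```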
Optionally, in this implementation manner, the image processing apparatus may also filter a plurality of initial point pairs according to the descriptor distance to obtain a plurality of first candidate point pairs, and then filter the plurality of first candidate point pairs according to the rotation angle difference to obtain the feature point pair. The methods of performing screening according to the descriptor distance and performing screening according to the rotation angle difference can refer to the foregoing implementation methods, and the embodiments of the present application are not described herein again.
In a second implementation, the image processing apparatus may determine the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and the comprehensive distance between the two feature points included in each initial point pair.
In the embodiment of the present application, the image processing apparatus may screen the plurality of first candidate point pairs by combining the descriptor distance and the rotation angle difference, that is, by further screening according to the comprehensive distance. Since the comprehensive distance combines the two screening criteria of descriptor distance and rotation angle difference, it can be used to comprehensively evaluate how likely a candidate point pair is to be mismatched.
For example, the image processing apparatus may first determine the rotation angle difference corresponding to each initial point pair according to the direction angles of the two feature points included in each initial point pair and the rotation angle, acquire, from the plurality of initial point pairs, the initial point pairs whose rotation angle differences are smaller than the angle threshold, and take the acquired initial point pairs as a plurality of first candidate point pairs. Thereafter, the image processing apparatus may acquire, from the plurality of first candidate point pairs, the first candidate point pairs in which the comprehensive distance between the two feature points is smaller than a comprehensive distance threshold, and take the acquired first candidate point pairs as the feature point pairs. The comprehensive distance threshold may be preset according to actual conditions.
It should be noted that, for the implementation manner of the image processing apparatus screening a plurality of initial point pairs according to the rotation angle difference to obtain a plurality of first candidate point pairs, reference may also be made to the foregoing related description, and details are not repeated here.
After determining the plurality of first candidate point pairs, the image processing apparatus may determine the descriptor distance between the two feature points included in each first candidate point pair according to the descriptors of the two feature points. Then, the image processing apparatus may determine the comprehensive distance between the two feature points included in each first candidate point pair according to the rotation angle difference corresponding to that pair and the descriptor distance. The implementation of determining the descriptor distance may refer to the foregoing description and is not repeated here.
When determining the comprehensive distance according to the rotation angle difference and the descriptor distance, the image processing device may obtain a first weight and a second weight, where the first weight is the weight corresponding to the rotation angle difference and the second weight is the weight corresponding to the descriptor distance. Then, the image processing apparatus may determine the comprehensive distance between the two feature points included in each first candidate point pair according to the first weight, the rotation angle difference corresponding to that pair, the second weight, and the descriptor distance between the two feature points.
It should be noted that both the first weight and the second weight may be preset parameters. In addition, since descriptor distances and angles belong to different dimensions, the first weight and the second weight can be understood as two conversion factors when the comprehensive distance is calculated from them; that is, the first weight and the second weight convert the angle and the descriptor distance into the same dimension.
In the embodiment of the present application, the image processing apparatus may first determine the rotation angle difference by the aforementioned formula (11), and then calculate the comprehensive distance T_12 by the following formula (12):

T_12 = α_1·Δθ_d + α_2·dist_12    (12)

wherein Δθ_d indicates the rotation angle difference corresponding to feature point 1 and feature point 2, dist_12 indicates the descriptor distance between the descriptors of feature point 1 and feature point 2, and α_1 and α_2 are the first weight and the second weight respectively; feature point 1 and feature point 2 are the two feature points in a first candidate point pair.
Illustratively, assume that the comprehensive distance threshold is T_th, the two feature points included in a first candidate point pair are feature point D and feature point E, the descriptor distance between their descriptors is dist_DE, the direction angle difference between their direction angles is Δθ_DE, and the rotation angle of the image acquisition device in the first period is Δθ_1. The rotation angle difference |Δθ_DE − Δθ_1| can then be obtained, and adding α_1·|Δθ_DE − Δθ_1| and α_2·dist_DE yields the comprehensive distance of the two feature points: T_DE = α_1·|Δθ_DE − Δθ_1| + α_2·dist_DE. When T_DE < T_th, the two feature points may be regarded as a feature point pair representing the same physical point; when T_DE ≥ T_th, the two feature points may be eliminated.
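Continuing the sketch above (and reusing the hypothetical wrap_angle helper), formula (12) and the threshold test could be written as follows; all numeric values are invented for illustration:

```python
def comprehensive_distance(dtheta_pair, rotation_angle, desc_dist, a1, a2):
    # T = a1 * |direction angle difference - rotation angle| + a2 * descriptor
    # distance, i.e. formula (12); a1 and a2 act as conversion factors that
    # bring degrees and descriptor units onto one comparable scale.
    return a1 * abs(wrap_angle(dtheta_pair - rotation_angle)) + a2 * desc_dist

# Invented numbers: direction angle difference 31.0 deg, camera rotation
# 30.0 deg, descriptor distance 220.0, a1 = 1.0, a2 = 0.01:
T_DE = comprehensive_distance(31.0, 30.0, 220.0, 1.0, 0.01)  # 1.0 + 2.2 = 3.2
keep = T_DE < 5.0  # the pair survives if T_DE is below the threshold T_th
```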
Optionally, in this implementation manner, the image processing apparatus may also first screen the plurality of initial point pairs according to the comprehensive distance to obtain a plurality of first candidate point pairs, and then screen the plurality of first candidate point pairs according to the rotation angle difference to obtain the feature point pairs. The methods of screening according to the comprehensive distance and screening according to the rotation angle difference may refer to the foregoing implementations and are not described herein again.
In a third implementation, the image processing apparatus may determine the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and both the descriptor distance and the comprehensive distance between the two feature points included in each initial point pair.
In this implementation, the image processing apparatus may first determine the rotation angle difference corresponding to each initial point pair according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, acquire, from the plurality of initial point pairs, the initial point pairs whose corresponding rotation angle differences are smaller than the angle threshold, and take the acquired initial point pairs as a plurality of first candidate point pairs. That is, the image processing apparatus may first screen the plurality of initial point pairs according to the rotation angle difference to obtain the plurality of first candidate point pairs. The method of screening according to the rotation angle difference may refer to the foregoing description and is not repeated here.
Then, the image processing apparatus may acquire, from the plurality of first candidate point pairs, the first candidate point pairs in which the descriptor distance between the two feature points is smaller than the descriptor distance threshold, and take the acquired first candidate point pairs as a plurality of second candidate point pairs. Next, the image processing apparatus acquires, from the plurality of second candidate point pairs, the second candidate point pairs in which the comprehensive distance between the two feature points is smaller than the comprehensive distance threshold, and takes the acquired second candidate point pairs as the feature point pairs. That is, the image processing apparatus may first screen the plurality of first candidate point pairs using the descriptor distance threshold, and then further screen using the comprehensive distance threshold, so as to finally obtain the feature point pairs that can be used to represent the same physical point.
In the embodiment of the application, feature point pairs that are mismatched may still remain after screening with the angle threshold and the descriptor distance threshold, so the comprehensive distance can be used for a further round of screening to reject feature point pairs whose descriptor distances and direction angle errors are both relatively large.
The method for calculating the descriptor distance and the comprehensive distance may refer to the related descriptions, and will not be described herein again.
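The three-stage cascade of this third implementation, reusing the helpers sketched earlier, might look like this (again an illustrative sketch with assumed data structures, not the embodiment's code):

```python
def cascade_screen(pairs, rotation_angle, angle_th, dist_th, comp_th, a1, a2):
    # Step 1: angle-threshold screening -> first candidate point pairs.
    c1 = [p for p in pairs
          if abs(wrap_angle((p[1] - p[0]) - rotation_angle)) < angle_th]
    # Step 2: descriptor-distance screening -> second candidate point pairs.
    c2 = [p for p in c1
          if np.linalg.norm(np.asarray(p[2]) - np.asarray(p[3])) < dist_th]
    # Step 3: comprehensive-distance screening -> final feature point pairs.
    return [
        p for p in c2
        if comprehensive_distance(
            p[1] - p[0], rotation_angle,
            np.linalg.norm(np.asarray(p[2]) - np.asarray(p[3])), a1, a2,
        ) < comp_th
    ]
```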
Optionally, in the third implementation, the order of the three screening steps (screening by descriptor distance, screening by rotation angle and direction angle difference, and screening by comprehensive distance) may be changed and combined as needed, which is not limited in the embodiments of the present application.
It should be noted that, in the above implementations, when a plurality of initial point pairs can be obtained by tracking and matching, the image processing device may first screen the plurality of initial point pairs according to the rotation angle and direction angle difference to eliminate most of the mismatched initial point pairs, and then further screen according to the descriptor distance and/or the comprehensive distance; this reduces the amount of descriptor computation and increases the matching speed.
In the embodiment of the application, after the above screening is completed, the RANSAC method can be used to screen the obtained feature point pairs once more, further eliminating mismatches.
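One common way to realize this final RANSAC pass (an assumption; the embodiment does not prescribe a geometric model) is to fit a homography with OpenCV and keep only the inliers:

```python
import cv2
import numpy as np

# pts1 and pts2 hold the matched coordinates of the screened feature point
# pairs as Nx2 float arrays; at least four pairs are required. The values
# below are invented for illustration.
pts1 = np.float32([[10, 12], [40, 45], [77, 80], [120, 130], [200, 210]])
pts2 = np.float32([[11, 13], [41, 44], [78, 81], [119, 131], [201, 209]])

H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
final1 = pts1[inlier_mask.ravel() == 1]  # coordinates of the surviving pairs
final2 = pts2[inlier_mask.ravel() == 1]
```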
Fig. 5 is a flowchart of another image feature matching method provided in an embodiment of the present application. Referring to Fig. 5, the image processing device may receive the first image and the second image transmitted by the image acquisition device and determine whether the image acquisition device has also transmitted motion data. If yes, the image processing device receives the motion data, determines a plurality of initial point pairs, and acquires the direction angles of the two feature points included in each initial point pair; it then determines, from the motion data, the rotation angle of the image acquisition device within the first period, screens the plurality of initial point pairs according to the direction angles, the rotation angle, and the angle threshold, screens again using the descriptor distance, screens again using the comprehensive distance, and finally screens with the RANSAC method to obtain the final feature point pairs. If no, the feature points are first extracted and a plurality of initial point pairs are determined according to the descriptor distances of the feature points; the RANSAC method is then used for screening, the direction angles of the two feature points included in each remaining initial point pair are acquired and the rotation angle is estimated, after which the angle threshold, the descriptor distance threshold, and the comprehensive distance threshold are applied for screening in sequence, followed by a final screening with the RANSAC method.
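The overall flow of Fig. 5 can be condensed into the sketch below, which builds on wrap_angle and cascade_screen from the earlier sketches; the median-based rotation estimate is only one plausible stand-in for the estimation step described above:

```python
import numpy as np

def estimate_rotation(pairs):
    # Stand-in estimator: the median of the direction angle differences over
    # all initial point pairs (an assumption; the embodiment only requires
    # some estimate derived from the direction angles).
    return float(np.median([wrap_angle(p[1] - p[0]) for p in pairs]))

def match_features(pairs, motion_rotation=None,
                   angle_th=5.0, dist_th=300.0, comp_th=5.0, a1=1.0, a2=0.01):
    # Use the IMU-derived rotation angle when motion data was transmitted;
    # otherwise estimate the rotation from the initial point pairs.
    rot = motion_rotation if motion_rotation is not None else estimate_rotation(pairs)
    # Angle, descriptor, and comprehensive-distance screening in sequence;
    # a final RANSAC pass (as sketched above) would follow.
    return cascade_screen(pairs, rot, angle_th, dist_th, comp_th, a1, a2)
```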
It should be noted that, in the embodiment of the present application, RANSAC screening after screening according to descriptor distances or synthetic distances is an optional step.
In the embodiment of the present application, when handling feature matching of images with complex and repetitive textures, the descriptors of multiple feature points in an image may be approximately identical. In this case, if matches are screened only by descriptor distance, many mismatched feature points will remain among the resulting feature point pairs, that is, the accuracy may be low. In the technical solution provided by the present application, the direction angle difference corresponding to each initial point pair can be compared against the rotation angle of the image acquisition device; that is, feature matching is performed according to the rotation of the image acquisition device and the change in direction angle between the two matched feature points, which reduces the influence of repeated textures to a certain extent. For example, when images of a road surface are acquired, the tile pattern on the road surface repeats, so a pair of tiles with identical textures may appear in the two acquired images even though they are two different physical objects. If feature points on the two tiles are matched as an initial point pair, the match is actually wrong; however, because their descriptors are approximately identical, screening by descriptor distance alone may fail to remove it, whereas screening by direction angle can remove it.
In summary, in the embodiment of the present application, most of the mismatched initial point pairs may be screened out from the plurality of initial point pairs according to the rotation angle of the image capturing apparatus and the direction angle difference between the direction angles of the two feature points included in each initial point pair, so as to obtain the feature point pairs representing the same physical point. The scheme utilizes the principle that the direction angle difference is approximately consistent with the rotation angle when the optical axis of the camera is approximately perpendicular to the observation plane, and can eliminate mismatches that are difficult to eliminate by other feature matching methods. In addition, the method of screening the feature point pairs is not affected by the sparsity of the feature points in the image or by the inlier rate, and therefore has strong robustness.
Fig. 6 is a schematic structural diagram of an image feature matching apparatus provided in an embodiment of the present application, where the image feature matching apparatus may be implemented by software, hardware, or a combination of the two to be a part or all of an image processing device, and the image processing device may be the image processing device shown in fig. 3. Referring to fig. 6, the apparatus includes: an acquisition module 601, a first determination module 602, and a second determination module 603.
An obtaining module 601, configured to obtain a plurality of initial point pairs, where each of the plurality of initial point pairs includes two feature points, one of the two feature points is a feature point in a first image, and the other feature point is a feature point in a second image;
a first determining module 602, configured to determine a rotation angle of the image capturing device within a first time period, where the first time period is a time period between a capturing time of the first image and a capturing time of the second image;
a second determining module 603, configured to determine, according to the rotation angle and the direction angle difference between the direction angles of the two feature points included in each initial point pair, the feature point pairs in the plurality of initial point pairs that are used to represent the same physical point.
Optionally, the obtaining module 601 includes:
the first determining unit is used for determining a descriptor of each feature point in the first image according to the main direction of each feature point in the first image;
the second determining unit is used for determining a descriptor of each characteristic point in the second image according to the main direction of each characteristic point in the second image;
a third determining unit, configured to determine the plurality of initial point pairs according to the descriptor of each feature point in the first image and the descriptor of each feature point in the second image.
Optionally, the first determining module comprises:
the receiving unit is used for receiving motion data acquired by the image acquisition equipment through a motion sensor included in the image acquisition equipment in a first time interval;
a first estimation unit for estimating a rotation angle of the image pickup device within a first period based on the motion data.
Optionally, the first determining module includes:
a second estimation unit configured to estimate a rotation angle of the image pickup apparatus in the first period based on a direction angle of two feature points included in each of the plurality of pairs of initial points.
Optionally, the second determining module includes:
a fourth determination unit, configured to determine the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and at least one of the descriptor distance and the comprehensive distance between the two feature points included in each initial point pair.
Optionally, the fourth determining unit includes:
the first determining subunit is used for determining the difference value between the direction angle difference value and the rotation angle corresponding to each initial point pair to obtain a rotation angle difference value corresponding to each initial point pair;
and a second determining subunit, configured to acquire, from the plurality of initial point pairs, the initial point pairs whose corresponding rotation angle differences are smaller than the angle threshold, and to take the acquired initial point pairs as the plurality of first candidate point pairs;
A third determining subunit, configured to acquire, from the plurality of first candidate point pairs, a first candidate point pair in which a descriptor distance between two feature points included in the first candidate point pair is smaller than a descriptor distance threshold, and to take the acquired first candidate point pair as a plurality of second candidate point pairs;
and a fourth determining subunit, configured to acquire, from the plurality of second candidate point pairs, the second candidate point pairs in which the comprehensive distance between the two feature points is smaller than the comprehensive distance threshold, and to use the acquired second candidate point pairs as the feature point pairs.
Optionally, the fourth determining unit further includes:
the acquisition subunit is used for acquiring a first weight and a second weight, wherein the first weight is a weight corresponding to the rotation angle difference, and the second weight is a weight corresponding to the descriptor distance;
and a fifth determining subunit, configured to determine, according to the first weight, the rotation angle difference corresponding to each second candidate point pair, the second weight, and the descriptor distance between the two feature points included in the corresponding second candidate point pair, the comprehensive distance between the two feature points included in the corresponding second candidate point pair.
In this embodiment, most of the mismatched initial point pairs may be screened out from the plurality of initial point pairs according to the rotation angle of the image capturing apparatus and the direction angle difference between the direction angles of the two feature points included in each initial point pair, so as to obtain the feature point pairs representing the same physical point. The scheme utilizes the principle that the direction angle difference is approximately consistent with the rotation angle when the optical axis of the camera is approximately perpendicular to the observation plane, and can eliminate mismatches that are difficult to eliminate by other feature matching methods. In addition, the method of screening the feature point pairs is not affected by the sparsity of the feature points in the image or by the inlier rate, and therefore has strong robustness.
It should be noted that: in the image feature matching device provided in the above embodiment, when the features are matched, only the division of the above functional modules is taken as an example, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the image feature matching device and the image feature matching method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail, and are not described herein again.
Fig. 7 is a schematic structural diagram of an image processing apparatus 700 according to an embodiment of the present application. The image processing device 700 may be a portable mobile image processing device such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a notebook computer, or it may be a desktop computer. The image processing device 700 may also be referred to by other names such as user device, portable image processing device, laptop image processing device, or desktop image processing device.
Generally, the image processing apparatus 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the image feature matching method provided by method embodiments herein.
In some embodiments, the image processing apparatus 700 may further include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other image processing devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, disposed on the front panel of the image processing device 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the image processing apparatus 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the image processing device 700. The display 705 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. In general, a front camera is provided on a front panel of an image processing apparatus, and a rear camera is provided on a rear surface of the image processing apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting them into electrical signals, and inputting the electrical signals to the processor 701 for processing or to the radio frequency circuit 704 to realize voice communication. For stereo acquisition or noise reduction purposes, there may be a plurality of microphones disposed at different portions of the image processing apparatus 700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to determine the current geographic location of the image processing apparatus 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the image processing apparatus 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the image processing apparatus 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the image processing apparatus 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the image processing apparatus 700, and the gyro sensor 712 may acquire a 3D motion of the user on the image processing apparatus 700 in cooperation with the acceleration sensor 711. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 713 may be disposed on a side bezel of the image processing device 700 and/or on an underlying layer of the touch display 705. When the pressure sensor 713 is disposed on the side frame of the image processing apparatus 700, a user's grip signal to the image processing apparatus 700 may be detected, and left-right hand recognition or shortcut operation may be performed by the processor 701 according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the image processing device 700. When a physical button or vendor Logo is provided on the image processing apparatus 700, the fingerprint sensor 714 may be integrated with the physical button or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also called a distance sensor, is typically provided on the front panel of the image processing apparatus 700. The proximity sensor 716 is used to capture the distance between the user and the front of the image processing device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the image processing device 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the touch display 705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the image processing apparatus 700, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the image feature matching method in the above-mentioned embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the image feature matching method described above.
The above-mentioned embodiments are provided not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (16)

1. An image feature matching method, characterized in that the method comprises:
acquiring a plurality of initial point pairs, wherein each initial point pair in the plurality of initial point pairs comprises two feature points, one of the two feature points is a feature point in a first image, and the other is a feature point in a second image;
determining a rotation angle of an image acquisition device within a first period of time, the first period of time being a period of time between an acquisition instant of the first image and an acquisition instant of the second image;
and determining, according to the rotation angle and a direction angle difference between direction angles of the two feature points included in each initial point pair, the feature point pairs in the plurality of initial point pairs that are used for representing the same physical point.
2. The method of claim 1, wherein the obtaining a plurality of initial point pairs comprises:
determining a descriptor of each feature point in the first image according to the main direction of each feature point in the first image;
determining a descriptor of each feature point in the second image according to the main direction of each feature point in the second image;
and determining the plurality of initial point pairs according to the descriptor of each characteristic point in the first image and the descriptor of each characteristic point in the second image.
3. The method of claim 1, wherein determining the rotation angle of the image capture device within the first time period comprises:
receiving motion data acquired by the image acquisition device through a motion sensor included in the image acquisition device in the first time period;
estimating the rotation angle of the image acquisition device within the first period of time from the motion data.
4. The method of claim 1, wherein determining the rotation angle of the image capture device within the first time period comprises:
estimating the rotation angle of the image capturing apparatus in the first period of time from the direction angles of the two feature points included in each of the plurality of initial point pairs.
5. The method according to any one of claims 1 to 4, wherein the determining, according to the rotation angle and a direction angle difference between direction angles of the two feature points included in each initial point pair, the feature point pairs in the plurality of initial point pairs that are used for representing the same physical point comprises:
determining the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and at least one of a descriptor distance and a comprehensive distance between the two feature points included in each initial point pair.
6. The method according to claim 5, wherein the determining the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and at least one of the descriptor distance and the comprehensive distance comprises:
determining a difference value between the direction angle difference value corresponding to each initial point pair and the rotation angle to obtain a rotation angle difference value corresponding to each initial point pair;
acquiring initial point pairs of which corresponding rotation angle difference values are smaller than an angle threshold value from the initial point pairs, and taking the acquired initial point pairs as a plurality of first candidate point pairs;
obtaining a first candidate point pair, of which the descriptor distance between two feature points is smaller than a descriptor distance threshold, from the plurality of first candidate point pairs, and taking the obtained first candidate point pair as a plurality of second candidate point pairs;
and acquiring, from the plurality of second candidate point pairs, the second candidate point pairs in which the comprehensive distance between the two feature points is smaller than a comprehensive distance threshold, and taking the acquired second candidate point pairs as the feature point pairs.
7. The method according to claim 6, wherein before the acquiring, from the plurality of second candidate point pairs, the second candidate point pairs in which the comprehensive distance between the two feature points is smaller than the comprehensive distance threshold, the method further comprises:
acquiring a first weight and a second weight, wherein the first weight is a weight corresponding to the rotation angle difference, and the second weight is a weight corresponding to the descriptor distance;
and determining the comprehensive distance between the two feature points included in the corresponding second candidate point pair according to the first weight, the rotation angle difference corresponding to each second candidate point pair, the second weight, and the descriptor distance between the two feature points included in the corresponding second candidate point pair.
8. An image feature matching apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a plurality of initial point pairs, where each of the plurality of initial point pairs includes two feature points, one of the two feature points is a feature point in a first image, and the other feature point is a feature point in a second image;
a first determination module for determining a rotation angle of an image acquisition device within a first period, the first period being a period between an acquisition instant of the first image and an acquisition instant of the second image;
and a second determining module, configured to determine, according to the rotation angle and a direction angle difference between direction angles of the two feature points included in each initial point pair, the feature point pairs in the plurality of initial point pairs that are used for representing the same physical point.
9. The apparatus of claim 8, wherein the obtaining module comprises:
a first determining unit, configured to determine a descriptor of each feature point in the first image according to a main direction of each feature point in the first image;
a second determining unit, configured to determine a descriptor of each feature point in the second image according to the main direction of each feature point in the second image;
a third determining unit, configured to determine the plurality of initial point pairs according to the descriptor of each feature point in the first image and the descriptor of each feature point in the second image.
10. The apparatus of claim 8, wherein the first determining module comprises:
a receiving unit, configured to receive motion data acquired by the image acquisition device through a motion sensor included in the image acquisition device within the first period;
a first estimation unit for estimating the rotation angle of the image capture device within the first period based on the motion data.
11. The apparatus of claim 8, wherein the first determining module comprises:
a second estimation unit configured to estimate the rotation angle of the image pickup apparatus in the first period based on a direction angle of two feature points included in each of the plurality of initial point pairs.
12. The apparatus of any of claims 8-11, wherein the second determining module comprises:
a fourth determination unit, configured to determine the feature point pairs from the plurality of initial point pairs according to the rotation angle, the direction angle difference between the direction angles of the two feature points included in each initial point pair, and at least one of a descriptor distance and a comprehensive distance between the two feature points included in each initial point pair.
13. The apparatus of claim 12, wherein the fourth determining unit comprises:
a first determining subunit, configured to determine a difference between the direction angle difference corresponding to each initial point pair and the rotation angle, so as to obtain a rotation angle difference corresponding to each initial point pair;
a second determining subunit, configured to obtain, from the plurality of initial point pairs, initial point pairs whose corresponding rotation angle differences are smaller than an angle threshold, and use the obtained initial point pairs as the plurality of first candidate point pairs;
a third determining subunit, configured to acquire, from the plurality of first candidate point pairs, a first candidate point pair in which a descriptor distance between two feature points included in the plurality of first candidate point pairs is smaller than a descriptor distance threshold, and to take the acquired first candidate point pair as the plurality of second candidate point pairs;
and a fourth determining subunit, configured to acquire, from the plurality of second candidate point pairs, the second candidate point pairs in which the comprehensive distance between the two feature points is smaller than a comprehensive distance threshold, and to take the acquired second candidate point pairs as the feature point pairs.
14. The apparatus of claim 13, wherein the fourth determining unit further comprises:
the acquiring subunit is configured to acquire a first weight and a second weight, where the first weight is a weight corresponding to the rotation angle difference, and the second weight is a weight corresponding to the descriptor distance;
and a fifth determining subunit, configured to determine, according to the first weight, the rotation angle difference corresponding to each second candidate point pair, the second weight, and the descriptor distance between the two feature points included in the corresponding second candidate point pair, the comprehensive distance between the two feature points included in the corresponding second candidate point pair.
15. An image processing apparatus, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus, the memory is used for storing computer programs, and the processor is used for executing the programs stored in the memory to realize the steps of the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911357846.XA 2019-12-25 2019-12-25 Image feature matching method and device, image processing equipment and storage medium Pending CN113033590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357846.XA CN113033590A (en) 2019-12-25 2019-12-25 Image feature matching method and device, image processing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113033590A true CN113033590A (en) 2021-06-25

Family

ID=76458253

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574421A (en) * 2015-01-29 2015-04-29 北方工业大学 Large-breadth small-overlapping-area high-precision multispectral image registration method and device
CN106780729A (en) * 2016-11-10 2017-05-31 中国人民解放军理工大学 A kind of unmanned plane sequential images batch processing three-dimensional rebuilding method
WO2019041265A1 (en) * 2017-08-31 2019-03-07 深圳市大疆创新科技有限公司 Feature extraction circuit and integrated image processing circuit
CN108615248A (en) * 2018-04-27 2018-10-02 腾讯科技(深圳)有限公司 Method for relocating, device, equipment and the storage medium of camera posture tracing process
CN108648235A (en) * 2018-04-27 2018-10-12 腾讯科技(深圳)有限公司 Method for relocating, device and the storage medium of camera posture tracing process
CN109165657A (en) * 2018-08-20 2019-01-08 贵州宜行智通科技有限公司 A kind of image feature detection method and device based on improvement SIFT
CN110110767A (en) * 2019-04-23 2019-08-09 广州智能装备研究院有限公司 A kind of characteristics of image optimization method, device, terminal device and readable storage medium storing program for executing
CN110148162A (en) * 2019-04-29 2019-08-20 河海大学 A kind of heterologous image matching method based on composition operators

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205562A (en) * 2022-07-22 2022-10-18 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN115205562B (en) * 2022-07-22 2023-03-14 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.