CN113591949A - Standing tree feature point matching method, device, equipment and medium

Info

Publication number: CN113591949A
Authority: CN (China)
Prior art keywords: standing tree, matching, point, image, target
Legal status: Pending
Application number: CN202110810361.2A
Other languages: Chinese (zh)
Inventors: Xu Aijun (徐爱俊), Gu Wenjun (顾雯钧), Guan Xiaofeng (管孝锋)
Current and original assignee: Zhejiang A&F University (ZAFU)
Priority / filing date: 2021-07-19
Publication date: 2021-11-02
Classification (Landscapes): Image Analysis (AREA)
Abstract

The invention belongs to the technical field of image processing and provides a standing tree feature point matching method, device, equipment, and medium. The method comprises the following steps: obtaining a standing tree image and separating a target standing tree image from the background in the standing tree image; detecting standing tree feature points from the target standing tree image based on the SIFT algorithm; extracting color features from the target standing tree image with an ultragreen operator, replacing the color information expressed by the G channel with the extracted color features, and forming a standing tree descriptor; and performing initial matching based on the nearest neighbor distance ratio method and determining correct matching points according to a geometric position constraint. Compared with the SIFT algorithm, which uses only gray information, ignores color information, and therefore discriminates standing tree feature points insufficiently, the technical scheme in the embodiments of the application enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and increases the matching rate of standing tree feature points.

Description

Standing tree feature point matching method, device, equipment and medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a standing tree feature point matching method, device, equipment, and medium.
Background
Image feature point matching is an important research topic in computer vision and can provide a data basis for work such as three-dimensional reconstruction, map navigation, and image registration. Feature points contain important information of the image. At present, the application of image feature point matching in agriculture and forestry has attracted wide attention, and researchers at home and abroad have studied it in depth. For feature point matching of plants in agriculture and forestry, after the plant images are preprocessed, the feature points in the plant images are extracted and matched using the SIFT algorithm, which facilitates subsequent work such as three-dimensional reconstruction, map navigation, and image registration.
However, due to the particularity of the standing tree growing environment, the background is usually complex, and the detected feature points contain a large number of non-target objects, which affects the correct matching of standing tree feature points. Meanwhile, because standing tree leaves have similar structures, the feature descriptors constructed by existing algorithms have insufficient discrimination, so the matching accuracy is low and the matching pairs are too few to meet the requirements of practical application.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a standing tree feature point matching method, device, equipment, and medium to solve the current problems of low matching accuracy and few matching pairs caused by the particularity of the standing tree growing environment.
In a first aspect, the invention provides a standing tree feature point matching method, which includes:
obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image;
detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm;
extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
According to the above technical scheme, the standing tree feature point matching method is provided to solve the problems of low feature point matching accuracy and few matching pairs caused by the complex background and similar leaf structures of standing tree images. First, the standing tree is separated from the background image by the Graph Cut algorithm; the SIFT algorithm is then used to detect and locate the standing tree feature points; next, the ultragreen operator is used to extract the standing tree color features, and feature point descriptors are constructed in combination with the RGB color space to enhance the description capability for standing tree feature points with similar structures; finally, a geometric position constraint is established on the basis of the nearest neighbor distance ratio method to complete standing tree feature point matching. Compared with the SIFT algorithm, which uses only gray information and ignores color information, causing insufficient discrimination of standing tree feature points, this method enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and increases the matching rate of standing tree feature points.
Optionally, the extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor includes:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, the formula for replacing the color information expressed by the G channel with the color feature extracted by the ultragreen operator is as follows:
$$R' = R, \qquad G' = 2G - R - B, \qquad B' = B$$
wherein R ', G ' and B ' are color information of three channels of the target standing tree image after replacement, and 2G-R-B is color information of a G channel replaced by an ultragreen operator.
Optionally, the performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to a geometric position constraint includes:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, finding the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the finding, according to the feature points of the first target standing tree image, the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image and determining the matching point corresponding to the standing tree feature point includes:
calculating the transformation error $e_1$ of the first nearest neighbor matching point $b_i^1$ of feature point $a_i$ in the second target standing tree image; if the transformation error $e_1$ of matching point $b_i^1$ is less than the error threshold $d_t$, confirming matching point $b_i^1$ as a correct matching point;
if the transformation error $e_1$ of matching point $b_i^1$ is greater than or equal to the error threshold $d_t$, calculating the transformation error $e_2$ of matching point $b_i^2$; if the transformation error $e_2$ of matching point $b_i^2$ is less than the error threshold $d_t$, confirming matching point $b_i^2$ as a correct matching point;
if the transformation errors $e_1$ and $e_2$ of matching points $b_i^1$ and $b_i^2$ are both greater than or equal to the error threshold $d_t$, calculating the transformation error $e_3$ of matching point $b_i^3$; if the transformation error $e_3$ of matching point $b_i^3$ is less than the error threshold $d_t$, confirming matching point $b_i^3$ as a correct matching point;
if the transformation errors $e_1$, $e_2$ and $e_3$ of matching points $b_i^1$, $b_i^2$ and $b_i^3$ are all greater than or equal to the error threshold $d_t$, feature point $a_i$ has no corresponding matching point.
Optionally, the calculation formula of the transformation error is:

$$e_j = \left\| P a_i - b_i^j \right\|, \quad j = 1, 2, 3$$

wherein $e_j$ is the transformation error, $P a_i$ is feature point $a_i$ of the first target standing tree image mapped by the transformation matrix $P$, and $b_i^j$ ($j = 1, 2, 3$) are the first three nearest neighbor matching points of $a_i$ in the second target standing tree image.
In a second aspect, the invention provides a standing tree feature point matching device, including:
the image acquisition module is used for acquiring a standing tree image and separating a target standing tree image from a background image in the standing tree image;
the feature point detection module is used for detecting the standing tree feature points according to the target standing tree image based on an SIFT algorithm;
the descriptor generation module is used for extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and the matching module is used for carrying out initial matching based on a nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint.
Optionally, the descriptor generating module is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, the formula for replacing color information expressed by the G channel with color features extracted by the ultragreen operator in the descriptor generating module is as follows:
$$R' = R, \qquad G' = 2G - R - B, \qquad B' = B$$
wherein R ', G ' and B ' are color information of three channels of the target standing tree image after replacement, and 2G-R-B is color information of a G channel replaced by an ultragreen operator.
Optionally, the matching module is specifically configured to:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, finding the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the matching module is specifically further configured to:
calculating the transformation error $e_1$ of the first nearest neighbor matching point $b_i^1$ of feature point $a_i$ in the second target standing tree image; if the transformation error $e_1$ of matching point $b_i^1$ is less than the error threshold $d_t$, confirming matching point $b_i^1$ as a correct matching point;
if the transformation error $e_1$ of matching point $b_i^1$ is greater than or equal to the error threshold $d_t$, calculating the transformation error $e_2$ of matching point $b_i^2$; if the transformation error $e_2$ of matching point $b_i^2$ is less than the error threshold $d_t$, confirming matching point $b_i^2$ as a correct matching point;
if the transformation errors $e_1$ and $e_2$ of matching points $b_i^1$ and $b_i^2$ are both greater than or equal to the error threshold $d_t$, calculating the transformation error $e_3$ of matching point $b_i^3$; if the transformation error $e_3$ of matching point $b_i^3$ is less than the error threshold $d_t$, confirming matching point $b_i^3$ as a correct matching point;
if the transformation errors $e_1$, $e_2$ and $e_3$ of matching points $b_i^1$, $b_i^2$ and $b_i^3$ are all greater than or equal to the error threshold $d_t$, feature point $a_i$ has no corresponding matching point.
Optionally, in the matching module, the calculation formula of the transformation error is:

$$e_j = \left\| P a_i - b_i^j \right\|, \quad j = 1, 2, 3$$

wherein $e_j$ is the transformation error, $P a_i$ is feature point $a_i$ of the first target standing tree image mapped by the transformation matrix $P$, and $b_i^j$ ($j = 1, 2, 3$) are the first three nearest neighbor matching points of $a_i$ in the second target standing tree image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any one of the methods when executing the computer program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of any of the methods described above.
Based on the technical scheme, the method has the following beneficial effects:
1) In the method, the ultragreen operator is used to extract the standing tree color features during feature point extraction, and the feature point descriptor is constructed in combination with the RGB color space, which enhances the description capability for standing tree feature points with similar structures and improves the extraction capability for standing tree feature points.
2) On the basis of the nearest neighbor distance ratio method, the method uses the transformation model to establish a geometric position constraint to complete standing tree feature point matching, so that more actually existing matching pairs can be found in the standing tree feature point set without reducing the correct matching rate of the standing tree feature points.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 shows a flowchart of a standing tree feature point matching method according to an embodiment of the present invention;
FIG. 2 shows the s-t network graph constructed by Graph Cut in the standing tree feature point matching method according to an embodiment of the present invention;
FIG. 3A is a schematic diagram of a standing tree image before segmentation according to an embodiment of the present invention;
FIG. 3B is a schematic diagram of a segmented target standing tree image according to an embodiment of the present invention;
Fig. 4 shows a flowchart of a standing tree feature point matching method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the feature point descriptor generation process according to an embodiment of the present invention;
Fig. 6 shows a flowchart of a standing tree feature point matching method according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the matching process of the standing tree feature point matching method according to an embodiment of the present invention;
Fig. 8 is a block diagram of a standing tree feature point matching device according to an embodiment of the present invention;
Fig. 9 shows a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
For convenience of understanding, terms referred to in the embodiments of the present invention are explained below:
SIFT, Scale Invariant Feature Transform, is a description of local features based on local interest points on an object, independent of image size and rotation, and highly stable and invariant to rotation, scale scaling, and brightness changes. Its tolerance to light, noise, and small viewpoint changes is also quite high. Owing to these characteristics, SIFT features are highly distinctive and relatively easy to retrieve; objects are easily identified and rarely misidentified, even in feature databases holding large numbers of features. The detection rate under partial object occlusion is also quite high, and as few as 3 SIFT features are enough to calculate position and orientation. At current computer hardware speeds and with a small feature database, the recognition speed can approach real-time operation.
RANSAC, Random Sample Consensus, is an iterative algorithm whose main process consists of two parts: hypothesis and test. First, 4 pairs of elements are randomly selected from the data set to form a minimum sample set (MSS), and a candidate model is calculated from these 4 pairs. Next, the remaining data are tested against the candidate model, and those that fit it are retained as inliers. Whenever a model with more inliers than the current best appears, it becomes the optimal model. The algorithm repeats the previous steps until the number of iterations is exceeded or the probability of finding a better model than the current one falls below a threshold.
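To make the hypothesis-and-test loop concrete, the following is a minimal generic RANSAC sketch in Python; the sample size of 4 pairs follows the description above, while the function names (fit_model, point_error) and the defaults for n_iters and threshold are illustrative assumptions, not values from the patent.

```python
# A minimal generic RANSAC sketch, assuming 4-pair sampling as described above;
# fit_model and point_error are caller-supplied placeholders.
import numpy as np

def ransac(data, fit_model, point_error, n_samples=4, n_iters=1000, threshold=10.0):
    best_model, best_inliers = None, []
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Hypothesis: fit a candidate model to a random minimum sample set (MSS).
        idx = rng.choice(len(data), size=n_samples, replace=False)
        model = fit_model([data[i] for i in idx])
        if model is None:
            continue
        # Test: keep the data consistent with the candidate model.
        inliers = [d for d in data if point_error(model, d) < threshold]
        if len(inliers) > len(best_inliers):   # better model becomes the optimum
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```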
Due to the standing tree growing environment, a large amount of noise often exists in the obtained standing tree image, so that certain mismatches occur when obtaining standing tree matching points. To solve this mismatching problem, a standing tree feature point matching method is provided. Fig. 1 shows a flowchart of the standing tree feature point matching method according to an embodiment of the present invention. As shown in fig. 1, the standing tree feature point matching method according to the embodiment of the present invention includes:
s101, obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image.
Specifically, because the background of the standing tree image shot in the natural environment is complex, in order to avoid the influence of the background feature point on the matching of the standing tree feature point, the Graph Cut segmentation algorithm is used for separating the target standing tree from the background image. According to the algorithm, an s-t network graph is constructed, referring to fig. 2, each pixel corresponds to a vertex in the s-t network graph, a foreground target corresponds to s, a background corresponds to t, each edge is endowed with a non-negative weight, and the optimal segmentation problem of the image is converted into the problem of solving the energy function minimization. The segmentation result of the standing tree image is shown in fig. 3, where fig. 3A is the original standing tree image and fig. 3B is the segmented target standing tree image.
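As one concrete possibility, the foreground/background separation in the spirit of the s-t network graph above can be sketched with OpenCV's GrabCut, itself a Graph-Cut-based segmenter; note this is a hedged stand-in for the patent's Graph Cut step, and the initializing rectangle rect is a hypothetical input, not something the patent specifies.

```python
# A hedged sketch of foreground/background separation using OpenCV's GrabCut,
# a Graph-Cut-based segmenter; rect roughly encloses the target standing tree.
import cv2
import numpy as np

def separate_standing_tree(image_bgr, rect):
    """rect = (x, y, w, h); returns the image with the background zeroed out."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal background GMM state
    fgd_model = np.zeros((1, 65), np.float64)   # internal foreground GMM state
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Definite or probable foreground pixels form the target standing tree image.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return image_bgr * fg[:, :, None]
```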
S102, detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm.
Specifically, the target standing tree image is converted into a gray image, defined as I (x, y), and a variable-scale two-dimensional gaussian function G (x, y, σ) is used to perform convolution operation with the gray image, so as to obtain an image L (x, y, σ).
$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$$
Wherein G (x, y, σ) is defined as:
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$
σ is the scale factor; its value determines the size of the corresponding scale space, and it also reflects the degree of smoothing of the image. As the scale parameter σ changes continuously, L(x, y, σ) constitutes the standing tree scale space. In order to detect more stable standing tree feature points in the scale space, two adjacent images in the scale space are subtracted to form the Gaussian difference scale space image D(x, y, σ):
$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$$
In the three-dimensional scale space image, local extreme points are searched for and the standing tree feature points are located. The SIFT algorithm constructs a Taylor expansion of the scale space function to perform curve fitting:
$$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X, \qquad X = (x, y, \sigma)^T$$
Taking the derivative of D(X) with respect to X and setting it equal to 0 gives the extreme point coordinates, thereby determining the positions of the standing tree feature points. Unstable feature points located on image edges need to be deleted; such points have a large principal curvature across the edge and a small principal curvature along the edge, which can be judged from the two eigenvalues of the two-dimensional Hessian matrix.
Wherein, the two-dimensional Hessian matrix is:
$$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}$$

wherein $D_{xx}$ is the second partial derivative of a point in the image in the x direction, $D_{yy}$ is the second partial derivative in the y direction, and $D_{xy}$ is the mixed partial derivative taken first in the x direction and then in the y direction. The eigenvalues of H are proportional to the principal curvatures of D.
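The eigenvalue test above is conventionally carried out through the trace and determinant of H rather than by computing the eigenvalues explicitly; the sketch below follows that standard SIFT practice, with the curvature-ratio limit r_edge = 10 being the usual SIFT default, assumed here rather than specified by the patent.

```python
# The edge-response test via the trace and determinant of the 2x2 Hessian
# of the DoG image D, following standard SIFT practice.
import numpy as np

def passes_edge_test(D, x, y, r_edge=10.0):
    """D: 2D DoG array; (x, y): candidate feature point location."""
    dxx = D[y, x + 1] + D[y, x - 1] - 2.0 * D[y, x]
    dyy = D[y + 1, x] + D[y - 1, x] - 2.0 * D[y, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:
        return False                      # curvatures of opposite sign: discard
    # Tr(H)^2 / Det(H) grows with the ratio of the two principal curvatures,
    # so edge-like points (one large, one small curvature) fail this bound.
    return tr * tr / det < (r_edge + 1.0) ** 2 / r_edge
```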
S103, extracting color features from the target standing tree image based on the ultragreen operator, replacing the color information expressed by the G channel with the extracted color features, and forming a standing tree descriptor.
Specifically, after the standing tree feature points are detected by using the SIFT algorithm, descriptors need to be constructed for the feature points to determine the corresponding relationship between the feature points in the matching stage. Because the stumpage leaves have similar structures, the feature descriptors constructed by the SIFT algorithm have insufficient description capacity on stumpage feature points, and the mismatching rate is high. Therefore, the method and the device use the ultragreen operator to extract the standing tree color features and combine the RGB color space to construct the standing tree feature point descriptor so as to enhance the description capability of the standing tree feature points with similar structures.
And S104, performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
Specifically, determining the correspondence of feature points in the two images is the core of feature point matching. The nearest neighbor distance ratio method can reduce the mismatching rate, but due to the complexity of standing trees, even with the increased description capability of the descriptor, correct matching pairs may still be deleted; therefore, standing tree feature point matching is completed through a geometric position constraint on the basis of the initial matching performed by the nearest neighbor distance ratio method. Compared with the SIFT algorithm, which uses only gray information and ignores color information, causing insufficient discrimination of standing tree feature points, this method enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and increases the matching rate of standing tree feature points.
Optionally, referring to fig. 4, step S103 specifically includes:
and S1031, representing the target standing tree image by R, G, B color components.
S1032, replacing color information expressed by the G channel with the color features extracted by the super-green operator.
Specifically, on the basis that the target standing tree image is represented by R, G, B color components, the color information of the G channel is replaced by the color features extracted by the ultragreen operator, the proportion of the G component is increased to a certain extent, and the color information of green leaves is enhanced.
Further, a substitution formula for substituting color information expressed by the G channel by the color features extracted by the ultragreen operator is as follows:
$$R' = R, \qquad G' = 2G - R - B, \qquad B' = B$$
wherein R ', G ' and B ' are color information of three channels of the target standing tree image after replacement, and 2G-R-B is color information of a G channel replaced by an ultragreen operator.
The color information expressed by the G channel is replaced by the color feature extracted by the ultragreen operator, so that the green feature of the stumpage is enhanced, the method can be effectively applied to image processing of the stumpage, the description capability of the feature points is improved, and the accuracy of feature point processing is improved.
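The channel substitution above reduces to a few lines of array arithmetic; the following numpy sketch applies R' = R, G' = 2G - R - B, B' = B, with the clipping back to the 0-255 range being an implementation assumption, since 2G - R - B can leave the valid pixel range.

```python
# A direct numpy sketch of the substitution: the G channel is replaced by the
# ultragreen (excess-green) feature 2G - R - B; clipping is an assumption.
import numpy as np

def ultragreen_transform(image_rgb):
    """image_rgb: uint8 H x W x 3 array in R, G, B channel order."""
    rgb = image_rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out = rgb.copy()
    out[..., 1] = 2.0 * g - r - b      # G' = 2G - R - B enhances green foliage
    return np.clip(out, 0, 255).astype(np.uint8)
```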
And S1033, calculating the gradient amplitude and the gradient argument of each pixel point in the three transformed channels.
In specific implementation, the following formulas are adopted for each transformed channel $c \in \{0, 1, 2\}$, corresponding to R', G' and B':

$$m(x, y, c) = \sqrt{\big(L(x+1, y, c) - L(x-1, y, c)\big)^2 + \big(L(x, y+1, c) - L(x, y-1, c)\big)^2}$$

$$\theta(x, y, c) = \tan^{-1}\frac{L(x, y+1, c) - L(x, y-1, c)}{L(x+1, y, c) - L(x-1, y, c)}$$

wherein L(x, y, 0), L(x, y, 1) and L(x, y, 2) are the pixel values of the image on the three channels R', G' and B'; m(x, y, 0), m(x, y, 1) and m(x, y, 2) are the gradient amplitudes obtained on the three channels; and θ(x, y, 0), θ(x, y, 1) and θ(x, y, 2) are the gradient arguments obtained on the three channels.
S1034, determining the main directions of the feature points of the three channels and the feature point descriptors to form the feature point descriptors.
Specifically, as shown in fig. 5, the main direction of the feature points of each channel is determined according to the gradient amplitude and gradient argument of each pixel point in the three channels determined in step S1033. Taking 10 degrees as one direction in the range of 0-360 degrees gives 36 directions in total; taking the feature point as the center, a gradient direction histogram within the neighborhood radius is counted according to the gradient amplitudes and gradient arguments, and the direction with the largest peak among the 36 directions is the main direction of the feature point. Feature point descriptors for each channel are then generated: the region around the feature point is rotated to the main direction, the rotated region is divided into 4 x 4 sub-regions, each sub-region is divided into 8 directions, and a gradient direction histogram is counted, yielding a 4 x 4 x 8 = 128-dimensional feature point descriptor. Finally, the 128-dimensional feature point descriptors generated on the three color channels R', G' and B' are combined to form the 384-dimensional standing tree feature point descriptor.
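A compact way to sketch the resulting 384-dimensional descriptor is to run a standard 128-dimensional SIFT descriptor computation on each transformed channel and concatenate the results; reusing OpenCV's SIFT per channel is an approximation of the per-channel gradient and histogram procedure above, and it assumes the same keypoints are retained on every channel.

```python
# A sketch (not the patent's exact procedure) of assembling the 384-dimensional
# standing tree descriptor from per-channel 128-dimensional SIFT descriptors.
import cv2
import numpy as np

def standing_tree_descriptors(image_transformed, keypoints):
    """image_transformed: uint8 H x W x 3 array of channels R', G', B'."""
    sift = cv2.SIFT_create()
    per_channel = []
    for c in range(3):                                  # channels R', G', B'
        channel = np.ascontiguousarray(image_transformed[..., c])
        _, desc = sift.compute(channel, keypoints)      # 128 dims per keypoint
        per_channel.append(desc)
    # Assumes the same keypoints survive on every channel; 3 x 128 = 384 dims.
    return np.hstack(per_channel)
```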
Optionally, referring to fig. 6, step S104 specifically includes:
s1041, performing initial matching on the feature points of the two target standing tree images, and determining an initial matching pair.
Specifically, in this step the feature points of the two target standing tree images are initially matched using the nearest neighbor distance ratio method, with the Euclidean distance adopted as the distance metric between feature point descriptors in the image pair. If the ratio of the Euclidean distance $d_1$ to a feature point's nearest neighbor and the Euclidean distance $d_2$ to its next nearest neighbor is less than or equal to the threshold r, the feature point is successfully matched to its nearest neighbor in the second target standing tree image; otherwise, the matching fails. Specifically:

$$\frac{d_1}{d_2} \leq r$$

It should be noted that the threshold r in the above formula is set to 0.8 in the present application, 0.8 being the recommended threshold of the SIFT algorithm; the setting of the threshold r may be adjusted adaptively according to the actual situation.
S1042, based on RANSAC algorithm, the initial matching pair is used for iterative computation of a transformation matrix P, and a transformation model of the target standing tree image matching pair is obtained.
Specifically, the RANSAC algorithm uses the initial matching pairs to iteratively calculate the transformation matrix P, thereby obtaining the transformation model of the standing tree image pair, which represents the geometric position relation of the feature points in the standing tree image pair.
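A sketch of this step with OpenCV follows; treating the transformation matrix P as a homography estimated by RANSAC from the initial matching pairs is an assumption consistent with the 4-pair sampling described earlier, and the 5.0-pixel reprojection threshold is likewise illustrative.

```python
# A sketch of estimating the transformation model P from the initial matches;
# the homography form and the 5.0 threshold are illustrative assumptions.
import cv2
import numpy as np

def estimate_transform(kp1, kp2, matches):
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    P, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return P, inlier_mask
```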
S1043, according to the feature points of the first target standing tree image, finding the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Specifically, the matching point corresponding to a feature point of the first target standing tree image is determined in the second target standing tree image according to the transformation error $e_j$; feature points of the second target standing tree image that were missed by the nearest neighbor distance ratio method in the initial matching are reconfirmed as corresponding matching pairs through the calculation of their transformation errors.
By calculating the transformation error $e_j$, it is judged whether the transformation error of each of the first three nearest neighbor matching points is less than the threshold $d_t$; if a transformation error $e_j$ is less than $d_t$, the corresponding matching point of the feature point in the first target standing tree image is confirmed. This feature point matching method based on the geometric position constraint can find many correct matching pairs that the nearest neighbor distance ratio method ignores, and effectively increases the number of correct feature point pairs without introducing a large number of mismatching pairs, so that the correct matching rate is further improved.
Optionally, step S1043 specifically includes:
calculating the transformation error $e_1$ of the first nearest neighbor matching point $b_i^1$ of feature point $a_i$ in the second target standing tree image; if the transformation error $e_1$ of matching point $b_i^1$ is less than the error threshold $d_t$, confirming matching point $b_i^1$ as a correct matching point;
if the transformation error $e_1$ of matching point $b_i^1$ is greater than or equal to the error threshold $d_t$, calculating the transformation error $e_2$ of matching point $b_i^2$; if the transformation error $e_2$ of matching point $b_i^2$ is less than the error threshold $d_t$, confirming matching point $b_i^2$ as a correct matching point;
if the transformation errors $e_1$ and $e_2$ of matching points $b_i^1$ and $b_i^2$ are both greater than or equal to the error threshold $d_t$, calculating the transformation error $e_3$ of matching point $b_i^3$; if the transformation error $e_3$ of matching point $b_i^3$ is less than the error threshold $d_t$, confirming matching point $b_i^3$ as a correct matching point;
if the transformation errors $e_1$, $e_2$ and $e_3$ of matching points $b_i^1$, $b_i^2$ and $b_i^3$ are all greater than or equal to the error threshold $d_t$, feature point $a_i$ has no corresponding matching point.
Specifically, for the first three nearest neighbor matching points $b_i^1$, $b_i^2$ and $b_i^3$ (referring to fig. 7), the transformation errors $e_j$ are calculated in turn. Once a transformation error $e_j$ less than the error threshold $d_t$ occurs, the calculation for the subsequent neighbor matching points is stopped, and the neighbor matching point whose transformation error satisfies $e_j < d_t$ is determined to be the matching point corresponding to the feature point in the first target standing tree image. The aim of this step is to re-determine points missed in the initial matching as correct matching points through the geometric position constraint, avoiding the deletion of correct matching points and improving the matching rate.
It should be noted that, in the present application, the error threshold $d_t$ is set to 10. As verified by multiple experiments, an error threshold of 10 increases the number of correct matching pairs while the accuracy remains high. The error threshold $d_t$ may be adapted to the matching object.
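Putting the geometric position constraint together, the sketch below checks the first three nearest neighbors of a feature point in order and accepts the first whose transformation error falls below d_t = 10; the Euclidean form of the error e_j = ||P a_i - b_i^j|| and the homogeneous-coordinate mapping are reconstructed from the description rather than quoted from the patent.

```python
# A sketch of the geometric position constraint: candidates are b_i^1..b_i^3
# in nearest-first order; the error form is reconstructed from the description.
import numpy as np

def match_by_geometric_constraint(P, a_i, candidates, d_t=10.0):
    """a_i: (x, y) in image 1; candidates: [b_i^1, b_i^2, b_i^3] in image 2."""
    pa = P @ np.array([a_i[0], a_i[1], 1.0])
    pa = pa[:2] / pa[2]                        # a_i mapped into the second image
    for b in candidates:                       # checked in nearest-first order
        e_j = np.linalg.norm(pa - np.asarray(b, dtype=float))
        if e_j < d_t:                          # stop at the first acceptable error
            return b
    return None                                # a_i has no corresponding matching point
```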
Optionally, the transformation error is calculated by:

$$e_j = \left\| P a_i - b_i^j \right\|, \quad j = 1, 2, 3$$

wherein $e_j$ is the transformation error, $P a_i$ is feature point $a_i$ of the first target standing tree image mapped by the transformation matrix $P$, and $b_i^j$ ($j = 1, 2, 3$) are the first three nearest neighbor matching points of $a_i$ in the second target standing tree image.
In one embodiment, there is provided a standing timber feature point matching device 20, comprising:
the image acquisition module 201 is configured to acquire a standing tree image and separate a target standing tree image from a background image in the standing tree image;
the feature point detection module 202 is configured to detect a standing tree feature point based on a SIFT algorithm according to the target standing tree image;
the descriptor generation module 203 is configured to extract color features from the target standing tree image based on an ultragreen operator, replace the color information expressed by the G channel with the color features extracted by the ultragreen operator, and form a standing tree descriptor;
and the matching module 204 is configured to perform initial matching based on a nearest neighbor distance ratio method, and determine a correct matching point according to geometric position constraint.
Optionally, the descriptor generating module 203 is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, the formula for replacing the color information expressed by the G channel with the color feature extracted by the ultragreen operator in the descriptor generating module 203 is as follows:
$$R' = R, \qquad G' = 2G - R - B, \qquad B' = B$$
wherein R ', G ' and B ' are color information of three channels of the target standing tree image after replacement, and 2G-R-B is color information of a G channel replaced by an ultragreen operator.
Optionally, the matching module 204 is specifically configured to:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, finding the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the matching module 204 is specifically further configured to:
calculating the transformation error $e_1$ of the first nearest neighbor matching point $b_i^1$ of feature point $a_i$ in the second target standing tree image; if the transformation error $e_1$ of matching point $b_i^1$ is less than the error threshold $d_t$, confirming matching point $b_i^1$ as a correct matching point;
if the transformation error $e_1$ of matching point $b_i^1$ is greater than or equal to the error threshold $d_t$, calculating the transformation error $e_2$ of matching point $b_i^2$; if the transformation error $e_2$ of matching point $b_i^2$ is less than the error threshold $d_t$, confirming matching point $b_i^2$ as a correct matching point;
if the transformation errors $e_1$ and $e_2$ of matching points $b_i^1$ and $b_i^2$ are both greater than or equal to the error threshold $d_t$, calculating the transformation error $e_3$ of matching point $b_i^3$; if the transformation error $e_3$ of matching point $b_i^3$ is less than the error threshold $d_t$, confirming matching point $b_i^3$ as a correct matching point;
if the transformation errors $e_1$, $e_2$ and $e_3$ of matching points $b_i^1$, $b_i^2$ and $b_i^3$ are all greater than or equal to the error threshold $d_t$, feature point $a_i$ has no corresponding matching point.
Optionally, in the matching module 204, the calculation formula of the transformation error is:

$$e_j = \left\| P a_i - b_i^j \right\|, \quad j = 1, 2, 3$$

wherein $e_j$ is the transformation error, $P a_i$ is feature point $a_i$ of the first target standing tree image mapped by the transformation matrix $P$, and $b_i^j$ ($j = 1, 2, 3$) are the first three nearest neighbor matching points of $a_i$ in the second target standing tree image.
The standing tree feature point matching device 20 provided in the embodiment of the present application and the standing tree feature point matching method described above adopt the same inventive concept, and can obtain the same beneficial effects, which are not described herein again.
Based on the same inventive concept as the standing tree feature point matching method, the embodiment of the present application further provides an electronic device 30. As shown in fig. 9, the electronic device 30 may include a processor 301 and a memory 302.
The Processor 301 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Memory 302, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The Memory may include at least one type of storage medium, for example, a flash Memory, a hard disk, a multimedia card, a card-type Memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a magnetic Memory, a magnetic disk, an optical disk, and so on. The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 302 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; the computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. While the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should all be covered by the claims and the description of the present invention.

Claims (10)

1. A standing tree feature point matching method is characterized by comprising the following steps:
obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image;
detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm;
extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
2. The method of claim 1, wherein the extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor comprises:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
3. The method of claim 2, wherein the formula for replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator is:

$$R' = R, \qquad G' = 2G - R - B, \qquad B' = B$$
wherein R ', G ' and B ' are color information of three channels of the target standing tree image after replacement, and 2G-R-B is color information of a G channel replaced by an ultragreen operator.
4. The method of claim 1, wherein the initial matching based on nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint comprises:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, finding the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
5. The method according to claim 4, wherein the finding, according to the feature points of the first target standing tree image, the transformation error $e_j$ of the nearest neighbor matching points in the second target standing tree image and determining the matching point corresponding to the standing tree feature point comprises:
calculating the transformation error $e_1$ of the first nearest neighbor matching point $b_i^1$ of feature point $a_i$ in the second target standing tree image; if the transformation error $e_1$ of matching point $b_i^1$ is less than the error threshold $d_t$, confirming matching point $b_i^1$ as a correct matching point;
if the transformation error $e_1$ of matching point $b_i^1$ is greater than or equal to the error threshold $d_t$, calculating the transformation error $e_2$ of matching point $b_i^2$; if the transformation error $e_2$ of matching point $b_i^2$ is less than the error threshold $d_t$, confirming matching point $b_i^2$ as a correct matching point;
if the transformation errors $e_1$ and $e_2$ of matching points $b_i^1$ and $b_i^2$ are both greater than or equal to the error threshold $d_t$, calculating the transformation error $e_3$ of matching point $b_i^3$; if the transformation error $e_3$ of matching point $b_i^3$ is less than the error threshold $d_t$, confirming matching point $b_i^3$ as a correct matching point;
if the transformation errors $e_1$, $e_2$ and $e_3$ of matching points $b_i^1$, $b_i^2$ and $b_i^3$ are all greater than or equal to the error threshold $d_t$, determining that feature point $a_i$ has no corresponding matching point.
6. The method of claim 5, wherein the transformation error is calculated by:

$$e_j = \left\| P a_i - b_i^j \right\|, \quad j = 1, 2, 3$$

wherein $e_j$ is the transformation error, $P a_i$ is feature point $a_i$ of the first target standing tree image mapped by the transformation matrix $P$, and $b_i^j$ ($j = 1, 2, 3$) are the first three nearest neighbor matching points of $a_i$ in the second target standing tree image.
7. A standing tree feature point matching device is characterized by comprising:
the image acquisition module is used for acquiring a standing tree image and separating a target standing tree image from a background image in the standing tree image;
the feature point detection module is used for detecting the standing tree feature points according to the target standing tree image based on an SIFT algorithm;
the descriptor generation module is used for extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and the matching module is used for carrying out initial matching based on a nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint.
8. The apparatus of claim 7, wherein the descriptor generating module is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
CN202110810361.2A 2021-07-19 Standing tree feature point matching method, device, equipment and medium Pending CN113591949A (en)

Priority Application (1)

Application Number: CN202110810361.2A; Priority / Filing Date: 2021-07-19; Title: Standing tree feature point matching method, device, equipment and medium

Publication (1)

Publication Number: CN113591949A; Publication Date: 2021-11-02

Family ID: 78247906

Country Status (1): CN (1) CN113591949A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN111695498A * 2020-06-10 2020-09-22 Southwest Forestry University Wood identity detection method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination