CN113591949A - Standing tree feature point matching method, device, equipment and medium - Google Patents
- Publication number
- CN113591949A (application CN202110810361.2A)
- Authority
- CN
- China
- Prior art keywords
- standing tree
- matching
- point
- image
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image processing, and provides a standing tree feature point matching method, device, equipment and medium. The method comprises the following steps: obtaining a standing tree image, and separating a target standing tree image from the background in the standing tree image; detecting standing tree feature points in the target standing tree image based on the SIFT algorithm; extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor; and performing initial matching based on the nearest neighbor distance ratio method, and determining correct matching points according to a geometric position constraint. Compared with the SIFT algorithm, which uses only gray information and therefore ignores color information, leaving the standing tree feature points insufficiently distinguishable, the technical scheme of the embodiments of the application enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and improves the matching rate of the standing tree feature points.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a standing tree feature point matching method, device, equipment and medium.
Background
Image feature point matching is an important research topic in computer vision and can provide a data basis for work such as three-dimensional reconstruction, map navigation and image registration. The feature points contain important information of the image. At present, the application of image feature point matching in agriculture and forestry has received wide attention, and researchers at home and abroad have studied it in depth. For feature point matching of plants in agriculture and forestry, after the plant images are preprocessed, the feature points in the plant images are extracted and matched using the SIFT algorithm, which facilitates subsequent work such as three-dimensional reconstruction, map navigation and image registration.
However, due to the particularity of the standing tree growing environment, the background of a standing tree image is usually complex, and the detected feature points contain a large number of non-target objects, which affects the correct matching of standing tree feature points. Meanwhile, because standing tree leaves have similar structures, the feature descriptors constructed by existing algorithms have insufficient discrimination, so the matching accuracy is low, and the small number of matching pairs cannot meet the requirements of practical application.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a standing tree feature point matching method, device, equipment and medium, which are used for solving the current problems of low matching accuracy and few matching pairs caused by the particularity of the standing tree growing environment.
In a first aspect, the invention provides a standing tree feature point matching method, which includes:
obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image;
detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm;
extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
According to the technical scheme, the standing tree feature point matching method is provided to solve the problems of low feature point matching accuracy and few matching pairs caused by the complex background and similar leaf structures of standing tree images. Firstly, the standing tree is separated from the background image by the Graph Cut algorithm; the standing tree feature points are detected and located by the SIFT algorithm; then, the standing tree color features are extracted by an ultragreen operator, and feature point descriptors are constructed in combination with the RGB color space to enhance the description capability for standing tree feature points with similar structures; finally, a geometric position constraint is established on the basis of the nearest neighbor distance ratio method to complete standing tree feature point matching. Compared with the SIFT algorithm, which uses only gray information and ignores color information, resulting in insufficient distinguishability of the standing tree feature points, this method enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and improves the matching rate of the standing tree feature points.
Optionally, the extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor includes:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, the formula for replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator is as follows:
R' = R, G' = 2G - R - B, B' = B
wherein R', G' and B' are the color information of the three channels of the target standing tree image after replacement, and 2G - R - B is the ultragreen-operator color information that replaces the G channel.
Optionally, the performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to a geometric position constraint includes:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, searching the transformation error e_j of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the searching, according to the feature points of the first target standing tree image, the transformation error e_j of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points, includes:
calculating the transformation error e_1 of the first nearest neighbor matching point b_i1 corresponding to the feature point a_i in the second target standing tree image; if the transformation error e_1 of the matching point b_i1 is less than the error threshold d_t, confirming that the matching point b_i1 is a correct matching point;
if the transformation error e_1 of the matching point b_i1 is greater than or equal to the error threshold d_t, calculating the transformation error e_2 of the second nearest neighbor matching point b_i2; if the transformation error e_2 of the matching point b_i2 is less than the error threshold d_t, confirming that the matching point b_i2 is a correct matching point;
if the transformation errors e_1 and e_2 of the matching points b_i1 and b_i2 are both greater than or equal to the error threshold d_t, calculating the transformation error e_3 of the third nearest neighbor matching point b_i3; if the transformation error e_3 of the matching point b_i3 is less than the error threshold d_t, confirming that the matching point b_i3 is a correct matching point;
if the transformation errors e_1, e_2 and e_3 of the matching points b_i1, b_i2 and b_i3 are all not less than the error threshold d_t, the feature point a_i has no corresponding matching point.
Optionally, the calculation formula of the transformation error is:
e_j = ||P a_i - b_ij||, j = 1, 2, 3
wherein e_j is the transformation error, P a_i is the position of the feature point a_i of the first target standing tree image after mapping by the transformation matrix P, and b_i1, b_i2 and b_i3 are the first three nearest neighbor matching points of the feature point a_i in the second target standing tree image.
In a second aspect, the invention provides a standing tree feature point matching device, including:
the image acquisition module is used for acquiring a standing tree image and separating a target standing tree image from a background image in the standing tree image;
the feature point detection module is used for detecting the standing tree feature points according to the target standing tree image based on an SIFT algorithm;
the descriptor generation module is used for extracting color features from the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and the matching module is used for carrying out initial matching based on a nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint.
Optionally, the descriptor generating module is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, the formula used in the descriptor generation module for replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator is as follows:
R' = R, G' = 2G - R - B, B' = B
wherein R', G' and B' are the color information of the three channels of the target standing tree image after replacement, and 2G - R - B is the ultragreen-operator color information that replaces the G channel.
Optionally, the matching module is specifically configured to:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, searching the transformation error e_j of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the matching module is specifically further configured to:
calculating the transformation error e_1 of the first nearest neighbor matching point b_i1 corresponding to the feature point a_i in the second target standing tree image; if the transformation error e_1 of the matching point b_i1 is less than the error threshold d_t, confirming that the matching point b_i1 is a correct matching point;
if the transformation error e_1 of the matching point b_i1 is greater than or equal to the error threshold d_t, calculating the transformation error e_2 of the second nearest neighbor matching point b_i2; if the transformation error e_2 of the matching point b_i2 is less than the error threshold d_t, confirming that the matching point b_i2 is a correct matching point;
if the transformation errors e_1 and e_2 of the matching points b_i1 and b_i2 are both greater than or equal to the error threshold d_t, calculating the transformation error e_3 of the third nearest neighbor matching point b_i3; if the transformation error e_3 of the matching point b_i3 is less than the error threshold d_t, confirming that the matching point b_i3 is a correct matching point;
if the transformation errors e_1, e_2 and e_3 of the matching points b_i1, b_i2 and b_i3 are all not less than the error threshold d_t, the feature point a_i has no corresponding matching point.
Optionally, in the matching module, the calculation formula of the transformation error is:
e_j = ||P a_i - b_ij||, j = 1, 2, 3
wherein e_j is the transformation error, P a_i is the position of the feature point a_i of the first target standing tree image after mapping by the transformation matrix P, and b_i1, b_i2 and b_i3 are the first three nearest neighbor matching points of the feature point a_i in the second target standing tree image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any one of the methods when executing the computer program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of any of the methods described above.
Based on the technical scheme, the method has the following beneficial effects:
1) During feature point extraction, the method uses the super-green operator to extract the standing tree color features and constructs the feature point descriptor in combination with the RGB color space, which enhances the description capability for standing tree feature points with similar structures and improves the extraction capability for standing tree feature points.
2) On the basis of the nearest neighbor distance ratio method, the method uses the transformation model to establish a geometric position constraint and complete standing tree feature point matching, so that more actually existing matching pairs can be found in the standing tree feature point set without reducing the correct matching rate of the standing tree feature points.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the detailed description are briefly introduced below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 shows a flowchart of a stumpage feature point matching method according to an embodiment of the present invention;
FIG. 2 shows an s-t network diagram constructed by Graph cut of the stumpage feature point matching method provided by the embodiment of the invention;
FIG. 3A is a schematic diagram illustrating an image of a pre-segmented stumpage provided in accordance with an embodiment of the present invention;
FIG. 3B is a schematic diagram illustrating a segmented target standing tree image according to an embodiment of the present invention;
fig. 4 shows a flowchart of a stumpage feature point matching method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a feature point descriptor generation process provided by an embodiment of the present invention;
fig. 6 shows a flowchart of a stumpage feature point matching method according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a matching process of the stumpage feature point matching method according to an embodiment of the present invention;
fig. 8 is a block diagram illustrating a structure of a standing timber feature point matching device according to an embodiment of the present invention;
fig. 9 shows a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
For convenience of understanding, terms referred to in the embodiments of the present invention are explained below:
SIFT, Scale Invariant Feature Transform: a description of local features based on local interest points on an object, independent of the size and rotation of the image. SIFT features are highly stable under rotation, scale change and brightness change, and are also quite tolerant to illumination, noise and small viewpoint changes. Owing to these properties, they are highly distinctive, relatively easy to retrieve, and rarely mismatched even in large feature databases. The detection rate under partial object occlusion is also quite high; even three or more SIFT features are enough to compute position and orientation. With current computer hardware and a small feature database, the recognition speed can approach real time.
RANSAC, Random Sample Consensus: an iterative algorithm whose main process consists of two parts, hypothesis and test. First, 4 pairs of elements are randomly selected from the data set to form a minimal sample set (MSS, minimum sample set), and a candidate model is computed from these 4 pairs. Next, each datum outside the sample is checked against the candidate model, and the data that fit the model are retained. Whenever a model appears whose number of consistent correspondences is greater than that of the current best model, it becomes the optimal model. The algorithm repeats the previous steps until the number of iterations is exceeded or the probability of finding a better model than the current one falls below a threshold.
Due to the growing environment of the standing tree, a large amount of noise often exists in the obtained standing tree image, so that a certain amount of mismatching occurs when obtaining standing tree matching points. To solve this mismatching problem, a standing tree feature point matching method is provided. Fig. 1 shows a flowchart of a standing tree feature point matching method according to an embodiment of the present invention. As shown in fig. 1, the standing tree feature point matching method according to the embodiment of the present invention includes:
s101, obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image.
Specifically, because the background of a standing tree image shot in the natural environment is complex, in order to avoid the influence of background feature points on the matching of standing tree feature points, the Graph Cut segmentation algorithm is used to separate the target standing tree from the background image. The algorithm constructs an s-t network graph, as shown in fig. 2: each pixel corresponds to a vertex in the s-t network graph, the foreground target corresponds to s, the background corresponds to t, and each edge is given a non-negative weight, so that the optimal segmentation of the image is converted into the problem of minimizing an energy function. The segmentation result of the standing tree image is shown in fig. 3, where fig. 3A is the original standing tree image and fig. 3B is the segmented target standing tree image.
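As an illustration (not the patent's implementation), the energy-minimization objective behind Graph Cut can be sketched on a toy one-dimensional "image": every foreground/background labeling is scored by a data term plus a smoothness term, and the labeling with minimum energy is the segmentation that an s-t min cut computes efficiently. The foreground/background means, the smoothness weight and the brute-force search are all assumptions made for the sketch.

```python
import itertools

# Toy 1-D "image": pixel intensities in [0, 255].
pixels = [20, 25, 30, 200, 210, 205, 35, 30]

FG_MEAN, BG_MEAN = 210.0, 28.0   # assumed foreground/background intensity models
LAMBDA = 0.5                     # assumed smoothness weight

def energy(labels):
    # Data term: cost of assigning each pixel to foreground (1) or background (0).
    data = sum(abs(p - (FG_MEAN if l else BG_MEAN)) for p, l in zip(pixels, labels))
    # Smoothness term: penalize label changes between neighboring pixels.
    smooth = LAMBDA * sum(255 * (a != b) for a, b in zip(labels, labels[1:]))
    return data + smooth

# Brute force over all labelings gives the exact minimum of the energy;
# Graph Cut finds the same minimum via an s-t min cut in polynomial time.
best = min(itertools.product([0, 1], repeat=len(pixels)), key=energy)
print(best)  # the bright middle run is labeled foreground
```

On this toy input the minimizer labels the three bright pixels as foreground and everything else as background, which is exactly the trade-off the s-t network graph encodes with its edge weights.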
S102, detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm.
Specifically, the target standing tree image is converted into a gray image, defined as I(x, y), and a variable-scale two-dimensional Gaussian function G(x, y, σ) is convolved with the gray image to obtain the image L(x, y, σ) = G(x, y, σ) * I(x, y).
Wherein G(x, y, σ) is defined as:
G(x, y, σ) = (1 / (2πσ^2)) exp(-(x^2 + y^2) / (2σ^2))
σ is the scale factor; its value determines the size of the corresponding scale space, and it also reflects the degree of smoothing of the image. As the scale parameter σ varies continuously, L(x, y, σ) forms the standing tree scale-space image. In order to detect more stable standing tree feature points in the scale-space image, adjacent images in the scale space are subtracted to form the difference-of-Gaussian scale-space image D(x, y, σ), where
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)
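A hedged sketch of the Gaussian and difference-of-Gaussian kernels; the values σ = 1.6 and k = sqrt(2) are common SIFT choices assumed here, not parameters stated in the patent.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # G(x, y, sigma) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2) / (2*sigma^2)),
    # sampled on a size x size grid centered at the origin.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

sigma, k = 1.6, 2 ** 0.5          # assumed SIFT-style scale parameters
G1 = gaussian_kernel(15, sigma)
G2 = gaussian_kernel(15, k * sigma)

# Convolving I with (G2 - G1) yields L(x,y,k*sigma) - L(x,y,sigma),
# i.e. one layer of the difference-of-Gaussian scale space D(x,y,sigma).
DoG = G2 - G1
print(G1.sum())    # close to 1: the sampled kernel is (almost) normalized
print(DoG.sum())   # close to 0: the DoG kernel acts as a band-pass filter
```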
and in the three-dimensional scale space image, local extreme point searching and standing tree feature point positioning are carried out. The SIFT algorithm utilizes a scale space function to construct a Taylor expansion to perform curve fitting.
And obtaining extreme point coordinates by taking a derivative of D (X) and X (sigma) and enabling an equation value to be equal to 0, so as to determine the position of the standing timber feature point. The feature points which are positioned at the edge of the image and are unstable need to be deleted, have larger main curvature at the position crossing the edge and have smaller main curvature at the position vertical to the edge, and can be obtained by judging two feature values of a two-dimensional Hessian matrix.
Wherein, the two-dimensional Hessian matrix is:
H = | D_xx  D_xy |
    | D_xy  D_yy |
wherein D_xx is the second partial derivative of the image at the point in the x direction, D_yy is the second partial derivative in the y direction, and D_xy is the mixed partial derivative taken first in the x direction and then in the y direction. The eigenvalues of H are proportional to the principal curvatures of D.
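The edge-rejection test implied by the Hessian eigenvalue check can be sketched as follows; the curvature-ratio threshold r = 10 is an assumption borrowed from the standard SIFT formulation, not a value stated in the patent.

```python
def is_edge_like(Dxx, Dyy, Dxy, r=10.0):
    # Edge points have one large and one small principal curvature, so the
    # ratio Tr(H)^2 / Det(H) grows; reject when it reaches (r+1)^2 / r.
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy * Dxy
    if det <= 0:               # curvatures of opposite sign: discard the point
        return True
    return tr * tr / det >= (r + 1) ** 2 / r

print(is_edge_like(10.0, 10.0, 0.0))   # equal curvatures (corner-like): False, keep
print(is_edge_like(100.0, 1.0, 0.0))   # one curvature dominates (edge): True, reject
```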
S103, extracting color features from the target standing tree image based on the ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor.
Specifically, after the standing tree feature points are detected by using the SIFT algorithm, descriptors need to be constructed for the feature points to determine the corresponding relationship between the feature points in the matching stage. Because the stumpage leaves have similar structures, the feature descriptors constructed by the SIFT algorithm have insufficient description capacity on stumpage feature points, and the mismatching rate is high. Therefore, the method and the device use the ultragreen operator to extract the standing tree color features and combine the RGB color space to construct the standing tree feature point descriptor so as to enhance the description capability of the standing tree feature points with similar structures.
And S104, performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
Specifically, determining the corresponding relation of feature points in the two images is the core of feature point matching. The nearest neighbor distance ratio method can reduce the mismatching rate during matching, but due to the complexity of the standing tree, even with the increased description capability of the descriptor, the deletion of correct matching pairs still cannot be avoided; therefore standing tree feature point matching is completed through a geometric position constraint on the basis of the initial matching performed by the nearest neighbor distance ratio method. Compared with the SIFT algorithm, which uses only gray information and ignores color information, resulting in insufficient distinguishability of the standing tree feature points, this method enhances the color features of the standing tree, improves the description capability of the feature point descriptors, and improves the matching rate of the standing tree feature points.
Optionally, referring to fig. 4, step S103 specifically includes:
and S1031, representing the target standing tree image by R, G, B color components.
S1032, replacing color information expressed by the G channel with the color features extracted by the super-green operator.
Specifically, on the basis that the target standing tree image is represented by R, G, B color components, the color information of the G channel is replaced by the color features extracted by the ultragreen operator, the proportion of the G component is increased to a certain extent, and the color information of green leaves is enhanced.
Further, the substitution formula for replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator is as follows:
R' = R, G' = 2G - R - B, B' = B
wherein R', G' and B' are the color information of the three channels of the target standing tree image after replacement, and 2G - R - B is the ultragreen-operator color information that replaces the G channel.
The color information expressed by the G channel is replaced by the color feature extracted by the ultragreen operator, so that the green feature of the stumpage is enhanced, the method can be effectively applied to image processing of the stumpage, the description capability of the feature points is improved, and the accuracy of feature point processing is improved.
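A minimal sketch of the channel substitution, assuming a float RGB image with channels ordered (R, G, B); the function name and test pixels are illustrative, not from the patent.

```python
import numpy as np

def supergreen_transform(img):
    """Replace the G channel with the excess-green feature 2G - R - B.

    img: H x W x 3 float array with channels ordered (R, G, B).
    Returns the (R', G', B') image with R' = R, G' = 2G - R - B, B' = B.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out = img.copy()
    out[..., 1] = 2 * g - r - b   # ultragreen operator boosts green foliage
    return out

# A green leaf pixel gains contrast; a gray bark pixel maps to 0.
leaf = np.array([[[60.0, 180.0, 40.0]]])
bark = np.array([[[120.0, 120.0, 120.0]]])
print(supergreen_transform(leaf)[0, 0, 1])   # 2*180 - 60 - 40 = 260.0
print(supergreen_transform(bark)[0, 0, 1])   # 0.0
```

The example illustrates why the substitution enhances green-leaf information: achromatic pixels collapse toward zero in the new G' channel while green pixels are amplified.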
And S1033, calculating the gradient amplitude and the gradient argument of each pixel point in the three transformed channels.
In specific implementation, the following formulas are adopted:
m(x, y, c) = sqrt( (L(x+1, y, c) - L(x-1, y, c))^2 + (L(x, y+1, c) - L(x, y-1, c))^2 ), c = 0, 1, 2
θ(x, y, c) = tan^-1( (L(x, y+1, c) - L(x, y-1, c)) / (L(x+1, y, c) - L(x-1, y, c)) ), c = 0, 1, 2
wherein L(x, y, 0), L(x, y, 1) and L(x, y, 2) are the pixel values of the image on the three channels R', G' and B'; m(x, y, 0), m(x, y, 1) and m(x, y, 2) are the gradient amplitudes obtained on the three channels; and θ(x, y, 0), θ(x, y, 1) and θ(x, y, 2) are the gradient arguments obtained on the three channels.
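The per-channel gradient computation can be sketched with central differences; using np.arctan2 rather than a plain tan^-1 so that the quadrant is handled robustly is an implementation choice of this sketch, not something the patent specifies.

```python
import numpy as np

def gradient_mag_and_angle(L):
    # Central differences d_x = L(x+1, y) - L(x-1, y) and
    # d_y = L(x, y+1) - L(x, y-1), computed on the interior of one channel.
    dx = L[1:-1, 2:] - L[1:-1, :-2]
    dy = L[2:, 1:-1] - L[:-2, 1:-1]
    m = np.sqrt(dx**2 + dy**2)     # gradient amplitude
    theta = np.arctan2(dy, dx)     # gradient argument, in radians
    return m, theta

# A horizontal ramp has a purely x-directed gradient.
ramp = np.tile(np.arange(5.0), (5, 1))
m, theta = gradient_mag_and_angle(ramp)
print(m[0, 0], theta[0, 0])   # 2.0 0.0
```

In a full descriptor pipeline this function would be applied once per transformed channel (R', G' and B') before the orientation histograms are counted.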
S1034, determining the main directions of the feature points of the three channels and the feature point descriptors to form the feature point descriptors.
Specifically, as shown in fig. 5, the main direction of the feature point in each channel is determined according to the gradient amplitude and gradient argument of each pixel point in the three channels determined in step S1033. In the range of 0-360 degrees, every 10 degrees is taken as one direction, giving 36 directions in total; taking the feature point as the center, a gradient direction histogram is counted within the neighborhood radius according to the gradient amplitudes and arguments, and the direction with the largest peak among the 36 directions is the main direction of the feature point. The feature point descriptor of each channel is then generated: the region is rotated to the main direction, the rotated region is divided into 4 x 4 sub-regions, each sub-region is evenly divided into 8 directions, and a gradient direction histogram is counted, giving a 4 x 4 x 8 = 128-dimensional feature point descriptor. Finally, the 128-dimensional descriptors generated on the three color channels R', G' and B' are concatenated to form the 384-dimensional standing tree feature point descriptor.
Optionally, referring to fig. 6, step S104 specifically includes:
s1041, performing initial matching on the feature points of the two target standing tree images, and determining an initial matching pair.
Specifically, in this step, the feature points of two target standing tree images are initially matched, a nearest neighbor distance ratio method is adopted for initial matching, the euclidean distance is adopted as the distance measurement of the feature point descriptor in the image pair, and if the euclidean distance (d) of the nearest neighbor of the feature point is adopted for the initial matching1) Euclidean distance (d) from next neighbor2) And if the ratio is smaller than or equal to the threshold r, successfully matching the second target standing tree image with the feature points in the closest distance in the target image, otherwise, failing to match. Specifically, the following formula:
it should be noted that the threshold r specifically related to in the above formula is set to 0.8 in the present application, and 0.8 is a recommended threshold of the SIFT algorithm, and the setting of the threshold r may be specifically adjusted adaptively according to the actual situation.
S1042, based on RANSAC algorithm, the initial matching pair is used for iterative computation of a transformation matrix P, and a transformation model of the target standing tree image matching pair is obtained.
Specifically, the RANSAC algorithm uses the initial matching pairs to iteratively compute the transformation matrix P, thereby obtaining a transformation model of the standing tree image pair; the transformation model represents the geometric position relationship of the feature points in the standing tree image pair.
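A minimal RANSAC loop of the kind described can be sketched as follows. For illustration a 2D affine transform stands in for the transformation matrix P (the patent text does not specify the model class), so the sample size of 3 and the helper names `fit_affine` and `ransac_affine` are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform (2x3 matrix) from point pairs."""
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3 x 2 solution
    return M.T                                     # 2 x 3 affine matrix

def ransac_affine(src, dst, iters=200, thresh=3.0, seed=0):
    """RANSAC: fit on 3 random pairs per iteration, keep the model with
    the most inliers, then refit on the full inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

With a majority of consistent matches, the outlier pairs are excluded from the final model fit.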
S1043, searching, according to the feature points of the first target standing tree image, the transformation error ej of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Specifically, a matching point corresponding to each feature point of the first target standing tree image is determined in the second target standing tree image according to the transformation error ej. Feature points of the second target standing tree image that were missed by the nearest neighbor distance ratio method in the initial matching are reconfirmed as corresponding matching pairs through calculation of the transformation errors.
The transformation errors ej of the first three nearest neighbor matching points are calculated in turn and compared with the threshold dt; if a transformation error ej is less than the threshold dt, the corresponding matching point of the feature point in the first target standing tree image is confirmed. This feature point matching method based on geometric position constraint can find many correct matching pairs that are ignored by the nearest neighbor distance ratio method, and effectively increases the number of correct feature point pairs without introducing a large number of mismatching pairs, so that the correct matching rate is further improved.
Optionally, step S1043 specifically includes:
calculating the transformation error e1 of the first nearest neighbor matching point bi1 corresponding to the feature point ai in the second target standing tree image; if the transformation error e1 of the matching point bi1 is less than the error threshold dt, confirming that the matching point bi1 is a correct matching point;
if the transformation error e1 of the matching point bi1 is greater than or equal to the error threshold dt, calculating the transformation error e2 of the second nearest neighbor matching point bi2; if the transformation error e2 of the matching point bi2 is less than the error threshold dt, confirming that the matching point bi2 is a correct matching point;
if the transformation errors e1 and e2 of the matching points bi1 and bi2 are both greater than or equal to the error threshold dt, calculating the transformation error e3 of the third nearest neighbor matching point bi3; if the transformation error e3 of the matching point bi3 is less than the error threshold dt, confirming that the matching point bi3 is a correct matching point;
if the transformation errors e1, e2 and e3 of the matching points bi1, bi2 and bi3 are all greater than or equal to the error threshold dt, the feature point ai has no corresponding matching point.
Specifically, referring to fig. 7, the transformation errors ej of the first three nearest neighbor matching points bi1, bi2 and bi3 are calculated in turn; once a transformation error ej less than the error threshold dt occurs, the calculation for the subsequent neighbor matching points is stopped, and the neighbor matching point whose transformation error ej satisfies the condition of being less than the error threshold dt is taken as the matching point corresponding to the feature point in the first target standing tree image. The aim of this step is to re-determine the points missed in the initial matching as correct matching points through the geometric position constraint, avoiding the loss of matching points and improving the matching rate.
It should be noted that, in the present application, the error threshold dt is set to 10. As verified by multiple experiments, a value of 10 for the error threshold dt increases the number of correct matching pairs while the accuracy remains high. The error threshold dt may be adapted to the matching object.
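The re-matching rule of step S1043 with dt = 10 can be sketched as follows. This is a minimal illustration assuming an affine transformation matrix; the function name `rematch` and the assumption that the candidates are already ordered by descriptor distance are not from the application.

```python
import numpy as np

def rematch(a, candidates, M, d_t=10.0):
    """Check the three nearest candidate points in turn and accept the
    first whose transformation error ej = ||M(a) - bj|| is below d_t.
    Returns the index of the accepted candidate, or None."""
    pred = M[:, :2] @ a + M[:, 2]          # feature point mapped by the model
    for j, b in enumerate(np.asarray(candidates, dtype=float)[:3]):
        if np.linalg.norm(pred - b) < d_t:  # stop at the first acceptable ej
            return j
    return None
```

A candidate 5 pixels from the predicted position is accepted even if a closer-in-descriptor-space candidate lies 50 pixels away, which is exactly the case the geometric constraint is meant to recover.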
Optionally, the transformation error is calculated by the following formula:

ej = ||P·ai − bij||, j = 1, 2, 3

wherein ej is the transformation error, P·ai is the feature point ai of the first target standing tree image transformed by the transformation matrix P, and bij (j = 1, 2, 3) are the first three nearest neighbor matching points of the feature point ai in the second target standing tree image.
In one embodiment, there is provided a standing timber feature point matching device 20, comprising:
the image acquisition module 201 is configured to acquire a standing tree image and separate a target standing tree image from a background image in the standing tree image;
the feature point detection module 202 is configured to detect a standing tree feature point based on a SIFT algorithm according to the target standing tree image;
the descriptor generation module 203 is configured to extract color features in the target standing tree image based on an ultragreen operator, replace color information expressed by a G channel with the color features extracted by the ultragreen operator, and form a standing tree descriptor;
and the matching module 204 is configured to perform initial matching based on a nearest neighbor distance ratio method, and determine a correct matching point according to geometric position constraint.
Optionally, the descriptor generating module 203 is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
Optionally, in the descriptor generation module 203, the formula for replacing the color information expressed by the G channel with the color feature extracted by the ultragreen operator is as follows:

R' = R, G' = 2G − R − B, B' = B

wherein R', G' and B' are the color information of the three channels of the target standing tree image after replacement, and 2G − R − B is the color information of the ultragreen operator that replaces the G channel.
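The channel replacement can be sketched as follows. This is a minimal NumPy illustration; the function name is an assumption, and signed 32-bit arithmetic is used because 2G − R − B can fall outside the 8-bit range of a typical image array.

```python
import numpy as np

def ultragreen_replace(img):
    """Replace the G channel with the excess-green (ultragreen) index
    2G - R - B, keeping R and B unchanged: (R', G', B') = (R, 2G-R-B, B)."""
    rgb = img.astype(np.int32)             # avoid uint8 overflow/underflow
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out = rgb.copy()
    out[..., 1] = 2 * g - r - b            # ultragreen channel
    return out
```

For a pixel (R, G, B) = (10, 50, 20) the replaced channel is 2·50 − 10 − 20 = 70, so green vegetation pixels get a large positive value while gray background pixels stay near zero.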
Optionally, the matching module 204 is specifically configured to:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, searching the transformation error ej of the nearest neighbor matching points in the second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
Optionally, the matching module 204 is specifically further configured to:
calculating the transformation error e1 of the first nearest neighbor matching point bi1 corresponding to the feature point ai in the second target standing tree image; if the transformation error e1 of the matching point bi1 is less than the error threshold dt, confirming that the matching point bi1 is a correct matching point;
if the transformation error e1 of the matching point bi1 is greater than or equal to the error threshold dt, calculating the transformation error e2 of the second nearest neighbor matching point bi2; if the transformation error e2 of the matching point bi2 is less than the error threshold dt, confirming that the matching point bi2 is a correct matching point;
if the transformation errors e1 and e2 of the matching points bi1 and bi2 are both greater than or equal to the error threshold dt, calculating the transformation error e3 of the third nearest neighbor matching point bi3; if the transformation error e3 of the matching point bi3 is less than the error threshold dt, confirming that the matching point bi3 is a correct matching point;
if the transformation errors e1, e2 and e3 of the matching points bi1, bi2 and bi3 are all greater than or equal to the error threshold dt, the feature point ai has no corresponding matching point.
Optionally, in the matching module 204, the calculation formula of the transformation error is:

ej = ||P·ai − bij||, j = 1, 2, 3

wherein ej is the transformation error, P·ai is the feature point ai of the first target standing tree image transformed by the transformation matrix P, and bij (j = 1, 2, 3) are the first three nearest neighbor matching points of the feature point ai in the second target standing tree image.
The standing tree feature point matching device 20 provided in the embodiment of the present application and the standing tree feature point matching method described above adopt the same inventive concept, and can obtain the same beneficial effects, which are not described herein again.
Based on the same inventive concept as the standing tree feature point matching method, the embodiment of the present application further provides an electronic device 30, as shown in fig. 5, the electronic device 30 may include a processor 301 and a memory 302.
The Processor 301 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; the computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit it; while the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and shall be covered by the scope of the claims of the present invention.
Claims (10)
1. A standing tree feature point matching method is characterized by comprising the following steps:
obtaining a standing tree image, and separating a target standing tree image from a background image in the standing tree image;
detecting standing tree feature points according to the target standing tree image based on an SIFT algorithm;
extracting color features in the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and performing initial matching based on a nearest neighbor distance ratio method, and determining a correct matching point according to geometric position constraint.
2. The method of claim 1, wherein the extracting color features in the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor comprises:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
3. The method of claim 2, wherein the substitution formula for the color feature extracted by the ultragreen operator to replace the color information expressed by the G channel is:

R' = R, G' = 2G − R − B, B' = B

wherein R', G' and B' are the color information of the three channels of the target standing tree image after replacement, and 2G − R − B is the color information of the ultragreen operator that replaces the G channel.
4. The method of claim 1, wherein the initial matching based on nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint comprises:
performing initial matching on the characteristic points of the two target standing tree images to determine an initial matching pair; the two target standing tree images comprise a first target standing tree image and a second target standing tree image;
based on RANSAC algorithm, using the initial matching pair to iteratively calculate a transformation matrix P to obtain a transformation model of the matching pair of the target standing tree image;
according to the feature points of the first target standing tree image, searching the transformation error ej of the nearest neighbor matching points in a second target standing tree image, and determining the matching points corresponding to the standing tree feature points.
5. The method according to claim 4, wherein the searching, according to the feature points of the first target standing tree image, the transformation error ej of the nearest neighbor matching points in the second target standing tree image and determining the matching points corresponding to the standing tree feature points comprises:
calculating the transformation error e1 of the first nearest neighbor matching point bi1 corresponding to the feature point ai in the second target standing tree image; if the transformation error e1 of the matching point bi1 is less than the error threshold dt, confirming that the matching point bi1 is a correct matching point;
if the transformation error e1 of the matching point bi1 is greater than or equal to the error threshold dt, calculating the transformation error e2 of the second nearest neighbor matching point bi2; if the transformation error e2 of the matching point bi2 is less than the error threshold dt, confirming that the matching point bi2 is a correct matching point;
if the transformation errors e1 and e2 of the matching points bi1 and bi2 are both greater than or equal to the error threshold dt, calculating the transformation error e3 of the third nearest neighbor matching point bi3; if the transformation error e3 of the matching point bi3 is less than the error threshold dt, confirming that the matching point bi3 is a correct matching point;
6. The method of claim 5, wherein the transformation error is calculated by the following formula:

ej = ||P·ai − bij||, j = 1, 2, 3

wherein ej is the transformation error, P·ai is the feature point ai of the first target standing tree image transformed by the transformation matrix P, and bij (j = 1, 2, 3) are the first three nearest neighbor matching points of the feature point ai in the second target standing tree image.
7. A standing tree feature point matching device is characterized by comprising:
the image acquisition module is used for acquiring a standing tree image and separating a target standing tree image from a background image in the standing tree image;
the feature point detection module is used for detecting the standing tree feature points according to the target standing tree image based on an SIFT algorithm;
the descriptor generation module is used for extracting color features in the target standing tree image based on an ultragreen operator, replacing the color information expressed by the G channel with the color features extracted by the ultragreen operator, and forming a standing tree descriptor;
and the matching module is used for carrying out initial matching based on a nearest neighbor distance ratio method and determining a correct matching point according to geometric position constraint.
8. The apparatus of claim 7, wherein the descriptor generating module is specifically configured to:
representing the target standing tree image with R, G, B color components;
replacing color information expressed by the G channel with color features extracted by the ultragreen operator;
calculating the gradient amplitude and gradient argument of each pixel point in the three channels after transformation;
and determining the main directions of the feature points and the feature point descriptors of the three channels to form the feature point descriptors.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110810361.2A CN113591949A (en) | 2021-07-19 | 2021-07-19 | Standing tree feature point matching method, device, equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110810361.2A CN113591949A (en) | 2021-07-19 | 2021-07-19 | Standing tree feature point matching method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113591949A true CN113591949A (en) | 2021-11-02 |
Family
ID=78247906
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110810361.2A Pending CN113591949A (en) | 2021-07-19 | 2021-07-19 | Standing tree feature point matching method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113591949A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695498A (en) * | 2020-06-10 | 2020-09-22 | 西南林业大学 | Wood identity detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||