CN116416443A - Stolen tree matching method and system based on perspective transformation and deep learning - Google Patents


Info

Publication number
CN116416443A
CN116416443A (application CN202310248630.XA)
Authority
CN
China
Prior art keywords
image
tree
corner
perspective transformation
stump
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310248630.XA
Other languages
Chinese (zh)
Inventor
崔世林
田斐
崔玉连
张丹
张宇阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Institute of Technology
Original Assignee
Nanyang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Institute of Technology filed Critical Nanyang Institute of Technology
Priority to CN202310248630.XA priority Critical patent/CN116416443A/en
Publication of CN116416443A publication Critical patent/CN116416443A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/08 - Learning methods
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/188 - Vegetation

Abstract

The invention provides a stolen tree matching method and system based on perspective transformation and deep learning. A corner detection auxiliary device is placed on the tree cross-section, images are captured with a smartphone, and distortion correction, checkerboard detection, corner detection, corner sorting, perspective transformation, contour extraction, diameter measurement, diameter comparison and contour comparison are then carried out in turn, providing clear technical support for deciding whether two tree cross-sections belong to the same tree. Using a neural network to assist corner detection yields roughly a 30-fold speed-up and removes any requirements on shooting posture, angle and position. The corner sorting method still recovers an accurate perspective transformation matrix even when corners are missing over large areas, along edges, or at vertex positions. A neural network extracts the stump contour and the long and short diameters are measured; for images whose diameters match, the two cross-sections are judged by eye to belong, or not, to the same tree after mirroring and rotating the images.

Description

Stolen tree matching method and system based on perspective transformation and deep learning
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a stolen tree matching method and system based on perspective transformation and deep learning.
Background
When forest police track stolen trees, they typically rely on checkpoint monitoring, stakeouts at wood processing plants, and similar measures. For suspicious timber found this way, there is no technical means to confirm whether it is the stolen wood, which hinders both the recovery of stolen trees and the deterrence of offenders. Measuring and comparing the suspicious trees is therefore an important technical aid to judging whether they are stolen goods. However, measuring trunk size and contour currently requires expensive instruments such as three-dimensional scanners and laser measuring devices. These are inconvenient to carry and costly, handle ambient-light interference poorly when used outdoors, and some also require spraying fluorescent agents, which pollutes the environment and damages the trees.
What is needed, therefore, is a portable, simple and low-cost method and system for matching stolen tree sections.
Disclosure of Invention
Aiming at the above technical problems, one object of one mode of the invention is to provide a stolen tree matching method based on perspective transformation and deep learning: a corner detection auxiliary device is placed on the tree, the stolen tree stump and the target tree cross-section are photographed, and distortion correction, detection of the corner detection auxiliary device, corner detection, corner sorting, perspective transformation, contour extraction, diameter measurement, diameter comparison and contour comparison are carried out in sequence, simply and effectively providing clear technical support for whether two tree cross-sections belong to the same tree.
According to the invention, the corner detection auxiliary device is placed on the tree cross-section as a reference object; the device is first located by the neural network and corner detection is then performed within it, so corner detection and perspective transformation can be carried out much more rapidly, improving detection efficiency.
One object of one mode of the invention is to provide a corner sorting method that tolerates large-area corner loss, missing edge corners and missing vertex corners, giving it strong robustness and application value.
One object of one mode of the invention is to provide a stolen tree matching system based on perspective transformation and deep learning that needs no expensive instruments such as three-dimensional scanners or laser measuring devices, resists light interference well, does not damage the tree, and simply and effectively provides clear technical support for whether two tree cross-sections belong to the same tree.
Note that the description of these objects does not prevent the existence of other objects. Not all of the above objects need be achieved in one embodiment of the present invention. Other objects than the above objects can be extracted from the description of the specification, drawings, and claims.
The present invention achieves the above technical object by the following means.
A stolen tree matching method based on perspective transformation and deep learning comprises the following steps:
Step S1, obtaining a stump image: placing the corner detection auxiliary device on the cross-section of the stump and shooting to obtain a complete stump image including the corner detection auxiliary device;
step S2, acquiring a target image: placing the corner detection auxiliary device on the cross section of the target tree, and shooting to obtain a full view of the cross section of the tree comprising the corner detection auxiliary device;
step S3, preliminary image processing: performing distortion correction on the images acquired in the step S1 and the step S2;
step S4, neural network auxiliary detection of corner points: classifying the stump image and the target image processed in the step S3 by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking, detecting the corner points of the corner detection auxiliary device area, and sequencing the corners;
step S5, perspective transformation: performing least square perspective transformation by using the ordered corner points to obtain a perspective transformation matrix, and performing perspective transformation on the stump image and the target image by using the matrix;
step S6, contour extraction: classifying the perspective-transformed stump image and target image with a deep learning semantic segmentation neural network into a corner detection auxiliary device area, a tree cross-section area and a background area, filling holes in the tree cross-section area, and extracting the contours of the stump image and the target image respectively after edge smoothing;
step S7, diameter measurement: in the stump image and in the target image respectively, finding the two contour points with the maximum mutual distance; this distance is defined as the long diameter; a perpendicular is then drawn through the midpoint of the long diameter, and the segment between its two intersection points with the contour is defined as the short diameter;
step S8, diameter comparison: comparing the long and short diameters of the stump image with the long and short diameters of the target image respectively; if the error is within a preset threshold, proceed to the next step; if it is outside the preset threshold, the target tree and the stump are considered not to belong to the same tree and matching stops;
step S9, contour comparison: mirroring and rotating the perspective-transformed stump image and target image and comparing the contours; if the contours can be made to coincide under mirroring and rotation, the target tree and the stump are considered to belong to the same tree; if they cannot, they are not the same tree.
In the above scheme, the corner detection auxiliary device is a checkerboard.
In the above scheme, the checkerboard is machined to a precision better than 0.1 mm.
In the scheme, the back of the checkerboard is provided with the sharp part, and the sharp part is inserted into the tree to fix the checkerboard when the checkerboard is placed on the cross section of the tree.
In the above scheme, the neural network in step S4 is a DeepLabV3+ deep learning semantic segmentation network with an InceptionResNetV2 backbone; the network has two classes, one being the corner detection auxiliary device and the other the background; the labeling tool is MATLAB's ImageLabeler, the target checkerboard is labeled with a polygonal frame, and the rest is labeled as background.
In the above scheme, the neural network input in step S4 is a three-channel RGB image; during training, random rotation, scaling, flipping and translation are applied to the image samples, training is computed on a CPU, and the ADAM method is used for gradient descent.
In the above scheme, the corner sorting algorithm in step S4 specifically includes the following steps:
step S4.1: let T denote the corner point set; find 4 adjacent points of T and put them into correspondence with the four vertices of a square of side length Q; obtain the perspective transformation matrix P between these 4 point pairs according to the perspective transformation rules, and apply P to all corners in T to obtain a new point set, denoted S;
the method comprises the steps of firstly calculating a histogram of the distances between the points in the point set T, and then selecting the maximum value V1 and the next maximum value V2 of the histogram. Finding the left upper corner vertex of the point set T, then finding two points near the points V1 and V2 respectively, and then finding one point which is closest to the left upper corner vertex and is other than the two points, wherein the four points form four adjacent points of the point set T;
step S4.2: the first point at the upper left corner of S is labelled 0. Starting from that point, with search step Q, search within the δ-neighbourhood of the target position one step away for a corner; if one exists, take the corner closest to the target position as the target point and increment the label by 1; if none exists, set that point's coordinates to NULL and still increment the label by 1. This search is repeated, numbering each position whether a corner is found there or not, until the last corner point.
Step S4.3: sort the uniformly spaced corners of an ideal 2N×N checkerboard, starting from the top-left corner and proceeding rightwards and downwards; since these corners are uniformly spaced, no special sorting method is needed. In the invention N is 6;
step S4.4: the sorted points from step S4.2 and those from step S4.3 are placed in one-to-one correspondence, each pair sharing the same index; if the coordinates in some pair contain a NULL value, that pair does not participate in the subsequent least-squares perspective transformation; all remaining points are collected for the least-squares perspective transformation.
In the above scheme, the contour extraction in step S6 uses a deep learning semantic segmentation network with an InceptionResNetV2 backbone; the network has three classes: tree cross-section, checkerboard, and background. The labeling tool is MATLAB's ImageLabeler; the tree cross-section is labeled with pixel labels, the target checkerboard with a polygonal frame, and the rest is pixel-labeled as background. The network input is a three-channel RGB image. To enrich the training samples, random rotation, scaling, flipping and translation are applied; training is computed on a CPU, and the ADAM method is used for gradient descent.
In the above scheme, the preset error threshold for the diameter comparison in step S8 is 20%.
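As an illustration of this threshold, a minimal check might look as follows; the patent does not state the exact error formula, so the relative error against the larger of each pair of measurements is assumed here, and the function name and signature are hypothetical:

```python
def diameters_match(long_a, short_a, long_b, short_b, tol=0.20):
    """True when both diameter pairs agree within a relative tolerance.

    Hypothetical helper illustrating the 20% threshold of step S8; the
    relative error against the larger measurement of each pair is an
    assumption, not taken from the patent."""
    def rel_err(a, b):
        return abs(a - b) / max(a, b)
    return rel_err(long_a, long_b) <= tol and rel_err(short_a, short_b) <= tol
```

If either the long or the short diameters disagree by more than the tolerance, matching stops before the contour comparison of step S9.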
The stolen tree matching system based on perspective transformation and deep learning comprises a mobile terminal and an angular point detection auxiliary device, wherein the mobile terminal comprises android APP software, and the android APP software comprises an image acquisition module, an image correction module, an angular point detection module, a perspective transformation module, a contour extraction module, a diameter measurement module, a diameter comparison module and a contour comparison module; the corner detection auxiliary device is a checkerboard;
the image acquisition module is used for placing the corner detection auxiliary device on the stump cross-section and the target tree cross-section and shooting to obtain full views of the stump and of the tree cross-section, each including the corner detection auxiliary device;
the image correction module is used for carrying out distortion correction on the image acquired by the image acquisition module;
the corner detection module is used for classifying the stump image and the target image processed by the image correction module by using a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking the corner detection auxiliary device area, detecting the corners of the corner detection auxiliary device area, and sequencing the corners;
the perspective transformation module is used for carrying out least square perspective transformation by utilizing the ordered corner points to obtain a perspective transformation matrix, and carrying out perspective transformation on the stump image and the target image by utilizing the matrix;
the contour extraction module is used for classifying the stump image and the target image after perspective transformation by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area, a tree cross-section area and a background area, filling holes in the tree cross-section area, and respectively extracting the contour of the stump image and the contour of the target image after edge smoothing;
the diameter measurement module is used for finding two contour points with the maximum distance between the stump image and the target image respectively, determining the two contour points as long diameters, making a vertical line at the center point of the diameter, and determining a connecting line between the vertical line and two intersection points of the contour as short diameters;
the diameter comparison module is used for respectively comparing the long diameter and the short diameter of the stump image with the long diameter and the short diameter of the target image, carrying out contour comparison if the error is within a preset threshold value, and stopping matching if the error is outside the preset threshold value, wherein the target tree and the stump are not the same tree;
the contour comparison module is used for mirroring and rotating the perspective transformed stump image and the target image to compare the contours, if the contours coincide in mirroring and rotating, the target tree and the stump belong to the same tree, and if the contours do not coincide, the target tree and the stump do not belong to the same tree.
Compared with the prior art, the invention has the beneficial effects that:
according to one mode of the invention, the corner detection auxiliary device is arranged on the tree as a reference object, the corner detection auxiliary device is detected through the neural network, and then the corner on the corner detection auxiliary device is detected.
According to one mode of the invention, the detected corners are sorted; the sorting method tolerates large-area corner loss, missing edge corners and missing vertex corners, and missing corners simply do not participate in the least-squares perspective transformation calculation, giving the method strong application value and robustness.
According to one mode of the invention, the stolen tree stump and the target tree cross section are shot, and distortion correction, corner detection auxiliary device detection, corner detection, perspective transformation, contour extraction, diameter measurement, diameter comparison and contour comparison are sequentially carried out after images are acquired, so that clear technical support can be simply and effectively provided for whether two tree cross sections belong to the same tree.
According to one mode of the invention, tree matching can be performed with android APP software alone, without expensive instruments such as three-dimensional scanners or laser measuring devices; resistance to light interference is high and the tree is not damaged.
Note that the description of these effects does not hinder the existence of other effects. One embodiment of the present invention does not necessarily have all of the above effects. Effects other than the above are obvious and can be extracted from the description of the specification, drawings, claims, and the like.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
Fig. 2 is a cross-sectional image of a stump and tree taken in accordance with an embodiment of the present invention.
FIG. 3 is a perspective transformation of a stump and tree cross-sectional image according to an embodiment of the present invention.
Fig. 4 is a comparison of tree sections according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "front", "rear", "left", "right", "upper", "lower", "axial", "radial", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element in question must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
Example 1
Fig. 1 shows a preferred embodiment of the stolen tree matching method based on perspective transformation and deep learning. The method comprises the following steps:
Step S1, obtaining a stump image: placing the corner detection auxiliary device on the cross-section of the stump and shooting to obtain a complete stump image including the corner detection auxiliary device;
step S2, acquiring a target image: placing the corner detection auxiliary device on the cross-section of the target tree and shooting to obtain a full view of the tree cross-section including the device. According to this embodiment, the corner detection auxiliary device is preferably a rectangular checkerboard of 2N×N squares, where N can be 3, 4, ..., 10 or another natural number; in the invention N is 6. The physical size of the checkerboard must be accurate, with precision controlled to 0.1 mm;
the back of the checkerboard is provided with sharp parts, and when the checkerboard is placed on the cross section of a tree, the sharp parts are inserted into the tree to fix the checkerboard. When shooting, the checkerboard is placed in the middle area of the tree section, an intelligent mobile phone is adopted to collect images on site, the shooting angle and the shooting gesture are not required to be focused, and the focal length is not required to be adjusted manually; in one embodiment of the present invention, the captured image is as shown in fig. 2;
step S3, preliminary image processing: performing distortion correction on the images acquired in steps S1 and S2; according to this embodiment, the distortion correction preferably uses Zhang Zhengyou's checkerboard calibration method;
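Zhang's method estimates the camera intrinsics and radial distortion coefficients from several checkerboard views; the correction step then inverts the radial model. A minimal numpy sketch of that inversion is shown below, assuming a simple two-coefficient model x_d = x_u(1 + k1·r² + k2·r⁴) on normalized coordinates; in practice a library routine (e.g. OpenCV's undistortion) would be used, so this is only an illustration of the underlying math:

```python
import numpy as np

def undistort_points(pts, k1, k2, iters=10):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4) by
    fixed-point iteration on normalized image coordinates.

    Sketch of the correction step only; Zhang's method additionally
    estimates the intrinsics and the coefficients k1, k2 themselves."""
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        # radius is measured on the current undistorted estimate
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2
        und = pts / scale
    return und
```

The fixed-point iteration converges quickly for the small distortions typical of smartphone lenses.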
step S4, neural network auxiliary detection of corner points: classifying the stump image and the target image processed in the step S3 by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking, detecting the corner points of the corner detection auxiliary device area, and sequencing the corners;
according to this embodiment, the neural network is preferably a DeepLabV3+ deep learning semantic segmentation network with an InceptionResNetV2 backbone; the network has two classes, one being the corner detection auxiliary device and the other the background; the labeling tool is MATLAB's ImageLabeler, the target checkerboard is labeled with a polygonal frame, and the rest is labeled as background;
in step S4 the neural network input is a three-channel RGB image; during training, random rotation, scaling, flipping and translation are applied to the image samples, training is computed on a CPU without GPU acceleration, and the ADAM method is used for gradient descent;
for corner detection, the trained model is loaded and the stump image is input; the algorithm detects the checkerboard and outputs the region where it lies, labelled 1, with the background region labelled 0;
because the checkerboard occupies only a small fraction of the whole stump image, applying corner detection directly to the full stump image would be very time-consuming; restricting it to the detected checkerboard region avoids this.
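The speed-up comes from cropping to the segmented region before running the corner detector. A minimal sketch, assuming the segmentation output is a binary mask as described above; the `margin` parameter is an assumed safety border, not from the patent:

```python
import numpy as np

def checkerboard_roi(mask, margin=10):
    """Bounding box (y0, y1, x0, x1) of the segmented checkerboard region.

    `mask` is the semantic segmentation output with the checkerboard
    labelled 1 and the background 0; corner detection is then run only on
    the crop image[y0:y1, x0:x1] rather than on the full stump image."""
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - margin, 0)
    x0 = max(xs.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, mask.shape[0])
    x1 = min(xs.max() + margin + 1, mask.shape[1])
    return y0, y1, x0, x1
```

Since corner detectors scale with image area, shrinking the search window from the full image to a crop a few percent of its size accounts for the order-of-magnitude speed-up claimed above.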
Step S5, perspective transformation: performing least-squares perspective transformation with the sorted corners to obtain a perspective transformation matrix, and applying the matrix to the stump image and the target image; an image photographed at an arbitrary angle and posture is thereby transformed into a front view. In one embodiment of the present invention, the perspective-transformed image is shown in fig. 3.
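The least-squares perspective transformation of step S5 can be sketched with the standard direct linear transform (DLT): each ordered corner pair contributes two linear equations in the eight homography parameters, and the over-determined system is solved via SVD. This is a generic reconstruction of the technique, not the patent's exact implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares perspective transform from >= 4 point pairs (DLT).

    With many ordered corner pairs the 8 homography parameters are
    over-determined; the null-space vector of the stacked equations
    (smallest singular vector) gives the least-squares solution."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pts):
    """Map 2-D points through homography h (homogeneous multiply + divide)."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
    return homo[:, :2] / homo[:, 2:3]
```

Corner pairs whose coordinates were marked NULL in step S4.4 are simply left out of `src`/`dst`, which is what makes the method robust to missing corners.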
Step S6, contour extraction: classifying the cut pile image and the target image after perspective transformation by adopting a deep learning semantic segmentation neural network, dividing the cut pile image and the target image into a corner detection auxiliary device area, a tree cross section area and a background area, filling holes in the tree cross section area, and respectively extracting outlines of the cut pile image and the target image after edge smoothing;
step S7, diameter measurement: in the stump image and in the target image respectively, finding the two contour points with the maximum mutual distance; this distance is defined as the long diameter; a perpendicular is drawn through the midpoint of the long diameter, and the segment between its two intersection points with the contour is defined as the short diameter;
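The diameter definitions of step S7 admit a direct brute-force sketch: scan all contour point pairs for the farthest pair (long diameter), then intersect the contour with the perpendicular through its midpoint (short diameter). The 2% band used to select contour points near the perpendicular is an assumed tolerance, not from the patent:

```python
import numpy as np

def measure_diameters(contour):
    """Long diameter = farthest contour point pair; short diameter = chord
    of the contour along the perpendicular through the long diameter's
    midpoint. O(n^2) scan, adequate for a few thousand contour points."""
    pts = np.asarray(contour, dtype=float)
    # farthest pair by pairwise squared distances
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    long_d = np.sqrt(d2[i, j])
    mid = (pts[i] + pts[j]) / 2.0
    axis = (pts[j] - pts[i]) / long_d
    perp = np.array([-axis[1], axis[0]])
    # coordinates in the (axis, perp) frame centred at the midpoint
    rel = pts - mid
    along = rel @ axis
    across = rel @ perp
    # contour points lying close to the perpendicular line (assumed band)
    near = np.abs(along) < 0.02 * long_d
    short_d = across[near].max() - across[near].min()
    return long_d, short_d
```

On an ellipse-shaped contour this recovers the major and minor axes, matching the long/short diameter definitions above.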
step S8, diameter comparison: comparing the long and short diameters of the stump image with the long and short diameters of the target image respectively; if the error is within a preset threshold, proceed to the next step; if it is outside the preset threshold, the target tree and the stump are considered not to belong to the same tree and matching stops;
according to this embodiment, it is preferable that the automatic diameter measurement is checked by eye and any erroneous part corrected manually;
step S9, contour comparison: mirroring and rotating the perspective-transformed stump image and target image and comparing the contours; if the contours can be made to coincide under mirroring and rotation, the target tree and the stump are considered to belong to the same tree; if they cannot, they are not the same tree.
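The patent performs this final comparison visually. A numeric stand-in with the same invariances (rotation and mirroring) can be built from the contour's centroid-distance signature, matched over all circular shifts and under reversal; the binning resolution and tolerance below are assumptions for illustration:

```python
import numpy as np

def contours_coincide(contour_a, contour_b, n_bins=360, tol=0.05):
    """True when two closed contours coincide up to rotation and mirroring.

    Each contour is reduced to a signature of mean centroid distance per
    angular bin; rotation becomes a circular shift of the signature and
    mirroring becomes a reversal, so all shifts of both orderings are tried."""
    def signature(contour):
        pts = np.asarray(contour, dtype=float)
        c = pts.mean(axis=0)
        ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
        r = np.linalg.norm(pts - c, axis=1)
        bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        sig = np.zeros(n_bins)
        for b in range(n_bins):
            sel = bins == b
            sig[b] = r[sel].mean() if sel.any() else 0.0
        return sig
    sa, sb = signature(contour_a), signature(contour_b)
    scale = sa.mean()
    best = np.inf
    for cand in (sb, sb[::-1]):          # reversal covers mirroring
        for shift in range(n_bins):      # circular shift covers rotation
            best = min(best, np.abs(sa - np.roll(cand, shift)).mean())
    return best / scale < tol
```

Two sections of the same trunk should give nearly identical signatures after the best alignment, while unrelated sections leave a large residual at every shift.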
The corner ordering algorithm in step S4 specifically comprises the following steps:
step S4.1: the corner point set is represented by T, 4 adjacent points of the point set T are found and correspond to four vertexes of a square with the side length of Q, a perspective transformation matrix P between the 4 pairs of vertexes is obtained according to perspective transformation rules, and all the corner points in the T are subjected to perspective transformation by utilizing the transformation matrix P to obtain a new point set which is represented by S;
according to this embodiment, preferably, the four adjacent points of the point set T are searched as follows: first the histogram of pairwise distances between points in T is computed, and its maximum V1 and second maximum V2 are selected. The upper-left vertex of T is found; then one point is found at a distance of about V1 and one at a distance of about V2 from it, and finally one further point, other than these two, closest to the upper-left vertex; these four points constitute the four adjacent points of the point set T.
Step S4.2: the first point at the upper left corner of S is labeled 0. Starting from this point, with search step length Q, the δ-neighborhood of the next target position is searched for a corner point. If a corner point exists, the point closest to the target position is taken as the target point and the label is incremented by 1; if not, the coordinates of that point are set to NULL and the label is still incremented by 1. This search is repeated, labeling each found or missing point, until the last corner point. According to this embodiment, the search step Q is preferably a fixed value; Q is 80 pixels in this embodiment. The neighborhood size δ is typically 20% of Q; even if this parameter is set to 10% or 30% of Q, the effect of the algorithm is unaffected, so it is easy to determine. The advantage of this ordering method is that it runs stably even when corner points are missing over large areas, at edges, or at vertex positions, and it is therefore extremely robust;
step S4.3: the corner points in the standard corner detection auxiliary device are sorted; the sorting starting point is the upper-left corner point, and the sorting direction is rightward and then downward.
Step S4.4: the sorted points obtained in step S4.2 and those obtained in step S4.3 are placed in one-to-one correspondence, each pair sharing the same sequence number. If the coordinates in a pair contain a NULL value, that pair does not participate in the subsequent least-squares perspective transformation; all the remaining pairs participate in the least-squares perspective transformation.
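Steps S4.2–S4.4 amount to walking an ideal grid of pitch Q over the rectified point set and labeling either the nearest corner or NULL at each expected position. A minimal NumPy sketch under the embodiment's values (Q = 80 pixels, δ = 20% of Q); the upper-left-origin heuristic (smallest x + y) and searching from the ideal grid position, rather than stepping from the previously found point, are simplifying assumptions:

```python
import numpy as np

Q = 80            # search step in pixels (value used in the embodiment)
DELTA = 0.2 * Q   # neighbourhood radius delta, 20% of Q

def order_corners(S, rows, cols, q=Q, delta=DELTA):
    """Step S4.2 sketch: walk an ideal rows x cols grid of pitch q starting
    from the upper-left corner of S, labelling the nearest corner at each
    expected position, or None (the NULL label) if none lies within delta."""
    S = np.asarray(S, dtype=float)
    origin = S[np.argmin(S.sum(axis=1))]   # upper-left corner: smallest x + y
    ordered = []
    for r in range(rows):
        for c in range(cols):
            target = origin + np.array([c * q, r * q])
            d = np.linalg.norm(S - target, axis=1)
            k = np.argmin(d)
            ordered.append(S[k].copy() if d[k] <= delta else None)
    return ordered
```

Because every grid position receives either a point or None, the output indices line up one-to-one with the canonical ordering of step S4.3, which is exactly the pairing step S4.4 requires.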
The checkerboard adopted in one embodiment of the present invention has a size of 6×12, so at most 5×11 = 55 corner points can be detected, and the least-squares perspective transformation in step S5 adopts the following formula:
$$
\begin{pmatrix} a_{11}\\ a_{12}\\ a_{13}\\ a_{21}\\ a_{22}\\ a_{23}\\ a_{31}\\ a_{32} \end{pmatrix}
= \operatorname{pinv}
\begin{pmatrix}
u_{1} & v_{1} & 1 & 0 & 0 & 0 & -u_{1}x_{1} & -v_{1}x_{1}\\
0 & 0 & 0 & u_{1} & v_{1} & 1 & -u_{1}y_{1} & -v_{1}y_{1}\\
\vdots & & & & & & & \vdots\\
u_{55} & v_{55} & 1 & 0 & 0 & 0 & -u_{55}x_{55} & -v_{55}x_{55}\\
0 & 0 & 0 & u_{55} & v_{55} & 1 & -u_{55}y_{55} & -v_{55}y_{55}
\end{pmatrix}
\begin{pmatrix} x_{1}\\ y_{1}\\ \vdots\\ x_{55}\\ y_{55} \end{pmatrix}
$$
wherein a_{11}, a_{12}, …, a_{31}, a_{32} are the unknown variables and pinv denotes the pseudo-inverse; (u_1, v_1), …, (u_{55}, v_{55}) are the corner coordinates detected by the corner detection algorithm, ordered as determined in step S4.2; in the perspective transformation these points are called moving points. (x_1, y_1), …, (x_{55}, y_{55}) are the fixed-point coordinates, ordered as determined in step S4.3. If any of the 55 points has NULL coordinates, the corresponding rows on the right-hand side of the above formula are deleted when the perspective transformation matrix is actually computed.
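Assuming the standard direct-linear-transform layout implied by the formula (moving corners (u, v) mapped onto fixed corners (x, y), with a33 fixed to 1), the least-squares solve with NULL rows dropped can be sketched as follows; the function names are illustrative, not from the patent:

```python
import numpy as np

def lsq_perspective(moving, fixed):
    """Step S5 sketch: least-squares perspective matrix from point pairs.
    moving: detected corner coordinates (u, v), with None for NULL entries;
    fixed:  reference coordinates (x, y) in the matching S4.3 order.
    Rows belonging to NULL corners are dropped, as the text describes."""
    rows, rhs = [], []
    for m, f in zip(moving, fixed):
        if m is None:              # NULL corner: its two rows are deleted
            continue
        (u, v), (x, y) = m, f
        rows.append([u, v, 1, 0, 0, 0, -u * x, -v * x])
        rows.append([0, 0, 0, u, v, 1, -u * y, -v * y])
        rhs.extend([x, y])
    a = np.linalg.pinv(np.array(rows)) @ np.array(rhs)
    return np.append(a, 1.0).reshape(3, 3)   # a33 is fixed to 1

def apply_h(H, p):
    """Apply a 3x3 perspective matrix to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With 55 point pairs the system has 110 equations for 8 unknowns, so the pseudo-inverse yields the least-squares fit; dropping a few NULL pairs leaves it heavily over-determined.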
The contour extraction of step S6 adopts a deep learning semantic segmentation neural network whose backbone is InceptionResNetV2. The network has three classes: tree cross-section, checkerboard, and background. The labeling tool is the ImageLabeler of MATLAB: the tree cross-section is labeled with a pixel label, the target checkerboard with a polygonal frame, and the remaining pixels are labeled as background. According to this embodiment, preferably, the labels corresponding to the tree cross-section and the checkerboard are set to 1 and the background label to 0; holes in the largest connected region with label 1 are filled, its edges are smoothed, and the contour of that region is then extracted;
the neural network input format is a three-channel RGB image. To enrich the training samples, random rotation, scaling, random flipping and random translation strategies are applied to the samples; training is performed on a CPU without GPU acceleration, and the ADAM method is used for gradient descent.
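A toy sketch of the augmentation strategy named above, using NumPy only. It is illustrative, not the patent's pipeline: 90°-step rotations and wrap-around translation stand in for the arbitrary-angle rotations and translations a real training framework would apply, and the shape is preserved only for square inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Sketch of training-time augmentation: random rotation (90-degree
    steps here), random horizontal flip, and random wrapping translation.
    img: (H, W, 3) RGB array; returns an augmented copy."""
    out = np.rot90(img, k=int(rng.integers(0, 4)), axes=(0, 1))
    if rng.random() < 0.5:
        out = out[:, ::-1]                     # random horizontal flip
    dy, dx = rng.integers(-10, 11, size=2)     # random translation (wraps)
    return np.roll(out, (dy, dx), axis=(0, 1))
```

All three operations permute pixels without changing their values, which is easy to verify on a test image.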
According to this embodiment, preferably, the preset error threshold of the diameter comparison in step S8 is 20%.
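The 20% threshold of step S8 reduces to a relative-error check on both diameters. A minimal sketch; taking the error relative to the larger of the two diameters is an assumption not stated in the text:

```python
def diameters_match(stump, target, tol=0.20):
    """Step S8 sketch: compare (long, short) diameter pairs; the match
    proceeds only if both relative errors are within tol (20% here).
    Relative error is taken against the larger value (an assumption)."""
    return all(abs(s - t) / max(s, t) <= tol for s, t in zip(stump, target))
```

When this check fails, matching stops and the two sections are declared not to belong to the same tree, exactly as the step describes.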
According to this embodiment, preferably, during contour comparison the perspective-transformed stump image and target image are placed side by side, the second image is mirror-transformed and then rotated, and while it rotates the two images are visually checked for a match. If observation proves difficult, the rotation angle is adjusted and observation is repeated until a conclusion is reached.
As shown in fig. 4, the position of the slider in fig. 4 is adjustable between 0 and 360 and corresponds to the rotation angle of the right image. By adjusting the slider position, the rotation of the right image can be observed; as long as a suitable rotation angle is found that brings the right image into agreement with the left image, it can be judged by eye that the two images belong to the same tree.
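The patent performs this comparison visually with a rotation slider. An automated stand-in (entirely illustrative, not from the patent) can mirror one contour, sweep rotation angles, and score the overlap by mean nearest-point distance:

```python
import numpy as np

def contour_match_score(c1, c2, step_deg=5):
    """Step S9 sketch: mirror c2, rotate it over 0-360 degrees, and return
    the best (lowest) mean nearest-point distance to c1. Contours are
    (N, 2) point arrays, compared about their centroids."""
    a = np.asarray(c1, float)
    a = a - a.mean(axis=0)
    b = np.asarray(c2, float)
    b = b - b.mean(axis=0)
    b = b * np.array([-1.0, 1.0])   # mirror: the cut face is seen flipped
    best = np.inf
    for deg in range(0, 360, step_deg):
        t = np.radians(deg)
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        rb = b @ R.T
        d = np.linalg.norm(a[:, None, :] - rb[None, :, :], axis=2)
        best = min(best, d.min(axis=1).mean())
    return best
```

A score near zero indicates that some mirroring-plus-rotation superimposes the two contours, which is the coincidence criterion of step S9; partial occlusions (such as missing V-shaped wedges) raise the score only locally.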
During tree felling, workers often make two cuts between the two sections and additionally saw out one or more V-shaped wedges so that the fall direction of the tree can be controlled; the position and outline of the missing V-shaped wedges can be seen in fig. 2. Because of occlusion, the missing V-shaped wedges and other factors, the contour cannot be observed at some positions. Nevertheless, after rotation, as can be seen from fig. 2, the contours of the left and right images are highly matched near 120° clockwise despite a missing V-shaped wedge at that position, and the two contours are highly matched near 180° clockwise. Considering that the diameters of the two images are similar and parts of their contours are highly matched, it can technically be judged that the two images belong to the same tree; the left side is the basal stump image and the right side is the cross-section image of the stolen tree, which provides technical support for case diagnosis.
According to the invention, based on perspective transformation and deep learning, the corner detection auxiliary device is placed on the tree, the stump of the stolen tree and the target tree cross-section are photographed, and after image acquisition, distortion correction, corner detection auxiliary device detection, corner detection, perspective transformation, contour extraction, diameter measurement, diameter comparison and contour comparison are carried out in sequence, so that clear technical support can be provided simply and effectively as to whether two tree cross-sections belong to the same tree. In addition, the corner detection auxiliary device placed on the tree serves as a reference object, and the neural network assists its detection, so corner detection and perspective transformation can be carried out more rapidly and detection efficiency is improved.
Example 2
The stolen tree matching system based on perspective transformation and deep learning comprises a mobile terminal and a corner detection auxiliary device. The mobile terminal is preferably a smartphone or a smart tablet and runs Android APP software comprising an image acquisition module, an image correction module, a corner detection module, a perspective transformation module, a contour extraction module, a diameter measurement module, a diameter comparison module and a contour comparison module; the corner detection auxiliary device is a checkerboard;
the image acquisition module is used for photographing, after the corner detection auxiliary device has been placed on the stump cross-section and the target tree cross-section, a full stump view and a full tree cross-section view that include the corner detection auxiliary device;
the image correction module is used for carrying out distortion correction on the image acquired by the image acquisition module;
the corner detection module is used for classifying the stump image and the target image processed by the image correction module by using a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking the corner detection auxiliary device area, detecting the corners of the corner detection auxiliary device area, and sequencing the corners;
the perspective transformation module is used for carrying out least square perspective transformation by utilizing the ordered corner points to obtain a perspective transformation matrix, and carrying out perspective transformation on the stump image and the target image by utilizing the matrix;
the contour extraction module is used for classifying the stump image and the target image after perspective transformation by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area, a tree cross-section area and a background area, filling holes in the tree cross-section area, and respectively extracting the contour of the stump image and the contour of the target image after edge smoothing;
the diameter measurement module is used for finding, in the stump image and the target image respectively, the two contour points with the maximum mutual distance; this distance is defined as the long diameter, a perpendicular is drawn through the midpoint of this diameter, and the segment between its two intersections with the contour is defined as the short diameter;
the diameter comparison module is used for comparing the long diameter and the short diameter of the stump image with the long diameter and the short diameter of the target image respectively; if the error is within a preset threshold, contour comparison is carried out, and if the error is outside the preset threshold, the target tree and the stump are considered not to be the same tree and matching is stopped;
the contour comparison module is used for mirroring and rotating the perspective transformed stump image and the target image to compare the contours, if the contours coincide in mirroring and rotating, the target tree and the stump belong to the same tree, and if the contours do not coincide, the target tree and the stump do not belong to the same tree.
The invention does not require expensive instruments such as three-dimensional scanners or laser measuring instruments, has strong resistance to light interference, does not damage the tree, and can simply and effectively provide clear technical support for whether two tree cross-sections belong to the same tree.
It should be understood that although the present disclosure has been described in terms of various embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted for clarity only. Those skilled in the art should treat the disclosure as a whole, and the technical solutions in the various embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.
The above list of detailed descriptions is only specific to practical embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent embodiments or modifications that do not depart from the spirit of the present invention should be included in the scope of the present invention.

Claims (10)

1. A stolen tree matching method based on perspective transformation and deep learning is characterized by comprising the following steps:
s1, obtaining a stump image: placing the corner detection auxiliary device on the cross section of the stump, and shooting to obtain a stump complete graph comprising the corner detection auxiliary device;
step S2, acquiring a target image: placing the corner detection auxiliary device on the cross section of the target tree, and shooting to obtain a full view of the cross section of the tree comprising the corner detection auxiliary device;
step S3, preliminary image processing: performing distortion correction on the images acquired in the step S1 and the step S2;
step S4, neural network auxiliary detection of corner points: classifying the stump image and the target image processed in the step S3 by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking, detecting the corner points of the corner detection auxiliary device area, and sequencing the corners;
step S5, perspective transformation: performing least square perspective transformation by using the ordered corner points to obtain a perspective transformation matrix, and performing perspective transformation on the stump image and the target image by using the matrix;
step S6, contour extraction: classifying the cut pile image and the target image after perspective transformation by adopting a deep learning semantic segmentation neural network, dividing the cut pile image and the target image into a corner detection auxiliary device area, a tree cross section area and a background area, filling holes in the tree cross section area, and respectively extracting outlines of the cut pile image and the target image after edge smoothing;
step S7, diameter measurement: respectively finding two contour points with the maximum distance between the stump image and the target image, determining the distance between the two points as a long diameter, making a vertical line at the center point of the diameter, and determining the connecting line between the vertical line and two intersection points of the contour as a short diameter;
step S8, diameter comparison: respectively comparing the long diameter and the short diameter of the stump image with the long diameter and the short diameter of the target image; if the error is within a preset threshold, the next step is carried out, and if the error is outside the preset threshold, the target tree and the stump are considered not to belong to the same tree and matching is stopped;
step S9, contour comparison: and mirroring and rotating the perspective transformed stump image and the target image, comparing the outlines, if the outlines coincide with each other in mirroring and rotating, regarding that the target tree and the stump belong to the same tree, and if the outlines do not coincide with each other, not the same tree.
2. The method for matching stolen trees based on perspective transformation and deep learning according to claim 1, wherein the corner detection auxiliary device is a checkerboard.
3. The method for matching stolen trees based on perspective transformation and deep learning according to claim 2, wherein the checkerboard processing precision is less than 0.1mm.
4. The method for matching stolen trees based on perspective transformation and deep learning according to claim 2, wherein the back of the checkerboard is provided with sharp parts, and the sharp parts are inserted into the trees to fix the checkerboard when the checkerboard is placed on the cross section of the trees.
5. The method for matching stolen trees based on perspective transformation and deep learning according to claim 1, wherein in the step S4 the deep learning semantic segmentation neural network adopts DeepLabV3+, the backbone network is InceptionResNetV2, and the network has two classes, one being the corner detection auxiliary device and the other the background; the labeling tool is the ImageLabeler of MATLAB, a polygonal frame is used to label the target checkerboard, and the rest is labeled as the background.
6. The method for matching stolen trees based on perspective transformation and deep learning according to claim 1, wherein the neural network input format in the step S4 is a three-channel RGB image; when training the neural network, random rotation, scaling, random flipping and random translation strategies are adopted for the image samples, training is performed on a CPU, and the ADAM method is adopted for gradient descent.
7. The method for matching stolen trees based on perspective transformation and deep learning according to claim 1, wherein the corner ordering algorithm adopted in the step S4 specifically comprises the following steps:
step S4.1: let T denote the corner point set; four adjacent points of T are found and put in correspondence with the four vertices of a square of side length Q, a perspective transformation matrix P between these 4 point pairs is obtained according to the perspective transformation rules, and all corner points in T are perspective-transformed with P to obtain a new point set denoted S;
step S4.2: the first point at the upper left corner of S is labeled 0; starting from this point, with search step length Q, the δ-neighborhood of the next target position is searched for a corner point; if a corner point exists, the point closest to the target position is taken as the target point and the label is incremented by 1; if not, the coordinates of that point are set to NULL and the label is still incremented by 1; this search is repeated, labeling each found or missing point, until the last corner point;
step S4.3: ordering the corner points in the corner point detection auxiliary device, wherein the ordering starting point is the corner point of the upper left corner, and the ordering direction is rightward and downward;
step S4.4: the sorted points obtained in step S4.2 and those obtained in step S4.3 are placed in one-to-one correspondence, each pair having the same sequence number; if the coordinates in a pair contain a NULL value, that pair does not participate in the subsequent least-squares perspective transformation; all the remaining point pairs participate in the least-squares perspective transformation.
8. The stolen tree matching method based on perspective transformation and deep learning according to claim 1, wherein the contour extraction of step S6 adopts a DeepLabV3+ deep learning semantic segmentation neural network whose backbone network is InceptionResNetV2; the network has three classes, one being the tree cross-section, another the checkerboard, and the rest the background; the labeling tool is the ImageLabeler of MATLAB, the tree cross-section is labeled with a pixel label, the target checkerboard with a polygonal frame, and the remaining pixels are labeled as the background; the network input format is a three-channel RGB image, random rotation, scaling, random flipping and random translation strategies are applied to the samples, training is performed on a CPU, and ADAM is adopted for gradient descent.
9. The method for matching stolen trees based on perspective transformation and deep learning according to claim 1, wherein the error preset threshold of the diameter comparison in the step S8 is 20%.
10. A system for a stolen tree matching method based on perspective transformation and deep learning according to any of claims 1-9 is characterized by comprising a mobile terminal and a corner detection auxiliary device, wherein the mobile terminal comprises android APP software, the android APP software comprises an image acquisition module, an image correction module, a corner detection module, a perspective transformation module, a contour extraction module, a diameter measurement module, a diameter comparison module and a contour comparison module; the corner detection auxiliary device is a checkerboard;
the image acquisition module is used for placing the corner detection auxiliary device on the cut pile section and the target tree section, and shooting to acquire a cut pile full-view and a tree section full-view comprising the corner detection auxiliary device;
the image correction module is used for carrying out distortion correction on the image acquired by the image acquisition module;
the corner detection module is used for classifying the stump image and the target image processed by the image correction module by using a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area and a background area, respectively marking the corner detection auxiliary device area, detecting the corners of the corner detection auxiliary device area, and sequencing the corners;
the perspective transformation module is used for carrying out least square perspective transformation by utilizing the ordered corner points to obtain a perspective transformation matrix, and carrying out perspective transformation on the stump image and the target image by utilizing the matrix;
the contour extraction module is used for classifying the stump image and the target image after perspective transformation by adopting a deep learning semantic segmentation neural network, dividing the stump image and the target image into a corner detection auxiliary device area, a tree cross-section area and a background area, filling holes in the tree cross-section area, and respectively extracting the contour of the stump image and the contour of the target image after edge smoothing;
the diameter measurement module is used for finding two contour points with the maximum distance between the stump image and the target image respectively, the distance is defined as a long diameter, a vertical line is drawn at the center point of the diameter, and a connecting line between the vertical line and two intersection points of the contour is defined as a short diameter;
the diameter comparison module is used for comparing the long diameter and the short diameter of the stump image with the long diameter and the short diameter of the target image respectively; if the error is within a preset threshold, contour comparison is carried out, and if the error is outside the preset threshold, the target tree and the stump are considered not to be the same tree and matching is stopped;
the contour comparison module is used for mirroring and rotating the perspective transformed stump image and the target image to compare the contours, if the contours coincide in mirroring and rotating, the target tree and the stump belong to the same tree, and if the contours do not coincide, the target tree and the stump do not belong to the same tree.
CN202310248630.XA 2023-03-15 2023-03-15 Stolen tree matching method and system based on perspective transformation and deep learning Pending CN116416443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310248630.XA CN116416443A (en) 2023-03-15 2023-03-15 Stolen tree matching method and system based on perspective transformation and deep learning


Publications (1)

Publication Number Publication Date
CN116416443A true CN116416443A (en) 2023-07-11

Family

ID=87050692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310248630.XA Pending CN116416443A (en) 2023-03-15 2023-03-15 Stolen tree matching method and system based on perspective transformation and deep learning

Country Status (1)

Country Link
CN (1) CN116416443A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635619A (en) * 2024-01-26 2024-03-01 南京海关工业产品检测中心 Log volume detection method and system based on machine vision
CN117635619B (en) * 2024-01-26 2024-04-05 南京海关工业产品检测中心 Log volume detection method and system based on machine vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination