CN118072177A - Line channel inflammable tree species identification method based on laser point cloud and image fusion - Google Patents

Line channel inflammable tree species identification method based on laser point cloud and image fusion

Info

Publication number
CN118072177A
CN118072177A
Authority
CN
China
Prior art keywords: tree, pixel, point cloud, binary mask, map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410441204.2A
Other languages
Chinese (zh)
Inventor
胡睿哲
尹林
胡京
邹建章
欧阳俊杰
罗成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Booway New Technology Co ltd
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd
Original Assignee
Jiangxi Booway New Technology Co ltd
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Jiangxi Booway New Technology Co ltd, State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd filed Critical Jiangxi Booway New Technology Co ltd
Priority to CN202410441204.2A priority Critical patent/CN118072177A/en
Publication of CN118072177A publication Critical patent/CN118072177A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of transmission line inspection and discloses a line channel inflammable tree species identification method based on laser point cloud and image fusion. The method performs visual classification labeling on the point clouds of vegetation, ground, buildings and roads; extracts the point cloud data of individual trees and displays the different tree classes in different colors; creates a binary mask map of each tree from its single-tree point cloud data, registers the binary mask map with the orthophoto captured by the unmanned aerial vehicle, and segments the tree's image region after registration to obtain a tree image; labels the tree images by species, feeds them into a deep neural network for recognition to achieve intelligent extraction of tree species information, and displays the extraction results in the point cloud data. By fusing laser point cloud data with orthophoto data to identify and analyze the tree species in transmission line channels, the method features low workload, fast recognition and high accuracy.

Description

Line channel inflammable tree species identification method based on laser point cloud and image fusion
Technical Field
The invention belongs to the technical field of transmission line inspection, and relates to a line channel inflammable tree species identification method based on laser point cloud and image fusion.
Background
At present, as the scale of power transmission lines keeps expanding, lines that cross forest zones over long distances have become common. Because large numbers of inflammable tree species are distributed within transmission line channels, the forest-fire risk faced by transmission lines keeps rising under hot, dry weather conditions. To prevent forest fires along transmission lines, operation and maintenance personnel need to locate inflammable trees along the line and fell them, reducing the risk of line tripping caused by forest fires. However, identifying inflammable trees places high demands on inspection personnel, and because transmission lines follow long paths through complex field environments, manual identification involves a heavy workload and low efficiency. Therefore, finding a low-cost, high-efficiency method for identifying inflammable trees is important for reducing the workload of front-line staff.
When an unmanned aerial vehicle collects a laser point cloud, it emits laser pulse signals toward the target from above the trees. Because the power of the UAV-borne lidar is relatively low, the pulses cannot penetrate multiple layers of foliage to reach the trunk, and only a small number of laser signals pass through gaps between leaves, reach the ground, and return to the receiver. The laser point cloud collected by the UAV therefore actually consists of the contour points of the outer crown surface plus sporadic ground points. Lacking points on trunks, branches and other parts, the morphological structure of a tree is difficult to distinguish by modeling the laser point cloud alone.
Disclosure of Invention
In order to solve the above problems, the invention provides a line channel inflammable tree species identification method based on laser point cloud and image fusion. By fusing laser point cloud data with orthophoto data, it identifies and analyzes the tree species in transmission line channels, featuring low workload, fast recognition and high accuracy, thereby greatly reducing the workload of line personnel in channel inspection and tree species identification.
The invention is realized by the following technical scheme. A line channel inflammable tree species identification method based on laser point cloud and image fusion comprises the following steps:
S1, carrying out visual classification labeling on the point clouds of vegetation, ground, buildings and roads;
S2, extracting the point cloud data of individual trees, and classifying and displaying different types of trees in different colors;
S3, creating a binary mask map of each tree from its single-tree point cloud data, registering the binary mask map with the orthophoto captured by the unmanned aerial vehicle, and segmenting the tree's image region after registration to obtain a tree image;
S4, labeling the tree images by species, then feeding them into a deep neural network for recognition, achieving intelligent extraction of tree species information, and displaying the extraction results in the point cloud data;
the process for extracting the point cloud data of the single tree comprises the following steps:
S21, subtracting the Digital Elevation Model (DEM) from the Digital Surface Model (DSM) to generate a Canopy Height Model (CHM);
S22, finding a local maximum value in the canopy height model and designating it as a treetop;
S221, starting from the upper left corner of the canopy height model, traversing each pixel from left to right and top to bottom, and reading the height h represented by the pixel;
S222, calculating the window radius of the local maximum filter from the height h represented by the pixel;
S223, if the height value at the center point of the local maximum filter window is the highest within the window, the pixel represents a treetop, otherwise it is not a treetop;
S224, moving to the next pixel and repeating steps S222-S223 until every pixel in the canopy height model has been processed, completing the extraction of the treetop points of the region;
S23, searching the crown area around each treetop by using the treetop position combined with a gradient threshold of the tree crown, the specific method comprising:
S231, taking the treetop points as seed points and putting them into a crown set M;
S232, traversing each seed point in the crown set M to obtain the seed point height $h_s$; searching the neighborhood pixels of the seed point to obtain each neighborhood pixel's height $h_n$; comparing $h_s$ with $h_n$, and if the heights satisfy:

$$h_n \le h_s \quad \text{and} \quad h_n \ge t \cdot \bar{h}$$

the neighborhood pixel is placed into the crown set M, where $t$ is the crown gradient threshold and $\bar{h}$ represents the average height of the crown range;
S233, repeating step S232 until no more points are added to the crown set M.
Further preferably, the step S3 includes the following substeps:
S31, projecting the point cloud data of a single tree onto the XOY plane in the UTM coordinate system and creating a binary mask map of the tree;
S32, converting the orthophoto from the WGS84 coordinate system to the UTM coordinate system using the GlobalMapper tool or the GDAL library;
S33, projecting the binary mask map of the tree onto the orthophoto, and obtaining the image of the single tree through the AND operation of the binary mask map and the orthophoto.
Further preferably, the step of creating a binary mask map of the tree in step S31 is as follows:
S311, traversing the XOY-plane coordinate values of the single-tree point cloud and finding the maximum and minimum values in the X-axis and Y-axis directions respectively;
S312, calculating the pixel width W and the pixel height H of the binary mask map according to the resolution S;
S313, creating an all-black image P with pixel width W and pixel height H;
S314, traversing the XOY-plane coordinate values of the single-tree point cloud again, and calculating the pixel coordinates in the binary mask map of each point d of the single-tree point cloud:

$$u_d = \mathrm{Int}\!\left(\frac{x_d - x_{\min}}{S}\right),\qquad v_d = \mathrm{Int}\!\left(\frac{y_{\max} - y_d}{S}\right)$$

where $x_d$ and $y_d$ are the horizontal and vertical coordinates of point d of the single-tree point cloud in the XOY plane; $x_{\min}$ is the minimum value in the X-axis direction; $y_{\max}$ is the maximum value in the Y-axis direction; $u_d$ and $v_d$ are the pixel abscissa and ordinate of point d in the binary mask map;
S315, setting the corresponding pixel in image P to white according to the pixel coordinates of each point of the single-tree point cloud in the binary mask map;
S316, repeating steps S314-S315 until all points of the single tree have been traversed; the final image P is the binary mask map of the tree.
Further preferably, in step S33, the calculation process of projecting the binary mask map of the tree onto the orthographic image map is as follows:
S331, traversing each pixel in the binary mask map, if the pixel is black, skipping, and continuing traversing the next pixel point;
S332, if the pixel is white, taking its pixel coordinates $(u, v)$ in the binary mask map and calculating the corresponding region range $N = [X_{\min}, X_{\max}] \times [Y_{\min}, Y_{\max}]$ in the UTM coordinate system; each mask pixel converts to a square of side S in UTM coordinates, whose coordinate extremes are given by:

$$X_{\min} = x_{\min} + u\,S,\quad X_{\max} = x_{\min} + (u+1)\,S,\quad Y_{\max} = y_{\max} - v\,S,\quad Y_{\min} = y_{\max} - (v+1)\,S$$

where $X_{\min}$ is the minimum abscissa, $X_{\max}$ the maximum abscissa, $Y_{\min}$ the minimum ordinate and $Y_{\max}$ the maximum ordinate in the UTM coordinate system corresponding to the mask pixel coordinates;
S333, calculating the pixel region K of the region range N in the orthophoto: the upper-left corner $(X_{\min}, Y_{\max})$ and the lower-right corner $(X_{\max}, Y_{\min})$ of region N are converted into orthophoto pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ respectively:

$$u_{tl} = \mathrm{Int}\!\left(\frac{X_{\min} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{tl} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\max}}{S_{dom}}\right)$$
$$u_{br} = \mathrm{Int}\!\left(\frac{X_{\max} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{br} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\min}}{S_{dom}}\right)$$

where $(u_{tl}, v_{tl})$ are the pixel coordinates of the upper-left corner of region N under the orthophoto and $(u_{br}, v_{br})$ those of the lower-right corner; $X^{dom}_{\min}$ and $X^{dom}_{\max}$ are the minimum and maximum X-axis coordinates of the orthophoto in the UTM coordinate system; $Y^{dom}_{\min}$ and $Y^{dom}_{\max}$ are the minimum and maximum Y-axis coordinates; and $S_{dom}$ is the resolution of the orthophoto;
S334, from the pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ calculated in step S333, the pixel region range $K = [u_{tl}, u_{br}] \times [v_{tl}, v_{br}]$ of region N under the orthophoto is obtained; the pixels within the pixel region range K are put into the output set;
S335, repeating the steps S331-S334 until all pixels in the binary mask map are traversed;
S336, sequentially outputting pixels in the output set to a new image;
S337, extracting, based on the unified UTM coordinate system, the area of the orthophoto that overlaps the white part of the binary mask map to form the tree image.
Further preferably, the deep neural network is a PP-LCNet network; it consists of 4 convolutional layers, 1 global average pooling layer and 2 fully connected layers, the 4 convolutional layers being, in order, a standard convolutional layer, two 3×3 depthwise separable convolutions and a 5×5 depthwise separable convolution.
The beneficial effects of the invention are as follows: the Digital Elevation Model (DEM) is subtracted from the Digital Surface Model (DSM) to extract the Canopy Height Model (CHM); the crown extraction method is improved by dynamically adjusting the radius of the local maximum filter based on the relation between tree height and crown size, making crown extraction more accurate; the binary mask map is registered with the orthophoto captured by the unmanned aerial vehicle, and the tree's image region is segmented after registration to obtain the tree image; PP-LCNet is adopted to intelligently extract tree species information, improving the accuracy of tree species identification.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a PP-LCNet network.
FIG. 3 is a diagram of a PP-LCNet network training.
Detailed Description
The invention is illustrated in further detail below in connection with examples.
Referring to fig. 1, a method for identifying inflammable tree species of a line channel based on laser point cloud and image fusion includes:
S1, carrying out visual classification labeling on the point clouds of vegetation, ground, buildings and roads;
S2, extracting the point cloud data of individual trees, and classifying and displaying different types of trees in different colors;
S3, creating a binary mask map of each tree from its single-tree point cloud data, registering the binary mask map with the orthophoto captured by the unmanned aerial vehicle, and segmenting the tree's image region after registration to obtain a tree image;
S4, labeling the tree images by species, then feeding them into a deep neural network for recognition, achieving intelligent extraction of tree species information, and displaying the extraction results in the point cloud data.
The process for extracting the point cloud data of the single tree in the embodiment includes:
S21, subtracting the Digital Elevation Model (DEM) from the Digital Surface Model (DSM) to generate the Canopy Height Model (CHM). The digital surface model is made by dividing the point cloud into grids in the XOY plane, taking the maximum point height within each grid, and filling the grids with different colors according to the height values; generally, to meet the requirement of single-tree segmentation, the resolution of the digital surface model should be no coarser than 0.1 m. Digital Elevation Model (DEM) fabrication: the ground point cloud is first divided into grids in the XOY plane, then the maximum point height within each grid is taken, and the grids are filled with different colors according to the height values.
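For illustration, the following is a minimal Python sketch of S21, assuming `points` (all lidar returns) and `ground` (ground-classified returns) are given as N×3 numpy arrays of UTM x, y, z; the colored-grid rendering is omitted and all names are illustrative.

```python
# A sketch of S21 under the stated assumptions. `points` holds all lidar
# returns, `ground` only the ground-classified returns (both N x 3 arrays
# of UTM x, y, z); S is the grid size in metres.
import numpy as np

def rasterize_max(pts, x_min, y_max, w, h, s):
    """Fill each grid cell with the maximum point height that falls in it."""
    grid = np.full((h, w), np.nan)
    u = ((pts[:, 0] - x_min) / s).astype(int)
    v = ((y_max - pts[:, 1]) / s).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, z in zip(u[ok], v[ok], pts[ok, 2]):
        if np.isnan(grid[vi, ui]) or z > grid[vi, ui]:
            grid[vi, ui] = z
    return grid

S = 0.1                                   # grid resolution, metres
x_min, y_max = points[:, 0].min(), points[:, 1].max()
w = int((points[:, 0].max() - x_min) / S) + 1
h = int((y_max - points[:, 1].min()) / S) + 1
dsm = rasterize_max(points, x_min, y_max, w, h, S)   # all returns
dem = rasterize_max(ground, x_min, y_max, w, h, S)   # ground returns only
chm = dsm - dem                           # canopy height model (NaN = no data)
```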
S22, finding local maxima in the canopy height model and designating them as treetops. Because a treetop is the highest point of its tree within the crown range, it appears as a local maximum on the canopy height model. A local maximum filter (Local Maximum Filtering) is used to find the local maxima of the canopy height model. Considering the shape characteristics of trees, a circle is selected as the shape of the filter's local region.
S221, starting from the upper left corner of the canopy height model, traversing each pixel from left to right and top to bottom, and reading the height h represented by the pixel;
S222, calculating the window radius of the local maximum filter from the height h represented by the pixel;
Successful identification of treetop locations with a local maximum filter depends on careful selection of the filter window size. If too large a window radius is used, shorter treetops may be missed; with too small a window radius, locally convex parts of a crown may be misinterpreted as treetops. By analyzing height and crown-diameter data of real trees, a regression is fitted so that the tree height $h$ yields the crown diameter $D$, from which the radius of the local maximum filter is derived. The correspondence between crown diameter and tree height is of the form:

$$D = a \cdot e^{b\,h}$$

where e is the natural constant and a and b are coefficients determined by the regression;
Another way to improve the local maximum filter is to purposefully adjust the filter's shape to the characteristics of the local crowns. For example, for trees on a sun-facing mountain slope whose crowns lean toward one side, a triangular filter with its apex toward that side can be selected and sized according to the tree height. Likewise, where trees in a water-scarce area grow toward the water source, a rectangular or elliptical filter can be selected, with the major axis of the ellipse toward the water source, and its center position and size adjusted according to the tree height and the distance from the water source.
S223, if the height value at the center point of the local maximum filter window is the highest within the window, the pixel represents a treetop, otherwise it is not a treetop;
S224, moving to the next pixel and repeating steps S222-S223 until every pixel in the canopy height model has been processed, completing the extraction of the treetop points of the region, as sketched below;
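A minimal sketch of the scan in S221-S224 follows, assuming the `chm` array from the previous sketch; the regression coefficients A and B are illustrative placeholders, not values from the patent, and the circular window is approximated by its enclosing square.

```python
# A sketch of the treetop scan in S221-S224. A and B are hypothetical
# regression coefficients for the height-to-crown-diameter relation.
import numpy as np

A, B = 1.5, 0.03                          # illustrative, not from the patent
S = 0.1                                   # CHM resolution, metres per pixel

def radius_px(h):
    d = A * np.exp(B * h)                 # crown diameter from tree height h
    return max(1, int(d / (2 * S)))       # filter window radius in pixels

def find_treetops(chm, h_min=2.0):
    tops, (H, W) = [], chm.shape
    for v in range(H):                    # S221: top-to-bottom
        for u in range(W):                # S221: left-to-right
            h = chm[v, u]
            if np.isnan(h) or h < h_min:  # skip gaps and low vegetation
                continue
            r = radius_px(h)              # S222: window radius from height
            win = chm[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1]
            if h >= np.nanmax(win):       # S223: centre is the local maximum
                tops.append((u, v, h))
    return tops                           # S224: all treetop pixels
```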
S23, after the treetops are detected, the crowns must also be identified; because a whole crown presents an umbrella-like structure, the complete crown region can be extracted by gradient continuity. Using the treetop position combined with the gradient threshold of the crown, the crown area is searched around each treetop. The specific method is as follows:
S231, taking the treetop points as seed points and putting them into a crown set M;
S232, traversing each seed point in the crown set M to obtain the seed point height $h_s$; searching the neighborhood pixels of the seed point to obtain each neighborhood pixel's height $h_n$; comparing $h_s$ with $h_n$, and if the heights satisfy:

$$h_n \le h_s \quad \text{and} \quad h_n \ge t \cdot \bar{h}$$

the neighborhood pixel is placed into the crown set M, where $t$ is the crown gradient threshold and $\bar{h}$ represents the average height of the crown range;
S233, repeating step S232 until no more points are added to the crown set M; a sketch of this region growing follows.
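The region growing in S231-S233 can be sketched as follows; the acceptance rule is a hedged reading of the gradient-threshold condition (a neighbor joins if it is no higher than its seed and no lower than a fraction t of the current average crown height), and `chm` and `find_treetops` are assumed from the previous sketches.

```python
# A sketch of the crown region growing in S231-S233.
import numpy as np

def grow_crown(chm, top, t=0.6):
    H, W = chm.shape
    u0, v0, _ = top
    crown = {(u0, v0)}                    # S231: seed the crown set M
    frontier = [(u0, v0)]
    while frontier:                       # S233: until no more points join
        u, v = frontier.pop()
        h_s = chm[v, u]                   # seed height
        h_bar = np.mean([chm[q, p] for p, q in crown])   # mean crown height
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p, q = u + du, v + dv
            if 0 <= p < W and 0 <= q < H and (p, q) not in crown:
                h_n = chm[q, p]           # neighbour height
                if not np.isnan(h_n) and h_n <= h_s and h_n >= t * h_bar:
                    crown.add((p, q))     # S232: accept the neighbour
                    frontier.append((p, q))
    return crown

crowns = [grow_crown(chm, top) for top in find_treetops(chm)]
```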
The step S3 of this embodiment includes the following sub-steps:
S31, projecting the point cloud data of a single tree onto the XOY plane in the UTM coordinate system and creating a binary mask map of the tree;
S32, converting the orthophoto from the WGS84 coordinate system to the UTM coordinate system using the GlobalMapper tool or the GDAL library;
S33, projecting the binary mask map of the tree onto the orthophoto, and obtaining the image of the single tree through the AND operation of the binary mask map and the orthophoto. A sketch of the S32 conversion follows this list.
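The conversion in S32 can be done with the GDAL Python bindings as sketched below; the target EPSG code 32650 (UTM zone 50N) is an illustrative assumption, since the correct zone depends on the survey area.

```python
# A sketch of S32 using the GDAL Python bindings. The target EPSG code is
# an assumption; pick the UTM zone that covers the survey area.
from osgeo import gdal

gdal.Warp(
    "dom_utm.tif",            # output: orthophoto in UTM coordinates
    "dom_wgs84.tif",          # input: orthophoto in WGS84 (EPSG:4326)
    dstSRS="EPSG:32650",      # assumed UTM zone
    resampleAlg="bilinear",
)
```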
In step S31 of this embodiment, creating the binary mask map of a tree means: each point cloud coordinate of the tree is projected onto the XOY plane in the UTM coordinate system and sampled on a 0.1 m grid; if a 0.1 m grid cell contains any point of the tree, the cell is filled white, otherwise it is filled black. The steps are as follows:
S311, traversing the XOY-plane coordinate values of the single-tree point cloud and finding the maximum and minimum values in the X-axis and Y-axis directions respectively;
S312, calculating the pixel width W (the number of mask pixels along the X axis of the pixel coordinate system) and the pixel height H (the number of mask pixels along the Y axis of the pixel coordinate system) of the binary mask map according to the resolution S:

$$W = \mathrm{Int}\!\left(\frac{x_{\max} - x_{\min}}{S}\right) + 1,\qquad H = \mathrm{Int}\!\left(\frac{y_{\max} - y_{\min}}{S}\right) + 1$$

where $x_{\max}$ and $x_{\min}$ are the maximum and minimum values in the X-axis direction; $y_{\max}$ and $y_{\min}$ are the maximum and minimum values in the Y-axis direction; Int is the rounding operator. Without affecting the recognition result, the resolution S is taken as 0.1 m in the binary mask computation to keep the computational cost as low as possible while still meeting the image-resolution requirement of image recognition;
S313, creating an all-black image P with pixel width W and pixel height H;
S314, traversing the XOY-plane coordinate values of the single-tree point cloud again, and calculating the pixel coordinates in the binary mask map of each point d of the single-tree point cloud:

$$u_d = \mathrm{Int}\!\left(\frac{x_d - x_{\min}}{S}\right),\qquad v_d = \mathrm{Int}\!\left(\frac{y_{\max} - y_d}{S}\right)$$

where $x_d$ and $y_d$ are the horizontal and vertical coordinates of point d of the single-tree point cloud in the XOY plane; $x_{\min}$ is the minimum value in the X-axis direction; $y_{\max}$ is the maximum value in the Y-axis direction; $u_d$ and $v_d$ are the pixel abscissa and ordinate of point d in the binary mask map;
S315, setting the corresponding pixel in image P to white according to the pixel coordinates of each point of the single-tree point cloud in the binary mask map.
S316, repeating steps S314-S315 until all points of the single tree have been traversed; the final image P is the binary mask map of the tree, as sketched below.
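A minimal numpy sketch of S311-S316, assuming `tree` is an N×2 array of the single tree's UTM x, y coordinates:

```python
# A sketch of S311-S316: build the binary mask of one tree. `tree` is an
# N x 2 array of the tree's UTM x, y coordinates; s is the resolution S.
import numpy as np

def tree_mask(tree, s=0.1):
    x_min, y_max = tree[:, 0].min(), tree[:, 1].max()   # S311: extremes
    w = int((tree[:, 0].max() - x_min) / s) + 1         # S312: pixel width
    h = int((y_max - tree[:, 1].min()) / s) + 1         # S312: pixel height
    mask = np.zeros((h, w), dtype=np.uint8)             # S313: all-black P
    u = ((tree[:, 0] - x_min) / s).astype(int)          # S314: pixel coords
    v = ((y_max - tree[:, 1]) / s).astype(int)
    mask[v, u] = 255                                    # S315: set to white
    return mask, x_min, y_max                           # S316: the mask
```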
In order to map the pixels of the binary mask map into the pixel coordinate system of the orthophoto, the fact that both images (the binary mask map and the orthophoto) can be converted into the UTM coordinate system is exploited: the pixels of the binary mask map are first converted into the UTM coordinate system to obtain a region set N, the region set N is then converted into the pixel coordinate system of the orthophoto, and finally the pixel range K is obtained.
Since the Int operator is used to calculate the pixel coordinates of the single-tree point cloud in the binary mask map, multiple UTM coordinates may be mapped to the same pixel coordinate. Therefore, converting a mask pixel back to UTM coordinates yields not a single point but a rectangular region whose side length equals the spatial resolution S. To represent this region, the maximum and minimum values of its X and Y coordinates are used: $N = [X_{\min}, X_{\max}] \times [Y_{\min}, Y_{\max}]$. For a mask pixel with coordinates $(u, v)$, these extremes are given by:

$$X_{\min} = x_{\min} + u\,S,\quad X_{\max} = x_{\min} + (u+1)\,S,\quad Y_{\max} = y_{\max} - v\,S,\quad Y_{\min} = y_{\max} - (v+1)\,S$$

where $X_{\min}$ is the minimum abscissa, $X_{\max}$ the maximum abscissa, $Y_{\min}$ the minimum ordinate and $Y_{\max}$ the maximum ordinate in the UTM coordinate system corresponding to the mask pixel coordinates.
Since the pixel coordinates are only integers, the pixels of the binary mask map are discrete points when they are converted into UTM coordinates.
Since the binary mask map is an image derived from the XOY coordinates of the tree point cloud, each pixel in the binary mask map has a corresponding region range in the UTM coordinate system. Likewise, each pixel in the projection-transformed orthophoto has a corresponding coordinate region in the UTM coordinate system, so the pixel coordinates of the two images can be related through the UTM coordinate system.
Parameters such as spatial resolution of the orthophotomap, maxima and minima of the X-axis and Y-axis in the corresponding UTM coordinate system can be read from the orthophotomap file.
In step S33 of this embodiment, the calculation process of projecting the binary mask map of the tree onto the orthographic image map is as follows:
S331, traversing each pixel in the binary mask map, if black, skipping, and continuing traversing the next pixel point.
S332, if the pixel is a white point, taking its pixel coordinates $(u, v)$ in the binary mask map and calculating the corresponding region range $N = [X_{\min}, X_{\max}] \times [Y_{\min}, Y_{\max}]$ in the UTM coordinate system using the formulas above.
S333, calculating the pixel region K of the region range N in the orthophoto: the upper-left corner $(X_{\min}, Y_{\max})$ and the lower-right corner $(X_{\max}, Y_{\min})$ of region N are converted into orthophoto pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ respectively:

$$u_{tl} = \mathrm{Int}\!\left(\frac{X_{\min} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{tl} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\max}}{S_{dom}}\right)$$
$$u_{br} = \mathrm{Int}\!\left(\frac{X_{\max} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{br} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\min}}{S_{dom}}\right)$$

where $(u_{tl}, v_{tl})$ are the pixel coordinates of the upper-left corner of region N under the orthophoto and $(u_{br}, v_{br})$ those of the lower-right corner; $X^{dom}_{\min}$ and $X^{dom}_{\max}$ are the minimum and maximum X-axis coordinates of the orthophoto in the UTM coordinate system; $Y^{dom}_{\min}$ and $Y^{dom}_{\max}$ are the minimum and maximum Y-axis coordinates; and $S_{dom}$ is the resolution of the orthophoto.
S334, from the pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ calculated in step S333, the pixel region range $K = [u_{tl}, u_{br}] \times [v_{tl}, v_{br}]$ of region N under the orthophoto is obtained; the pixels within the pixel region range K are put into the output set.
S335, repeating the steps S331-S334 until all pixels in the binary mask map are traversed.
S336, sequentially outputting pixels in the output set to a new image;
S337, extracting, based on the unified UTM coordinate system, the area of the orthophoto that overlaps the white part of the binary mask map to form the tree image; a sketch of steps S331-S337 follows.
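Steps S331-S337 can be sketched as follows, assuming `mask`, `x_min`, `y_max` and `s` come from the mask sketch above and `dom` is the UTM orthophoto as an H×W×3 array whose extent and resolution were read from the orthophoto file; the tree is assumed to lie inside the orthophoto extent, and all names are illustrative.

```python
# A sketch of S331-S337: copy the orthophoto pixels covered by white mask
# pixels into a new image. (dom_x_min, dom_y_max) is the orthophoto's
# upper-left UTM corner and s_dom its resolution.
import numpy as np

def extract_tree_image(mask, x_min, y_max, s, dom, dom_x_min, dom_y_max, s_dom):
    out = np.zeros_like(dom)                       # S336: new output image
    for v, u in zip(*np.nonzero(mask)):            # S331: white pixels only
        # S332: UTM square N covered by this mask pixel
        X0, X1 = x_min + u * s, x_min + (u + 1) * s
        Y1, Y0 = y_max - v * s, y_max - (v + 1) * s
        # S333: corners of N in orthophoto pixel coordinates
        u_tl = int((X0 - dom_x_min) / s_dom)
        v_tl = int((dom_y_max - Y1) / s_dom)
        u_br = int((X1 - dom_x_min) / s_dom)
        v_br = int((dom_y_max - Y0) / s_dom)
        # S334: put pixel region K into the output set
        out[v_tl:v_br + 1, u_tl:u_br + 1] = dom[v_tl:v_br + 1, u_tl:u_br + 1]
    return out                                     # S337: the tree image
```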
The deep neural network used in this embodiment is the PP-LCNet network, a lightweight convolutional neural network mainly used for tasks such as image classification, object detection and semantic segmentation. The PP-LCNet structure is shown in fig. 2: the network consists of 4 convolutional layers (in order, a standard convolutional layer, two 3×3 depthwise separable convolutions and a 5×5 depthwise separable convolution), 1 global average pooling layer and 2 fully connected layers (one with 1280 nodes and one with 1000 nodes). The input to the PP-LCNet network is a 224×224 picture, and the output is an n-dimensional feature vector in which each dimension represents the probability of one class. Compared with other classifiers (such as ShuffleNet and other SOTA models), PP-LCNet uses a better activation function and larger convolution kernels in appropriate places, giving it faster recognition and higher accuracy. The recognition efficiency and accuracy of the PP-LCNet network are compared in table 1: PP-LCNet achieves higher recognition accuracy than MobileNet and ShuffleNet, and although its parameter count and computation are not the lowest, it offers the best cost-performance ratio. In the table, the computation FLOPs refers to the number of floating-point operations, which can be used to measure the complexity of an algorithm/model; the top-5 accuracy is the probability that the five classes ranked highest by the model contain the correct class.
TABLE 1 comparison of the Performance of the models
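Since the patent provides no source code for the network, the following is a minimal PyTorch sketch of the architecture as described (a standard convolution, two 3×3 depthwise separable convolutions, a 5×5 depthwise separable convolution, global average pooling and two fully connected layers); the channel widths are illustrative assumptions, and the reference PP-LCNet implementation lives in PaddleClas.

```python
# A sketch of the network as described in this embodiment; channel widths
# are illustrative, not taken from the patent or from PaddleClas.
import torch
import torch.nn as nn

def dw_sep(cin, cout, k, stride=1):
    """Depthwise separable convolution: depthwise k x k + pointwise 1 x 1."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, k, stride, k // 2, groups=cin, bias=False),
        nn.BatchNorm2d(cin), nn.Hardswish(),
        nn.Conv2d(cin, cout, 1, bias=False),
        nn.BatchNorm2d(cout), nn.Hardswish(),
    )

class LCNetSketch(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1, bias=False),    # standard convolution
            nn.BatchNorm2d(16), nn.Hardswish(),
            dw_sep(16, 32, 3, 2),                     # 3x3 depthwise separable
            dw_sep(32, 64, 3, 2),                     # 3x3 depthwise separable
            dw_sep(64, 128, 5, 2),                    # 5x5 depthwise separable
        )
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.fc1 = nn.Linear(128, 1280)               # 1280-node FC layer
        self.fc2 = nn.Linear(1280, n_classes)         # class-probability head

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc2(torch.relu(self.fc1(x)))

logits = LCNetSketch(n_classes=5)(torch.randn(1, 3, 224, 224))
```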
The PP-LCNet network application process is divided into 3 stages (labeling, training and application), as shown in fig. 3; the work required in each stage is described as follows:
In the labeling stage, single-tree image pictures separated from the orthophoto are first selected as tree species sample images; at least 100 images are required for each tree species. The sample pictures are then brightness-adjusted and scaled to expand them into a multi-scale set of tree species sample images. Finally, the tree species sample images are labeled with inflammable tree species features, generating tree species sample feature vectors.
In the training stage, a PP-LCNet network is built and the tree species sample images labeled in the labeling stage are fed into it; classification feature vectors are obtained through the convolutional layers, the global average pooling layer and the fully connected layers in turn, and iterative training computes the error between the classification feature vectors and the sample feature vectors until the accuracy exceeds 90%, at which point training stops; the trained weights are output to a weight file.
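A hedged sketch of the training stage, assuming `train_loader` yields (image, label) batches of the labeled species samples and `LCNetSketch` is the model sketched above; the hyperparameters are illustrative.

```python
# A sketch of the training stage under the stated assumptions.
import torch

model = LCNetSketch(n_classes=5)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(50):                       # iterate until accuracy > 90%
    correct = total = 0
    for images, labels in train_loader:
        opt.zero_grad()
        logits = model(images)
        loss = loss_fn(logits, labels)        # error between predicted and
        loss.backward()                       # labelled feature vectors
        opt.step()
        correct += (logits.argmax(1) == labels).sum().item()
        total += labels.numel()
    if correct / total > 0.90:                # stop criterion from the text
        break

torch.save(model.state_dict(), "pplcnet_weights.pt")   # trained weight file
```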
In the application stage, a PP-LCNet network is built and loaded with the weight file obtained in the training stage; non-sample single-tree image pictures are fed into the trained PP-LCNet network for prediction, yielding a classification feature vector and a tree species label, and thus the probability value of the corresponding tree species type.
The preferred embodiments of the invention disclosed above are intended only to help explain the invention. They do not describe every detail exhaustively, nor do they limit the invention to the specific embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. These embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand and use the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (5)

1. The method for identifying the inflammable tree species of the line channel based on the fusion of the laser point cloud and the image is characterized by comprising the following steps of:
S1, carrying out visual classification labeling on the point clouds of vegetation, ground, buildings and roads;
S2, extracting the point cloud data of individual trees, and classifying and displaying different types of trees in different colors;
S3, creating a binary mask map of each tree from its single-tree point cloud data, registering the binary mask map with the orthophoto captured by the unmanned aerial vehicle, and segmenting the tree's image region after registration to obtain a tree image;
S4, labeling the tree images by species, then feeding them into a deep neural network for recognition, achieving intelligent extraction of tree species information, and displaying the extraction results in the point cloud data;
the process for extracting the point cloud data of the single tree comprises the following steps:
S21, subtracting the digital elevation model from the digital surface model to generate a canopy height model;
S22, finding a local maximum value in the canopy height model and designating it as a treetop;
S221, starting from the upper left corner of the canopy height model, traversing each pixel from left to right and top to bottom, and reading the height h represented by the pixel;
S222, calculating the window radius of the local maximum filter from the height h represented by the pixel;
S223, if the height value at the center point of the local maximum filter window is the highest within the window, the pixel represents a treetop, otherwise it is not a treetop;
S224, moving to the next pixel and repeating steps S222-S223 until every pixel in the canopy height model has been processed, completing the extraction of the treetop points of the region;
S23, searching the crown area around each treetop by using the treetop position combined with a gradient threshold of the tree crown, the specific method comprising:
S231, taking the treetop points as seed points and putting them into a crown set M;
S232, traversing each seed point in the crown set M to obtain the seed point height $h_s$; searching the neighborhood pixels of the seed point to obtain each neighborhood pixel's height $h_n$; comparing $h_s$ with $h_n$, and if the heights satisfy:

$$h_n \le h_s \quad \text{and} \quad h_n \ge t \cdot \bar{h}$$

the neighborhood pixel is placed into the crown set M, where $t$ is the crown gradient threshold and $\bar{h}$ represents the average height of the crown range;
S233, repeating step S232 until no more points are added to the crown set M.
2. The method for identifying inflammable tree species of line channel based on laser point cloud and image fusion according to claim 1, wherein the step S3 comprises the following sub-steps:
S31, projecting the point cloud data of a single tree onto the XOY plane in the UTM coordinate system and creating a binary mask map of the tree;
S32, converting the orthophoto from the WGS84 coordinate system to the UTM coordinate system using the GlobalMapper tool or the GDAL library;
S33, projecting the binary mask map of the tree onto the orthophoto, and obtaining the image of the single tree through the AND operation of the binary mask map and the orthophoto.
3. The method for identifying inflammable tree species in line channel based on laser point cloud and image fusion according to claim 2, wherein the step of creating a binary mask map of tree in step S31 is as follows:
S311, traversing the XOY-plane coordinate values of the single-tree point cloud and finding the maximum and minimum values in the X-axis and Y-axis directions respectively;
S312, calculating the pixel width W and the pixel height H of the binary mask map according to the resolution S;
S313, creating an all-black image P with pixel width W and pixel height H;
S314, traversing the XOY-plane coordinate values of the single-tree point cloud again, and calculating the pixel coordinates in the binary mask map of each point d of the single-tree point cloud:

$$u_d = \mathrm{Int}\!\left(\frac{x_d - x_{\min}}{S}\right),\qquad v_d = \mathrm{Int}\!\left(\frac{y_{\max} - y_d}{S}\right)$$

where $x_d$ and $y_d$ are the horizontal and vertical coordinates of point d of the single-tree point cloud in the XOY plane; $x_{\min}$ is the minimum value in the X-axis direction; $y_{\max}$ is the maximum value in the Y-axis direction; $u_d$ and $v_d$ are the pixel abscissa and ordinate of point d in the binary mask map;
S315, setting the corresponding pixel in image P to white according to the pixel coordinates of each point of the single-tree point cloud in the binary mask map;
S316, repeating steps S314-S315 until all points of the single tree have been traversed; the final image P is the binary mask map of the tree.
4. The method for identifying inflammable tree species in a line channel based on laser point cloud and image fusion according to claim 3, wherein the calculation process of projecting the binary mask map of the tree onto the orthographic image map in step S33 is as follows:
S331, traversing each pixel in the binary mask map, if the pixel is black, skipping, and continuing traversing the next pixel point;
S332, if the pixel is white, taking its pixel coordinates $(u, v)$ in the binary mask map and calculating the corresponding region range $N = [X_{\min}, X_{\max}] \times [Y_{\min}, Y_{\max}]$ in the UTM coordinate system; each mask pixel converts to a square of side S in UTM coordinates, whose coordinate extremes are given by:

$$X_{\min} = x_{\min} + u\,S,\quad X_{\max} = x_{\min} + (u+1)\,S,\quad Y_{\max} = y_{\max} - v\,S,\quad Y_{\min} = y_{\max} - (v+1)\,S$$

where $X_{\min}$ is the minimum abscissa, $X_{\max}$ the maximum abscissa, $Y_{\min}$ the minimum ordinate and $Y_{\max}$ the maximum ordinate in the UTM coordinate system corresponding to the mask pixel coordinates;
S333, calculating the pixel region K of the region range N in the orthophoto: the upper-left corner $(X_{\min}, Y_{\max})$ and the lower-right corner $(X_{\max}, Y_{\min})$ of region N are converted into orthophoto pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ respectively:

$$u_{tl} = \mathrm{Int}\!\left(\frac{X_{\min} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{tl} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\max}}{S_{dom}}\right)$$
$$u_{br} = \mathrm{Int}\!\left(\frac{X_{\max} - X^{dom}_{\min}}{S_{dom}}\right),\qquad v_{br} = \mathrm{Int}\!\left(\frac{Y^{dom}_{\max} - Y_{\min}}{S_{dom}}\right)$$

where $(u_{tl}, v_{tl})$ are the pixel coordinates of the upper-left corner of region N under the orthophoto and $(u_{br}, v_{br})$ those of the lower-right corner; $X^{dom}_{\min}$ and $X^{dom}_{\max}$ are the minimum and maximum X-axis coordinates of the orthophoto in the UTM coordinate system; $Y^{dom}_{\min}$ and $Y^{dom}_{\max}$ are the minimum and maximum Y-axis coordinates; and $S_{dom}$ is the resolution of the orthophoto;
S334, from the pixel coordinates $(u_{tl}, v_{tl})$ and $(u_{br}, v_{br})$ calculated in step S333, the pixel region range $K = [u_{tl}, u_{br}] \times [v_{tl}, v_{br}]$ of region N under the orthophoto is obtained; the pixels within the pixel region range K are put into the output set;
S335, repeating the steps S331-S334 until all pixels in the binary mask map are traversed;
S336, sequentially outputting pixels in the output set to a new image;
S337, extracting the area which is overlapped with the white part of the binary mask map in the orthographic image based on a unified UTM coordinate system to form a tree image.
5. The line channel inflammable tree species identification method based on laser point cloud and image fusion according to claim 1, wherein the deep neural network is a PP-LCNet network; it consists of 4 convolutional layers, 1 global average pooling layer and 2 fully connected layers, the 4 convolutional layers being, in order, a standard convolutional layer, two 3×3 depthwise separable convolutions and a 5×5 depthwise separable convolution.
CN202410441204.2A 2024-04-12 2024-04-12 Line channel inflammable tree species identification method based on laser point cloud and image fusion Pending CN118072177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410441204.2A CN118072177A (en) 2024-04-12 2024-04-12 Line channel inflammable tree species identification method based on laser point cloud and image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410441204.2A CN118072177A (en) 2024-04-12 2024-04-12 Line channel inflammable tree species identification method based on laser point cloud and image fusion

Publications (1)

Publication Number Publication Date
CN118072177A true CN118072177A (en) 2024-05-24

Family

ID=91104250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410441204.2A Pending CN118072177A (en) 2024-04-12 2024-04-12 Line channel inflammable tree species identification method based on laser point cloud and image fusion

Country Status (1)

Country Link
CN (1) CN118072177A (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101606516B1 (en) * 2015-03-31 2016-03-28 국민대학교산학협력단 system and method for analyzing woody growth using UAV image
KR20200061154A (en) * 2018-11-23 2020-06-02 네이버웹툰 주식회사 Method and apparatus of analyzing diagram containing visual and textual information
US20220292786A1 (en) * 2019-09-09 2022-09-15 apoQlar GmbH Method for controlling a display, computer program and mixed reality display device
CN110728197A (en) * 2019-09-19 2020-01-24 中山大学 Single-tree-level tree species identification method based on deep learning
CN111369540A (en) * 2020-03-06 2020-07-03 西安电子科技大学 Plant leaf disease identification method based on mask convolutional neural network
CN113570621A (en) * 2021-07-19 2021-10-29 广东科诺勘测工程有限公司 Tree information extraction method and device based on high-precision point cloud and image
US20230039554A1 (en) * 2021-08-09 2023-02-09 Institute of Forest Resource Information Techniques CAF Tree crown extraction method based on unmanned aerial vehicle multi-source remote sensing
CN113935366A (en) * 2021-09-30 2022-01-14 海南电网有限责任公司海南输变电检修分公司 Automatic classification method for point cloud single wood segmentation
CN114972743A (en) * 2022-03-18 2022-08-30 西安理工大学 Radius expansion-based hierarchical single tree extraction method
CN115082924A (en) * 2022-04-26 2022-09-20 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN115937226A (en) * 2022-12-15 2023-04-07 华南农业大学 Fruit tree single tree segmentation method based on unmanned aerial vehicle Lidar point cloud data
CN116704333A (en) * 2023-05-19 2023-09-05 中国电建集团江西省电力设计院有限公司 Single tree detection method based on laser point cloud data
CN117011614A (en) * 2023-08-18 2023-11-07 延边大学 Wild ginseng reed body detection and quality grade classification method and system based on deep learning
CN117541786A (en) * 2023-10-09 2024-02-09 中国长江电力股份有限公司 Single plant vegetation fine segmentation method integrating multi-source point cloud data
CN117132915A (en) * 2023-10-27 2023-11-28 国网江西省电力有限公司电力科学研究院 Power transmission line tree obstacle hidden danger analysis method based on automatic classification of point cloud

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FUCHUN LIU等: "LiDAR Point Cloud Semantic Segmentation Method Based on Multi-scale Contextual Feature", 2023 35TH CHINESE CONTROL AND DECISION CONFERENCE (CCDC), 1 December 2023 (2023-12-01) *
JING HU等: "M-GCN: Multi-scale Graph Convolutional Network for 3D Point Cloud Classification", 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 25 August 2023 (2023-08-25) *
陶江; 刘丽娟; 庞勇; 李登秋; 冯云云; 王雪; 丁友丽; 彭琼; 肖文惠: "Tree species identification method based on airborne lidar and hyperspectral data" (基于机载激光雷达和高光谱数据的树种识别方法), Journal of Zhejiang A&F University (浙江农林大学学报), no. 02, 8 April 2018 (2018-04-08) *


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination