CN111832473A - Point cloud feature identification processing method and device, storage medium and electronic equipment - Google Patents


Publication number
CN111832473A
CN111832473A (application CN202010663889.7A)
Authority
CN
China
Prior art keywords
target
features
point
points
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010663889.7A
Other languages
Chinese (zh)
Inventor
黄不了
卢奕
陈欢欢
江贻芳
朱云慧
黄恩兴
王力
李建平
高健
舒百寿
于娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Stargis Tianjin Technology Development Co ltd
University of Science and Technology of China USTC
Original Assignee
Stargis Tianjin Technology Development Co ltd
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Stargis Tianjin Technology Development Co ltd, University of Science and Technology of China USTC filed Critical Stargis Tianjin Technology Development Co ltd
Priority to CN202010663889.7A priority Critical patent/CN111832473A/en
Publication of CN111832473A publication Critical patent/CN111832473A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a point cloud feature identification processing method and device, a storage medium, and electronic equipment. The method comprises the following steps: extracting a target point set from laser point cloud data; abstracting and iterating the point features in the target point set into a target local domain whose overall features meet a preset global feature requirement; taking the overall features of the target local domain, together with the local features of the initial local domains produced during the abstraction iterations, as global propagation features, and distributing these to the points of the target point set corresponding to the target local domain and the initial local domains; and classifying and identifying the points in the target point set according to the global propagation features of each point. Based on a deep neural network, the method summarizes both the local features of each point in the target point set and the global features of the local domain in which the point lies, thereby providing support for accurate extraction of laser point cloud surface points.

Description

Point cloud feature identification processing method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a point cloud feature identification processing method and device, a storage medium and electronic equipment.
Background
The extraction of laser point cloud surface points is the accurate extraction of data points representing ground-surface positions from airborne laser point cloud data, which is large in scale, noisy, rich in information, and covers complex scenes. Airborne laser point cloud data generally consists of a large number of three-dimensional data points describing the topographic features of an area; surface point extraction selects from these the points that describe the earth's surface and filters out the rest.
The extraction of laser point cloud surface points is important for city management and planning, mountain terrain estimation, vegetation coverage assessment, and other work. In the prior art, however, the point cloud is identified using only the basic features of the laser point cloud, and the accuracy of the identification results is poor.
Disclosure of Invention
The invention provides a point cloud feature identification processing method and device, a storage medium, and electronic equipment, solving the prior-art problem that identification and classification of point cloud data cannot be realized accurately using only the basic features of laser point cloud data.
In one aspect of the present invention, a point cloud feature identification processing method is provided, the method includes:
extracting a target point set in the laser point cloud data, wherein the target point set is a point set consisting of lower domain points of the laser point cloud;
abstract and iterate point characteristics in a target point set into a target local area, wherein the overall characteristics contained in the target local area meet the preset global characteristic requirement;
taking the total features contained in the target local domain and the local features contained in the initial local domain obtained in the process of abstract iteration of the target local domain as global propagation features, and distributing the global propagation features to the points in the target point set corresponding to the target local domain and the initial local domain;
and classifying and identifying the points in the target point set according to the global propagation characteristics of each point in the target point set.
Optionally, before the classifying and identifying the points in the target point set according to the global propagation characteristics of each point in the target point set, the method further includes:
taking the point characteristics of upper-layer points of all lower domain points in the target point set as the hierarchical characteristics of the corresponding lower domain points;
and splicing the hierarchical features to the global propagation features corresponding to the lower domain points to obtain the final features of the lower domain points.
Optionally, the abstractly iterating the point features in the target point set into the target local area includes:
abstract and iterate the point characteristics in the target point set into a plurality of initial local areas by adopting a neural network, wherein each local area comprises the local characteristics of the represented point area;
judging whether the local features contained in each initial local area meet the preset global feature requirement or not;
and if the local features contained in any initial local area do not meet the preset global feature requirement, adopting a neural network to continuously perform abstract iteration on the local features contained in each initial local area until the target local area is obtained.
Optionally, the classifying and identifying, according to the global propagation feature of each point in the target point set, each point in the target point set includes:
and classifying the points in the target point set by adopting a preset classifier model according to the global propagation characteristics of each point in the target point set, and generating corresponding labels for the points of different classes.
Optionally, the point features include x, y and z coordinate features, r, g and b color features, elevation features, reflection intensity, centered x, y and z position features, linear features, planar features, divergent features, verticality features and fitted surface parameter features.
Optionally, the classifier model is implemented by using a support vector machine algorithm, a random forest algorithm or an Adaboost iterative algorithm.
In another aspect of the present invention, there is provided a point cloud feature recognition processing apparatus, including:
the point set extraction unit is used for extracting a target point set in the laser point cloud data, wherein the target point set is a point set consisting of lower domain points of the laser point cloud;
the characteristic abstraction unit is used for abstracting and iterating the point characteristics in the target point set into a target local area, and the overall characteristics contained in the target local area meet the preset global characteristic requirement;
the characteristic summarizing unit is used for taking the total characteristic contained in the target local area and the local characteristic contained in the initial local area obtained in the process of carrying out abstract iteration on the target local area as global propagation characteristics and distributing the global propagation characteristics to the points in the target point set corresponding to the target local area and the initial local area;
and the identification unit is used for classifying and identifying each point in the target point set according to the global propagation characteristics of each point in the target point set.
Optionally, the apparatus further comprises:
and the hierarchical feature splicing unit is used for taking the point features of upper-layer points of all lower domain points in the target point set as the hierarchical features of the corresponding lower domain points before the identification unit classifies and identifies all the points in the target point set according to the global propagation features of all the points in the target point set, and splicing the hierarchical features after the global propagation features of the corresponding lower domain points to obtain the final features of the lower domain points.
Optionally, the feature abstraction unit includes:
the characteristic processing module is used for abstracting and iterating the point features in the target point set into a plurality of initial local areas by adopting a neural network, and each local area comprises the local features of the represented point area;
the judging module is used for judging whether the local features contained in each initial local area meet the preset global feature requirement or not;
and the characteristic processing module is further used for continuing abstract iteration on the local characteristics contained in each initial local area by adopting a neural network until a target local area is obtained when the judgment result of the judgment module is that the local characteristics contained in any initial local area do not meet the preset global characteristic requirement.
Optionally, the identification unit is specifically configured to classify, according to the global propagation characteristics of each point in the target point set, each point in the target point set by using a preset classifier model, and generate corresponding labels for the points of different categories.
The invention also provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
Furthermore, the present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method as described above when executing the program.
The point cloud feature recognition processing method and device, storage medium, and electronic equipment provided by the embodiments of the invention summarize, based on the deep neural network, the local features of each point in a target point set and the global features of the local area in which the point lies, obtaining comprehensive features that identify each point's data. Point cloud identification and classification can then be realized on the basis of these comprehensive features, providing support for accurate extraction of laser point cloud surface points.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart of a point cloud feature identification processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a point cloud feature identification processing apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 schematically shows a flow chart of a point cloud feature identification processing method according to an embodiment of the present invention. Referring to fig. 1, the point cloud feature identification processing method provided by the embodiment of the present invention specifically includes steps S11 to S14, as follows:
and S11, extracting a target point set in the laser point cloud data, wherein the target point set is a point set consisting of lower domain points of the laser point cloud.
And S12, abstract and iterate the point characteristics in the target point set into a target local area, wherein the overall characteristics contained in the target local area meet the preset global characteristic requirement.
In this embodiment, after obtaining the target point set, the method further includes the following steps: extracting the point characteristics of each point in the target point set to obtain a characteristic array corresponding to the target point set, then carrying out standardization processing on the obtained characteristic array, and inputting the data after the standardization processing into a subsequent deep neural network for processing.
In this embodiment, abstract iteration of point features in a target point set is performed to obtain a target local area, and the specific implementation flow is as follows: abstract and iterate the point characteristics in the target point set into a plurality of initial local areas by adopting a neural network, wherein each local area comprises the local characteristics of the represented point area; judging whether the local features contained in each initial local area meet the preset global feature requirement or not; and if the local features contained in any initial local area do not meet the preset global feature requirement, adopting a neural network to continuously perform abstract iteration on the local features contained in each initial local area until the target local area is obtained.
And S13, taking the overall characteristics contained in the target local area and the local characteristics contained in the initial local area obtained in the process of abstract iteration of the target local area as global propagation characteristics, and distributing the global propagation characteristics to the points in the target point set corresponding to the target local area and the initial local area.
And S14, classifying and identifying each point in the target point set according to the global propagation characteristics of each point in the target point set.
The point cloud feature identification processing method provided by the embodiment of the invention can be used for summarizing the local features of each point in a target point set and the global features of the local area where the point is located on the basis of the deep neural network to obtain comprehensive features capable of identifying each point data, so that the point cloud identification classification can be realized on the basis of the comprehensive features, and support is provided for accurate extraction of the laser point cloud earth surface points.
In embodiments of the present invention, the point features include x, y, and z coordinate features, r, g, and b color features, elevation features, reflection intensity, centered x, y, and z position features, linear features, planar features, divergent features, verticality features, and fitted surface parameter features.
In practical application, besides the position information (x, y and z coordinate characteristics) of each point, the airborne laser point cloud data can also store other additional information, mainly including elevation characteristics elevation, color characteristics r, g, b, reflection intensity and geometric characteristics,
wherein the geometric characteristics of the local area around the description point are respectively linear L, planarity P and hair
And the dispersity S and the verticality V are achieved. The calculation formula is as follows:
Figure BDA0002579639150000061
in the formula, λ123-3 eigenvalues of the covariance matrix of the three-dimensional spatial coordinates of all points of the local neighborhood of the point, according to λ1>λ2>λ3Arranging;
Figure BDA0002579639150000062
-and λ123Corresponding 3 feature vectors.
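As a minimal sketch, the four geometric features can be computed per point from its neighborhood with NumPy; the eigenvalue-based formulas used here are the standard definitions, which the patent's formula images are assumed to match:

```python
import numpy as np

def geometric_features(neighborhood):
    """Linearity L, planarity P, scattering S and verticality V of a point,
    computed from a (k, 3) array of its local-neighborhood coordinates."""
    cov = np.cov(neighborhood.T)              # 3x3 covariance of the neighborhood
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    l3, l2, l1 = eigval                       # reorder so that l1 >= l2 >= l3
    v3 = eigvec[:, 0]                         # eigenvector of the smallest eigenvalue
    L = (l1 - l2) / l1                        # linearity
    P = (l2 - l3) / l1                        # planarity
    S = l3 / l1                               # scattering (divergence)
    V = 1.0 - abs(v3[2])                      # deviation of the local normal from vertical
    return L, P, S, V
```

For a perfectly linear neighborhood L approaches 1 while P and S vanish; for a horizontal planar neighborhood P dominates and V is near 0.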
And calculating the centralized x, y and z position characteristics and the fitted surface parameter characteristics based on the position distribution of each point in the target point set.
Specifically, the x, y and z position relations of each point in the target point set relative to the central point of the laser point cloud are calculated, and the calculation result is used as the centralized x, y and z position characteristics.
Since the position information (x, y, z) of each point is affected by its actual spatial position, two surface points of the same category can still have very different (x, y, z) values, so these coordinates cannot be used directly as features. The centralized position features are computed by subtracting the coordinates of the center point:

x_c = x − x̄,  y_c = y − ȳ,  z_c = z − z̄

where (x̄, ȳ, z̄) is the center point of the laser point cloud.
specifically, a designated local area where each point in the target point set is located is fitted into a curved surface by using a least square method, a corresponding curved surface formula is generated, and each parameter in the curved surface formula is used as a fitted curved surface parameter characteristic of the point.
In this embodiment, the local area where each point is located is fitted into a curved surface by using a least square method, and a curved surface formula of the curved surface is obtained, and each parameter in the curved surface formula is used as a fitted curved surface feature of the point.
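A sketch of the least-squares fit, assuming a quadratic surface model z = ax² + bxy + cy² + dx + ey + f (the patent does not state the surface order, so the model and function name here are illustrative):

```python
import numpy as np

def fit_surface_params(neighborhood):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a (k, 3) local
    neighborhood by least squares and return the six parameters, which
    would serve as the fitted-surface parameter features of the point."""
    x, y, z = neighborhood[:, 0], neighborhood[:, 1], neighborhood[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, z, rcond=None)
    return params
```

At least six non-degenerate neighborhood points are needed for the six parameters to be determined.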
In this embodiment, through the above steps, a multidimensional additional feature is given to each point in the target point set, and the classification of the points can be better realized based on the multidimensional additional feature.
In the embodiment of the present invention, before step S14, the method further includes the following steps not shown in the drawings:
taking the point characteristics of upper-layer points of all lower domain points in the target point set as the hierarchical characteristics of the corresponding lower domain points; and splicing the hierarchical features to the global propagation features corresponding to the lower domain points to obtain the final features of the lower domain points.
The upper-layer points of a lower domain point are the ten points that are closest to that lower domain point and are not in the target point set; the specific number of upper-layer points is not limited in this embodiment.
In this embodiment of the present invention, the step S14 specifically includes: and classifying the points in the target point set by adopting a preset classifier model according to the global propagation characteristics of each point in the target point set, and generating corresponding labels for the points of different classes.
The method for identifying and processing point cloud features of the present invention is explained in detail by an embodiment.
In this embodiment, after the point features in the target point set are extracted, these features cannot be used directly for extracting the surface points, so a deep neural network is designed. The deep neural network takes the normalized feature data as input and assigns a category label to each point in the target point set, thereby extracting the surface points from the target point set.
In this embodiment, the implementation of the invention is described in detail taking the airborne laser point cloud data of one area as an example. The selected airborne laser point cloud covers a 2000 m × 1600 m area.
First, because each point in the target point set has been assigned additional features, the features of the target point set obtained previously are organized into an array of 24864 × N, where N is the number of extracted feature dimensions; eighteen-dimensional features are taken as an example in this embodiment. This 24864 × 18 array is then normalized and input into the deep neural network for processing. The following normalization methods are available:
z-score: each datum x has its corresponding mean μ subtracted and is divided by the standard deviation σ:

x′ = (x − μ) / σ
Sigmoid function: maps the data into the interval [0, 1]:

x′ = 1 / (1 + e^(−x))
Min-Max normalization: subtract the minimum from each datum and divide by the difference between the maximum and the minimum:

x′ = (x − min) / (max − min)
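The three normalization options can be sketched column-wise over the 24864 × 18 feature array (a small random array stands in for it here):

```python
import numpy as np

def z_score(a):
    """Per-column z-score: subtract the mean, divide by the standard deviation."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

def sigmoid(a):
    """Map every value into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def min_max(a):
    """Per-column min-max scaling into [0, 1]."""
    mn, mx = a.min(axis=0), a.max(axis=0)
    return (a - mn) / (mx - mn)

feats = np.random.default_rng(0).normal(size=(100, 18))  # stand-in for the 24864 x 18 array
normed = z_score(feats)
```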
In this embodiment, the deep neural network mainly processes the input eighteen-dimensional features and outputs a category label for each point. It comprises three main networks: a local abstraction network, an information summary network, and a hierarchical feature propagation network. Specifically:
A local abstraction network. Everything in a point cloud appears in the form of clusters — a building is represented as a cluster of building points, the ground surface as a cluster of horizontal surface points — and no object appears as a single point. Accordingly, the original 24864 points are first abstracted by a neural network into a small number of local modules, each containing the local features of the local area it represents. These local modules are then abstracted further to obtain more global features, and the abstraction process is iterated until features that are sufficiently global are obtained, i.e., until the overall features contained in the local domains meet the preset global feature requirement. In this embodiment, the local abstraction module consists of a set of convolutional layers with parameters (1024, 256, 64, 16, 4): the point cloud is abstracted into 1024 local domains, those 1024 domains into 256 larger local domains, and so on, until finally 4 largest domains containing the global features are generated.
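The sketch below mimics one abstraction step with plain NumPy — farthest point sampling plus mean pooling stands in for the learned convolutional layers, whose internals the patent does not detail:

```python
import numpy as np

def abstract_level(points, feats, n_out):
    """One abstraction step: select n_out representative points by farthest
    point sampling and mean-pool the features of the points assigned to each.
    A simplified, non-learned stand-in for the patent's abstraction layers."""
    # farthest point sampling of n_out representatives
    idx = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_out - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(points - points[idx[-1]], axis=1))
    centers = points[idx]
    # assign every point to its nearest representative and pool the features
    assign = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    pooled = np.stack([feats[assign == j].mean(axis=0) for j in range(n_out)])
    return centers, pooled
```

Iterating this with n_out = 1024, 256, 64, 16, 4 follows the schedule described above, each level's pooled features becoming the next level's input.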
An information summary network. After the global features are obtained, they are distributed to each lower domain point so that the lower domain points can be classified and the surface points extracted. This step can be regarded approximately as the inverse of the previous one — obtaining the local features of each point from the global features — with the important difference that its input is not only the overall global features but also the less-global local features produced by each of the preceding local abstraction steps. The information summary module likewise consists of a set of convolutional layers, with parameters (16, 64, 256, 1024, 24864); the input of each layer comprises both the output of the previous layer and the output of the corresponding layer of the local abstraction module.
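As a rough, non-learned analogue of this summary step, coarse features can be interpolated back to the finer level by inverse-distance weighting over the nearest coarse points and concatenated with that level's own skip features; the convolutional layers themselves are not reproduced here:

```python
import numpy as np

def propagate(coarse_pts, coarse_feats, fine_pts, skip_feats, k=3):
    """Distribute coarse (more global) features back to the finer level by
    inverse-distance interpolation over the k nearest coarse points, then
    concatenate the finer level's own (skip) features."""
    d = np.linalg.norm(fine_pts[:, None, :] - coarse_pts[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]                       # k nearest coarse points
    w = 1.0 / (np.take_along_axis(d, nn, axis=1) + 1e-8)    # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    interp = (coarse_feats[nn] * w[:, :, None]).sum(axis=1)
    return np.concatenate([interp, skip_feats], axis=1)
```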
The hierarchical feature propagation network. The point cloud data contains multiple echoes, so some points in the point cloud lie below other points, and these hierarchical relations can help classify the points in the target point set better. This embodiment therefore uses the features of the upper-layer points of each lower domain point as that point's hierarchical features and splices them after its global propagation features to form its final features. In this embodiment, for each lower domain point, the ten points that are closest to it and are not in the target point set are selected as its upper-layer points, and the eighteen-dimensional features of these upper-layer points are also computed and taken as additional features of the corresponding lower domain point.
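A brute-force NumPy sketch of selecting each lower domain point's ten nearest upper-layer candidates; at the 24864-point scale a KD-tree would be the usual choice, and the function name here is illustrative:

```python
import numpy as np

def upper_layer_neighbors(lower_pts, candidate_pts, k=10):
    """For each lower domain point, return indices of the k nearest points
    drawn from candidate_pts (the points NOT in the target point set)."""
    d = np.linalg.norm(lower_pts[:, None, :] - candidate_pts[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]
```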
Finally, the obtained final features of the lower domain points are classified by a preset classifier: each lower domain point is assigned a label, and the points labeled as ground surface are extracted as the final surface points. The optional classifiers are as follows:
and a support vector machine. A Support Vector Machine (SVM) is a generalized linear classifier (generalized linear classifier) that binary classifies data according to a supervised learning (supervised learning) mode, and a decision boundary of the SVM is a maximum margin hyperplane for solving a learning sample.
Random forest. A random forest is a classifier containing multiple decision trees, whose output class is the mode of the classes output by the individual trees.
Adaboost. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
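All three candidate classifiers are available in scikit-learn; the snippet below trains each on synthetic stand-in data (the real input would be the spliced final features and surface/non-surface labels):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 18))           # stand-in for the 18-dimensional point features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in surface / non-surface labels

models = {
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
}
accs = {name: clf.fit(X, y).score(X, y) for name, clf in models.items()}
```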
The point cloud feature identification processing method provided by the embodiment of the invention summarizes, based on the deep neural network, the local features of each point in the target point set and the global features of the local area in which the point lies, takes into account the influence of the features of each point's upper-layer points on that point's own features to fully realize classification and identification of each point in the target point set, and thereby provides support for accurate extraction of laser point cloud surface points.
For simplicity of explanation, the method embodiments are described as a series of action combinations, but those skilled in the art will appreciate that the invention is not limited by the order of the actions described, since according to the invention some steps may be performed in other orders or concurrently. Furthermore, those skilled in the art will also appreciate that the embodiments described in this specification are preferred embodiments and that the actions involved are not necessarily required by the invention.
Fig. 2 schematically shows a structural diagram of a point cloud feature identification processing apparatus according to an embodiment of the present invention. Referring to fig. 2, the point cloud feature identification processing apparatus according to the embodiment of the present invention specifically includes a point set extraction unit 201, a feature abstraction unit 202, a feature summarization unit 203, and an identification unit 204, where:
a point set extraction unit 201, configured to extract a target point set from the laser point cloud data, where the target point set is a point set composed of lower domain points of the laser point cloud;
the feature abstraction unit 202 is configured to abstract and iterate point features in a target point set into a target local area, where total features included in the target local area meet a preset global feature requirement;
a feature summarizing unit 203, configured to take the total features contained in the target local area and the local features contained in the initial local areas obtained in the process of performing abstraction iteration toward the target local area as global propagation features, and to distribute the global propagation features to the points in the target point set corresponding to the target local area and the initial local areas;
and the identifying unit 204 is configured to classify and identify each point in the target point set according to the global propagation characteristics of each point in the target point set.
Wherein the point features include x, y and z coordinate features, r, g and b color features, elevation features, reflection intensity, centered x, y and z position features, linear features, planar features, divergent features, verticality features, and fitted surface parameter features.
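The linear, planar, divergent, and verticality features in this list are commonly computed from the eigen-decomposition of the local neighborhood covariance. The patent does not give its exact formulas, so the sketch below uses one widely used formulation (eigenvalues λ1 ≥ λ2 ≥ λ3 of the neighborhood covariance) as an assumed stand-in:

```python
import numpy as np

def local_geometric_features(neighborhood):
    """Eigenvalue-based geometric features of one point's local neighborhood.

    neighborhood: (k, 3) array of the k neighbor coordinates.
    Returns linearity, planarity, scattering (divergence), and verticality,
    following a common formulation; the patent does not specify its formulas.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalue order
    l3, l2, l1 = np.maximum(evals, 1e-12)     # so l1 >= l2 >= l3, clamped > 0
    normal = evecs[:, 0]                      # eigenvector of smallest eigenvalue
    return {
        "linearity":   (l1 - l2) / l1,        # ~1 for points along a line
        "planarity":   (l2 - l3) / l1,        # ~1 for points on a plane
        "scattering":  l3 / l1,               # ~1 for volumetric scatter
        "verticality": 1.0 - abs(normal[2]),  # 0 for a horizontal surface
    }
```

For a roughly planar patch the planarity term dominates and the estimated normal points near the vertical, which is why such features help separate ground surface points from vegetation and facades.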
In the embodiment of the present invention, the apparatus further includes a hierarchical feature stitching unit (not shown in the drawing), configured to, before the identification unit classifies and identifies each point in the target point set according to its global propagation features, take the point features of the upper-layer point of each lower domain point in the target point set as the hierarchical features of the corresponding lower domain point, and stitch these hierarchical features after the global propagation features of the corresponding lower domain point to obtain the final features of that lower domain point.
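The stitching described here amounts to channel-wise concatenation. A minimal sketch, assuming each lower domain point records the index of its upper-layer point (the `parent_idx` array is an illustrative representation, not specified by the patent):

```python
import numpy as np

def stitch_hierarchical_features(propagated, upper_features, parent_idx):
    """Append each point's upper-layer (hierarchical) features after its
    global propagation features.

    propagated:     (N, C1) global propagation features of N lower domain points
    upper_features: (M, C2) point features of the M upper-layer points
    parent_idx:     (N,) index of each lower domain point's upper-layer point
    Returns the (N, C1 + C2) final per-point features.
    """
    return np.concatenate([propagated, upper_features[parent_idx]], axis=1)
```

The concatenated vector is what the identification unit would then feed to the classifier, letting the upper-layer context influence the current point's classification.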
In this embodiment of the present invention, the feature abstraction unit 202 includes a feature processing module and a determining module, where:
the feature processing module is used for abstracting and iterating the point features in the target point set into a plurality of initial local areas by using a neural network, wherein each local area contains the local features of the point area it represents;
the judging module is used for judging whether the local features contained in each initial local area meet the preset global feature requirement;
further, the feature processing module is further configured to, when the judging module determines that the local features contained in any initial local area do not meet the preset global feature requirement, continue performing abstraction iteration on the local features contained in each initial local area using the neural network until the target local area is obtained.
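A hedged sketch of this abstraction loop is given below. The patent does not specify the network, so random center sampling and max-pooling stand in for the learned sampling, grouping, and MLP of a trained abstraction layer (the function names, the `len(points) // 4` reduction schedule, and the radius doubling are illustrative assumptions only):

```python
import numpy as np

def abstract_once(points, feats, n_regions, radius, rng):
    """One abstraction step: sample region centers, group the points within
    `radius` of each center, and pool their features into one local feature.
    Max-pooling stands in for the learned per-region network of the patent."""
    idx = rng.choice(len(points), size=n_regions, replace=False)
    centers = points[idx]
    pooled = np.empty((n_regions, feats.shape[1]))
    for i, c in enumerate(centers):
        mask = np.linalg.norm(points - c, axis=1) <= radius
        pooled[i] = feats[mask].max(axis=0)   # center itself is always in-range
    return centers, pooled

def abstract_until(points, feats, target_regions, rng, radius=0.2):
    """Repeat abstraction until the region count meets the target, a simple
    stand-in for the 'preset global feature requirement'.  Returns every
    intermediate layer so the local features can later be propagated back."""
    layers = [(points, feats)]
    while len(points) > target_regions:
        n_regions = max(target_regions, len(points) // 4)
        points, feats = abstract_once(points, feats, n_regions, radius, rng)
        layers.append((points, feats))
        radius *= 2                            # widen receptive field each level
    return layers
```

Keeping every intermediate layer mirrors the description above: the initial local areas produced along the way supply the local features that the feature summarizing unit later combines with the target local area's total features into the global propagation features.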
In this embodiment of the present invention, the identifying unit 204 is specifically configured to classify, according to the global propagation characteristics of each point in the target point set, each point in the target point set by using a preset classifier model, and generate corresponding labels for the points of different categories.
The classifier model is implemented using a support vector machine algorithm, a random forest algorithm, or the Adaboost iterative algorithm.
For the device embodiment, since it is basically similar to the method embodiment, the description is relatively brief; for relevant details, reference may be made to the description of the method embodiment.
The above-described embodiments of the apparatus are merely illustrative; the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The point cloud feature recognition processing method and device provided by the embodiments of the present invention can, based on a deep neural network, collect the local features of each point in a target point set and the global features of the local area where the point is located, to obtain comprehensive features capable of identifying each point's data, so that point cloud recognition and classification can be performed on the basis of these comprehensive features, providing support for accurate extraction of laser point cloud ground surface points.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method as described above.
In this embodiment, if the modules/units integrated in the point cloud feature recognition processing device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The electronic device provided by the embodiment of the present invention comprises a memory, a processor, and a computer program stored on the memory and executable on the processor. When executing the computer program, the processor implements the steps of the point cloud feature identification processing method embodiments described above, such as S11-S14 shown in FIG. 1. Alternatively, when executing the computer program, the processor implements the functions of the modules/units in the point cloud feature recognition processing apparatus embodiment described above, for example, the point set extraction unit 201, the feature abstraction unit 202, the feature summarization unit 203, and the recognition unit 204 shown in FIG. 2.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of instruction segments of a computer program capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program in the point cloud feature identification processing device. For example, the computer program may be divided into a point set extraction unit 201, a feature abstraction unit 202, a feature summarization unit 203, and a recognition unit 204.
The electronic device may be a mobile computer, a notebook computer, a palmtop computer, a mobile phone, or another such device. The electronic device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the electronic device in this embodiment may include more or fewer components than listed, combine certain components, or use different components; for example, the electronic device may also include input/output devices, a network access device, a bus, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and connects the various parts of the entire electronic device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the electronic device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Those skilled in the art will appreciate that while some embodiments described herein include certain features that are included in other embodiments and not in others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A point cloud feature identification processing method is characterized by comprising the following steps:
extracting a target point set in the laser point cloud data, wherein the target point set is a point set consisting of lower domain points of the laser point cloud;
abstracting and iterating point features in the target point set into a target local area, wherein the total features contained in the target local area meet a preset global feature requirement;
taking the total features contained in the target local area and the local features contained in the initial local areas obtained in the process of performing abstraction iteration toward the target local area as global propagation features, and distributing the global propagation features to the points in the target point set corresponding to the target local area and the initial local areas;
and classifying and identifying the points in the target point set according to the global propagation characteristics of each point in the target point set.
2. The method of claim 1, wherein prior to said classifying and identifying the points in the set of target points based on the global propagation characteristics of each point in the set of target points, the method further comprises:
taking the point features of the upper-layer points of the lower domain points in the target point set as the hierarchical features of the corresponding lower domain points;
and splicing the hierarchical features after the global propagation features of the corresponding lower domain points to obtain the final features of the lower domain points.
3. The method of claim 1 or 2, wherein the iterating the abstraction of the point features in the set of target points into the target local domain comprises:
abstracting and iterating the point features in the target point set into a plurality of initial local areas by using a neural network, wherein each local area contains the local features of the point area it represents;
judging whether the local features contained in each initial local area meet the preset global feature requirement;
and if the local features contained in any initial local area do not meet the preset global feature requirement, continuing to perform abstraction iteration on the local features contained in each initial local area by using the neural network until the target local area is obtained.
4. The method of claim 1, wherein the classifying and identifying the points in the target point set according to the global propagation characteristics of each point in the target point set comprises:
and classifying the points in the target point set by adopting a preset classifier model according to the global propagation characteristics of each point in the target point set, and generating corresponding labels for the points of different classes.
5. The method of claim 1, wherein the point features comprise x, y, and z coordinate features, r, g, and b color features, elevation features, reflection intensity, centered x, y, and z position features, linear features, planar features, divergent features, verticality features, and fitted surface parameter features.
6. A point cloud feature recognition processing apparatus, comprising:
the point set extraction unit is used for extracting a target point set in the laser point cloud data, wherein the target point set is a point set consisting of lower domain points of the laser point cloud;
the characteristic abstraction unit is used for abstracting and iterating the point characteristics in the target point set into a target local area, and the overall characteristics contained in the target local area meet the preset global characteristic requirement;
the characteristic summarizing unit is used for taking the total characteristic contained in the target local area and the local characteristic contained in the initial local area obtained in the process of carrying out abstract iteration on the target local area as global propagation characteristics and distributing the global propagation characteristics to the points in the target point set corresponding to the target local area and the initial local area;
and the identification unit is used for classifying and identifying each point in the target point set according to the global propagation characteristics of each point in the target point set.
7. The apparatus of claim 6, further comprising:
and the hierarchical feature splicing unit is used for taking, before the identification unit classifies and identifies each point in the target point set according to the global propagation features of each point in the target point set, the point features of the upper-layer points of the lower domain points in the target point set as the hierarchical features of the corresponding lower domain points, and for splicing the hierarchical features after the global propagation features of the corresponding lower domain points to obtain the final features of the lower domain points.
8. The apparatus according to claim 6 or 7, wherein the feature abstraction unit comprises:
the feature processing module is used for abstracting and iterating the point features in the target point set into a plurality of initial local areas by using a neural network, wherein each local area contains the local features of the point area it represents;
the judging module is used for judging whether the local features contained in each initial local area meet the preset global feature requirement;
and the feature processing module is further used for, when the judging module determines that the local features contained in any initial local area do not meet the preset global feature requirement, continuing abstraction iteration on the local features contained in each initial local area by using the neural network until the target local area is obtained.
9. The apparatus according to claim 6, wherein the identification unit is specifically configured to classify each point in the target point set by using a preset classifier model according to the global propagation characteristics of each point in the target point set, and generate corresponding labels for different types of points.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and running on the processor, characterized in that the steps of the method according to any of claims 1-5 are implemented when the processor executes the program.
CN202010663889.7A 2020-07-10 2020-07-10 Point cloud feature identification processing method and device, storage medium and electronic equipment Pending CN111832473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010663889.7A CN111832473A (en) 2020-07-10 2020-07-10 Point cloud feature identification processing method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN111832473A true CN111832473A (en) 2020-10-27

Family

ID=72899815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010663889.7A Pending CN111832473A (en) 2020-07-10 2020-07-10 Point cloud feature identification processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111832473A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180081995A1 (en) * 2016-09-16 2018-03-22 Oracle International Corporation System and method providing a scalable and efficient space filling curve approach to point cloud feature generation
CN110232329A (en) * 2019-05-23 2019-09-13 星际空间(天津)科技发展有限公司 Point cloud classifications method, apparatus, storage medium and equipment based on deep learning
CN110287873A (en) * 2019-06-25 2019-09-27 清华大学深圳研究生院 Noncooperative target pose measuring method, system and terminal device based on deep neural network
CN110348299A (en) * 2019-06-04 2019-10-18 上海交通大学 The recognition methods of three-dimension object
CN110363178A (en) * 2019-07-23 2019-10-22 上海黑塞智能科技有限公司 The airborne laser point cloud classification method being embedded in based on part and global depth feature
CN110414577A (en) * 2019-07-16 2019-11-05 电子科技大学 A kind of laser radar point cloud multiple target Objects recognition method based on deep learning
CN110827398A (en) * 2019-11-04 2020-02-21 北京建筑大学 Indoor three-dimensional point cloud automatic semantic segmentation algorithm based on deep neural network


Non-Patent Citations (1)

Title
ZHAO GANG; YANG BISHENG: "Vehicle-borne LiDAR point cloud classification based on Gradient Boosting", Geomatics World, no. 03, 25 June 2016 (2016-06-25), pages 47 - 52 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination