CN113538474B - 3D point cloud segmentation target detection system based on edge feature fusion - Google Patents

3D point cloud segmentation target detection system based on edge feature fusion

Info

Publication number
CN113538474B
CN113538474B (application CN202110786257.4A)
Authority
CN
China
Prior art keywords
point cloud
feature
edge
extraction
features
Prior art date
Legal status
Active
Application number
CN202110786257.4A
Other languages
Chinese (zh)
Other versions
CN113538474A (en)
Inventor
毛琳 (Mao Lin)
向姝芬 (Xiang Shufen)
杨大伟 (Yang Dawei)
张汝波 (Zhang Rubo)
Current Assignee
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date
Filing date
Publication date
Application filed by Dalian Minzu University
Priority to CN202110786257.4A
Publication of CN113538474A
Application granted
Publication of CN113538474B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a 3D point cloud segmentation target detection system based on edge feature fusion, relating to the technical field of deep-learning 3D point cloud segmentation. Edge features are extracted with a multi-layer perceptron, and the point cloud holding features and the point cloud extraction features are fused to generate point cloud edge fusion features, enhancing the ability to extract edge features. The obtained edge features are applied to a target detection task, improving the precision of the point cloud segmentation model and yielding accurate point cloud target detection results, so the system can be applied well in fields such as unmanned driving and manipulator perception.

Description

3D point cloud segmentation target detection system based on edge feature fusion
Technical Field
The application relates to the technical field of deep learning 3D point cloud segmentation, in particular to a 3D point cloud segmentation target detection system based on edge feature fusion.
Background
In recent years, with the development of three-dimensional laser scanning technology, acquiring three-dimensional point cloud data has become fast and convenient. Three-dimensional point clouds are widely used in unmanned driving, robotics, indoor scene detection and recognition, and related fields, and have become a research hotspot in computer vision. At present, scene understanding still relies mainly on raw point clouds, but automatic computer interpretation is not yet mature, and existing algorithms cannot fully comprehend point cloud targets. Deep learning is developing rapidly in computer vision and has achieved remarkable results in the recognition and classification of two-dimensional images; influenced by this progress, research on three-dimensional point cloud classification increasingly adopts deep learning methods.
Most existing 3D point cloud target detection algorithms take key points as the 3D detection targets and determine an auxiliary training module through the connection relations among the key points, so as to achieve accurate positioning of the 3D target frame. Patent application CN112766100A discloses a 3D target detection method based on key points, which locates the frame by regressing the connection relations between points of the same 3D target; however, this approach ignores the relationship between local and global information, resulting in low 3D target detection accuracy. Patent application CN112052884A discloses a point cloud classification method and system based on local edge feature enhancement: it acquires voxelized point cloud data, the edge features of the corresponding point cloud in a preset neighborhood, and the corresponding voxel positions, builds a point cloud classification model on a graph convolution network structure with a channel attention mechanism to classify the feature-filled point cloud, and outputs the classification result, thereby increasing the interdependence among feature channels, enhancing the global feature expression of the point cloud, and improving classification efficiency and prediction accuracy. However, because point cloud edge filling is required, this method increases the recognition error of small-sample models and is unfavorable for classifying small-sample point cloud targets.
Disclosure of Invention
Aiming at the problems in the prior art, the application provides a 3D point cloud segmentation target detection system based on edge feature fusion, which can obtain more accurate edge information and improve the accuracy of a point cloud target detection result.
In order to achieve the above purpose, the technical scheme of the application is as follows. The 3D point cloud segmentation target detection system based on edge feature fusion comprises an expansion extraction feature module, a holding extraction feature module and a restoration extraction feature module used in series. Each module comprises several edge feature extraction units, and each unit comprises a point cloud feature sampling layer, a point cloud feature holding branch and a point cloud feature extraction branch. The sampling layer changes the size of the point cloud features acquired from the three-dimensional image; the holding branch further processes the size-changed features with a 1-dimensional convolution, ensuring that the point cloud features remain consistent before and after the convolution; and the extraction branch obtains edge features through an extraction-type multi-layer perceptron and fuses them with the consistent point cloud features, so that the content of the point cloud edge fusion feature is more diverse and the point cloud target feature information is represented fully and accurately.
Further, in the edge feature extraction unit, the point cloud edge fusion feature is obtained by the following steps:
Step 1: take the original point cloud feature vector $R_0$ of the three-dimensional image as input and read it; its size is $n \times 3$, where $n$ is the number of points and each point is represented by its 3-dimensional coordinates $(x, y, z)$. The input has the specific form $R_0 = \{p_1, p_2, \ldots, p_n\}$, with $p_i = (x_i, y_i, z_i)$.
Step 2: process the original point cloud feature $R_0$ through the point cloud feature sampling layer $S(\cdot)$ to obtain the size-changed converted point cloud feature $R_F$:
$$R_F = S(R_0).$$
Step 3: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F) = \{v_{\sigma 1}, v_{\sigma 2}, \ldots, v_{\sigma i}, \ldots\},$$
where $R_F$ is the feature vector input of the holding branch, $v_{\sigma i}$ is the vector representation of the $i$-th feature in the holding branch, $\mathrm{Conv}_{1\times1,s}(\cdot)$ denotes the one-dimensional convolution operation, $i$ indexes the point cloud feature dimension, the convolution kernel size is 1×1, and $s$ is the stride of the convolution, with $s = 1$.
Meanwhile, in the point cloud feature extraction branch, take the converted point cloud feature $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda,$$
where $\mathrm{Conv}_{1\times1,s}(\cdot)$ denotes the one-dimensional convolution inside the extraction-type multi-layer perceptron, the convolution kernel size is 1×1, $s = 1$ is the stride of the convolution, and $\lambda$ is the offset (bias) of each perceptron layer.
Step 4: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion feature $R$:
$$R = R_B + R_T.$$
Performing the above operations on the original point cloud feature $R_0$ of the three-dimensional image finally outputs the point cloud edge fusion feature $R$. Because the kernel size and stride of every one-dimensional convolution in each module are 1, the size of the finally output fusion feature $R$ is determined entirely by the point cloud feature sampling layer, which sets the point cloud feature size of each unit; this ensures the consistency of the point cloud features before and after processing, effectively enhances the edge information, and improves the richness of the point cloud features in the three-dimensional image.
Further, the expansion extraction feature module comprises $a$ edge feature extraction units, and the point cloud feature sampling layer in each unit is a point cloud feature expansion sampling layer $S_E^a(\cdot)$. Taking a composite of 5 units as an example, i.e. $a \in \{1,2,3,4,5\}$, the point cloud edge fusion expansion feature is obtained by the following steps:
Step 1: take the original point cloud feature vector $R_0$ of the three-dimensional image as input and read it; its size is $n \times 3$, where $n$ is the number of points and each point is represented by its 3-dimensional coordinates $(x, y, z)$; the input has the specific form $R_0 = \{p_1, p_2, \ldots, p_n\}$, with $p_i = (x_i, y_i, z_i)$.
Step 2: process the original point cloud feature $R_0$ through the point cloud feature expansion sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_E^a(R_0).$$
Step 3: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch.
Meanwhile, in the point cloud feature extraction branch, take $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda.$$
Step 4: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion expansion feature $R_E^1$ of the first unit:
$$R_E^1 = R_B + R_T.$$
The output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, yielding $a$ point cloud edge fusion expansion features. With 5 units, and writing $U_E^a(\cdot)$ for the processing of the $a$-th unit, the outputs are obtained recursively:
when $a = 2$, the output is $R_E^2 = U_E^2(R_E^1)$;
when $a = 3$, the output is $R_E^3 = U_E^3(R_E^2)$;
when $a = 4$, the output is $R_E^4 = U_E^4(R_E^3)$;
when $a = 5$, the output is $R_E^5 = U_E^5(R_E^4)$.
Further, the holding extraction feature module comprises $b$ edge feature extraction units, and the point cloud feature sampling layer in each unit is a point cloud feature holding sampling layer $S_H^b(\cdot)$. Taking a composite of 3 units as an example, i.e. $b \in \{1,2,3\}$, the point cloud edge fusion holding feature is obtained by the following steps:
Step 1: take the point cloud edge fusion expansion feature $R_E^5$ output by the expansion extraction feature module as input, read the feature vector, and keep the feature size with the point cloud feature holding sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_H^b(R_E^5).$$
Step 2: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch.
Meanwhile, in the point cloud feature extraction branch, $R_F$ is likewise taken as the feature vector input, and features are extracted by the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda.$$
Step 3: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion holding feature $R_H^1$ of the first unit:
$$R_H^1 = R_B + R_T.$$
The output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, yielding $b$ point cloud edge fusion holding features. With 3 units, and $U_H^b(\cdot)$ denoting the processing of the $b$-th unit:
when $b = 2$, the output is $R_H^2 = U_H^2(R_H^1)$;
when $b = 3$, the output is $R_H^3 = U_H^3(R_H^2)$.
Further, the restoration extraction feature module comprises $c$ edge feature extraction units, and the point cloud feature sampling layer in each unit is a point cloud feature restoration sampling layer $S_R^c(\cdot)$. Taking a composite of 4 units as an example, i.e. $c \in \{1,2,3,4\}$, the point cloud edge fusion restoration feature is obtained by the following steps:
Step 1: take the point cloud edge fusion holding feature $R_H^3$ output by the holding extraction feature module as input, read the feature vector, and restore the feature size with the point cloud feature restoration sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_R^c(R_H^3).$$
Step 2: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch.
Meanwhile, in the point cloud feature extraction branch, take $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda.$$
Step 3: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion restoration feature $R_R^1$ of the first unit:
$$R_R^1 = R_B + R_T.$$
The output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, yielding $c$ point cloud edge fusion restoration features. With 4 units, and $U_R^c(\cdot)$ denoting the processing of the $c$-th unit:
when $c = 2$, the output is $R_R^2 = U_R^2(R_R^1)$;
when $c = 3$, the output is $R_R^3 = U_R^3(R_R^2)$;
when $c = 4$, the output is $R_R^4 = U_R^4(R_R^3)$.
By adopting the technical scheme, the application can obtain the following technical effects:
(1) Suitable for obtaining point cloud features through edge features
The original point cloud features of the three-dimensional image are taken as input, and edge feature information is fused during feature extraction with the multi-layer perceptron structure, so the resulting point cloud features carry a dual expression of the original point cloud features and the edge features. This strengthens the expressive power of the point cloud features and suits cases where point cloud features are acquired through edge features.
(2) Suitable for point cloud segmentation tasks
The edge feature extraction composite structure yields point cloud edge fusion holding features with stronger expressive power; the point cloud features and point cloud holding features of targets in the three-dimensional image can be fused and, after cascading, fed as the target feature input, so that accurate point cloud segmentation results are obtained.
(3) Suitable for target detection tasks
The system effectively improves the performance of the point cloud segmentation model on three-dimensional images and, in scenes where factors such as targets, actions and attributes are relatively simple, can be applied to target detection tasks to good effect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below; obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the edge feature extraction unit;
FIG. 2 is a schematic diagram of the expansion extraction feature module composed of edge feature extraction units;
FIG. 3 is a schematic diagram of the holding extraction feature module composed of edge feature extraction units;
FIG. 4 is a schematic diagram of the restoration extraction feature module composed of edge feature extraction units;
FIG. 5 is a schematic diagram of the overall framework of the system (the edge feature extraction composite structure);
FIG. 6 is a schematic diagram of surrounding-environment recognition for automatic driving in embodiment 1;
FIG. 7 is a schematic diagram of railway scene detection in embodiment 2;
FIG. 8 is a schematic diagram of a robot grasping an object in embodiment 3.
Detailed Description
The embodiment of the application is implemented on the premise of the technical scheme of the application, and a detailed implementation mode and a specific operation process are provided, but the protection scope of the application is not limited to the following embodiment.
This embodiment provides a 3D point cloud segmentation target detection system based on edge feature fusion, which processes the original point cloud features of a three-dimensional image with an edge feature extraction composite structure to obtain point cloud edge fusion restoration features containing edge feature information with strong expressive power. The composite structure comprises three modules, namely the expansion extraction feature module, the holding extraction feature module and the restoration extraction feature module; their basic building block is the edge feature extraction unit, which comprises a point cloud feature sampling layer, a point cloud feature holding branch and a point cloud feature extraction branch. The sampling layer changes the input point cloud feature size; the holding branch further processes the size-changed features with a 1-dimensional convolution, ensuring the consistency of the point cloud features before and after the convolution; and the extraction branch extracts edge features through the extraction-type multi-layer perceptron and fuses them with the consistent point cloud features, so that the content of the point cloud edge fusion features is more diverse and the point cloud target feature information is represented fully and accurately.
The edge feature extraction unit acquires the hidden-layer features produced during processing by the extraction-type multi-layer perceptron and then fuses them with the point cloud features, enriching the diversity of the feature content. Multiplexing the edge feature extraction unit yields the expansion extraction feature module, the holding extraction feature module and the restoration extraction feature module, which acquire features jointly expressing the point cloud features and the edge features and thereby strengthen the expressive power of the point cloud features. The point cloud holding feature obtained by the point cloud feature holding branch and the point cloud extraction feature obtained by the point cloud feature extraction branch are fused and serve as the input of the next unit. The sampling layer of the edge feature extraction units in the expansion extraction feature module is the point cloud feature expansion sampling layer, that in the holding extraction feature module is the point cloud feature holding sampling layer, and that in the restoration extraction feature module is the point cloud feature restoration sampling layer; in each unit a feature size conversion is performed on the input point cloud features.
The edge feature extraction composite structure comprises the three modules above and serves as the main structure for multi-layer point cloud feature extraction; its output is fed into the maximum pooling layer to extract the most dominant features of the whole structure.
The edge feature extraction unit takes the original point cloud coordinates of the three-dimensional image as input; the original point cloud features can be preprocessed by a clustering method such as KNN. The point cloud feature sampling layer changes or maintains the point cloud feature size and outputs the converted point cloud feature $R_F$, which serves as the input of both the point cloud feature holding branch and the point cloud feature extraction branch. The holding branch outputs the point cloud holding feature $R_B$; after processing by the extraction-type multi-layer perceptron, the extraction branch outputs the point cloud extraction feature $R_T$. Fusing the holding feature $R_B$ with the extraction feature $R_T$ gives the final output, the point cloud edge fusion feature $R$. Specifically:
(1) Input the original point cloud feature $R_0$ of the three-dimensional image, i.e., the feature vector fed to the edge feature extraction unit.
(2) After the three-dimensional image is processed, the point cloud holding feature $R_B$ is output. Specifically, the original point cloud features input from the three-dimensional image are changed in size by the point cloud feature sampling layer to obtain the converted point cloud feature $R_F$, and $R_B$ is the feature vector obtained by applying to $R_F$ a 1-dimensional convolution with a single kernel of size 1.
(3) After the three-dimensional image is processed, the point cloud extraction feature $R_T$ is output. Specifically, the original point cloud features input from the three-dimensional image are changed in size by the point cloud feature sampling layer to obtain the converted point cloud feature $R_F$; the extraction-type multi-layer perceptron then extracts features from $R_F$ and outputs the point cloud extraction feature $R_T$.
(4) Point cloud feature fusion, with the specific operation: the point cloud holding feature $R_B$ and the point cloud extraction feature $R_T$ are added and fused to obtain the point cloud edge fusion feature $R$.
The point cloud feature holding branch further processes the original point cloud features, improving the expressive power of the edge features while the feature size is changed and the attributes of the detection target remain unchanged, and it provides the original point cloud features to the point cloud feature extraction branch. In the extraction branch, the point cloud extraction features and the point cloud holding features are added, enriching the feature content and yielding point cloud features with strong expressive power.
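To make the unit concrete, a minimal sketch in PyTorch follows. The patent names no implementation framework, so everything here is an assumption of this sketch: the class and argument names are hypothetical, the sampling layer is modeled as a 1×1 one-dimensional convolution that changes the channel count, and the ReLU inside the extraction-type multi-layer perceptron is a common but unconfirmed choice.

```python
import torch
import torch.nn as nn


class EdgeFeatureUnit(nn.Module):
    """One edge feature extraction unit (sketch): sampling layer,
    holding branch, extraction branch, additive fusion."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Point cloud feature sampling layer: changes (or keeps) the feature
        # size; modeled here as a 1x1 Conv1d mapping in_dim -> out_dim.
        self.sample = nn.Conv1d(in_dim, out_dim, kernel_size=1, stride=1)
        # Holding branch: a single 1-D convolution kernel of size 1, stride 1,
        # keeping the feature size consistent (R_F -> R_B).
        self.hold = nn.Conv1d(out_dim, out_dim, kernel_size=1, stride=1)
        # Extraction-type multi-layer perceptron: 1x1 convolutions whose bias
        # terms play the role of the per-layer offset lambda (R_F -> R_T).
        self.extract = nn.Sequential(
            nn.Conv1d(out_dim, out_dim, kernel_size=1, stride=1, bias=True),
            nn.ReLU(),
            nn.Conv1d(out_dim, out_dim, kernel_size=1, stride=1, bias=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim, n) point cloud features, n = number of points
        r_f = self.sample(x)      # converted point cloud feature R_F
        r_b = self.hold(r_f)      # point cloud holding feature R_B
        r_t = self.extract(r_f)   # point cloud extraction feature R_T
        return r_b + r_t          # edge fusion feature R = R_B + R_T
```

For example, `EdgeFeatureUnit(3, 64)` applied to a tensor of shape `(batch, 3, n)` returns a fusion feature of shape `(batch, 64, n)`, matching the [n×64] output of the first expansion unit listed below.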
The edge feature extraction composite structure comprises the three modules used in series, namely the expansion extraction feature module, the holding extraction feature module and the restoration extraction feature module; its input is the original point cloud feature $R_0$ and its output is the point cloud edge fusion restoration feature $R_R^c$, specifically:
(1) Input the original point cloud feature $R_0$ of the three-dimensional image, i.e., the input of the edge feature extraction composite structure.
(2) The expansion extraction feature module has $a$ edge feature extraction units, whose point cloud feature sampling layer is the point cloud feature expansion sampling layer $S_E^a(\cdot)$; it enlarges the size of the input point cloud features. The holding extraction feature module has $b$ edge feature extraction units, whose sampling layer is the point cloud feature holding sampling layer $S_H^b(\cdot)$; it keeps the input feature size unchanged. The restoration extraction feature module has $c$ edge feature extraction units, whose sampling layer is the point cloud feature restoration sampling layer $S_R^c(\cdot)$; it restores the size of the input features. In each case the sampling layer outputs the converted point cloud feature of the corresponding unit.
(3) The point cloud holding feature $R_B$ of the three-dimensional image is the feature vector obtained in the point cloud feature holding branch by applying to the converted point cloud feature $R_F$ a 1-dimensional convolution with a single kernel of size 1. Specifically, the point cloud holding feature is denoted $R_B^a$ in the expansion extraction feature module, $R_B^b$ in the holding extraction feature module, and $R_B^c$ in the restoration extraction feature module.
(4) The point cloud extraction feature $R_T$ of the three-dimensional image is obtained by extracting features from the converted point cloud feature with the multi-layer perceptron, awaiting fusion in the point cloud feature extraction branch. It is denoted $R_T^a$ in the expansion extraction feature module, $R_T^b$ in the holding extraction feature module, and $R_T^c$ in the restoration extraction feature module.
(5) Point cloud feature fusion in the three-dimensional image, with the specific operation: in the point cloud feature extraction branch, the point cloud holding feature $R_B$ and the point cloud extraction feature $R_T$ are added. The fusion is expressed as $R_E^a = R_B^a + R_T^a$ in the expansion extraction feature module, as $R_H^b = R_B^b + R_T^b$ in the holding extraction feature module, and as $R_R^c = R_B^c + R_T^c$ in the restoration extraction feature module.
The edge feature extraction composite structure enlarges, maintains or restores the size of the input point cloud features to obtain the converted point cloud features. The point cloud feature holding branch further processes the converted features, and the point cloud feature extraction branch extracts point cloud features without changing the feature size. Adding the extraction features and the holding features in the extraction branch makes the content of the generated point cloud edge fusion features more diverse and represents the point cloud target feature information in the three-dimensional image fully and accurately.
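Continuing the sketch above, the three modules can be built by chaining EdgeFeatureUnit instances so that each unit's output feeds the next; the helper names are hypothetical, and the channel schedules follow the feature sizes listed below.

```python
def make_expansion_module() -> nn.Sequential:
    # a = 5 units: R_E^a = U_E^a(R_E^{a-1}); sizes [n x 64] ... [n x 1024]
    dims = [3, 64, 128, 256, 512, 1024]
    return nn.Sequential(*[EdgeFeatureUnit(dims[i], dims[i + 1]) for i in range(5)])


def make_holding_module() -> nn.Sequential:
    # b = 3 units: feature size held at [n x 1024]
    return nn.Sequential(*[EdgeFeatureUnit(1024, 1024) for _ in range(3)])


def make_restoration_module() -> nn.Sequential:
    # c = 4 units: R_R^c = U_R^c(R_R^{c-1}); sizes [n x 512] ... [n x 64]
    dims = [1024, 512, 256, 128, 64]
    return nn.Sequential(*[EdgeFeatureUnit(dims[i], dims[i + 1]) for i in range(4)])
```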
In the expansion extraction feature module:
(1) When a=1, the output point cloud edge fusion expansion feature size is [n×64].
(2) When a=2, the output point cloud edge fusion expansion feature size is [n×128].
(3) When a=3, the output point cloud edge fusion expansion feature size is [n×256].
(4) When a=4, the output point cloud edge fusion expansion feature size is [n×512].
(5) When a=5, the output point cloud edge fusion expansion feature size is [n×1024].
In the holding extraction feature module:
(1) When b=1, the output point cloud edge fusion holding feature size is [n×1024].
(2) When b=2, the output point cloud edge fusion holding feature size is [n×1024].
(3) When b=3, the output point cloud edge fusion holding feature size is [n×1024].
In the restoration extraction feature module:
(1) When c=1, the output point cloud edge fusion restoration feature size is [n×512].
(2) When c=2, the output point cloud edge fusion restoration feature size is [n×256].
(3) When c=3, the output point cloud edge fusion restoration feature size is [n×128].
(4) When c=4, the output point cloud edge fusion restoration feature size is [n×64].
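Under the same assumptions, the composite structure in series with the maximum pooling layer, plus a quick check of the size schedule above, could be sketched as follows (the downstream segmentation and detection heads are not part of this sketch):

```python
class EdgeFusionBackbone(nn.Module):
    """Expansion -> holding -> restoration in series, then max pooling."""

    def __init__(self):
        super().__init__()
        self.expansion = make_expansion_module()
        self.holding = make_holding_module()
        self.restoration = make_restoration_module()

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, 3, n) raw point cloud coordinates
        r = self.restoration(self.holding(self.expansion(points)))
        # Max pooling over the point dimension keeps the most dominant
        # features of the whole structure: a (batch, 64) global descriptor.
        return torch.max(r, dim=2).values


x = torch.randn(2, 3, 1024)         # 2 point clouds, n = 1024 points each
e = make_expansion_module()(x)      # (2, 1024, 1024): [n x 1024] after a = 5
h = make_holding_module()(e)        # (2, 1024, 1024): size held through b = 3
r = make_restoration_module()(h)    # (2, 64, 1024):   [n x 64]   after c = 4
g = EdgeFusionBackbone()(x)         # (2, 64) global feature after max pooling
```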
The foregoing descriptions of specific exemplary embodiments of the present application are presented for purposes of illustration and description. It is not intended to limit the application to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the application and its practical application to thereby enable one skilled in the art to make and utilize the application in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the application be defined by the claims and their equivalents.

Claims (4)

1. A 3D point cloud segmentation target detection system based on edge feature fusion, characterized by comprising an expansion extraction feature module, a holding extraction feature module and a restoration extraction feature module used in series, wherein each module comprises several edge feature extraction units, each edge feature extraction unit comprises a point cloud feature sampling layer, a point cloud feature holding branch and a point cloud feature extraction branch, the point cloud feature sampling layer changes the size of the point cloud features acquired from a three-dimensional image, the point cloud feature holding branch further processes the size-changed point cloud features with a 1-dimensional convolution, ensuring the consistency of the point cloud features before and after the convolution, and the point cloud feature extraction branch obtains edge features through an extraction-type multi-layer perceptron and fuses them with the consistent point cloud features;
in the edge feature extraction unit, the point cloud edge fusion feature is obtained by the following steps:
Step 1: take the original point cloud feature vector $R_0$ of the three-dimensional image as input and read it; its size is $n \times 3$, where $n$ is the number of points and each point is represented by its 3-dimensional coordinates $(x, y, z)$; the input has the specific form $R_0 = \{p_1, p_2, \ldots, p_n\}$, with $p_i = (x_i, y_i, z_i)$;
Step 2: process the original point cloud feature $R_0$ through the point cloud feature sampling layer $S(\cdot)$ to obtain the size-changed converted point cloud feature $R_F$:
$$R_F = S(R_0);$$
Step 3: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F) = \{v_{\sigma 1}, v_{\sigma 2}, \ldots, v_{\sigma i}, \ldots\},$$
where $R_F$ is the feature vector input of the holding branch, $v_{\sigma i}$ is the vector representation of the $i$-th feature in the holding branch, $\mathrm{Conv}_{1\times1,s}(\cdot)$ denotes the one-dimensional convolution operation, $i$ indexes the point cloud feature dimension, the convolution kernel size is 1×1, and $s$ is the stride of the convolution, with $s = 1$;
meanwhile, in the point cloud feature extraction branch, take the converted point cloud feature $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda,$$
where $\mathrm{Conv}_{1\times1,s}(\cdot)$ denotes the one-dimensional convolution inside the extraction-type multi-layer perceptron, the convolution kernel size is 1×1, $s = 1$ is the stride of the convolution, and $\lambda$ is the offset (bias) of each perceptron layer;
Step 4: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion feature $R$:
$$R = R_B + R_T;$$
performing the above operations on the original point cloud feature $R_0$ of the three-dimensional image finally outputs the point cloud edge fusion feature $R$.
2. The 3D point cloud segmentation target detection system based on edge feature fusion according to claim 1, wherein the expansion extraction feature module comprises $a$ edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit being a point cloud feature expansion sampling layer $S_E^a(\cdot)$, and the point cloud edge fusion expansion feature is obtained by the following steps:
Step 1: take the original point cloud feature vector $R_0$ of the three-dimensional image as input and read it; its size is $n \times 3$, where $n$ is the number of points and each point is represented by its 3-dimensional coordinates $(x, y, z)$; the input has the specific form $R_0 = \{p_1, p_2, \ldots, p_n\}$, with $p_i = (x_i, y_i, z_i)$;
Step 2: process the original point cloud feature $R_0$ through the point cloud feature expansion sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_E^a(R_0);$$
Step 3: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch;
meanwhile, in the point cloud feature extraction branch, take $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda;$$
Step 4: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion expansion feature $R_E^1$:
$$R_E^1 = R_B + R_T;$$
and the output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, to obtain $a$ point cloud edge fusion expansion features.
3. The 3D point cloud segmentation target detection system based on edge feature fusion according to claim 1, wherein the holding extraction feature module comprises $b$ edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit being a point cloud feature holding sampling layer $S_H^b(\cdot)$, and the point cloud edge fusion holding feature is obtained by the following steps:
Step 1: take the point cloud edge fusion expansion feature $R_E^a$ output by the expansion extraction feature module as input, read the feature vector, and keep the feature size with the point cloud feature holding sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_H^b(R_E^a);$$
Step 2: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch;
meanwhile, in the point cloud feature extraction branch, take $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda;$$
Step 3: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion holding feature $R_H^1$:
$$R_H^1 = R_B + R_T;$$
and the output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, to obtain $b$ point cloud edge fusion holding features.
4. The 3D point cloud segmentation target detection system based on edge feature fusion according to claim 1, wherein the restoration extraction feature module comprises $c$ edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit being a point cloud feature restoration sampling layer $S_R^c(\cdot)$, and the point cloud edge fusion restoration feature is obtained by the following steps:
Step 1: take the point cloud edge fusion holding feature $R_H^b$ output by the holding extraction feature module as input, read the feature vector, and restore the feature size with the point cloud feature restoration sampling layer to obtain the converted point cloud feature $R_F$:
$$R_F = S_R^c(R_H^b);$$
Step 2: in the point cloud feature holding branch, take the converted point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with a single kernel of size 1; the output is the point cloud holding feature $R_B$:
$$R_B = \mathrm{Conv}_{1\times1,s}(R_F),$$
where $R_F$ is the feature vector input of the holding branch;
meanwhile, in the point cloud feature extraction branch, take $R_F$ as the feature vector input and extract features with the extraction-type multi-layer perceptron to obtain the point cloud extraction feature $R_T$:
$$R_T = \mathrm{Conv}_{1\times1,s}(R_F) + \lambda;$$
Step 3: fuse the point cloud holding feature and the point cloud extraction feature to obtain the point cloud edge fusion restoration feature $R_R^1$:
$$R_R^1 = R_B + R_T;$$
and the output of the 1st edge feature extraction unit is used as the input of the 2nd edge feature extraction unit, and so on, to obtain $c$ point cloud edge fusion restoration features.
CN202110786257.4A 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion Active CN113538474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110786257.4A CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110786257.4A CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Publications (2)

Publication Number Publication Date
CN113538474A (en) 2021-10-22
CN113538474B (en) 2023-08-22

Family

ID=78098712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110786257.4A Active CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Country Status (1)

Country Link
CN (1) CN113538474B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299243A (en) * 2021-12-14 2022-04-08 中科视语(北京)科技有限公司 Point cloud feature enhancement method and device based on multi-scale fusion
CN114998890B (en) * 2022-05-27 2023-03-10 长春大学 Three-dimensional point cloud target detection algorithm based on graph neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112052860A (en) * 2020-09-11 2020-12-08 中国人民解放军国防科技大学 Three-dimensional target detection method and system
CN112270249A (en) * 2020-10-26 2021-01-26 湖南大学 Target pose estimation method fusing RGB-D visual features
CN112785611A (en) * 2021-01-29 2021-05-11 昆明理工大学 3D point cloud weak supervision semantic segmentation method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784333B (en) * 2019-01-22 2021-09-28 中国科学院自动化研究所 Three-dimensional target detection method and system based on point cloud weighted channel characteristics

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112052860A (en) * 2020-09-11 2020-12-08 中国人民解放军国防科技大学 Three-dimensional target detection method and system
CN112270249A (en) * 2020-10-26 2021-01-26 湖南大学 Target pose estimation method fusing RGB-D visual features
CN112785611A (en) * 2021-01-29 2021-05-11 昆明理工大学 3D point cloud weak supervision semantic segmentation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Point cloud segmentation of forest understory environments based on feature fusion; Fan Li; Liu Jinhao; Huang Qingqing; Journal of Beijing Forestry University, No. 05; full text *

Also Published As

Publication number Publication date
CN113538474A (en) 2021-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant