CN111340831A - Point cloud edge detection method and device


Info

Publication number
CN111340831A
CN111340831A
Authority
CN
China
Prior art keywords
point cloud
network
edge detection
training
layer
Prior art date
Legal status
Pending
Application number
CN201811545580.7A
Other languages
Chinese (zh)
Inventor
李艳丽
杨恒
赫桂望
蔡金华
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201811545580.7A
Publication of CN111340831A
Legal status: Pending

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis › G06T7/10 Segmentation; Edge detection › G06T7/13 Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details › G06T2207/20081 Training; Learning
    • G06T2207/20 Special algorithmic details › G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a point cloud edge detection method and device, and relates to the field of computers. A generative adversarial network, consisting of a generator network and a discriminator network, is trained iteratively on point cloud training data, namely a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud. During training, the generator performs edge detection on the first input point cloud to obtain a generated labeled point cloud, and the discriminator judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud, until the generator can forge samples whose authenticity the discriminator can hardly identify. The generator of the trained adversarial network is then used to perform edge detection on a second input point cloud to be detected. The scheme performs edge detection directly on the three-dimensional point cloud, avoids the information loss of point cloud dimensionality reduction, and thereby helps improve the accuracy of point cloud edge detection.

Description

Point cloud edge detection method and device
Technical Field
The disclosure relates to the field of computers, and in particular relates to a point cloud edge detection method and device.
Background
A point cloud is a collection of points in three-dimensional space whose attributes may include, for example, coordinate locations, reflection intensities, color information, and the like.
The purpose of point cloud edge detection is to distinguish which points in the point cloud fall on an edge. One related technique projects the three-dimensional point cloud onto a two-dimensional plane to form a two-dimensional image, extracts edges with an image edge extraction method, and then maps the image edges back to the three-dimensional points using the correspondence between image pixels and point cloud points, yielding the point cloud edge.
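The weakness of this projection step can be illustrated with a toy sketch (the cell size and coordinates are made up for illustration and are not from the disclosure): once 3-D points are flattened onto a 2-D grid, points that differ only in height fall into the same cell and become indistinguishable.

```python
import numpy as np

def project_to_grid(points, cell=0.5):
    """Project 3-D points onto the XY plane with a fixed cell size and
    report how many distinct 2-D cells remain occupied."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    return ij, len({tuple(c) for c in ij})

# Three 3-D points; the first two differ only in height and land in the
# same 2-D cell, so the projection keeps 2 cells for 3 points.
pts = np.array([[1.0, 1.0, 0.0],
                [1.1, 1.1, 5.0],
                [3.0, 3.0, 1.0]])
ij, n_cells = project_to_grid(pts)
```

Any edge information carried by the collapsed points is lost before edge extraction even begins, which is the drawback the next section identifies.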
Disclosure of Invention
The inventors found that converting a point cloud from three dimensions to two loses information, making complete edges difficult to extract and accuracy poor.
In view of this, the present disclosure provides a point cloud edge detection scheme based on a generative adversarial network, which performs edge detection directly on the three-dimensional point cloud, avoids the information loss of point cloud dimensionality reduction, and helps improve the accuracy of point cloud edge detection.
Some embodiments of the present disclosure provide a point cloud edge detection method, including:
training a generative adversarial network with point cloud training data, wherein the generative adversarial network comprises a generator network and a discriminator network, and the point cloud training data comprise a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud, the training process comprising: the generator network performs edge detection on the first input point cloud to obtain a generated labeled point cloud, the discriminator network judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud, and the adversarial network is trained iteratively until the difference between the real and fake probabilities in the discriminator's judgment result is smaller than a preset value;
and performing edge detection on a second input point cloud to be detected using the generator network of the trained generative adversarial network.
In some embodiments, a convolution layer of the generator or discriminator network down-samples its input point cloud into point clouds of several granularities, performs an independent convolution on each granularity, and cascades the results into a single point cloud.
In some embodiments, a convolution layer of the generator or discriminator network is provided with multiple convolution branches of different scales, and a cascade layer of the generator or discriminator network concatenates the branches while keeping the point positions unchanged.
In some embodiments, the first convolution layers of different convolution branches have different values of the parameter N, while the last convolution layers of all branches share the same value of N, where N denotes the number of points in the output point cloud.
In some embodiments, the cascade layer keeps the parameter N of each branch's last convolution layer unchanged and sums the parameter C across the branches, where C denotes the number of feature channels per point of the output point cloud.
In some embodiments, the results of the edge detection include:
probability of whether each point of the second input point cloud is an edge point;
or,
probabilities of semantic categories for points of the second input point cloud.
In some embodiments, the second input point cloud is obtained by thinning the original point cloud to be detected.
Some embodiments of the present disclosure provide a point cloud edge detection apparatus, including:
a memory; and
a processor coupled to the memory, the processor configured to perform the point cloud edge detection method of any of the foregoing embodiments based on instructions stored in the memory.
Some embodiments of the present disclosure provide a point cloud edge detection apparatus, including:
a training unit configured to train a generative adversarial network with point cloud training data, the network comprising a generator network and a discriminator network, and the training data comprising a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud, wherein the training process comprises: the generator network performs edge detection on the first input point cloud to obtain a generated labeled point cloud, the discriminator network judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud, and the adversarial network is trained iteratively until the difference between the real and fake probabilities in the discriminator's judgment result is smaller than a preset value;
and a detection unit configured to perform edge detection on a second input point cloud to be detected using the generator network of the trained generative adversarial network.
Some embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the point cloud edge detection method of any one of the preceding embodiments.
Drawings
The drawings needed for describing the embodiments or the related art are briefly introduced below. The present disclosure will be more clearly understood from the following detailed description taken together with the accompanying drawings.
It is to be understood that the drawings described below illustrate merely some embodiments of the disclosure, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a point cloud edge detection method according to some embodiments of the present disclosure.
Fig. 2 is a schematic diagram of a structure of a generation network according to some embodiments of the present disclosure.
Fig. 3 is a schematic structural diagram of a discrimination network according to some embodiments of the present disclosure.
Fig. 4 is a schematic structural diagram of a point cloud edge detection apparatus according to some embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of a point cloud edge detection apparatus according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
The method implements a point cloud edge detection scheme based on a generative adversarial network: it performs edge detection directly on the three-dimensional point cloud, avoids the information loss of point cloud dimensionality reduction, and helps improve the accuracy of point cloud edge detection. In addition, compared with a plain convolutional neural network, a generative adversarial network handles tasks with small training sets better.
A generative adversarial network comprises a generator network (denoted G) and a discriminator network (denoted D). The generator G produces a fake sample G(z) (such as a point cloud edge) from observed information z (such as a laser point cloud), and the discriminator D predicts, with reference to real samples x, whether G(z) is real or fake. An ideal generator G has so strong a forging ability that its fake samples G(z) are hard for the discriminator to identify; conversely, an ideal discriminator D has so strong a discerning ability that it can accurately judge whether G(z) is real or fake.
The method exploits this game between the generator G and the discriminator D by iteratively training the adversarial network on the point cloud edge detection task. Once G forges well enough that D can hardly identify the authenticity of its fake samples (the fake sample is the generated labeled point cloud, i.e. the point cloud annotation produced by G during training) relative to the real samples (the real sample is the ground-truth labeled point cloud), the edge detection results of G are highly accurate. The generator G of the trained adversarial network is then used to perform edge detection on the point cloud to be detected, with correspondingly high accuracy.
The point cloud edge detection scheme implemented based on the generative confrontation network of the present disclosure is specifically described below with reference to fig. 1.
Fig. 1 is a schematic flow chart of a point cloud edge detection method according to some embodiments of the present disclosure. As shown in fig. 1, the method of this embodiment includes:
At step 110, the generative adversarial network is trained using the point cloud training data. The network comprises a generator network and a discriminator network. The training data comprise a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud.
The training process is as follows: the generator performs edge detection on the first input point cloud to obtain a generated labeled point cloud, which is fed to the discriminator for authenticity judgment against the ground-truth labeled point cloud. The adversarial network is trained iteratively until the difference between the real and fake probabilities in the discriminator's judgment result is smaller than a preset value. This indicates that the discriminator D can hardly tell the generator's fake sample (the generated labeled point cloud) from the real sample (the ground-truth labeled point cloud), i.e. the generator G forges well and its edge detection results are highly accurate.
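The alternating training just described can be sketched with a deliberately tiny 1-D stand-in. Everything here is an illustrative assumption rather than the disclosure's network: the "real labeled point cloud" is a scalar, the discriminator is a single logistic unit, and the updates are rough gradient steps. What the sketch does preserve is the loop structure: a discriminator step, a generator step, and the stopping rule that the real/fake probabilities differ by less than a preset value.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins: the real sample is the value 1.0, the generator's fake
# sample is the scalar g, and the discriminator scores D(x) = sigmoid(w*(x-b)).
real = 1.0
g, w, b = 0.0, 1.0, 0.5
lr, eps = 0.5, 0.05            # learning rate and the "preset value"

for step in range(200):
    # Discriminator step: push D(real) towards 1 and D(g) towards 0
    # (a simplified logistic-loss gradient step).
    for x, y in ((real, 1.0), (g, 0.0)):
        p = sigmoid(w * (x - b))
        w += lr * (y - p) * (x - b)
        b += lr * (y - p) * (-w)
    # Generator step: move g so the discriminator scores it higher.
    p = sigmoid(w * (g - b))
    g += lr * (1.0 - p) * w
    # Stop once D can barely separate real from fake.
    if abs(sigmoid(w * (real - b)) - sigmoid(w * (g - b))) < eps:
        break
```

Within a few iterations the fake sample g drifts toward the real sample and the discriminator's two probabilities converge, which is exactly the stopping criterion of step 110.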
For example, the "ground-truth labeled point cloud" may be obtained by manual annotation. Different edge data may be labeled for different applications, and enriching the training samples improves robustness.
It should be noted that the descriptions of "first", "second", and the like in the present disclosure are only used for distinguishing different objects, and are not used for representing the meanings of size, timing, and the like.
In step 120, edge detection is performed on the second input point cloud to be detected by using the generation network in the trained generation countermeasure network.
In some embodiments, the second input point cloud is obtained by thinning the original point cloud to be detected. The thinning algorithm may, for example, run k-means clustering and select a few representative points from each resulting cluster, thereby reducing the number of data points as far as possible while preserving the shape of the cloud.
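A minimal k-means-style thinning routine might look as follows. The farthest-point seeding and the choice to return the cloud point nearest each centroid are illustrative assumptions; the disclosure only names k-means as one possible thinning algorithm.

```python
import numpy as np

def thin_point_cloud(points, k, iters=10):
    """Thin a point cloud to k representative points: cluster with a
    small k-means (farthest-point seeding + Lloyd iterations), then
    keep the actual cloud point nearest each centroid, so the thinned
    cloud is a subset of the original."""
    centroids = [points[0]]
    for _ in range(k - 1):                      # farthest-point seeding
        d = np.min(np.linalg.norm(points[:, None] - np.array(centroids)[None],
                                  axis=2), axis=1)
        centroids.append(points[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):                      # Lloyd iterations
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
    return points[d.argmin(axis=0)]

# Two well-separated blobs: thinning to k=2 keeps one point per blob.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.1, (50, 3)),
                   rng.normal(10.0, 0.1, (50, 3))])
thinned = thin_point_cloud(cloud, k=2)
```

Because each representative is an original cloud point, the thinned cloud keeps the shape of the input while drastically cutting the point count, which is the stated aim of the thinning step.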
In some embodiments, binary edge detection may be performed, that is, whether a point in the point cloud is an edge point is detected, and the result of the corresponding edge detection is a probability of whether each point of the second input point cloud is an edge point.
In some embodiments, semantic edge detection may be performed, that is, the semantic categories of the point cloud edges are detected, and the corresponding edge detection result is a probability over semantic categories for each point of the second input point cloud. For example, the semantic categories for a street-view point cloud include curbs, lane lines, and the like.
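The two output forms can be sketched as follows, with entirely hypothetical network scores: a per-point sigmoid for binary edge detection, and a per-point softmax over semantic classes (e.g. non-edge / curb / lane line) for semantic edge detection.

```python
import numpy as np

def binary_edge_probs(logits):
    """Per-point probability of being an edge point (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-logits))

def semantic_edge_probs(logits):
    """Per-point distribution over semantic edge classes (softmax),
    e.g. {non-edge, curb, lane line} for a street scene."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical network scores for 4 points.
bin_scores = np.array([2.0, -1.0, 0.0, 3.5])
sem_scores = np.array([[2.0, 0.1, -1.0],
                       [0.0, 3.0, 0.5],
                       [1.0, 1.0, 1.0],
                       [-2.0, 0.0, 4.0]])
p_edge = binary_edge_probs(bin_scores)     # shape (4,), one prob per point
p_class = semantic_edge_probs(sem_scores)  # shape (4, 3), rows sum to 1
```

Binary detection yields one number per point; semantic detection yields one distribution per point, whose argmax picks the predicted edge category.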
In some embodiments, the generative confrontation network may be, for example, a generative confrontation network constructed based on a deep neural network.
The disclosure further provides a multi-scale generative adversarial network whose convolution layers carry several convolution branches of different scales, each performing its convolution independently; this preserves detail better and improves precision in point cloud edge detection.
In some embodiments, the generator and the discriminator of the adversarial network have similar structures: each comprises convolution layers (provided with multiple convolution branches of different scales), a cascade layer, a fully connected layer, a normalization layer, and an output layer. The discriminator additionally contains an averaging layer. The fully connected and normalization layers are standard and are not described in detail here.
In some embodiments, the convolution layers of the generator and the discriminator are provided with multiple convolution branches of different scales, and their cascade layers concatenate the branches while keeping the point positions unchanged. A convolution layer thus down-samples its input point cloud into point clouds of several granularities, convolves each granularity independently, and cascades the results into a single point cloud.
The first convolution layers of different branches have different values of the parameter N, while the last convolution layers of all branches share the same N, where N denotes the number of points in the output point cloud.
The cascade layer keeps the parameter N of each branch's last convolution layer unchanged and sums the parameter C across the branches, where C denotes the number of feature channels per point of the output point cloud.
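A minimal numeric sketch of this multi-branch layout follows. The point cloud convolutions are replaced here by naive index-based down-sampling and a random per-point linear map, both purely illustrative stand-ins; what the sketch demonstrates is the N/C bookkeeping: each branch starts at a different N, all branches end at the same N, and the cascade layer keeps N fixed while summing the channel counts C.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(points, n_first, n_out, c_out):
    """One convolution branch, heavily simplified: down-sample to
    n_first points (different per branch), then to a common n_out,
    and produce c_out feature channels per point via a random linear
    map standing in for the learned convolution."""
    sub = points[np.linspace(0, len(points) - 1, n_first).astype(int)]
    sub = sub[np.linspace(0, n_first - 1, n_out).astype(int)]
    w = rng.standard_normal((sub.shape[1], c_out))
    return sub @ w                      # shape (n_out, c_out)

def cascade(branch_outputs):
    """Cascade layer: N stays fixed, channel counts C are summed."""
    return np.concatenate(branch_outputs, axis=1)

cloud = rng.standard_normal((1024, 3))
# Three scales: first-layer N differs (512/256/128), last-layer N agrees (64).
outs = [branch(cloud, 512, 64, 8),
        branch(cloud, 256, 64, 16),
        branch(cloud, 128, 64, 32)]
fused = cascade(outs)                   # (64, 8 + 16 + 32)
```

Concatenating along the channel axis is what "keeping N unchanged and superposing C" amounts to: the fused cloud still has 64 points, each now carrying 56 feature channels.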
In the network layer structure of fig. 2, "N=…, C=…" denotes a convolution layer (X-conv layer) that transforms its input point cloud into an output point cloud, where N denotes the number of output points and C denotes the number of feature channels per output point, and "fc(x)" denotes a fully connected layer with x output units. The normalization layer normalizes the features of the output channels with a softmax, mapping each output channel feature xi to exp(xi) / Σj exp(xj), where exp denotes the exponential operation.
As shown in fig. 3, the discriminator judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud. The dashed boxes are data layers and the solid boxes are network layers: "8192 × 5" denotes the point cloud to be judged, i.e. the generated labeled point cloud output by the generator, with 8192 points and 5 channels (such as color information red (R), green (G), blue (B), reflection intensity, and the task channel cls); "1 × 2" denotes the output layer, which outputs the probabilities that the point cloud to be judged is real or fake.
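The shape of such a discriminator can be sketched as follows. Only the 8192 × 5 input, the averaging layer, and the 1 × 2 real/fake output come from the description; the tanh feature map, the hidden size of 16, and the random weights are illustrative assumptions standing in for the learned layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(labelled_cloud, w1, w2):
    """Toy discriminator: per-point features -> average over points
    (the averaging layer) -> 2-way softmax over {real, fake}."""
    h = np.tanh(labelled_cloud @ w1)    # (8192, 16) per-point features
    pooled = h.mean(axis=0)             # averaging layer: (16,)
    logits = pooled @ w2                # (2,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                  # [p_real, p_fake]

# A labelled cloud: 8192 points x 5 channels (e.g. R, G, B, intensity, cls).
cloud = rng.standard_normal((8192, 5))
w1 = rng.standard_normal((5, 16))
w2 = rng.standard_normal((16, 2))
probs = discriminator(cloud, w1, w2)
```

The averaging layer is what lets a fixed-size output (1 × 2) be produced from a whole cloud: pooling over the point axis makes the judgment independent of point ordering.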
It should be noted that the specific values in fig. 2 and fig. 3, such as the point count 8192 and the values of the parameters N, C, and x in fc(x), are only examples and may be set to other values as the business requires.
Taking streetscape laser point cloud edge detection as an example, the method comprises a training stage and a testing stage.
Training stage: first, a large amount of colored point cloud data is obtained, either from an existing public data set or captured by a vehicle-mounted laser camera, and the data are sliced along the driving track, for example by taking, every 10 meters, the points within a 20-meter radius of the current vehicle position. Each point cloud slice is then processed as follows: a) extract 8192 points by k-means; b) label the edge attributes of the points by manually drawing bounding boxes; c) iteratively train the generative adversarial network on the training data until the generator can forge a fake sample, the generated labeled point cloud, whose authenticity the discriminator can hardly identify.
Testing stage: first, the vehicle-mounted laser camera collects colored point cloud data, which are sliced along the driving track in the same way (every 10 meters, the points within a 20-meter radius of the current vehicle position). Each slice is then processed by a) extracting 8192 points by k-means and b) performing edge detection (i.e. edge labeling) with the generator. Finally, all edge-labeled point cloud slices can be stitched back together.
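The slicing used in both stages (a slice every 10 meters, keeping points within a 20-meter radius of the slice center) might be implemented along these lines; the straight synthetic track and the point counts below are made up for illustration.

```python
import numpy as np

def slice_along_track(points, track, step=10.0, radius=20.0):
    """Cut point-cloud slices every `step` metres along the driving
    track, each containing the points within `radius` of the slice
    centre (a track position)."""
    slices = []
    seg = np.linalg.norm(np.diff(track, axis=0), axis=1)
    dist = np.cumsum(np.r_[0.0, seg])          # arc length per track sample
    for s in np.arange(0.0, dist[-1] + 1e-9, step):
        idx = np.searchsorted(dist, s).clip(0, len(track) - 1)
        centre = track[idx]
        mask = np.linalg.norm(points - centre, axis=1) <= radius
        slices.append(points[mask])
    return slices

# Straight 30 m track sampled every metre; points scattered around it.
track = np.column_stack([np.arange(31.0), np.zeros(31), np.zeros(31)])
rng = np.random.default_rng(0)
points = rng.uniform([-5, -5, -1], [35, 5, 1], size=(2000, 3))
slices = slice_along_track(points, track)      # centres at 0, 10, 20, 30 m
```

Adjacent slices deliberately overlap (radius 20 m against a 10 m step), so stitching the labeled slices back together, as the testing stage describes, leaves no gaps along the track.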
In some embodiments, the point cloud edge detection of the present disclosure also applies to business scenarios such as point cloud compression, simulation construction, and environment perception. In point cloud compression, removing non-edge points while keeping the edge points that fall on object contours reduces the size of the cloud. In simulation construction, vectorizing the edge points yields a simple simulated environment. In environment perception, semantic edge points help a system understand its surroundings; for example, street elements such as lane lines and curbs usually fall on edges, so semantic edge points aid the intelligent analysis of an on-board system.
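The compression use just mentioned reduces to a threshold on the per-point edge probability; the 0.8 threshold and the random stand-in probabilities below are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

def compress_by_edges(points, edge_prob, keep_threshold=0.5):
    """Point-cloud compression as described: drop non-edge points and
    keep those whose predicted edge probability clears a threshold."""
    return points[edge_prob >= keep_threshold]

rng = np.random.default_rng(0)
points = rng.standard_normal((1000, 3))
edge_prob = rng.uniform(size=1000)       # stand-in for the network output
compressed = compress_by_edges(points, edge_prob, keep_threshold=0.8)
```

The retained subset keeps the contour-defining points, trading a much smaller cloud for the interior detail that edges do not carry.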
Fig. 4 is a schematic structural diagram of a point cloud edge detection apparatus according to some embodiments of the present disclosure. As shown in fig. 4, the point cloud edge detection apparatus 400 of this embodiment includes:
a memory 410; and
a processor 420 coupled to the memory, the processor configured to execute the point cloud edge detection method of any of the foregoing embodiments based on instructions stored in the memory.
Memory 410 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
Fig. 5 is a schematic structural diagram of a point cloud edge detection apparatus according to some embodiments of the present disclosure. As shown in fig. 5, the point cloud edge detection apparatus 500 of this embodiment includes:
a training unit 510 configured to train a generative adversarial network with point cloud training data, the network comprising a generator network and a discriminator network, and the training data comprising a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud, wherein the training process comprises: the generator network performs edge detection on the first input point cloud to obtain a generated labeled point cloud, the discriminator network judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud, and the adversarial network is trained iteratively until the difference between the real and fake probabilities in the discriminator's judgment result is smaller than a preset value.
A detecting unit 520 configured to perform edge detection on the second input point cloud to be detected by using the trained generation network in the generation countermeasure network.
Some embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the point cloud edge detection method of any one of the preceding embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A point cloud edge detection method comprises the following steps:
training a generative adversarial network with point cloud training data, wherein the generative adversarial network comprises a generator network and a discriminator network, and the point cloud training data comprise a first input point cloud for training and a ground-truth labeled point cloud obtained by annotating the first input point cloud, the training process comprising: the generator network performs edge detection on the first input point cloud to obtain a generated labeled point cloud, the discriminator network judges the authenticity of the generated labeled point cloud against the ground-truth labeled point cloud, and the adversarial network is trained iteratively until the difference between the real and fake probabilities in the discriminator's judgment result is smaller than a preset value;
and performing edge detection on a second input point cloud to be detected using the generator network of the trained generative adversarial network.
2. The method of claim 1, wherein,
a convolution layer of the generator or discriminator network down-samples its input point cloud into point clouds of several granularities, performs an independent convolution on each granularity, and cascades the results into a single point cloud.
3. The method of claim 1, wherein,
a convolution layer of the generator or discriminator network is provided with multiple convolution branches of different scales;
and a cascade layer of the generator or discriminator network concatenates the convolution branches while keeping the point positions unchanged.
4. The method of claim 3, wherein,
the first convolution layers located in different convolution branches have different values of the parameter N,
and the last convolution layers in the different convolution branches share the same value of the parameter N,
wherein the parameter N represents the number of points of the output point cloud.
5. The method of claim 4, wherein,
the cascade layer keeps the parameter N of the last convolution layer in each convolution branch unchanged and sums the parameter C of the last convolution layers across the branches,
wherein the parameter C represents the number of characteristic channels of the point of the output point cloud.
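The branch-and-concatenate structure of claims 3 to 5 can be sketched as follows. Each branch's first layer outputs a different number of points N, each branch's last layer outputs the same N, and the concatenation layer stacks the branches along the channel dimension C. The nearest-index resampling and the particular N and C values are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_OUT = 64  # last-layer N, identical in every branch (claim 4)

def branch(points, n_first, c_out):
    # First layer: output n_first points (differs per branch, claim 4).
    first = points[:n_first]
    w = rng.normal(size=(points.shape[1], c_out))
    feats = first @ w                               # per-point feature map
    # Last layer: resample back to N_OUT points (nearest-index, illustrative).
    idx = np.linspace(0, n_first - 1, N_OUT).astype(int)
    return feats[idx]                               # (N_OUT, c_out)

cloud = rng.normal(size=(128, 3))
b1 = branch(cloud, n_first=128, c_out=16)  # different first-layer N per branch
b2 = branch(cloud, n_first=64, c_out=16)
b3 = branch(cloud, n_first=32, c_out=32)

# Concatenation layer (claim 5): N stays N_OUT, the channel counts C sum.
fused = np.concatenate([b1, b2, b3], axis=1)       # (64, 16 + 16 + 32)
```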
6. The method of claim 1, wherein the result of the edge detection comprises:
a probability of each point of the second input point cloud being an edge point;
or,
probabilities of the semantic categories of each point of the second input point cloud.
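The two output forms of claim 6 correspond to the usual binary and multi-class probability heads. A sketch, assuming per-point logits produced by the generator network (the logit values here are random placeholders):

```python
import numpy as np

def edge_probability(logits):
    # Binary form of claim 6: per-point probability of being an edge point.
    return 1.0 / (1.0 + np.exp(-logits))                     # sigmoid

def category_probabilities(logits):
    # Multi-class form of claim 6: per-point semantic-category probabilities.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))   # stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
edge_scores = edge_probability(rng.normal(size=100))             # (100,) in (0, 1)
cat_scores = category_probabilities(rng.normal(size=(100, 5)))   # rows sum to 1
```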
7. The method of claim 1, wherein the second input point cloud is obtained by sparsifying an original point cloud to be detected.
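One common way to obtain such a sparsified point cloud (the patent does not specify the scheme) is voxel-grid thinning, keeping one point per occupied voxel; the voxel size here is an illustrative choice:

```python
import numpy as np

def sparsify(points, voxel=0.5):
    # Thin a dense cloud: quantize coordinates to a voxel grid and keep the
    # first point falling into each occupied voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

rng = np.random.default_rng(4)
dense = rng.uniform(0, 2, size=(1000, 3))  # original point cloud to be detected
thin = sparsify(dense, voxel=0.5)          # second input point cloud
```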
8. A point cloud edge detection apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the point cloud edge detection method of any of claims 1-7 based on instructions stored in the memory.
9. A point cloud edge detection apparatus, comprising:
the training unit is configured to train a generative adversarial network using point cloud training data, wherein the generative adversarial network comprises a generator network and a discriminator network, and the point cloud training data comprises a first input point cloud for training and a ground-truth annotated point cloud obtained by annotating the first input point cloud, the training process comprising: performing, by the generator network, edge detection on the first input point cloud to obtain a generated annotated point cloud; discriminating, by the discriminator network, the authenticity of the generated annotated point cloud against the ground-truth annotated point cloud; and iteratively training the generative adversarial network until the difference between the true probability and the false probability in the discrimination result of the discriminator network is smaller than a preset value;
and the detection unit is configured to perform edge detection on a second input point cloud to be detected using the generator network of the trained generative adversarial network.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the point cloud edge detection method of any one of claims 1 to 7.
CN201811545580.7A 2018-12-18 2018-12-18 Point cloud edge detection method and device Pending CN111340831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811545580.7A CN111340831A (en) 2018-12-18 2018-12-18 Point cloud edge detection method and device

Publications (1)

Publication Number Publication Date
CN111340831A true CN111340831A (en) 2020-06-26

Family

ID=71185093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811545580.7A Pending CN111340831A (en) 2018-12-18 2018-12-18 Point cloud edge detection method and device

Country Status (1)

Country Link
CN (1) CN111340831A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508983A (en) * 2020-12-18 2021-03-16 华南理工大学 Point cloud down-sampling method based on image edge detection
CN112508983B (en) * 2020-12-18 2023-06-20 华南理工大学 Point cloud downsampling method based on image edge detection
CN112396069A (en) * 2021-01-20 2021-02-23 深圳点猫科技有限公司 Semantic edge detection method, device, system and medium based on joint learning
CN112819960A (en) * 2021-02-01 2021-05-18 电子科技大学 Antagonistic point cloud generation method, storage medium and terminal
CN112819960B (en) * 2021-02-01 2022-06-24 电子科技大学 Antagonistic point cloud generation method, storage medium and terminal
CN112990373A (en) * 2021-04-28 2021-06-18 四川大学 Convolution twin point network blade profile splicing system based on multi-scale feature fusion
CN118015035A (en) * 2024-04-09 2024-05-10 法奥意威(苏州)机器人***有限公司 Point cloud edge detection method and device based on single neighborhood characteristics and electronic equipment

Similar Documents

Publication Publication Date Title
CN111340831A (en) Point cloud edge detection method and device
CN105760886B (en) A kind of more object segmentation methods of image scene based on target identification and conspicuousness detection
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
Ham et al. Automated content-based filtering for enhanced vision-based documentation in construction toward exploiting big visual data from drones
CN110751215B (en) Image identification method, device, equipment, system and medium
CN111709420A (en) Text detection method, electronic device and computer readable medium
KR20200052932A (en) Bone marrow cell labeling method and system
CN109903282B (en) Cell counting method, system, device and storage medium
CN111932577B (en) Text detection method, electronic device and computer readable medium
CN113223614A (en) Chromosome karyotype analysis method, system, terminal device and storage medium
CN111783732A (en) Group mist identification method and device, electronic equipment and storage medium
CN117495891B (en) Point cloud edge detection method and device and electronic equipment
CN104966109A (en) Medical laboratory report image classification method and apparatus
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN114332058A (en) Serum quality identification method, device, equipment and medium based on neural network
CN111967449B (en) Text detection method, electronic device and computer readable medium
CN113780287A (en) Optimal selection method and system for multi-depth learning model
CN110659631A (en) License plate recognition method and terminal equipment
CN112560925A (en) Complex scene target detection data set construction method and system
CN116824135A (en) Atmospheric natural environment test industrial product identification and segmentation method based on machine vision
CN113537253A (en) Infrared image target detection method and device, computing equipment and storage medium
CN108133210B (en) Image format identification method and device
CN111797922A (en) Text image classification method and device
EP3611695A1 (en) Generating annotation data of tissue images
CN113903015B (en) Lane line identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination