CN115471808A - Processing method and device for reconstructing target point cloud based on reference image


Info

Publication number: CN115471808A
Application number: CN202211296877.0A
Authority: CN (China)
Inventor: 陈东
Applicant/Assignee: Suzhou Qingyu Technology Co Ltd
Other languages: Chinese (zh)
Legal status: Pending

Classifications

All classifications fall under G (Physics), G06 (Computing; Calculating or Counting), G06V (Image or Video Recognition or Understanding):

    • G06V20/56 Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V10/762 Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Abstract

The embodiment of the invention relates to a processing method and device for reconstructing a target point cloud based on a reference image. The method comprises: acquiring a first point cloud and a first reference image; performing point cloud segmentation to obtain a plurality of target point clouds; performing target semantic segmentation to obtain a plurality of target images; estimating the correlation between each target point cloud and each target image; forming a correlation set from the correlations greater than a correlation threshold; taking the target image with the maximum correlation in the set as the matching image; gathering the target point clouds matched to the same target image into a first target point cloud set; counting the number of target point clouds in each first target point cloud set; recording each first target point cloud set whose count is greater than 1 as a second target point cloud set; and performing target point cloud reorganization on each second target point cloud set. The method can fuse and reorganize over-segmented target point clouds based on the reference image, thereby reducing over-segmentation.

Description

Processing method and device for reconstructing target point cloud based on reference image
Technical Field
The invention relates to the technical field of data processing, and in particular to a processing method and device for reconstructing a target point cloud based on a reference image.
Background
When an automatic driving system segments a laser radar point cloud, over-segmentation often occurs: points belonging to the same target object are split into two or more groups of target point clouds. This reduces the point cloud processing precision of the perception module of the automatic driving system.
Disclosure of Invention
The invention aims to provide a processing method, device, electronic device and computer-readable storage medium for reconstructing a target point cloud based on a reference image, addressing the defects of the prior art. The method performs target identification on a laser radar point cloud to obtain a plurality of target point clouds, performs target semantic segmentation on a reference image of the same scene as the laser radar point cloud to obtain a plurality of target images, calculates the correlation between each target point cloud and each target image, takes the most relevant target image as the corresponding matching image, clusters the target point clouds corresponding to the same matching image into a corresponding target point cloud set, and fuses the target point clouds in each target point cloud set according to a preset minimum distance threshold. In this way, over-segmented target point clouds can be fused and reorganized based on the reference image, reducing over-segmentation errors and improving point cloud processing precision.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a processing method for reconstructing a target point cloud based on a reference image, where the method includes:
acquiring a first point cloud and a corresponding first reference image;
performing point cloud segmentation processing on the first point cloud to obtain a plurality of first target point clouds;
performing target semantic segmentation processing on the first reference image to obtain a plurality of first target images;
estimating the correlation between the first target point cloud and each first target image to generate a corresponding first correlation; extracting the first correlations greater than a preset correlation threshold to form a corresponding first correlation set; and taking the first target image corresponding to the maximum correlation in the first correlation set as the matching image of the current first target point cloud;
gathering the one or more first target point clouds whose matching image is the same first target image into a corresponding first target point cloud set; counting the number of first target point clouds in each first target point cloud set to generate a corresponding first number; and recording each first target point cloud set whose first number is greater than 1 as a corresponding second target point cloud set;
and performing target point cloud reorganization processing on each second target point cloud set.
Preferably, the estimating the correlation between the first target point cloud and each of the first target images to generate a corresponding first correlation specifically includes:
performing coordinate conversion processing from point cloud coordinates to pixel coordinates on three-dimensional point cloud coordinates of each point of the first target point cloud to obtain corresponding first point cloud pixel coordinates, and forming a corresponding first point cloud pixel coordinate set by all the obtained first point cloud pixel coordinates;
plotting pixel points on the first reference image according to each first point cloud pixel coordinate set, and performing convex hull drawing processing on the pixel point coverage area of the current first point cloud pixel coordinate set to generate a corresponding first convex hull; calculating the coincidence degree of the current first convex hull with each first target image to generate a corresponding first coincidence degree; and taking the first coincidence degree as the corresponding first correlation.
Preferably, the performing the target point cloud reorganization process on each second target point cloud set specifically includes:
step 31, selecting one first target point cloud from the second target point cloud set as the current point cloud, and marking the other first target point clouds as other point clouds;
step 32, traversing each of the other point clouds; during the traversal, taking the currently traversed other point cloud as the current other point cloud; calculating the point distance between each point of the current point cloud and each point of the current other point cloud to generate corresponding first point distances; taking the minimum of all the obtained first point distances as the corresponding first shortest distance; if the first shortest distance is smaller than a preset shortest distance threshold, marking the current other point cloud as a matching point cloud of the current point cloud; and, when the traversal ends, counting the number of matching point clouds of the current point cloud to generate a corresponding second number;
step 33, judging whether the second number is greater than 0; if the second quantity is 0, marking the current point cloud as an isolated point cloud; if the second number is larger than 0, performing point cloud fusion on the current point cloud and one or more corresponding matching point clouds, and taking the obtained fusion point cloud as a new first target point cloud;
step 34, performing statistics again on the number of the first target point clouds in the second target point cloud set to generate a corresponding third number; counting the number of the isolated point clouds in the second target point cloud set to generate a corresponding fourth number;
step 35, when the third number is not 1 and the fourth number is not equal to (the third number - 1), returning to step 31; when the third number is 1 or the fourth number is equal to (the third number - 1), ending the target point cloud reorganization process.
A second aspect of the embodiments of the present invention provides an apparatus for implementing the processing method for reconstructing a target point cloud based on a reference image according to the first aspect, where the apparatus includes: the system comprises an acquisition module, a point cloud preprocessing module, an image preprocessing module, a correlation degree processing module and a target point cloud reconstruction module;
the acquisition module is used for acquiring a first point cloud and a corresponding first reference image;
the point cloud preprocessing module is used for carrying out point cloud segmentation processing on the first point cloud to obtain a plurality of first target point clouds;
the image preprocessing module is used for performing target semantic segmentation processing on the first reference image to obtain a plurality of first target images;
the correlation degree processing module is used for estimating the correlation between the first target point cloud and each first target image to generate a corresponding first correlation; extracting the first correlations greater than a preset correlation threshold to form a corresponding first correlation set; and taking the first target image corresponding to the maximum correlation in the first correlation set as the matching image of the current first target point cloud;
the target point cloud reorganization module is used for gathering the one or more first target point clouds whose matching image is the same first target image into a corresponding first target point cloud set; counting the number of first target point clouds in each first target point cloud set to generate a corresponding first number; recording each first target point cloud set whose first number is greater than 1 as a corresponding second target point cloud set; and performing target point cloud reorganization processing on each second target point cloud set.
A third aspect of an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a transceiver;
the processor is configured to be coupled to the memory, read and execute instructions in the memory, so as to implement the method steps of the first aspect;
the transceiver is coupled to the processor, and the processor controls the transceiver to transmit and receive messages.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing computer instructions that, when executed by a computer, cause the computer to perform the method of the first aspect.
The embodiment of the invention provides a processing method, device, electronic device and computer-readable storage medium for reconstructing a target point cloud based on a reference image. The method performs target identification on a laser radar point cloud to obtain a plurality of target point clouds, performs target semantic segmentation on a reference image of the same scene as the laser radar point cloud to obtain a plurality of target images, calculates the correlation between each target point cloud and each target image, takes the most relevant target image as the corresponding matching image, clusters the target point clouds corresponding to the same matching image into a corresponding target point cloud set, and fuses the target point clouds in each target point cloud set according to a preset minimum distance threshold. In this way, over-segmented target point clouds can be fused and reorganized based on the reference image, reducing over-segmentation errors and improving point cloud processing precision.
Drawings
Fig. 1 is a schematic diagram of a processing method for reconstructing a target point cloud based on a reference image according to an embodiment of the present invention;
fig. 2 is a block diagram of a processing apparatus for reconstructing a target point cloud based on a reference image according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
An embodiment of the present invention provides a processing method for reconstructing a target point cloud based on a reference image, as shown in fig. 1, which is a schematic diagram of the processing method for reconstructing the target point cloud based on the reference image provided in the embodiment of the present invention, the method mainly includes the following steps:
step 1, acquiring a first point cloud and a corresponding first reference image.
Here, the first point cloud is a laser radar point cloud generated by a first laser radar, and the first reference image is a scene image captured by a first camera. The radar scanning scene of the first laser radar and the image capture scene of the first camera are assumed to be the same scene; that is, a target in the first point cloud also appears in the first reference image.
Step 2, performing point cloud segmentation processing on the first point cloud to obtain a plurality of first target point clouds.
In the embodiment of the invention, a point cloud target detection model may be used to perform point cloud target detection on the first point cloud to obtain a plurality of first target identification frames, and the point cloud subset in each first target identification frame is taken as a corresponding first target point cloud; the point cloud target detection model includes the PointNet model, the PointNet++ model and the like, and other mature models may also be selected, which are not enumerated one by one here. Alternatively, the embodiment of the invention may perform point cloud segmentation on the first point cloud based on a point cloud clustering algorithm to obtain a plurality of first point cloud subsets, each taken as a corresponding first target point cloud; the point cloud clustering algorithm includes the Euclidean clustering segmentation method, the point cloud K-Means clustering algorithm and the like, and other mature algorithms may also be selected, which are not enumerated one by one here.
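As an illustration of the Euclidean clustering route named above, the following minimal sketch groups points by single-linkage connectivity; the function name and tolerance value are illustrative, not from the patent:

```python
import numpy as np

def euclidean_cluster(points, tol=0.5):
    """Single-linkage Euclidean clustering: two points share a cluster when
    a chain of neighbours closer than `tol` connects them."""
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        stack = [seed]
        while stack:
            i = stack.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < tol) & (labels == -1))[0]:
                labels[j] = cluster
                stack.append(j)
        cluster += 1
    return labels

# two well-separated pairs of points -> two clusters
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 5.0, 0.0], [5.1, 5.0, 0.0]])
labels = euclidean_cluster(pts, tol=0.5)
print(labels)
```

Each label group would then correspond to one first target point cloud.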
Step 3, performing target semantic segmentation processing on the first reference image to obtain a plurality of first target images.
Here, in the embodiment of the present invention, an image semantic segmentation model may be used to perform target semantic segmentation on the first reference image to obtain a plurality of first target mask maps, each taken as a corresponding first target image; the image semantic segmentation model includes the FCN model, the U-Net model, the DeepLabV3+ model, the OCRNet model and the like, and other mature models may also be used, which are not enumerated one by one here.
Step 4, estimating the correlation between the first target point cloud and each first target image to generate a corresponding first correlation; extracting the first correlations greater than a preset correlation threshold to form a corresponding first correlation set; and taking the first target image corresponding to the maximum correlation in the first correlation set as the matching image of the current first target point cloud;
here, in the embodiment of the present invention, a corresponding target mask image, that is, a first target image, is located for each first target point cloud in a first reference image, and is used as a corresponding matching image;
the method specifically comprises the following steps: step 41, estimating the correlation between the first target point cloud and each first target image to generate corresponding first correlation;
the method specifically comprises the following steps: step 411, performing coordinate conversion processing from point cloud coordinates to pixel coordinates on three-dimensional point cloud coordinates of each point of the first target point cloud to obtain corresponding first point cloud pixel coordinates, and forming a corresponding first point cloud pixel coordinate set by all the obtained first point cloud pixel coordinates;
here, the intrinsic and extrinsic parameters of the first laser radar corresponding to the first point cloud are known, the intrinsic and extrinsic parameters of the first camera corresponding to the first reference image are known, and the installation positions of the first laser radar and the first camera are known, so the coordinate conversion matrix from point cloud coordinates to pixel coordinates is known. Substituting the three-dimensional point cloud coordinates of each point of the first target point cloud into this conversion matrix yields the pixel coordinates of each point of the first target point cloud in the first reference image, namely the first point cloud pixel coordinates. The first point cloud pixel coordinate set is in effect the set of projected pixel coordinates of the first target point cloud on the first reference image;
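The coordinate conversion described here can be sketched with a pinhole camera model; the intrinsic matrix K and the identity lidar-to-camera extrinsic T below are placeholder calibration values for demonstration, not values from the patent:

```python
import numpy as np

# Illustrative calibration (not from the patent): pinhole intrinsics K and
# an identity lidar-to-camera extrinsic T.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)

def project_points(points_xyz, K, T):
    """Convert Nx3 point cloud coordinates to Nx2 pixel coordinates (u, v)."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4
    cam = (T @ homo.T)[:3]        # points expressed in the camera frame, 3xN
    uvw = K @ cam                 # pinhole projection
    return (uvw[:2] / uvw[2]).T   # perspective divide

pix = project_points(np.array([[0.0, 0.0, 10.0]]), K, T)
print(pix)  # a point on the optical axis projects to the principal point
```

The rows of the result form the first point cloud pixel coordinate set for one target point cloud.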
step 412, performing pixel point tracing on the first reference image according to each first point cloud pixel coordinate set, and performing convex hull drawing processing on a pixel point coverage area of the current first point cloud pixel coordinate set to generate a corresponding first convex hull; calculating the contact ratio of the current first convex hull and each first target image to generate a corresponding first contact ratio; taking the first contact ratio as a corresponding first correlation degree;
here, the Convex Hull is a geometric concept: given a set X in a real vector space, the intersection of all convex sets containing X is called the convex hull of X. The first convex hull in the embodiment of the invention is in effect the convex polygon corresponding to the first point cloud pixel coordinate set. The first coincidence degree in the embodiment of the invention is the ratio of the area of the intersection of the first convex hull and the first target image to the area of the first convex hull; the higher the first coincidence degree, the higher the correlation between the first convex hull and the first target image, so the embodiment of the invention takes the first coincidence degree as the corresponding first correlation;
it should be noted that, when performing convex hull drawing on the pixel coverage area of the current first point cloud pixel coordinate set, the embodiment of the present invention locates a plurality of convex hull vertices from the first point cloud pixel coordinate set using a convex hull algorithm and connects all the obtained convex hull vertices in sequence to obtain the corresponding first convex hull; the convex hull algorithm includes the Jarvis march algorithm, the Graham scan algorithm, the Andrew monotone chain algorithm and the like, and other mature algorithms may also be selected, which are not enumerated one by one here;
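Of the convex hull algorithms listed, Andrew's monotone chain is the simplest to show; this is a standard textbook implementation, not code from the patent:

```python
def monotone_chain_hull(pts):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, pts)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

square_plus_inner = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
hull = monotone_chain_hull(square_plus_inner)
print(hull)  # the interior point (2, 2) is discarded
```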
it should be further noted that the embodiment of the present invention supports multiple ways of calculating the coincidence degree of the current first convex hull with each first target image. One way is: clustering the pixel points of the first target image that fall inside the first convex hull as a corresponding first overlapping pixel point set, performing convex hull drawing on the coverage area of the first overlapping pixel point set based on the above convex hull algorithm to generate a corresponding first overlapping convex hull, and taking the area ratio of the first overlapping convex hull to the first convex hull as the corresponding first coincidence degree. Another way is: performing convex hull drawing on the pixel coverage area of the first target image based on the above convex hull algorithm to generate a corresponding first target image convex hull, taking the intersection of the first target image convex hull and the first convex hull as a corresponding second overlapping convex hull, and taking the area ratio of the second overlapping convex hull to the first convex hull as the corresponding first coincidence degree;
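Either way, the coincidence degree reduces to an area ratio against the first convex hull. A minimal sketch, approximating areas by counting pixels on a boolean grid (an assumption made for brevity; the patent computes polygon areas):

```python
import numpy as np

def coincidence_ratio(hull_mask, target_mask):
    """Pixel-grid approximation of the first coincidence degree:
    area(hull intersect target) / area(hull)."""
    hull_area = hull_mask.sum()
    if hull_area == 0:
        return 0.0
    return float((hull_mask & target_mask).sum() / hull_area)

# toy 8x8 grid: the hull covers the left half (32 px),
# the target mask covers a centred 4x4 block (16 px)
hull = np.zeros((8, 8), dtype=bool);   hull[:, :4] = True
target = np.zeros((8, 8), dtype=bool); target[2:6, 2:6] = True
ratio = coincidence_ratio(hull, target)
print(ratio)  # 8 overlapping pixels / 32 hull pixels = 0.25
```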
step 42, extracting the first correlation degrees which are greater than a preset correlation degree threshold value to form a corresponding first correlation degree set; and taking the first target image corresponding to the maximum correlation in the first correlation set as a matching image of the current first target point cloud.
Here, the correlation threshold is a preset parameter threshold. A first correlation below this threshold is regarded as too low and is ignored; only a first correlation above the threshold is treated as a valid value.
Step 5, gathering one or more first target point clouds of which the matching images are the same first target image into a corresponding first target point cloud set; counting the number of the first target point clouds of each first target point cloud set to generate a corresponding first number; and recording the first target point cloud sets with the first number larger than 1 as corresponding second target point cloud sets.
Here, each first target point cloud set corresponds to one first target image, and different first target point cloud sets correspond to different first target images. If the first number of a certain first target point cloud set is greater than 1, the plurality of first target point clouds in that set may have been caused by over-segmentation; the set is therefore marked as a second target point cloud set, and target point cloud reorganization is performed on it in the subsequent steps.
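The grouping in step 5 can be sketched as a dictionary keyed by matching image; the identifiers below are hypothetical:

```python
from collections import defaultdict

# Hypothetical matching results: target point cloud id -> matched target image id
matches = {"pc0": "img_car", "pc1": "img_car", "pc2": "img_tree"}

groups = defaultdict(list)            # first target point cloud sets
for pc, img in matches.items():
    groups[img].append(pc)

# first number > 1 -> second target point cloud set (over-segmentation suspect)
second_sets = {img: pcs for img, pcs in groups.items() if len(pcs) > 1}
print(second_sets)
```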
Step 6, performing target point cloud reorganization processing on each second target point cloud set;
the target point cloud reorganization in this step aims to fuse those first target point clouds in the second target point cloud set whose minimum point distances satisfy the preset shortest distance threshold into new first target point clouds, until only one first target point cloud remains in the second target point cloud set or the minimum point distance between any two first target point clouds no longer satisfies the threshold;
the method specifically comprises the following steps: step 61, selecting one first target point cloud from the current second target point cloud set as a current point cloud; marking other first target point clouds as other point clouds;
step 62, traversing each of the other point clouds; during the traversal, taking the currently traversed other point cloud as the current other point cloud; calculating the point distance between each point of the current point cloud and each point of the current other point cloud to generate corresponding first point distances; taking the minimum of all the obtained first point distances as the corresponding first shortest distance; if the first shortest distance is smaller than a preset shortest distance threshold, marking the current other point cloud as a matching point cloud of the current point cloud; and, when the traversal ends, counting the number of matching point clouds of the current point cloud to generate a corresponding second number;
here, the shortest distance threshold is a preset distance threshold parameter;
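The first shortest distance of step 62 is the minimum over all pairwise point distances between two point clouds; a small sketch using NumPy broadcasting (names and values are illustrative):

```python
import numpy as np

def first_shortest_distance(cloud_a, cloud_b):
    """Minimum over all pairwise point distances between two point clouds."""
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]   # |A| x |B| x 3
    return float(np.sqrt((diff ** 2).sum(-1)).min())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[1.2, 0.0, 0.0], [9.0, 9.0, 9.0]])
d = first_shortest_distance(a, b)
print(d)  # closest pair is (1, 0, 0) and (1.2, 0, 0)
```

Comparing `d` against the shortest distance threshold decides whether the two clouds are marked as matching.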
step 63, judging whether the second number is larger than 0; if the second quantity is 0, marking the current point cloud as an isolated point cloud; if the second number is larger than 0, performing point cloud fusion on the current point cloud and one or more corresponding matched point clouds, and taking the obtained fusion point cloud as a new first target point cloud;
here, the embodiment of the present invention marks isolated point clouds in order to prevent excessive reorganization of point clouds while reducing point cloud over-segmentation;
step 64, carrying out statistics again on the number of the first target point clouds in the second target point cloud set to generate a corresponding third number; counting the number of the isolated point clouds in the second target point cloud set to generate a corresponding fourth number;
step 65, when the third number is not 1 and the fourth number is not equal to (the third number - 1), returning to step 61; when the third number is 1 or the fourth number is equal to (the third number - 1), ending the target point cloud reorganization process.
Here, if the third number is 1, only one first target point cloud remains in the current second target point cloud set, no further reorganization is needed, and the target point cloud reorganization process can end. If the fourth number equals (the third number - 1), then all first target point clouds in the set except one have been marked as isolated point clouds. From the identification process of isolated point clouds, an isolated point cloud is one whose minimum point distance to every other first target point cloud fails the preset shortest distance threshold; the one remaining first target point cloud therefore also fails the threshold with respect to all the others, meaning it is isolated as well. In other words, when the fourth number equals (the third number - 1), all first target point clouds in the set are isolated. Since the embodiment of the invention does not fuse isolated point clouds, no further reorganization of the current second target point cloud set is needed, and the process can end. Conversely, if the third number is not 1 and the fourth number is not equal to (the third number - 1), more than one first target point cloud remains and not all of them are isolated; reorganization of the current second target point cloud set can continue, so the process returns to step 61.
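Steps 61 to 65 can be sketched end to end as follows; the threshold, helper names and list bookkeeping are illustrative, since the patent does not prescribe a concrete implementation:

```python
import numpy as np

def shortest_dist(a, b):
    """First shortest distance between two point clouds."""
    diff = a[:, None, :] - b[None, :, :]
    return float(np.sqrt((diff ** 2).sum(-1)).min())

def reorganize(clouds, dist_thresh=0.5):
    """Fuse clouds whose first shortest distance is below the threshold,
    ending when one cloud remains or all remaining clouds are isolated."""
    isolated = [False] * len(clouds)
    while True:
        n = len(clouds)
        if n == 1 or sum(isolated) >= n - 1:                  # step 65
            return clouds
        cur = next(i for i in range(n) if not isolated[i])    # step 61
        match = [j for j in range(n) if j != cur              # step 62
                 and shortest_dist(clouds[cur], clouds[j]) < dist_thresh]
        if not match:                                         # step 63: isolated
            isolated[cur] = True
            continue
        fused = np.vstack([clouds[cur]] + [clouds[j] for j in match])
        clouds = [c for i, c in enumerate(clouds) if i != cur and i not in match]
        clouds.append(fused)              # fused cloud is a new target point cloud
        isolated = [False] * len(clouds)  # step 64: recount and re-evaluate

clouds = [np.array([[0.0, 0.0, 0.0]]),
          np.array([[0.3, 0.0, 0.0]]),
          np.array([[10.0, 10.0, 10.0]])]
result = reorganize(clouds, dist_thresh=0.5)
print([len(c) for c in result])  # the two nearby clouds fuse; the far one stays
```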
To better understand the above steps 61-65, two examples are given below for a simple illustration.
The first example: suppose the three first target point clouds in the second target point cloud set are denoted A, B and C;
in step 61, the first target point cloud A in the second target point cloud set is taken as the current point cloud, and the first target point clouds B and C are recorded as other point clouds;
in step 62, the first target point clouds B and C are traversed. When the current other point cloud is the first target point cloud B, the point distances between each point of the first target point cloud A and each point of the first target point cloud B are calculated to generate corresponding first point distances, and the minimum of all the first point distances is taken as the corresponding first shortest distance d_B; comparison confirms that the first shortest distance d_B is smaller than the preset shortest distance threshold, so the first target point cloud B is marked as a matching point cloud corresponding to the first target point cloud A. When the current other point cloud is the first target point cloud C, the point distances between each point of the first target point cloud A and each point of the first target point cloud C are likewise calculated, and the minimum of all the first point distances is taken as the corresponding first shortest distance d_C; comparison confirms that d_C is also smaller than the preset shortest distance threshold, so the first target point cloud C is marked as a matching point cloud corresponding to the first target point cloud A. When the traversal ends, counting the matching point clouds corresponding to the first target point cloud A gives a second number of 2;
in step 63, whether the second number is greater than 0 is judged; since the second number is 2, which is greater than 0, point cloud fusion is performed on the first target point cloud A and its 2 matching point clouds (the first target point clouds B and C), and the resulting fused point cloud is taken as a new first target point cloud D. At this point the second target point cloud set no longer contains the first target point clouds A, B and C, and holds only the single first target point cloud D;
in step 64, re-counting the number of the first target point clouds in the second target point cloud set gives a third number of 1; because no isolated point cloud exists in the second target point cloud set, counting the isolated point clouds gives a fourth number of 0;
the judgment of step 65 is then made: since the third number is 1, the target point cloud reorganization processing is ended. After the reorganization, the second target point cloud set contains only one first target point cloud, the first target point cloud D, which was formed by fusing and reorganizing the original first target point clouds A, B and C.
The second example: suppose the three first target point clouds in the second target point cloud set are denoted E, F and G;
in step 61, the first target point cloud E in the second target point cloud set is taken as the current point cloud, and the first target point clouds F and G are recorded as other point clouds;
in step 62, the first target point clouds F and G are traversed. When the current other point cloud is the first target point cloud F, the point distances between each point of the first target point cloud E and each point of the first target point cloud F are calculated to generate corresponding first point distances, and the minimum of all the first point distances is taken as the corresponding first shortest distance d_F; comparison confirms that d_F is smaller than the preset shortest distance threshold, so the first target point cloud F is marked as a matching point cloud corresponding to the first target point cloud E. When the current other point cloud is the first target point cloud G, the point distances between each point of the first target point cloud E and each point of the first target point cloud G are likewise calculated, and the minimum of all the first point distances is taken as the corresponding first shortest distance d_G; since d_G is not smaller than the preset shortest distance threshold, the first target point cloud G receives no mark. When the traversal ends, counting the matching point clouds corresponding to the first target point cloud E gives a second number of 1;
in step 63, whether the second number is greater than 0 is judged; since the second number is 1, which is greater than 0, point cloud fusion is performed on the first target point cloud E and its 1 matching point cloud (the first target point cloud F), and the resulting fused point cloud is taken as a new first target point cloud H. At this point the second target point cloud set no longer contains the first target point clouds E and F, leaving only the two first target point clouds G and H;
in step 64, re-counting the number of the first target point clouds in the second target point cloud set gives a third number of 2; because no isolated point cloud exists in the second target point cloud set yet, counting the isolated point clouds gives a fourth number of 0;
the judgment of step 65 is made: since the third number 2 is not 1 and the fourth number 0 is not equal to (third number - 1), processing returns to step 61;
step 61 is performed a second time: the first target point cloud G in the second target point cloud set is taken as the current point cloud, and the first target point cloud H is recorded as the other point cloud;
step 62 is performed a second time to traverse the other point clouds; since the only other point cloud is now the first target point cloud H, only it is processed: the point distances between each point of the first target point cloud G and each point of the first target point cloud H are calculated to generate corresponding first point distances, and the minimum of all the first point distances is taken as the corresponding first shortest distance d_H; since d_H is not smaller than the preset shortest distance threshold, the first target point cloud H receives no mark. Counting the matching point clouds corresponding to the first target point cloud G therefore gives a second number of 0; at this time the second target point cloud set still consists of the first target point clouds G and H;
in step 63, performed a second time, whether the second number is greater than 0 is judged; since the second number is 0, the first target point cloud G is marked as an isolated point cloud;
in step 64, performed a second time, re-counting the number of the first target point clouds in the second target point cloud set gives a third number of 2, and counting the isolated point clouds in the second target point cloud set gives a fourth number of 1;
step 65 is performed a second time: although the third number 2 is not 1, the fourth number 1 is equal to (third number - 1), so the target point cloud reorganization processing is ended. After the reorganization, the second target point cloud set consists of the two first target point clouds G and H, where the first target point cloud H was formed by fusing and reorganizing the original first target point clouds E and F.
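In both examples, the first shortest distance of step 62 is simply the minimum over all pairwise point distances between two clouds. A brute-force sketch reproducing the distance checks of the second example (the cloud coordinates and the 0.5 threshold are illustrative assumptions, not values from the embodiment):

```python
import numpy as np

def first_shortest_distance(cloud_a, cloud_b):
    # minimum Euclidean distance between any point of cloud_a and any
    # point of cloud_b (the "first shortest distance" of step 62)
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]   # (n_a, n_b, 3)
    return np.sqrt((diff ** 2).sum(axis=-1)).min()

# hypothetical clouds E, F, G and a 0.5 shortest distance threshold
E = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
F = np.array([[1.2, 0.0, 0.0]])
G = np.array([[8.0, 0.0, 0.0]])

d_F = first_shortest_distance(E, F)   # ~0.2: below threshold, F matches E
d_G = first_shortest_distance(E, G)   # 7.0: not below threshold, no match
```

With these values F would be fused into E while G, matching nothing, would eventually be marked isolated, exactly as in the second example above.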
Fig. 2 is a block diagram of a processing apparatus for reconstructing a target point cloud based on a reference image according to a second embodiment of the present invention. The apparatus is a terminal device or server implementing the foregoing method embodiment, or an apparatus that enables such a terminal device or server to implement it, for example a chip or chip system of the terminal device or server. As shown in fig. 2, the apparatus includes: an acquisition module 201, a point cloud preprocessing module 202, an image preprocessing module 203, a correlation processing module 204 and a target point cloud reorganization module 205.
The obtaining module 201 is configured to obtain a first point cloud and a corresponding first reference image.
The point cloud preprocessing module 202 is configured to perform point cloud segmentation on the first point cloud to obtain a plurality of first target point clouds.
The image preprocessing module 203 is configured to perform target semantic segmentation processing on the first reference image to obtain a plurality of first target images.
The correlation processing module 204 is configured to estimate the correlation between the first target point cloud and each first target image to generate a corresponding first correlation; extracting the first correlation degrees which are greater than a preset correlation degree threshold value to form a corresponding first correlation degree set; and taking the first target image corresponding to the maximum correlation degree in the first correlation degree set as a matching image of the current first target point cloud.
The target point cloud reorganization module 205 is configured to group one or more first target point clouds whose matching images are the same first target image into a corresponding first target point cloud set; counting the number of the first target point clouds of each first target point cloud set to generate a corresponding first number; recording the first target point cloud sets with the first number larger than 1 as corresponding second target point cloud sets; and performing target point cloud reorganization processing on each second target point cloud set.
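The chain of modules 201-205 amounts to a simple pipeline; the sketch below wires hypothetical stand-ins for each module together (all function and parameter names are placeholders, not APIs from the embodiment):

```python
def reconstruct_targets(point_cloud, reference_image,
                        segment_cloud, segment_image,
                        correlation, reorganize,
                        corr_threshold):
    """End-to-end flow of modules 201-205 (a hypothetical sketch)."""
    target_clouds = segment_cloud(point_cloud)        # module 202
    target_images = segment_image(reference_image)    # module 203

    # module 204: match each target cloud to its most correlated image,
    # keeping only matches above the preset correlation threshold
    groups = {}
    for cloud in target_clouds:
        scores = [(correlation(cloud, img), idx)
                  for idx, img in enumerate(target_images)]
        best_score, best_idx = max(scores)
        if best_score > corr_threshold:
            groups.setdefault(best_idx, []).append(cloud)

    # module 205: reorganize every group that holds more than one cloud
    return [reorganize(g) if len(g) > 1 else g for g in groups.values()]
```

Any concrete segmentation, correlation, and reorganization routines can be plugged in through the function arguments, which keeps the sketch independent of a particular model.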
The processing device for reconstructing the target point cloud based on the reference image, provided by the embodiment of the invention, can execute the method steps in the method embodiments, and the implementation principle and the technical effect are similar, and are not described herein again.
It should be noted that the division of the above apparatus into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or physically separated. The modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the obtaining module may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in the memory of the apparatus as program code that a processing element of the apparatus calls to execute the module's functions. The other modules are implemented similarly. In addition, the modules may be wholly or partly integrated together, or implemented independently. The processing element described here may be an integrated circuit with signal-processing capability. In implementation, the steps of the above method, or the above modules, may be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor that can invoke program code. For another example, these modules may be integrated together and implemented in the form of a System-on-a-Chip (SoC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the foregoing method embodiments are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, Bluetooth, microwave) connection.
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. The electronic device may be the terminal device or the server, or may be a terminal device or a server connected to the terminal device or the server and implementing the method according to the embodiment of the present invention. As shown in fig. 3, the electronic device may include: a processor 301 (e.g., a CPU), a memory 302, a transceiver 303; the transceiver 303 is coupled to the processor 301, and the processor 301 controls the transceiving operation of the transceiver 303. Various instructions may be stored in memory 302 for performing various processing functions and implementing the processing steps described in the foregoing method embodiments. Preferably, the electronic device according to an embodiment of the present invention further includes: a power supply 304, a system bus 305, and a communication port 306. The system bus 305 is used to implement communication connections between the elements. The communication port 306 is used for connection communication between the electronic device and other peripherals.
The system bus 305 mentioned in fig. 3 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this is not intended to represent only one bus or type of bus. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM) and may also include a Non-Volatile Memory (Non-Volatile Memory), such as at least one disk Memory.
The Processor may be a general-purpose Processor, including a central Processing Unit CPU, a Network Processor (NP), a Graphics Processing Unit (GPU), and the like; but also a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components.
It should be noted that the embodiment of the present invention also provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to execute the method and the processing procedure provided in the above-mentioned embodiment.
The embodiment of the present invention further provides a chip for executing the instructions, where the chip is configured to execute the processing steps described in the foregoing method embodiment.
The embodiments of the present invention provide a processing method and apparatus, an electronic device, and a computer-readable storage medium for reconstructing a target point cloud based on a reference image. The method performs target recognition on a lidar point cloud to obtain a plurality of target point clouds, performs target semantic segmentation on a reference image of the same scene as the lidar point cloud to obtain a plurality of target images, computes the correlation between each target point cloud and each target image, takes the most correlated target image as the corresponding matching image, clusters the target point clouds corresponding to the same matching image into a corresponding target point cloud set, and fuses the target point clouds within each target point cloud set according to a preset shortest distance threshold. In this way, over-segmented target point clouds can be fused and recombined based on the reference image, reducing over-segmentation errors and improving point cloud processing precision.
Those of skill in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A processing method for reconstructing a target point cloud based on a reference image is characterized by comprising the following steps:
acquiring a first point cloud and a corresponding first reference image;
carrying out point cloud segmentation processing on the first point cloud to obtain a plurality of first target point clouds;
performing target semantic segmentation processing on the first reference image to obtain a plurality of first target images;
estimating the correlation degree of the first target point cloud and each first target image to generate corresponding first correlation degree; extracting the first correlation degrees which are greater than a preset correlation degree threshold value to form a corresponding first correlation degree set; taking the first target image corresponding to the maximum correlation degree in the first correlation degree set as a matching image of the current first target point cloud;
grouping one or more first target point clouds whose matching image is the same first target image into a corresponding first target point cloud set; counting the number of the first target point clouds in each first target point cloud set to generate a corresponding first number; recording the first target point cloud sets whose first number is greater than 1 as corresponding second target point cloud sets;
and performing target point cloud reorganization processing on each second target point cloud set.
2. The method as claimed in claim 1, wherein the estimating the correlation between the first target point cloud and each of the first target images to generate a corresponding first correlation comprises:
performing coordinate conversion processing from point cloud coordinates to pixel coordinates on three-dimensional point cloud coordinates of each point of the first target point cloud to obtain corresponding first point cloud pixel coordinates, and forming a corresponding first point cloud pixel coordinate set by all the obtained first point cloud pixel coordinates;
performing pixel point plotting on the first reference image according to each first point cloud pixel coordinate set, and performing convex hull drawing processing on the pixel point coverage area of the current first point cloud pixel coordinate set to generate a corresponding first convex hull; calculating the degree of coincidence between the current first convex hull and each first target image to generate a corresponding first coincidence degree; and taking the first coincidence degree as the corresponding first correlation degree.
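The correlation estimate of claim 2 could be sketched as follows, assuming the degree of coincidence is measured as intersection-over-union between the rasterized convex hull and a binary target-image mask (the claim itself fixes no particular overlap formula; scipy's Delaunay triangulation is used here only as a point-in-hull test):

```python
import numpy as np
from scipy.spatial import Delaunay

def first_correlation(pixel_coords, target_mask):
    # pixel_coords: (N, 2) array of projected (x, y) point cloud pixels
    # target_mask: (H, W) boolean mask of one segmented first target image
    hull = Delaunay(pixel_coords)  # triangulates the convex hull region
    ys, xs = np.mgrid[0:target_mask.shape[0], 0:target_mask.shape[1]]
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    # a pixel lies inside the convex hull iff it falls in some simplex
    inside = (hull.find_simplex(grid) >= 0).reshape(target_mask.shape)
    inter = np.logical_and(inside, target_mask).sum()
    union = np.logical_or(inside, target_mask).sum()
    return inter / union if union else 0.0
```

The resulting value would then be compared against the preset correlation degree threshold of claim 1 to decide whether a target image is a candidate match.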
3. The method for reconstructing a target point cloud based on a reference image according to claim 1, wherein the performing the target point cloud reconstruction on each of the second target point cloud sets specifically includes:
step 31, selecting one first target point cloud from the second target point cloud set as a current point cloud; marking other first target point clouds as other point clouds;
step 32, traversing each other point cloud; during the traversal, taking the other currently traversed point clouds as corresponding current other point clouds; calculating the point spacing between each point of the current point cloud and each point of the other current point clouds to generate a corresponding first point spacing; taking the minimum value from all the obtained first point distances as a corresponding first shortest distance; if the first shortest distance is smaller than a preset shortest distance threshold value, marking the other current point clouds as matching point clouds corresponding to the current point clouds; when the traversal is finished, counting the number of the matching point clouds corresponding to the current point cloud to generate a corresponding second number;
step 33, judging whether the second number is greater than 0; if the second quantity is 0, marking the current point cloud as an isolated point cloud; if the second number is larger than 0, performing point cloud fusion on the current point cloud and one or more corresponding matched point clouds, and taking the obtained fusion point cloud as a new first target point cloud;
step 34, counting the number of the first target point clouds in the second target point cloud set again to generate a corresponding third number; counting the number of the isolated point clouds in the second target point cloud set to generate a corresponding fourth number;
step 35, when the third number is not 1 and the fourth number is not equal to (third number - 1), returning to step 31; when the third number is 1 or the fourth number is equal to (third number - 1), ending the target point cloud reorganization processing.
4. An apparatus for performing the method of any one of claims 1-3, wherein the apparatus comprises: the system comprises an acquisition module, a point cloud preprocessing module, an image preprocessing module, a correlation degree processing module and a target point cloud reorganizing module;
the acquisition module is used for acquiring a first point cloud and a corresponding first reference image;
the point cloud preprocessing module is used for carrying out point cloud segmentation processing on the first point cloud to obtain a plurality of first target point clouds;
the image preprocessing module is used for performing target semantic segmentation processing on the first reference image to obtain a plurality of first target images;
the relevancy processing module is used for estimating the relevancy between the first target point cloud and each first target image to generate corresponding first relevancy; extracting the first correlation degrees which are greater than a preset correlation degree threshold value to form a corresponding first correlation degree set; taking the first target image corresponding to the maximum correlation degree in the first correlation degree set as a matching image of the current first target point cloud;
the target point cloud reorganization module is used for grouping one or more first target point clouds whose matching image is the same first target image into a corresponding first target point cloud set; counting the number of the first target point clouds in each first target point cloud set to generate a corresponding first number; recording the first target point cloud sets whose first number is greater than 1 as corresponding second target point cloud sets; and performing target point cloud reorganization processing on each second target point cloud set.
5. An electronic device, comprising: a memory, a processor, and a transceiver;
the processor is used for being coupled with the memory, reading and executing the instructions in the memory to realize the method steps of any one of claims 1-3;
the transceiver is coupled to the processor, and the processor controls the transceiver to transmit and receive messages.
6. A computer-readable storage medium having stored thereon computer instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-3.
CN202211296877.0A 2022-10-21 2022-10-21 Processing method and device for reconstructing target point cloud based on reference image Pending CN115471808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211296877.0A CN115471808A (en) 2022-10-21 2022-10-21 Processing method and device for reconstructing target point cloud based on reference image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211296877.0A CN115471808A (en) 2022-10-21 2022-10-21 Processing method and device for reconstructing target point cloud based on reference image

Publications (1)

Publication Number Publication Date
CN115471808A true CN115471808A (en) 2022-12-13

Family

ID=84337137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211296877.0A Pending CN115471808A (en) 2022-10-21 2022-10-21 Processing method and device for reconstructing target point cloud based on reference image

Country Status (1)

Country Link
CN (1) CN115471808A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310227A (en) * 2023-05-18 2023-06-23 海纳云物联科技有限公司 Three-dimensional dense reconstruction method, three-dimensional dense reconstruction device, electronic equipment and medium
CN116310227B (en) * 2023-05-18 2023-09-12 海纳云物联科技有限公司 Three-dimensional dense reconstruction method, three-dimensional dense reconstruction device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN108769821B (en) Scene of game describes method, apparatus, equipment and storage medium
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN115436910B (en) Data processing method and device for performing target detection on laser radar point cloud
CN110176064B (en) Automatic identification method for main body object of photogrammetric generation three-dimensional model
CN109461133B (en) Bridge bolt falling detection method and terminal equipment
CN112734837B (en) Image matching method and device, electronic equipment and vehicle
CN113706472A (en) Method, device and equipment for detecting road surface diseases and storage medium
CN115471808A (en) Processing method and device for reconstructing target point cloud based on reference image
CN115147333A (en) Target detection method and device
CN111813882A (en) Robot map construction method, device and storage medium
CN113012030A (en) Image splicing method, device and equipment
CN113554037A (en) Feature extraction method and device based on model simplification
CN113920267A (en) Three-dimensional scene model construction method, device, equipment and storage medium
CN113409376A (en) Method for filtering laser radar point cloud based on depth estimation of camera
CN112337093A (en) Virtual object clustering method and device, storage medium and electronic device
CN114913213B (en) Method and device for learning aerial view characteristics
CN118037601B (en) Point cloud filling method and electronic equipment
CN118229938B (en) Color-imparting method, device, apparatus, medium and program product for point cloud model
CN113487541B (en) Insulator detection method and device
CN113643421B (en) Three-dimensional reconstruction method and three-dimensional reconstruction device for image
CN116343132B (en) Complex scene power equipment defect identification method and device and computer equipment
CN116188303A (en) Point cloud enhancement method, device, equipment and storage medium
CN115631201A (en) Object segmentation method, device, equipment and storage medium
CN115457284A (en) Processing method for filtering laser point cloud
CN117854058A (en) Target perception method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination