CN110610501B - Point cloud segmentation method and device

Point cloud segmentation method and device

Info

Publication number
CN110610501B
CN110610501B (application CN201910879613.XA)
Authority
CN
China
Prior art keywords
point cloud
axis
image
data set
segmentation
Prior art date
Legal status
Active
Application number
CN201910879613.XA
Other languages
Chinese (zh)
Other versions
CN110610501A (en)
Inventor
彭潇
Current Assignee
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Original Assignee
Beijing Daheng Image Vision Co ltd
China Daheng Group Inc Beijing Image Vision Technology Branch
Priority date
Filing date
Publication date
Application filed by Beijing Daheng Image Vision Co ltd, China Daheng Group Inc Beijing Image Vision Technology Branch filed Critical Beijing Daheng Image Vision Co ltd
Priority to CN201910879613.XA
Publication of CN110610501A
Application granted
Publication of CN110610501B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a point cloud segmentation method and a point cloud segmentation device, wherein the method comprises the following steps: step 1, acquiring a first point cloud data set of an object to be grabbed, calculating rotation data sets of the first point cloud data set according to a preset step increment, and combining the first point cloud data set and the rotation data sets to generate a second point cloud data set; step 2, selecting the three-dimensional point with the largest z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting it, and determining a selected angle according to the minimum entropy value of the projection image; step 3, calculating a histogram of the projection image and the segmentation positions according to the selected angle, and generating a point cloud segmentation result of the object to be grabbed; and step 4, determining the center coordinate of the grabbing plane and the clamping jaw posture according to the point cloud segmentation result, and generating grabbing information according to the center coordinate and the clamping jaw posture. The technical scheme of the application reduces the influence of offset data on positioning and improves object grabbing efficiency.

Description

Point cloud segmentation method and device
Technical Field
The application relates to the technical field of machine vision, in particular to a point cloud segmentation method and a point cloud segmentation device.
Background
In common industrial applications, manipulators are mainly responsible for carrying and assembling work, such as loading and unloading of processing equipment, workpiece installation and welding. The guidance of the manipulator can either be programmed in advance to move to fixed points every time, or be performed by a vision sensor or 3D sensor. Obviously, the latter is more flexible and adaptive. In order to enlarge the grabbing field of view and make the equipment operate more flexibly, a sensor-manipulator follow-up mode is adopted for project implementation. Because the load of the manipulator is limited, the most compact acquisition equipment is required to acquire the relevant data that guide the manipulator's work.
In the prior art, when single-angle scanning is adopted, because the heights of the objects to be grabbed differ, there are more point cloud data points on the side facing the sensor and fewer on the other sides, so faults (layering) appear in the point cloud data, which greatly affects the accuracy of point cloud registration. When images are acquired and spliced from multiple angles, complete point cloud data of the object can be reconstructed, but this method has the following defects in practical application:
1) the acquisition, reconstruction and splicing of point cloud data consume a long time;
2) in the process of fusing (splicing) multiple pieces of point cloud data, new noise points are introduced at the part where the point cloud data are overlapped, so that the grabbing precision of a manipulator is influenced;
3) when holes exist in the point cloud data or the point cloud data is missing, the point cloud registration effect is affected.
Disclosure of Invention
The purpose of the present application is to take the plane where the object to be grabbed lies as a primitive, perform point cloud data segmentation, locate and analyze the object to be grabbed, reduce the influence of offset data on positioning, and improve object grabbing efficiency.
The technical scheme of the first aspect of the application is as follows: a point cloud segmentation method is provided, which comprises: step 1, acquiring a first point cloud data set of an object to be grabbed, respectively calculating rotating data sets of the first point cloud data set around an x axis and a y axis according to a preset step increment, and combining the first point cloud data set and the rotating data sets to generate a second point cloud data set; step 2, selecting a three-dimensional point with the maximum z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting on each plane of a three-dimensional coordinate system respectively to generate a projection image, and determining a selection angle according to the minimum entropy value of the projection image; step 3, calculating a histogram of the projected image and a segmentation position corresponding to the histogram according to the selected angle, and generating a point cloud segmentation result of the object to be captured according to the segmentation position; and 4, determining the center coordinate and the clamping jaw posture of the grabbing plane according to the point cloud segmentation result, and generating grabbing information according to the center coordinate and the clamping jaw posture.
In any one of the above technical solutions, further, step 2 specifically includes:
step 21, selecting a three-dimensional point with the maximum z coordinate value from each group of point cloud data of the second point cloud data set according to a preset point cloud projection resolution, generating a third point cloud data set, and projecting on three coordinate planes to generate a projection image, wherein the projection image comprises an image projected along an x axis, an image projected along a y axis and an image projected along a z axis;
step 22, according to the preset step increment, within the preset step range, calculating the entropy value H(θ) of the projection image along the z axis, selecting the angle θ corresponding to the minimum entropy value H(θ), and recording it as the selected angle θs, wherein the formulas for calculating the entropy value are:
H(θ) = -∑ p_αβ · log(p_αβ)
p_αβ = f(α, β) / W
where f(α, β) is the number of occurrences of the gray value combination (α, β), W is the image scale, i.e. the number of pixels occupied by the image, and p_αβ is the feature probability.
In any one of the above technical solutions, further, the preset step increment is 10°.
In any one of the above technical solutions, further, step 3 specifically includes: step 31, according to the selected angle θs, determining the corresponding projection image Zimg along the z axis, calculating the histogram of the projection image Zimg along the z axis, and performing median filtering on the histogram to generate the median array HistMedian(g); step 32, calculating the segmentation points F(k) of the median array HistMedian(g), reversely calculating the segmentation positions Fz(k) of the segmentation points F(k) according to the normalization formula of the projection image Zimg along the z axis, performing point cloud segmentation on the projection image Zimgθ along the z axis according to the segmentation positions Fz(k), and generating the point cloud segmentation result of the object to be grabbed according to the segmentation result of the projection image Zimgθ along the z axis.
In any one of the above technical solutions, further, the preset point cloud projection resolution is 3 times the minimum precision of the xy plane of the three-dimensional sensor.
In any one of the above technical solutions, further, generating the point cloud segmentation result of the object to be grabbed specifically includes: taking the derivative of the median array HistMedian(g), traversing the derivative with a state machine, searching for trough inflection points, and recording them as the segmentation points F(k); segmenting the projection image Zimgθ along the z axis according to the segmentation points F(k) to form segmentation regions, and screening the segmentation regions according to the number of three-dimensional points in each region; and generating the point cloud segmentation result according to the screened segmentation regions.
The technical scheme of the second aspect of the application is as follows: a point cloud segmentation apparatus is provided, which employs the point cloud segmentation method according to any one of the technical solutions of the first aspect. The point cloud segmentation apparatus is arranged above the conveying device and connected to the motion mechanism, and comprises a three-dimensional sensor used for scanning the object to be grabbed above the conveying device a single time to generate a first point cloud data set. The apparatus further comprises: a data set generating unit, an angle determining unit, a segmentation unit and an information generating unit; the data set generating unit is used for respectively calculating the rotation data sets of the first point cloud data set around the x axis and the y axis according to the preset step increment, and combining the first point cloud data set and the rotation data sets to generate a second point cloud data set; the angle determining unit is used for selecting the three-dimensional point with the largest z coordinate value in the second point cloud data set according to the preset point cloud projection resolution, projecting it on each plane of the three-dimensional coordinate system to generate projection images, and determining the selected angle according to the minimum entropy value of the projection images; the segmentation unit is used for calculating the histogram of the projection image and the corresponding segmentation positions according to the selected angle, and generating the point cloud segmentation result of the object to be grabbed according to the segmentation positions; the information generating unit is used for determining the center coordinate of the grabbing plane and the clamping jaw posture according to the point cloud segmentation result, and generating and sending grabbing information to the motion mechanism according to the center coordinate and the clamping jaw posture.
In any one of the above technical solutions, further, the motion mechanism is a manipulator, and the angle determining unit specifically includes: a projection module and a selection module; the projection module is used for selecting, according to the preset point cloud projection resolution, the three-dimensional point with the largest z coordinate value from each group of point cloud data of the second point cloud data set, generating a third point cloud data set, and projecting it on the three coordinate planes to generate projection images, wherein the projection images comprise the image projected along the x axis, the image projected along the y axis and the image projected along the z axis; the selection module is used for calculating the entropy value H(θ) of the projection image along the z axis according to the preset step increment within the preset step range, selecting the angle θ corresponding to the minimum entropy value H(θ), and recording it as the selected angle θs.
In any one of the above technical solutions, further, the preset step increment is 10°.
In any one of the above technical solutions, further, the preset point cloud projection resolution is 3 times the minimum precision of the xy plane of the three-dimensional sensor.
In any one of the above technical solutions, further, the segmentation unit specifically includes: a median calculation module and a result generation module; the median calculation module is used for determining, according to the selected angle θs, the corresponding projection image Zimg along the z axis, calculating the histogram of the projection image Zimg along the z axis, and performing median filtering on the histogram to generate the median array HistMedian(g); the result generation module is used for calculating the segmentation points F(k) of the median array HistMedian(g), reversely calculating the segmentation positions Fz(k) according to the normalization formula of the projection image Zimg along the z axis, performing point cloud segmentation on the projection image Zimgθ along the z axis according to the segmentation positions Fz(k), and generating the point cloud segmentation result of the object to be grabbed according to the segmentation result of the projection image Zimgθ along the z axis.
The beneficial effect of this application is:
according to the technical scheme, the first point cloud data set of the object to be grabbed is obtained by single 3D scanning, the point cloud data are divided by data rotation by taking the plane where the object to be grabbed is located as an element, especially the object to be grabbed with the plane characteristic, such as a packing box, a display, a mobile phone shell and the like, the positioning analysis of the object to be grabbed is realized, and the object grabbing efficiency is improved.
The application adopts single-angle single-shot imaging, so the acquired images do not need to be spliced, which reduces the processing time of point cloud data acquisition and avoids introducing noise points at point cloud overlapping parts. By analyzing the projections of the point cloud data on different coordinate planes, the fault phenomenon in point cloud data acquired from a single angle is overcome, and the positioning precision of the object to be grabbed is improved.
Drawings
The advantages of the above and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic illustration of a grabbing scenario according to one embodiment of the present application;
FIG. 2 is a schematic flow diagram of a point cloud segmentation method according to one embodiment of the present application;
fig. 3 is a schematic diagram of shooting range division according to an embodiment of the present application;
FIG. 4 is a schematic view of different angle projections according to an embodiment of the present application;
FIG. 5 is a simulation plot of data entropy for different angle projections according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an inflection search according to an embodiment of the present application;
FIG. 7 is a schematic diagram of point cloud segmentation according to one embodiment of the present application;
FIG. 8 is a schematic illustration of a segmentation result according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
As shown in fig. 1, the present embodiment takes as an example an object to be grabbed, such as a packaging box, a display or a mobile phone housing, with steps in the surface height direction, that is, an object composed of a plurality of planes perpendicular or approximately perpendicular to each other. The object to be grabbed is placed on the conveyor belt; the point cloud segmentation apparatus in this embodiment scans the object a single time with a single three-dimensional sensor arranged above the conveyor belt to obtain point cloud data of the object, determines the position and posture of the upper surface of the object in the point cloud data by the point cloud segmentation method in this embodiment, and then controls the grabbing device (such as a manipulator) to grab the object.
The first embodiment is as follows:
as shown in fig. 2, the present embodiment provides a point cloud segmentation method, including:
Step 1, a first point cloud data set P(x, y, z) of the object to be grabbed is obtained; within a preset step range and according to a preset step increment, an x-axis point cloud data set obtained by rotating the first point cloud data set P(x, y, z) around the x axis and a y-axis point cloud data set obtained by rotating it around the y axis are respectively calculated; the first point cloud data set P(x, y, z), the x-axis point cloud data set and the y-axis point cloud data set are combined to generate a second point cloud data set P'(x, y, z), wherein the x-axis point cloud data set and the y-axis point cloud data set are recorded as rotation data sets.
In this embodiment, the normal direction of the plane of the object to be grabbed is set as the z axis, the advancing direction of the object is set as the y axis, and the right side of the advancing direction is set as the x axis, establishing a three-dimensional rectangular coordinate system. A single scan of the object to be grabbed is performed with the single three-dimensional sensor arranged above the conveyor belt, and the first point cloud data set P(x, y, z) is acquired; it consists of a plurality of three-dimensional points, each with coordinates (x, y, z) and gray value P(x, y, z).
Specifically, in this embodiment, the preset step increment Δθ is 10° and the preset step range is θ ∈ [-30°, 30°]. During rotation about the x axis, when θx = 0 the obtained x-axis point cloud data set Pθx(x, y, z) is the first point cloud data set P(x, y, z) itself; θx = 0 is therefore omitted, so that θx ∈ {-30°, -20°, -10°, 10°, 20°, 30°}. For the same reason, during rotation about the y axis θy ∈ {-30°, -20°, -10°, 10°, 20°, 30°}; that is, the x-axis point cloud data set Pθx(x, y, z) and the y-axis point cloud data set Pθy(x, y, z) each contain 6 groups of second point cloud data.
The x-axis point cloud data set Pθx(x, y, z) and the y-axis point cloud data set Pθy(x, y, z) are obtained by rotating every three-dimensional point (x, y, z) of the first point cloud data set about the corresponding axis:
Pθx: (x', y', z') = (x, y·cosθx - z·sinθx, y·sinθx + z·cosθx)
Pθy: (x', y', z') = (x·cosθy + z·sinθy, y, -x·sinθy + z·cosθy)
The combined second point cloud data set P'(x, y, z) includes 12 groups of second point cloud data and 1 group of the first point cloud data set P(x, y, z), and each group of point cloud data contains a plurality of three-dimensional points.
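For illustration only (this code is not part of the patent), the rotation and combination step above can be sketched with numpy; the function names, the (n, 3) array layout, and the sign convention of the rotation matrices are assumptions consistent with the formulas given:

```python
import numpy as np

def rotation_x(theta_deg):
    """Rotation matrix about the x axis, angle in degrees."""
    t = np.radians(theta_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t), np.cos(t)]])

def rotation_y(theta_deg):
    """Rotation matrix about the y axis, angle in degrees."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def build_second_dataset(points):
    """points: (n, 3) array holding the first point cloud data set P.
    Returns 13 groups: P itself plus its rotations about the x axis and
    the y axis at -30, -20, -10, 10, 20, 30 degrees (0 degrees omitted)."""
    angles = [-30, -20, -10, 10, 20, 30]
    groups = [points]
    groups += [points @ rotation_x(a).T for a in angles]
    groups += [points @ rotation_y(a).T for a in angles]
    return groups
```

Each group keeps its own rotation angle, so the later entropy comparison can map the best projection back to the angle that produced it.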
Step 2, the three-dimensional point with the largest z coordinate value is selected in the second point cloud data set P'(x, y, z) according to the preset point cloud projection resolution and projected on each plane of the three-dimensional coordinate system, and the selected angle θs is determined according to the minimum value of the entropy H(θ) of the projections.
Further, the step 2 specifically includes:
Step 21, according to the preset point cloud projection resolution, the three-dimensional point with the largest z coordinate value is selected from each group of point cloud data of the second point cloud data set P'(x, y, z) to generate a third point cloud data set Ppjt(xp, yp, zp), which is projected on the three coordinate planes to generate projection images, including the projection image along the x axis, the projection image along the y axis and the projection image along the z axis.
Preferably, the projection resolution of the preset point cloud is 3 times of the minimum precision of the xy plane of the three-dimensional sensor.
In this embodiment, the accuracy of the three-dimensional sensor in the x-axis direction is 0.05 mm and in the y-axis direction is 0.1 mm, so the preset point cloud projection resolution ΔS = 0.05 mm × 3 = 0.15 mm.
The shooting range of the three-dimensional sensor on the xy plane is divided into N × M areas according to the preset point cloud projection resolution ΔS, as shown in FIG. 3; any area is denoted S(i, j), i = 0, 1, 2, …, N-1, j = 0, 1, 2, …, M-1, where
N=(max(X)-min(X))/ΔS,
M=(max(Y)-min(Y))/ΔS,
in the formula, max (·) is the maximum coordinate value of the three-dimensional sensor in the corresponding coordinate axis photographing range, and min (·) is the minimum coordinate value of the three-dimensional sensor in the corresponding coordinate axis photographing range.
For each group of second point cloud data of the second point cloud data set P'(x, y, z), the three-dimensional point with the largest z coordinate value is selected in each region S(i, j); M × N three-dimensional points can thus be selected from each group of second point cloud data and recorded as third point cloud data, and the 13 groups of third point cloud data form the third point cloud data set Ppjt(xp, yp, zp). The third point cloud data set Ppjt(xp, yp, zp) is then projected, and each group of third point cloud data generates three projection images, recorded as the projection image Ximg along the x axis, the projection image Yimg along the y axis and the projection image Zimg along the z axis. The width and height of the projection image Zimg along the z axis are N and M respectively; the index I(i, j) of a third three-dimensional point in the image Zimg corresponds to the region S(i, j), and its gray value is the z coordinate value z_pij of the three-dimensional point with the largest z value in the region S(i, j).
The projection image Ximg along the x axis is consistent in size with the projection image Zimg along the z axis; the third three-dimensional point coordinate I(i, j) in the image Ximg corresponds to the region S(i, j), and its gray value is the x coordinate value x_pij of the three-dimensional point with the largest z value in the region S(i, j).
The projection image Yimg along the y axis is consistent in size with the projection image Zimg along the z axis; the third three-dimensional point coordinate I(i, j) in the image Yimg corresponds to the region S(i, j), and its gray value is the y coordinate value y_pij of the three-dimensional point with the largest z value in the region S(i, j). When θx is -30°, 0° and 30°, the projection image Zimg along the z axis is as shown in FIG. 4(a), 4(b) and 4(c) in sequence.
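A minimal sketch of this max-z projection, assuming numpy and an (n, 3) point array; the grid sizing with a +1 guard cell and the NaN fill for empty cells are implementation assumptions not stated in the text:

```python
import numpy as np

def project_max_z(points, dS):
    """points: (n, 3) array; dS: point cloud projection resolution (e.g. 0.15).
    For each grid cell S(i, j) of the xy shooting range, keep the point with
    the largest z; returns (Zimg, Ximg, Yimg) holding that point's z, x and y
    values (NaN marks an empty cell)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    N = int(np.ceil((x.max() - x.min()) / dS)) + 1  # columns along x
    M = int(np.ceil((y.max() - y.min()) / dS)) + 1  # rows along y
    i = np.minimum(((x - x.min()) / dS).astype(int), N - 1)
    j = np.minimum(((y - y.min()) / dS).astype(int), M - 1)
    Zimg = np.full((M, N), np.nan)
    Ximg = np.full((M, N), np.nan)
    Yimg = np.full((M, N), np.nan)
    for k in range(points.shape[0]):
        cur = Zimg[j[k], i[k]]
        if np.isnan(cur) or z[k] > cur:  # keep the largest z per cell
            Zimg[j[k], i[k]] = z[k]
            Ximg[j[k], i[k]] = x[k]
            Yimg[j[k], i[k]] = y[k]
    return Zimg, Ximg, Yimg
```

Ximg and Yimg carry the x and y coordinates of the same winning point, so a segmented pixel can later be mapped back to its three-dimensional position.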
Step 22, according to the preset step increment, in the preset step range, calculating the entropy value H (theta) of the projection image along the z axis in the projection image, selecting the angle theta corresponding to the minimum entropy value H (theta), and recording the angle theta as the selected angle thetasWherein, the formula for calculating the entropy value is as follows:
H(θ)=-∑pαβlog(pαβ)
Figure GDA0003507479320000081
W=N*M
where f (α, β) is the number of occurrences of the gray value combination (α, β), W is the image scale, i.e., the number of pixels occupied by the image, pαβIs the feature probability.
Specifically, the entropy H(θ) reflects not only the gray distribution characteristics of the point cloud data set but also its gradient distribution characteristics, and therefore better reflects the information content of the point cloud data set.
For the measured plane of the object to be grabbed, no matter which direction the plane inclines to, different neighborhood gradients are generated; if the z axis is perpendicular to the plane after projection, the gradient change near the plane approaches 0, so the probability of a zero gradient greatly increases and the degree of mixing (the entropy) greatly decreases. Taking this as a criterion, the angle at which the plane is most nearly perpendicular to the z axis can be obtained, i.e. the projection direction with fewer overlapped points in projection, namely the selected angle θs.
In practical application, the projection image perpendicular to the measured plane has data missing or data jumping caused by occlusion faults. The z direction of the point cloud data acquired by the three-dimensional sensor is nominally perpendicular to the placing plane of the object to be grabbed, but due to installation errors, non-parallel top and bottom surfaces of the object and similar conditions, too many data points overlap each other during projection, i.e. one area S(i, j) contains a plurality of data points; since only the three-dimensional point with the largest z coordinate value is selected for sampling, the data of the remaining three-dimensional points do not enter subsequent calculation.
In order to improve the utilization rate of the point cloud data, it is desirable that each region S(i, j) contains a meaningful value. Therefore, for all groups of the second point cloud data set P'(x, y, z), the information entropy of the projection image Zimg along the z axis is calculated, so as to obtain the rotation angle with the maximum information amount.
Firstly, image normalization is performed on the projection images Zimg along the z axis corresponding to all groups of third point cloud data, generating normalized projection images Zimg' along the z axis.
The projection image Zimg along the z axis is normalized within the measuring range, so that the normalized gray value Zimg'(i, j) of a third three-dimensional point lies in [0, 255], where the measuring range is determined by the maximum measuring height of the object to be grabbed. The normalization formula of the projection image Zimg along the z axis is:
Zimg’(i,j)=255*(Zimg(i,j)-Zmin)/(Zmax-Zmin)
where Zmin is the minimum gray value in the projected image Zimg along the z-axis and Zmax is the maximum gray value in the projected image Zimg along the z-axis.
Zimg (i, j) is the gray scale value of the third three-dimensional point (i, j), and Zimg' (i, j) is the normalized gray scale value of the third three-dimensional point (i, j).
And secondly, performing mean filtering on the normalized projection image Zimg 'along the z-axis to generate a filtered image Zimgmean'.
And counting the frequency of the distribution characteristic f (alpha, beta) of the image, wherein alpha is the gray value of the normalized projection image Zimg 'along the z-axis, alpha belongs to (0,255), beta is the gray value of the filtered image Zimgmean', beta belongs to (0,255), and f (alpha, beta) is the frequency of the occurrence of the gray value combination (alpha, beta).
Then, according to the preset step increment, within the preset step range, the entropy value H(θ) of the projection image along the z axis is calculated, as shown in FIG. 5, by the formulas:
H(θ) = -∑ p_αβ · log(p_αβ)
p_αβ = f(α, β) / W
W = N * M
where W is the image scale, i.e. the number of pixels occupied by the image, and p_αβ is the feature probability.
Finally, the minimum of the entropy values H(θ) is selected, and the angle θ corresponding to this minimum is recorded as the selected angle θs.
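The entropy computation described above (normalize, mean-filter, count gray value combinations, then H = -∑ p log p) can be sketched as follows; the 3x3 mean-filter window and the rounding to integer gray levels are assumptions, since the text specifies mean filtering but not a window size:

```python
import numpy as np

def projection_entropy(Zimg):
    """Entropy H of a z projection image per the scheme in the text:
    alpha = normalized gray value, beta = mean-filtered gray value,
    p_ab = f(alpha, beta) / W with W = N * M.  The 3x3 window of the
    mean filter is an assumption (the text does not give a size)."""
    zmin, zmax = float(Zimg.min()), float(Zimg.max())
    if zmax == zmin:
        return 0.0  # perfectly flat image carries no information
    # normalize to integer gray levels in [0, 255]
    norm = np.round(255.0 * (Zimg - zmin) / (zmax - zmin)).astype(int)
    # 3x3 mean filter; edges replicate the border pixels
    padded = np.pad(norm.astype(float), 1, mode='edge')
    acc = np.zeros_like(norm, dtype=float)
    for di in range(3):
        for dj in range(3):
            acc += padded[di:di + norm.shape[0], dj:dj + norm.shape[1]]
    beta = np.round(acc / 9.0).astype(int)
    # joint frequency f(alpha, beta) and feature probability p_ab
    W = norm.size
    f = np.zeros((256, 256))
    np.add.at(f, (norm.ravel(), beta.ravel()), 1.0)
    p = f[f > 0] / W
    return float(-np.sum(p * np.log(p)))

def select_angle(zimgs_by_angle):
    """zimgs_by_angle: {theta: Zimg}; returns the theta of minimum entropy."""
    return min(zimgs_by_angle, key=lambda t: projection_entropy(zimgs_by_angle[t]))
```

A projection perpendicular to the grabbing plane yields large flat regions, few distinct (alpha, beta) combinations, and hence a low entropy, which is why the minimum identifies θs.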
And 3, calculating a histogram of the projected image and a segmentation position corresponding to the histogram according to the selected angle, and generating a point cloud segmentation result of the object to be captured according to the segmentation position.
Further, the step 3 specifically includes:
Step 31, according to the selected angle θs, the corresponding projection image Zimg along the z axis is determined, the histogram of the projection image Zimg along the z axis is calculated, and median filtering is performed on the calculated histogram to generate the median array HistMedian(g).
Specifically, the histogram reflects the clustering of gray values; for the projection image Zimg along the z axis, it reflects the clustering of three-dimensional points along the z axis, i.e. points whose z values are close, in other words points approximately in the same plane. Since the data contains errors, the histogram contains spurious peaks; because this embodiment needs to analyze the trend of the histogram, a median filtering algorithm is used to filter out the peaks while retaining the trend of the histogram for subsequent analysis.
According to the selected angle θs, the corresponding projection images Ximgθ along the x-axis, Yimgθ along the y-axis, and Zimgθ along the z-axis are determined.
According to the measuring range of the object to be grabbed, Zimgθ is normalized to the range [0,255], and the normalized image histogram hist(g) is calculated as:

hist(g) = num(g) / W

where g is the image gray level and num(g) is the number of pixels in the image with gray level g.
Then, median filtering with a window of 5 is performed on the calculated image histogram hist(g) to generate the median array HistMedian(g):

HistMedian(g) = Median(hist(g-2), …, hist(g+2))

where Median(·) is the middle value after sorting the values hist(k) for gray levels k from g-2 to g+2.
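A minimal NumPy sketch of this step, under the assumptions that hist(g) is normalized by the pixel count W and that the histogram borders are edge-padded for the window-5 median filter:

```python
import numpy as np

def histogram_median(zimg_norm: np.ndarray, window: int = 5) -> np.ndarray:
    """Gray-level histogram of the normalized z-projection, median-filtered
    to suppress spurious peaks while keeping the overall trend."""
    hist = np.bincount(zimg_norm.ravel(), minlength=256).astype(np.float64)
    hist /= zimg_norm.size                    # hist(g) = num(g) / W
    half = window // 2
    padded = np.pad(hist, half, mode='edge')  # border handling is an assumption
    return np.array([np.median(padded[g:g + window]) for g in range(256)])
```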
Step 32, the segmentation points of the median array HistMedian(g) are calculated, the segmentation position Fz(k) of each segmentation point F(k) is back-calculated from the normalization formula of the projection image Zimg along the z-axis, point cloud segmentation is performed on the projection image Zimgθ along the z-axis corresponding to the selected angle θs according to the segmentation positions Fz(k), and the point cloud segmentation result of the object to be grabbed is generated from the segmentation result of the projection image Zimgθ along the z-axis.
Specifically, the discrete median array HistMedian(g) is first differentiated:

Hist'(g) = HistMedian(g+1) - HistMedian(g), g ∈ [0, 254]
Then, using a state machine, the derivative Hist'(g) of the median array HistMedian(g) is traversed, the valley inflection points corresponding to the array F are searched within the search range, and they are recorded as segmentation points.
The state machine is set as follows:

when Hist'(g) > 0, the state machine Sta = 1;

when Hist'(g) = 0, the state machine Sta = 0;

when Hist'(g) < 0, the state machine Sta = -1.

When the state machine Sta changes from -1 or 0 to 1, the corresponding gray level g is recorded and stored in the array F. The array F has length w; each element of F stores the gray value F(k) of one inflection point in the histogram, k ∈ [0, w).
The valley inflection points corresponding to the array F are then searched according to the preset parameter MinDis. As shown in fig. 6, within the search range g ∈ (F(k) - MinDis, F(k) + MinDis), non-minimum suppression is applied to each element of the array F; the lower inflection points of the histogram, i.e., the valley inflection points, are retained and recorded as segmentation points F(k).
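The derivative state machine and the non-minimum suppression can be sketched as follows; the function name and the default MinDis value are hypothetical:

```python
import numpy as np

def find_valley_points(hist_median: np.ndarray, min_dis: int = 5) -> list:
    """Valley (trough) inflection points of a median-filtered histogram."""
    deriv = np.diff(hist_median)     # Hist'(g) = HistMedian(g+1) - HistMedian(g)
    candidates = []
    sta = 0
    for g in range(len(deriv)):
        new_sta = 1 if deriv[g] > 0 else (-1 if deriv[g] < 0 else 0)
        if new_sta == 1 and sta in (-1, 0):
            candidates.append(g)     # Sta changed from -1 or 0 to 1: upward turn
        sta = new_sta
    valleys = []
    for g in candidates:
        lo, hi = max(0, g - min_dis), min(len(hist_median), g + min_dis + 1)
        if hist_median[g] == hist_median[lo:hi].min():
            valleys.append(g)        # non-minimum suppression within +/- MinDis
    return valleys
```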
More specifically, as shown in fig. 7, the projection image Zimgθ along the z-axis is segmented according to the segmentation points F(k). After segmentation, the segmented regions can be screened according to the number of three-dimensional points they contain, and regions with too few points are screened out to remove interference. This is possible because Zimgθ has a one-to-one correspondence with Ximgθ and Yimgθ, that is:

each group of the third point cloud data set Ppjt(xp, yp, zp) corresponds to three projection images after projection along the x, y and z axes, and each projection image corresponds to the histogram, therefore:

Ppjt(x(i,j), y(i,j), z(i,j)) = [Ximgθ(i,j), Yimgθ(i,j), Zimgθ(i,j)]
Further, from the segmentation result of the projection image Zimgθ along the z-axis, the point cloud segmentation result can be generated, as shown in fig. 8.
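As a hedged illustration of applying the segmentation points to the normalized z-projection, the following sketch labels each pixel by the histogram interval its gray value falls into and screens out small regions; the `min_points` threshold and the zero label for discarded pixels are assumptions, since the patent does not fix these details:

```python
import numpy as np

def segment_by_valleys(zimg_norm: np.ndarray, valleys: list,
                       min_points: int = 50) -> np.ndarray:
    """Label pixels of the normalized z-projection by histogram interval,
    then discard regions containing fewer than min_points pixels."""
    edges = np.array([0] + sorted(valleys) + [256])
    # Interval index per pixel; labels start at 1, 0 marks discarded pixels
    labels = np.digitize(zimg_norm, edges[1:-1]) + 1
    for lab in np.unique(labels):
        if np.count_nonzero(labels == lab) < min_points:
            labels[labels == lab] = 0
    return labels
```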
Step 4, determining the center coordinate of the grabbing plane and the clamping jaw posture of the object to be grabbed according to the point cloud segmentation result, and generating and sending grabbing information to the manipulator according to the center coordinate and the clamping jaw posture, wherein the grabbing information is used for controlling the manipulator to grab the object to be grabbed.
Specifically, during grabbing the manipulator is generally suspended above the object to be grabbed and grabs according to the angle of the object, so a grabbing position (x, y, z) and a clamping jaw posture (clamping jaw direction vector) need to be provided to the manipulator.
Let the segmented point cloud set be Psg(x, y, z); its center can be expressed as:

PsgCenter = (Mean(x), Mean(y), Mean(z))
The grabbing direction can be calculated from the average normal vector of the plane. Take any point E0 of the plane and search for the point E1 nearest to E0 and the second-nearest point E2; then the normal vector at E0 can be expressed as:

e = (E1 - E0) × (E2 - E0)

The normal vector e of each point in the point set is calculated, and the normal vectors are averaged to obtain the average normal vector:

ē = Mean(e)
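A compact sketch of the grasp-pose computation described above (nearest-neighbor normals, averaged). Orienting every normal toward +z before averaging is an added assumption to keep opposite normals from canceling:

```python
import numpy as np

def grasp_pose(psg: np.ndarray):
    """Center and average normal vector of a segmented point cloud Psg.

    psg: (n, 3) array of 3D points on one grabbing plane.
    Returns (center, mean_normal)."""
    center = psg.mean(axis=0)            # PsgCenter = (Mean(x), Mean(y), Mean(z))
    normals = []
    for i, e0 in enumerate(psg):
        d = np.linalg.norm(psg - e0, axis=1)
        d[i] = np.inf                    # exclude the point itself
        idx = np.argsort(d)[:2]          # nearest point E1, second-nearest E2
        e1, e2 = psg[idx[0]], psg[idx[1]]
        n = np.cross(e1 - e0, e2 - e0)   # normal at E0
        norm = np.linalg.norm(n)
        if norm > 0:                     # skip degenerate (collinear) triples
            n = n / norm
            if n[2] < 0:                 # orient toward +z (assumption)
                n = -n
            normals.append(n)
    return center, np.mean(normals, axis=0)
```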
For simple objects to be grabbed, the segmented point cloud can be used directly for positioning and guidance. For example, for a packed carton, the point cloud with the most points within a specific height range can be selected as the grabbing plane, the center of gravity of the points in that plane used as the grabbing position, and the normal vector direction of the plane used as the clamping jaw posture to grab the box.
Example two:
The present embodiment provides a point cloud segmentation apparatus that grabs an object to be grabbed using the point cloud segmentation method of the first embodiment. The point cloud segmentation apparatus is disposed above a conveying apparatus and connected to a motion mechanism. The apparatus includes a three-dimensional sensor for performing a single scan of the object to be grabbed above the conveying apparatus and generating a first point cloud data set, and further includes: a data set generating unit, an angle determining unit, a segmentation unit and an information generating unit;
In this embodiment, the plane normal direction of the object to be grabbed is set as the z-axis, the advancing direction of the object as the y-axis, and the right side of the advancing direction as the x-axis, establishing a three-dimensional rectangular coordinate system. A single three-dimensional sensor arranged above the conveyor belt performs a single scan of the object to be grabbed to acquire the first point cloud data set P(x, y, z), which is composed of a plurality of three-dimensional points; the coordinate of each three-dimensional point is (x, y, z) and its gray value is P(x, y, z).
The data set generating unit is used for respectively calculating rotating data sets of the first point cloud data set around an x axis and a y axis according to the preset step increment, combining the first point cloud data set and the rotating data sets and generating a second point cloud data set;
Specifically, in the present embodiment, the preset step increment Δθ is set to 10°, and the preset step range is θt ∈ [-30°, 30°]. During rotation about the x-axis, when θx = 0 the obtained x-axis point cloud data set P'θx is the first point cloud data set P(x, y, z) itself; θx = 0 is therefore omitted, so θx ∈ {-30°, -20°, -10°, 10°, 20°, 30°}, and for the same reason, during rotation about the y-axis, θy ∈ {-30°, -20°, -10°, 10°, 20°, 30°}. The x-axis point cloud data sets P'θx and the y-axis point cloud data sets P'θy thus each contain 6 groups of second point cloud data.
The x-axis point cloud data set P'θx is obtained by rotating each three-dimensional point of P(x, y, z) about the x-axis by θx:

x' = x, y' = y·cos θx - z·sin θx, z' = y·sin θx + z·cos θx

The y-axis point cloud data set P'θy is obtained by rotating each three-dimensional point of P(x, y, z) about the y-axis by θy:

x' = x·cos θy + z·sin θy, y' = y, z' = -x·sin θy + z·cos θy
The combined second point cloud data set P'(x, y, z) thus includes 12 groups of second point cloud data plus the 1 original first point cloud data set P(x, y, z), and each group of point cloud data contains a plurality of three-dimensional points.
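The construction of the second point cloud data set can be sketched as follows; the function name is hypothetical, and standard right-handed rotation matrices about the x- and y-axes are assumed:

```python
import numpy as np

def build_second_dataset(p: np.ndarray, step_deg: float = 10.0,
                         max_deg: float = 30.0) -> list:
    """Rotate the first point cloud P about the x- and y-axes in preset steps.

    p: (n, 3) array. Returns the original set plus the rotated sets
    (theta = 0 omitted): 13 sets for the default 10-degree step over
    [-30, 30] degrees."""
    angles = [a for a in np.arange(-max_deg, max_deg + step_deg, step_deg)
              if a != 0.0]
    sets = [p]
    for a in angles:
        t = np.radians(a)
        rx = np.array([[1, 0, 0],
                       [0, np.cos(t), -np.sin(t)],
                       [0, np.sin(t),  np.cos(t)]])   # rotation about x-axis
        ry = np.array([[ np.cos(t), 0, np.sin(t)],
                       [0, 1, 0],
                       [-np.sin(t), 0, np.cos(t)]])   # rotation about y-axis
        sets.append(p @ rx.T)
        sets.append(p @ ry.T)
    return sets
```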
The angle determining unit is used for selecting a three-dimensional point with the maximum z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting the three-dimensional point on each plane of a three-dimensional coordinate system respectively to generate a projection image, and determining a selection angle according to the minimum entropy value of the projection image;
Further, the motion mechanism is a manipulator, and the angle determining unit specifically includes: a projection module and a selection module;
the projection module is used for selecting a three-dimensional point with the maximum z coordinate value from each group of point cloud data of the second point cloud data set according to a preset point cloud projection resolution, generating a third point cloud data set, projecting on three coordinate planes and generating a projection image, wherein the projection image comprises an image projected along an x axis, an image projected along a y axis and an image projected along a z axis;
preferably, the preset point cloud projection resolution is 3 times of the minimum precision of the xy plane of the three-dimensional sensor.
In this embodiment, the accuracy of the three-dimensional sensor in the x-axis direction is 0.05 mm and in the y-axis direction 0.1 mm, so the preset point cloud projection resolution is ΔS = 0.05 mm × 3 = 0.15 mm.
The shooting range of the three-dimensional sensor on the xy plane is divided into N × M regions according to the preset point cloud projection resolution ΔS, and any region is denoted S(i, j), where i = 0, 1, 2, …, N-1 and j = 0, 1, 2, …, M-1, with

N = (max(X) - min(X)) / ΔS

M = (max(Y) - min(Y)) / ΔS

where max(·) is the maximum coordinate value of the three-dimensional sensor's shooting range along the corresponding coordinate axis, and min(·) is the minimum coordinate value.
For each group of second point cloud data of the second point cloud data set P'(x, y, z), the three-dimensional point with the maximum z coordinate value is selected in each region S(i, j); M × N three-dimensional points can thus be selected from each group and recorded as third point cloud data, and the 13 groups of third point cloud data form the third point cloud data set Ppjt(xp, yp, zp). The third point cloud data set Ppjt(xp, yp, zp) is then projected: each group of third point cloud data generates three projection images, recorded as the x-axis projection image Ximg, the y-axis projection image Yimg and the z-axis projection image Zimg. The width and height of the z-axis projection image Zimg are N and M respectively; the third three-dimensional point coordinate I(i, j) in the image Zimg corresponds to the region S(i, j), and its gray value is the z coordinate value zpij of the three-dimensional point with the maximum z value in S(i, j).

The projection image Ximg along the x-axis has the same size as the projection image Zimg along the z-axis; the point I(i, j) in Ximg corresponds to the region S(i, j), and its gray value is the x coordinate value xpij of the three-dimensional point with the maximum z value in S(i, j).

The projection image Yimg along the y-axis has the same size as the projection image Zimg along the z-axis; the point I(i, j) in Yimg corresponds to the region S(i, j), and its gray value is the y coordinate value ypij of the three-dimensional point with the maximum z value in S(i, j).
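The max-z grid projection that produces Ximg, Yimg, and Zimg can be sketched as below; how empty cells S(i, j) with no point are filled is not specified in the patent, so NaN is used as an assumption:

```python
import numpy as np

def project_max_z(points: np.ndarray, delta_s: float):
    """Project a point cloud onto an xy grid, keeping the highest point
    per cell. points: (n, 3) array. Returns (ximg, yimg, zimg)."""
    x, y, z = points.T
    i = ((x - x.min()) / delta_s).astype(int)   # column index over N cells
    j = ((y - y.min()) / delta_s).astype(int)   # row index over M cells
    n_cols, m_rows = i.max() + 1, j.max() + 1
    ximg = np.full((m_rows, n_cols), np.nan)
    yimg = np.full((m_rows, n_cols), np.nan)
    zimg = np.full((m_rows, n_cols), np.nan)
    # Process points in ascending z so the maximum-z point wins in each cell
    for k in np.argsort(z):
        zimg[j[k], i[k]] = z[k]
        ximg[j[k], i[k]] = x[k]
        yimg[j[k], i[k]] = y[k]
    return ximg, yimg, zimg
```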
The selection module is used for calculating, according to the preset step increment and within the preset step range, the entropy value H(θ) of the projection image along the z-axis among the projection images, selecting the angle θ corresponding to the minimum entropy value H(θ), and recording it as the selected angle θs.
Specifically, in practical applications, a projection image perpendicular to the plane to be measured shows data missing or data jumping at faults caused by occlusion. The z direction of the point cloud data acquired by the three-dimensional sensor is perpendicular to the placement plane of the object to be grabbed; however, due to installation errors and conditions such as the top and bottom surfaces of the object not being parallel, too many data points overlap during projection. That is, one region S(i, j) contains multiple data points, and since only the three-dimensional point with the maximum z coordinate value is sampled, the data of the remaining three-dimensional points never enters subsequent calculation.
To improve the utilization of the point cloud data, each region S(i, j) should contain a meaningful value. Therefore, for all groups of the second point cloud data set P'(x, y, z), the information entropy of the projection image Zimg along the z-axis is calculated to obtain the rotation angle with the maximum amount of information.
First, image normalization is performed on the projection images Zimg along the z-axis corresponding to all groups of third point cloud data, generating normalized projection images Zimg' along the z-axis.
The projection image Zimg along the z-axis is normalized within the measuring range, so that the normalized gray value Zimg'(i, j) of each third three-dimensional point lies within [0, 255], where the measuring range is determined by the maximum measuring height of the object to be grabbed. The normalization formula for the projection image Zimg along the z-axis is:
Zimg’(i,j)=255*(Zimg(i,j)-Zmin)/(Zmax-Zmin)
where Zmin is the minimum gray value in the projected image Zimg along the z-axis and Zmax is the maximum gray value in the projected image Zimg along the z-axis.
Zimg (i, j) is the gray scale value of the third three-dimensional point (i, j), and Zimg' (i, j) is the normalized gray scale value of the third three-dimensional point (i, j).
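The normalization formula can be written directly in NumPy; a minimal sketch (the uint8 output type is an assumption):

```python
import numpy as np

def normalize_zimg(zimg: np.ndarray) -> np.ndarray:
    """Normalize the z-axis projection image to [0, 255]:
    Zimg'(i,j) = 255 * (Zimg(i,j) - Zmin) / (Zmax - Zmin)."""
    zmin, zmax = np.nanmin(zimg), np.nanmax(zimg)
    out = 255.0 * (zimg - zmin) / (zmax - zmin)
    return out.astype(np.uint8)
```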
Second, mean filtering is performed on the normalized projection image Zimg' along the z-axis to generate the filtered image ZimgMean'.
The frequency of the joint distribution feature f(α,β) of the image is counted, where α is the gray value of the normalized projection image Zimg' along the z-axis, α∈[0,255], β is the gray value of the filtered image ZimgMean', β∈[0,255], and f(α,β) is the number of occurrences of the gray value combination (α,β).
Then, according to the preset step increment, in the preset step range, calculating an entropy value H (theta) of the projection image along the z axis in the projection image, wherein a calculation formula of the entropy value H (theta) is as follows:
H(θ)=-∑pαβlog(pαβ)
Figure GDA0003507479320000161
W=N*M
where W is the image scale, i.e. the number of pixels occupied by an image, pαβIs the feature probability.
Finally, the minimum of the entropy values H(θ) is selected, and the angle θ corresponding to this minimum entropy value is recorded as the selected angle θs.
The segmentation unit is used for calculating a histogram of the projected image and a segmentation position corresponding to the histogram according to the selected angle, and generating a point cloud segmentation result of the object to be captured according to the segmentation position;
further, the segmentation unit specifically includes: the median calculation module and the result generation module;
The median calculation module is used for determining, according to the selected angle θs, the corresponding projection image Zimg along the z-axis, calculating the histogram of the projection image Zimg along the z-axis, and performing median filtering on the histogram to generate the median array HistMedian(g);

Specifically, according to the selected angle θs, the corresponding projection images Ximgθ along the x-axis, Yimgθ along the y-axis, and Zimgθ along the z-axis are determined.
According to the measuring range of the object to be grabbed, Zimgθ is normalized to the range [0,255], and the normalized image histogram hist(g) is calculated as:

hist(g) = num(g) / W

where g is the image gray level and num(g) is the number of pixels in the image with gray level g.
Then, performing median filtering with a window of 5 on the calculated image histogram hist (g) to generate a median array histmedian (g), wherein a calculation formula of the median filtering is as follows:
Figure GDA0003507479320000172
in the formula, Medrain (hist (k)) is the middle value after the values of hist (k) corresponding to the gray values g-2 to g +2 are sorted.
The result generation module is used for calculating the segmentation points of the median array HistMedian(g), back-calculating the segmentation position Fz(k) of each segmentation point F(k) from the normalization formula of the projection image Zimg along the z-axis, performing point cloud segmentation on the projection image Zimgθ along the z-axis according to the segmentation positions Fz(k), and generating the point cloud segmentation result of the object to be grabbed from the segmentation result of the projection image Zimgθ along the z-axis.
Specifically, the discrete median array HistMedian(g) is first differentiated:

Hist'(g) = HistMedian(g+1) - HistMedian(g), g ∈ [0, 254]
Then, using a state machine, the derivative Hist'(g) of the median array HistMedian(g) is traversed, the valley inflection points corresponding to the array F are searched within the search range, and they are recorded as segmentation points.
The state machine is set as follows:

when Hist'(g) > 0, the state machine Sta = 1;

when Hist'(g) = 0, the state machine Sta = 0;

when Hist'(g) < 0, the state machine Sta = -1.

When the state machine Sta changes from -1 or 0 to 1, the corresponding gray level g is recorded and stored in the array F. The array F has length w; each element of F stores the gray value F(k) of one inflection point in the histogram, k ∈ [0, w).
The valley inflection points corresponding to the array F are searched according to the preset parameter MinDis: within the search range g ∈ (F(k) - MinDis, F(k) + MinDis), non-minimum suppression is applied to each element of the array F; the lower inflection points of the histogram, i.e., the valley inflection points, are retained and recorded as segmentation points F(k).
More specifically, the projection image Zimgθ along the z-axis is segmented according to the segmentation points F(k). After segmentation, the segmented regions can be screened according to the number of three-dimensional points they contain, and regions with too few points are screened out to remove interference. This is possible because Zimgθ has a one-to-one correspondence with Ximgθ and Yimgθ, that is:

each group of the third point cloud data set Ppjt(xp, yp, zp) corresponds to three projection images after projection along the x, y and z axes, and each projection image corresponds to the histogram, therefore:

Ppjt(x(i,j), y(i,j), z(i,j)) = [Ximgθ(i,j), Yimgθ(i,j), Zimgθ(i,j)]
Further, the point cloud segmentation result can be generated from the segmentation result of the projection image Zimgθ along the z-axis.
The information generating unit is used for determining the central coordinate and the clamping jaw posture of the grabbing plane according to the point cloud segmentation result, and generating and sending grabbing information to the moving mechanism according to the central coordinate and the clamping jaw posture.
Specifically, during grabbing the manipulator is generally suspended above the object to be grabbed and grabs according to the angle of the object, so a grabbing position (x, y, z) and a clamping jaw posture (clamping jaw direction vector) need to be provided to the manipulator.
Let the segmented point cloud set be Psg(x, y, z); its center can be expressed as:

PsgCenter = (Mean(x), Mean(y), Mean(z))
The grabbing direction can be calculated from the average normal vector of the plane. Take any point E0 of the plane and search for the point E1 nearest to E0 and the second-nearest point E2; then the normal vector at E0 can be expressed as:

e = (E1 - E0) × (E2 - E0)

The normal vector e of each point in the point set is calculated, and the normal vectors are averaged to obtain the average normal vector:

ē = Mean(e)
For simple objects to be grabbed, the segmented point cloud can be used directly for positioning and guidance. For example, for a packed carton, the point cloud with the most points within a specific height range can be selected as the grabbing plane, the center of gravity of the points in that plane used as the grabbing position, and the normal vector direction of the plane used as the clamping jaw posture to grab the box.
The technical solution of the present application has been described in detail above with reference to the accompanying drawings. The present application provides a point cloud segmentation method and apparatus, wherein the method includes: step 1, acquiring a first point cloud data set of an object to be grabbed, calculating rotation data sets of the first point cloud data set according to a preset step increment, and combining the first point cloud data set and the rotation data sets to generate a second point cloud data set; step 2, selecting the three-dimensional point with the largest z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting it, and determining a selection angle according to the minimum entropy value of the projection image; step 3, calculating the histogram and segmentation position of the projection image according to the selection angle, and generating the point cloud segmentation result of the object to be grabbed; and step 4, determining the center coordinate of the grabbing plane and the clamping jaw posture according to the point cloud segmentation result, and generating grabbing information from the center coordinate and the clamping jaw posture. Through this technical solution, the influence of deviating data on positioning is reduced and object grabbing efficiency is improved.
The steps in the present application may be reordered, combined, or deleted according to actual requirements.

The units in the apparatus may be merged, divided, or deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and not restrictive of the application of the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention without departing from the scope and spirit of the application.

Claims (10)

1. A point cloud segmentation method, comprising:
step 1, acquiring a first point cloud data set of an object to be grabbed, respectively calculating rotation data sets of the first point cloud data set around an x axis and a y axis according to a preset step increment, and combining the first point cloud data set and the rotation data sets to generate a second point cloud data set;
step 2, selecting a three-dimensional point with the largest z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting the three-dimensional point on each plane of a three-dimensional coordinate system respectively to generate a projection image, and determining a selection angle according to the minimum entropy value of the projection image, wherein the step 2 specifically comprises the following steps:
step 21, according to the preset point cloud projection resolution, selecting a three-dimensional point with the maximum z coordinate value from each group of point cloud data of the second point cloud data set to generate a third point cloud data set, and projecting on three coordinate planes to generate the projection image, wherein the projection image comprises an image projected along an x axis, an image projected along a y axis and an image projected along a z axis;
step 22, according to the preset step increment, within the preset step range, calculating the entropy value H(θ) of the projection image along the z-axis among the projection images, selecting the minimum entropy value H(θ), and recording the corresponding angle θ as the selection angle θs;
Step 3, calculating a histogram of the projected image and a segmentation position corresponding to the histogram according to the selected angle, and generating a point cloud segmentation result of the object to be captured according to the segmentation position;
and 4, determining the central coordinate and the clamping jaw posture of the grabbing plane according to the point cloud segmentation result, and generating grabbing information according to the central coordinate and the clamping jaw posture.
2. The point cloud segmentation method of claim 1, wherein the formula for calculating the entropy value is:

H(θ) = -∑ pαβ·log(pαβ)

pαβ = f(α,β) / W, W = N*M

where f(α,β) is the number of occurrences of the gray value combination (α,β), W is the image scale, i.e. the number of pixels occupied by the image, pαβ is the feature probability, α is the gray value of the normalized projection image Zimg' along the z-axis, α ∈ [0,255], and β is the gray value of the filtered image ZimgMean', β ∈ [0,255]; the filtered image ZimgMean' is an image obtained by mean filtering the normalized projection image Zimg' along the z-axis, and the normalized projection image Zimg' along the z-axis is an image obtained by image normalization of the projection image along the z-axis.
3. The point cloud segmentation method of claim 2, wherein the preset step increment is Δθ = 10°.
4. The point cloud segmentation method of claim 2, wherein the step 3 specifically comprises:
step 31, according to the selection angle θs, determining the corresponding projection image Zimg along the z-axis, calculating the histogram of the projection image Zimg along the z-axis, and performing median filtering on the histogram to generate a median array HistMedian(g);
step 32, calculating the segmentation points of the median array HistMedian(g), back-calculating the segmentation position Fz(k) of each segmentation point F(k) from the normalization formula of the projection image Zimg along the z-axis, performing point cloud segmentation on the projection image Zimgθ along the z-axis according to the segmentation positions Fz(k), and generating the point cloud segmentation result of the object to be captured from the segmentation result of the projection image Zimgθ along the z-axis.
5. The point cloud segmentation method of any of claims 2 to 4, wherein the preset point cloud projection resolution is 3 times of the minimum precision of an xy plane of a three-dimensional sensor.
6. The point cloud segmentation method according to claim 4, wherein generating the point cloud segmentation result of the object to be captured specifically comprises:
differentiating the median array HistMedian(g), traversing the derivative using a state machine, searching for the valley inflection points, and recording them as the segmentation points F(k);
segmenting the projection image Zimgθ along the z-axis according to the segmentation points F(k) to form segmented regions, and screening the segmented regions according to the number of three-dimensional points in each region;
and generating the point cloud segmentation result according to the screened segmentation area.
7. A point cloud segmentation apparatus, which employs the point cloud segmentation method as claimed in any one of claims 1 to 6, wherein the point cloud segmentation apparatus is disposed above a conveyor and connected to a motion mechanism, the point cloud segmentation apparatus includes a three-dimensional sensor, the three-dimensional sensor is configured to perform a single scan on an object to be captured above the conveyor to generate a first point cloud data set, and the point cloud segmentation apparatus further includes: a data set generating unit, an angle determining unit, a dividing unit and an information generating unit;
the data set generating unit is used for respectively calculating rotating data sets of the first point cloud data set around an x axis and a y axis according to a preset step increment, combining the first point cloud data set and the rotating data sets and generating a second point cloud data set;
the angle determining unit is used for selecting a three-dimensional point with the maximum z coordinate value in the second point cloud data set according to a preset point cloud projection resolution, projecting the three-dimensional point on each plane of a three-dimensional coordinate system respectively to generate a projection image, and determining a selection angle according to the minimum entropy value of the projection image;
the segmentation unit is used for calculating a histogram of the projected image and a segmentation position corresponding to the histogram according to the selection angle, and generating a point cloud segmentation result of the object to be captured according to the segmentation position;
the information generating unit is used for determining the central coordinate and the clamping jaw posture of the grabbing plane according to the point cloud segmentation result, and generating and sending grabbing information to the moving mechanism according to the central coordinate and the clamping jaw posture.
8. The point cloud segmentation apparatus according to claim 7, wherein the movement mechanism is a robot, and the angle determination unit specifically includes: the projection module selects the module;
the projection module is used for selecting a three-dimensional point with the maximum z coordinate value from each group of point cloud data of the second point cloud data set according to a preset point cloud projection resolution, generating a third point cloud data set, and projecting on three coordinate planes to generate the projection image, wherein the projection image comprises an image projected along an x axis, an image projected along a y axis and an image projected along a z axis;
the selection module is used for calculating, according to the preset step increment and within the preset step range, the entropy value of the image projected along the z axis, and recording the angle corresponding to the minimum entropy value as the selection angle.
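Read as an algorithm, the selection module scores each candidate rotation by the Shannon entropy of its z-axis projection image and keeps the angle with the lowest score. A minimal sketch under stated assumptions — the histogram bin count and all names are illustrative, not taken from the patent:

```python
import numpy as np

def image_entropy(img, bins=32):
    """Shannon entropy of an image's gray-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def select_angle(z_images_by_angle):
    """Return the rotation angle whose z-axis projection image has the
    lowest entropy, i.e. the cleanest top-down view of the object."""
    return min(z_images_by_angle, key=lambda a: image_entropy(z_images_by_angle[a]))

# A flat (constant) image has zero entropy, so its angle is selected.
flat = np.zeros((8, 8))
noisy = np.random.default_rng(0).random((8, 8))
print(select_angle({0.0: flat, 10.0: noisy}))  # 0.0
```

The intuition: when the top face of the object is perpendicular to the z axis, the depth image is nearly uniform and its histogram collapses into few bins, minimizing entropy.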
9. The point cloud segmentation apparatus according to claim 8, wherein the preset step increment is [formula image].
10. The point cloud segmentation apparatus according to claim 8, wherein the preset point cloud projection resolution is 3 times the minimum precision of the xy plane of the three-dimensional sensor.
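Claim 10 ties the pixel size of the projection image to the sensor's xy precision. A hypothetical rasterisation sketch — the precision value, point data, and function name are assumptions for illustration:

```python
import numpy as np

def project_z(points, resolution):
    """Rasterise a cloud into a top-down image: each pixel keeps the
    maximum z value among the points falling into it."""
    xy = np.floor(points[:, :2] / resolution).astype(int)
    xy -= xy.min(axis=0)                  # shift indices to start at 0
    w, h = xy.max(axis=0) + 1
    img = np.full((w, h), -np.inf)        # -inf marks empty pixels
    for (i, j), z in zip(xy, points[:, 2]):
        img[i, j] = max(img[i, j], z)
    return img

sensor_xy_precision = 0.1                 # hypothetical sensor spec (mm)
resolution = 3 * sensor_xy_precision      # the 3x factor of claim 10
pts = np.array([[0.0, 0.0, 1.0], [0.05, 0.05, 2.0], [0.4, 0.4, 3.0]])
img = project_z(pts, resolution)
print(img.shape)  # (2, 2): the first two points share one pixel
```

Choosing a pixel several times coarser than the sensor precision keeps each pixel populated by multiple measurements, so the projection image has few holes.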
CN201910879613.XA 2019-09-18 2019-09-18 Point cloud segmentation method and device Active CN110610501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910879613.XA CN110610501B (en) 2019-09-18 2019-09-18 Point cloud segmentation method and device

Publications (2)

Publication Number Publication Date
CN110610501A CN110610501A (en) 2019-12-24
CN110610501B true CN110610501B (en) 2022-04-29

Family

ID=68891456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910879613.XA Active CN110610501B (en) 2019-09-18 2019-09-18 Point cloud segmentation method and device

Country Status (1)

Country Link
CN (1) CN110610501B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330699B (en) * 2020-11-14 2022-09-16 重庆邮电大学 Three-dimensional point cloud segmentation method based on overlapping region alignment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN105844629A (en) * 2016-03-21 2016-08-10 河南理工大学 Automatic segmentation method for point cloud of facade of large scene city building
CN109872350A (en) * 2019-02-18 2019-06-11 重庆市勘测院 A novel point cloud auto-registration method
CN109872329A (en) * 2019-01-28 2019-06-11 重庆邮电大学 A fast ground point cloud segmentation method based on three-dimensional laser radar
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A three-dimensional laser radar point cloud target segmentation method based on depth maps

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2532948B (en) * 2014-12-02 2021-04-14 Vivo Mobile Communication Co Ltd Object Recognition in a 3D scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robust Segmentation in Laser Scanning 3D Point Cloud Data; Abdul Nurunnabi et al.; 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA); 2013-01-17; pp. 1-8 *
Fine segmentation of connected street trees in vehicle-borne LiDAR point clouds; Zhang Xitong et al.; Science of Surveying and Mapping (《测绘科学》); 2016-08-31; Vol. 41, No. 8; pp. 111-115 *

Similar Documents

Publication Publication Date Title
EP2927945B1 (en) X-ray inspection apparatus for inspecting semiconductor wafers
US8311311B2 (en) Optical aberration correction for machine vision inspection systems
US6501554B1 (en) 3D scanner and method for measuring heights and angles of manufactured parts
US6445807B1 (en) Image processing method and apparatus
EP2551633B1 (en) Three dimensional distance measuring device and method
CN110610501B (en) Point cloud segmentation method and device
CN110136047B (en) Method for acquiring three-dimensional information of static target in vehicle-mounted monocular image
US20110193953A1 (en) System and method for estimating the height of an object using tomosynthesis-like techniques
CN112102375B (en) Point cloud registration reliability detection method and device and mobile intelligent equipment
CN115298513A (en) Three-dimensional optical measurement mobile device for a rope with a rope attachment means
CN115816471A (en) Disordered grabbing method and equipment for multi-view 3D vision-guided robot and medium
EP1653406B1 (en) Method and apparatus for the correction of nonlinear field of view distortion of a digital imaging system
JPH10124704A (en) Device for preparing stereoscopic model and method therefor and medium for recording program for preparing stereoscopic model
EP0676057B1 (en) Process and device for taking a distance image
IL294522A (en) System and method for controlling automatic inspection of articles
CN110044296B (en) Automatic tracking method and measuring machine for 3D shape
US6977985B2 (en) X-ray laminography system having a pitch, roll and Z-motion positioning system
CA3199809A1 (en) Deep learning based image enhancement for additive manufacturing
JP6805200B2 (en) Movement control device, movement control method and movement control program
CN105758329A (en) Optical surface profile scanning system
WO2022260028A1 (en) Visual inspection device, visual inspection method, image generation device, and image generation method
NL2029928B1 (en) Method and camera system for detecting substrate positions in a substrate cassette
CN118212304A (en) Multi-camera high-precision calibration method based on image enhancement and deep learning and electronic equipment
CN117146710B (en) Dynamic projection three-dimensional reconstruction system and method based on active vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant