CN114677388A - Room layout dividing method based on unit decomposition and space division - Google Patents


Info

Publication number: CN114677388A
Application number: CN202210327868.7A
Authority: CN (China)
Prior art keywords: point, pixel, point cloud, space, room
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 宁小娟, 翟凌宇, 刘瑛, 金海燕, 隋连升
Current and original assignee: Xian University of Technology (the listed assignees may be inaccurate)
Application filed by Xian University of Technology
Priority to CN202210327868.7A
Publication of CN114677388A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a room layout dividing method based on unit decomposition and space division, implemented according to the following steps: acquire the unit decomposition result of an indoor scene; obtain the space segmentation result of the indoor scene; superimpose the two results, generate random points in the overlapping area, and determine which room each grid unit belongs to from the count of color labels carried by the random points inside it, thereby achieving an accurate division of the room layout. The method solves the problem in the prior art that, when an indoor scene with a complex internal structure is divided by segmenting structural elements of the indoor scene, severe occlusion makes the division impossible.

Description

Room layout dividing method based on unit decomposition and space division
Technical Field
The invention belongs to the technical field of artificial intelligence calculation methods, and relates to a room layout division method based on unit decomposition and space division.
Background
Indoor scenes are the environments most closely bound to people's daily life and work, and human activity makes their internal structure complex and diverse. For indoor scenes with a simple internal structure, accurate division of the room layout can be achieved by segmenting the scene's structural elements, and this approach remains robust even when the acquired indoor scene data contain missing regions. In indoor scenes with a complex internal structure, however, severe occlusion may occur, and the room layout can no longer be recovered by segmenting structural elements alone.
Disclosure of Invention
The invention aims to provide a room layout dividing method based on unit decomposition and space division, solving the prior-art problem that an indoor scene with a complex internal structure cannot be divided by segmenting its structural elements because of severe occlusion.
The technical scheme adopted by the invention is that the room layout dividing method based on unit decomposition and space division is implemented according to the following steps:
step 1, extracting wall surface point clouds from an indoor point cloud scene, dividing the wall surface point clouds into a plurality of segments, calculating a centroid point of each segment, fitting a straight line corresponding to the wall surface point clouds according to the centroid point, then carrying out fusion processing on the fitted straight lines, and then carrying out unit decomposition based on the result of the fusion processing to obtain a unit division result of the indoor scene;
step 2, after down-sampling the indoor scene, projecting the down-sampled indoor scene onto a two-dimensional plane, discretizing the projection points and generating a depth image; then binarizing the depth image and segmenting the indoor space by a morphological erosion operation to identify the number of rooms; finally, adding the regions which belong to room regions but have not yet been visited into the room space by wavefront propagation to obtain the space segmentation result of the indoor scene;
and 3, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2, generating random points in an overlapping area, and determining which room the grid unit belongs to according to the number of the color labels carried by the random points in each grid unit, thereby realizing accurate division of the room layout.
The present invention is also characterized in that,
the step 1 specifically comprises the following steps:
step 1.1, down-sampling the indoor scene to obtain a down-sampled indoor point cloud scene; taking the down-sampled indoor point cloud scene as input, dividing it into a plurality of horizontal point cloud slices, segmenting the wall points and indoor clutter in each horizontal point cloud slice by a region-growing clustering method, and finally retaining the wall points to obtain the wall surface point clouds corresponding to the horizontal point cloud slices;
step 1.2, dividing each wall surface point cloud extracted in the step 1.1 into a plurality of segments, calculating a centroid point of each segment, representing the characteristics of all points in each segment by using the centroid point, and then performing straight line fitting according to the calculated centroid point;
and step 1.3, projecting the straight line obtained in the step 1.2, fusing after projection, extending the fused line segment, calculating a two-dimensional arrangement data structure according to a CGAL library, and finally obtaining a two-dimensional unit decomposition result.
In step 1.1, a region-growing clustering method is used to segment the wall points and indoor clutter in each horizontal point cloud slice, and the wall points are finally retained; obtaining the wall surface point cloud corresponding to each horizontal point cloud slice is specifically as follows:
step 1.1.1, calculating the curvature of each point in each horizontal point cloud slice, sequencing the points according to the curvature values of the points, finding out the point with the minimum curvature, and adding the point into a seed point set;
step 1.1.2, for each seed point, if the K neighbor point of the seed point satisfies the formula (1) and the formula (2) at the same time, adding the point into a potential seed point list;
|n_p·n_s| > cos(θ_th) (1)
wherein n_p represents the normal vector of the current seed point; n_s represents the normal vector of a K-nearest neighbor of the current seed point; θ_th represents the smoothness threshold;
r_p < r_th (2)
wherein r_p represents the curvature of a K-nearest neighbor of the current seed point, and r_th represents the curvature threshold;
step 1.1.3, removing the points added to the potential seed point list in step 1.1.2 from the slice point cloud corresponding to the initial indoor point cloud;
step 1.1.4, clustering is carried out on the basis of step 1.1.3: the minimum cluster size is set to Min and the maximum cluster size to Max, all clusters whose point counts lie between Min and Max are retained, and different clusters are marked with different colors to distinguish them; the point clouds of the retained clusters are the acquired wall surface point clouds;
and step 1.1.5, repeating the steps 1.1.1 to 1.1.4, finishing clustering of all slices, and finally obtaining the wall point cloud corresponding to each slice.
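As an illustration only (not the patent's implementation), steps 1.1.1 to 1.1.4 can be sketched in Python; the function `region_grow`, its brute-force K-nearest-neighbor search, and the threshold values are all assumptions:

```python
import math

def region_grow(points, normals, curvatures, k=3,
                theta_th=math.radians(10.0), r_th=0.05):
    """Grow smooth clusters from low-curvature seeds (hypothetical helper;
    parameter values are assumptions, not the patent's)."""
    def knn(i):
        order = sorted(range(len(points)),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(points[i], points[j])))
        return [j for j in order if j != i][:k]

    unused = set(range(len(points)))
    clusters = []
    while unused:
        # step 1.1.1: seed the region at the point of minimum curvature
        seed = min(unused, key=lambda i: curvatures[i])
        cluster, front = {seed}, [seed]
        unused.discard(seed)
        while front:
            p = front.pop()
            for s in knn(p):
                if s not in unused:
                    continue
                # step 1.1.2: accept a neighbour when it satisfies (1) and (2)
                dot = abs(sum(a * b for a, b in zip(normals[p], normals[s])))
                if dot > math.cos(theta_th) and curvatures[s] < r_th:
                    cluster.add(s)
                    front.append(s)
                    unused.discard(s)   # step 1.1.3: remove grown points
        clusters.append(sorted(cluster))
    return clusters
```

On two synthetic wall bands with differing normals this yields two clusters; clusters whose size falls outside [Min, Max] would then be discarded as in step 1.1.4.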
The step 1.2 is specifically as follows:
step 1.2.1, inputting the wall surface point clouds extracted in the step 1.1, and respectively storing x, y and z coordinates of each point in each wall surface point cloud;
step 1.2.2, calculating the length and width of the wall surface area in space according to the x and y coordinates of the wall surface point cloud, wherein the length is x_max - x_min and the width is y_max - y_min, and x_max, x_min, y_max, y_min respectively represent the maximum and minimum x and y coordinates; setting the step length l = 0.4 m and dividing the wall surface point cloud in space into l × l grids, so that the total number of rows is raster_rows = (x_max - x_min)/l and the total number of columns is raster_cols = (y_max - y_min)/l;
Step 1.2.3, creating a one-dimensional array vector_4, a two-dimensional array col_4 and a three-dimensional array row_col, setting the size of the array vector_4 to 4, setting the size of the array col_4 to raster_cols · vector_4 according to the sizes of raster_rows and raster_cols, and setting the size of the array row_col to raster_rows · col_4;
step 1.2.4, storing the points of the wall surface point cloud into the corresponding rows and columns, wherein the position (row_idx, col_idx) of any point is (ceil((point[i].x - point_min.x)/l - 1), ceil((point[i].y - point_min.y)/l - 1)), point[i].x and point[i].y representing the x and y coordinates of the point, and point_min.x and point_min.y representing the x and y coordinates of the minimum point;
step 1.2.5, if the value of row_idx or col_idx is less than 0, setting the value to 0; then accumulating the x, y and z of the point selected in step 1.2.4 into the array entries row_col[row_idx][col_idx][0], row_col[row_idx][col_idx][1] and row_col[row_idx][col_idx][2] respectively, and increasing the point count row_col[row_idx][col_idx][3] by 1 for each point;
step 1.2.6, repeating step 1.2.4 and step 1.2.5 to divide all points belonging to the wall point cloud into corresponding grid spaces, and calculating the centroid point Q of each grid space, wherein the calculation formula is as shown in formula (3);
x_mean = row_col[i][j][0]/row_col[i][j][3], y_mean = row_col[i][j][1]/row_col[i][j][3], z_mean = row_col[i][j][2]/row_col[i][j][3], i = 1, …, m, j = 1, …, n (3)
wherein x_mean, y_mean and z_mean respectively represent the x, y and z coordinates of the centroid point Q; m represents the size of the array row_col; n represents the size of the array row_col[i]; row_col[i][j][0], row_col[i][j][1] and row_col[i][j][2] represent the accumulated x, y and z coordinates of the points of the extracted wall surface point cloud falling in grid (i, j); row_col[i][j][3] represents the number of points contained in grid (i, j);
step 1.2.7, randomly selecting three points from the plurality of centroid points corresponding to each wall surface point cloud to construct a plane, calculating the coefficients of the plane equation and the normal vector v of the constructed plane, and reconstructing the plane according to the solved plane equation coefficients; then constructing a projection line through each end point of a line segment in space along the normal vector of the plane and solving the intersection point of the plane and the projection line, this intersection point being the projection of that end point of the straight line segment in space; finally, connecting the corresponding projection points in sequence to obtain the projected line segment.
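The binning and centroid computation of steps 1.2.2 to 1.2.6 can be sketched as follows; a dictionary of running sums stands in for the patent's preallocated row_col array, and the cell indexing mirrors steps 1.2.4 and 1.2.5 (illustrative only):

```python
import math

def grid_centroids(points, l=0.4):
    """Bin 3-D wall points into l-by-l cells in the xy-plane and return
    the centroid of each occupied cell, as in formula (3)."""
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    cells = {}
    for x, y, z in points:
        # step 1.2.4: (row_idx, col_idx) = ceil((coord - min)/l) - 1,
        # clamped to 0 as in step 1.2.5
        idx = (max(0, math.ceil((x - xmin) / l) - 1),
               max(0, math.ceil((y - ymin) / l) - 1))
        sx, sy, sz, n = cells.get(idx, (0.0, 0.0, 0.0, 0))
        cells[idx] = (sx + x, sy + y, sz + z, n + 1)
    # formula (3): divide the accumulated sums by the point count
    return {idx: (sx / n, sy / n, sz / n)
            for idx, (sx, sy, sz, n) in cells.items()}
```

Each occupied cell contributes one centroid; these centroids are the inputs to the line fitting and projection of step 1.2.7.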
The step 1.3 is specifically as follows:
step 1.3.1, setting a predefined length value of a wall segment, then sorting a plurality of segments projected in step 1.2.7 from small to large according to the lengths of the segments, and directly deleting the segments with the lengths smaller than the predefined length value;
step 1.3.2, selecting the line segment with the largest length from the remaining line segments processed in the step 1.3.1, adding the line segment into a final result set, selecting one line segment from the remaining line segments, comparing the line segment with the line segment in the final result set, setting an included angle threshold and a distance threshold, and if the included angle between the two line segments is smaller than the given threshold, considering the two line segments to be parallel; on the basis of parallel, if the distance between the two parallel lines is smaller than a given threshold value, the two line segments are considered to be collinear, and the two line segments are fused;
step 1.3.3, repeating step 1.3.2 until the number of the remaining line segments is 0, and ending the whole process to obtain a fused line segment;
and step 1.3.4, extending the fused line segments, then calculating a two-dimensional arrangement data structure according to the CGAL library, dividing a two-dimensional plane of an indoor scene by using the extension line segments, and decomposing the two-dimensional plane into two-dimensional units.
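A minimal sketch of the greedy fusion in steps 1.3.1 to 1.3.3 (the function name, the threshold values, and the use of a segment midpoint for the collinearity test are assumptions):

```python
import math

def merge_segments(segs, min_len=0.5, ang_th=math.radians(5.0), dist_th=0.1):
    """Drop segments shorter than min_len, then repeatedly take the longest
    remaining segment and absorb every segment that is nearly parallel
    (angle < ang_th) and nearly collinear (line distance < dist_th)."""
    def length(s):
        (x1, y1), (x2, y2) = s
        return math.hypot(x2 - x1, y2 - y1)

    def angle(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1) % math.pi   # undirected angle

    def line_dist(a, b):
        # distance from b's midpoint to the supporting line of a
        (x1, y1), (x2, y2) = a
        mx = (b[0][0] + b[1][0]) / 2
        my = (b[0][1] + b[1][1]) / 2
        return abs((x2 - x1) * (y1 - my) - (x1 - mx) * (y2 - y1)) / length(a)

    # step 1.3.1: discard short segments, sort the rest by length
    rest = sorted((s for s in segs if length(s) >= min_len), key=length)
    result = []
    while rest:                         # step 1.3.3: until nothing remains
        base = rest.pop()               # step 1.3.2: longest remaining segment
        keep = []
        for s in rest:
            da = abs(angle(s) - angle(base))
            da = min(da, math.pi - da)
            if da < ang_th and line_dist(base, s) < dist_th:
                continue                # parallel and collinear: fused into base
            keep.append(s)
        rest = keep
        result.append(base)
    return result
```

The fused segments would then be extended and fed to CGAL's 2D arrangement computation as in step 1.3.4.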
The step 2 specifically comprises the following steps:
step 2.1, converting the point cloud of the indoor scene space to a two-dimensional image to obtain a corresponding grayscale image, and, in order to give the image a clear black-and-white appearance, binarizing the grayscale image to obtain a binary image reflecting the overall and local characteristics of the image;
step 2.2, assuming by default that the door of each room leading to other rooms or corridors is closed, iteratively eroding the accessible pixels in the binary image obtained in step 2.1 with a 3 x 3 structuring element, the accessible pixels being the white pixels of the binary image; after each erosion pass an erosion result image is obtained, and whether separable areas exist is judged from the contours on the erosion result image; if separable areas are obtained, the iteration is stopped and the separable areas are marked with different colors;
and 2.3, detecting the number of rooms according to the separable areas marked in step 2.2, the binary image now having three states, the white pixels representing unmarked areas within the accessible area; the white pixels are then expanded by wavefront propagation to obtain the final room space segmentation result.
The step 2.1 specifically comprises the following steps:
step 2.1.1, projecting the indoor point cloud sampled in the step 1.1 onto an xoy plane, finding out the maximum x and y coordinates and the minimum x and y coordinates from the projection points, and determining the width and height of the image according to the maximum x and y coordinate values;
step 2.1.2, discretizing the projection points into a two-dimensional grid to obtain a plurality of pixel grids, setting the size of each pixel as pixelsize, determining the size of each pixel according to the thickness of a wall, the size of the point cloud and the density of the point cloud, judging whether the pixel is gray or black according to the number of points contained in each pixel, and if one pixel at least contains one point, indicating that the pixel is gray; if one pixel does not contain a point, the pixel is indicated to be black, and a gray level image is obtained;
step 2.1.3, through proper threshold selection, setting the gray value of each pixel of the grayscale image obtained in step 2.1.2 to 0 or 255: a pixel whose gray value is greater than or equal to the threshold is judged to belong to a specific object and its gray value is represented by 255; otherwise the pixel is excluded from the object region and its gray value is represented by 0, yielding a binary image.
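The projection and binarization of step 2.1 can be sketched as below; here the two thresholding stages are collapsed into one occupancy test (a cell with at least one point becomes 255), and the pixel_size value is an assumption, since the patent derives it from wall thickness and point density:

```python
import math

def binarize_scan(points, pixel_size=0.1):
    """Project 3-D points onto the xy-plane, rasterise them into
    pixel_size cells, and binarise: occupied cells 255, empty cells 0
    (illustrative sketch of steps 2.1.1-2.1.3)."""
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    xmax = max(p[0] for p in points)
    ymax = max(p[1] for p in points)
    # image extent from the extreme projected coordinates (step 2.1.1)
    w = int(math.floor((xmax - xmin) / pixel_size)) + 1
    h = int(math.floor((ymax - ymin) / pixel_size)) + 1
    img = [[0] * w for _ in range(h)]
    for x, y, _z in points:
        c = int((x - xmin) / pixel_size)
        r = int((y - ymin) / pixel_size)
        img[r][c] = 255           # at least one point falls in this cell
    return img
```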
The step 2.2 specifically comprises the following steps:
step 2.2.1, starting the first erosion from the central position of the binary image obtained in step 2.1, eroding the accessible pixels in the binary image with a 3 x 3 structuring element, the accessible pixels being the white pixels of the binary image, and obtaining an erosion result image after one complete erosion;
step 2.2.2, searching contours in the corrosion result graph, establishing only two level relations of all the contours, namely a top layer and an inner layer, and counting the number findContourNums of the found contours;
step 2.2.3, each contour is checked; if the current contour has no corresponding embedded contour, the area contourArea1 enclosed by the current contour is calculated, and from it the room area roomArea1 is calculated, the calculation formula being shown in (4);
roomArea1=cellSize*cellSize*contourArea1 (4)
wherein the parameter roomArea1 represents the room area; cellSize represents the size of one cell grid; contourArea1 represents the area enclosed by the current contour;
step 2.2.4, on the basis of step 2.2.3, judging whether the currently detected contour is an embedded contour of other contours; if the current contour is the embedded contour of one of the contours, the area contourArea2 enclosed by the current contour is calculated and the room area roomArea2 is updated at the same time, the area calculation formula being shown as formula (5);
roomArea2=cellSize*cellSize*contourArea2-roomArea1 (5)
wherein the parameter roomArea2 represents the updated room area;
step 2.2.5, judging whether the current room area is between an upper limit threshold and a lower limit threshold, wherein the lower limit threshold represents the minimum room area in the data, the upper limit threshold represents the maximum room area, and if the current area is between the upper limit threshold and the lower limit threshold, the outline of the current area is stored;
and 2.2.6, repeating steps 2.2.1 to 2.2.5, iteratively eroding the accessible pixels in the binary image until the areas of all remaining regions are smaller than the lower limit threshold; the iteration is then stopped, the separable regions are obtained, and the obtained separable regions are marked with different colors.
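The core of step 2.2, erosion of free space until rooms separate, can be sketched with two small helpers; connected-component counting stands in for the contour hierarchy and area test of steps 2.2.2 to 2.2.5 (names and the 4-connectivity choice are assumptions):

```python
def erode(img):
    """One 3x3 binary erosion: a free (1) pixel survives only if its whole
    3x3 neighbourhood is free; border pixels are eroded away."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = int(all(img[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
    return out

def count_regions(img, min_area=1):
    """Count 4-connected free regions with area >= min_area, standing in
    for the contour-based room test of steps 2.2.2-2.2.5."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    n += 1
    return n
```

On a floor plan of two rooms joined by a one-pixel doorway, a single erosion pass closes the doorway and the free space splits into two separable regions, which is exactly the stopping signal of step 2.2.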
The step 2.3 is specifically as follows:
step 2.3.1, obtaining a room segmentation result according to the binary image in the step 2.1 and the contour in the step 2.2, and taking the segmentation result as input, wherein the room segmentation result is in three states, the first state is a black pixel represented by black, namely the black pixel obtained in the step 2.1.2, the area where the black pixel is located represents an inaccessible area, and the part does not belong to an indoor scene; the second is white pixels represented by white, that is, white pixels in the binary image obtained in step 2.1.3, and the area where the white pixels are located represents an accessible area but is not accessed; the third is the area marked with different colors, i.e. the area marked in step 2.2.6, the different colors represent different room areas composed of some pixels;
step 2.3.2, selecting a pixel from the unmarked white pixels and judging its color from the colors of the other pixels in the 3 x 3 area around it: if the values of the pixel's three channels are all greater than 250, the pixel is an accessible pixel; if the three channel values of a neighboring pixel are all non-zero and at the same time less than or equal to 250, those channel values are assigned to the accessible pixel, thereby marking the accessible pixel and expanding it into the room space;
and 2.3.3, iterating the step 2.3.1 and the step 2.3.2 until all unmarked areas in the room segmentation result in the step 2.3.1 are marked, and realizing the space segmentation of the indoor scene.
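The wavefront propagation of steps 2.3.1 to 2.3.3 amounts to a breadth-first flood of room labels into the unmarked free pixels. A minimal sketch, with integer codes (-1 inaccessible, 0 free but unlabelled, positive values room labels) replacing the patent's color channels:

```python
from collections import deque

def wavefront_expand(grid):
    """Grow room labels breadth-first into adjacent free cells until every
    reachable free cell is claimed (sketch of step 2.3)."""
    h, w = len(grid), len(grid[0])
    # the wavefront starts at every already-labelled pixel
    q = deque((r, c) for r in range(h) for c in range(w) if grid[r][c] > 0)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0:
                grid[nr][nc] = grid[r][c]   # inherit the neighbour's label
                q.append((nr, nc))
    return grid
```

Because the flood is breadth-first from all seeds at once, each free pixel ends up with the label of the nearest room region, and inaccessible cells (-1) are left untouched.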
The step 3 specifically comprises the following steps:
step 3.1, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2;
step 3.2, a group of random point sets are created in the overlapping space, the central point in each unit grid is added into the random point sets, and the points in each unit grid extract label information from the space segmentation result;
step 3.3, counting the number of points carrying the same label in each unit grid and assigning to the unit grid the label that occurs most frequently within it; alternatively, when a unit grid is surrounded by unit grids carrying the same label, the unit grid is also marked with the same label as its surrounding unit grids;
and 3.4, visualizing the unit grids whose segmentation-result labels were completed in steps 3.1 to 3.3, then merging the unit grids with the same label, and deleting the unit grids corresponding to pixels that remained white (unreached) in the binary image of step 2.3, thereby obtaining the final room layout segmentation result.
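The overlay voting of steps 3.1 to 3.3 can be sketched as follows; `cells` (axis-aligned rectangles from the cell decomposition), `label_at` (a lookup into the space-segmentation image), and the sample count are all illustrative assumptions:

```python
import random
from collections import Counter

def label_cells(cells, label_at, n_samples=50, seed=0):
    """Assign each decomposition cell the majority room label among random
    sample points inside it (sketch of steps 3.2-3.3).

    cells: list of (xmin, ymin, xmax, ymax) rectangles.
    label_at(x, y): room label of the segmentation result at that point.
    """
    rng = random.Random(seed)           # fixed seed for reproducibility
    labels = []
    for xmin, ymin, xmax, ymax in cells:
        votes = Counter()
        # step 3.2: the cell centre also joins the random point set
        votes[label_at((xmin + xmax) / 2, (ymin + ymax) / 2)] += 1
        for _ in range(n_samples):
            votes[label_at(rng.uniform(xmin, xmax),
                           rng.uniform(ymin, ymax))] += 1
        # step 3.3: the most frequent label wins the cell
        labels.append(votes.most_common(1)[0][0])
    return labels
```

Cells sharing a label would then be merged, and cells whose majority label marks them unreachable would be deleted, as in step 3.4.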
The invention has the advantages that the top view of the indoor scene is decomposed into the units through the unit decomposition, thereby solving the problem of disordered room layout; the room information is distributed to the pixels decomposed by each room unit through room segmentation, the problem of serious shielding possibly existing in an indoor scene is solved, and finally, superposition analysis processing is carried out to obtain a final room layout division result. The comprehensive dividing method not only keeps the geometric regularity of the room, but also keeps the integrity of the room space.
Drawings
FIG. 1 is a flow chart of a room layout partitioning method based on unit decomposition and space division according to the present invention;
FIG. 2 is a schematic diagram of Scene data of Scene1 after down-sampling and original Scene data;
FIG. 3 is a schematic diagram of Scene data of Scene2 after down-sampling and original Scene data;
FIG. 4 is a diagram illustrating the decomposition result of Scene1 room unit according to the present invention;
FIG. 5 is a schematic diagram of decomposition results of Scene2 room units according to the present invention;
FIG. 6 is a diagram illustrating the results of the Scene1 room space segmentation according to the present invention;
FIG. 7 is a diagram illustrating the results of the Scene2 room space segmentation according to the present invention;
FIG. 8 is a diagram showing the result of the Scene1 overlay analysis according to the present invention;
FIG. 9 is a diagram showing the result of the Scene2 overlay analysis according to the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
The invention relates to a room layout dividing method based on unit decomposition and space division, the flow of which is shown in figure 1 and is implemented according to the following steps:
step 1, extracting wall point clouds from an indoor point cloud scene, dividing the wall point clouds into a plurality of segments, calculating a centroid point of each segment, fitting a straight line corresponding to the wall point clouds according to the centroid point, fusing the fitted straight lines, and performing unit decomposition based on the fused result to obtain a unit division result of the indoor scene; the method specifically comprises the following steps:
step 1.1, after down-sampling the indoor scene, a down-sampled indoor point cloud scene is obtained; taking the down-sampled indoor point cloud scene as input, it is divided into a plurality of horizontal point cloud slices, the wall points and indoor clutter in each horizontal point cloud slice are segmented by a region-growing clustering method, and finally the wall points are retained to obtain the wall surface point cloud corresponding to each horizontal point cloud slice; segmenting the wall points and indoor clutter in each horizontal point cloud slice by the region-growing clustering method and finally retaining the wall points to obtain the wall surface point cloud corresponding to the horizontal point cloud slice is specifically as follows:
step 1.1.1, calculating the curvature of each point in each horizontal point cloud slice, sequencing the points according to the curvature values of the points, finding out the point with the minimum curvature, and adding the point into a seed point set;
step 1.1.2, for each seed point, if the K neighbor point of the seed point satisfies the formula (1) and the formula (2) at the same time, adding the point into a potential seed point list;
|n_p·n_s| > cos(θ_th) (1)
wherein n_p represents the normal vector of the current seed point; n_s represents the normal vector of a K-nearest neighbor of the current seed point; θ_th represents the smoothness threshold;
r_p < r_th (2)
wherein r_p represents the curvature of a K-nearest neighbor of the current seed point, and r_th represents the curvature threshold;
step 1.1.3, removing the points added to the potential seed point list in step 1.1.2 from the slice point cloud corresponding to the initial indoor point cloud;
step 1.1.4, clustering is carried out on the basis of step 1.1.3: the minimum cluster size is set to Min and the maximum cluster size to Max, all clusters whose point counts lie between Min and Max are retained, and different clusters are marked with different colors to distinguish them; the point clouds of the retained clusters are the acquired wall surface point clouds;
and 1.1.5, repeating the steps 1.1.1 to 1.1.4, finishing clustering of all slices, and finally obtaining the wall point cloud corresponding to each slice.
Step 1.2, dividing each wall surface point cloud extracted in the step 1.1 into a plurality of segments, calculating a centroid point of each segment, representing the characteristics of all points in each segment by using the centroid point, and then performing straight line fitting according to the calculated centroid point; the method specifically comprises the following steps:
step 1.2.1, inputting the wall surface point clouds extracted in the step 1.1, and respectively storing x, y and z coordinates of each point in each wall surface point cloud;
step 1.2.2, calculating the length and width of the wall surface area in space according to the x and y coordinates of the wall surface point cloud, wherein the length is x_max - x_min and the width is y_max - y_min, and x_max, x_min, y_max, y_min respectively represent the maximum and minimum x and y coordinates; setting the step length l = 0.4 m and dividing the wall surface point cloud in space into l × l grids, so that the total number of rows is raster_rows = (x_max - x_min)/l and the total number of columns is raster_cols = (y_max - y_min)/l;
Step 1.2.3, creating a one-dimensional array vector_4, a two-dimensional array col_4 and a three-dimensional array row_col, setting the size of the array vector_4 to 4, setting the size of the array col_4 to raster_cols · vector_4 according to the sizes of raster_rows and raster_cols, and setting the size of the array row_col to raster_rows · col_4;
step 1.2.4, storing the points of the wall surface point cloud into the corresponding rows and columns, wherein the position (row_idx, col_idx) of any point is (ceil((point[i].x - point_min.x)/l - 1), ceil((point[i].y - point_min.y)/l - 1)), point[i].x and point[i].y representing the x and y coordinates of the point, and point_min.x and point_min.y representing the x and y coordinates of the minimum point;
step 1.2.5, if the value of row_idx or col_idx is less than 0, setting the value to 0; then accumulating the x, y and z of the point selected in step 1.2.4 into the array entries row_col[row_idx][col_idx][0], row_col[row_idx][col_idx][1] and row_col[row_idx][col_idx][2] respectively, and increasing the point count row_col[row_idx][col_idx][3] by 1 for each point;
step 1.2.6, repeating step 1.2.4 and step 1.2.5 to divide all points belonging to the wall point cloud into corresponding grid spaces, and calculating the centroid point Q of each grid space, wherein the calculation formula is as shown in formula (3);
x_mean = row_col[i][j][0]/row_col[i][j][3], y_mean = row_col[i][j][1]/row_col[i][j][3], z_mean = row_col[i][j][2]/row_col[i][j][3], i = 1, …, m, j = 1, …, n (3)
wherein x_mean, y_mean and z_mean respectively represent the x, y and z coordinates of the centroid point Q; m represents the size of the array row_col; n represents the size of the array row_col[i]; row_col[i][j][0], row_col[i][j][1] and row_col[i][j][2] represent the accumulated x, y and z coordinates of the points of the extracted wall surface point cloud falling in grid (i, j); row_col[i][j][3] represents the number of points contained in grid (i, j);
step 1.2.7, randomly selecting three points from the plurality of centroid points corresponding to each wall surface point cloud to construct a plane, calculating the coefficients of the plane equation and the normal vector v of the constructed plane, and reconstructing the plane according to the solved plane equation coefficients; then constructing a projection line through each end point of a line segment in space along the normal vector of the plane and solving the intersection point of the plane and the projection line, this intersection point being the projection of that end point of the straight line segment in space; finally, connecting the corresponding projection points in sequence to obtain the projected line segment.
Step 1.3, projecting the straight line obtained in the step 1.2, performing fusion processing after projection, then extending the line segment subjected to fusion processing, calculating a two-dimensional arrangement data structure according to a CGAL library, and finally obtaining a two-dimensional unit decomposition result, wherein the method specifically comprises the following steps:
step 1.3.1, setting a predefined length value of a wall segment, then sorting a plurality of segments projected in step 1.2.7 from small to large according to the lengths of the segments, and directly deleting the segments with the lengths smaller than the predefined length value;
step 1.3.2, selecting the line segment with the largest length from the remaining line segments processed in step 1.3.1, adding the line segment into a final result set, selecting one line segment from the remaining line segments, comparing it with the line segments in the final result set, and setting an included angle threshold and a distance threshold: if the included angle between two line segments is smaller than the given angle threshold, the two line segments are considered parallel; on the basis of being parallel, if the distance between the two parallel lines is smaller than the given distance threshold, the two line segments are considered collinear and are fused;
step 1.3.3, repeating step 1.3.2 until the number of the remaining line segments is 0, and ending the whole process to obtain a fused line segment;
and step 1.3.4, extending the fused line segments, then calculating a two-dimensional arrangement data structure according to the CGAL library, dividing a two-dimensional plane of an indoor scene by using the extension line segments, and decomposing the two-dimensional plane into two-dimensional units.
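The angle/distance fusion test of steps 1.3.2 and 1.3.3 can be sketched as below; the thresholds angle_th (radians) and dist_th are illustrative placeholders, since the patent leaves their exact values open:

```python
import math

def seg_length(s):
    (x1, y1), (x2, y2) = s
    return math.hypot(x2 - x1, y2 - y1)

def angle_between(s1, s2):
    # Unsigned angle between the directions of two segments, folded into [0, pi/2]
    def direction(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1)
    a = abs(direction(s1) - direction(s2)) % math.pi
    return min(a, math.pi - a)

def point_line_distance(p, s):
    # Distance from point p to the infinite line through segment s
    (x1, y1), (x2, y2) = s
    return abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1) / seg_length(s)

def fuse(segments, angle_th=0.05, dist_th=0.1):
    # Longest segment first; a shorter segment merges into a kept one when it is
    # nearly parallel (angle test) and nearly collinear (distance test)
    result = []
    for s in sorted(segments, key=seg_length, reverse=True):
        for i, r in enumerate(result):
            if angle_between(s, r) < angle_th and point_line_distance(s[0], r) < dist_th:
                pts = sorted([r[0], r[1], s[0], s[1]])
                result[i] = (pts[0], pts[-1])  # keep the extreme endpoints
                break
        else:
            result.append(s)
    return result
```

Here two almost-collinear wall fragments collapse into one long segment while a perpendicular wall survives unchanged.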
Step 2, after down-sampling the indoor scene, projecting the down-sampled indoor scene onto a two-dimensional plane, discretizing the projection points and generating a depth image; then binarizing the depth image, and segmenting the indoor space by using a morphological erosion operation to identify the number of rooms; finally, adding the regions which belong to the room regions but are not accessed into the room space by using a wavefront expansion method, to obtain the space segmentation result of the indoor scene; the method specifically comprises the following steps:
step 2.1, converting the point cloud of the indoor scene space to a two-dimensional image to obtain a corresponding gray image, and carrying out binarization processing on the gray image to obtain a binary image reflecting the overall and local characteristics of the image in order to enable the image to show an obvious black-and-white effect; the method specifically comprises the following steps:
step 2.1.1, projecting the indoor point cloud sampled in step 1.1 onto the xoy plane, finding out the maximum x and y coordinates and the minimum x and y coordinates from the projection points, and determining the width and height of the image according to the maximum and minimum x and y coordinate values;
step 2.1.2, discretizing the projection points into a two-dimensional grid to obtain a plurality of pixel grids, setting the size of each pixel as pixelsize, determining the size of each pixel according to the thickness of a wall, the size of the point cloud and the density of the point cloud, judging whether the pixel is gray or black according to the number of points contained in each pixel, and if one pixel at least contains one point, indicating that the pixel is gray; if one pixel does not contain the point, the pixel is indicated to be black, and a gray level image is obtained;
step 2.1.3, setting the gray value of each pixel point on the gray image obtained in step 2.1.2 to 0 or 255 through the selection of a proper threshold: if the gray value of a pixel point is greater than or equal to the threshold, the pixel is judged to belong to the specific object and its gray value is represented by 255; otherwise, the pixel point is excluded from the object region and its gray value is represented by 0, thereby obtaining a binary image.
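Steps 2.1.1 to 2.1.3 can be condensed into a single NumPy routine; here the grayscale stage is folded directly into binarization (any pixel containing at least one projected point becomes 255), which is one reasonable reading of the threshold rule rather than the patent's exact implementation:

```python
import numpy as np

def points_to_binary(points_xy, pixel_size):
    # Discretize projected 2-D points into a pixel grid (step 2.1.2) and
    # binarize: occupied pixels -> 255, empty pixels -> 0 (step 2.1.3)
    pts = np.asarray(points_xy, dtype=float)
    mn, mx = pts.min(axis=0), pts.max(axis=0)
    w = int(np.ceil((mx[0] - mn[0]) / pixel_size)) + 1
    h = int(np.ceil((mx[1] - mn[1]) / pixel_size)) + 1
    img = np.zeros((h, w), dtype=np.uint8)
    cols = ((pts[:, 0] - mn[0]) / pixel_size).astype(int)
    rows = ((pts[:, 1] - mn[1]) / pixel_size).astype(int)
    img[rows, cols] = 255
    return img
```

The image extent comes from the coordinate extremes, matching step 2.1.1; pixel_size would be chosen from the wall thickness and point density as step 2.1.2 describes.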
Step 2.2, assuming by default that the door of each room leading to other rooms or corridors is closed, iteratively eroding the accessible pixels in the binary image obtained in step 2.1 by using a 3 x 3 structuring element, wherein the accessible pixels are the white pixels in the binary image, obtaining an erosion result image after each erosion pass, judging whether separable areas exist according to the contours on the erosion result image, stopping the iteration if separable areas are obtained, and marking the separable areas with different colors; the method specifically comprises the following steps:
step 2.2.1, starting the first erosion from the central position of the binary image obtained in step 2.1, eroding the accessible pixels in the binary image by using a 3 x 3 structuring element, wherein the accessible pixels are the white pixels in the binary image, and obtaining an erosion result image after one erosion pass is finished;
step 2.2.2, searching for contours in the erosion result image, establishing only a two-level hierarchy for all the contours, namely a top layer and an inner layer, and counting the number findContourNums of contours found;
step 2.2.3, checking each contour, and if the current contour has no corresponding embedded contour, calculating the area contourArea1 surrounded by the current contour, thereby calculating the room area roomArea1, wherein the calculation formula is shown as formula (4);
roomArea1=cellSize*cellSize*contourArea1 (4)
wherein the parameter roomArea1 represents the room area; cellSize represents the size of one cell grid; contourArea1 represents the area surrounded by the current contour;
step 2.2.4, judging, on the basis of step 2.2.3, whether the currently detected contour is an embedded contour of another contour; if the current contour is the embedded contour of one of the contours, calculating the area contourArea2 surrounded by the current contour, and simultaneously updating the room area roomArea2, wherein the area calculation formula is shown as formula (5);
roomArea2=cellSize*cellSize*contourArea2-roomArea1 (5)
wherein the parameter roomArea2 represents the updated room area;
step 2.2.5, judging whether the current room area is between an upper limit threshold and a lower limit threshold, wherein the lower limit threshold represents the minimum room area in the data, the upper limit threshold represents the maximum room area, and if the current area is between the upper limit threshold and the lower limit threshold, the outline of the current area is stored;
and 2.2.6, repeating the steps 2.2.1 to 2.2.5, iteratively eroding the accessible pixels in the binary image until the areas of all the remaining regions are smaller than the lower limit threshold, at which moment the iteration is stopped, the separable regions are obtained, and the obtained separable regions are marked with different colors.
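The 3 x 3 structuring-element erosion that drives step 2.2 can be written in pure NumPy; this is a sketch (production code would typically call an image-processing library instead), with the iteration stopped when the remaining area drops below a lower-bound threshold, mirroring step 2.2.6:

```python
import numpy as np

def erode(binary):
    # One pass of 3 x 3 morphological erosion on a 0/1 image:
    # a pixel survives only if its whole 3 x 3 neighbourhood is 1
    padded = np.pad(binary, 1, mode='constant')
    out = np.ones_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

# a 5 x 5 accessible region inside a 7 x 7 map
room = np.zeros((7, 7), dtype=np.uint8)
room[1:6, 1:6] = 1

# iterate erosion until the remaining area falls below a lower-bound threshold
img, iters = room.copy(), 0
while img.sum() > 4:
    img = erode(img)
    iters += 1
```

Each pass shrinks the accessible region by one pixel on every side, which is what gradually separates rooms joined only by narrow door openings.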
Step 2.3, detecting the number of rooms according to the separable areas marked in step 2.2, wherein the binary image has three states and the white pixels represent the unmarked areas among the accessible areas, and then expanding the white pixels by using a wavefront propagation method to obtain the final room space segmentation result; the method specifically comprises the following steps:
step 2.3.1, obtaining a room segmentation result according to the binary image in the step 2.1 and the contour in the step 2.2, and taking the segmentation result as input, wherein the room segmentation result is in three states, the first state is a black pixel represented by black, namely the black pixel obtained in the step 2.1.2, the area where the black pixel is located represents an inaccessible area, and the part does not belong to an indoor scene; the second is white pixels represented by white, that is, white pixels in the binary image obtained in step 2.1.3, and the area where the white pixels are located represents an accessible area but is not accessed; the third is the area marked with different colors, i.e. the area marked in step 2.2.6, the different colors represent different room areas composed of some pixels;
step 2.3.2, selecting a pixel from the unmarked white pixels, and judging the color of the current pixel according to the colors of the other pixels in the 3 × 3 area around it, by examining the values of the three channels of each pixel: if the three channel values of the pixel are all greater than 250, the pixel is an accessible pixel; if the three channel values of a pixel are not 0 and are all less than or equal to 250, assigning the three channel values of that pixel to the accessible pixel, thereby marking the accessible pixel and expanding the accessible pixel into the room space;
and 2.3.3, iterating the step 2.3.1 and the step 2.3.2 until all unmarked areas in the room segmentation result in the step 2.3.1 are marked, and realizing the space segmentation of the indoor scene.
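The wavefront propagation of steps 2.3.1 to 2.3.3 is essentially a multi-source breadth-first flood fill; below is a sketch on a small label grid (0 = inaccessible, -1 = accessible but unvisited, positive values = room labels), which stands in for the color-channel test in the text:

```python
from collections import deque

def wavefront_expand(labels):
    # Grow every labelled room cell into adjacent unvisited accessible cells
    h, w = len(labels), len(labels[0])
    queue = deque((r, c) for r in range(h) for c in range(w) if labels[r][c] > 0)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and labels[nr][nc] == -1:
                labels[nr][nc] = labels[r][c]  # inherit the room label
                queue.append((nr, nc))
    return labels

grid = [[0,  0,  0, 0],
        [0,  1, -1, 0],
        [0, -1,  2, 0],
        [0,  0,  0, 0]]
result = wavefront_expand(grid)
```

The queue advances all room boundaries simultaneously, so each unvisited cell is claimed by the nearest room front, matching the intent of the iteration in step 2.3.3.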
Step 3, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2, generating random points in an overlapping area, and determining which room the grid unit belongs to according to the number of color labels carried by the random points in each grid unit, thereby realizing accurate division of the room layout; the method specifically comprises the following steps:
step 3.1, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2;
step 3.2, creating a set of random points in the overlapping space, adding the central point of each unit grid into the random point set, and letting the points in each unit grid extract label information from the space segmentation result;
step 3.3, calculating the number of points with the same label in each unit grid, and then assigning to the unit grid the label that occurs most frequently among the points in that grid; alternatively, when a unit grid is surrounded by unit grids bearing the same mark, the unit grid is also marked with the same label as the surrounding unit grids;
and 3.4, visualizing the unit grids of the segmentation result labels completed in the steps 3.1 to 3.3, then combining the unit grids of the same labels, and deleting the inaccessible unit grids of the white pixels in the binary image in the step 2.3 to obtain a final room layout segmentation result.
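The per-cell label assignment of step 3.3 is a majority vote over the random points that fall inside each unit grid; a minimal sketch follows (assign_cell_label is an illustrative name, not from the original):

```python
from collections import Counter

def assign_cell_label(point_labels):
    # point_labels: room labels carried by the random points inside one cell;
    # None marks points that received no label from the space segmentation
    votes = Counter(l for l in point_labels if l is not None)
    return votes.most_common(1)[0][0] if votes else None
```

Cells left with None can then be filled from identically labelled neighbouring cells, as the alternative rule in step 3.3 describes.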
Examples
The embodiment of the present invention uses two sets of data, Scene1 and Scene2, as shown in fig. 2(a) and fig. 3(a), wherein fig. 2(a) is the original Scene data of Scene1, and fig. 3(a) is the original Scene data of Scene2, and the specific process is as follows:
the invention relates to a room layout dividing method based on unit decomposition and space division, which is implemented according to the following steps:
step 1, extracting wall surface point clouds from an indoor point cloud scene, dividing the wall surface point clouds into a plurality of segments, calculating a centroid point of each segment, fitting a straight line corresponding to the wall surface point clouds according to the centroid point, then carrying out fusion processing on the fitted straight lines, and then carrying out unit decomposition based on the result of the fusion processing to obtain a unit division result of the indoor scene; the method specifically comprises the following steps:
step 1.1, down-sampling the indoor scene to obtain a down-sampled indoor point cloud scene, corresponding to fig. 2(b) and fig. 3(b), wherein fig. 2(b) is the down-sampled scene data of Scene1 and fig. 3(b) is the down-sampled scene data of Scene2; taking the down-sampled indoor point cloud scene as input, dividing it into a plurality of horizontal point cloud slices, segmenting the wall points and the indoor sundries in each horizontal point cloud slice by using a region-growing clustering method, and finally retaining the wall points to obtain the wall surface point clouds corresponding to the horizontal point cloud slices; the segmentation of the wall points and the indoor sundries in each horizontal point cloud slice by the region-growing clustering method, with the wall points finally retained to obtain the wall surface point clouds corresponding to the horizontal point cloud slices, specifically comprises the following steps:
step 1.1.1, calculating the curvature of each point in each horizontal point cloud slice, sequencing the points according to the curvature values of the points, finding out the point with the minimum curvature, and adding the point into a seed point set;
step 1.1.2, for each seed point, if the K neighbor point of the seed point satisfies the formula (1) and the formula (2) at the same time, adding the point into a potential seed point list;
||n_p · n_s|| > cos(θ_th)     (1)
wherein n_p represents the normal vector of the current seed point; n_s represents the normal vector of a K neighbor point of the current seed point; θ_th represents the smoothness threshold;
r_p < r_th     (2)
wherein r_p represents the curvature of a K neighbor point of the current seed point, and r_th represents the curvature threshold;
in the two experiments to which the present invention was applied, the slice thickness of Scene1 was 0.15 cm, the number of slices was 20, θ_th was 7, and r_th was 1; the slice thickness of Scene2 was 0.15 cm, the number of slices was 17, θ_th was 7, and r_th was 1;
step 1.1.3, removing the points added to the potential seed point list in step 1.1.2 from the slice point cloud corresponding to the initial indoor point cloud;
step 1.1.4, clustering on the basis of step 1.1.3, setting the minimum cluster size to Min and the maximum cluster size to Max, retaining all clusters whose generated point number is between Min and Max, and marking different clusters with different colors to distinguish them, wherein the point clouds of the retained clusters are the acquired wall surface point clouds;
and 1.1.5, repeating the steps 1.1.1 to 1.1.4 to finish clustering of all slices, and finally obtaining wall surface point clouds corresponding to each slice, wherein the corresponding two groups of experimental results are shown in fig. 4(a) and fig. 5 (a).
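A compact sketch of the region-growing clustering of steps 1.1.1 to 1.1.4, using formulas (1) and (2) as the join conditions; note that theta_th is taken in radians here, whereas the experiments above quote 7 (presumably degrees), so a unit conversion would be needed in practice:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def region_grow(normals, curvatures, neighbors, theta_th, r_th):
    # Seed at the lowest-curvature unvisited point (step 1.1.1), then absorb
    # K neighbours satisfying formulas (1) and (2) (steps 1.1.2-1.1.4)
    unvisited = set(range(len(normals)))
    clusters = []
    while unvisited:
        seed = min(unvisited, key=lambda i: curvatures[i])
        region, seeds = {seed}, [seed]
        unvisited.discard(seed)
        while seeds:
            p = seeds.pop()
            for s in neighbors[p]:
                if (s in unvisited
                        and abs(dot(normals[p], normals[s])) > math.cos(theta_th)  # (1)
                        and curvatures[s] < r_th):                                 # (2)
                    region.add(s)
                    seeds.append(s)
                    unvisited.discard(s)
        clusters.append(region)
    return clusters

# toy slice: two flat wall patches with orthogonal normals
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0), (1, 0, 0)]
curvatures = [0.01, 0.01, 0.01, 0.01]
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
clusters = region_grow(normals, curvatures, neighbors, theta_th=0.3, r_th=0.1)
```

The two patches end up in separate clusters because the normal-vector test of formula (1) fails across the orthogonal boundary.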
Step 1.2, dividing each wall surface point cloud extracted in the step 1.1 into a plurality of segments, calculating a centroid point of each segment, representing the characteristics of all points in each segment by using the centroid point, and then performing straight line fitting according to the calculated centroid point; the method specifically comprises the following steps:
step 1.2.1, inputting the wall surface point clouds extracted in the step 1.1, and respectively storing x, y and z coordinates of each point in each wall surface point cloud;
step 1.2.2, calculating the length and the width of the wall surface area in the space according to the x and y coordinates of the wall surface point cloud, wherein the length is x_max − x_min and the width is y_max − y_min, with x_max, x_min, y_max, y_min respectively representing the maximum and minimum x and y coordinates; setting the step length l to 0.4 m and dividing the wall surface point cloud in the space into l × l grids, the total number of rows being raster_rows = (x_max − x_min)/l and the total number of columns being raster_cols = (y_max − y_min)/l;
Step 1.2.3, creating a one-dimensional array vector_4, a two-dimensional array col_4 and a three-dimensional array row_col, setting the size of the array vector_4 to 4 and, according to the sizes of raster_rows and raster_cols, setting the size of the array col_4 to raster_cols · vector_4 and the size of the array row_col to raster_rows · col_4;
step 1.2.4, storing the points of the wall surface point cloud into the corresponding rows and columns, wherein the position (row_idx, col_idx) of any point is expressed as (ceil((point[i].x − point_min.x)/l − 1), ceil((point[i].y − point_min.y)/l − 1)), point[i].x and point[i].y representing the x and y coordinates of the point, and point_min.x and point_min.y representing the x and y coordinates of the minimum point;
step 1.2.5, if the value of row_idx or col_idx is less than 0, setting it to 0; then accumulating the x, y and z coordinates of the point selected in step 1.2.4 into the array entries row_col[row_idx][col_idx][0], row_col[row_idx][col_idx][1] and row_col[row_idx][col_idx][2] in sequence, and increasing the point count row_col[row_idx][col_idx][3] by 1;
step 1.2.6, repeating step 1.2.4 and step 1.2.5 to divide all points belonging to the wall point cloud into corresponding grid spaces, and calculating the centroid point Q of each grid space, wherein the calculation formula is as shown in formula (3);
x_mean = row_col[i][j][0] / row_col[i][j][3]
y_mean = row_col[i][j][1] / row_col[i][j][3]          (3)
z_mean = row_col[i][j][2] / row_col[i][j][3]
wherein x_mean, y_mean and z_mean respectively represent the x, y and z coordinates of the centroid point Q of the grid in row i and column j; m represents the size of the array row_col, i = 1, ..., m; n represents the size of the array row_col[i], j = 1, ..., n; row_col[i][j][0], row_col[i][j][1] and row_col[i][j][2] represent the accumulated x, y and z coordinates of the points in the grid space corresponding to the extracted wall surface point cloud; row_col[i][j][3] represents the number of points contained in the grid space corresponding to the extracted wall surface point cloud;
step 1.2.7, randomly selecting three points from the plurality of centroid points corresponding to each wall surface point cloud to construct a plane in the xoy coordinate system, calculating the coefficients of the plane equation, then calculating the normal vector v of the constructed plane, reconstructing the plane according to the solved plane equation coefficients, constructing a projection straight line by using an end point of a line segment in the space and the normal vector of the plane, then solving the intersection point of the plane and the projection straight line, wherein the intersection point is the projection of that end point of the straight line segment in the space, and finally connecting the corresponding projection points in sequence to obtain the projected line segment, wherein the two corresponding sets of experimental results are shown in fig. 4(b) and fig. 5(b).
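Steps 1.2.2 to 1.2.6 reduce to binning the wall points into l × l cells, accumulating per-cell sums and counts, and averaging as in formula (3). A dictionary-based sketch follows (the dict stands in for the fixed-size row_col array, an implementation convenience not in the original):

```python
import math

def grid_centroids(points, l=0.4):
    # Accumulate per-cell x/y/z sums and point counts (like row_col[i][j][0..3]),
    # then divide sums by counts to obtain one centroid per non-empty cell
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    cells = {}
    for x, y, z in points:
        key = (max(math.ceil((x - xmin) / l - 1), 0),   # row_idx, clamped at 0
               max(math.ceil((y - ymin) / l - 1), 0))   # col_idx, clamped at 0
        sx, sy, sz, n = cells.get(key, (0.0, 0.0, 0.0, 0))
        cells[key] = (sx + x, sy + y, sz + z, n + 1)
    return {k: (sx / n, sy / n, sz / n) for k, (sx, sy, sz, n) in cells.items()}
```

The clamping of negative indices to 0 mirrors step 1.2.5, and the final division implements formula (3) cell by cell.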
Step 1.3, projecting the straight line obtained in the step 1.2, performing fusion processing after projection, then extending the line segment subjected to fusion processing, calculating a two-dimensional arrangement data structure according to a CGAL library, and finally obtaining a two-dimensional unit decomposition result, wherein the method specifically comprises the following steps:
step 1.3.1, setting a predefined length value of a wall segment, then sorting a plurality of segments projected in step 1.2.7 from small to large according to the lengths of the segments, and directly deleting the segments with the lengths smaller than the predefined length value;
step 1.3.2, selecting the line segment with the largest length from the remaining line segments processed in step 1.3.1, adding the line segment into a final result set, selecting one line segment from the remaining line segments, comparing it with the line segments in the final result set, and setting an included angle threshold and a distance threshold: if the included angle between two line segments is smaller than the given angle threshold, the two line segments are considered parallel; on the basis of being parallel, if the distance between the two parallel lines is smaller than the given distance threshold, the two line segments are considered collinear and are fused;
step 1.3.3, repeating step 1.3.2 until the number of the remaining line segments is 0, ending the whole process to obtain fused line segments, wherein the corresponding two groups of experimental results are shown in fig. 4(c) and fig. 5 (c);
step 1.3.4, extending the fused line segments, then calculating a two-dimensional arrangement data structure according to the CGAL library, dividing a two-dimensional plane of an indoor scene by using extension line segments, and decomposing the two-dimensional plane into two-dimensional units, wherein two groups of corresponding experimental results are shown in fig. 4(d) and fig. 5 (d).
Step 2, after down-sampling the indoor scene, projecting the down-sampled indoor scene onto a two-dimensional plane, discretizing the projection points and generating a depth image; then binarizing the depth image, and segmenting the indoor space by using a morphological erosion operation to identify the number of rooms; finally, adding the regions which belong to the room regions but are not accessed into the room space by using a wavefront expansion method, to obtain the space segmentation result of the indoor scene; the method specifically comprises the following steps:
step 2.1, converting the point cloud of the indoor scene space to a two-dimensional image to obtain a corresponding gray image, and carrying out binarization processing on the gray image to obtain a binary image reflecting the overall and local characteristics of the image in order to enable the image to show an obvious black-and-white effect; the method specifically comprises the following steps:
step 2.1.1, projecting the indoor point cloud sampled in step 1.1 onto the xoy plane, finding out the maximum x and y coordinates and the minimum x and y coordinates from the projection points, and determining the width and height of the image according to the maximum and minimum x and y coordinate values;
step 2.1.2, discretizing the projection points into a two-dimensional grid to obtain a plurality of pixel grids, setting the size of each pixel as pixelsize, determining the size of each pixel according to the thickness of a wall, the size of the point cloud and the density of the point cloud, judging whether the pixel is gray or black according to the number of points contained in each pixel, and if one pixel at least contains one point, indicating that the pixel is gray; if a pixel does not contain a dot, the pixel is indicated to be black, and a gray image is further obtained, and two groups of corresponding experimental results are shown in fig. 6(a) and fig. 7 (a);
step 2.1.3, setting the gray value of each pixel point on the gray image obtained in step 2.1.2 to 0 or 255 through the selection of a proper threshold: if the gray value of a pixel point is greater than or equal to the threshold, the pixel is judged to belong to the specific object and its gray value is represented by 255; otherwise, the pixel point is excluded from the object region and its gray value is represented by 0, thereby obtaining a binary image, and the two corresponding sets of experimental results are shown in fig. 6(b) and fig. 7(b).
Step 2.2, assuming by default that the doors of each room leading to other rooms or corridors are closed, iteratively eroding the accessible pixels in the binary image obtained in step 2.1 by using a 3 x 3 structuring element, wherein the accessible pixels are the white pixels in the binary image, obtaining an erosion result image after each erosion pass, judging whether separable areas exist according to the contours on the erosion result image, stopping the iteration if separable areas are obtained, and marking the separable areas with different colors; the method specifically comprises the following steps:
step 2.2.1, starting the first erosion from the central position of the binary image obtained in step 2.1, eroding the accessible pixels in the binary image by using a 3 x 3 structuring element, wherein the accessible pixels are the white pixels in the binary image, and obtaining an erosion result image after one erosion pass is finished;
step 2.2.2, searching for contours in the erosion result image, establishing only a two-level hierarchy for all the contours, namely a top layer and an inner layer, and counting the number findContourNums of contours found;
step 2.2.3, checking each contour, and if the current contour has no corresponding embedded contour, calculating the area contourArea1 surrounded by the current contour, thereby calculating the room area roomArea1, wherein the calculation formula is shown as formula (4);
roomArea1=cellSize*cellSize*contourArea1 (4)
wherein the parameter roomArea1 represents the room area; cellSize represents the size of one cell grid; contourArea1 represents the area surrounded by the current contour;
step 2.2.4, judging, on the basis of step 2.2.3, whether the currently detected contour is an embedded contour of another contour; if the current contour is the embedded contour of one of the contours, calculating the area contourArea2 surrounded by the current contour, and simultaneously updating the room area roomArea2, wherein the area calculation formula is shown as formula (5);
roomArea2=cellSize*cellSize*contourArea2-roomArea1 (5)
wherein the parameter roomArea2 represents the updated room area;
step 2.2.5, judging whether the current room area is between an upper limit threshold and a lower limit threshold, wherein the lower limit threshold represents the minimum room area in the data, the upper limit threshold represents the maximum room area, and if the current area is between the upper limit threshold and the lower limit threshold, the outline of the current area is stored;
in the two experiments to which the present invention was applied, the pixel size in the binary image of Scene1 is 60 mm, the maximum room area is 10, and the minimum room area is 1; the pixel size in the binary image of Scene2 is 50 mm, the maximum room area is 25, and the minimum room area is 3;
and 2.2.6, repeating the steps 2.2.1 to 2.2.5, iteratively eroding the accessible pixels in the binary image until the areas of all the remaining regions are smaller than the lower limit threshold, at which moment the iteration is stopped, the separable regions are obtained, and the obtained separable regions are marked with different colors, wherein the corresponding two sets of experimental results are shown in fig. 6(c) and fig. 7(c).
Step 2.3, detecting the number of rooms according to the separable areas marked in step 2.2, wherein the binary image has three states and the white pixels represent the unmarked areas among the accessible areas, and then expanding the white pixels by using a wavefront propagation method to obtain the final room space segmentation result; the method specifically comprises the following steps:
step 2.3.1, obtaining a room segmentation result according to the binary image in the step 2.1 and the contour in the step 2.2, and taking the segmentation result as input, wherein the room segmentation result is in three states, the first state is a black pixel represented by black, namely the black pixel obtained in the step 2.1.2, the area where the black pixel is located represents an inaccessible area, and the part does not belong to an indoor scene; the second is white pixels represented by white, that is, white pixels in the binary image obtained in step 2.1.3, and the area where the white pixels are located represents an accessible area but is not accessed; the third is the area marked with different colors, i.e. the area marked in step 2.2.6, the different colors represent different room areas composed of some pixels;
step 2.3.2, selecting a pixel from the unmarked white pixels, and judging the color of the current pixel according to the colors of the other pixels in the 3 × 3 area around it, by examining the values of the three channels of each pixel: if the three channel values of the pixel are all greater than 250, the pixel is an accessible pixel; if the three channel values of a pixel are not 0 and are all less than or equal to 250, assigning the three channel values of that pixel to the accessible pixel, thereby marking the accessible pixel and expanding the accessible pixel into the room space;
and 2.3.3, iterating the step 2.3.1 and the step 2.3.2 until all unmarked areas in the room segmentation result in the step 2.3.1 are marked, and realizing the space segmentation of the indoor scene, wherein the two corresponding groups of experimental results are shown in fig. 6(d) and fig. 7 (d).
Step 3, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2, generating random points in an overlapping area, and determining which room the grid unit belongs to according to the number of color labels carried by the random points in each grid unit, thereby realizing accurate division of the room layout; the method specifically comprises the following steps: step 3.1, overlapping the unit division result of the indoor scene obtained in step 1 with the space division result of the indoor scene obtained in step 2, wherein the corresponding two groups of experiment results are shown in fig. 8(a) and fig. 9 (a);
step 3.2, creating a set of random points in the overlapping space and adding the central point of each unit grid into the random point set, wherein the corresponding two sets of experimental results are shown in fig. 8(b) and fig. 9(b), and the points in each unit grid extract label information from the space segmentation result;
step 3.3, calculating the number of points with the same label in each unit grid, and then assigning to the unit grid the label that occurs most frequently among the points in that grid; alternatively, when a unit grid is surrounded by unit grids bearing the same mark, the unit grid is also marked with the same label as the surrounding unit grids, and the corresponding two sets of experimental results are shown in fig. 8(c) and fig. 9(c);
and 3.4, visualizing the unit grids of the segmentation result labels completed in the steps 3.1 to 3.3, then combining the unit grids of the same labels, deleting the unit grids of the white pixels which are inaccessible in the binary image in the step 2.3, and obtaining the final room layout segmentation result by using the corresponding two groups of experimental results as shown in fig. 8(d) and fig. 9 (d).

Claims (10)

1. The room layout dividing method based on unit decomposition and space division is characterized by comprising the following steps:
step 1, extracting wall point clouds from an indoor point cloud scene, dividing the wall point clouds into a plurality of segments, calculating a centroid point of each segment, fitting a straight line corresponding to the wall point clouds according to the centroid point, fusing the fitted straight lines, and performing unit decomposition based on the fused result to obtain a unit division result of the indoor scene;
step 2, after down-sampling the indoor scene, projecting the down-sampled indoor scene onto a two-dimensional plane, discretizing the projection points and generating a depth image; then binarizing the depth image, and segmenting the indoor space by using a morphological erosion operation to identify the number of rooms; finally, adding the regions which belong to the room regions but are not accessed into the room space by using a wavefront expansion method, to obtain the space segmentation result of the indoor scene;
and 3, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2, generating random points in an overlapping area, and determining which room the grid unit belongs to according to the number of the color labels carried by the random points in each grid unit, thereby realizing accurate division of the room layout.
2. The room layout partitioning method based on cell decomposition and space division according to claim 1, wherein the step 1 specifically comprises:
step 1.1, down-sampling the indoor scene to obtain a down-sampled indoor point cloud scene; taking the down-sampled indoor point cloud scene as input, dividing it into a plurality of horizontal point cloud slices, segmenting wall points from indoor clutter in each horizontal point cloud slice by using a region-growing clustering method, and finally retaining the wall points to obtain the wall point cloud corresponding to each horizontal point cloud slice;
step 1.2, dividing each wall surface point cloud extracted in the step 1.1 into a plurality of segments, calculating a centroid point of each segment, representing the characteristics of all points in each segment by using the centroid point, and then performing straight line fitting according to the calculated centroid point;
and step 1.3, projecting the straight line obtained in the step 1.2, fusing after projection, extending the fused line segment, calculating a two-dimensional arrangement data structure according to a CGAL library, and finally obtaining a two-dimensional unit decomposition result.
3. The room layout dividing method based on unit decomposition and space division according to claim 2, wherein, in step 1.1, segmenting the wall points and indoor clutter in each horizontal point cloud slice by using a region-growing clustering method and finally retaining the wall points to obtain the wall point cloud corresponding to each horizontal point cloud slice specifically comprises:
step 1.1.1, calculating the curvature of each point in each horizontal point cloud slice, sequencing the points according to the curvature values of the points, finding out the point with the minimum curvature, and adding the point into a seed point set;
step 1.1.2, for each seed point, if the K neighbor point of the seed point satisfies the formula (1) and the formula (2) at the same time, adding the point into a potential seed point list;
|n_p · n_s| > cos(θ_th) (1)

wherein n_p represents the normal vector of the current seed point; n_s represents the normal vector of a K-neighbor of the current seed point; θ_th represents the smoothness threshold;

r_p < r_th (2)

wherein r_p is the curvature of a K-neighbor of the current seed point, and r_th represents the curvature threshold;
step 1.1.3, removing the points added to the potential seed point list in step 1.1.2 from the slice point cloud corresponding to the initial indoor point cloud;
step 1.1.4, clustering on the basis of step 1.1.3, setting the minimum cluster size to Min and the maximum cluster size to Max, retaining all clusters whose generated point counts lie between Min and Max, and marking different clusters with different colors to distinguish them; the slice point clouds of the retained clusters are the acquired wall point clouds;
and 1.1.5, repeating the steps 1.1.1 to 1.1.4, finishing clustering of all slices, and finally obtaining the wall point cloud corresponding to each slice.
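Outside the claim language, the seed-acceptance test of formulas (1) and (2) can be sketched as follows. This is a minimal illustration only; the function name, tuple-based unit normals and parameter names are assumptions, not part of the patent:

```python
import math

def accept_neighbor(n_p, n_s, r_s, theta_th, r_th):
    """Region-growing acceptance test from formulas (1) and (2):
    a K-neighbor joins the potential seed point list when its normal
    is nearly parallel to the seed normal (smoothness test) and its
    curvature r_s is below the curvature threshold."""
    dot = abs(sum(a * b for a, b in zip(n_p, n_s)))  # |n_p . n_s|
    smooth_ok = dot > math.cos(theta_th)             # formula (1)
    flat_ok = r_s < r_th                             # formula (2)
    return smooth_ok and flat_ok
```

With unit normals, a neighbor aligned with the seed and with small curvature is accepted, while a perpendicular or high-curvature neighbor is rejected.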
4. The room layout partitioning method based on cell decomposition and space segmentation as claimed in claim 3, wherein the step 1.2 is specifically:
step 1.2.1, inputting the wall surface point clouds extracted in the step 1.1, and respectively storing x, y and z coordinates of each point in each wall surface point cloud;
step 1.2.2, calculating the length and width of the wall region in space from the x and y coordinates of the wall point cloud, the length being x_max − x_min and the width being y_max − y_min, where x_max, x_min, y_max and y_min respectively denote the maximum and minimum x and y coordinates; setting the step length l = 0.4 m and dividing the wall point cloud in space into l × l grids, so that the total number of rows is raster_rows = (x_max − x_min)/l and the total number of columns is raster_cols = (y_max − y_min)/l;
step 1.2.3, creating a one-dimensional array vector_4, a two-dimensional array col_4 and a three-dimensional array row_col; setting the size of vector_4 to 4 and, according to the sizes of raster_rows and raster_cols, setting the size of col_4 to raster_cols · vector_4 and the size of row_col to raster_rows · col_4;
step 1.2.4, storing the points of the wall point cloud into the corresponding rows and columns, wherein the position (row_idx, col_idx) of any point is expressed as (ceil((point[i].x − point_min.x)/l − 1), ceil((point[i].y − point_min.y)/l − 1)); point[i].x and point[i].y denote the x and y coordinates of the point, and point_min.x and point_min.y denote the x and y coordinates of the minimum point;
step 1.2.5, if the value of row_idx or col_idx is less than 0, setting it to 0; then accumulating the x, y and z coordinates of the point selected in step 1.2.4 into the arrays row_col[row_idx][col_idx][0], row_col[row_idx][col_idx][1] and row_col[row_idx][col_idx][2] respectively, and increasing the point count row_col[row_idx][col_idx][3] by 1;
step 1.2.6, repeating step 1.2.4 and step 1.2.5 to divide all points belonging to the wall point cloud into corresponding grid spaces, and calculating the centroid point Q of each grid space, wherein the calculation formula is as shown in formula (3);
x_mean = row_col[i][j][0] / row_col[i][j][3], y_mean = row_col[i][j][1] / row_col[i][j][3], z_mean = row_col[i][j][2] / row_col[i][j][3], 1 ≤ i ≤ m, 1 ≤ j ≤ n (3)
wherein x_mean, y_mean and z_mean respectively represent the x, y and z coordinates of the centroid point Q; m represents the size of the array row_col; n represents the size of the array row_col[i]; row_col[i][j][0] represents the accumulated x coordinates of the points in the grid space corresponding to the extracted wall point cloud; row_col[i][j][1] represents the accumulated y coordinates of those points; row_col[i][j][2] represents the accumulated z coordinates of those points; row_col[i][j][3] represents the number of points contained in the space corresponding to the wall point cloud extracted by each grid;
step 1.2.7, randomly selecting three points from the plurality of centroid points corresponding to each wall point cloud to construct a plane, calculating the coefficients of the plane equation and the normal vector v of the constructed plane; reconstructing the plane from the solved plane-equation coefficients, constructing a projection line from each endpoint of a line segment in space and the normal vector of the plane, and solving the intersection of the plane and the projection line, the intersection being the projection of one endpoint of the spatial line segment; finally connecting the corresponding projection points in order to obtain the projected line segment.
5. The room layout partitioning method based on cell decomposition and space segmentation as claimed in claim 4, wherein the step 1.3 is specifically:
step 1.3.1, setting a predefined length value of a wall segment, then sorting a plurality of segments projected in step 1.2.7 from small to large according to the lengths of the segments, and directly deleting the segments with the lengths smaller than the predefined length value;
step 1.3.2, selecting the longest of the remaining line segments processed in step 1.3.1 and adding it to the final result set; then selecting a line segment from the remaining line segments and comparing it with the line segments in the final result set, with an included-angle threshold and a distance threshold set in advance: if the included angle between the two line segments is smaller than the given angle threshold, the two line segments are considered parallel; if, in addition, the distance between the two parallel lines is smaller than the given distance threshold, the two line segments are considered collinear and are fused;
step 1.3.3, repeating step 1.3.2 until the number of the remaining line segments is 0, and ending the whole process to obtain a fused line segment;
and step 1.3.4, extending the fused line segments, then calculating a two-dimensional arrangement data structure according to the CGAL library, dividing a two-dimensional plane of an indoor scene by using the extension line segments, and decomposing the two-dimensional plane into two-dimensional units.
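The length filtering and greedy collinearity test of steps 1.3.1 to 1.3.3 can be sketched as follows. This is an assumed, simplified reading: segments are 2D endpoint pairs, and a segment found collinear with a kept one is simply absorbed, whereas a full implementation would also extend the kept segment to cover both; all names are illustrative:

```python
import math

def fuse_segments(segments, min_len, angle_th, dist_th):
    """Drop segments shorter than min_len, then repeatedly keep the
    longest remaining segment and absorb segments that are parallel
    (included angle below angle_th) and collinear (support-line
    distance below dist_th) with an already kept one."""
    def length(s):
        (x1, y1), (x2, y2) = s
        return math.hypot(x2 - x1, y2 - y1)

    def angle(s):  # direction folded into [0, pi)
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1) % math.pi

    def line_dist(a, b):  # distance from b's midpoint to a's support line
        (x1, y1), (x2, y2) = a
        (u1, v1), (u2, v2) = b
        mx, my = (u1 + u2) / 2, (v1 + v2) / 2
        return abs((x2 - x1) * (my - y1) - (y2 - y1) * (mx - x1)) / length(a)

    remaining = sorted((s for s in segments if length(s) >= min_len),
                       key=length, reverse=True)
    result = []
    for seg in remaining:
        for kept in result:
            da = abs(angle(seg) - angle(kept))
            da = min(da, math.pi - da)  # parallelism ignores orientation
            if da < angle_th and line_dist(kept, seg) < dist_th:
                break  # collinear with a kept segment: fuse (absorb) it
        else:
            result.append(seg)
    return result
```

A short stub is deleted by the length filter, a nearly collinear horizontal segment is fused into the long horizontal wall, and a perpendicular wall survives on its own.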
6. The room layout partitioning method based on cell decomposition and space segmentation according to claim 5, wherein the step 2 specifically comprises:
step 2.1, converting the point cloud of the indoor scene space to a two-dimensional image to obtain a corresponding gray image, and performing binarization processing on the gray image to obtain a binary image reflecting the overall and local characteristics of the image;
step 2.2, assuming by default that the door of each room leading to other rooms or corridors is closed, iteratively eroding the accessible pixels in the binary image obtained in step 2.1 with a 3 × 3 structuring element, the accessible pixels being the white pixels in the binary image; obtaining an erosion result image after each erosion pass, judging from the contours on the erosion result image whether separable regions exist, stopping the iteration once separable regions are obtained, and marking the separable regions with different colors;
and 2.3, detecting the number of rooms from the separable regions marked in step 2.2; the binary image then has three states, with the white pixels representing the unmarked parts of the accessible area; expanding the white pixels by using a wavefront propagation method to obtain the final room space segmentation result.
7. The room layout partitioning method based on cell decomposition and space segmentation as claimed in claim 6, wherein the step 2.1 is specifically:
step 2.1.1, projecting the indoor point cloud sampled in the step 1.1 onto an xoy plane, finding out the maximum x and y coordinates and the minimum x and y coordinates from the projection points, and determining the width and height of the image according to the maximum x and y coordinate values;
step 2.1.2, discretizing the projection points into a two-dimensional grid to obtain a plurality of pixel grids, setting the size of each pixel as pixelsize, determining the size of each pixel according to the thickness of a wall, the size of the point cloud and the density of the point cloud, judging whether the pixel is gray or black according to the number of points contained in each pixel, and if one pixel at least contains one point, indicating that the pixel is gray; if one pixel does not contain a point, the pixel is indicated to be black, and a gray level image is obtained;
step 2.1.3, setting the gray value of each pixel of the gray image obtained in step 2.1.2 to 0 or 255 by selecting a suitable threshold: if the gray value of a pixel is greater than or equal to the threshold, the pixel is judged to belong to the object and its gray value is set to 255; otherwise the pixel is excluded from the object region and its gray value is set to 0, yielding a binary image.
8. The room layout partitioning method based on cell decomposition and space segmentation as claimed in claim 7, wherein the step 2.2 is specifically:
step 2.2.1, starting the first erosion from the central position of the binary image obtained in step 2.1, eroding the accessible pixels in the binary image with a 3 × 3 structuring element, the accessible pixels being the white pixels in the binary image, and obtaining an erosion result image after completing one erosion pass;
step 2.2.2, searching contours in the corrosion result graph, establishing only two level relations of all the contours, namely a top layer and an inner layer, and counting the number findContourNums of the found contours;
step 2.2.3, checking each contour; if the current contour has no corresponding embedded contour, calculating the area contourArea1 enclosed by the current contour and from it the room area roomArea1, with the calculation formula shown in (4);
roomArea1=cellSize*cellSize*contourArea1 (4)
wherein the parameter roomArea1 represents the room area; cellSize represents the size of one cell grid; contourArea1 represents the area enclosed by the current contour;
step 2.2.4, on the basis of step 2.2.3, judging whether the currently detected contour is an embedded contour of another contour; if the current contour is the embedded contour of one of the contours, calculating the area contourArea2 enclosed by the current contour and updating the room area roomArea2, with the area calculation formula shown in formula (5);
roomArea2=cellSize*cellSize*contourArea2-roomArea1 (5)
wherein the parameter roomArea2 represents the updated room area;
step 2.2.5, judging whether the current room area is between an upper limit threshold and a lower limit threshold, wherein the lower limit threshold represents the minimum room area in the data, the upper limit threshold represents the maximum room area, and if the current area is between the upper limit threshold and the lower limit threshold, the outline of the current area is stored;
and 2.2.6, repeating steps 2.2.1 to 2.2.5, iteratively eroding the accessible pixels in the binary image until the areas of all remaining regions are smaller than the lower threshold; at this point the iteration stops, separable regions are obtained, and the obtained separable regions are marked with different colors.
9. The room layout partitioning method based on cell decomposition and space segmentation as claimed in claim 8, wherein the step 2.3 is specifically:
step 2.3.1, obtaining a room segmentation result according to the binary image in the step 2.1 and the contour in the step 2.2, and taking the segmentation result as input, wherein the room segmentation result is in three states, the first state is a black pixel represented by black, namely the black pixel obtained in the step 2.1.2, the area where the black pixel is located represents an inaccessible area, and the part does not belong to an indoor scene; the second is white pixels represented by white, that is, white pixels in the binary image obtained in step 2.1.3, and the area where the white pixels are located represents an accessible area but is not accessed; the third is the area marked with different colors, i.e. the area marked in step 2.2.6, the different colors represent different room areas composed of some pixels;
step 2.3.2, selecting a pixel from the unmarked white pixels and judging the color of the current pixel from the other pixels in the 3 × 3 neighborhood around it: if the three channel values of a pixel are all greater than 250, the pixel is an accessible pixel; if the three channel values of a pixel are all nonzero and all less than or equal to 250, assigning those channel values to the accessible pixel, thereby marking the accessible pixel and expanding it into the room space;
and 2.3.3, iterating the step 2.3.1 and the step 2.3.2 until all unmarked areas in the room segmentation result in the step 2.3.1 are marked, and realizing the space segmentation of the indoor scene.
10. The room layout partitioning method based on cell decomposition and space segmentation according to claim 9, wherein the step 3 specifically comprises:
step 3.1, overlapping the unit division result of the indoor scene obtained in the step 1 with the space division result of the indoor scene obtained in the step 2;
step 3.2, a group of random point sets are created in the overlapping space, the central point in each unit grid is added into the random point sets, and the points in each unit grid extract label information from the space segmentation result;
step 3.3, counting the number of points carrying the same label in each unit grid and assigning to the unit grid the label with the highest occurrence frequency in that grid; alternatively, when an unmarked unit grid is surrounded by unit grids that all carry the same label, marking the unit grid with the same label as its surrounding unit grids;
and 3.4, visualizing the unit grids of the segmentation result labels finished in the steps 3.1 to 3.3, then combining the unit grids of the same labels, and deleting the inaccessible unit grids of the white pixels in the binary image in the step 2.3 to obtain a final room layout segmentation result.
CN202210327868.7A 2022-03-30 2022-03-30 Room layout dividing method based on unit decomposition and space division Pending CN114677388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210327868.7A CN114677388A (en) 2022-03-30 2022-03-30 Room layout dividing method based on unit decomposition and space division

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210327868.7A CN114677388A (en) 2022-03-30 2022-03-30 Room layout dividing method based on unit decomposition and space division

Publications (1)

Publication Number Publication Date
CN114677388A true CN114677388A (en) 2022-06-28

Family

ID=82076012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210327868.7A Pending CN114677388A (en) 2022-03-30 2022-03-30 Room layout dividing method based on unit decomposition and space division

Country Status (1)

Country Link
CN (1) CN114677388A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187346A (en) * 2022-09-14 2022-10-14 深圳市明源云空间电子商务有限公司 Rental control graph display method and device, electronic equipment and readable storage medium
CN117496181A (en) * 2023-11-17 2024-02-02 杭州中房信息科技有限公司 OpenCV-based house type graph identification method, storage medium and equipment


Similar Documents

Publication Publication Date Title
CN111915730B (en) Method and system for automatically generating indoor three-dimensional model by taking semantic slave point cloud into consideration
US10692280B2 (en) 3D indoor modeling method, system and device based on point cloud data
Ochmann et al. Automatic reconstruction of parametric building models from indoor point clouds
CN107146280B (en) Point cloud building reconstruction method based on segmentation
CN112070769B (en) Layered point cloud segmentation method based on DBSCAN
CN109685080B (en) Multi-scale plane extraction method based on Hough transformation and region growth
CN114677388A (en) Room layout dividing method based on unit decomposition and space division
CN108171780A (en) A kind of method that indoor true three-dimension map is built based on laser radar
Galvanin et al. Extraction of building roof contours from LiDAR data using a Markov-random-field-based approach
CN115564926B (en) Three-dimensional patch model construction method based on image building structure learning
CN105513054B (en) Inscription rubbing method based on 3-D scanning
CN112164145B (en) Method for rapidly extracting indoor three-dimensional line segment structure based on point cloud data
Sohn et al. An implicit regularization for 3D building rooftop modeling using airborne lidar data
CN115423972A (en) Closed scene three-dimensional reconstruction method based on vehicle-mounted multi-laser radar fusion
CN111340822B (en) Multi-scale self-adaptive airborne LiDAR point cloud building single segmentation method
CN111783721B (en) Lane line extraction method of laser point cloud and electronic equipment
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN116310115B (en) Method and system for constructing building three-dimensional model based on laser point cloud
CN113066004A (en) Point cloud data processing method and device
CN113139982B (en) Automatic segmentation method for indoor room point cloud
CN111127622A (en) Three-dimensional point cloud outlier rejection method based on image segmentation
CN116630399B (en) Automatic roadway point cloud center line extraction method
CN114882192B (en) Building facade segmentation method and device, electronic equipment and storage medium
CN116051771A (en) Automatic photovoltaic BIM roof modeling method based on unmanned aerial vehicle oblique photography model
CN114677505A (en) Automatic room segmentation method based on wall constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination