WO2022041119A1 - Three-dimensional point cloud processing method and apparatus - Google Patents


Info

Publication number
WO2022041119A1
Authority
WO
WIPO (PCT)
Prior art keywords: target, point, parameter, dimensional, points
Application number
PCT/CN2020/112106
Other languages: French (fr), Chinese (zh)
Inventor
黄文杰 (Huang Wenjie)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/112106
Publication of WO2022041119A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present application relates to the technical field of photogrammetry, and in particular, to a three-dimensional point cloud processing method and device.
  • the field of photogrammetry usually uses an aircraft to collect a series of images of the target area, and then reconstructs a 3D model, digital surface model (DSM) or orthophoto (DOM) of the target area based on these images.
  • DSM digital surface model
  • DOM orthophoto
  • the aerial photography aircraft will inevitably capture some areas that are not of interest to the user. For example, when photographing a building that the user wants to reconstruct, the camera will inevitably capture the background around the building, which is then reconstructed from the images along with the building.
  • the reconstruction result will include a lot of background noise, which not only affects the reconstruction result and user experience, but also reduces the processing efficiency of the reconstruction process.
  • the user's region of interest can be determined first, and then only the region of interest is reconstructed.
  • an image segmentation method is used to determine the region of interest from the image, but this method requires accurate segmentation of the image and is not suitable for images with complex scenes.
  • the present application provides a three-dimensional point cloud processing method and device.
  • a method for processing a 3D point cloud is provided, wherein the 3D point cloud is obtained based on a plurality of images collected by an image collection device from different viewing angles, and the method includes:
  • for each 3D point in the 3D point cloud, obtain a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images among the multiple images, a target image being an image that includes a pixel point corresponding to the three-dimensional point, and the shooting angle parameter is used to represent the included angle formed by the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
  • the target parameter is used to determine the target 3D point from the 3D point cloud;
  • the target pixel area in the multiple images is determined according to the target three-dimensional point, so as to reconstruct the three-dimensional area corresponding to the target pixel area.
  • a processing device for a three-dimensional point cloud is provided, wherein the three-dimensional point cloud is obtained based on a plurality of images collected by an image acquisition device from different viewing angles.
  • the device includes a processor and a memory; a computer program executable by the processor is stored in the memory, and when the processor executes the computer program, the following steps are implemented:
  • for each 3D point in the 3D point cloud, obtain a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images among the multiple images, a target image being an image that includes a pixel point corresponding to the three-dimensional point, and the shooting angle parameter is used to represent the included angle formed by the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
  • the target parameter is used to determine the target 3D point from the 3D point cloud;
  • the target pixel area in the multiple images is determined according to the target three-dimensional point, so as to reconstruct the three-dimensional area corresponding to the target pixel area.
  • the visibility parameter and/or shooting angle parameter of each three-dimensional point in the point cloud can be counted; the visibility parameter characterizes the number of target images among the plurality of images that include a pixel point corresponding to the three-dimensional point, and the shooting angle parameter characterizes the included angle formed by the lines connecting the three-dimensional point to the optical centers corresponding to the target images.
  • these parameters can then be converted into a target parameter representing the user's degree of interest in the point, and the target parameter is used to determine the target 3D points from the 3D point cloud.
  • the target three-dimensional point of interest to the user can be accurately determined through the visibility parameter and/or the shooting angle parameter, so as to determine the target pixel area in the multiple images according to the target three-dimensional point, and these target pixel areas correspond to the area of interest of the user.
  • the reconstruction result may be a 3D model, a digital surface model, or an orthophoto.
  • only the region of interest can be reconstructed, avoiding a lot of noise in the reconstruction result, and at the same time improving the processing efficiency of the reconstruction process.
  • FIG. 1 is a flowchart of a three-dimensional point cloud processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an included angle between a three-dimensional point and a line connecting an optical center corresponding to an image according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of determining a target pixel area by projecting a three-dimensional target area boundary to an image according to an embodiment of the present application.
  • FIG. 4 compares the effect of performing three-dimensional reconstruction directly from the images with that of first determining a region of interest according to an embodiment of the present application and then performing three-dimensional reconstruction.
  • FIG. 5 is a schematic diagram of a logical structure of a three-dimensional point cloud processing apparatus according to an embodiment of the present application.
  • aerial photography vehicles are usually used to collect a series of images of the target area, and then a 3D model, digital surface model (DSM) or orthophoto (DOM) of the target area is reconstructed based on these images.
  • the user's area of interest can be determined first, and then only that region of interest is reconstructed.
  • in one approach, an image segmentation method is used to determine the region of interest directly from the image, but this requires automatic segmentation of the scene in the image and is not very applicable when the scene in the image is complex.
  • other technologies first extract feature points from the images, determine the pose parameters of the camera that collected the images according to the feature points, determine the 3D point cloud corresponding to the feature points, and then determine the 3D points of interest from the generated 3D point cloud.
  • the region of interest to the user is determined according to the three-dimensional point of interest.
  • it is mainly to project the 3D point cloud onto the image to determine the spatial resolution corresponding to each 3D point.
  • the spatial resolution refers to the physical size in the actual 3D space corresponding to a pixel point in the image
  • a histogram of the spatial resolutions is computed, the 3D points near the spatial resolution corresponding to the highest peak in the histogram are selected as the 3D points of interest, and finally the 3D points of interest are used to determine the region of interest.
  • however, when images are collected at different resolutions, for example a picture set A and a picture set B whose resolutions differ greatly, screening based on the dominant resolution is likely to eliminate the 3D points corresponding to one of the image sets, leaving part of the determined region of interest missing.
  • therefore, the present application provides a three-dimensional point cloud processing method, which can determine the target three-dimensional points of interest to the user from a three-dimensional point cloud obtained from multiple images collected by a camera device at different viewing angles, so that the user's region of interest can be determined from the target 3D points.
  • the method includes the following steps:
  • S101: for each three-dimensional point in the three-dimensional point cloud, obtain a visibility parameter and/or a shooting angle parameter of the three-dimensional point, wherein the visibility parameter is used to represent the number of target images among the multiple images, a target image being an image that includes a pixel point corresponding to the three-dimensional point, and the shooting angle parameter is used to represent the included angle formed by the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
  • S102: determine the target parameter of the 3D point according to the visibility parameter and/or the shooting angle parameter, where the target parameter is used to determine the target 3D points from the 3D point cloud;
  • the three-dimensional point cloud processing method of the present application can be executed by any device with a three-dimensional point cloud processing function, and the device can be a drone, an unmanned vehicle, a handheld gimbal, a mobile phone, a notebook computer, a cloud server, or the like.
  • the 3D point cloud in this application can be obtained through multiple images collected by a camera device at different viewing angles.
  • the camera device can be mounted on a movable platform, such as a drone, an unmanned car, or a handheld gimbal; the platform then carries the camera to different positions and angles to capture images of a scene, obtaining multiple images, and a 3D point cloud is then obtained based on these multiple images.
  • a three-dimensional point cloud can be obtained from the multiple images by using a structure-from-motion (SFM) technique: for example, feature points can be extracted from the images, and matching of the feature points can be used to determine the pose of the camera device when collecting each image as well as the three-dimensional points corresponding to these feature points, thereby obtaining the above three-dimensional point cloud.
  • the user reconstructs the target area by collecting the image of the target area to obtain the reconstruction results of the 3D model, orthophoto, digital surface model, etc. of the target area
  • in order to completely reconstruct the area of interest, the user often chooses to collect images of the region of interest at different angles and with different resolutions. The number of images containing the region of interest is therefore often large, and to capture the region completely, the user generally controls the camera to move around the region while collecting images, so that all angles of the area can be photographed.
  • the area of interest to the user is a building.
  • the user will control the drone to collect multiple images around the building, so that both the front and the back of the building can be captured.
  • the target image refers to an image that includes pixels corresponding to the three-dimensional points.
  • the shooting angle parameter is used to represent the angle between the three-dimensional point and the line connecting the optical centers corresponding to the target image.
  • the visibility parameter can indicate the number of times the 3D point is photographed. For example, the more times the user photographs the 3D point, the more target images contain the pixel corresponding to the 3D point, indicating that the user pays more attention to that 3D point.
  • the number of target images may be directly used as the visibility parameter, or the ratio of the number of target images to the total number of multiple images may be used as the visibility parameter.
  • other parameters may also be used, which is not limited in this application.
  • the shooting angle parameter may represent the included angle between the position of the camera device (ie, the position of the optical center corresponding to the image) and the line connecting the three-dimensional point when the user shoots the three-dimensional point.
  • the 3D point P is a 3D point on the building. If the building is an area of interest to the user, the user usually takes pictures of the building from different angles, so the included angle formed by the lines connecting the 3D point to the positions of the camera device will be relatively large. In some embodiments, the maximum included angle between the lines connecting the three-dimensional point and the optical centers corresponding to the images may be used as the shooting angle parameter.
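The shooting angle parameter described above can be computed geometrically. A minimal sketch (the helper name is an assumption, not from the patent): for a 3D point and the optical centers of its target images, take the largest angle between any pair of viewing rays.

```python
import math

def max_included_angle(point, optical_centers):
    """Largest angle (radians) formed at `point` between the rays pointing
    toward any two camera optical centers."""
    def unit_ray(center):
        v = [center[k] - point[k] for k in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    rays = [unit_ray(c) for c in optical_centers]
    best = 0.0
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            dot = sum(a * b for a, b in zip(rays[i], rays[j]))
            # clamp for numerical safety before acos
            best = max(best, math.acos(max(-1.0, min(1.0, dot))))
    return best
```

For two cameras at right angles to the point the result is π/2; for cameras on opposite sides it is π, matching the intuition that surrounding a point of interest yields a large angle.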
  • the target parameter of the 3D point can be determined according to the visibility parameter alone, according to the shooting angle parameter alone, or according to the visibility parameter and the shooting angle parameter jointly.
  • the determined target parameter is then used to select the target 3D points from the 3D point cloud, wherein the target parameter can be used to represent the user's degree of interest in the 3D point.
  • the mapping relationship between the target parameter and the visibility parameter or the shooting angle parameter may be preset, and after the visibility parameter or the shooting angle parameter is determined, the target parameter may be determined according to the mapping relationship.
  • the weights corresponding to the visibility parameters and the shooting angle parameters may also be preset, and then the target parameters are determined according to the determined visibility parameters, shooting angle parameters and their respective weights.
  • the target 3D points can be determined from the 3D point cloud according to the target parameters.
  • the 3D points whose target parameters meet a preset condition can be taken as the target 3D points, and the determined target 3D points can then be used to determine the target pixel areas in the multiple images, so as to reconstruct the three-dimensional area corresponding to the target pixel areas; this three-dimensional area is the user's area of interest.
  • the target parameter can be used to indicate the user's interest in a 3D point: the larger the target parameter, the more interested the user is in the 3D point. In this case the target parameter is positively correlated with both the visibility parameter and the shooting angle parameter. For example, assuming that the visibility parameter is the number of target images and the shooting angle parameter is the maximum included angle between the lines connecting the three-dimensional point and the optical centers corresponding to the target images, then the larger the visibility parameter and the shooting angle parameter, the larger the target parameter.
  • alternatively, the smaller the target parameter, the more interested the user is in the three-dimensional point. In that case the target parameter is negatively correlated with the visibility parameter and/or the shooting angle parameter: the larger the number of target images and the larger the maximum included angle between the lines connecting the three-dimensional point and the optical centers, the more interested the user is, and the smaller the target parameter.
  • the target parameter of the 3D point can also be determined by combining the visibility parameter, the shooting angle parameter, and a spatial resolution parameter, wherein the spatial resolution parameter is used to characterize the physical size, in the actual 3D space, of the pixel corresponding to the 3D point in the image.
  • the smaller the minimum spatial resolution over the target images, the more interested the user is in the three-dimensional point.
  • the target parameter may be negatively correlated with the spatial resolution parameter.
  • when the spatial resolution parameter is the minimum of the three-dimensional point's spatial resolutions over the target images, a smaller value indicates that the user photographed the point from closer and is more interested in it, and therefore the target parameter is larger.
  • the spatial resolution parameter may also be the maximum of the three-dimensional point's spatial resolutions over the target images; likewise, the smaller the spatial resolution parameter, the more interested the user is in the three-dimensional point, and the larger the target parameter.
  • under the opposite convention, where a smaller target parameter indicates greater interest, the target parameter may instead be positively correlated with the spatial resolution parameter. In short, the closer the user shoots the three-dimensional point, the more interested the user is in it.
  • in order for the target parameter to represent the user's interest in the 3D point more accurately, when determining the target parameter according to the visibility parameter, the shooting angle parameter, and the spatial resolution parameter, the mean and standard deviation of these parameters over all 3D points in the point cloud can be used to determine the target parameter of each 3D point.
  • the visibility parameter is the number of target images, denoted by D
  • the shooting angle parameter is the maximum included angle between the lines connecting the 3D point and the optical centers corresponding to the target images, denoted by θ
  • the spatial resolution parameter is the minimum of the 3D point's spatial resolutions over the target images, denoted by gsd
  • the target parameter is denoted by w
  • the target parameter of each 3D point can be calculated by formula (1):
  • σ_D is the standard deviation of the visibility parameter D over all 3D points in the 3D point cloud, σ_θ is the standard deviation of the maximum included angle θ over all 3D points, and σ_gsd is the standard deviation of gsd over all 3D points.
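The body of formula (1) is not reproduced in this text. The following is a hypothetical reconstruction, not the patent's exact formula: it standardizes each parameter using the per-cloud mean and standard deviation, then combines them with signs matching the stated correlations (w grows with D and θ, shrinks with gsd). The function name and the exact combination are assumptions.

```python
import statistics

def target_parameters(D, theta, gsd):
    """Hypothetical stand-in for formula (1): z-score each parameter over the
    whole cloud, then sum so that w rises with D and theta and falls with gsd."""
    def zscores(values):
        mu = statistics.mean(values)
        sigma = statistics.pstdev(values)
        return [(v - mu) / sigma if sigma else 0.0 for v in values]

    z_d, z_t, z_g = zscores(D), zscores(theta), zscores(gsd)
    return [d + t - g for d, t, g in zip(z_d, z_t, z_g)]
```

Normalizing by the standard deviation keeps a parameter with a large numeric range (e.g. image counts) from dominating one with a small range (e.g. GSD in meters).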
  • the multiple images collected by the camera usually include one or more regions of interest.
  • in order to determine the target three-dimensional points corresponding to the one or more regions of interest from the three-dimensional point cloud, after the target parameter corresponding to each three-dimensional point is determined, the three-dimensional point cloud can be clustered according to the target parameters to determine the target three-dimensional points.
  • each group of target 3D points corresponds to a region of interest. By clustering the 3D point cloud, the target 3D points can be accurately determined from the 3D point cloud, avoiding missing 3D points that would leave the determined region of interest incomplete.
  • the clustering operation may be repeatedly performed to obtain one or more groups.
  • a first preset condition for stopping the clustering operation can be set, and as long as the first preset condition is not triggered, the operation of clustering the three-dimensional point cloud to obtain one of the groups is repeatedly performed.
  • the cluster center can be determined from the ungrouped 3D points in the 3D point cloud according to the target parameters, and the ungrouped 3D points can then be clustered around that center to obtain a group; as long as the first preset condition is not triggered, the above operation is repeated to obtain one or more groups.
  • the first preset condition may be that the ratio of the total number of target 3D points in the determined groups to the total number of 3D points in the 3D point cloud is greater than a first preset threshold, which can be set according to actual needs. For example, if the 3D point cloud contains 100 points and three groups of 40, 30, and 25 points have been clustered, the grouped points account for 95% of the cloud; with a first preset threshold of 90%, the clustering operation can be stopped at this point, i.e., a total of three groups are determined, the 3D points in the groups are the target 3D points, and the ungrouped 3D points are discarded.
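The first stop condition from the example above can be written directly (the helper name is an assumption):

```python
def should_stop_clustering(group_sizes, total_points, first_threshold=0.9):
    """First preset condition: stop when the grouped points exceed the given
    fraction of the whole 3D point cloud."""
    return sum(group_sizes) / total_points > first_threshold
```

With the document's example of groups of 40, 30, and 25 points out of 100, the ratio is 0.95, which exceeds a 90% threshold, so clustering stops.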
  • alternatively, when the ratio of the number of 3D points in a specified grouping to the total number of 3D points in the cloud is less than a second preset threshold, the clustering operation can be stopped, where the specified grouping is the group with the fewest target 3D points or the most recently clustered group. For example, if the cloud contains 100 points, three groups of 40, 30, and 15 points have been clustered, and the second preset threshold is 20%, then the most recently clustered group's 15 points account for 15% of the cloud, which is less than the second preset threshold. The clustering operation can therefore be stopped at this point, the 3D points in the groups are taken as the target 3D points, and the ungrouped 3D points are discarded.
  • when determining the target 3D points in each group according to the cluster center, a second preset condition may be set; as long as the second preset condition is not triggered, the operation of selecting 3D points from the ungrouped points and adding them to the current group is repeated.
  • the weighted distance between each ungrouped 3D point and the cluster center can be determined, the 3D point with the smallest weighted distance is then determined as a target 3D point in the current group, and the cluster center is updated according to the coordinates and target parameters of the target 3D points currently in the group.
  • when determining the weighted distance, a weight may first be determined according to the 3D point's target parameter, and the weighted distance is then determined according to that weight and the distance between the 3D point and the cluster center.
  • when updating the cluster center, a weight can first be determined according to each 3D point's target parameter, and the coordinates of the cluster center are then determined according to these weights and the coordinates of each 3D point in the group. Specifically, refer to formula (2):
  • w_i is the weight of the i-th three-dimensional point in the current group
  • P_i is the coordinate of the i-th three-dimensional point in the current group
  • n is the number of three-dimensional points contained in the current group.
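The body of formula (2) is not reproduced in this text; given the definitions of w_i, P_i, and n, the natural reading is a weight-averaged centroid, P_center = Σ w_i·P_i / Σ w_i. A sketch under that assumption:

```python
def cluster_center(points, weights):
    """Assumed form of formula (2): weighted centroid of the current group,
    P_center = sum(w_i * P_i) / sum(w_i)."""
    total_weight = sum(weights)
    return tuple(
        sum(w * p[axis] for w, p in zip(weights, points)) / total_weight
        for axis in range(3)
    )
```

Points with larger target parameters (greater user interest) therefore pull the center toward themselves more strongly than a plain mean would.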
  • the second preset condition may be that the minimum weighted distance between the ungrouped 3D points in the 3D point cloud and the updated cluster center is greater than the average weighted distance between the target 3D points currently in the group and the updated cluster center. That is, when the weighted distance of every ungrouped 3D point to the updated cluster center exceeds the average weighted distance of the target 3D points in the current group, it is considered that none of the ungrouped 3D points is suitable to be added to the current group.
  • the 3D point cloud includes 3D points P0, P1, P2, P3, ..., Pn.
  • each point has a corresponding target parameter w_i; the larger w_i is, the more interested the user is in the 3D point.
  • the 3D point with the largest target parameter, say P0, can be determined from the above 3D points and used as the cluster center; then, for each remaining 3D point, the weighted distance D_i to P0 is determined, e.g. D_i = w_i·d_i, where d_i is the distance between the three-dimensional point and the cluster center.
  • the 3D point with the smallest weighted distance is then added to the current grouping.
  • the cluster center can be re-determined according to the three-dimensional points in the current group, and the cluster center Pcenter can be determined according to formula (2):
  • w_i is the weight of the i-th three-dimensional point in the current group
  • P_i is the coordinate of the i-th three-dimensional point in the current group
  • n is the number of three-dimensional points contained in the current group.
  • the clustering operation on the three-dimensional point cloud is stopped, thereby obtaining one or more groups.
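Putting the pieces together, the grouping procedure described above can be sketched as follows. All names are assumptions; the weighted distance uses D_i = w_i·d_i as suggested by the text, the center update follows the weighted-centroid reading of formula (2), and the first stop condition uses the grouped-fraction threshold.

```python
import math

def cluster_groups(points, weights, ratio_stop=0.9):
    """Greedy grouping sketch: seed each group at the ungrouped point with the
    largest target parameter, grow it by smallest weighted distance, and update
    the center as the weighted centroid of the group."""
    total = len(points)
    ungrouped = set(range(total))
    groups = []
    while ungrouped:
        # first preset condition: stop once enough of the cloud is grouped
        if (total - len(ungrouped)) / total > ratio_stop:
            break
        seed = max(ungrouped, key=lambda i: weights[i])
        group = [seed]
        ungrouped.discard(seed)
        center = points[seed]
        while ungrouped:
            # weighted distance D_i = w_i * d_i to the current center
            best = min(ungrouped, key=lambda i: weights[i] * math.dist(points[i], center))
            best_wd = weights[best] * math.dist(points[best], center)
            avg_wd = sum(weights[j] * math.dist(points[j], center) for j in group) / len(group)
            # second preset condition: no candidate is closer (weighted) than
            # the group's average weighted distance
            if len(group) > 1 and best_wd > avg_wd:
                break
            group.append(best)
            ungrouped.discard(best)
            wsum = sum(weights[j] for j in group)
            center = tuple(
                sum(weights[j] * points[j][axis] for j in group) / wsum
                for axis in range(3)
            )
        groups.append(group)
    return groups
```

The two stop conditions interact exactly as in the text: the inner loop closes one group when no ungrouped point is close enough, and the outer loop stops seeding new groups once most of the cloud has been assigned.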
  • the target three-dimensional area may be determined according to the target three-dimensional point, and then according to the projection points of the boundary points of the target three-dimensional area in the multiple images collected by the camera, in the multiple images Determine the target pixel area.
  • one or more grouped target three-dimensional points are obtained based on the clustering operation, and one or more target three-dimensional regions may be obtained based on the one or more grouped target three-dimensional points.
  • the target 3D area determined according to the target 3D points is a cuboid area
  • the corner points or edges of the cuboid can be projected onto the image (the corner points are taken as an example in the figure) to determine projection points in the image (the gray dots in the figure), and the target pixel area corresponding to the target 3D area in the image can then be determined according to the projection points.
  • the three-dimensional point corresponding to the pixel point of the target pixel area may be determined to reconstruct the target three-dimensional area.
  • a three-dimensional reconstruction of the target three-dimensional area can be performed to obtain a three-dimensional model of the target three-dimensional area.
  • an orthophoto, a digital surface model, etc. corresponding to the target three-dimensional area can also be obtained.
  • the target three-dimensional area may be a cuboid area.
  • the maximum and minimum coordinate values of the target three-dimensional points along the three axes of the three-dimensional space may be determined first, and the coordinates of the corner points of the cuboid region are then determined according to these maximum and minimum values, thereby determining the cuboid region.
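The cuboid construction is just a per-axis min/max over the target 3D points (the helper name is an assumption):

```python
def cuboid_region(points):
    """Axis-aligned cuboid enclosing the target 3D points: the per-axis minimum
    and maximum coordinate values give two opposite corners."""
    mins = tuple(min(p[axis] for p in points) for axis in range(3))
    maxs = tuple(max(p[axis] for p in points) for axis in range(3))
    return mins, maxs
```

The eight corner coordinates of the cuboid are all combinations of the returned min/max values along each axis.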
  • the target three-dimensional area may be a spherical area.
  • the center of the spherical area may be determined from the median coordinates of the target three-dimensional points along the three axes of the three-dimensional space, and the maximum distance between any target three-dimensional point and the center is then taken as the radius of the spherical area, thereby determining the spherical area.
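Similarly, the spherical variant takes per-axis medians for the center and the farthest target point for the radius (the helper name is an assumption):

```python
import math
import statistics

def spherical_region(points):
    """Sphere enclosing the target 3D points: center at the per-axis median,
    radius equal to the maximum point-to-center distance."""
    center = tuple(statistics.median(p[axis] for p in points) for axis in range(3))
    radius = max(math.dist(p, center) for p in points)
    return center, radius
```

Using the median rather than the mean keeps a few outlying target points from dragging the center away from the bulk of the region of interest.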
  • the 3D point cloud and the target 3D region can be displayed on the user interface, so that the user can make adaptive adjustments to the target 3D region, such as rotation, scaling, overall movement, and the like.
  • the depth information of the projection points of the target three-dimensional area's boundary points in the multiple images can be used to determine the depth value range of the pixels in the target pixel area; this depth value range is then used as a constraint to determine the depth information of each pixel in the target pixel area, and the 3D point corresponding to each pixel of the target pixel area is determined according to that depth information.
  • to reconstruct the target three-dimensional area, the depth information of each pixel needs to be calculated; the depth value range of the pixel can be determined according to the depth information of the projection points, and that range is then used as a constraint to determine the depth information of the pixel.
  • in the field of photogrammetry, drones are usually used to collect multiple images of the target area, and 3D reconstruction of the target area is then performed based on these multiple images. 3D reconstruction usually includes the following steps:
  • a dense point cloud is constructed, and triangular meshing or texture mapping is applied to obtain a textured mesh model.
  • the target 3D region of interest to the user can be determined according to the determined sparse 3D point cloud, and when performing 3D reconstruction, only the target 3D region of interest to the user is reconstructed, which can be specifically achieved through the following steps:
  • the target image including the pixel corresponding to the 3D point in the multiple images can be determined
  • the visibility parameter D can be the number of target images
  • the shooting angle parameter θ can be the maximum included angle formed by the lines connecting the 3D point and the optical centers corresponding to the target images.
  • the spatial resolution parameter gsd can be the minimum value of the spatial resolution corresponding to the three-dimensional point in each target image.
  • the spatial resolution is the physical distance in three-dimensional space corresponding to each pixel in the image.
  • the spatial resolution parameter gsd can be determined by the following formula (3):
  • depth_i is the depth of the 3D point relative to the camera of the i-th target image
  • focal_i is the camera focal length corresponding to the i-th target image
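The body of formula (3) is not reproduced here. Given the stated quantities, a common definition of ground sampling distance is depth divided by focal length (in pixel units), minimized over the target images; the following is an assumed reconstruction, not the patent's exact formula:

```python
def spatial_resolution(depths, focals):
    """Assumed form of formula (3): per-image GSD = depth_i / focal_i (physical
    extent of one pixel at the 3D point's depth), taking the minimum over all
    target images that see the point."""
    return min(depth / focal for depth, focal in zip(depths, focals))
```

Taking the minimum means the point's resolution is judged by the closest (sharpest) view of it, consistent with "the closer the user shoots the point, the more interested the user is".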
  • the target parameter w corresponding to each three-dimensional point can be determined according to the visibility parameter D, the shooting angle parameter θ, and the spatial resolution parameter gsd, where the larger the visibility parameter D, the larger the shooting angle parameter θ, and the smaller the spatial resolution parameter gsd, the larger the target parameter w.
  • the target parameter w can be determined by the following formula (1):
  • σ_D is the standard deviation of the visibility parameter D of all 3D points in the 3D point cloud
  • σ_θ is the standard deviation of the shooting angle parameter θ of all 3D points in the 3D point cloud
  • σ_gsd is the standard deviation of the spatial resolution parameter gsd of all 3D points in the 3D point cloud.
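The body of formula (1) is not reproduced in this extract. The sketch below is a hypothetical reading consistent with the stated monotonicity (w grows with D and θ and shrinks as gsd grows) and with the standard deviations listed above, normalizing each term by its standard deviation over the whole cloud; the function name and the exact combination are assumptions.

```python
import numpy as np

def target_parameters(D, theta, gsd):
    """Hypothetical sketch of the per-point target parameter w.

    D     -- visibility parameter (number of target images) per point
    theta -- shooting angle parameter (max angle between the lines to
             the optical centers) per point
    gsd   -- spatial resolution parameter per point

    Each term is normalized by its standard deviation over the cloud,
    so w increases with D and theta and decreases with gsd, matching
    the monotonicity stated in the text.  The exact form of formula (1)
    is not reproduced here.
    """
    D, theta, gsd = (np.asarray(x, dtype=float) for x in (D, theta, gsd))
    sigma_D, sigma_theta, sigma_gsd = D.std(), theta.std(), gsd.std()
    return D / sigma_D + theta / sigma_theta - gsd / sigma_gsd
```

A point seen in more images, over a wider spread of angles, and at finer resolution then receives a larger w.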
  • after updating the three-dimensional points in the current group, the cluster center can be re-determined according to the three-dimensional points in the current group; the cluster center Pcenter can be determined according to formula (2):
  • Wi is the weight of the ith three-dimensional point in the current group
  • Pi is the coordinate of the ith point in the current group
  • n is the number of three-dimensional points contained in the current group.
  • after re-determining the cluster center, step (1) is repeated until the minimum weighted distance between the ungrouped 3D points in the 3D point cloud and the updated cluster center is greater than the average weighted distance between the target 3D points currently in the group and the updated cluster center, at which point the grouping is finalized.
  • the clustering operation on the three-dimensional point cloud is stopped, thereby obtaining one or more groups.
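A minimal sketch of the cluster-center update, reading formula (2) as the target-parameter-weighted mean of the points currently in the group (Wi is the weight of the ith point, Pi its coordinate, n the group size, as defined above); the function name is an assumption.

```python
import numpy as np

def cluster_center(points, weights):
    """Weighted cluster center, a plausible reading of formula (2):

        Pcenter = sum(W_i * P_i) / sum(W_i)

    points  -- (n, 3) coordinates P_i of the points in the current group
    weights -- (n,) target-parameter weights W_i
    """
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * points).sum(axis=0) / weights.sum()
```

Points with larger target parameters pull the center toward themselves, so the center drifts toward the most "interesting" part of the group as it grows.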
  • For each group obtained in step 4, the three-dimensional points in the group can be traversed to calculate the minimum and maximum coordinate values in the x-axis, y-axis, and z-axis directions, from which a cuboid bounding box is obtained to represent the target 3D area of interest corresponding to the group.
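The per-axis min/max traversal above can be sketched as follows (hypothetical helper; numpy is used for brevity):

```python
import numpy as np

def bounding_box(points):
    """Axis-aligned cuboid bounding box of a group of target 3D points.

    Returns the per-axis minima, per-axis maxima, and the eight corner
    points (every combination of min/max per axis), which represent the
    target 3D area of interest for the group.
    """
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    corners = np.array([[x, y, z]
                        for x in (lo[0], hi[0])
                        for y in (lo[1], hi[1])
                        for z in (lo[2], hi[2])])
    return lo, hi, corners
```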
  • the boundary of the cuboid bounding box and its eight corner points can be projected onto the multiple images to determine, according to the projection points, the target pixel areas of interest to the user in those images; the three-dimensional point of each pixel in the target pixel area is then obtained to reconstruct the target three-dimensional area of interest.
  • the depth range of each pixel in the target pixel area can be determined according to the eight corners of the cuboid bounding box; using the depth range as a constraint, the depth of each pixel in the target pixel area is calculated according to the camera pose parameters and pixel matching across the multiple images.
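A sketch of the depth-range constraint described above, assuming a pinhole camera with world-to-camera rotation R and translation t (the exact camera parametrization is not given in the text, so this mapping and the function name are assumptions): the eight bounding-box corners are transformed into the camera frame and the min/max z give the per-pixel depth search range.

```python
import numpy as np

def depth_range(corners, R, t):
    """Depth range for one view from the eight bounding-box corners.

    corners -- (8, 3) corner coordinates in world space
    R, t    -- world-to-camera rotation (3x3) and translation (3,),
               assuming x_cam = R @ x_world + t

    Returns (near, far): the min and max camera-frame depth of the
    corners, used as the constraint when estimating per-pixel depth.
    """
    cam = (np.asarray(R, dtype=float) @ np.asarray(corners, dtype=float).T).T
    cam = cam + np.asarray(t, dtype=float)
    depths = cam[:, 2]
    return float(depths.min()), float(depths.max())
```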
  • FIG. 4(a) is a schematic diagram of the reconstruction result of directly performing 3D reconstruction on the multiple images, and FIG. 4(b) is a schematic diagram of the reconstruction result after determining the target three-dimensional area by the method of this embodiment. It can be seen from the figures that, by determining the region of interest before performing 3D reconstruction, the noise in the reconstruction result can be reduced.
  • the present application also provides a processing device for a three-dimensional point cloud.
  • the three-dimensional point cloud is obtained based on multiple images collected by an image acquisition device at different viewing angles.
  • the device includes a processor 51, a memory 52, and a computer program stored in the memory 52 and executable by the processor 51. When the processor executes the computer program, the following steps are implemented:
  • for each 3D point in the 3D point cloud, obtain a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images in the multiple images, the target images include pixel points corresponding to the three-dimensional point, and the shooting angle parameter is used to represent the angle between the lines connecting the three-dimensional point and the optical centers corresponding to the target images;
  • the target parameter of the three-dimensional point is determined, and the target parameter is used to determine the target three-dimensional point from the three-dimensional point cloud.
  • the visibility parameter and/or the shooting angle parameter is positively correlated with the target parameter.
  • when the processor is configured to determine the target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, it is specifically configured to:
  • the target parameter of the three-dimensional point is determined according to the visibility parameter, the shooting angle parameter, and the spatial resolution parameter, where the spatial resolution parameter is used to represent the physical size, in actual three-dimensional space, of the pixel point corresponding to the three-dimensional point in the target image.
  • the spatial resolution parameter is negatively correlated with the target parameter.
  • the apparatus is also used to:
  • the three-dimensional point cloud is clustered according to the target parameter to obtain the target three-dimensional point, and the target three-dimensional point is divided into one or more groups.
  • when the processor is configured to perform clustering processing on the three-dimensional point cloud according to the target parameter, it is specifically configured to:
  • a target three-dimensional point within one of the one or more groups is determined based on the cluster centers.
  • the first preset condition includes:
  • the ratio of the sum of the number of target 3D points in the one or more groups to the total number of 3D points in the 3D point cloud is greater than a first preset threshold
  • the ratio of the number of target 3D points in a specified group among the one or more groups to the total number of 3D points in the 3D point cloud is less than a second preset threshold, where the specified group is the group with the least number of target 3D points.
  • when the processor is configured to determine the target 3D point in one of the one or more groups based on the cluster center, it is specifically configured to:
  • the cluster center is updated based on the coordinates of the current target three-dimensional point in the one group and the target parameter.
  • the second preset condition includes:
  • the minimum value of the weighted distance between the ungrouped 3D points in the 3D point cloud and the updated cluster center is greater than the average value of the weighted distances between the current target 3D points in the one group and the updated cluster center.
  • the processor is also used to:
  • the three-dimensional point corresponding to the pixel point of the target pixel area is determined, so as to reconstruct the target three-dimensional area by using the three-dimensional point corresponding to the pixel point of the target pixel area.
  • when the processor is configured to determine the three-dimensional point corresponding to the pixel point of the target pixel region, it is specifically configured to:
  • the depth information of the projection point determine the depth value range of the pixel point of the target pixel area
  • the three-dimensional point corresponding to the pixel point of the target pixel area is determined.
  • the target three-dimensional area is a cuboid area
  • when the processor is configured to determine the target three-dimensional area according to the target three-dimensional point, it is specifically configured to:
  • the coordinates of the corner points of the cuboid region are determined based on the maximum coordinate values and the minimum coordinate values.
  • the target three-dimensional area is a spherical area
  • when the processor is configured to determine the target three-dimensional area according to the target three-dimensional point, it is specifically configured to:
  • the maximum value of the distance between the target three-dimensional point and the center of the sphere is taken as the radius of the spherical area to determine the spherical area.
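For the spherical case, the text fixes the radius as the maximum distance from the target 3D points to the sphere center, but does not specify here how the center is chosen; the sketch below takes the centroid of the target points as the center, which is an assumption.

```python
import numpy as np

def bounding_sphere(points):
    """Spherical target area from a set of target 3D points.

    The radius is the maximum distance from the points to the center,
    as stated in the text; the center is taken as the centroid of the
    target points (an assumption of this sketch).
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    radius = float(np.linalg.norm(pts - center, axis=1).max())
    return center, radius
```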
  • an embodiment of the present specification further provides a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the method for processing a three-dimensional point cloud in any of the foregoing embodiments is implemented.
  • Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A three-dimensional point cloud processing method and apparatus. The method comprises: with regard to each three-dimensional point in a three-dimensional point cloud which is determined according to a plurality of images collected by a photographic apparatus, acquiring a visibility parameter and/or a photographing angle parameter of the three-dimensional point, wherein the visibility parameter is used for representing the number of target images from among the plurality of images, and the target images include pixel points corresponding to the three-dimensional point, and the photographing angle parameter is used for representing an included angle between connection lines between the three-dimensional point and photocenters corresponding to the target images; determining a target parameter of the three-dimensional point according to the visibility parameter and/or the photographing angle parameter, wherein the target parameter is used for the determination of a target three-dimensional point from the three-dimensional point cloud; and determining a target pixel region in the plurality of images according to the target three-dimensional point, so as to reconstruct a three-dimensional region corresponding to the target pixel region.

Description

Three-dimensional point cloud processing method and device

Technical Field
The present application relates to the technical field of photogrammetry, and in particular, to a three-dimensional point cloud processing method and device.
Background
In the field of photogrammetry, an aircraft is usually used to collect a series of images of a target area, and a three-dimensional model, digital surface model (DSM), or orthophoto (DOM) of the target area is then reconstructed from these images. During image collection, the aerial vehicle inevitably captures some areas that are not of interest to the user. For example, when the user wants to reconstruct a building, the background around the building is inevitably photographed; when the three-dimensional model, DSM, or DOM of the building is reconstructed from the images, the reconstruction result will include a large amount of background noise, which both degrades the reconstruction result and the user experience and reduces the processing efficiency of the reconstruction process. Therefore, before reconstructing from the images, the user's region of interest can be determined first, and only that region is reconstructed. When determining the region of interest, the related art uses image segmentation to determine the region of interest from the image, but this approach requires accurate segmentation of the image and is not suitable for images with complex scenes.
Summary of the Invention
In view of this, the present application provides a three-dimensional point cloud processing method and device.
According to a first aspect of the present application, a method for processing a 3D point cloud is provided, wherein the 3D point cloud is obtained based on multiple images collected by an image collection device from different viewing angles, and the method includes:
for each 3D point in the 3D point cloud, obtaining a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images in the multiple images, the target images include pixel points corresponding to the 3D point, and the shooting angle parameter is used to represent the angle between the lines connecting the 3D point and the optical centers corresponding to the target images;
determining a target parameter of the 3D point according to the visibility parameter and/or the shooting angle parameter, where the target parameter is used to determine target 3D points from the 3D point cloud;
determining a target pixel area in the multiple images according to the target 3D points, so as to reconstruct the 3D area corresponding to the target pixel area.
According to a second aspect of the present application, a processing device for a 3D point cloud is provided, wherein the 3D point cloud is obtained based on multiple images collected by an image acquisition device from different viewing angles; the device includes a processor, a memory, and a computer program stored in the memory and executable by the processor, and when the processor executes the computer program, the following steps are implemented:
for each 3D point in the 3D point cloud, obtaining a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images in the multiple images, the target images include pixel points corresponding to the 3D point, and the shooting angle parameter is used to represent the angle between the lines connecting the 3D point and the optical centers corresponding to the target images;
determining a target parameter of the 3D point according to the visibility parameter and/or the shooting angle parameter, where the target parameter is used to determine target 3D points from the 3D point cloud;
determining a target pixel area in the multiple images according to the target 3D points, so as to reconstruct the 3D area corresponding to the target pixel area.
By applying the solution provided by the present application, after the 3D point cloud is determined from the images collected by the camera device, the visibility parameter and/or the shooting angle parameter of each 3D point can be computed, where the visibility parameter represents the number of target images among the multiple images that contain the pixel corresponding to the 3D point, and the shooting angle parameter represents the angle between the lines connecting the 3D point and the optical centers corresponding to the target images. A target parameter representing the user's degree of interest in the 3D point is then determined according to the visibility parameter and/or the shooting angle parameter, and is used to determine target 3D points from the 3D point cloud. The target 3D points of interest to the user can be accurately determined through the visibility parameter and/or the shooting angle parameter, so as to determine the target pixel areas in the multiple images according to the target 3D points; these target pixel areas correspond to the user's region of interest. Thus, when reconstructing a 3D model, digital surface model, or orthophoto from the images, only the region of interest needs to be reconstructed, which avoids a large amount of noise in the reconstruction result and also improves the processing efficiency of the reconstruction process.
Brief Description of the Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings used in the description of the embodiments. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a three-dimensional point cloud processing method according to an embodiment of the present application.
FIG. 2 is a schematic diagram of the angle between the lines connecting a three-dimensional point and the optical centers corresponding to images according to an embodiment of the present application.
FIG. 3 is a schematic diagram of determining a target pixel area by projecting the boundary of a three-dimensional target area onto an image according to an embodiment of the present application.
FIG. 4 is a comparison of the results of performing 3D reconstruction directly from images and performing 3D reconstruction after determining the region of interest by the method of an embodiment of the present application.
FIG. 5 is a schematic diagram of the logical structure of a three-dimensional point cloud processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the field of photogrammetry, an aerial vehicle is usually used to collect a series of images of a target area, and a 3D model, digital surface model (DSM), or orthophoto (DOM) of the target area is then reconstructed from these images. To improve processing efficiency during reconstruction and reduce the noise produced in the reconstruction result by areas the user is not interested in, thereby improving the user experience, the user's region of interest can be determined before reconstruction, and only that region reconstructed. In the related art, the region of interest is determined either by image segmentation directly from the image, which requires automatic, accurate segmentation of the scene and is not well suited to complex scenes, or by first extracting feature points from the images, determining the pose parameters of the cameras that collected the images and the 3D point cloud corresponding to the feature points, then determining 3D points of interest from the generated point cloud and using them to determine the user's region of interest. However, when determining the 3D points of interest, the latter approach mainly projects the 3D point cloud onto the images to determine the spatial resolution corresponding to each 3D point, i.e., the physical size, in actual 3D space, of the pixel corresponding to the 3D point in the image; it then builds a histogram of the spatial resolutions, selects the 3D points near the spatial resolution corresponding to the highest bin as the points of interest, and finally uses the points of interest to determine the region of interest. Determining the points of interest by spatial resolution alone is likely to miss some 3D points, causing parts of the determined region of interest to be missing. For example, for the same scene, the user may need to reconstruct from a picture set A taken at close range (e.g., on the ground) together with a picture set B taken at a longer range (e.g., in the air); the resolutions of sets A and B may differ greatly, and filtering purely by the dominant resolution is likely to discard the 3D points corresponding to one of the sets, so that part of the determined region of interest is missing.
Based on this, the present application provides a 3D point cloud processing method, which can be used to determine the target 3D points of interest to the user from a 3D point cloud obtained from multiple images collected by a camera device at different viewing angles, so as to determine the user's region of interest from the target 3D points. Specifically, as shown in FIG. 1, the method includes the following steps:
S101: for each 3D point in the 3D point cloud, obtain a visibility parameter and/or a shooting angle parameter of the 3D point, wherein the visibility parameter is used to represent the number of target images in the multiple images, the target images include pixel points corresponding to the 3D point, and the shooting angle parameter is used to represent the angle between the lines connecting the 3D point and the optical centers corresponding to the target images;
S102: determine a target parameter of the 3D point according to the visibility parameter and/or the shooting angle parameter, where the target parameter is used to determine target 3D points from the 3D point cloud;
S103: determine a target pixel area in the multiple images according to the target 3D points, so as to reconstruct the 3D area corresponding to the target pixel area.
The 3D point cloud processing method of the present application can be executed by any device with 3D point cloud processing capability, such as a drone, an unmanned vehicle, a handheld gimbal, a mobile phone, a laptop, or a cloud server.
The 3D point cloud in the present application can be obtained from multiple images collected by a camera device at different viewing angles. The camera device can be mounted on a movable platform, such as a drone, an unmanned vehicle, or a handheld gimbal, and images of a scene are collected at different positions and angles to obtain multiple images, from which the 3D point cloud is obtained. In some embodiments, structure from motion (SFM) can be used to obtain the 3D point cloud from the multiple images: for example, feature points can be extracted from the images, and feature-point matching can be used to determine the poses of the camera device when collecting the different images as well as the 3D points corresponding to the feature points, thereby obtaining the above 3D point cloud.
Usually, when a user reconstructs a target area by collecting images of it, in order to obtain reconstruction results such as a 3D model, orthophoto, or digital surface model that completely cover the area of interest, the user tends to collect images of the area from different angles and at different resolutions. The number of images containing the user's area of interest is therefore often large, and in order to obtain complete coverage, the user generally controls the camera device to move around the area so that it is captured from all angles. For example, if the area of interest is a building, the user will typically control a drone to collect multiple images around the building so that its front, back, left side, and right side are all photographed. Therefore, in order to determine the target 3D points of interest from the 3D point cloud, one or more of the visibility parameter and the shooting angle parameter corresponding to each 3D point can first be computed, where the visibility parameter represents the number of target images (images containing a pixel corresponding to the 3D point) among the multiple images, and the shooting angle parameter represents the angle between the lines connecting the 3D point and the optical centers corresponding to the target images.
The visibility parameter can indicate how many times the 3D point was photographed: the more often the user photographs the 3D point, the more target images contain the corresponding pixel, and the more attention the 3D point receives from the user. In some embodiments, the number of target images can be used directly as the visibility parameter, or the ratio of the number of target images to the total number of images can be used; of course, other parameters can also be used, which is not limited in this application.
The shooting angle parameter can represent the angle between the lines connecting the 3D point and the positions of the camera device when the point was photographed (i.e., the optical centers corresponding to the images). Generally, the larger the angle, the more different directions from which the user has photographed the 3D point, i.e., the more attention the point receives from the user. As shown in FIG. 2, the 3D point P is a point on a building; if the building is the user's area of interest, the user will usually photograph it from different angles, so the angle formed by the lines connecting the 3D point and the camera positions will be relatively large. In some embodiments, the maximum angle between the lines connecting the 3D point and the optical centers corresponding to the images can be used as the shooting angle parameter.
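The maximum included angle described above can be computed, for one 3D point, as the largest pairwise angle between the unit rays from the point to the optical centers of the images that observe it. A minimal sketch (hypothetical helper name):

```python
import numpy as np

def shooting_angle(point, optical_centers):
    """Shooting angle parameter for one 3D point: the maximum angle
    between the lines connecting the point to the optical centers of
    the target images that observe it.

    point           -- (3,) coordinates of the 3D point
    optical_centers -- (m, 3) optical-center positions of the m target images
    """
    p = np.asarray(point, dtype=float)
    rays = np.asarray(optical_centers, dtype=float) - p
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)  # unit rays
    cos = np.clip(rays @ rays.T, -1.0, 1.0)              # pairwise cosines
    return float(np.arccos(cos).max())
```

For a point photographed from opposite sides of a building, this angle approaches π, whereas background points seen only from one narrow arc get a small value.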
After the visibility parameter and the shooting angle parameter corresponding to each 3D point are determined, the target parameter of the 3D point can be determined from the visibility parameter alone, from the shooting angle parameter alone, or from the two combined, so that the determined target parameter can be used to determine target 3D points from the 3D point cloud, where the target parameter can be used to represent the user's degree of interest in the 3D point. For example, a mapping between the target parameter and the visibility parameter or the shooting angle parameter can be preset, and after the visibility parameter or the shooting angle parameter is determined, the target parameter can be determined from that mapping. Alternatively, weights corresponding to the visibility parameter and the shooting angle parameter can be preset, and the target parameter determined from the determined parameters and their respective weights. After the target parameter is determined, target 3D points can be determined from the 3D point cloud according to it; for example, 3D points whose target parameter satisfies a preset condition can be taken as target 3D points. The determined target 3D points can then be used to determine the target pixel areas in the multiple images, so as to reconstruct the 3D area corresponding to the target pixel areas, which is the user's region of interest.
In some embodiments, the target parameter may indicate the user's degree of interest in a three-dimensional point: the larger the target parameter, the more interested the user is in the point. In this case the target parameter may be positively correlated with both the visibility parameter and the shooting angle parameter. For example, if the visibility parameter is the number of target images and the shooting angle parameter is the maximum included angle between the lines connecting the three-dimensional point and the optical centers corresponding to the target images, then the larger the visibility parameter and the shooting angle parameter, the larger the target parameter. In other embodiments, a smaller target parameter may instead indicate greater interest, in which case the target parameter may be negatively correlated with the visibility parameter or the shooting angle parameter. In short, the larger the number of target images and the larger the maximum included angle between the lines connecting the three-dimensional point and the optical centers, the more interested the user is in the three-dimensional point.
To make the target three-dimensional points determined from the target parameters more reliable, in some embodiments the target parameter of a three-dimensional point may be determined by combining the visibility parameter, the shooting angle parameter, and a spatial resolution parameter, where the spatial resolution parameter characterizes the physical size, in real three-dimensional space, of the pixel corresponding to the three-dimensional point in an image. Generally, if the user shoots an area at close range, that area receives more of the user's attention; therefore, the smaller the minimum spatial resolution corresponding to the target images, the more interested the user is in the three-dimensional point.
In some embodiments, the target parameter may be negatively correlated with the spatial resolution parameter. Suppose the spatial resolution parameter is the minimum of the spatial resolutions of the three-dimensional point in the target images; then the smaller the spatial resolution parameter, the more interested the user is in the point, and therefore the larger the target parameter. Of course, in some embodiments the spatial resolution parameter may instead be the maximum of those spatial resolutions; a smaller spatial resolution parameter still indicates greater interest and a larger target parameter. The target parameter may also be positively correlated with the spatial resolution parameter; for example, when a smaller target parameter indicates greater interest, the target parameter may be positively correlated with the spatial resolution parameter. In short, the closer the user shoots a three-dimensional point, the more interested the user is in it.
In some embodiments, so that the determined target parameter characterizes the user's degree of interest more accurately, when determining the target parameter of a three-dimensional point from the visibility parameter, the shooting angle parameter, and the spatial resolution parameter, the mean and standard deviation of these parameters over all three-dimensional points in the point cloud may also be taken into account. For example, suppose the visibility parameter is the number of target images, denoted D; the shooting angle parameter is the maximum included angle between the lines connecting the three-dimensional point and the optical centers corresponding to the images, denoted θ; and the spatial resolution parameter is the minimum of the spatial resolutions of the three-dimensional point in the target images, denoted gsd. With the target parameter denoted w, the target parameter of each three-dimensional point can be calculated by formula (1):

w = (D - μ_D)/σ_D + (θ - μ_θ)/σ_θ - (gsd - μ_gsd)/σ_gsd    Formula (1)

where μ_D is the mean of the visibility parameter D over all three-dimensional points in the point cloud and σ_D is the standard deviation of D over all three-dimensional points, μ_θ is the mean of the maximum sight-line angle θ over all three-dimensional points and σ_θ is the standard deviation of θ, and μ_gsd is the mean of gsd over all three-dimensional points and σ_gsd is the standard deviation of gsd. The sign of the gsd term reflects the negative correlation between the target parameter and the spatial resolution parameter.
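As an illustration of the statistics above, the following sketch computes a target parameter per point by normalizing each of the three parameters with its mean and standard deviation over the whole cloud and combining them with the signs of the correlations described in the text. The z-score combination, function name, and sample values are illustrative assumptions, not a definitive implementation of the patent's formula.

```python
import numpy as np

def target_parameters(D, theta, gsd):
    """Combine the three per-point parameters into a target parameter w.

    D     -- visibility parameter (number of target images) per 3D point
    theta -- maximum sight-line angle per 3D point (radians)
    gsd   -- minimum spatial resolution per 3D point

    Each parameter is normalized by its mean and standard deviation over
    all points; D and theta contribute positively, gsd negatively.
    """
    D = np.asarray(D, float)
    theta = np.asarray(theta, float)
    gsd = np.asarray(gsd, float)
    z = lambda x: (x - x.mean()) / x.std()
    return z(D) + z(theta) - z(gsd)

w = target_parameters([10, 40, 5], [0.3, 1.2, 0.1], [0.05, 0.01, 0.20])
# the second point is seen most often, over the widest angle, at the finest gsd,
# so it receives the largest w
```

Because each term is a z-score, the resulting w values are dimensionless and comparable across clouds with very different scales.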
The multiple images collected by the camera device usually include one or more regions of interest. In some embodiments, in order to determine the target three-dimensional points corresponding to these regions of interest from the three-dimensional point cloud, after the target parameter of each three-dimensional point is determined, the point cloud may be clustered according to the target parameters to determine the target three-dimensional points, where the clustered target three-dimensional points can be divided into one or more groups, each group corresponding to one region of interest. Clustering the point cloud allows the target three-dimensional points to be determined accurately, avoiding the omission of some three-dimensional points that would leave the determined region of interest incomplete.
Since the three-dimensional points in the point cloud may correspond to multiple regions of interest, when clustering the point cloud according to the target parameters, the clustering operation may be performed repeatedly to obtain one or more groups. A first preset condition for stopping the clustering operation can be set; as long as the first preset condition is not triggered, the operation of clustering the point cloud to obtain one group is repeated. To obtain one group, a cluster center may be determined from the ungrouped three-dimensional points in the point cloud according to the target parameters, and the ungrouped points are then clustered around that center to form a group. As long as the first preset condition is not triggered, the above operation is repeated to obtain one or more groups.
Since not all three-dimensional points in the point cloud are of interest to the user, some of them may be discarded during clustering. In some embodiments, the first preset condition may be that the ratio of the total number of target three-dimensional points in the groups already determined to the total number of three-dimensional points in the point cloud is greater than a first preset threshold, where the first preset threshold can be set according to actual needs. For example, suppose the point cloud contains 100 three-dimensional points and three groups have been obtained, containing 40, 30, and 25 points respectively; the ratio of grouped points to the total is then 95%. If the first preset threshold is 90%, the clustering operation can stop at this point: three groups in total are determined, the points in the groups are the target three-dimensional points, and the ungrouped points are discarded.
In some embodiments, when the ratio of the number of target three-dimensional points in a specified group among the obtained groups to the total number of three-dimensional points in the point cloud is less than a second preset threshold, the clustering operation may be stopped, where the specified group is the group with the fewest target three-dimensional points or the group obtained by the latest clustering. For example, suppose the point cloud contains 100 three-dimensional points, three groups have been obtained containing 40, 30, and 15 points respectively, and the second preset threshold is 20%. The latest group contains 15 points, 15% of the total, which is less than the second preset threshold; the clustering operation can therefore stop, the points in the groups are taken as the target three-dimensional points, and the ungrouped points are discarded.
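The two stopping conditions described above can be sketched as a small predicate. The function name and default thresholds are illustrative assumptions; here the "specified group" is taken to be the most recently obtained group.

```python
def should_stop(group_sizes, total_points, t1=0.90, t2=0.20):
    """Return True when the clustering operation should stop.

    Condition 1: grouped points cover more than fraction t1 of the cloud.
    Condition 2: the newest group holds less than fraction t2 of the cloud.
    group_sizes is ordered; the last entry is the newest group.
    """
    if not group_sizes:
        return False
    if sum(group_sizes) / total_points > t1:
        return True
    if group_sizes[-1] / total_points < t2:
        return True
    return False

stop_a = should_stop([40, 30, 25], 100)  # 95% grouped > 90%: stop
stop_b = should_stop([40, 30, 15], 100)  # newest group 15% < 20%: stop
stop_c = should_stop([40, 30], 100)      # neither condition holds: continue
```

The two worked examples in the text (40/30/25 against a 90% threshold, and 40/30/15 against a 20% threshold) both trigger a stop; a partial grouping of 40/30 triggers neither condition.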
In some embodiments, when determining the target three-dimensional points within each group from the cluster center, a second preset condition may be set; as long as the second preset condition is not triggered, the operation of selecting a three-dimensional point from the ungrouped points and adding it to the current group is repeated. The weighted distance between an ungrouped three-dimensional point and the cluster center may be determined from the point's target parameter and its distance to the cluster center, and the point with the smallest weighted distance is determined as a target three-dimensional point of the current group; the cluster center is then updated according to the coordinates and target parameters of the current target three-dimensional points in the group. These steps are repeated until the second preset condition is triggered, at which time no further points are added to the current group, yielding one group. When determining the weighted distance, a weight may first be determined from the point's target parameter, and the weighted distance then determined from the point's distance to the cluster center and that weight. When determining the updated cluster center, a weight may first be determined from the target parameter of each three-dimensional point, and the coordinates of the cluster center then determined from the weights and the coordinates of each point in the group. Specifically, refer to formula (2):

P_center = (Σ_{i=1}^{n} W_i · P_i) / (Σ_{i=1}^{n} W_i)    Formula (2)

where W_i is the weight of the i-th three-dimensional point in the current group, P_i is the coordinate of the i-th point in the current group, and n is the number of three-dimensional points contained in the current group.
In some embodiments, the second preset condition may be that the minimum of the weighted distances between the ungrouped three-dimensional points and the updated cluster center is greater than the average of the weighted distances between the current target three-dimensional points in the group and the updated cluster center. That is, when every ungrouped point outside the current group lies at a weighted distance from the updated cluster center greater than the average weighted distance of the target points already in the group, it is considered that no ungrouped point is suitable to be added to the current group.
The following specific example explains the process of clustering the three-dimensional point cloud to obtain one or more groups. Suppose the point cloud includes three-dimensional points P0, P1, P2, P3, ..., Pn, with corresponding target parameters w_i; the larger w_i, the more interested the user is in the point. The point with the largest target parameter, say P0, is determined from these points and taken as the cluster center. The weighted distance of each remaining point to P0 is then determined one by one as D_i = w_i · d_i, where d_i is the distance between the point and the cluster center, and the point with the smallest weighted distance is added to the current group. After the points in the current group are updated, the cluster center can be re-determined from the points in the current group; the cluster center P_center can be determined according to formula (2):

P_center = (Σ_{i=1}^{n} w_i · P_i) / (Σ_{i=1}^{n} w_i)    Formula (2)

where w_i is the weight of the i-th three-dimensional point in the current group, P_i is the coordinate of the i-th three-dimensional point in the current group, and n is the number of three-dimensional points contained in the current group.
After the cluster center is re-determined, the step of adding the point with the smallest weighted distance to the current group is repeated until the minimum of the weighted distances between the ungrouped points and the updated cluster center is greater than the average of the weighted distances between the current target three-dimensional points in the group and the updated cluster center, at which time the process stops and one group is determined.
Then the point with the largest w_i among the ungrouped points is selected as the cluster center of the next group, and the above step of adding the point with the smallest weighted distance to the current group is repeated to obtain the next group.
When the ratio of the total number of target three-dimensional points in the groups already determined to the total number of three-dimensional points in the point cloud is greater than the first preset threshold, or the ratio of the number of target three-dimensional points in the latest group to the total number of three-dimensional points is less than the second preset threshold, the clustering operation on the point cloud stops, yielding one or more groups.
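The clustering procedure described above can be sketched as follows, assuming the weighted distance D_i = w_i · d_i and the weighted-centroid update of formula (2). All names are illustrative, and details such as tie-breaking and the handling of a single-point group are simplifications rather than the patent's exact procedure.

```python
import numpy as np

def cluster_points(points, w, t1=0.90, t2=0.20):
    """Group 3D points by repeatedly growing clusters around high-w seeds.

    points -- (N, 3) array-like of 3D coordinates
    w      -- (N,) target parameters (larger = more interesting)
    Returns a list of groups, each a list of point indices.
    """
    points = np.asarray(points, float)
    w = np.asarray(w, float)
    n = len(points)
    ungrouped = set(range(n))
    groups = []
    while ungrouped:
        # Seed a new group at the ungrouped point with the largest w.
        seed = max(ungrouped, key=lambda i: w[i])
        group = [seed]
        ungrouped.discard(seed)
        center = points[seed]
        while ungrouped:
            # Weighted distance D_i = w_i * d_i to the current center.
            dist = {i: w[i] * np.linalg.norm(points[i] - center)
                    for i in ungrouped}
            cand = min(dist, key=dist.get)
            d_mean = np.mean([w[i] * np.linalg.norm(points[i] - center)
                              for i in group])
            # Stop growing when the closest ungrouped point is farther
            # (in weighted distance) than the group's average; with only
            # the seed in the group the first candidate is always added.
            if len(group) > 1 and dist[cand] > d_mean:
                break
            group.append(cand)
            ungrouped.discard(cand)
            # Weighted-centroid update (formula (2)).
            idx = np.array(group)
            center = (w[idx, None] * points[idx]).sum(axis=0) / w[idx].sum()
        groups.append(group)
        # First and second stopping conditions.
        if (n - len(ungrouped)) / n > t1 or len(group) / n < t2:
            break
    return groups

pts = [[0, 0, 0], [0.5, 0, 0], [0, 1, 0],
       [100, 0, 0], [100.5, 0, 0], [100, 1, 0]]
ws = [3.0, 1.0, 1.1, 2.9, 1.0, 1.0]
groups = cluster_points(pts, ws)
# the two high-w seeds (indices 0 and 3) each gather their nearest neighbor
```

On this toy input the first two groups form around the two high-w points, one per spatial cluster.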
In some embodiments, after the target three-dimensional points are determined, a target three-dimensional region may be determined from them, and the target pixel regions in the multiple images collected by the camera device may then be determined from the projection points of the boundary points of the target three-dimensional region in those images. Optionally, one or more groups of target three-dimensional points are obtained by the clustering operation, and one or more target three-dimensional regions can be obtained from the one or more groups.
As shown in FIG. 3, suppose the target three-dimensional region determined from the target three-dimensional points is a cuboid region. The corner points or edges of the cuboid can be projected onto an image (the figure takes the corner points as an example), the projection points (the gray points in the figure) are determined in the image, and the target pixel region corresponding to the target three-dimensional region can then be determined from the projection points. After the target pixel region is determined, the three-dimensional points corresponding to its pixels can be determined in order to reconstruct the target three-dimensional region. For example, three-dimensional reconstruction of the target three-dimensional region can be performed to obtain a three-dimensional model of the region; of course, an orthophoto, a digital surface model, and the like corresponding to the target three-dimensional region can also be obtained.
In some implementations, the target three-dimensional region may be a cuboid region. When determining the target three-dimensional region from the target three-dimensional points, the maximum and minimum coordinate values of the target three-dimensional points along the three axes of three-dimensional space may first be determined, and the coordinates of the corner points of the cuboid region are then determined from these maximum and minimum values to determine the cuboid region.
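A minimal sketch of the cuboid construction: the eight corners of the axis-aligned bounding box are built from the per-axis minimum and maximum coordinate values. The function name is illustrative.

```python
import numpy as np

def cuboid_region(points):
    """Axis-aligned bounding cuboid of the target 3D points.

    Returns the 8 corner coordinates, as an (8, 3) array, built from
    the per-axis minimum and maximum coordinate values.
    """
    points = np.asarray(points, float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                  for y in (lo[1], hi[1])
                                  for z in (lo[2], hi[2])])
    return corners

corners = cuboid_region([[0, 0, 0], [2, 1, 3], [1, 4, 2]])
# per-axis bounds: x in [0, 2], y in [0, 4], z in [0, 3]
```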
In some embodiments, the target three-dimensional region may be a spherical region. When determining the target three-dimensional region from the target three-dimensional points, the coordinates of the sphere center may be determined from the medians of the coordinates of the target three-dimensional points along the three axes of three-dimensional space, and the maximum distance between the target three-dimensional points and the sphere center is then taken as the radius of the spherical region to determine the spherical region.
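The spherical variant can be sketched in the same way: the center is the per-axis median and the radius is the largest point-to-center distance. The function name is illustrative.

```python
import numpy as np

def spherical_region(points):
    """Bounding sphere of the target 3D points.

    Center = per-axis median of the coordinates;
    radius = maximum distance from a point to that center.
    """
    points = np.asarray(points, float)
    center = np.median(points, axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

center, radius = spherical_region([[0, 0, 0], [2, 0, 0], [4, 0, 0]])
# center (2, 0, 0), radius 2.0
```

Using the median rather than the mean makes the center robust to a few stray target points far from the region of interest.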
In some embodiments, the three-dimensional point cloud and the target three-dimensional region can be displayed together on a user interface, so that the user can adjust the target three-dimensional region as needed, for example by rotating, scaling, or moving it as a whole.
In some embodiments, when determining the three-dimensional points corresponding to the pixels of the target pixel region, the depth value range of those pixels may be determined from the depth information of the projection points of the boundary points of the target three-dimensional region in the multiple images collected by the camera device. Using that depth value range as a constraint, the depth information of the pixels in the target pixel region is determined, and the corresponding three-dimensional points are then determined from that depth information. Solving for a pixel's depth is required in order to determine its corresponding three-dimensional point; when solving for the depth information, the pixel's depth value range can be determined from the depth information of the projection points, and the depth is then solved with that range as a constraint.
To further explain the three-dimensional point cloud processing method of the present application, a specific embodiment is described below.
In the field of photogrammetry, an unmanned aerial vehicle is usually used to collect multiple images of a target area, and three-dimensional reconstruction of the target area is then performed from those images. Three-dimensional reconstruction usually includes the following steps:

(1) Input multiple images of the same scene captured by the unmanned aerial vehicle from different shooting angles, and use structure from motion (SFM) to obtain the camera poses corresponding to these images and a sparse three-dimensional point cloud.

(2) Use the obtained camera poses of the multiple images to perform pairwise dense matching on multiple image pairs to obtain a dense point cloud of the scene, that is, multi-view stereo (MVS).

(3) Construct a triangular mesh from the dense point cloud and apply texture mapping to obtain a textured mesh model.
Besides the target scene the user is interested in, the images also include some background information. If the entire image is densified, the resulting three-dimensional point cloud will contain considerable noise, degrading the user experience, and the processing efficiency of the point cloud densification step will also be reduced. Therefore, the target three-dimensional region of interest to the user can be determined from the sparse three-dimensional point cloud already obtained, and during three-dimensional reconstruction only that region is reconstructed. Specifically, this can be achieved through the following steps:
1. Determine the visibility parameter D, the shooting angle parameter θ, and the spatial resolution parameter gsd of each three-dimensional point in the sparse point cloud from the multiple images.

First, the target images, that is, the images among the multiple images that include the pixel corresponding to the three-dimensional point, can be determined. The visibility parameter D can be the number of target images; the shooting angle parameter θ can be the maximum included angle formed by the lines connecting the three-dimensional point and the optical centers corresponding to the target images; and the spatial resolution parameter gsd can be the minimum of the spatial resolutions of the three-dimensional point in the target images. The spatial resolution is the physical distance in three-dimensional space corresponding to each pixel in an image; the spatial resolution parameter gsd can be determined by the following formula (3):

gsd = min_{i∈{1,2,…,n}} {depth_i / focal_i}    Formula (3)

where depth_i is the depth of the three-dimensional point in the i-th target image and focal_i is the camera focal length corresponding to that target image.
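The three per-point parameters can be sketched as follows for a single 3D point, given the optical-center positions and focal lengths of the target images that see it. As an assumption for illustration, depth_i is taken here as the distance from the point to each optical center rather than the depth along the optical axis; all names are illustrative.

```python
import math

def point_parameters(point, centers, focals):
    """Visibility D, max sight-line angle theta, and gsd for one 3D point.

    point   -- 3D coordinates of the point
    centers -- optical-center positions of the target images that see it
    focals  -- focal lengths (in pixels) of those images
    """
    rays = [[c[k] - point[k] for k in range(3)] for c in centers]
    D = len(centers)  # visibility: number of target images

    def angle(u, v):
        # Angle between two sight lines, with the cosine clamped to [-1, 1].
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    # theta: maximum angle over all pairs of sight lines.
    theta = max((angle(rays[i], rays[j])
                 for i in range(D) for j in range(i + 1, D)), default=0.0)
    # gsd = min_i depth_i / focal_i (formula (3)); depth_i approximated
    # here by the point-to-optical-center distance (an assumption).
    gsd = min(math.sqrt(sum(a * a for a in r)) / f
              for r, f in zip(rays, focals))
    return D, theta, gsd

D, theta, gsd = point_parameters([0, 0, 0],
                                 [[0, 0, 10], [10, 0, 0]],
                                 [1000.0, 1000.0])
# two views at right angles, both 10 units away with focal length 1000
```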
2. Determine the target parameter w used to characterize the user's degree of interest in each three-dimensional point.
The target parameter w corresponding to each three-dimensional point can be determined from the visibility parameter D, the shooting angle parameter θ, and the spatial resolution parameter gsd: the larger the visibility parameter D, the larger the shooting angle parameter θ, and the smaller the spatial resolution parameter gsd, the larger the target parameter w. So that the determined target parameter characterizes the user's degree of interest more accurately, w can be determined by the following formula (1):

w = (D - μ_D)/σ_D + (θ - μ_θ)/σ_θ - (gsd - μ_gsd)/σ_gsd    Formula (1)

where μ_D is the mean of the visibility parameter D over all three-dimensional points in the point cloud and σ_D is the standard deviation of D over all three-dimensional points, μ_θ is the mean of the maximum sight-line angle θ over all three-dimensional points and σ_θ is the standard deviation of θ, and μ_gsd is the mean of gsd over all three-dimensional points and σ_gsd is the standard deviation of gsd.
3. Cluster the sparse three-dimensional point cloud according to the target parameters to obtain one or more groups.
(1) The three-dimensional point with the largest target parameter, say P0, is determined from the sparse three-dimensional points and taken as the cluster center. The weighted distance of each remaining point to P0 is then determined one by one as D_i = w_i · d_i, where d_i is the distance between the point and the cluster center, and the point with the smallest weighted distance is added to the current group.

(2) After the points in the current group are updated, the cluster center can be re-determined from the points in the current group; the cluster center P_center can be determined according to formula (2):

P_center = (Σ_{i=1}^{n} W_i · P_i) / (Σ_{i=1}^{n} W_i)    Formula (2)

where W_i is the weight of the i-th three-dimensional point in the current group, P_i is the coordinate of the i-th point in the current group, and n is the number of three-dimensional points contained in the current group.
(3) After the cluster center is re-determined, step (1) is repeated until the minimum of the weighted distances between the ungrouped points and the updated cluster center is greater than the average of the weighted distances between the current target three-dimensional points in the group and the updated cluster center, at which time the process stops and one group is determined.
(4) The point with the largest w_i among the ungrouped points is then selected as the next cluster center, and the above steps (2) and (3) are repeated to obtain the next group.
When the ratio of the total number of target three-dimensional points in the groups already determined to the total number of three-dimensional points in the point cloud is greater than the first preset threshold, or the ratio of the number of target three-dimensional points in the latest group to the total number of three-dimensional points is less than the second preset threshold, the clustering operation on the point cloud stops, yielding one or more groups.
4. Determine the target three-dimensional region from the three-dimensional points in each group.
For each group obtained in step 3, the three-dimensional points in the group can be traversed to compute the minimum and maximum coordinate values along the x-axis, the y-axis, and the z-axis, yielding a cuboid bounding box that represents the target three-dimensional region of interest corresponding to the group.
5. Perform point cloud densification according to the determined target three-dimensional region.
The boundary edges and the eight corner points of the cuboid bounding box can be projected onto the multiple images to determine, from the projection points, the target pixel region of interest to the user in each image; the three-dimensional point of each pixel in the target pixel region is then obtained in order to reconstruct the target three-dimensional region of interest.
When determining the depth value of each pixel in the target pixel region, the depth range of the pixels in the target pixel region can be determined from the eight corner points of the cuboid bounding box. With that depth range as a constraint, the depth of each pixel in the target pixel region is calculated from the camera pose parameters and pixel matching across the multiple images.
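Deriving the depth search range from the bounding-box corners can be sketched as follows, assuming a standard pinhole model in which a world point X maps into the camera frame as X_cam = R·X + t and its depth is the z component. The pose values and function name are illustrative.

```python
import numpy as np

def depth_range(corners, R, t):
    """Depth search range for one view from the 8 bounding-box corners.

    Transforms each corner into the camera frame (X_cam = R @ X + t)
    and takes the min/max z as the [d_min, d_max] constraint used when
    solving per-pixel depth inside the target pixel region.
    """
    corners = np.asarray(corners, float)
    cam = (np.asarray(R, float) @ corners.T).T + np.asarray(t, float)
    return cam[:, 2].min(), cam[:, 2].max()

# Identity pose, camera 5 units in front of a unit box along +z:
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
box = [[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)]
d_min, d_max = depth_range(box, R, t)
# d_min = 5.0, d_max = 6.0
```

Restricting the per-pixel depth search to [d_min, d_max] both speeds up matching and prevents matches from landing outside the region of interest.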
After the dense point cloud outside the cuboid bounding box is removed, the subsequent mesh reconstruction and texture mapping only need to process the three-dimensional scene inside the bounding box, so the final three-dimensional reconstruction result is confined to the region of interest. As shown in FIG. 4, (a) is a schematic diagram of the result of directly performing three-dimensional reconstruction on the multiple images, and (b) is a schematic diagram of the result obtained by first determining the target three-dimensional region with the method of this embodiment and then performing three-dimensional reconstruction. As can be seen from the figure, determining the region of interest before three-dimensional reconstruction reduces noise in the reconstruction result.
Further, the present application also provides an apparatus for processing a three-dimensional point cloud. As shown in FIG. 5, the three-dimensional point cloud is obtained based on multiple images collected by an image acquisition device from different viewing angles. The apparatus includes a processor 51, a memory 52, and a computer program stored in the memory 52 and executable by the processor 51. When the processor executes the computer program, the following steps are implemented:
For each three-dimensional point in the three-dimensional point cloud, obtain a visibility parameter and/or a shooting angle parameter of the three-dimensional point, where the visibility parameter characterizes the number of target images among the multiple images, a target image contains a pixel corresponding to the three-dimensional point, and the shooting angle parameter characterizes the angle between the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
Determine a target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, the target parameter being used to determine target three-dimensional points from the three-dimensional point cloud.
In some embodiments, the visibility parameter and/or the shooting angle parameter are positively correlated with the target parameter.
In some embodiments, when determining the target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, the processor is specifically configured to:
Determine the target parameter of the three-dimensional point according to the visibility parameter, the shooting angle parameter, and a spatial resolution parameter, where the spatial resolution parameter characterizes the physical size, in actual three-dimensional space, of the pixel corresponding to the three-dimensional point in a target image.
In some embodiments, the spatial resolution parameter is negatively correlated with the target parameter.
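The embodiments above fix only the sign of each correlation, not a concrete formula. By way of illustration, one hypothetical scoring consistent with these constraints is sketched below; the product/ratio form and the function name are assumptions, not part of the disclosure.

```python
def target_parameter(visibility, angle_deg, resolution):
    """Hypothetical per-point score.

    visibility: number of target images seeing the point (positively correlated).
    angle_deg: shooting angle between viewing rays (positively correlated).
    resolution: physical size of one pixel at the point; a smaller value
    means finer sampling, so it appears in the denominator (negatively
    correlated with the score).
    """
    return visibility * angle_deg / resolution
```

Any monotone combination with the same signs would satisfy the stated correlations; the point is only that frequently seen, well-triangulated, finely sampled points score higher.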
In some embodiments, the apparatus is further configured to:
Perform clustering on the three-dimensional point cloud according to the target parameter to obtain the target three-dimensional points, the target three-dimensional points being divided into one or more groups.
In some embodiments, when performing clustering on the three-dimensional point cloud according to the target parameter, the processor is specifically configured to:
If a first preset condition is not triggered, repeatedly perform the following operations:
Determine a cluster center from ungrouped three-dimensional points of the three-dimensional point cloud according to the target parameter;
Determine, based on the cluster center, the target three-dimensional points within one of the one or more groups.
In some embodiments, the first preset condition includes:
The ratio of the sum of the numbers of target three-dimensional points in the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is greater than a first preset threshold; or
The ratio of the number of target three-dimensional points in a specified group among the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is less than a second preset threshold, the specified group being the group with the smallest number of target three-dimensional points.
In some embodiments, when determining, based on the cluster center, the target three-dimensional points within one of the one or more groups, the processor is specifically configured to:
If a second preset condition is not triggered, repeatedly perform the following operations:
Determine a weighted distance between an ungrouped three-dimensional point in the three-dimensional point cloud and the cluster center, based on the target parameter of the ungrouped three-dimensional point and the distance between the ungrouped three-dimensional point and the cluster center;
Determine the three-dimensional point with the smallest weighted distance as a target three-dimensional point in the one group;
Update the cluster center based on the coordinates of the current target three-dimensional points in the one group and their target parameters.
In some embodiments, the second preset condition includes:
The minimum weighted distance between the ungrouped three-dimensional points in the three-dimensional point cloud and the updated cluster center is greater than the average weighted distance between the current target three-dimensional points in the one group and the updated cluster center.
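By way of illustration, the iterative grouping of the preceding embodiments (seed a cluster center at a high-scoring ungrouped point, grow the group by smallest weighted distance, update the center, stop via the two preset conditions) can be sketched as follows. The concrete weighted-distance form (Euclidean distance divided by the target parameter), the score-weighted center update, and the numeric thresholds are assumptions for illustration; the disclosure fixes only the structure of the loops and conditions.

```python
import numpy as np

def cluster_by_target_parameter(points, scores, ratio_stop=0.9, small_stop=0.02):
    """Greedy grouping sketch.

    points: (N, 3) 3D point coordinates; scores: (N,) target parameters.
    ratio_stop / small_stop stand in for the first preset condition's
    thresholds (their values are assumptions, not given by the disclosure).
    """
    points = np.asarray(points, dtype=float)
    scores = np.asarray(scores, dtype=float)
    n = len(points)
    grouped = np.zeros(n, dtype=bool)
    groups = []
    while grouped.sum() / n <= ratio_stop:             # first preset condition, part 1
        free = np.where(~grouped)[0]
        if len(free) == 0:
            break
        seed = int(free[np.argmax(scores[free])])      # cluster center seeded at the
        center = points[seed].copy()                   # best-scoring ungrouped point
        member = [seed]
        grouped[seed] = True
        while True:
            free = np.where(~grouped)[0]
            if len(free) == 0:
                break
            # weighted distance: Euclidean distance discounted by the target
            # parameter, so high-scoring points appear closer (assumed form)
            wd = np.linalg.norm(points[free] - center, axis=1) / scores[free]
            best = int(free[np.argmin(wd)])
            member.append(best)                        # add the closest point
            grouped[best] = True
            w = scores[member]                         # update center: score-weighted
            center = (points[member] * w[:, None]).sum(0) / w.sum()  # mean of members
            rest = np.where(~grouped)[0]
            if len(rest) == 0:
                break
            out_wd = np.linalg.norm(points[rest] - center, axis=1) / scores[rest]
            in_wd = np.linalg.norm(points[member] - center, axis=1) / scores[member]
            if out_wd.min() > in_wd.mean():            # second preset condition
                break
        groups.append(member)
        if len(member) / n < small_stop:               # first preset condition, part 2
            break
    return groups
```

The outer loop realizes the first preset condition (enough of the cloud grouped, or the latest group too small to matter); the inner break realizes the second (every remaining point is farther, in weighted terms, than the group's own average spread).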
In some embodiments, the processor is further configured to:
Determine one or more target three-dimensional regions according to the target three-dimensional points;
Determine target pixel regions in the multiple images according to the projection points of the boundary points of the one or more target three-dimensional regions in the multiple images;
Determine the three-dimensional points corresponding to the pixels in the target pixel regions, so as to reconstruct the target three-dimensional regions using the three-dimensional points corresponding to the pixels in the target pixel regions.
In some embodiments, when determining the three-dimensional points corresponding to the pixels in the target pixel region, the processor is specifically configured to:
Determine a depth value range of the pixels in the target pixel region according to depth information of the projection points;
Determine depth information of the pixels in the target pixel region with the depth value range as a constraint;
Determine the three-dimensional points corresponding to the pixels in the target pixel region according to the depth information of the pixels in the target pixel region.
In some embodiments, the target three-dimensional region is a cuboid region, and when determining the target three-dimensional region according to the target three-dimensional points, the processor is specifically configured to:
Determine the maximum and minimum coordinate values of the target three-dimensional points along the three axes of three-dimensional space;
Determine the coordinates of the corner points of the cuboid region based on the maximum and minimum coordinate values.
In some embodiments, the target three-dimensional region is a spherical region, and when determining the target three-dimensional region according to the target three-dimensional points, the processor is specifically configured to:
Determine the coordinates of the center of the spherical region according to the medians of the coordinates of the target three-dimensional points along the three axes of three-dimensional space;
Take the maximum distance between the target three-dimensional points and the center as the radius of the spherical region, so as to determine the spherical region.
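By way of illustration, the spherical variant above — center at the per-axis median of the member coordinates, radius equal to the distance to the farthest member point — can be sketched as:

```python
import numpy as np

def bounding_sphere(points):
    """Sphere covering the target 3D points, as described above.

    Center: per-axis median of the point coordinates.
    Radius: maximum distance from any point to that center.
    """
    points = np.asarray(points, dtype=float)
    center = np.median(points, axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius
```

The median center makes the sphere robust to a few outlying points, while the max-distance radius still guarantees every target point lies inside.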
For the specific implementation details of the apparatus in processing the three-dimensional point cloud, reference may be made to the descriptions of the embodiments of the method above, which are not limited in this application.
Correspondingly, an embodiment of this specification further provides a computer storage medium storing a program which, when executed by a processor, implements the method for processing a three-dimensional point cloud in any of the foregoing embodiments.
Embodiments of this specification may take the form of a computer program product embodied on one or more storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing program code. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the descriptions of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The methods and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the descriptions of the above embodiments are intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (26)

  1. A method for processing a three-dimensional point cloud, wherein the three-dimensional point cloud is obtained based on multiple images collected by an image acquisition device from different viewing angles, the method comprising:
    for each three-dimensional point in the three-dimensional point cloud, obtaining a visibility parameter and/or a shooting angle parameter of the three-dimensional point, wherein the visibility parameter characterizes the number of target images among the multiple images, a target image contains a pixel corresponding to the three-dimensional point, and the shooting angle parameter characterizes the angle between the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
    determining a target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, the target parameter being used to determine target three-dimensional points from the three-dimensional point cloud;
    determining a target pixel region in the multiple images according to the target three-dimensional points, so as to reconstruct the three-dimensional region corresponding to the target pixel region.
  2. The method according to claim 1, wherein the visibility parameter and/or the shooting angle parameter are positively correlated with the target parameter.
  3. The method according to claim 1 or 2, wherein determining the target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter comprises:
    determining the target parameter of the three-dimensional point according to the visibility parameter, the shooting angle parameter, and a spatial resolution parameter, wherein the spatial resolution parameter characterizes the physical size, in actual three-dimensional space, of the pixel corresponding to the three-dimensional point in a target image.
  4. The method according to claim 3, wherein the spatial resolution parameter is negatively correlated with the target parameter.
  5. The method according to any one of claims 1-4, wherein the method further comprises:
    performing clustering on the three-dimensional point cloud according to the target parameter to obtain the target three-dimensional points, the target three-dimensional points being divided into one or more groups.
  6. The method according to claim 5, wherein performing clustering on the three-dimensional point cloud according to the target parameter comprises:
    if a first preset condition is not triggered, repeatedly performing the following operations:
    determining a cluster center from ungrouped three-dimensional points of the three-dimensional point cloud according to the target parameter;
    determining, based on the cluster center, the target three-dimensional points within one of the one or more groups.
  7. The method according to claim 6, wherein the first preset condition comprises:
    the ratio of the sum of the numbers of target three-dimensional points in the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is greater than a first preset threshold; or
    the ratio of the number of target three-dimensional points in a specified group among the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is less than a second preset threshold, the specified group being the group with the smallest number of target three-dimensional points.
  8. The method according to claim 6 or 7, wherein determining, based on the cluster center, the target three-dimensional points within one of the one or more groups comprises:
    if a second preset condition is not triggered, repeatedly performing the following operations:
    determining a weighted distance between an ungrouped three-dimensional point in the three-dimensional point cloud and the cluster center based on the target parameter of the ungrouped three-dimensional point and the distance between the ungrouped three-dimensional point and the cluster center;
    determining the three-dimensional point with the smallest weighted distance as a target three-dimensional point in the one group;
    updating the cluster center based on the coordinates of the current target three-dimensional points in the one group and their target parameters.
  9. The method according to claim 8, wherein the second preset condition comprises:
    the minimum weighted distance between the ungrouped three-dimensional points in the three-dimensional point cloud and the updated cluster center is greater than the average weighted distance between the current target three-dimensional points in the one group and the updated cluster center.
  10. The method according to any one of claims 1-9, wherein determining the target pixel region in the multiple images according to the target three-dimensional points comprises:
    determining a target three-dimensional region according to the target three-dimensional points;
    determining the target pixel region in the multiple images according to the projection points of the boundary points of the target three-dimensional region in the multiple images.
  11. The method according to claim 10, wherein the method further comprises:
    determining a depth value range of the pixels in the target pixel region according to depth information of the projection points;
    determining depth information of the pixels in the target pixel region with the depth value range as a constraint;
    determining, according to the depth information of the pixels in the target pixel region, the three-dimensional points corresponding to the pixels in the target pixel region, so as to reconstruct the three-dimensional region corresponding to the target pixel region using those three-dimensional points.
  12. The method according to claim 10 or 11, wherein the target three-dimensional region is a cuboid region, and determining the target three-dimensional region according to the target three-dimensional points comprises:
    determining the maximum and minimum coordinate values of the target three-dimensional points along the three axes of three-dimensional space;
    determining the coordinates of the corner points of the cuboid region based on the maximum and minimum coordinate values.
  13. The method according to claim 10 or 11, wherein the target three-dimensional region is a spherical region, and determining the target three-dimensional region according to the target three-dimensional points comprises:
    determining the coordinates of the center of the spherical region according to the medians of the coordinates of the target three-dimensional points along the three axes of three-dimensional space;
    taking the maximum distance between the target three-dimensional points and the center as the radius of the spherical region, so as to determine the spherical region.
  14. An apparatus for processing a three-dimensional point cloud, wherein the three-dimensional point cloud is obtained based on multiple images collected by an image acquisition device from different viewing angles, and the apparatus comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, the processor, when executing the computer program, implementing the following steps:
    for each three-dimensional point in the three-dimensional point cloud, obtaining a visibility parameter and/or a shooting angle parameter of the three-dimensional point, wherein the visibility parameter characterizes the number of target images among the multiple images, a target image contains a pixel corresponding to the three-dimensional point, and the shooting angle parameter characterizes the angle between the lines connecting the three-dimensional point to the optical centers corresponding to the target images;
    determining a target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, the target parameter being used to determine target three-dimensional points from the three-dimensional point cloud;
    determining a target pixel region in the multiple images according to the target three-dimensional points, so as to reconstruct the three-dimensional region corresponding to the target pixel region.
  15. The apparatus according to claim 14, wherein the visibility parameter and/or the shooting angle parameter are positively correlated with the target parameter.
  16. The apparatus according to claim 14 or 15, wherein, when determining the target parameter of the three-dimensional point according to the visibility parameter and/or the shooting angle parameter, the processor is specifically configured to:
    determine the target parameter of the three-dimensional point according to the visibility parameter, the shooting angle parameter, and a spatial resolution parameter, wherein the spatial resolution parameter characterizes the physical size, in actual three-dimensional space, of the pixel corresponding to the three-dimensional point in a target image.
  17. The apparatus according to claim 16, wherein the spatial resolution parameter is negatively correlated with the target parameter.
  18. The apparatus according to any one of claims 14-17, wherein the apparatus is further configured to:
    perform clustering on the three-dimensional point cloud according to the target parameter to obtain the target three-dimensional points, the target three-dimensional points being divided into one or more groups.
  19. The apparatus according to claim 18, wherein, when performing clustering on the three-dimensional point cloud according to the target parameter, the processor is specifically configured to:
    if a first preset condition is not triggered, repeatedly perform the following operations:
    determine a cluster center from ungrouped three-dimensional points of the three-dimensional point cloud according to the target parameter;
    determine, based on the cluster center, the target three-dimensional points within one of the one or more groups.
  20. The apparatus according to claim 19, wherein the first preset condition comprises:
    the ratio of the sum of the numbers of target three-dimensional points in the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is greater than a first preset threshold; or
    the ratio of the number of target three-dimensional points in a specified group among the one or more groups to the total number of three-dimensional points in the three-dimensional point cloud is less than a second preset threshold, the specified group being the group with the smallest number of target three-dimensional points.
  21. The apparatus according to claim 19 or 20, wherein, when determining, based on the cluster center, the target three-dimensional points within one of the one or more groups, the processor is specifically configured to:
    if a second preset condition is not triggered, repeatedly perform the following operations:
    determine a weighted distance between an ungrouped three-dimensional point in the three-dimensional point cloud and the cluster center, based on the target parameter of the ungrouped three-dimensional point and the distance between the ungrouped three-dimensional point and the cluster center;
    determine the three-dimensional point with the smallest weighted distance as a target three-dimensional point in the one group;
    update the cluster center based on the coordinates of the current target three-dimensional points in the one group and their target parameters.
  22. The apparatus according to claim 21, wherein the second preset condition comprises:
    the minimum weighted distance between the ungrouped three-dimensional points in the three-dimensional point cloud and the updated cluster center is greater than the average weighted distance between the current target three-dimensional points in the one group and the updated cluster center.
  23. The apparatus according to any one of claims 14-22, wherein, when determining the target pixel region in the multiple images according to the target three-dimensional points, the processor is specifically configured to:
    determine a target three-dimensional region according to the target three-dimensional points;
    determine the target pixel region in the multiple images according to the projection points of the boundary points of the target three-dimensional region in the multiple images.
  24. 根据权利要求23所述的装置,其特征在于,所述处理器还用于:The apparatus of claim 23, wherein the processor is further configured to:
    根据所述投影点的深度信息,确定所述目标像素区域的像素点的深度取值范围;According to the depth information of the projection point, determine the depth value range of the pixel point of the target pixel area;
    以所述深度取值范围作为约束,确定所述目标像素区域的像素点的深 度信息;Taking the depth value range as a constraint, determine the depth information of the pixel point of the target pixel area;
    根据所述目标像素区域的像素点的深度信息,确定所述目标像素区域的像素点对应的三维点,以利用所述目标像素区域的像素点对应的三维点对所述目标像素区域对应的三维区域进行重建。According to the depth information of the pixel points of the target pixel area, determine the three-dimensional point corresponding to the pixel point of the target pixel area, so as to use the three-dimensional point corresponding to the pixel point of the target pixel area to compare the three-dimensional point corresponding to the target pixel area. area for reconstruction.
  25. 根据权利要求23或24所述的装置,其特征在于,所述目标三维区域为长方体区域,所述处理器用于根据所述目标三维点确定目标三维区域时,具体用于:The device according to claim 23 or 24, wherein the target three-dimensional area is a cuboid area, and when the processor is configured to determine the target three-dimensional area according to the target three-dimensional point, it is specifically used for:
    确定所述目标三维点在三维空间三个轴向上最大坐标值和最小坐标值;Determine the maximum coordinate value and the minimum coordinate value of the target three-dimensional point on the three axes of the three-dimensional space;
    基于所述最大坐标值和最小坐标值确定所述长方体区域的角点的坐标。Coordinates of the corner points of the rectangular parallelepiped region are determined based on the maximum coordinate value and the minimum coordinate value.
  26. The apparatus according to claim 23 or 24, wherein the target three-dimensional area is a spherical area, and when determining the target three-dimensional area according to the target three-dimensional points, the processor is specifically configured to:
    determine coordinates of the center of the spherical area according to the medians of the coordinates of the target three-dimensional points on the three axes of three-dimensional space;
    take the maximum distance between the target three-dimensional points and the center as the radius of the spherical area, so as to determine the spherical area.
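The spherical-area construction of claim 26 can be sketched directly: the center is the per-axis median of the point coordinates (robust to outlier points), and the radius is the largest point-to-center distance, so every point is guaranteed to lie inside the sphere. A minimal Python sketch (names are illustrative):

```python
import math
from statistics import median

def bounding_sphere(points):
    """Sphere enclosing a set of 3-D points: center = per-axis median of
    the coordinates, radius = maximum point-to-center distance."""
    center = tuple(median(p[i] for p in points) for i in range(3))
    radius = max(math.dist(p, center) for p in points)
    return center, radius

center, radius = bounding_sphere([(0, 0, 0), (2, 0, 0), (4, 0, 0)])
```

Note this is not the minimal enclosing sphere in general; using the median center trades tightness for robustness against stray points, consistent with the claim's wording.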
PCT/CN2020/112106 2020-08-28 2020-08-28 Three-dimensional point cloud processing method and apparatus WO2022041119A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/112106 WO2022041119A1 (en) 2020-08-28 2020-08-28 Three-dimensional point cloud processing method and apparatus


Publications (1)

Publication Number Publication Date
WO2022041119A1 true WO2022041119A1 (en) 2022-03-03

Family

ID=80352482

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112106 WO2022041119A1 (en) 2020-08-28 2020-08-28 Three-dimensional point cloud processing method and apparatus

Country Status (1)

Country Link
WO (1) WO2022041119A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170046833A1 (en) * 2015-08-10 2017-02-16 The Board Of Trustees Of The Leland Stanford Junior University 3D Reconstruction and Registration of Endoscopic Data
CN108537879A (en) * 2018-03-29 2018-09-14 东华智业(北京)科技发展有限公司 Reconstructing three-dimensional model system and method
CN111583388A (en) * 2020-04-28 2020-08-25 光沦科技(深圳)有限公司 Scanning method and device of three-dimensional scanning system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN114417489B (en) * 2022-03-30 2022-07-19 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN118101921A (en) * 2024-02-05 2024-05-28 广州雅清达智能***有限公司 Three-dimensional space indication method and indication system


Legal Events

Code Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20950789; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 20950789; Country of ref document: EP; Kind code of ref document: A1)