CN111104958A - Medium storing topographic feature estimating program, topographic feature estimating method and device - Google Patents

Medium storing topographic feature estimating program, topographic feature estimating method and device

Info

Publication number: CN111104958A
Application number: CN201910993527.1A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 日高洋士
Assignee (current and original): Fujitsu Ltd
Legal status: Pending
Prior art keywords: point group, measurement, measurement points, dimensional model, point

Classifications

    • G06T 7/11 — Image analysis; segmentation, edge detection; region-based segmentation
    • G06F 18/24 — Pattern recognition; analysing; classification techniques
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G01C 3/00 — Measuring distances in line of sight; optical rangefinders
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 7/155 — Segmentation; edge detection involving morphological operators
    • G06T 7/50 — Depth or shape recovery
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20041 — Special algorithmic details: morphological image processing; distance transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Biology (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure relates to a medium storing a topographic feature estimating program, a topographic feature estimating method, and a topographic feature estimating device. The topographic feature estimation process includes: classifying a plurality of measurement points, acquired through three-dimensional measurement of a scene and each including measurement information, into a plurality of point-group sub-regions, each point-group sub-region corresponding to a respective one of a plurality of classification vectors; and estimating a topographic feature of the scene by, for each of the classified point-group sub-regions: setting a plane that intersects the classification vector corresponding to the point-group sub-region as a reference plane; taking, for each measurement point included in the point-group sub-region corresponding to the reference plane, the distance from the reference plane to that measurement point as the height of the measurement point; and applying a progressive morphological filter to each of the point-group sub-regions so as to remove, from the measurement points acquired by the three-dimensional measurement, the measurement points corresponding to non-ground objects.

Description

Medium storing topographic feature estimating program, topographic feature estimating method and device
Technical Field
Embodiments discussed herein relate to a recording medium storing a topographic feature estimating program, a topographic feature estimating method, and a topographic feature estimating apparatus.
Background
There are techniques in which three-dimensional measurement of a scene is performed using a laser mounted on an unmanned aerial vehicle or a vehicle, and a three-dimensional model is generated based on the acquired measurement points. However, if the measurement points are used "as is" when generating the three-dimensional model, non-ground objects such as buildings and trees may be included, making it difficult to generate a three-dimensional model of the terrain features accurately.
There are techniques in which a progressive morphological filter (hereinafter also referred to as a PMF) is applied to measurement points acquired by aerial measurement to generate a three-dimensional model of accurate topographic features. In estimating the topographic features, the PMF uses the height information of the measurement points to eliminate non-ground objects from the scene.
The above technique is well suited to measurement points acquired by aerial measurement, i.e., to topographic features present below in a scene that is open above. However, it is not suited to scenes that are closed above, at the sides, or both, i.e., to topographic features present above, at the sides, or both above and at the sides.
[Related Patent Document]
Japanese Laid-Open Patent Publication No. 2017-198581
[Non-Patent Documents]
Zhang et al., "A Progressive Morphological Filter for Removing Nonground Measurements From Airborne LIDAR Data", IEEE Transactions on Geoscience and Remote Sensing, April 2003, Vol. 41, No. 4, pages 872 to 882.
Kazhdan et al., "Poisson Surface Reconstruction", Eurographics Symposium on Geometry Processing, 2006.
Disclosure of Invention
It is therefore an object, in one aspect of the embodiments, to accurately estimate topographic features present above, at the sides, or both, in addition to topographic features present below.
According to an aspect of an embodiment, a topographic feature estimation process includes: classifying a plurality of measurement points, acquired through three-dimensional measurement of a scene and each including measurement information, into a plurality of point-group sub-regions by using a plurality of classification vectors with mutually different directions, each point-group sub-region corresponding to a respective one of the classification vectors; and estimating a topographic feature of the scene by, for each of the classified point-group sub-regions: setting a plane that intersects the classification vector corresponding to the point-group sub-region as a reference plane; taking, for each measurement point included in the point-group sub-region corresponding to the reference plane, the distance from the reference plane to that measurement point as the height of the measurement point, the distance being acquired based on the measurement information of the measurement point; and applying a progressive morphological filter to each of the point-group sub-regions so as to remove, from the measurement points acquired by the three-dimensional measurement, the measurement points corresponding to non-ground objects.
Drawings
Fig. 1 is a block diagram showing an example of the relevant functions of a topographic feature estimating apparatus of the first exemplary embodiment;
fig. 2 is a conceptual diagram showing an example of normal vectors corresponding to measurement points acquired by three-dimensional measurement;
fig. 3A is a conceptual diagram illustrating an example of gaze line vectors employed in classifying measurement points into point cluster sub-regions;
fig. 3B is a conceptual diagram illustrating an example of gaze line vectors employed in classifying measurement points into point cluster sub-regions;
fig. 4 is a conceptual diagram for explaining an example of the topographic feature estimating process of the first exemplary embodiment;
fig. 5A is a conceptual diagram for explaining an example of the topographic feature estimating process of the first exemplary embodiment;
fig. 5B is a conceptual diagram for explaining an example of the topographic feature estimating process of the first exemplary embodiment;
fig. 6 is a block diagram showing an example of hardware of the topographic feature estimating device;
fig. 7 is a flowchart showing an example of the flow of the topographic feature estimation process of the first exemplary embodiment;
fig. 8 is a block diagram showing an example of the relevant functions of the topographic feature estimating apparatus of the second exemplary embodiment;
fig. 9 is a conceptual diagram for explaining an example of the topographic feature estimating process of the second exemplary embodiment;
fig. 10A is a flowchart showing an example of the flow of the topographic feature estimation process of the second exemplary embodiment;
fig. 10B is a flowchart showing an example of the flow of the defect point repairing process of the second exemplary embodiment;
fig. 11A is a conceptual diagram showing an example of a scene to be measured;
fig. 11B is a conceptual diagram showing an example of a three-dimensional model generated using measurement points of the scene in fig. 11A;
fig. 11C is a conceptual diagram showing an example of a three-dimensional model generated in a case where a PMF has been applied to the measurement points of the scene in fig. 11A;
fig. 12A is a conceptual diagram showing an example of a scene to be measured;
fig. 12B is a conceptual diagram showing an example of a three-dimensional model generated using measurement points of the scene in fig. 12A;
fig. 12C is a conceptual diagram showing an example of a three-dimensional model generated in a case where a PMF has been applied to the measurement points of the scene in fig. 12A;
fig. 13A is a conceptual diagram showing an example of measurement points acquired by three-dimensional measurement;
fig. 13B is a conceptual diagram showing an example of a three-dimensional model generated in a case where a PMF is applied to measurement points acquired by three-dimensional measurement without classifying them into point-group sub-regions;
fig. 14A is a conceptual diagram showing an example of a case to which the exemplary embodiments are suitably applied;
fig. 14B is a conceptual diagram showing an example of a case to which the exemplary embodiments are suitably applied;
fig. 14C is a conceptual diagram showing an example of a case to which the exemplary embodiments are suitably applied;
fig. 14D is a conceptual diagram showing an example of a case to which the exemplary embodiments are suitably applied; and
fig. 14E is a conceptual diagram showing an example of a case to which the exemplary embodiments are suitably applied.
Detailed Description
First exemplary embodiment
Examples of the first exemplary embodiment are explained in detail below with reference to the drawings.
The topographic feature estimating device 10 shown as an example in fig. 1 includes a point group input section 21, a normal vector estimation section 22, a point group classification section 23, a progressive morphological filter (hereinafter, PMF) section 24, a point group combiner section 25, a three-dimensional model generation section 26, and a three-dimensional model output section 29. The normal vector estimation section 22 and the point group classification section 23 correspond to a classification section, and the PMF section 24 and the point group combiner section 25 correspond to an estimation section.
For example, the point group input section 21 receives a point group made up of measurement points (hereinafter also referred to as points) acquired by three-dimensional measurement of a scene using a three-dimensional measurement device. The upper left part of fig. 2 shows an example of a point group acquired for a scene inside a tunnel. Each measurement point includes measurement information such as three-dimensional Cartesian coordinate values. The three-dimensional measurement device may be an existing device capable of measuring a scene omnidirectionally.
The normal vector estimation section 22 estimates a normal vector corresponding to each point in the point group. The normal vector is a vector orthogonal to a plane approximated using a plurality of (for example, ten to twenty) points around the target point, starting at the target point and oriented toward the origin. The lower left part of fig. 2 shows an example of a point group and the normal vectors corresponding to it. In the example of the tunnel shown in fig. 2, the origin is set at approximately the center of the cross section inside the tunnel. However, the present exemplary embodiment is not limited to this; the origin may be set at any desired position in the cross section inside the tunnel.
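The normal-vector estimation described above can be sketched as a small PCA-based plane fit. The code below is an illustrative sketch, not the patent's implementation: the function name, the brute-force neighbour search, and the default k are assumptions. The orientation step flips each normal so that it points from the measurement point toward the origin, matching the convention described for the tunnel example.

```python
import numpy as np

def estimate_normals(points, k=15, origin=np.zeros(3)):
    """Estimate a unit normal per point from a local plane fit (PCA).

    A plane is fitted to the k nearest neighbours of each point; the
    eigenvector of the smallest eigenvalue of the local covariance
    approximates the plane normal.  Each normal is flipped, if needed,
    so that it points from the point toward the origin.
    """
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest neighbours by brute force (fine for small clouds)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        n = eigvecs[:, 0]              # smallest-eigenvalue direction
        if np.dot(n, origin - p) < 0:  # orient toward the origin
            n = -n
        normals[i] = n / np.linalg.norm(n)
    return normals
```

For large point clouds, a k-d tree would replace the brute-force neighbour search, but the geometry is the same.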
The left middle part of fig. 2 shows an example of a state in which the number of points in the upper left part of fig. 2 has been reduced to lower the processing load. The lower left part of fig. 2 shows an example of the normal vectors for the point group at the left middle part of fig. 2. The reduction in the number of points is optional and can be performed by existing methods; for example, a second predetermined number of points may be eliminated from a first predetermined number of points.
The point group classification section 23 classifies the points included in the point group into a plurality of point-group sub-regions corresponding to a plurality of gaze lines. Specifically, using M (M ≥ 2) gaze line vectors v_j (j = 1, …, M) starting at the origin, the point group is classified into point-group sub-regions C_j, each corresponding to the gaze line vector v_j. The inner product of the normal vector n(p_i) corresponding to point p_i (i = 1, …, L, where L is the number of measurement points) and the inverse vector −v_j of each gaze line vector (hereinafter referred to as a classification vector) is calculated as n(p_i)·(−v_j). Each point p_i is classified into the point-group sub-region C_j corresponding to the gaze line vector v_j that gives the maximum inner product n(p_i)·(−v_j). This is because the larger the inner product of the normal vector and the classification vector, the closer the angle formed between them is to 0 degrees.
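The classification rule above, assigning each point p_i to the sub-region C_j whose classification vector −v_j gives the largest inner product with n(p_i), can be written compactly. This is an illustrative sketch; the function name and array layout are assumptions:

```python
import numpy as np

def classify_into_subregions(normals, gaze_vectors):
    """Label each point with the index j of its sub-region C_j.

    normals       : (L, 3) array of unit normal vectors n(p_i)
    gaze_vectors  : (M, 3) array of gaze line vectors v_j
    Returns an (L,) integer array; label j means the point belongs
    to C_j, chosen as argmax over j of n(p_i) . (-v_j).
    """
    V = np.asarray(gaze_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    scores = np.asarray(normals) @ (-V).T   # (L, M) inner products
    return scores.argmax(axis=1)
```

In a tunnel cross-section with v_1 pointing down and v_2 pointing up, floor points (normals pointing up toward the origin) land in C_1 and ceiling points in C_2.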
The upper left part and the lower left part of fig. 4 show an example of classification into two point-group sub-regions in the case where the number M of gaze line vectors is two. The upper left part of fig. 4 shows an example of the points included in the point-group sub-region C_2 corresponding to the gaze line vector v_2 in the example shown in fig. 3A. The lower left part of fig. 4 shows an example of the points included in the point-group sub-region C_1 corresponding to the gaze line vector v_1 in the example shown in fig. 3A.
The upper left part, the left middle part, and the lower left part of fig. 5A show an example of classification into three point-group sub-regions in the case where the number M of gaze line vectors is three. The upper left part of fig. 5A shows an example of the points included in the point-group sub-region C_1 corresponding to the gaze line vector v_1 in the example shown in fig. 3B. The left middle part of fig. 5A shows an example of the points included in the point-group sub-region C_2 corresponding to the gaze line vector v_2. The lower left part of fig. 5A shows an example of the points included in the point-group sub-region C_3 corresponding to the gaze line vector v_3.
The PMF section 24 estimates topographic features in the scene by applying a PMF to each of the point-group sub-regions. The morphological filter on which the PMF is based fits a filter of predetermined size (a linear filter, or a planar filter in the xy plane) to the lowest of the heights (z coordinates) of the measurement points acquired by measurement from above, and can remove non-ground objects with a small upper surface area, such as trees.
However, the morphological filter cannot remove non-ground objects, such as buildings, that are larger than the predetermined size of the filter. Therefore, in the PMF, the size of the morphological filter is increased gradually, enabling removal of non-ground objects larger than the filter of the initial predetermined size.
However, simply increasing the filter size gradually could also remove raised ground features whose upper surface area is roughly the same as that of a building. In general, the slope of a natural rise in the ground is gentler than the slope of the side of a non-ground object such as a building. Thus, as the filter size is gradually increased, a rise in the ground is removed gradually, whereas a building is removed all at once. Using this characteristic, raised ground that is removed gradually is determined to be a natural object and is ultimately retained, while a building that is removed all at once is determined to be a non-ground object and is ultimately removed.
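As a hedged illustration of the progressive-filtering idea described above, the following one-dimensional sketch bins points into a grid of minimum heights, applies a morphological opening (erosion followed by dilation) with a growing window, and flags points that rise above the opened surface by more than an elevation threshold that grows with the window. All function names, window sizes, and threshold constants are illustrative assumptions; the cited Zhang et al. paper defines the full two-dimensional algorithm.

```python
import numpy as np

def _opening(surface, half_w):
    """Morphological opening: sliding-window min, then max."""
    n = len(surface)
    eroded = np.array([surface[max(0, i - half_w):i + half_w + 1].min()
                       for i in range(n)])
    return np.array([eroded[max(0, i - half_w):i + half_w + 1].max()
                     for i in range(n)])

def progressive_morphological_filter(xs, heights, cell=1.0,
                                     windows=(1, 2, 4),
                                     slope=0.5, base=0.2):
    """1-D PMF sketch: returns a boolean ground mask per point.

    xs       : positions along a scan line
    heights  : point heights above the reference plane
    windows  : growing filter half-widths, in cells
    The elevation threshold grows with the window, so gentle natural
    bumps survive while steep-sided objects are removed at once.
    """
    cells = ((xs - xs.min()) / cell).astype(int)
    surface = np.full(cells.max() + 1, np.inf)
    for c, h in zip(cells, heights):        # minimum height per cell
        surface[c] = min(surface[c], h)
    ground = np.ones(len(xs), dtype=bool)
    for w in windows:
        opened = _opening(surface, w)
        thresh = base + slope * w * cell    # threshold grows with window
        ground &= (heights - opened[cells]) <= thresh
        surface = opened
    return ground
```

On a flat scan line with a 4-cell-wide, 5-unit-tall "building", the first (small) window leaves the building intact, and the second window removes it in one step, exactly the behaviour the filter exploits.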
The PMF section 24 applies the PMF to each of the point-group sub-regions. When the PMF is applied, for each point-group sub-region a plane intersecting the classification vector corresponding to that sub-region is taken as a reference plane, and the distance from the reference plane to each of the measurement points included in the sub-region is taken as the height of the corresponding measurement point. That is, the PMF is applied to each point-group sub-region from the angle of a gaze line looking down on the sub-region from above.
Specifically, for example, the PMF section 24 rotates each point-group sub-region other than the point-group sub-region C_1 corresponding to the gaze line vector v_1 so that the gaze line vector of that sub-region overlaps the gaze line vector v_1. After the PMF has been applied to each of the point-group sub-regions, the PMF section 24 rotates each point-group sub-region other than C_1 back to its respective pre-rotation angle.
In the example in fig. 4, the PMF section 24 rotates the point group constituting the point-group sub-region C_2, which corresponds to the gaze line vector v_2 in fig. 3A, so that the gaze line vector v_2 overlaps the gaze line vector v_1. After the PMF has been applied to both the point-group sub-region C_1 and the point-group sub-region C_2, the PMF section 24 rotates the point-group sub-region C_2 back to its pre-rotation angle. The lower middle part of fig. 4 shows an example of the result of applying the PMF to the point-group sub-region C_1 to remove non-ground objects NGO.
The upper middle part of fig. 4 shows an example of the result of applying the PMF to the point-group sub-region C_2 to remove non-ground objects NGO. The points corresponding to the non-ground object NGO at location RM have been removed.
In the example in fig. 5A, the PMF section 24 rotates the point-group sub-region C_2, which corresponds to the gaze line vector v_2 in fig. 3B, so that the gaze line vector v_2 overlaps the gaze line vector v_1. The PMF section 24 also rotates the point-group sub-region C_3, which corresponds to the gaze line vector v_3 in fig. 3B, so that the gaze line vector v_3 overlaps the gaze line vector v_1. After the PMF has been applied to each of the point-group sub-regions C_1, C_2, and C_3, the PMF section 24 rotates the point-group sub-region C_2 corresponding to the gaze line vector v_2 and the point-group sub-region C_3 corresponding to the gaze line vector v_3 back to their respective pre-rotation angles.
The upper right part of fig. 5A shows an example of the result of applying the PMF to the point-group sub-region C_1 to remove non-ground objects NGO. The right middle part of fig. 5A shows an example of the result of applying the PMF to the point-group sub-region C_2 to remove non-ground objects NGO. The lower right part of fig. 5A shows an example of the result of applying the PMF to the point-group sub-region C_3 to remove non-ground objects NGO.
The point group combiner section 25 combines the point-group sub-regions. The upper right part of fig. 4 shows an example in which the point-group sub-region C_2 at the upper middle part of fig. 4 and the point-group sub-region C_1 at the lower middle part of fig. 4 have been combined into a combined point group. The upper part of fig. 5B shows an example in which the point-group sub-region C_1 at the upper right part of fig. 5A, the point-group sub-region C_2 at the right middle part of fig. 5A, and the point-group sub-region C_3 at the lower right part of fig. 5A have been combined into a combined point group.
The three-dimensional model generation section 26 generates a three-dimensional model using the combined point group. Existing methods can be applied to the three-dimensional model generation. For example, the lower right part of fig. 4 shows an example of a mesh generated by Poisson surface reconstruction using the combined point group at the upper right part of fig. 4. The lower part of fig. 5B shows an example of a mesh generated by Poisson surface reconstruction using the combined point group at the upper part of fig. 5B.
The three-dimensional model output unit 29 outputs the generated three-dimensional model to an output device. The output device may be an external storage device that stores the three-dimensional model information as a file or a display that visually displays the three-dimensional model.
As shown in fig. 6, the topographic feature estimating device 10 includes, as an example, a Central Processing Unit (CPU) 51, a main storage section 52, a secondary storage section 53, and an external interface 54. The CPU 51 is an example of a processor, which is hardware. The CPU 51, the main storage section 52, the secondary storage section 53, and the external interface 54 are connected to one another by a bus 59.
The main storage unit 52 is a volatile memory such as a Random Access Memory (RAM). The secondary storage section 53 is a nonvolatile memory such as a Hard Disk Drive (HDD) or a Solid State Drive (SSD).
The secondary storage section 53 includes a program storage area 53A and a data storage area 53B. As an example, the program storage area 53A stores programs such as the topographic feature estimation program. As an example, the data storage area 53B stores information on the measurement points acquired by three-dimensional measurement and intermediate data generated during execution of the topographic feature estimation program.
The CPU 51 reads the topographic feature estimation program from the program storage area 53A and expands it into the main storage section 52. By loading and executing the topographic feature estimation program, the CPU 51 operates as the point group input section 21, the normal vector estimation section 22, the point group classification section 23, the PMF section 24, the point group combiner section 25, the three-dimensional model generation section 26, and the three-dimensional model output section 29 shown in fig. 1.
Note that programs such as the topographic feature estimation program may be stored on an external server and expanded into the main storage section 52 via a network. Alternatively, such programs may be stored on a non-transitory recording medium such as a Digital Versatile Disc (DVD) and expanded into the main storage section 52 using a recording medium reading device.
An external device is connected to the external interface 54, and the external interface 54 supervises the exchange of various information between the external device and the CPU 51. Fig. 6 shows an example in which an external storage device 55A, a three-dimensional measurement device 55B, and a display 55C are connected to the external interface 54.
However, the configuration may be such that the external storage device 55A, the three-dimensional measurement device 55B, and the display 55C are not connected to the external interface 54, or such that only one or two of these external devices are connected to the external interface 54. Any combination of some or all of the external storage device 55A, the three-dimensional measurement device 55B, and the display 55C may be built into the topographic feature estimation device 10, or may be provided remotely from the topographic feature estimation device 10 through a network.
The topographic feature estimating device 10 may be a dedicated device or may be a workstation, personal computer or tablet.
The following is an overview of the operation of the topographic feature estimation process. Fig. 7 shows an example of the flow of the topographic feature estimation process. At step 101, the CPU 51 reads information on the measurement points acquired by three-dimensional measurement of a scene. At step 102, the CPU 51 estimates a normal vector for each point.
At step 103, the CPU 51 classifies the respective points into the plurality of point-group sub-regions C_j based on the normal vector corresponding to each point and the classification vectors −v_j, which are the inverse vectors of the gaze line vectors v_j. At step 104, the CPU 51 sets a variable j, used for distinguishing between the plurality of point-group sub-regions, to 1. At step 105, the CPU 51 applies the PMF to the point-group sub-region C_1 to remove the points corresponding to non-ground objects from the points included in the point-group sub-region C_1.
At step 106, the CPU 51 increments the variable j by 1 to move the processing to the next point-group sub-region. At step 107, the CPU 51 determines whether the value of the variable j has exceeded the value M representing the number of point-group sub-regions, i.e., whether the processing has been completed for all the point-group sub-regions. In the case where a negative determination is made at step 107, i.e., where there is a point-group sub-region for which processing has not yet been completed, at step 108 the CPU 51 changes the gaze line toward the point-group sub-region C_j to the viewing angle of looking down from above.
Specifically, for example, the point-group sub-region C_j is rotated by multiplying its points by a rotation matrix R_j so that the corresponding gaze line vector v_j overlaps v_1 as shown in the examples in figs. 3A and 3B. At step 109, the CPU 51 applies the PMF to the rotated point-group sub-region C_j. At step 110, the CPU 51 returns the point-group sub-region C_j to its original gaze line. That is, the point-group sub-region C_j to which the PMF has been applied is multiplied by the inverse R_j⁻¹ of the rotation matrix used at step 108, so that the point-group sub-region C_j rotated at step 108 is rotated by the same angle in the direction opposite to that at step 108 and returns to its original position.
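The rotation R_j that brings a sub-region's gaze line vector v_j onto v_1, together with its inverse R_j⁻¹ = R_jᵀ used to rotate the sub-region back, can be built with Rodrigues' rotation formula. The patent does not specify how R_j is constructed, so the sketch below is an illustrative assumption:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix R with R @ unit(a) == unit(b) (Rodrigues' formula)."""
    a = np.asarray(a, float); a = a / np.linalg.norm(a)
    b = np.asarray(b, float); b = b / np.linalg.norm(b)
    v = np.cross(a, b)              # rotation axis (unnormalised)
    c = np.dot(a, b)                # cosine of the rotation angle
    if np.isclose(c, 1.0):          # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):        # antiparallel: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])   # cross-product matrix of v
    return np.eye(3) + K + K @ K * (1.0 / (1.0 + c))
```

Applying `points @ R.T` rotates a sub-region into the top-down frame; since R is orthogonal, `points_rotated @ R` undoes it, mirroring steps 108 and 110.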
In the case where an affirmative determination is made at step 107, i.e., where it is determined that the processing of all the point-group sub-regions has been completed, at step 111 the CPU 51 combines the plurality of point-group sub-regions to which the PMF has been applied and from which the non-ground objects have been removed, to generate a combined point group. At step 112, the CPU 51 generates a three-dimensional model using the points included in the combined point group, and at step 114 outputs the generated three-dimensional model to an output device. The three-dimensional model information output to a file may be used as information for generating a three-dimensional model in the LandXML format, which is widely accepted as a standard for three-dimensional terrain modeling.
Note that at step 103, in the case of classification into two point-group sub-regions as in the example shown in fig. 3A, these are the upper point-group sub-region C_2 and the lower point-group sub-region C_1. Instead of using the inner product of the classification vector and the normal vector, the classification may be performed based on whether the z coordinate (height-direction coordinate) of each point is positive or negative. That is, a point having a positive z coordinate is classified into the point-group sub-region C_2 shown in the example at the upper left part of fig. 4, and a point having a negative z coordinate is classified into the point-group sub-region C_1 shown in the example at the lower left part of fig. 4.
At steps 108 and 110, the corresponding point-group sub-region is rotated and then rotated back again. Thus, when the PMF is applied at step 109, for each of the classified point-group sub-regions, a plane intersecting the classification vector corresponding to the point-group sub-region serves as a reference plane, and the distance from the reference plane to each point included in that point-group sub-region is taken as the height of the point. That is, before the PMF is applied, the gaze line of each point-group sub-region is modified to a gaze line looking down on the point-group sub-region from above.
However, instead of rotating and then rotating back again, an imaginary plane corresponding to a reference plane intersecting the gaze line vector (the classification vector) may, for example, be set, the distance from each point in the corresponding point group sub-region to that plane calculated, and the calculated distance taken as the height when applying the PMF. This likewise modifies the gaze line for each point group sub-region, before the PMF is applied, to one looking down on the sub-region from above.
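A minimal sketch of this plane-distance alternative, assuming the reference plane passes through the origin (the text does not fix where the plane sits, so `plane_point` is an illustrative parameter):

```python
import numpy as np

def heights_from_reference_plane(points, class_vector, plane_point=(0.0, 0.0, 0.0)):
    # Signed distance from each point to an imaginary plane orthogonal to
    # the gaze-line (classification) vector; used as the "height" input
    # to the PMF instead of rotating the sub-region.
    v = np.asarray(class_vector, dtype=float)
    v /= np.linalg.norm(v)
    return (np.asarray(points, dtype=float) - np.asarray(plane_point)) @ v

pts = [[0.0, 0.0, 2.0], [0.0, 0.0, 5.0]]
h = heights_from_reference_plane(pts, [0.0, 0.0, 1.0])  # -> array([2., 5.])
```
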
Note that although examples employing two or three gaze line vectors have been given, the number and directions of the gaze line vectors may be set as appropriate. Increasing the number of gaze line vectors, that is, the number of classification vectors, and thereby the number of point group sub-regions, that is, the number of measurement-point classes, enables each measurement point to be classified into a point group sub-region corresponding to a more suitable gaze line, thereby improving performance when removing non-ground objects.
The gaze line vectors need not all lie in a plane formed by two of the gaze line vectors, and may instead be set at uniformly spaced angles in all directions, corresponding to the full 4π [sr]. Furthermore, the downward-oriented vector v1 shown in the examples of fig. 3A and 3B need not extend vertically downward. For example, for terrain that is inclined over a wide range, the coordinate axes may be set such that the inclined plane extends along the xy plane, with the gaze line vector v1 oriented toward the xy plane along a z-axis orthogonal to the xy plane. Note that the set gaze line vector v1 may be regarded as the vector corresponding to the gaze line from a top-down viewing angle.
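One common way to realize "uniformly spaced in all directions over 4π sr" is a Fibonacci sphere lattice. This is an illustrative choice; the source does not prescribe how the direction set is generated.

```python
import numpy as np

def fibonacci_sphere_directions(n):
    # n unit vectors spread roughly evenly over the full sphere (4*pi sr).
    i = np.arange(n)
    z = 1.0 - 2.0 * (i + 0.5) / n                  # uniform strips in z
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = np.pi * (1.0 + np.sqrt(5.0)) * i         # golden-angle steps
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

gaze_vectors = fibonacci_sphere_directions(64)     # candidate gaze-line vectors
```
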
In the present exemplary embodiment, the measurement points are classified into a plurality of point group sub-regions corresponding to a plurality of gaze line vectors, the gaze line for each of the plurality of point group sub-regions is modified to one looking down from above before the PMF is applied, and the plurality of point group sub-regions are then combined. Thus, as shown in the example at the lower right portion of fig. 4 and the example at the lower portion of fig. 5B, it is possible to acquire a three-dimensional model from which non-ground objects have been removed, and to accurately estimate topographic features present above and at the sides in addition to the topographic feature present below. Note that "below" means, for example, the direction in which the gaze line vector v1 is set, and a topographic feature present at the side means a topographic feature present in any direction through 360° around an imaginary axis extending from above to below.
The present exemplary embodiment may be applied to the scene of a tunnel during tunnel construction. During tunnel construction, non-ground objects such as heavy machinery, personnel, and pipes are present in the tunnel, and physically removing these non-ground objects in order to measure the topographic features of the tunnel mid-construction would take so much time and money as to be infeasible. However, measuring the scene in a state in which the non-ground objects are present, and then applying the present exemplary embodiment to estimate the topographic features with the non-ground objects removed from the scene, reduces time and cost and is therefore very useful.
For example, tunnel construction for tunnels through mountains involves repeating, day and night, a cycle of drilling, charging, blasting, clearing, removing loose rock, chiseling, primary spraying, steel support installation, secondary spraying, and rock anchoring for every 1.0 to 1.2 meters of advance. After the ballast has been cleared, the working face is measured to check the progress of the blast, and the amounts to be chiseled away and cleared are checked. By applying the present exemplary embodiment after ballast clearing, for example during work transfer preparation, there is no adverse effect on the normal tunnel construction cycle, and the topographic features can be accurately estimated from the measured scene without impeding the tunnel construction. Furthermore, information about the estimated topographic features can be used immediately in the tunnel construction. This makes it possible to reduce the time and cost of tunnel construction.
In the present exemplary embodiment, three-dimensional measurement of a scene is performed, and a plurality of measurement points each including measurement information are classified, using a plurality of classification vectors having mutually different directions, into a plurality of point group sub-regions corresponding to the plurality of classification vectors. By applying the improved morphological filter to each of the plurality of point group sub-regions, measurement points corresponding to non-ground objects are removed from the plurality of measurement points acquired by the three-dimensional measurement, and the topographic features of the scene are estimated. For each of the plurality of classified point group sub-regions, a plane intersecting the classification vector corresponding to the point group sub-region is taken as a reference plane, and the distance from the reference plane to each measurement point included in the point group sub-region corresponding to the reference plane, the distance being acquired based on the measurement information of the measurement point, is taken as the height of the corresponding measurement point.
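For reference, the progressive morphological filter on which the improved filter builds (after Zhang et al., cited in the non-patent citations below) can be reduced to the following sketch: rasterize minimum heights, open the raster with growing windows, and keep the points that stay near the opened surface. The cell size, window sizes, and thresholds here are illustrative values, not values from the source.

```python
import numpy as np
from scipy import ndimage  # third-party; assumed available

def pmf_ground_mask(xy, z, cell=1.0, windows=(1, 2, 4), thresholds=(0.5, 1.0, 2.0)):
    # Rasterize the minimum height per grid cell.
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    surface = np.full(ij.max(axis=0) + 1, np.inf)
    np.minimum.at(surface, (ij[:, 0], ij[:, 1]), z)
    surface[np.isinf(surface)] = z.max()           # fill empty cells
    ground = np.ones(len(z), dtype=bool)
    for w, t in zip(windows, thresholds):
        # Morphological opening removes protrusions narrower than the window.
        opened = ndimage.grey_opening(surface, size=2 * w + 1)
        # Points rising more than t above the opened surface are non-ground.
        ground &= (z - opened[ij[:, 0], ij[:, 1]]) <= t
        surface = opened
    return ground

# 10 x 10 m of flat ground with one 5 m "non-ground" spike at (5, 5).
xy = np.array([[float(i), float(j)] for i in range(10) for j in range(10)])
z = np.zeros(100)
z[55] = 5.0
mask = pmf_ground_mask(xy, z)   # True where the point is kept as ground
```
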
Thus, the present exemplary embodiment enables accurate estimation of a topographic feature that is present above or at the side, or both, in addition to a topographic feature that is present below.
Second exemplary embodiment
Examples relating to the second exemplary embodiment are explained in detail below with reference to the drawings. Description about configurations and operations similar to those in the first exemplary embodiment is omitted.
Fig. 8 shows an example of the topographic feature estimating device 10 of the second exemplary embodiment. The topographic feature estimating device 10 of the second exemplary embodiment differs from the topographic feature estimating device 10 of the first exemplary embodiment in that the topographic feature estimating device 10 of the second exemplary embodiment includes a marking section 27 and a repairing section 28.
The marking section 27 marks missing portions of the three-dimensional model. A missing portion is a portion where points corresponding to an element of the three-dimensional model are missing because, as a result of applying the PMF at step 109 in fig. 7, points corresponding to non-ground objects were removed. For example, in the lower left portion of fig. 9, this results in a portion RM where points corresponding to the ground contacted by a non-ground object are missing.
For example, as shown in the example at the upper left portion of fig. 9, points in the missing portion are interpolated during three-dimensional model generation by applying a prior art technique such as Poisson surface reconstruction. As shown in the example at the upper middle portion of fig. 9, the marking section 27 attaches a marker MD to each portion of the three-dimensional model from which points corresponding to an element are missing. The marker MD may be, for example, a visible marker of a specific color, or an invisible marker such as label information.
The repairing section 28 in fig. 8 uses a second three-dimensional model, generated using measurement points acquired by three-dimensional measurement at a second timing, to repair a missing portion in a first three-dimensional model generated using measurement points acquired by three-dimensional measurement at a first timing. For example, during tunnel construction, performing the topographic feature estimation process that removes the non-ground objects computationally, rather than actually physically removing them and delaying the construction, causes no hindrance to the tunnel construction. Repairing the missing portion after the non-ground objects have actually been physically removed, for example after tunnel construction is completed, enables accurate topographic feature estimation information to be maintained that is closer to the actual topographic features.
An outline of the operation of the topographic feature estimation process of the second exemplary embodiment will be explained below. Fig. 10A shows an example of the flow of the topographic feature estimation process. The topographic feature estimation process in fig. 10A is different from the topographic feature estimation process in fig. 7 in that the topographic feature estimation process in fig. 10A includes the missing point patch process of step 113.
Fig. 10B shows a detailed example of the missing point patch processing of step 113 in fig. 10A. At step 121, the CPU51 determines whether there is a missing portion in the three-dimensional model generated at step 112. Specifically, the CPU51 calculates the shortest distance between each element sk (k being an integer from 1 to Q, where Q is the number of elements) constituting the three-dimensional model and the points included in the corresponding point group sub-region Cj, and, in the case where the shortest distance is a predetermined threshold dth or below, determines that the element sk does not correspond to a missing portion.
In the case where the shortest distance exceeds the predetermined threshold dth, a determination is made that the element sk corresponds to a missing portion. The predetermined threshold dth may, for example, be 10 cm. This is because such an element sk is estimated to be an element generated by interpolation during generation of the three-dimensional model. For example, the three-dimensional model may be a mesh, and each element may be a polygon.
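The missing-portion test at step 121 can be sketched as follows, using element centroids as a simplifying stand-in for true polygon-to-point distance. That simplification and the use of a k-d tree are assumptions, not details from the source; the 10 cm threshold is the example given above.

```python
import numpy as np
from scipy.spatial import cKDTree  # third-party; assumed available

D_TH = 0.10  # example threshold from the text: 10 cm

def missing_element_flags(element_centroids, subregion_points, d_th=D_TH):
    # An element whose shortest distance to the measured point group
    # exceeds d_th is assumed to have been produced by interpolation,
    # i.e. it corresponds to a missing portion and should be marked.
    tree = cKDTree(subregion_points)
    dist, _ = tree.query(element_centroids)
    return dist > d_th

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
centroids = np.array([[0.05, 0.05, 0.0],   # backed by measured points
                      [5.0, 5.0, 0.0]])    # interpolated, far from any point
flags = missing_element_flags(centroids, cloud)  # -> [False, True]
```
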
At step 122, as shown in the example at the upper middle portion of fig. 9, the CPU51 attaches the marker MD to the missing portion. At step 123, the CPU51 determines whether a three-dimensional model including a missing portion, that is, a three-dimensional model including the marker MD, exists. That is, for example, it is determined whether a three-dimensional model that corresponds to measurement points obtained by three-dimensional measurement of the same scene as the three-dimensional model generated at step 112, and that includes the marker MD, has been saved as a file. Such a file may be searched for, for example, among the files saved in the data saving area 53B or the external storage device 55A in fig. 6.
In the case where a negative determination is made at step 123, the CPU51 ends the missing point patch processing. In the case where a positive determination is made at step 123, at step 124, the CPU51 reads a three-dimensional model corresponding to the first three-dimensional model including the missing portion. At step 125, the CPU51 determines whether there is an element corresponding to the marker MD indicating the missing portion in the three-dimensional model corresponding to the second three-dimensional model generated at step 112 and that is not the missing portion in the first three-dimensional model. Elements that are not missing portions are elements to which the tag MD is not attached.
In the case where a negative determination is made at step 125, the CPU51 ends the missing point patch processing. In the event that a positive determination is made at step 125, at step 126, the CPU51 patches the missing portion in the three-dimensional model generated at step 112 using the elements that are the non-missing portions in the three-dimensional model read at step 124, and then ends the missing point patch processing. At step 114 in fig. 10A, the CPU51 outputs any or all of the three-dimensional model generated at step 112, the three-dimensional model with the attached marker MD at step 122, or the three-dimensional model for which the missing portion is repaired at step 126 to the output device. That is, these three-dimensional models may be stored in the external storage device 55A or displayed on the display 55C, for example.
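A minimal sketch of the repair at step 126, matching each marked element of the first model to the nearest unmarked element of the second model by centroid (the nearest-centroid matching rule and the list-of-vertex-arrays representation are illustrative assumptions):

```python
import numpy as np

def repair_missing(model1_elems, model1_marked, model2_elems, model2_marked):
    # Replace each marked (missing-portion) element of the first model
    # with the nearest unmarked element of the second model.
    donors = [e for e, m in zip(model2_elems, model2_marked) if not m]
    repaired = []
    for elem, marked in zip(model1_elems, model1_marked):
        if marked and donors:
            c = elem.mean(axis=0)  # centroid of the marked polygon
            elem = min(donors, key=lambda d: np.linalg.norm(d.mean(axis=0) - c))
        repaired.append(elem)
    return repaired

# First model: one sound triangle and one marked (interpolated) triangle;
# second model: a measured triangle at roughly the marked location.
tri_ok = np.zeros((3, 3))
tri_missing = np.ones((3, 3))
tri_measured = np.full((3, 3), 1.1)
fixed = repair_missing([tri_ok, tri_missing], [False, True], [tri_measured], [False])
```
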
Note that the missing portion may also be a portion corresponding to measurement points that were not included in any point group sub-region when the measurement points were classified at step 103. That is, the missing portion may be a portion where points corresponding to an element of the three-dimensional model are missing because the angle formed between the classification vector and the normal vector of the measurement point is a predetermined angle or more. For example, if processing speed is increased during tunnel construction by suppressing the number of point group sub-regions, then increasing the number of point group sub-regions after tunnel construction has been completed enables more accurate topographic feature estimation information, closer to the actual topographic features, to be maintained.
In the present exemplary embodiment, three-dimensional measurement of a scene is performed, and a plurality of measurement points each including measurement information are classified, using a plurality of classification vectors having mutually different directions, into a plurality of point group sub-regions corresponding to the plurality of classification vectors. An improved morphological filter is applied to each of the plurality of point group sub-regions to remove measurement points corresponding to non-ground objects from the plurality of measurement points acquired by the three-dimensional measurement, and to estimate the topographic features of the scene. For each of the plurality of classified point group sub-regions, a plane intersecting the classification vector corresponding to the point group sub-region is taken as a reference plane, and the distance from the reference plane to each measurement point included in the point group sub-region corresponding to the reference plane, the distance being acquired based on the measurement information of the measurement point, is taken as the height of the corresponding measurement point.
Therefore, the present exemplary embodiment enables accurate estimation of a topographic feature that exists above or at a side or both, in addition to a topographic feature that exists below.
In the present exemplary embodiment, a marker is attached to an element corresponding to a missing portion in the combined point group in which a point corresponding to an element included in the three-dimensional model is missing. Further, in the present exemplary embodiment, the missing portion in the first three-dimensional model is repaired using an element included in the second three-dimensional model corresponding to a plurality of measurement points acquired through three-dimensional measurement of a scene similar to the scene corresponding to the first three-dimensional model.
Thus, for example, missing portions in a three-dimensional model generated during tunnel construction may be repaired using elements in the three-dimensional model generated using measurement points acquired by three-dimensional measurement after non-ground objects have actually been physically removed, for example, after tunnel construction has ended. This enables more accurate topographic feature estimation information to be maintained.
The flowcharts of fig. 7, 10A, and 10B are merely examples, and the order of steps may be changed as appropriate.
Comparison with related art
Fig. 11B shows an example of a three-dimensional model generated using measurement points acquired by aerial measurement of the scene shown in fig. 11A. Fig. 11C shows an example of a three-dimensional model generated after applying the PMF to those measurement points, without classifying them into point group sub-regions as in the present exemplary embodiment, so as to remove the measurement points corresponding to non-ground objects.
Fig. 12B shows an example of a three-dimensional model generated using measurement points acquired by aerial measurement of the scene shown in fig. 12A. Fig. 12C shows an example of a three-dimensional model generated after applying the PMF to those measurement points, without classifying them into point group sub-regions as in the present exemplary embodiment, so as to remove the measurement points corresponding to non-ground objects.
The three-dimensional models shown in fig. 11B and 12B, to which the PMF was not applied, still contain the non-ground objects, whereas the three-dimensional models shown in fig. 11C and 12C, to which the PMF was applied, have the non-ground objects removed. The PMF presupposes application to measurement points of a scene measured from the air. Thus, as in the examples shown in fig. 11C and 12C, a three-dimensional model from which the non-ground objects have been appropriately removed can be generated for a scene with no topographic features above, as is the case in fig. 11A and 12A.
However, in the example of the scene shown in fig. 13A, if the PMF is applied without classifying the measurement points into point group sub-regions as in the present exemplary embodiment, a three-dimensional model from which the non-ground objects NGO have been appropriately removed may be generated for the lower topographic feature BT, as in the example shown in fig. 13B. However, the upper topographic feature TP and the side topographic feature SD in the scene are removed along with the non-ground objects NGO, and therefore a suitable three-dimensional model cannot be acquired. Conversely, if a three-dimensional model is generated without applying the PMF to the measurement points, the non-ground objects NGO remain in the three-dimensional model, as shown in the example at the middle right portion of fig. 2.
In contrast, the present exemplary embodiment, in which the measurement points are classified into point group sub-regions and the gaze line for each point group sub-region is modified, before the PMF is applied, to one looking down on that sub-region from above, can be applied to the example of the tunnel shown in fig. 14A. Furthermore, the present exemplary embodiment may be applied not only to tunnels but also to other scenes in which there are topographic features above or at the sides or both, such as the cave shown in the example of fig. 14B, the shaft shown in the example of fig. 14C, and the overhanging portion shown in the example of fig. 14D.
Furthermore, the present exemplary embodiment can also be applied to a scene with no topographic features above or at the sides, as in the example shown in fig. 14E. In the present exemplary embodiment, setting the gaze line vectors at angles evenly spaced in all directions, corresponding to 4π sr, enables various scenes such as these to be accommodated. For example, in the scene shown in fig. 14E, almost all of the measurement points will simply be classified into the point group sub-region corresponding to the downward-directed gaze line vector, so there is no need to modify the processing flow according to the scene to which the embodiment is applied.
One aspect of the present disclosure enables accurate estimation of topographical features that are present above or at the sides, or both, in addition to those that are present below.

Claims (15)

1. A non-transitory recording medium storing a program that causes a computer to execute a topographic feature estimation process, the topographic feature estimation process comprising:
classifying a plurality of measurement points, which are acquired through three-dimensional measurement of a scene and respectively include measurement information, into a plurality of point group sub-regions each corresponding to a respective one of the plurality of classification vectors by using a plurality of classification vectors having mutually different directions; and
estimating a topographical feature of the scene by:
setting, as a reference plane, a plane intersecting the classification vector corresponding to the point group sub-region for each of the plurality of point group sub-regions that have been classified,
for each of the measurement points included in the point group subregion corresponding to the reference plane, a distance from the reference plane to each of the measurement points is taken as a height of each of the measurement points, the distance being acquired based on measurement information of each of the measurement points, and
removing measurement points corresponding to non-ground objects from the plurality of measurement points acquired by the three-dimensional measurement by applying a modified morphology filter to each of the plurality of point cloud sub-regions.
2. The non-transitory recording medium of claim 1, wherein each of the plurality of measurement points is classified into a point group subregion corresponding to one of the plurality of classification vectors, wherein an inner product of each of the plurality of classification vectors and a normal vector estimated from the measurement point is a maximum value.
3. The non-transitory recording medium of claim 1 or claim 2, wherein:
generating a three-dimensional model using a combined point group formed by combining together the plurality of point group sub-regions to which the improved morphological filter has been applied; and
the generated three-dimensional model is displayed on a display device or stored in a storage medium.
4. The non-transitory recording medium according to claim 3, wherein a marker is attached to an element included in the three-dimensional model, the element corresponding to a missing portion at which a measurement point corresponding to the element is missing from the combined point group.
5. The non-transitory recording medium of claim 4, wherein the missing portion in the first three-dimensional model is repaired using elements included in a second three-dimensional model corresponding to a plurality of measurement points acquired through three-dimensional measurement of a scene similar to a scene corresponding to the first three-dimensional model.
6. A method of topographic feature estimation comprising:
by means of the processor(s) it is possible to,
classifying a plurality of measurement points, which are acquired through three-dimensional measurement of a scene and respectively include measurement information, into a plurality of point group sub-regions each corresponding to a respective one of the plurality of classification vectors by using a plurality of classification vectors having mutually different directions; and
estimating a topographical feature of the scene by:
setting, as a reference plane, a plane intersecting the classification vector corresponding to the point group sub-region for each of the plurality of point group sub-regions that have been classified,
for each of the measurement points included in the point group subregion corresponding to the reference plane, a distance from the reference plane to each of the measurement points is taken as a height of each of the measurement points, the distance being acquired based on measurement information of each of the measurement points, and
removing measurement points corresponding to non-ground objects from the plurality of measurement points acquired by the three-dimensional measurement by applying a modified morphology filter to each of the plurality of point cloud sub-regions.
7. A topographical feature estimation method according to claim 6, wherein each of the plurality of measurement points is classified into a point group subregion corresponding to one of the plurality of classification vectors, wherein the inner product of each of the plurality of classification vectors with the normal vector estimated from the measurement point is the maximum value.
8. A topographical feature estimation method according to claim 6 or claim 7, wherein:
generating a three-dimensional model using a combined point group formed by combining together the plurality of point group sub-regions to which the improved morphological filter has been applied; and
the generated three-dimensional model is displayed on a display device or stored in a storage medium.
9. A topographic feature estimation method according to claim 8, wherein a marker is attached to an element included in the three-dimensional model, the element corresponding to a missing portion where a measurement point corresponding to the element is missing from the combined point group.
10. A topographical feature estimation method according to claim 9, wherein elements included in the second three-dimensional model corresponding to a plurality of measurement points acquired through three-dimensional measurements of a scene similar to that corresponding to the first three-dimensional model are used to patch missing portions in the first three-dimensional model.
11. A topographic feature estimating device comprising:
a classifying section that classifies a plurality of measurement points, which are acquired by three-dimensional measurement of a scene and respectively include measurement information, into a plurality of point group sub-regions, each of which corresponds to a corresponding one of the plurality of classification vectors, by using the plurality of classification vectors having mutually different directions; and
an estimating section that estimates a topographic feature of the scene by:
setting, as a reference plane, a plane intersecting the classification vector corresponding to the point group sub-region for each of the plurality of point group sub-regions that have been classified,
for each of the measurement points included in the point group subregion corresponding to the reference plane, a distance from the reference plane to each of the measurement points is taken as a height of each of the measurement points, the distance being acquired based on measurement information of each of the measurement points, and
removing measurement points corresponding to non-ground objects from the plurality of measurement points acquired by the three-dimensional measurement by applying a modified morphology filter to each of the plurality of point cloud sub-regions.
12. A topographic feature estimating device according to claim 11, wherein the classifying section classifies each of the plurality of measuring points into a point group subregion corresponding to one of the plurality of classification vectors, an inner product of each of the plurality of classification vectors and a normal vector estimated from the measuring point being a maximum value.
13. A topographic feature estimating device according to claim 11 or claim 12, further comprising:
a three-dimensional model generating section that generates a three-dimensional model using a combination point group formed by combining together the plurality of point group sub-regions to which the improved morphological filter has been applied; and
and a three-dimensional model output unit that displays the generated three-dimensional model on a display device or stores the generated three-dimensional model in a storage medium.
14. A topographic feature estimating device according to claim 13, further comprising a marking section that attaches a mark to an element included in the three-dimensional model, the element corresponding to a missing portion where a measuring point corresponding to the element is missing from the combination point group.
15. A topographic feature estimating device according to claim 14, further comprising a repairing section that repairs a missing portion in a first three-dimensional model using elements included in a second three-dimensional model corresponding to a plurality of measurement points acquired through three-dimensional measurements of a scene similar to the scene corresponding to the first three-dimensional model.
CN201910993527.1A 2018-10-29 2019-10-18 Medium storing topographic feature estimating program, topographic feature estimating method and device Pending CN111104958A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-202607 2018-10-29
JP2018202607A JP7211005B2 (en) 2018-10-29 2018-10-29 Terrain estimation program, terrain estimation method, and terrain estimation device

Publications (1)

Publication Number Publication Date
CN111104958A true CN111104958A (en) 2020-05-05

Family

ID=68165441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993527.1A Pending CN111104958A (en) 2018-10-29 2019-10-18 Medium storing topographic feature estimating program, topographic feature estimating method and device

Country Status (4)

Country Link
US (1) US20200134914A1 (en)
EP (1) EP3648058A1 (en)
JP (1) JP7211005B2 (en)
CN (1) CN111104958A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3973140A4 (en) * 2019-05-21 2023-01-25 Services Pétroliers Schlumberger Geologic model and property visualization system
KR102291532B1 (en) * 2020-12-31 2021-08-20 경북대학교 산학협력단 Apparatus and method for processing terrain information using filters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745436A (en) * 2013-12-23 2014-04-23 西安电子科技大学 LiDar point cloud data morphological filtering method based on area prediction
CN105074782A (en) * 2013-03-12 2015-11-18 三菱电机株式会社 Three-dimensional information processing device
CN107004302A (en) * 2014-11-28 2017-08-01 松下知识产权经营株式会社 Model building device, threedimensional model generating means, modeling method and program
US20180081035A1 (en) * 2016-09-22 2018-03-22 Beijing Greenvalley Technology Co., Ltd. Method and device for filtering point cloud data
CN108399424A (en) * 2018-02-06 2018-08-14 深圳市建设综合勘察设计院有限公司 A kind of point cloud classifications method, intelligent terminal and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5987541B2 (en) 2012-08-07 2016-09-07 株式会社大林組 Component installation judgment system
US9329272B2 (en) 2013-05-02 2016-05-03 Infineon Technologies Ag 3D camera and method of image processing 3D images
JP5949814B2 (en) 2014-03-06 2016-07-13 トヨタ自動車株式会社 Autonomous mobile robot and control method thereof
JP2017151744A (en) 2016-02-25 2017-08-31 シナノケンシ株式会社 Floor plan creating method
JP6693255B2 (en) 2016-04-28 2020-05-13 富士通株式会社 Measuring instruments and measuring systems

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KEQI ZHANG et al.: "A progressive morphological filter for removing nonground measurements from airborne LIDAR data" *
V. SANCHEZ et al.: "Planar 3D modeling of building interiors from point cloud data" *

Also Published As

Publication number Publication date
EP3648058A1 (en) 2020-05-06
JP7211005B2 (en) 2023-01-24
JP2020071501A (en) 2020-05-07
US20200134914A1 (en) 2020-04-30

Similar Documents

Publication Publication Date Title
Guo et al. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
CN111133472B (en) Method and apparatus for infrastructure design using 3D reality data
US9959670B2 (en) Method for rendering terrain
JP2018514031A (en) DeepStereo: learning to predict new views from real-world images
Rubinowicz et al. Study of city landscape heritage using LiDAR data and 3D-city models
US20150235392A1 (en) Drawing data generation device and image drawing device
JP2011501301A (en) Geospatial modeling system and related methods using multiple sources of geographic information
CN107464286B (en) Method, device, equipment and readable medium for repairing holes in three-dimensional city model
US7940262B2 (en) Unification and part hiding in three dimensional geometric data
CN111104958A (en) Medium storing topographic feature estimating program, topographic feature estimating method and device
JP4619504B2 (en) 3D digital map generator
Xiong et al. Machine learning using synthetic images for detecting dust emissions on construction sites
US9135749B2 (en) Method and apparatus for processing three-dimensional model data
JP4427656B2 (en) Survey data processing method
Favorskaya et al. Rendering of wind effects in 3D landscape scenes
Kolbeinsson et al. DDOS: the drone depth and obstacle segmentation dataset
CN112132466A (en) Route planning method, device and equipment based on three-dimensional modeling and storage medium
Cai et al. Research of dynamic terrain in complex battlefield environments
Tang et al. Moment-based metrics for mesh simplification
CN113920269A (en) Project progress obtaining method and device, electronic equipment and medium
Yan et al. Semi-automatic extraction of dangerous rock blocks from jointed rock exposures based on a discontinuity trace map
Zeng et al. An improved extraction method of individual building wall points from mobile mapping system data
Czyńska Visual Impact Analysis of Large Urban Investments on the Cityscape
JP7357087B2 (en) Flood height estimation device and program
Chiu et al. Potential applications of deep learning in automatic rock joint trace mapping in a rock mass

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20200505