WO2024029057A1 - Position correction device, displacement amount extraction system, and position correction method - Google Patents

Position correction device, displacement amount extraction system, and position correction method Download PDF

Info

Publication number
WO2024029057A1
WO2024029057A1 (PCT/JP2022/030052)
Authority
WO
WIPO (PCT)
Prior art keywords
displacement
data
dimensional data
image
position correction
Prior art date
Application number
PCT/JP2022/030052
Other languages
French (fr)
Japanese (ja)
Inventor
Masashi Watanabe (渡辺昌志)
Katsuyuki Kamei (亀井克之)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2022/030052
Publication of WO2024029057A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • The technology disclosed in this specification relates to position correction of three-dimensional data.
  • By installing sensors such as laser scanners, global navigation satellite system (GNSS) receivers, or cameras on mobile objects such as aircraft or drones and performing measurements with them, it is possible to acquire three-dimensional data, such as the shape of terrain or buildings, near the route of the mobile object.
  • Such three-dimensional data is affected by noise during GNSS reception and includes some positional deviation. When attempting to obtain displacement amounts, such as the outflow or inflow of earth and sand, with high accuracy, it is therefore important that the three-dimensional data have high positional accuracy. If positional accuracy is low, the displacement amount obtained by directly comparing three-dimensional data from before and after a disaster may differ from the actual displacement amount.
  • Patent Document 1 proposes a method of correcting the position of three-dimensional data using an iterative closest point (ICP) method.
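As the specification notes, existing techniques such as ICP can be applied for alignment. As a minimal illustration, the sketch below implements a translation-only ICP loop in Python (illustrative code, not from the patent; it matches the simplification described later in which only parallel movement is corrected):

```python
import math

def nearest(p, cloud):
    """Closest point in `cloud` to point `p` (brute force)."""
    return min(cloud, key=lambda q: math.dist(p, q))

def icp_translation(source, target, iterations=10):
    """Translation-only ICP sketch: repeatedly match each source point to its
    nearest target point, then shift the whole source cloud by the mean
    residual. Rotation is ignored (parallel-movement-only correction)."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        shift = [0.0, 0.0, 0.0]
        for p in src:
            q = nearest(p, target)
            for k in range(3):
                shift[k] += (q[k] - p[k]) / len(src)
        for p in src:
            for k in range(3):
                p[k] += shift[k]
    return [tuple(p) for p in src]
```

In the context of this patent, the source cloud would be the post-displacement data restricted to the unchanged area, and the recovered shift would then be applied to the entire post-displacement cloud.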
  • The technology disclosed in this specification was developed in view of the problems described above and provides a technique for aligning three-dimensional data with high accuracy.
  • A position correction device, which is a first aspect of the technology disclosed in this specification, treats three-dimensional data indicating position information before displacement as three-dimensional data before displacement, three-dimensional data indicating position information after displacement as three-dimensional data after displacement, and image data corresponding to the three-dimensional data after displacement as image data after displacement. The device includes an extraction section that extracts an area of the image that is not displaced in the image data after displacement as an unchanged area, and a correction section that corrects the three-dimensional data after displacement to align with the three-dimensional data before displacement based only on position information corresponding to the unchanged area.
  • With this configuration, a large number of reference points can be easily and appropriately acquired by extracting an unchanged area using the image indicated by the image data after displacement. Therefore, alignment between three-dimensional data can be performed with high accuracy.
  • FIG. 1 is a diagram conceptually showing an example of the configuration of a position correction device according to an embodiment.
  • FIG. 2 is a diagram showing an example of an image acquired by an artificial satellite after a landslide has occurred.
  • FIG. 3 is a diagram showing an example of a displacement area extracted from an image after a landslide has occurred.
  • FIG. 4 is a diagram schematically showing an example of the hardware configuration of a position correction device according to an embodiment.
  • FIG. 5 is a flowchart illustrating an example of the operation of an alignment section in a position correction device according to an embodiment.
  • FIG. 6 is a diagram conceptually showing an example of the configuration of a displacement amount extraction system according to an embodiment.
  • FIG. 7 is a flowchart showing an example of the operation of a displacement amount extraction section in the displacement amount extraction system according to the embodiment.
  • Although ordinal numbers such as "first" or "second" are sometimes used in this specification, they are used for convenience to facilitate understanding of the embodiments, and the content of the embodiments is not limited to any order implied by these ordinal numbers.
  • FIG. 1 is a diagram conceptually showing an example of the configuration of a position correction device 100 according to the present embodiment.
  • The position correction device 100 acquires three-dimensional data before and after displacement and at least image data after displacement, and extracts an unchanged area from the image data after displacement. The position correction device 100 then corrects the three-dimensional data after displacement so that the data before and after displacement are aligned using the unchanged area, and outputs the corrected three-dimensional data after displacement.
  • the position correction device 100 may also acquire image data before displacement and use it to extract the unchanged area.
  • the position correction device 100 is, for example, a computer. Further, the operation procedure of the position correction device 100 corresponds to a position correction method. Further, a program that realizes the operation of the position correction device 100 corresponds to a position correction program.
  • the position correction device 100 includes a data reading section 101, a constant area extraction section 102, a positioning section 103, and a data output section 104.
  • the data reading unit 101 reads three-dimensional data before and after displacement and at least image data after displacement.
  • Before and after displacement refers to, for example, before and after a disaster such as a landslide.
  • Three-dimensional data refers to data indicating three-dimensional position information, such as topography, obtained by performing measurements using measuring equipment (laser scanners, GNSS receivers, cameras, inertial measurement units (IMUs), etc.) mounted on artificial satellites, aircraft, drones, vehicles, or trolleys; it includes data obtained directly by the various sensors as well as data obtained by analyzing such data.
  • The image data after displacement is image data showing an image of topography or the like corresponding to the three-dimensional data after displacement. Like the three-dimensional data, the image data after displacement is acquired by a measuring device mounted on an artificial satellite, aircraft, drone, vehicle, trolley, or the like. However, the image data may be acquired by a measuring device (measurement medium) different from the one by which the three-dimensional data was acquired. The image data also includes position data indicating which point (location) in real space each part of the image, such as topography, corresponds to.
  • the unchanged area extracting unit 102 extracts an unchanged area from the image (for example, see FIG. 2) indicated by the acquired image data after displacement.
  • FIG. 2 is a diagram showing an example of an image acquired by an artificial satellite after a landslide occurs.
  • The unchanged area extraction unit 102 extracts a displacement area and an unchanged area as shown in FIG. 3 from a post-displacement image such as that shown in FIG. 2. That is, the unchanged area extraction unit 102 extracts an area where earth and sand are exposed due to a landslide as a displacement area, and extracts the other areas (not marked in FIG. 3) as an unchanged area.
  • FIG. 3 is a diagram showing an example of a displacement region extracted from an image after a landslide occurs.
  • The displacement area and the unchanged area may be extracted by manual visual confirmation, by artificial intelligence (AI) using machine learning or deep learning, or by other automatic processing.
  • AI artificial intelligence
  • Since the image data after displacement includes position data, once the displacement area and unchanged area are extracted on the image indicated by the image data, it is possible to determine which areas in real space correspond to the displacement area and the unchanged area.
  • the image data may be image data acquired by an aircraft, a drone, or the like, or image data photographed on the ground.
  • The unchanged area extraction unit 102 may acquire both image data before and after displacement (that is, image data before displacement corresponding to the three-dimensional data before displacement, and image data after displacement corresponding to the three-dimensional data after displacement). In this case, it is possible to analyze the images indicated by both sets of image data and extract the area of the image where there is no displacement as the unchanged area 300.
  • Displacement (change) of the image due to differences in angle of view, weather, time of day, image resolution, or season at the time of image acquisition is different from the displacement to be extracted. Such unrelated displacement should be excluded from the displacement amount when extracting the unchanged area. For example, when extracting displacement areas and unchanged areas caused by landslides brought about by a typhoon, an area showing autumn leaves in the image before displacement may have turned green in the image after displacement due to the seasonal difference. Such an area is not treated as a displacement area (that is, displacement due to seasonal differences not caused by the landslide is excluded) and is extracted as an unchanged area.
  • the alignment unit 103 corrects the displaced three-dimensional data in order to align the displaced three-dimensional data with the pre-displaced three-dimensional data. Then, the alignment unit 103 outputs the corrected three-dimensional data (three-dimensional data after displacement).
  • the alignment unit 103 calculates the amount of correction for correcting the three-dimensional data after the displacement, targeting only the data within the area extracted as the unchanged area in the three-dimensional data before and after the displacement.
  • Note that data near the boundary of the unchanged area may be included in or excluded from the target data when calculating the correction amount, depending on the situation. For example, if the amount of three-dimensional data in the unchanged area is sufficiently large, three-dimensional data near the boundary of the unchanged area, which has relatively low reliability, may be excluded from the target data. Conversely, if the amount of three-dimensional data in the unchanged area is insufficient, three-dimensional data near the boundary of the unchanged area may be included in the target data in order to increase the amount of target data.
  • The alignment unit 103 applies the correction amount calculated as described above not only to the unchanged area in the three-dimensional data after displacement but also to the data corresponding to the displacement area (that is, to the entire three-dimensional data after displacement), thereby aligning the entire three-dimensional data after displacement.
  • The data output unit 104 outputs the corrected three-dimensional data after displacement.
  • FIG. 4 is a diagram schematically showing an example of the hardware configuration of the position correction device 100 according to this embodiment.
  • The position correction device 100 includes, as hardware, a processor 901, a memory 902, an auxiliary storage device 903, an input device 904, a display device 905, and a communication device 906.
  • The auxiliary storage device 903 stores programs that implement the functions of the data reading section 101, unchanged area extraction section 102, alignment section 103, and data output section 104 described above.
  • These programs are loaded from the auxiliary storage device 903 into the memory 902. The processor 901 then executes the programs and performs operations that realize the functions of the data reading section 101, the unchanged area extraction section 102, the alignment section 103, and the data output section 104.
  • FIG. 4 schematically shows a state in which the processor 901 is executing a program that implements the functions of the data reading section 101, the unchanged area extraction section 102, the alignment section 103, and the data output section 104.
  • the input device 904 receives instructions and the like from the user.
  • The display device 905 displays, for example, the unchanged area or the corrected data after displacement.
  • the communication device 906 receives, for example, measurement data.
  • FIG. 5 is a flowchart showing an example of the operation of the alignment unit 103 in the position correction device 100 according to the present embodiment.
  • The alignment processing performed after extracting only the unchanged area from the point cloud data does not necessarily have to follow the method described below; for example, existing three-dimensional alignment techniques, including the ICP method, may be applied.
  • In the following, alignment processing is performed after dividing the data into point clouds for each unit time, but the data may instead be divided into point clouds for each spatial region, or alignment processing may be performed on the entire data at once without division.
  • In step ST01, the alignment unit 103 extracts only the data corresponding to the unchanged area from the acquired point cloud data before and after displacement.
  • As described above, point cloud data near the boundary of the unchanged area may be excluded from the extraction target; that is, the point cloud data used for the unchanged area may be limited. Also, the point cloud data from which only the unchanged area is extracted may be only the point cloud data before displacement, or only the point cloud data after displacement.
  • In step ST02, the alignment unit 103 divides the point cloud data after displacement for each unit time.
  • In step ST03, the alignment unit 103 extracts feature points from the point cloud data after displacement divided for each unit time and from the point cloud data before displacement.
  • the feature points mainly correspond to corners of structures or the upper and lower ends of utility poles.
  • For each point, point cloud data included in two different neighborhood sizes (for example, a 30 cm neighborhood and a 1 m neighborhood) is acquired.
  • A plane is determined by the least squares method using the point cloud data in each neighborhood, and the normal vector of each plane is obtained.
  • The difference between the normal vectors obtained for the two different neighborhoods is taken as the feature quantity of the point, and if the value of this feature quantity is equal to or greater than a threshold value, the point is designated as a feature point.
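As one way to picture this feature quantity, the following Python/NumPy sketch (illustrative, not the patent's implementation; function names are assumptions) fits a least-squares plane in each neighborhood via the covariance eigenvectors and takes the norm of the normal-vector difference:

```python
import numpy as np

def plane_normal(points):
    """Total-least-squares plane normal: the eigenvector of the covariance
    matrix with the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    w, v = np.linalg.eigh(cov)      # eigenvalues ascending
    n = v[:, 0]                     # smallest-eigenvalue eigenvector = normal
    return n if n[2] >= 0 else -n   # fix sign for comparability (fails only for vertical planes)

def normal_difference_feature(point, cloud, r_small=0.3, r_large=1.0):
    """Feature quantity of ST03: difference between the normals fitted in two
    neighborhood sizes (e.g. 30 cm and 1 m)."""
    p = np.asarray(point, dtype=float)
    pts = np.asarray(cloud, dtype=float)
    d = np.linalg.norm(pts - p, axis=1)
    n1 = plane_normal(pts[d <= r_small])
    n2 = plane_normal(pts[d <= r_large])
    return float(np.linalg.norm(n1 - n2))
```

On a locally flat surface the two normals agree and the feature is near zero; near corners of structures the normals diverge and the feature exceeds the threshold.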
  • In this example, the difference in normal vectors is used as the feature quantity, but, for example, a difference in curvature or a difference in the centers of gravity may be extracted and used as the feature quantity instead.
  • a plurality of extracted feature amounts may be used in combination to determine whether a point is a feature point or not.
  • As for the curvature: for example, after acquiring the point cloud data in the two different neighborhood sizes, principal component analysis of the point cloud data is performed, and the curvature can be obtained from the eigenvalues of the covariance matrix.
  • As for the centroid: for example, it can be obtained by averaging the coordinate values of the point cloud data in each of the two neighborhood sizes, and the distance between the centroids corresponding to the two neighborhoods can be used as a feature quantity.
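These alternative feature quantities can be sketched as below (illustrative Python/NumPy; the patent says only that curvature is obtained from the covariance eigenvalues via principal component analysis, so the common "surface variation" form λ0/(λ0+λ1+λ2) used here is an assumption):

```python
import numpy as np

def neighbourhood_descriptors(point, cloud, radius):
    """Curvature and centroid of one neighborhood.

    Curvature: smallest covariance eigenvalue over the eigenvalue sum
    (assumed 'surface variation' form). Centroid: coordinate mean."""
    p = np.asarray(point, dtype=float)
    pts = np.asarray(cloud, dtype=float)
    near = pts[np.linalg.norm(pts - p, axis=1) <= radius]
    centroid = near.mean(axis=0)
    centered = near - centroid
    eigvals = np.linalg.eigvalsh(centered.T @ centered / len(near))  # ascending
    curvature = eigvals[0] / eigvals.sum()
    return curvature, centroid
```

Computing these for the two neighborhood sizes and differencing them (curvature difference, centroid distance) yields the alternative feature quantities mentioned above.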
  • In step ST04, the alignment unit 103 performs feature point aggregation processing. Specifically, feature points that are close to each other are aggregated, and a new feature point is generated at their center of gravity. The condition for aggregation may be not only that the distances are close but also that the values of the respective feature quantities are close.
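The aggregation step can be pictured with a simple greedy clustering sketch (illustrative Python; the patent does not specify the clustering method or radius, so both are assumptions):

```python
import math

def aggregate_feature_points(points, radius=0.5):
    """ST04 sketch: greedily group feature points closer than `radius` to a
    seed point and replace each group by its center of gravity."""
    remaining = [tuple(p) for p in points]
    merged = []
    while remaining:
        seed = remaining.pop(0)
        group = [seed] + [p for p in remaining if math.dist(seed, p) <= radius]
        remaining = [p for p in remaining if math.dist(seed, p) > radius]
        # New feature point at the centroid of the group.
        merged.append(tuple(sum(c) / len(group) for c in zip(*group)))
    return merged
```

Adding a feature-quantity similarity check to the grouping condition, as the description suggests, would only change the list comprehension's predicate.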
  • In step ST05, the alignment unit 103 associates the feature points acquired from the point cloud data before and after displacement with each other.
  • The correspondence between feature points is determined based on the distance between points (horizontal, vertical, or three-dimensional), the similarity of feature quantities, or other assigned attributes.
  • In step ST06, the alignment unit 103 calculates the correction amount for the corresponding unit time from the associated points.
  • All six axes (three translational and three rotational) may be corrected, or only translation may be corrected.
  • In step ST07, the alignment unit 103 determines whether the correction amount has been calculated for all the point cloud data divided for each unit time. If the correction amount has been determined for all the point cloud data ("YES" in FIG. 5), the process proceeds to step ST08. If point cloud data for which the correction amount has not been determined remains ("NO" in FIG. 5), the process returns to step ST03.
  • In step ST08, the alignment unit 103 recalculates the correction amount for each unit time by applying smoothing processing to the correction amounts obtained for each unit time, and applies the recalculated correction amounts to the point cloud data after displacement, completing the correction of the entire point cloud data.
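The smoothing method is not specified in the description; as one possibility, a simple moving average over the per-unit-time correction amounts could look like this (illustrative Python, translation-only corrections assumed):

```python
def smooth_corrections(corrections, window=3):
    """ST08 sketch: recompute each unit-time correction vector as the average
    of a sliding window over neighbouring unit times (simple moving average,
    truncated at the sequence ends)."""
    half = window // 2
    dim = len(corrections[0])
    smoothed = []
    for i in range(len(corrections)):
        lo, hi = max(0, i - half), min(len(corrections), i + half + 1)
        block = corrections[lo:hi]
        smoothed.append(tuple(sum(c[k] for c in block) / len(block) for k in range(dim)))
    return smoothed
```

Smoothing suppresses outlier corrections from unit times with few or poorly matched feature points before the corrections are applied to the post-displacement point cloud.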
  • As described above, with the position correction device, when aligning point cloud data before and after displacement, the image indicated by the image data after displacement can be used to appropriately limit the area (range) of data used for alignment. Therefore, alignment between point cloud data can be performed with high accuracy.
  • Furthermore, by performing displacement amount extraction processing using three-dimensional data corrected with high accuracy by the position correction device according to the present embodiment, information on the displacement amount can be acquired with high accuracy.
  • If highly accurate displacement information is available, then, for example, in the case of displacement caused by a disaster, the time and cost required for restoration work can be estimated with high precision, contributing to faster decision-making. Further, if the displacement is due to deterioration of equipment over time, it is possible to appropriately judge whether repair work is necessary.
  • In the present embodiment, an image indicated by the image data after displacement is used. Compared with predicting the displacement area or unchanged area in advance and performing alignment processing using that information, this eliminates the need for advance preparation work, and has the advantage that alignment can be performed properly even when displacement of a larger scale than expected occurs.
  • In the present embodiment, the unchanged area is extracted as a range within the image. Therefore, compared with, for example, manually setting up and measuring ground control points (GCPs) that serve as reference points during drone surveying in areas that were not displaced after a disaster, this method can acquire a large number of reference points for alignment and thus has the advantage of high-accuracy alignment. Furthermore, there is no need to actually install GCPs on site.
  • FIG. 6 is a diagram conceptually showing an example of the configuration of the displacement amount extraction system 200 according to the present embodiment.
  • the displacement amount extraction system 200 includes a position correction device 100, a displacement amount extraction section 201, and a data display section 202.
  • the displacement amount extraction system 200 also performs the operation shown in FIG. 5, and the alignment unit 103 corrects the entire three-dimensional data after displacement.
  • The displacement amount extraction unit 201 compares the corrected three-dimensional data after displacement and the three-dimensional data before displacement, which are output from the data output unit 104, and extracts the amount of displacement at each position in the three-dimensional data.
  • The data display section 202 displays the displacement amount extraction result from the displacement amount extraction section 201 on a screen in an easy-to-understand manner. For example, when displaying a two-dimensional map, areas where the height has decreased due to sediment outflow are displayed in blue, areas where the height has increased due to sediment inflow or the like are displayed in red, and the display color is graded according to the amount of height displacement. Alternatively, the increase or decrease at each position may be displayed as a three-dimensional graph.
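The color rule described above might be sketched as follows (illustrative Python; the linear grading and the `max_abs` cap are assumptions, not from the description):

```python
def displacement_color(dz, max_abs=1.0):
    """Map a height displacement to an RGB color: decreases (outflow) shade
    toward blue, increases (inflow) toward red, graded linearly by magnitude
    and clamped at max_abs metres (assumed scale)."""
    t = max(-1.0, min(1.0, dz / max_abs))  # clamp to [-1, 1]
    if t >= 0:
        return (int(255 * t), 0, 0)        # inflow -> red channel
    return (0, 0, int(255 * -t))           # outflow -> blue channel
```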
  • FIG. 7 is a flowchart showing an example of the operation of the displacement amount extraction unit 201 in the displacement amount extraction system 200 according to the present embodiment.
  • the method for extracting the amount of displacement is not limited to the method shown below, and other methods may be used, such as, for example, calculating the amount of displacement after generating three-dimensional polygon data from point cloud data.
  • In step ST11, the displacement amount extraction unit 201 divides the entire area of the three-dimensional data before and after displacement into mesh sections.
  • The size of each section is, for example, a 10 cm square.
  • In step ST12, the displacement amount extraction unit 201 extracts the average height from the three-dimensional data existing in each mesh section and sets this as the height of that section.
  • The height extracted here need not be the average; it may instead be the minimum or maximum height of the three-dimensional data existing within the mesh section.
  • In step ST13, the displacement amount extraction unit 201 extracts the amount of height displacement before and after displacement in each mesh section. A displacement map can thereby be generated over the entire area of the three-dimensional data.
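Steps ST11 to ST13 can be sketched as a whole (illustrative Python; the 10 cm cell size follows the example above, and the function names are assumptions):

```python
from collections import defaultdict

def mesh_heights(points, cell=0.1):
    """ST11-ST12 sketch: bucket (x, y, z) points into cell-sized squares
    (10 cm in the example) and take the mean height per occupied cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

def displacement_map(before, after, cell=0.1):
    """ST13 sketch: per-cell height difference between aligned before/after
    point clouds, over cells occupied in both."""
    hb, ha = mesh_heights(before, cell), mesh_heights(after, cell)
    return {k: ha[k] - hb[k] for k in hb.keys() & ha.keys()}
```

Swapping `sum(v) / len(v)` for `min(v)` or `max(v)` gives the minimum- or maximum-height variant mentioned above.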
  • As described above, the height displacement amount can be determined based on three-dimensional data before and after displacement that has been aligned with high accuracy. Therefore, an accurate displacement amount can be determined.
  • Furthermore, the extracted displacement amount can be displayed in a visually intuitive manner, making it easy to quickly grasp the situation in the event of a disaster or the like.
  • the processor 901 is an integrated circuit (IC) that performs processing.
  • the processor 901 is, for example, a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • Memory 902 is random access memory (RAM).
  • the auxiliary storage device 903 shown in FIG. 4 is a read only memory (ROM), a flash memory, a hard disk drive (HDD), or the like.
  • the communication device 906 shown in FIG. 4 is an electronic circuit that executes data communication processing.
  • Communication device 906 is, for example, a communication chip or a network interface card (NIC).
  • the auxiliary storage device 903 stores an operating system (OS). At least a portion of the OS is executed by the processor 901.
  • While executing at least a part of the OS, the processor 901 executes programs that realize the functions of the data reading section 101, the unchanged area extraction section 102, the alignment section 103, the data output section 104, the displacement amount extraction section 201, and the data display section 202.
  • When the processor 901 executes the OS, task management, memory management, file management, communication control, and the like are performed.
  • At least one of the information, data, signal values, and variable values indicating the results of processing by the data reading section 101, the unchanged area extraction section 102, the alignment section 103, the data output section 104, the displacement amount extraction section 201, and the data display section 202 is stored in at least one of the memory 902, the auxiliary storage device 903, a register in the processor 901, and a cache memory.
  • The programs that realize the functions of the data reading section 101, the unchanged area extraction section 102, the alignment section 103, the data output section 104, the displacement amount extraction section 201, and the data display section 202 may be stored in a portable recording medium such as a magnetic disk, flexible disk, optical disc, compact disc, Blu-ray (registered trademark) disc, or DVD. A portable recording medium storing such a program may then be distributed.
  • The "section" of each of the data reading section 101, the unchanged area extraction section 102, the alignment section 103, the data output section 104, the displacement amount extraction section 201, and the data display section 202 may be read as "circuit", "step", "procedure", or "process".
  • the position correction device 100 and the displacement amount extraction system 200 may be realized by a processing circuit.
  • The processing circuit is, for example, a logic IC (Integrated Circuit), a GA (Gate Array), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
  • The data reading section 101, the unchanged area extraction section 102, the alignment section 103, the data output section 104, the displacement amount extraction section 201, and the data display section 202 are each realized as part of the processing circuit.
  • The general concept encompassing the processor 901 and the processing circuit is referred to as "processing circuitry". That is, the processor 901 and the processing circuit are each specific examples of "processing circuitry".
  • Such replacement may also be performed across multiple embodiments. That is, the respective configurations shown as examples in different embodiments may be combined to produce similar effects.
  • The position correction device includes the unchanged area extraction section 102 and a correction section.
  • the correction section corresponds to, for example, the alignment section 103.
  • Three-dimensional data indicating position information before displacement is defined as three-dimensional data before displacement. Let the three-dimensional data indicating the position information after the displacement be the three-dimensional data after the displacement. Image data corresponding to the three-dimensional data after displacement is defined as image data after displacement.
  • the unchanged area extraction unit 102 extracts an undisplaced image area in the image data after displacement as an unchanged area 300.
  • the alignment unit 103 corrects the 3D data after the displacement to align it with the 3D data before the displacement, using only the position information corresponding to the unchanged area 300 as a reference.
  • the image data corresponding to the three-dimensional data before displacement is used as the image data before displacement.
  • the unchanged area extracting unit 102 extracts, as the unchanged area 300, an area of the image that is not displaced between the image data before displacement and the image data after displacement.
  • the unchanged region 300 can be easily specified based on the amount of displacement between the image data before and after the displacement. Therefore, the unchanged area 300 can be specified more easily and with higher precision than when the unchanged area 300 is specified based on three-dimensional data before and after displacement. By identifying the unchanged area 300 with high accuracy, alignment can also be corrected with high accuracy.
  • The unchanged area extraction unit 102 extracts the unchanged area 300 based on the image data before displacement indicating an image from before the occurrence of a disaster and the image data after displacement indicating an image from after the occurrence of the disaster.
  • the unchanged area extraction unit 102 extracts the unchanged area 300 based on the displaced image data indicating a satellite image acquired by an artificial satellite. According to such a configuration, by using a satellite image acquired by an artificial satellite as image data, it is possible to extract the unchanged area 300 based on information such as a wide range of topography.
  • The first measurement medium by which the three-dimensional data is acquired and the second measurement medium by which the image data is acquired may be different.
  • For example, three-dimensional data acquired by a drone or a mobile mapping system (MMS) can be corrected based on the unchanged area 300 extracted from image data acquired by photographing from an artificial satellite or an aircraft, enabling high-accuracy alignment.
  • the measurement media are not limited to a camera and a light detection and ranging (LiDAR) sensor mounted on a single drone; the term also covers cases where the moving bodies carrying the measurement media (measuring instruments) differ, for example a drone and another aircraft, a drone and an MMS vehicle, or a drone and an artificial satellite.
  • the displacement amount extraction system includes the above-described position correction device 100 and the displacement amount extraction section 201.
  • the displacement amount extraction unit 201 extracts the amount of displacement between the positions indicated by the three-dimensional data before displacement and the three-dimensional data after displacement, based on the pre-displacement three-dimensional data and the post-displacement three-dimensional data corrected by the position correction device.
  • an accurate map of the displacement amount can be obtained.
  • three-dimensional data indicating position information before displacement is used as three-dimensional data before displacement
  • three-dimensional data indicating position information after displacement is used as the three-dimensional data after displacement.
  • when a material is mentioned, unless otherwise specified, it also includes materials containing other additives, such as alloys.
  • each component in the embodiments described above is a conceptual unit; within the scope of the technology disclosed in this specification, this includes the case where one component consists of a plurality of structures, the case where one component corresponds to part of a structure, and the case where a plurality of components are included in one structure.
  • each component in the embodiments described above includes structures having other structures or shapes as long as they exhibit the same function.
  • each of the components in the embodiments described above can be implemented as software or firmware, or as hardware corresponding thereto.
  • the hardware is called, for example, a “processing circuit” or “processor” (circuitry).
  • 100 Position correction device, 102 Unchanged area extraction unit, 200 Displacement amount extraction system, 201 Displacement amount extraction unit, 300 Unchanged area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The purpose of the present invention is to align three-dimensional data with high precision. This position correction device comprises: an invariable region extraction unit that takes three-dimensional data representing pre-displacement position information as pre-displacement three-dimensional data, takes three-dimensional data representing post-displacement position information as post-displacement three-dimensional data, takes image data corresponding to the post-displacement three-dimensional data as post-displacement image data, and extracts a non-displaced image region in the post-displacement image data as an invariable region; and a correction unit that, using only position information corresponding to the invariable region as a reference, performs a correction so as to align the post-displacement three-dimensional data with the pre-displacement three-dimensional data.

Description

Position correction device, displacement amount extraction system, and position correction method
The technology disclosed in this specification relates to position correction of three-dimensional data.
By mounting sensors such as a laser scanner, a global navigation satellite system (GNSS) receiver, or a camera on a moving body such as an aircraft or a drone and performing measurements with them, it is possible to acquire three-dimensional data, such as the terrain shape or building shapes, near the travel route of the moving body.
By acquiring such three-dimensional data before and after a disaster such as a typhoon and comparing the two, it is possible to grasp the amount of earth and sand that flowed out of and into each location due to a landslide, which can then be used for drawing up subsequent recovery plans.
However, three-dimensional data is affected by noise during GNSS reception and contains a certain degree of positional deviation. Therefore, when displacement amounts such as the outflow and inflow of earth and sand are to be obtained with high accuracy, it is important that the three-dimensional data have high positional accuracy. If the positional accuracy is low, even when a displacement amount is extracted by directly comparing the three-dimensional data before and after a disaster, it may differ from the actual displacement.
To appropriately correct the discrepancy between the displacement amount based on the three-dimensional data and the actual displacement, it is necessary to extract, from the three-dimensional data before and after the disaster, the locations where no displacement occurred, and to correct the position of the entire three-dimensional data set so that those locations coincide.
For example, Patent Document 1 proposes a method of correcting the position of three-dimensional data using the iterative closest point (ICP) method.
JP 2017-207438 A
In the technique described in Patent Document 1, the position of the three-dimensional data is corrected using the ICP method; however, the ICP method performs the alignment correction on the entire three-dimensional data set, including locations where displacement occurred. As a result, the accuracy of the correction of the three-dimensional data was not sufficiently high.
The technology disclosed in this specification was made in view of the problems described above, and is a technology for aligning three-dimensional data with high accuracy.
A position correction device according to a first aspect of the technology disclosed in this specification includes: an unchanged area extraction unit that, taking three-dimensional data indicating position information before displacement as pre-displacement three-dimensional data, three-dimensional data indicating position information after displacement as post-displacement three-dimensional data, and image data corresponding to the post-displacement three-dimensional data as post-displacement image data, extracts a non-displaced image region in the post-displacement image data as an unchanged area; and a correction unit that corrects the post-displacement three-dimensional data so as to align it with the pre-displacement three-dimensional data, using only position information corresponding to the unchanged area as a reference.
According to at least the first aspect of the technology disclosed in this specification, extracting the unchanged area from the image indicated by the post-displacement image data makes it possible to obtain a large number of reference points easily and appropriately. Therefore, three-dimensional data sets can be aligned with each other with high accuracy.
Objects, features, aspects, and advantages related to the technology disclosed in this specification will become more apparent from the detailed description below and the accompanying drawings.
FIG. 1 is a diagram conceptually showing an example of the configuration of a position correction device according to an embodiment.
FIG. 2 is a diagram showing an example of an image acquired by an artificial satellite after a landslide.
FIG. 3 is a diagram showing an example of a displacement region extracted from an image after a landslide.
FIG. 4 is a diagram schematically showing an example of the hardware configuration of a position correction device according to an embodiment.
FIG. 5 is a flowchart showing an example of the operation of an alignment unit in the position correction device according to an embodiment.
FIG. 6 is a diagram conceptually showing an example of the configuration of a displacement amount extraction system according to an embodiment.
FIG. 7 is a flowchart showing an example of the operation of a displacement amount extraction unit in the displacement amount extraction system according to an embodiment.
Hereinafter, embodiments will be described with reference to the accompanying drawings. In the following embodiments, detailed features and the like are shown for the purpose of explaining the technology, but they are examples, and not all of them are necessarily essential for the embodiments to be practicable.
Note that the drawings are schematic, and, for convenience of explanation, structures are omitted or simplified in the drawings as appropriate. The relative sizes and positions of the structures shown in different drawings are not necessarily depicted accurately and may be changed as appropriate. In drawings that are not cross-sectional views, such as plan views, hatching may also be added to make the content of the embodiments easier to understand.
In the following description, similar components are denoted by the same reference numerals in the drawings, and their names and functions are likewise the same. Detailed descriptions of them may therefore be omitted to avoid repetition.
In the description in this specification, expressions such as "comprising", "including", or "having" a certain component are not, unless otherwise specified, exclusive expressions that rule out the presence of other components.
Even when ordinal numbers such as "first" or "second" are used in the description in this specification, these terms are used for convenience to make the content of the embodiments easier to understand, and the content of the embodiments is not limited to any order that these ordinal numbers might imply.
<First embodiment>
Hereinafter, a position correction device and a position correction method according to the present embodiment will be described.
<Configuration of the position correction device>
FIG. 1 is a diagram conceptually showing an example of the configuration of a position correction device 100 according to the present embodiment. The position correction device 100 acquires three-dimensional data before and after displacement and at least post-displacement image data, and extracts an unchanged area from the post-displacement image data. The position correction device 100 then corrects the post-displacement three-dimensional data so as to align the three-dimensional data before and after the displacement using the unchanged area, and outputs the corrected post-displacement three-dimensional data. Note that the position correction device 100 may also acquire pre-displacement image data and use it for extracting the unchanged area.
The position correction device 100 is, for example, a computer. The operation procedure of the position correction device 100 corresponds to a position correction method, and a program that realizes the operation of the position correction device 100 corresponds to a position correction program.
As illustrated in FIG. 1, the position correction device 100 includes a data reading unit 101, an unchanged area extraction unit 102, an alignment unit 103, and a data output unit 104.
The data reading unit 101 reads the three-dimensional data before and after displacement and at least the post-displacement image data. Here, "before and after displacement" refers, for example, to before and after a disaster such as a landslide. The three-dimensional data is data indicating three-dimensional position information, such as terrain, acquired by performing measurements with measuring instruments (a laser scanner, a GNSS receiver, a camera, an inertial measurement unit (IMU), or the like) mounted on an artificial satellite, aircraft, drone, vehicle, cart, or the like; it includes data acquired directly by various sensors as well as data obtained by analyzing such data.
The post-displacement image data is image data showing an image, such as terrain, corresponding to the post-displacement three-dimensional data. Like the three-dimensional data, the post-displacement image data is acquired by measuring instruments mounted on an artificial satellite, aircraft, drone, vehicle, cart, or the like. However, the image data may be acquired by a measuring instrument (measurement medium) different from the one that acquires the three-dimensional data. The image data also includes position data indicating which point (location) in real space the image of the terrain or the like corresponds to.
The unchanged area extraction unit 102 extracts an unchanged area from the image indicated by the acquired post-displacement image data (see, for example, FIG. 2). FIG. 2 is a diagram showing an example of an image acquired by an artificial satellite after a landslide.
For example, from a post-displacement image such as the one shown in FIG. 2, the unchanged area extraction unit 102 extracts a displaced region and an unchanged area as illustrated in FIG. 3. That is, the unchanged area extraction unit 102 treats the region where earth and sand are exposed due to the landslide as the displaced region and extracts the remaining region (not shown in FIG. 3) as the unchanged area. FIG. 3 is a diagram showing an example of a displaced region extracted from an image after a landslide.
The displaced region and the unchanged area may be extracted by manual visual inspection, by artificial intelligence (AI) using machine learning or deep learning, or by other automatic processing.
As described above, because the post-displacement image data includes position data, once the displaced region and the unchanged area have been extracted on the image indicated by the image data, it can be determined which regions in real space correspond to them. Note that, besides image data acquired by an artificial satellite or the like, the image data may be image data acquired by an aircraft, a drone, or the like, or image data photographed on the ground.
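The mapping from extracted image regions to real-space locations can be sketched as follows. The patent only states that the image data carries position data; the north-up affine geotransform below (an origin plus per-pixel ground size) is an assumption borrowed from common GIS raster conventions, not a detail of the disclosure.

```python
# Hypothetical pixel-to-world mapping for a north-up georeferenced image:
# the image origin sits at (origin_x, origin_y) in real space, and each
# pixel covers pixel_w x pixel_h ground units.

def pixel_to_world(col, row, origin_x, origin_y, pixel_w, pixel_h):
    """Return the real-space (x, y) of the centre of pixel (col, row)."""
    x = origin_x + (col + 0.5) * pixel_w
    y = origin_y - (row + 0.5) * pixel_h  # image rows grow downwards
    return x, y

def region_to_world(pixels, origin_x, origin_y, pixel_w, pixel_h):
    """Convert a set of (col, row) pixels (e.g. an extracted unchanged
    area) into real-space points."""
    return [pixel_to_world(c, r, origin_x, origin_y, pixel_w, pixel_h)
            for c, r in pixels]
```

With such a transform, an unchanged area extracted on the image can be expressed as a set of real-space coordinates and matched against the point cloud data.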
The unchanged area extraction unit 102 may also acquire both the pre- and post-displacement image data (that is, both the pre-displacement image data corresponding to the pre-displacement three-dimensional data and the post-displacement image data corresponding to the post-displacement three-dimensional data). In this case, the images indicated by the two sets of image data can be analyzed, and the image region that was not displaced can be extracted as the unchanged area 300. However, if image changes caused by differences in the angle of view, weather, time of day, image resolution, season, or the like at the time of acquisition differ from the displacement (change) to be extracted, the unchanged area is extracted after taking such changes into account (that is, displacement attributable to causes other than the displacement to be extracted is excluded from the displacement amount used when extracting the unchanged area). For example, when extracting the displaced region and the unchanged area caused by a landslide due to a typhoon, even if a region that had autumn foliage in the pre-displacement image has green foliage in the post-displacement image because of the difference in season, that region is not treated as a displaced region (that is, displacement due to seasonal differences, which is not displacement caused by the landslide, is excluded) and is extracted as an unchanged area as long as there is no landslide-induced displacement.
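A minimal sketch of extracting an unchanged area by differencing the pre- and post-displacement images is shown below. Real implementations (or the AI-based classifiers mentioned above) are far more robust and would also compensate for the seasonal and acquisition differences just described; this example simply thresholds the per-pixel absolute difference of two small grayscale grids.

```python
# Hypothetical change detection: mark a pixel as "unchanged" when the
# absolute grayscale difference between the two images stays under a
# threshold. The threshold value is an illustrative assumption.

def unchanged_mask(before, after, threshold):
    """Return a boolean grid: True where the image did not change."""
    rows, cols = len(before), len(before[0])
    return [[abs(before[r][c] - after[r][c]) <= threshold
             for c in range(cols)] for r in range(rows)]
```

The resulting mask plays the role of the unchanged area 300: only point cloud data falling inside `True` cells would feed the later alignment step.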
The alignment unit 103 corrects the post-displacement three-dimensional data in order to align it with the pre-displacement three-dimensional data, and then outputs the corrected (post-displacement) three-dimensional data.
The alignment unit 103 calculates the correction amount for the post-displacement three-dimensional data using only the data within the region extracted as the unchanged area in the three-dimensional data before and after the displacement. Depending on the situation, data near the boundary of the unchanged area may be included in or excluded from the data used to calculate the correction amount. For example, if the amount of three-dimensional data in the unchanged area is sufficiently large, the data near the boundary, whose reliability is relatively low, may be excluded from the target data; conversely, if the amount of three-dimensional data in the unchanged area is insufficient, the data near the boundary may be included in order to increase the amount of target data.
Furthermore, the alignment unit 103 applies the correction amount calculated as described above to the data corresponding to the unchanged area and also to the displaced region in the post-displacement three-dimensional data (that is, to the entire post-displacement three-dimensional data), thereby aligning the whole post-displacement data set.
The data output unit 104 outputs the corrected post-displacement three-dimensional data.
<Hardware configuration of the position correction device>
FIG. 4 is a diagram schematically showing an example of the hardware configuration of the position correction device 100 according to the present embodiment.
As illustrated in FIG. 4, the position correction device 100 includes, as hardware, a processor 901, a memory 902, an auxiliary storage device 903, an input device 904, a display device 905, and a communication device 906.
The auxiliary storage device 903 stores programs that implement the functions of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, and the data output unit 104.
These programs are loaded from the auxiliary storage device 903 into the memory 902, and the processor 901 executes them to realize the functions of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, and the data output unit 104.
FIG. 4 schematically shows the state in which the processor 901 is executing the programs that implement the functions of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, and the data output unit 104.
The input device 904 receives instructions and the like from the user. The display device 905 displays the unchanged area, the corrected post-displacement image data, and the like. The communication device 906 receives, for example, measurement data.
<Operation of the position correction device>
Next, an example of the operation of the alignment unit 103 in the position correction device 100 according to the present embodiment will be described with reference to FIG. 5. FIG. 5 is a flowchart showing an example of the operation of the alignment unit 103 in the position correction device 100 according to the present embodiment.
The following description assumes that the acquired three-dimensional data is point cloud data. However, in the operation example below, the alignment processing performed after the step of extracting only the unchanged area from the point cloud data need not follow this particular method; existing three-dimensional registration methods, including the ICP method, may be applied instead. Also, in the operation example below, the registration is performed after dividing the data into point clouds per unit time, but the data may instead be divided into point clouds per spatial region, or the whole data set may be registered at once without any division.
First, in step ST01, the alignment unit 103 extracts only the data corresponding to the unchanged area from the acquired point cloud data before and after displacement.
At this time, because the position data included in the image data, or the post-displacement point cloud data itself, may contain positional errors, the point cloud data near the boundary of the unchanged area may be excluded from extraction, thereby limiting the point cloud data used for the unchanged area. Also, depending on the registration processing described below, the point cloud data from which only the unchanged area is extracted may be only the pre-displacement point cloud data or only the post-displacement point cloud data.
Next, in step ST02, the alignment unit 103 divides the post-displacement point cloud data by unit time.
Next, in step ST03, the alignment unit 103 extracts feature points from the per-unit-time post-displacement point cloud data and from the pre-displacement point cloud data. Here, feature points mainly correspond to corners of structures or the upper and lower ends of utility poles.
To extract feature points, first, for each point in the acquired three-dimensional point cloud, the points contained in two different neighborhood regions (for example, a 30 cm neighborhood and a 1 m neighborhood) are collected. A plane is then fitted to the points in each neighborhood by the least squares method, and the normal vector of each plane is obtained. The difference between the normal vectors obtained for the two neighborhoods is used as the feature value of the point, and if this feature value is equal to or greater than a threshold, the point is treated as a feature point.
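The two-neighborhood feature value described above can be sketched as follows. A plane z = a*x + b*y + c is least-squares fitted to each neighborhood (the 3x3 normal equations are solved with Cramer's rule), its unit normal is taken, and the norm of the difference between the two neighborhood normals serves as the point's feature value. The neighborhood radii follow the 30 cm / 1 m example in the text; the explicit-plane parameterisation (which assumes the surface is not vertical) is an illustrative choice.

```python
import math

def fit_plane_normal(points):
    """Unit normal of the least-squares plane z = a*x + b*y + c."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    # Normal equations: [[sxx,sxy,sx],[sxy,syy,sy],[sx,sy,n]] @ (a,b,c)^T = (sxz,syz,sz)^T
    det = (sxx * (syy * n - sy * sy) - sxy * (sxy * n - sy * sx)
           + sx * (sxy * sy - syy * sx))
    a = (sxz * (syy * n - sy * sy) - sxy * (syz * n - sy * sz)
         + sx * (syz * sy - syy * sz)) / det
    b = (sxx * (syz * n - sz * sy) - sxz * (sxy * n - sy * sx)
         + sx * (sxy * sz - syz * sx)) / det
    norm = math.sqrt(a * a + b * b + 1.0)
    return (-a / norm, -b / norm, 1.0 / norm)

def neighborhood(points, center, radius):
    """All points within `radius` of `center`."""
    return [p for p in points if math.dist(p, center) <= radius]

def feature_value(points, center, r_small=0.3, r_large=1.0):
    """Norm of the difference between the small- and large-radius normals."""
    n1 = fit_plane_normal(neighborhood(points, center, r_small))
    n2 = fit_plane_normal(neighborhood(points, center, r_large))
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(n1, n2)))
```

On a locally flat surface the two normals agree and the feature value is near zero; near a corner or edge the small and large neighborhoods see different planes and the value grows, which is what the threshold test exploits.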
Here, the difference between normal vectors is used as the feature value, but, for example, a difference in curvature or a difference in centroid position may be extracted and used instead. A plurality of extracted feature values may also be used in combination to determine whether a point is a feature point.
The curvature can be extracted, for example, by acquiring the point cloud data in the two different neighborhoods and performing a principal component analysis on each; the curvature is then obtained from the eigenvalues of the covariance matrix. The centroid can be obtained, for example, by averaging the coordinate values of the point cloud data in each of the two neighborhoods, and the distance between the centroids corresponding to the two neighborhoods can be used as the feature value.
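The centroid-difference feature value mentioned above can be sketched very compactly: the centroid of each of the two neighborhoods is the mean of its points, and the distance between the two centroids is the feature value. The radii are illustrative assumptions.

```python
import math

def centroid(points):
    """Mean position of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def centroid_feature(points, center, r_small, r_large):
    """Distance between the centroids of the two neighborhoods."""
    near = lambda r: [p for p in points if math.dist(p, center) <= r]
    c1, c2 = centroid(near(r_small)), centroid(near(r_large))
    return math.dist(c1, c2)
```

On symmetric, flat geometry the two centroids coincide and the feature is zero; asymmetric geometry such as an edge pulls the larger neighborhood's centroid away and the feature grows.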
Next, in step ST04, the alignment unit 103 aggregates the feature points. Specifically, feature points that are close to each other are grouped together, and a new feature point is generated at their centroid. At this time, closeness of the respective feature values, and not only closeness of the distances, may also be used as a condition for aggregation.
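The aggregation in step ST04 can be sketched as follows: points within a radius of a cluster's first member are grouped, and each group is replaced by its centroid. The greedy single-pass grouping and the radius are illustrative assumptions; the patent does not prescribe a particular clustering method.

```python
import math

def aggregate(points, radius):
    """Group nearby feature points and return one centroid per group."""
    clusters = []  # each cluster is a list of points
    for p in points:
        for cl in clusters:
            if math.dist(p, cl[0]) <= radius:  # compare to cluster seed
                cl.append(p)
                break
        else:
            clusters.append([p])
    return [tuple(sum(q[i] for q in cl) / len(cl) for i in range(3))
            for cl in clusters]
```

The feature-value similarity condition mentioned in the text could be added by also comparing stored feature values before admitting a point to a cluster.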
Next, in step ST05, the alignment unit 103 associates the feature points obtained from the pre- and post-displacement point cloud data with each other. The association is determined based on whether the inter-point distance (horizontal, vertical, or three-dimensional), the similarity of the feature values, or other assigned attributes correspond.
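The distance-and-similarity test of step ST05 can be sketched as a nearest-neighbor search with two acceptance conditions. The thresholds and the tie-breaking rule are illustrative assumptions, not details of the disclosure.

```python
import math

def match_features(pre, post, max_dist, max_feat_diff):
    """pre/post: lists of (xyz, feature_value). Returns (i, j) index pairs
    associating each pre-displacement feature point with the nearest
    post-displacement point that passes both thresholds."""
    pairs = []
    for i, (p, fp) in enumerate(pre):
        best = None
        for j, (q, fq) in enumerate(post):
            d = math.dist(p, q)
            if d <= max_dist and abs(fp - fq) <= max_feat_diff:
                if best is None or d < best[1]:
                    best = (j, d)
        if best is not None:
            pairs.append((i, best[0]))
    return pairs
```

A pair close in space but dissimilar in feature value is rejected, which filters out accidental matches between, say, a building corner and a nearby pole top.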
Next, in step ST06, the alignment unit 103 obtains the correction amount for the corresponding unit time from the associated points. When obtaining the correction amount, all six axes may be corrected, or the correction may be limited to translation only.
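For the translation-only variant, a minimal sketch is the mean offset from the post-displacement points to their associated pre-displacement points; the six-axis variant would instead solve for a rigid rotation and translation. Only the translation case is shown here.

```python
def translation_correction(matched_pairs):
    """matched_pairs: list of (pre_xyz, post_xyz) tuples.
    Returns the mean (dx, dy, dz) moving post onto pre."""
    n = len(matched_pairs)
    return tuple(sum(pre[i] - post[i] for pre, post in matched_pairs) / n
                 for i in range(3))

def apply_correction(points, t):
    """Shift every point by the correction vector t."""
    return [(x + t[0], y + t[1], z + t[2]) for x, y, z in points]
```

Note that `apply_correction` would be applied to the entire per-unit-time post-displacement point cloud, not only to the unchanged-area points used to compute the correction.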
Next, in step ST07, the alignment unit 103 determines whether correction amounts have been obtained for all of the per-unit-time point cloud data. If correction amounts have been obtained for all the point cloud data, corresponding to the "YES" branch of step ST07 shown in FIG. 5, the process proceeds to step ST08 shown in FIG. 5. On the other hand, if point cloud data for which no correction amount has been obtained remains, corresponding to the "NO" branch of step ST07 shown in FIG. 5, the process returns to step ST03 shown in FIG. 5.
 Then, in step ST08, the alignment unit 103 applies a smoothing process to the correction amounts obtained for the individual unit times, thereby recalculating the correction amount for each unit time, and applies the recalculated correction amounts to the post-displacement point cloud data. This completes the correction of the entire point cloud.
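The smoothing of step ST08 over the sequence of per-unit-time correction amounts can be as simple as a centered moving average; a minimal sketch, in which the window size and the edge handling are assumptions:

```python
import numpy as np

def smooth_corrections(corrections, window=3):
    """Recompute each unit-time correction as the average of the
    corrections in a centered window (cf. step ST08). The window
    shrinks at the sequence edges."""
    c = np.asarray(corrections, dtype=float)
    half = window // 2
    out = np.empty_like(c)
    for i in range(len(c)):
        lo, hi = max(0, i - half), min(len(c), i + half + 1)
        out[i] = c[lo:hi].mean(axis=0)
    return out
```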
 As described above, the position correction device according to the present embodiment uses the image represented by the post-displacement image data to appropriately restrict the region (range) of data used for alignment when aligning the point cloud data before and after displacement. Alignment between the point clouds can therefore be performed with high accuracy.
 Furthermore, performing displacement extraction on three-dimensional data corrected with high accuracy by the position correction device of this embodiment yields highly accurate displacement information. With such information, the time and cost required for restoration work after a disaster, for example, can be estimated with high precision, contributing to faster decision-making; likewise, for displacement caused by the aging of equipment and the like, the need for repair work can be judged appropriately.
 This embodiment uses the image represented by the post-displacement image data. Compared with predicting the displaced or unchanged regions in advance and using that information for alignment, this approach has the advantage that no advance preparation is required and that alignment remains proper even when displacement occurs on a larger scale than anticipated.
 In this embodiment, the unchanged area is extracted as a range within the image. Compared with, for example, manually installing ground control points (GCPs), the reference points used in drone surveying, at locations that did not move after a disaster and then measuring them, a far larger number of alignment reference points can be obtained, enabling high-accuracy alignment. There is also no need to actually install GCPs on site.
 Moreover, because this embodiment can extract the unchanged area using image data from both before and after the displacement, a location where bare earth was already exposed before the displacement, for example, can be correctly extracted as unchanged, so the unchanged area is extracted with high accuracy.
 <Second Embodiment>
 A position correction device and a displacement amount extraction system according to the present embodiment will be described. In the following description, components identical to those described in the embodiment above are given the same reference numerals in the figures, and their detailed description is omitted as appropriate.
 <Configuration of the Displacement Amount Extraction System>
 FIG. 6 conceptually shows an example of the configuration of the displacement amount extraction system 200 according to the present embodiment. As illustrated in FIG. 6, the displacement amount extraction system 200 includes a position correction device 100, a displacement amount extraction unit 201, and a data display unit 202.
 The hardware configuration of the displacement amount extraction system 200 according to the present embodiment is, for example, that of the position correction device 100 shown in FIG. 4, with the functions of the displacement amount extraction unit 201 and the data display unit 202 added to the program executed by the processor 901.
 The displacement amount extraction system 200 according to the present embodiment also performs the operation shown in FIG. 5, in which the alignment unit 103 corrects the entire post-displacement three-dimensional data.
 In this embodiment, in addition to the above, the displacement amount extraction unit 201 compares the corrected post-displacement three-dimensional data output from the data output unit 104 with the pre-displacement three-dimensional data, and extracts the amount of displacement at each position in the three-dimensional data.
 The data display unit 202 then presents the extraction results of the displacement amount extraction unit 201 on screen in an easily understood form. For example, a two-dimensional map may be displayed in which locations whose height decreased, for instance through sediment outflow, are shown in blue and locations whose height increased, for instance through sediment inflow, are shown in red, with the display color graded according to the magnitude of the height change. Alternatively, the increase or decrease at each position may be displayed in a form such as a three-dimensional graph.
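The blue-to-red grading described above can be sketched as a simple diverging colormap that maps a signed height change to an RGB triple; the clipping range `max_abs` is an assumption, not a value from the embodiment:

```python
def displacement_color(dz, max_abs=2.0):
    """Map a signed height change dz [m] to an (r, g, b) triple:
    decreases shade toward blue, increases toward red, zero is white."""
    x = max(-1.0, min(1.0, dz / max_abs))  # clip to [-1, 1]
    if x >= 0:                             # inflow: white -> red
        return (1.0, 1.0 - x, 1.0 - x)
    return (1.0 + x, 1.0 + x, 1.0)        # outflow: white -> blue
```

Rendering each mesh cell of the displacement map with this color would produce the graded two-dimensional display described in the text.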
 <Operation of the Displacement Amount Extraction System>
 Next, an example of the operation of the displacement amount extraction unit 201 in the displacement amount extraction system 200 according to the present embodiment will be described. FIG. 7 is a flowchart showing an example of this operation.
 Note that the method of extracting the displacement amount is not limited to the one described below; another method may be used, such as generating three-dimensional polygon data from the point cloud data and then computing the displacement.
 First, in step ST11, the displacement amount extraction unit 201 divides the entire area of the pre- and post-displacement three-dimensional data into a mesh, for example of 10 cm square cells.
 Next, in step ST12, the displacement amount extraction unit 201 extracts the average height of the three-dimensional data falling within each mesh cell and takes it as the height of that cell. Instead of the average, the minimum or maximum height of the three-dimensional data within the cell may be used.
 Next, in step ST13, the displacement amount extraction unit 201 extracts, for each mesh cell, the change in height between before and after the displacement. A displacement map covering the entire extent of the three-dimensional data can thereby be generated.
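Steps ST11 to ST13 can be sketched as follows: bin each point cloud onto a common grid, take a per-cell height statistic, and difference the two grids. The 0.1 m cell size follows the text; the dictionary-based grid and the use of NaN for cells present in only one epoch are assumptions of this sketch:

```python
import numpy as np

def height_grid(points, cell=0.1, stat=np.mean):
    """Bin an (N, 3) point cloud into square cells of size `cell`
    and return {(ix, iy): representative height} (cf. steps ST11-ST12)."""
    grid = {}
    for x, y, z in np.asarray(points, dtype=float):
        grid.setdefault((int(x // cell), int(y // cell)), []).append(z)
    return {k: float(stat(v)) for k, v in grid.items()}

def displacement_map(before, after, cell=0.1, stat=np.mean):
    """Per-cell height change, after minus before (cf. step ST13);
    cells observed in only one epoch come out as NaN."""
    gb = height_grid(before, cell, stat)
    ga = height_grid(after, cell, stat)
    return {k: ga.get(k, np.nan) - gb.get(k, np.nan)
            for k in set(gb) | set(ga)}
```

Passing `stat=np.min` or `stat=np.max` realizes the minimum- or maximum-height variants mentioned in step ST12.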
 As described above, the displacement amount extraction system according to the present embodiment can determine the height displacement from pre- and post-displacement three-dimensional data that have been aligned with high accuracy, and can therefore determine the displacement accurately.
 With highly accurate displacement information, the time and cost required for restoration work after a disaster, for example, can be estimated with high precision, contributing to faster decision-making; likewise, for displacement caused by the aging of equipment and the like, the need for repair work can be judged appropriately.
 Furthermore, because this embodiment can display the accurately extracted displacement in a visually intuitive form, the situation after a disaster or the like can be grasped quickly and easily.
 <Supplementary Description of the Hardware Configuration>
 The description of the hardware configuration of the position correction device 100 and the displacement amount extraction system 200 is supplemented here with reference to FIG. 4.
 The processor 901 is an integrated circuit (IC) that performs processing, for example a central processing unit (CPU) or a digital signal processor (DSP).
 The memory 902 is a random access memory (RAM).
 The auxiliary storage device 903 shown in FIG. 4 is, for example, a read-only memory (ROM), a flash memory, or a hard disk drive (HDD).
 The communication device 906 shown in FIG. 4 is an electronic circuit that performs data communication processing, for example a communication chip or a network interface card (NIC).
 The auxiliary storage device 903 stores an operating system (OS), at least part of which is executed by the processor 901. While executing at least part of the OS, the processor 901 executes the programs that implement the functions of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, the data output unit 104, the displacement amount extraction unit 201, and the data display unit 202. By executing the OS, the processor 901 performs task management, memory management, file management, communication control, and so on.
 At least one of the information, data, signal values, and variable values representing the processing results of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, the data output unit 104, the displacement amount extraction unit 201, and the data display unit 202 is stored in at least one of the memory 902, the auxiliary storage device 903, and the registers and cache memory in the processor 901.
 The programs implementing the functions of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, the data output unit 104, the displacement amount extraction unit 201, and the data display unit 202 may also be stored on a portable recording medium such as a magnetic disk, flexible disk, optical disc, compact disc, Blu-ray (registered trademark) disc, or DVD, and portable recording media storing these programs may be distributed.
 The word "unit" in the names of the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, the data output unit 104, the displacement amount extraction unit 201, and the data display unit 202 may also be read as "circuit", "step", "procedure", or "process".
 The position correction device 100 and the displacement amount extraction system 200 may also be realized by a processing circuit, for example a logic IC (integrated circuit), a GA (gate array), an ASIC (application-specific integrated circuit), or an FPGA (field-programmable gate array).
 In this case, the data reading unit 101, the unchanged area extraction unit 102, the alignment unit 103, the data output unit 104, the displacement amount extraction unit 201, and the data display unit 202 are each realized as part of the processing circuit.
 In this specification, the generic concept covering both the processor 901 and the processing circuit is referred to as "processing circuitry"; that is, the processor 901 and the processing circuit are each specific examples of processing circuitry.
 <Effects Produced by the Embodiments Described Above>
 Next, examples of the effects produced by the embodiments described above are given. In the following description, these effects are described on the basis of the specific configurations illustrated in the embodiments above; however, to the extent that similar effects are obtained, those configurations may be replaced with other specific configurations illustrated in this specification. That is, although for convenience only one of the associated specific configurations may be described below as a representative, the representative configuration may be replaced with another associated specific configuration.
 Such replacement may also span multiple embodiments; that is, configurations illustrated as examples in different embodiments may be combined to produce similar effects.
 According to the embodiments described above, the position correction device includes the unchanged area extraction unit 102 and a correction unit, the latter corresponding, for example, to the alignment unit 103. Three-dimensional data indicating position information before displacement is referred to as pre-displacement three-dimensional data, three-dimensional data indicating position information after displacement as post-displacement three-dimensional data, and image data corresponding to the post-displacement three-dimensional data as post-displacement image data. The unchanged area extraction unit 102 extracts, as the unchanged area 300, the region of the image in the post-displacement image data that has not been displaced. The alignment unit 103 corrects the post-displacement three-dimensional data so that it is aligned with the pre-displacement three-dimensional data, using only the position information corresponding to the unchanged area 300 as a reference.
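The key restriction, using only position information that corresponds to the unchanged area 300, amounts to masking the point cloud by the unchanged region before estimating any correction. A minimal sketch, assuming the unchanged area is given as a 2-D boolean raster aligned with the post-displacement image (the `origin` and `cell` georeferencing parameters are assumptions of this sketch):

```python
import numpy as np

def select_unchanged(points, mask, origin=(0.0, 0.0), cell=1.0):
    """Keep only the points whose horizontal position falls inside the
    unchanged area 300, given as a boolean raster `mask` (row = y,
    column = x). Points outside the raster are dropped."""
    pts = np.asarray(points, dtype=float)
    ix = ((pts[:, 0] - origin[0]) // cell).astype(int)
    iy = ((pts[:, 1] - origin[1]) // cell).astype(int)
    inside = (ix >= 0) & (ix < mask.shape[1]) & (iy >= 0) & (iy < mask.shape[0])
    keep = np.zeros(len(pts), dtype=bool)
    keep[inside] = mask[iy[inside], ix[inside]]
    return pts[keep]
```

Only the points returned by such a filter would then be fed to the feature extraction and correction-estimation steps described earlier.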
 With this configuration, extracting the unchanged area 300 from the image represented by the post-displacement image data makes it possible to obtain a large number of reference points simply and appropriately, so the three-dimensional data sets can be aligned with high accuracy.
 The same effects are obtained even when other configurations illustrated in this specification, that is, configurations not mentioned above, are added as appropriate to the configuration described above.
 According to the embodiments described above, image data corresponding to the pre-displacement three-dimensional data is referred to as pre-displacement image data, and the unchanged area extraction unit 102 extracts, as the unchanged area 300, the region of the image that has not been displaced between the pre-displacement image data and the post-displacement image data. With this configuration, the unchanged area 300 can easily be identified from the amount of displacement between the pre- and post-displacement image data, which is simpler and more accurate than identifying the unchanged area 300 from the pre- and post-displacement three-dimensional data. Identifying the unchanged area 300 with high accuracy in turn allows the alignment correction to be performed with high accuracy.
 According to the embodiments described above, the unchanged area extraction unit 102 extracts the unchanged area 300 on the basis of pre-displacement image data representing an image taken before a disaster and post-displacement image data representing an image taken after the disaster. With this configuration, identifying the unchanged area 300 by comparing pre- and post-disaster images makes it possible to estimate the time and cost required for restoration work with high accuracy, contributing to faster decision-making.
 According to the embodiments described above, the unchanged area extraction unit 102 extracts the unchanged area 300 on the basis of post-displacement image data representing a satellite image acquired by an artificial satellite. With this configuration, using satellite images as the image data allows the unchanged area 300 to be extracted from information, such as topography, covering a wide area.
 According to the embodiments described above, the first measurement medium used to acquire the pre- and post-displacement three-dimensional data differs from the second measurement medium used to acquire the pre- and post-displacement image data. With this configuration, three-dimensional data acquired by, for example, a drone or a mobile mapping system (MMS) can be corrected on the basis of the unchanged area 300 extracted from image data captured from an artificial satellite or aircraft, enabling high-accuracy alignment. "Different measurement media" is not limited to, say, a camera and a light detection and ranging (LiDAR) sensor mounted on the same drone; it also covers cases in which the measurement media (measuring instruments) are mounted on different moving bodies, such as a drone and another aircraft, or an MMS vehicle and an artificial satellite.
 According to the embodiments described above, the displacement amount extraction system includes the position correction device 100 described above and the displacement amount extraction unit 201. On the basis of the pre-displacement three-dimensional data and the post-displacement three-dimensional data corrected by the position correction device, the displacement amount extraction unit 201 extracts the displacement of the positions indicated by those data. With this configuration, extracting the displacement between the pre- and post-displacement three-dimensional data using data corrected by the highly accurate position correction of the position correction device 100 yields an accurate displacement map.
 According to the embodiments described above, in the position correction method, three-dimensional data indicating position information before displacement is referred to as pre-displacement three-dimensional data, three-dimensional data indicating position information after displacement as post-displacement three-dimensional data, and image data corresponding to the post-displacement three-dimensional data as post-displacement image data. The region of the image in the post-displacement image data that has not been displaced is extracted as the unchanged area 300, and the post-displacement three-dimensional data is corrected so as to be aligned with the pre-displacement three-dimensional data using only the position information corresponding to the unchanged area 300 as a reference.
 With this configuration, extracting the unchanged area 300 from the image represented by the post-displacement image data makes it possible to obtain a large number of reference points simply and appropriately, so the three-dimensional data sets can be aligned with high accuracy.
 Unless otherwise restricted, the order in which the individual processes are performed may be changed.
 The same effects are likewise obtained even when other configurations illustrated in this specification, that is, configurations not mentioned above, are added as appropriate to the configuration described above.
 <Modifications of the Embodiments Described Above>
 The embodiments described above may also state the dimensions, shapes, relative arrangements, or implementation conditions of the individual components, but in all respects these are merely examples and are not limiting.
 Accordingly, countless variations and equivalents not illustrated are envisioned within the scope of the technology disclosed in this specification. This includes, for example, modifying, adding, or omitting at least one component, and extracting at least one component from at least one embodiment and combining it with a component of another embodiment.
 In at least one of the embodiments described above, where a material is named without further specification, that material is understood, unless a contradiction arises, to include the same material containing other additives, for example an alloy.
 Unless a contradiction arises, where the embodiments described above state that "one" of a component is provided, "one or more" of that component may be provided.
 Furthermore, each component in the embodiments described above is a conceptual unit; the scope of the technology disclosed in this specification includes cases in which one component consists of a plurality of structures, one component corresponds to part of a structure, and a plurality of components are provided in one structure.
 Each component in the embodiments described above also includes structures of other construction or shape, provided they perform the same function.
 The description in this specification is referred to for all purposes related to the present technology, and none of it is admitted to be prior art.
 Each component described in the embodiments above is envisioned as software or firmware, or as the corresponding hardware; as software it is referred to, for example, as a "unit", and as hardware, for example, as a "processing circuit" or "circuitry".
 100: position correction device; 102: unchanged area extraction unit; 200: displacement amount extraction system; 201: displacement amount extraction unit; 300: unchanged area.

Claims (7)

  1.  A position correction device comprising:
      an unchanged area extraction unit that, where three-dimensional data indicating position information before displacement is defined as pre-displacement three-dimensional data, three-dimensional data indicating position information after displacement is defined as post-displacement three-dimensional data, and image data corresponding to the post-displacement three-dimensional data is defined as post-displacement image data, extracts an undisplaced image region in the post-displacement image data as an unchanged area; and
      a correction unit that corrects the post-displacement three-dimensional data so as to align it with the pre-displacement three-dimensional data, using only position information corresponding to the unchanged area as a reference.
  2.  The position correction device according to claim 1, wherein, with image data corresponding to the pre-displacement three-dimensional data defined as pre-displacement image data, the unchanged area extraction unit extracts, as the unchanged area, an image region that is not displaced between the pre-displacement image data and the post-displacement image data.
  3.  The position correction device according to claim 2, wherein the unchanged area extraction unit extracts the unchanged area on the basis of the pre-displacement image data representing an image before a disaster and the post-displacement image data representing an image after the disaster.
  4.  The position correction device according to any one of claims 1 to 3, wherein the unchanged area extraction unit extracts the unchanged area on the basis of the post-displacement image data representing a satellite image acquired by an artificial satellite.
  5.  The position correction device according to any one of claims 1 to 4, wherein a first measurement medium for acquiring the pre-displacement three-dimensional data and the post-displacement three-dimensional data differs from a second measurement medium for acquiring the pre-displacement image data and the post-displacement image data.
  6.  A displacement amount extraction system comprising:
      the position correction device according to any one of claims 1 to 5; and
      a displacement amount extraction unit that extracts, based on the pre-displacement three-dimensional data and the post-displacement three-dimensional data corrected by the position correction device, a displacement amount of a position indicated by the pre-displacement three-dimensional data and the post-displacement three-dimensional data.
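Once the post-displacement data has been position-corrected, extracting displacement amounts as in claim 6 reduces to per-point differences. A minimal sketch, assuming corresponding point arrays as inputs (the function name and data layout are illustrative, not from the patent):

```python
import numpy as np

def displacement_amounts(pre_pts, corrected_post_pts):
    """Per-point displacement vectors and their magnitudes between
    pre-displacement data and position-corrected post-displacement data."""
    vectors = corrected_post_pts - pre_pts
    return vectors, np.linalg.norm(vectors, axis=1)

# Toy example: the second point moved by (0, 3, 4), i.e. magnitude 5.
pre = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
post = np.array([[0.0, 0.0, 0.0], [1.0, 3.0, 4.0]])
vecs, mags = displacement_amounts(pre, post)
```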
  7.  A position correction method comprising:
      defining three-dimensional data indicating position information before displacement as pre-displacement three-dimensional data;
      defining three-dimensional data indicating position information after displacement as post-displacement three-dimensional data;
      defining image data corresponding to the post-displacement three-dimensional data as post-displacement image data;
      extracting, as an unchanged area, an image region in the post-displacement image data that has not been displaced; and
      correcting the post-displacement three-dimensional data so as to align it with the pre-displacement three-dimensional data, using only position information corresponding to the unchanged area as a reference.
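The correction step of the method claim — aligning the post-displacement 3D data to the pre-displacement data using only points in the unchanged area — can be sketched with a rigid-body (Kabsch) fit. This is an illustrative reconstruction under assumed inputs (corresponding point arrays and a boolean unchanged-area mask), not the algorithm specified by the patent:

```python
import numpy as np

def align_with_unchanged_area(pre_pts, post_pts, unchanged_mask):
    """Estimate a rigid transform (rotation R, translation t) mapping
    the post-displacement points onto the pre-displacement points using
    ONLY points in the unchanged area (Kabsch algorithm), then apply
    it to all post-displacement points."""
    src = post_pts[unchanged_mask]
    dst = pre_pts[unchanged_mask]
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return post_pts @ R.T + t

# Toy example: the whole scene is offset by (1, 0, 0) between surveys;
# the last point has additionally moved and is excluded from the fit.
pre = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [2, 2, 2]])
post = pre + np.array([1.0, 0, 0])
post[4] += np.array([0.0, 5, 0])             # genuine displacement
mask = np.array([True, True, True, True, False])  # unchanged area
corrected = align_with_unchanged_area(pre, post, mask)
```

Restricting the fit to the unchanged area keeps genuinely displaced points (here the fifth) from biasing the alignment, so their residual against the pre-displacement data reflects the true displacement.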
PCT/JP2022/030052 2022-08-05 2022-08-05 Position correction device, displacement amount extraction system, and position correction method WO2024029057A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/030052 WO2024029057A1 (en) 2022-08-05 2022-08-05 Position correction device, displacement amount extraction system, and position correction method


Publications (1)

Publication Number Publication Date
WO2024029057A1 true WO2024029057A1 (en) 2024-02-08

Family

ID=89848737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/030052 WO2024029057A1 (en) 2022-08-05 2022-08-05 Position correction device, displacement amount extraction system, and position correction method

Country Status (1)

Country Link
WO (1) WO2024029057A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5353030A (en) * 1993-06-09 1994-10-04 Science Applications International Corporation Method for simulating high resolution synthetic aperture radar imagery from high altitude photographs
JP2006284224A (en) * 2005-03-31 2006-10-19 Kimoto & Co Ltd Method for generating three-dimensional data of geographical features, method for evaluating geographical feature variation, system for evaluating geographical feature variation
WO2010061852A1 (en) * 2008-11-25 2010-06-03 Necシステムテクノロジー株式会社 Building change detecting device, building change detecting method, and recording medium
JP2019143984A (en) * 2018-02-15 2019-08-29 株式会社安藤・間 Two-time change estimation device, and two-time change estimation method


Similar Documents

Publication Publication Date Title
James et al. 3‐D uncertainty‐based topographic change detection with structure‐from‐motion photogrammetry: precision maps for ground control and directly georeferenced surveys
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
Hallermann et al. Vision-based deformation monitoring of large scale structures using Unmanned Aerial Systems
Ozturk et al. A low-cost approach for determination of discontinuity orientation using smartphone images and application to a part of Ihlara Valley (Central Turkey)
CN111256730A (en) Earth mass balance correction calculation method for low-altitude oblique photogrammetry technology
Iheaturu et al. An assessment of the accuracy of structure-from-motion (SfM) photogrammetry for 3D terrain mapping
CN110986888A (en) Aerial photography integrated method
Mirko et al. Assessing the Impact of the Number of GCPS on the Accuracy of Photogrammetric Mapping from UAV Imagery
AGUILAR et al. 3D coastal monitoring from very dense UAV-Based photogrammetric point clouds
Ostrowski Accuracy of measurements in oblique aerial images for urban environment
WO2022104251A1 (en) Image analysis for aerial images
CN110470275A (en) A method of withered riverbed bed ripples morphological parameters are measured based on UAV aerial survey terrain data
WO2016157802A1 (en) Information processing apparatus, information processing system, information processing method, and storage medium
Javadnejad Small unmanned aircraft systems (UAS) for engineering inspections and geospatial mapping
WO2024029057A1 (en) Position correction device, displacement amount extraction system, and position correction method
Lascelles et al. Automated digital photogrammetry: a valuable tool for small‐scale geomorphological research for the non‐photogrammetrist?
Gao et al. Automatic geo-referencing mobile laser scanning data to UAV images
Li et al. Registration of Aerial Imagery and Lidar Data in Desert Areas Using the Centroids of Bushes as Control Information.
Chuanxiang et al. Automatic detection of aerial survey ground control points based on Yolov5-OBB
Bertin et al. A merging solution for close-range DEMs to optimize surface coverage and measurement resolution
Verykokou et al. Metric exploitation of a single low oblique aerial image
Luo et al. The texture extraction and mapping of buildings with occlusion detection
Stojcsics et al. Automated Volume Analysis of Open Pit Mining Productions Based on Time Series Aerial Survey
Wang et al. Factors influencing measurement accuracy of unmanned aerial systems (UAS) and photogrammetry in construction earthwork
Kulur et al. The Effect of Pixel Size on the Accuracy of Orthophoto Production

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22954045

Country of ref document: EP

Kind code of ref document: A1