CN113168714A - System and method for facilitating geographic information generation - Google Patents

System and method for facilitating geographic information generation

Info

Publication number
CN113168714A
CN113168714A
Authority
CN
China
Prior art keywords
processor
image data
data
geographic information
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980004687.7A
Other languages
Chinese (zh)
Inventor
***
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gps Map Capital Singapore Pte Ltd
Original Assignee
Gps Map Capital Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gps Map Capital Singapore Pte Ltd filed Critical Gps Map Capital Singapore Pte Ltd
Publication of CN113168714A publication Critical patent/CN113168714A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G01S17/08 - Systems determining position data of a target for measuring distance only
    • G01S17/10 - Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01S5/163 - Determination of attitude
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G06F18/2431 - Multiple classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/176 - Urban or other man-made structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/188 - Vegetation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/56 - Particle system, point based geometry or rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/194 - Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system and method for facilitating geographic information generation are disclosed. The system may include an imaging module operable to acquire image data, and a processor operable to receive the image data and generate a point cloud having a plurality of data points from the image data. The processor is further operable to analyze the point cloud to extract at least one feature associated with the geographic information, and to generate the geographic information using the extracted feature.

Description

System and method for facilitating geographic information generation
This application claims priority to Singapore patent application No. SG10201808791P, filed on 4 October 2018.
Technical Field
The present invention relates to a system and method for facilitating geographic information generation.
Background
The following discussion of the technical background is intended to facilitate an understanding of the present invention only. It should be noted that the discussion is not an acknowledgement or admission that any of the material referred to in the discussion was published, known or part of the common general knowledge of a skilled person in any jurisdiction as at the priority date of the invention.
With the development of imaging sensors, three-dimensional sensors (hereinafter "3D sensors") are widely used to acquire image data for generating geographic information, such as high-definition (HD) maps. An example of a 3D sensor is a light detection and ranging sensor (also known as a LiDAR sensor). A LiDAR sensor measures the distance to an object by illuminating the object with pulsed laser light and detecting the reflected pulses, and obtains image data from the measured distances.
However, after the 3D sensor obtains the image data, the user is faced with a complex process of generating geographic information from it. Existing systems for generating geographic information from image data tend to be complex to use, because they require the user to analyze and process the image data in order to generate the geographic information.
In view of the above, it is not easy for a user to generate geographic information from the obtained image data. Furthermore, because of the time required to analyze and process the image data, users are often unable to generate geographic information in real time or near real time. The generation of geographic information is a laborious, inefficient, time-consuming and expensive task, due in part to the computationally intensive nature of generating three-dimensional image data from the acquired image data.
In view of the above, there is a need for a solution that addresses these needs, or that at least partially solves the problem.
Disclosure of Invention
In this document, unless the context requires otherwise, the term "comprise" or variations such as "comprises" or "comprising", are intended to imply the inclusion of any stated integer or group of integers but not the exclusion of any other integer or group of integers.
The present invention is directed to a system and method that reduces manual and physical efforts by a user in generating geographic information.
The solution of the present invention is provided in the form of a system and a method for facilitating geographic information generation. In particular, the system includes a processor for analyzing image data obtained by an imaging module. The processor may then extract at least one feature from the image data based on the analysis. The extracted feature is associated with the geographic information that the user wishes to generate. The processor may then generate the geographic information using the extracted feature.
In this manner, the processor may be used to generate geographic information that a user wishes to generate in real-time or near real-time, without requiring manual and physical effort by the user.
In one aspect, there is provided a system for facilitating generation of geographic information, comprising: an imaging module for acquiring image data; and a processor for receiving the image data and generating a point cloud having a plurality of data points using the image data; wherein the processor is further operable to analyze the point cloud to extract at least one feature associated with the geographic information, and to generate the geographic information using the extracted feature.
In some embodiments, the imaging module comprises at least one 3D sensor for generating 3D image data.
In some embodiments, the generating of the point cloud comprises geo-referencing the image data.
In some embodiments, the system further comprises a Position and Orientation System (POS) for sending position and/or orientation related data to the processor.
In some embodiments, the processor may be operative to parse the transmitted data and perform geo-referencing of the image data using the parsed information.
In some embodiments, the processor may be used to radiometrically and/or geometrically correct the point cloud.
In some embodiments, the processor may be operative to compute an octree and separate the point cloud into a plurality of cells based on the computed octree such that each cell of the plurality of cells has the same size.
In some embodiments, the processor may be operative to calculate a normal for each of the plurality of data points and create eigenvalues for each of the plurality of cells based on the calculated normals.
In some embodiments, the processor may be operative to segment the point cloud according to geometric attributes.
In some embodiments, the processor is configured to classify each of the plurality of data points based on the point cloud segmented according to geometric attributes.
In some embodiments, the imaging module further comprises at least one 2D sensor for generating 2D image data.
In some embodiments, the processor may be operative to merge the 2D image data and the 3D image data.
In some embodiments, the processor may be configured to recalculate a geometric attribute of each of the plurality of cells and/or to reclassify each of the plurality of data points based on the merged image data.
In some embodiments, the processor may be operative to receive a selected type of the geographic information that needs to be generated and to extract at least one feature associated with the geographic information that needs to be generated.
In some embodiments, the geographic information comprises at least one of a high definition map, vegetation information, or public infrastructure information.
In some embodiments, the feature comprises at least one of a geographic feature or an object feature.
In some embodiments, the imaging module may be used to provide previously obtained image data to the processor.
In another aspect, a method of facilitating geographic information generation is provided, comprising: acquiring image data on an imaging module; receiving, on a processor, the image data from the imaging module; generating, on the processor, a point cloud having a plurality of data points using the image data; analyzing, on the processor, the point cloud; extracting at least one feature associated with the geographic information; and generating, on the processor, geographic information using the extracted features.
Other aspects of the disclosed subject matter will become apparent to those skilled in the art from a review of the following detailed description of the disclosed subject matter when taken in conjunction with the drawings.
Drawings
The technical solution of the invention will now be described, by way of example only, with reference to the accompanying drawings.
Fig. 1 shows a block diagram of some of the embodiments of the present invention.
Fig. 2 shows a flow diagram of some embodiments of the invention.
Fig. 3 shows a block diagram of further embodiments of the invention.
Fig. 4 shows a flow chart of further embodiments of the invention.
FIG. 5 shows a flow diagram for geographic information generation in accordance with some embodiments of the invention.
Other arrangements of the presently disclosed subject matter are possible and, accordingly, the drawings are not to be taken as an alternative to the foregoing general description of the presently disclosed subject matter.
Detailed Description
Example 1
Fig. 1 shows a block diagram of some of the embodiments of the present invention.
The system 100 may include an imaging module 110 and a processor 120. The system 100 may further include a Position and Orientation System (POS) module 130.
The imaging module 110 may include at least one 3D sensor 111 and/or at least one 2D sensor 112. The 3D sensor 111 may include, but is not limited to: light detection and ranging (LiDAR) sensors (also known as "LiDAR scanners"). The 2D sensors 112 may include, but are not limited to: an RGB camera, a multispectral imager, and/or a hyperspectral imager. The 3D sensor 111 may be used to generate 3D image data and the 2D sensor 112 may be used to generate 2D image data.
The imaging module 110 may generate image data through image acquisition of a subject. For example, the LiDAR sensor detects a distance to an object by illuminating the object with a pulsed laser and detecting a reflected pulse, and uses the detected distance to generate image data. For example, the image data may include, but is not limited to: at least one of raw data generated by image acquisition of an object and/or data processed from the raw data (referred to as "processed data").
The imaging module 110 may be used to provide the image data to the processor 120 in real-time or near real-time for rapid evaluation and time critical decisions by the processor 120.
Although not shown, the imaging module 110 may obtain the image data from an external database and/or server. For example, the imaging module 110 may receive the image data from an external database and/or server and then provide the image data to the processor 120.
Although not shown, the imaging module 110 may provide previously generated image data to the processor 120. For example, the imaging module 110 may have an internal database. The internal database may include one or more storage units for storing the image data generated previously. The imaging module 110 may extract the image data from the internal database and then provide the image data to the processor 120.
The POS130 may include an Inertial Navigation System (INS) module 131 and a Global Navigation Satellite System (GNSS) module 132. The POS130 may be configured to generate direction and/or position related data via the module 131 and/or the module 132. The POS130 may send the location and/or orientation related data to the processor 120. In some embodiments, the position and/or orientation related data may be sent to the processor 120 in real time or near real time. The transmitted position and/or orientation related data may be transmitted by streaming.
The processor 120 may include an Explicit Data Graph Execution (EDGE) processor. The processor 120 may be mounted on a portable device, such as a backpack, mobile, and/or onboard platform. The portable device may include a mounting structure, such as a frame, for securing or attaching items thereto. The processor 120 may have various form factors depending on the platform on which it is used. The processor 120 may include at least one Graphics Processing Unit (GPU), which may be integrated into the processor 120 in the form of a GPU card, i.e. as a hardware component with dedicated software installed on the processor 120.
The processor 120 may have a housing that meets certain requirements, for example a housing suitable for outdoor conditions. For example, the housing may be waterproof, dustproof, and/or shockproof. In some examples, the enclosure conforms to the requirements of IP68 (also known as the IP code, International Protection Marking, IEC standard 60529 issued by the International Electrotechnical Commission (IEC), or European standard EN 60529). The processor 120 may be an off-the-shelf solution with a small-footprint enclosure, similar to the ROSIE embedded system from Connect Tech Inc. (http://connecttech.com/product/rosie-embedded-system-with-nvidia-jetson-tx2-tx1/) or the rugged GPU computing servers available from Mercury Systems (https://www.mrcy.com/), or it may include a custom housing, for example milled from plastic or aluminium using CNC machines, or printed using 3D printers.
The processor 120 may be used to synchronize and control the imaging module 110 and/or the POS 130. The processor 120 may also be used to process and analyze the image data in real-time or near real-time. The GPU may be operative to process the image data for output to a display device.
The processor 120 is further operable to generate geographic information using the processed and analyzed image data in real time or near real time, thereby facilitating real-time generation of data as compared to conventional approaches. In some embodiments, the processor 120 extracts, from the processed and analyzed image data, at least one feature associated with the geographic information that the user wishes to generate. Thereafter, the processor 120 generates the geographic information using the extracted features. This process can be automated without manual or physical effort by the user.
It is understood that the imaging module 110, the processor 120, and the POS130 are coupled together via a communication network, such as a Local Area Network (LAN) or an Ethernet or wireless communication network, which may include Wi-Fi, Bluetooth, or other mobile wireless networks. The system 100 may enable real-time mapping and monitoring of a range of mobile platforms over a communications network.
In this manner, the system 100, in conjunction with the imaging module 110 and the POS 130, provides a general-purpose computing on graphics processing units (GPGPU) hardware and software solution. The system 100 is capable of collecting the image data and the position and/or orientation related data, and of processing the image data, including geo-referencing and analysis. In some embodiments, the analysis results may be streamed to a user, such as a remote viewer, over a LAN in real time or near real time for immediate review.
Further, the system 100 is able to generate geographic information in real time or near real time using the image data, without requiring the user to perform any post-processing tasks. In some embodiments, the generated geographic information may be streamed to a user, such as a remote viewer, over a LAN in real time or near real time for immediate review.
Example 2
Fig. 2 shows a flow diagram of some embodiments of the invention.
As described above, the system 100 may include the imaging module 110 and the processor 120. The system 100 may also include the POS 130.
First, the imaging module 110 acquires image data (S210). The imaging module 110 may include at least one 3D sensor 111. The 3D sensor 111 generates 3D image data, such as LiDAR data, as image data.
The processor 120 then receives image data from the imaging module 110 (S220). The processor 120 may receive or collect the 3D image data from the 3D sensor 111 in real time or near real time over a communication network such as a LAN.
After receiving the 3D image data from the 3D sensor 111, the processor 120 performs geo-referencing of the 3D image data. Geo-referencing is performed to generate one or more point clouds. It will be appreciated that the position and/or orientation related data generated by the POS 130 may be used to perform the geo-referencing of the 3D image data.
The POS 130 sends the position and/or orientation related data to the processor 120 in real time or near real time. For example, the POS 130 transmits a series of binary data packets over a User Datagram Protocol (UDP) network connection. The UDP network connection allows the POS 130 to act as a server associated with the processor 120 and to continuously transmit position and/or orientation related data over the connection.
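By way of illustration only, the client side of such a UDP exchange could be sketched as follows; the packet layout shown (seven little-endian doubles) is a hypothetical placeholder for illustration and not the actual POS packet format, which is vendor-specific.

```python
import socket
import struct

# Hypothetical binary layout of one POS packet (the real layout is vendor-specific):
# GPS time, latitude, longitude, altitude, roll, pitch, heading as seven little-endian doubles.
PACKET_FORMAT = "<7d"
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)

def stream_pos_packets(host="0.0.0.0", port=5602):
    """Receive and parse position/orientation packets sent by the POS over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(4096)
        if len(data) < PACKET_SIZE:
            continue  # incomplete packet, skip it
        t, lat, lon, alt, roll, pitch, heading = struct.unpack(PACKET_FORMAT, data[:PACKET_SIZE])
        yield {"time": t, "lat": lat, "lon": lon, "alt": alt,
               "roll": roll, "pitch": pitch, "heading": heading}
```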
The processor 120, acting as a client associated with the POS 130, receives the position and/or orientation related data from the POS 130, for example as a stream of binary data packets, and parses the packets to find the desired information. The data packets may include, but are not limited to: position, velocity, status and dynamics related data reported in a reference frame (e.g., the POS 130 reference frame). It will be appreciated that such data provide the variable values that are input into equations (e.g., the LiDAR equation) to perform the geo-referencing. In this manner, the processor 120 may perform geo-referencing of the image data using the parsed information. Through geo-referencing, coordinates in the sensor frame are converted to coordinates in the mapping frame, such as ECEF (earth-centered, earth-fixed) coordinates.
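The direct geo-referencing step can be illustrated with a minimal numpy sketch of the LiDAR equation; the parameter names and rotation conventions below are assumptions made for illustration and must be matched to the actual sensor, body and mapping frame definitions in a real implementation.

```python
import numpy as np

def georeference_point(p_sensor, R_boresight, lever_arm, R_body_to_ecef, p_gnss_ecef):
    """Apply a simplified LiDAR equation to one point measured in the scanner frame.

    p_sensor       : (3,) point coordinates in the scanner frame
    R_boresight    : (3,3) rotation from the scanner frame to the IMU/body frame
    lever_arm      : (3,) offset of the scanner origin expressed in the body frame
    R_body_to_ecef : (3,3) rotation from the body frame to ECEF (from the POS attitude)
    p_gnss_ecef    : (3,) ECEF position of the body-frame origin (from GNSS)
    """
    p_body = R_boresight @ p_sensor + lever_arm   # scanner frame -> body frame
    return p_gnss_ecef + R_body_to_ecef @ p_body  # body frame -> ECEF mapping frame
```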
The processor 120 then generates a point cloud using the geo-referenced image data (S230). It is understood that the point cloud comprises a set of data points.
Although not shown, the processor 120 may be used to radiometrically and/or geometrically correct the point cloud. For example, geometric and radiometric corrections may be applied to the intensity values of the point cloud. In this respect, the recorded intensity may be subject to some distortion due to the measurement geometry and environmental influences. Thus, before the intensity values can be used, they may be radiometrically and geometrically corrected. It should be understood that the recorded intensities may be rendered as an image in which black represents low intensity and white represents high intensity.
Thereafter, the processor 120 analyzes the point cloud (S240). The analysis may include, but is not limited to, one or more of the following: computation of tree data structures (e.g., an octree), computation of normals, computation of geometric attributes, segmentation of the point cloud, and classification of each data point.
In some embodiments, the processor 120 may calculate an octree and segment the point cloud into a plurality of cells based on the calculated octree such that each cell of the plurality of cells has the same size regardless of the number of points included in each cell. In some embodiments, each cell may be in the form of a cube. Since the cells have the same size, the spatial search and subsequent processing can be speeded up.
The processor 120 may then calculate a normal for each data point and create eigenvalues and/or eigenvectors for each cell of the plurality of cells based on the calculated normals. The normal may be a vector perpendicular to a best-fit plane of a set of data points. In some embodiments, the eigenvalues and/or eigenvectors of each cell are computed from a covariance matrix created from the neighborhood of each data point.
Thereafter, the processor 120 may normalize the eigenvalues of each of the plurality of cells and calculate at least one geometric attribute for each of the plurality of cells. The at least one geometric attribute may include, but is not limited to: linearity, planarity, scattering, total variance, anisotropy, eigenentropy, local curvature, normal value, intensity, elevation Above Ground Elevation (AGE) and delta elevation.
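As an illustration of this per-cell analysis, the following sketch groups points into equal-size cells (standing in for the octree leaves), computes the covariance eigenvalues, and derives a few of the listed attributes; the exact feature definitions used by the system may differ.

```python
import numpy as np

def cell_features(points, cell_size=1.0):
    """Group points into equal-size cells and derive eigenvalue-based attributes per cell."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)           # integer cell index per point
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)

    features = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue                                           # not enough points for a covariance
        cov = np.cov(pts.T)                                    # 3x3 covariance of the cell
        evals, evecs = np.linalg.eigh(cov)
        l3, l2, l1 = np.sort(evals)                            # convention: l1 >= l2 >= l3
        normal = evecs[:, 0]                                   # eigenvector of the smallest eigenvalue
        s, eps = l1 + l2 + l3, 1e-12
        features[key] = {
            "normal": normal,                                  # normal of the best-fit plane
            "linearity": (l1 - l2) / (l1 + eps),
            "planarity": (l2 - l3) / (l1 + eps),
            "scattering": l3 / (l1 + eps),
            "eigenentropy": -sum(e / s * np.log(e / s + eps) for e in (l1, l2, l3)),
        }
    return features
```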
The processor 120 may then segment the point cloud according to the geometric attributes. It will be appreciated that at least one of the above-mentioned geometric attributes may be used for segmenting the point cloud, in particular the elevation, the normal, the intensity, the homogeneity/variation and/or the eigenvalues. The processor 120 then classifies each data point based on the segmented point cloud. The data points in each cell are classified according to a predetermined set of categories, such as the LiDAR classes of the American Society for Photogrammetry and Remote Sensing (ASPRS). Thereafter, the data points may be gridded and used as the zero level for AGE evaluation. Thus, there may be one classification and one AGE elevation value per data point.
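Purely as an illustration of how such attributes can drive a rule-based pre-classification into ASPRS-style classes, a sketch is given below; the thresholds are arbitrary placeholders rather than values prescribed by the system.

```python
# Standard ASPRS/LAS class codes; the thresholds below are illustrative placeholders only.
ASPRS_UNCLASSIFIED, ASPRS_GROUND, ASPRS_LOW_VEG, ASPRS_HIGH_VEG, ASPRS_BUILDING = 1, 2, 3, 5, 6

def classify_cell(planarity, scattering, normal_z, age):
    """Map per-cell geometric attributes to a coarse ASPRS-style class."""
    if planarity > 0.8 and abs(normal_z) > 0.9 and age < 0.3:
        return ASPRS_GROUND        # flat, horizontal, at ground level
    if planarity > 0.8 and age > 2.0:
        return ASPRS_BUILDING      # flat surface well above ground (e.g. a roof)
    if scattering > 0.4 and age > 2.0:
        return ASPRS_HIGH_VEG      # volumetric structure above ground (canopy)
    if scattering > 0.4:
        return ASPRS_LOW_VEG
    return ASPRS_UNCLASSIFIED
```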
After classifying each data point, the processor 120 extracts at least one feature associated with geographic information (S250).
The processor 120 receives from the user a selection of the type of geographic information that needs to be generated. The geographic information may include, but is not limited to: high-definition maps derived from 3D feature extraction, classification and/or segmentation; road and pavement condition monitoring and/or maintenance; detection of changes in natural and/or built environments; or plant information (e.g., tree health). Although not shown, the processor 120 may recommend various types of geographic information, and the user may select at least one type of geographic information that the user wishes to generate. For example, the processor 120 may output information regarding the various types of geographic information to a display so that the user may select at least one of them.
The processor 120 extracts at least one feature associated with the selected geographic information. The features may include, but are not limited to: at least one of a geographic feature or an object feature.
For example, if plant information is selected, an NDVI (normalized difference vegetation index) value is calculated. Based on the calculated NDVI values, the features may be extracted and/or measured. Low NDVI values, such as 0.1 and below, may represent non-vegetated areas. Moderate NDVI values, such as 0.2 to 0.3, may indicate shrubs and grass. High NDVI values, such as 0.6 to 0.8, may represent trees and forests.
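A minimal sketch of the NDVI computation and of the indicative thresholds mentioned above:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

def vegetation_class(ndvi_value):
    """Coarse interpretation using the indicative ranges given above."""
    if ndvi_value <= 0.1:
        return "non-vegetation"
    if 0.2 <= ndvi_value <= 0.3:
        return "shrubs/grass"
    if ndvi_value >= 0.6:
        return "trees/forest"
    return "intermediate"
```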
Thereafter, the processor 120 generates geographic information using the extracted features (S260).
In some embodiments, it is understood that the imaging module 110 further includes at least one 2D sensor 112. Accordingly, a combination of the 3D sensor 111 and the 2D sensor 112 may be used. The 2D sensor may be used to generate 2D image data, such as a 2D image. The 3D sensor 111 and the 2D sensor 112 may acquire 3D image data and 2D image data, respectively, for the same object or background.
The processor 120 may receive or collect 3D image data from the 3D sensor 111 and 2D image data from the 2D sensor 112 in real-time or near real-time via a communication network. In some embodiments, the processor 120 may match 3D image data and 2D image data based on an object or background contained in the 3D image data and the 2D image data.
After receiving the 3D image data and the 2D image data from the 3D sensor 111 and the 2D sensor 112, respectively, the processor 120 performs geo-referencing of the 3D image data and the 2D image data in real-time or near real-time.
The processor 120 then generates a point cloud using the geo-referenced image data. Although not shown, the processor 120 may perform radiometric and/or geometric corrections of the point cloud. For the two-dimensional image data, in the case of a multispectral imager, after the image data is first radiometrically corrected and then geometrically corrected, some indices are extracted or generated from band ratios, including NDVI, SR (simple ratio vegetation index) and PRI (photochemical reflectance index). These indices are used to analyze the point cloud.
After classifying each data point, the processor 120 may merge and/or fuse the 2D image data and the 3D image data. The processor 120 may recalculate at least one geometric property of each of the plurality of cells and/or reclassify each data point based on the merged image data. The processor 120 may select and perform a recalculation of the geometric property or a reclassification of each data point.
With respect to the recalculation of the geometric attributes, the processor 120 may add other attributes, such as vegetation health, chlorophyll concentration, and the like. Since the 2D image data comes from the 2D sensor 112, which is a passive sensor with no canopy penetration capability, adding such attributes to data points is limited to first-return points.
For the reclassification, i.e., refinement of the classification, of the data points, the vegetation regions in the point cloud may be enhanced and refined using the vegetation-index 2D image data. The fusion of the 2D image data and the 3D image data may rely on photogrammetry and ray-tracing principles.
In this manner, the processor 120 performs further processing and analysis of the enhanced point cloud through real-time or near real-time 2D and 3D index extraction, including Digital Surface Models (DSMs), Digital Terrain Models (DTMs), contours, breaklines, points of interest, and the like. Among these results, the point cloud indices may be stored in LAS or LAZ format, and the raster metrics may be stored in GeoTIFF format. The point cloud containing the actual 3D image data may be stored in a binary file format, such as LAS or LAZ; this covers both the raw data and the processed data. The processor 120 may generate three (3) types of data from the point cloud: point clouds, for classification and segmentation; rasters, such as 2D images, for DSM, DTM, etc.; and vectors, for contours and breaklines.
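One of the raster products mentioned above, the DSM, can be illustrated by a simple highest-point gridding of the geo-referenced point cloud; this is a sketch only, and a production DSM/DTM pipeline would normally add interpolation and filtering.

```python
import numpy as np

def dsm_from_points(points, cell_size=0.5):
    """Grid a point cloud into a digital surface model (highest Z per grid cell).

    points : (N, 3) array of geo-referenced X, Y, Z coordinates.
    Returns the DSM grid (rows x cols, NaN where empty) and the XY origin of the grid.
    """
    xy_min = points[:, :2].min(axis=0)
    cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z                     # keep the highest return per cell
    return dsm, xy_min
```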
Fig. 3 shows a block diagram of some embodiments of the invention. Fig. 3 shows a comprehensive layout of the relationship between the server 121 and the client 122 of the processor 120, for example an edge processor. The server 121 of the processor 120 interfaces directly with the 3D sensor 111, the 2D sensor 112, and/or the POS 130.
In fig. 3, the processor 120 includes a server 121 and a client 122. In other words, at least a portion of the processor 120 may be associated with at least one process of the server 121 and at least a portion of the processor 120 is associated with at least one process of the client 122. It will be appreciated that additional clients, such as one or more GPUs, may be added to the processor 120 if more processing power is required. Although not shown, in some embodiments, an integrated device may include the processor 120, the 3D sensor 111, the 2D sensor 112, and/or the POS 130.
The system 100 is capable of performing efficient parallelization of different processes that occur in a synchronous and asynchronous manner. The system 100 can also allow the processor 120 to be scaled up or down based on the physical settings of the system 100. In this way, the processor 120 may adjust the amount of data to be processed in real time or near real time.
The server 121 may directly interface with the 3D sensor 111 and the 2D sensor 112 through a transmission control protocol/internet protocol (TCP/IP) network connection. A streamReader3D module 123a and a streamReader2D module 123b may retrieve the 3D image data stream and the 2D image data stream, respectively.
The 3D sensor 111 may be programmatically accessed through a predetermined library that communicates with the system 100 over a TCP/IP connection. The library may be a documented, platform-independent software library for control of, and data retrieval from, the 3D sensor 111. The library may be used to configure and control the 3D sensor 111 and to retrieve and parse measurement results from the 3D sensor 111. It is understood that the system 100 is compatible with any scanner (e.g., a V-line scanner) as the 3D sensor 111, whether the scanner is an airborne, mobile, or terrestrial scanner.
It will be appreciated that the predetermined library should be compatible with the sensor/scanner hardware.
The server 121 may interface directly with the POS 130 via a User Datagram Protocol (UDP) network connection. A streamReaderPOS module 123c may retrieve the data stream related to position and/or orientation.
In some embodiments, each packet of the data streams may carry its own timestamp, as shown in 310a, 320a and 330a of fig. 4. The timestamps are used to cross-reference the packets for geo-referencing on the GPU of the server 121 using a dataGeorefHandler module 124. The dataGeorefHandler module 124 may be an implementation of the photogrammetric and LiDAR equations applied to the 2D image data and the 3D image data. After the raster metrics and the point cloud are created, they are stored locally on a hard disk inside the server.
The server 121 may send a series of messages to one or more clients 122 to notify them that raw data sets, such as raster metrics and point clouds, are available for further processing.
After receiving such a message from the server 121, each client 122 may ask the server 121 to provide the data required by its metricXHandler modules 125a-125n to perform their own processing tasks. The required data may include, but is not limited to: specific metrics such as a Digital Surface Model (DSM), a Digital Terrain Model (DTM), and the like.
Once the clients 122 have successfully performed their own processing tasks, they can copy the results into a folder on the server's internal hard drive, which can be accessed through the Samba server 126 (a file service on the LAN).
It will be appreciated that the dataGeorefHandler module 124 may require, in addition to the 3D image data stream, the 2D image data stream and the position and/or orientation related data stream, a system installation file containing system installation parameters. In some embodiments, the system installation parameters are unique to the physical installation of the system, as they describe the actual spatial distribution and orientation of the 3D sensor 111 and the 2D sensor 112 relative to the POS 130 and the GPS antennas.
In some embodiments, to simplify the overall configuration of the system 100, the various parameters may be entered into a predefined XML file stored in the server 121. Thus, a simple text editor can be used to configure the processor 120.
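As a sketch of how such a predefined XML installation file might be read, the example below uses the Python standard library; the element and attribute names are hypothetical and do not reflect the actual file schema.

```python
import xml.etree.ElementTree as ET

def load_installation(path="system_installation.xml"):
    """Parse hypothetical mounting parameters (lever arms, boresight angles) from an XML file."""
    root = ET.parse(path).getroot()

    def vec(tag):
        node = root.find(tag)
        return [float(node.get(axis)) for axis in ("x", "y", "z")]

    return {
        "lidar_lever_arm": vec("lidarLeverArm"),    # scanner offset in the body frame [m]
        "lidar_boresight": vec("lidarBoresight"),   # boresight misalignment angles [deg]
        "camera_lever_arm": vec("cameraLeverArm"),  # camera offset in the body frame [m]
    }
```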
Example 3
Fig. 4 shows a flow chart of some of the embodiments of the present invention. Details of the elements 310a, 320a, 330a, 340a, 350a, 360b, 360c, 370a, 370b, 390a, 390b, 390c and steps S320 to S390 will be described with reference to fig. 4.
The streaming reader module 123 is connected to the POS module 130, the 3D sensor 111, the 2D sensor 112 and a system configuration (system installation) module 140, respectively. The POS module 130 sends position and status information (330a) to the streaming reader module 123, and the 3D sensor 111 sends target spatial coordinates, referenced to the 3D sensor 111, to the streaming reader module 123; it may also send related attribute information, e.g., object attributes such as color or reflectivity. The 2D sensor 112 sends the streaming reader module 123 imaging information presented as 2D arrays with multiple layers of related information, where each layer may carry, for example, red, green or blue information, or reflectance measurements for various spectral bands. The system installation module 140 passes the information contained in the XML file to the streaming reader module 123; this information relates to the physical mounting of each sensor on, for example, the vehicle carrying the system. The data streams 310a, 320a and 330a share one field of information, the timestamp, which for the 3D sensor 111 and the 2D sensor 112 is synchronized by the pulse-per-second (PPS) signal of the POS module.
The data streams 310a, 320a and 330a are synchronized by the streaming reader module 123, since each stream is received at its own rate. Once synchronized and merged together by the module 123, the information is passed to the data geo-reference processor module 124, where the actual point cloud (360a) is geo-referenced from the data of the 3D sensor 111, and an image is generated from the data of the 2D sensor 112 and geo-referenced (360c). The data geo-reference processor module 124 may also generate Total Propagated Uncertainty (360b) information for each point of the point cloud. The point cloud 360a and the associated raster image 360c are passed to module 125a for point cloud (370a) and image (370b) classification and segmentation, respectively. Once classified, the two data sets are merged together in step S380. Thereafter, the merged data is passed to module 125n for a final feature extraction process that generates a set of vector files (390a) and raster files (390c). Finally, these files are loaded into a database to allow fast "search and find". Further, the files that make up the vector files 390a and 390b are placed on a Samba server, which allows any user to view the collected and processed data on a remote hard disk and to copy the data to their own computer by simple drag and drop. Note that, for simplicity, only the module 123 is shown in fig. 4 writing to the system log (350a); in practice, the modules of S350, S360, S370, S380 and S390 all write their output to the system log (350a).
The client 122 may collect the required data packets from the data geo-reference processor module 124 on the server 121 and pass them to the various sub-modules that handle the directly geo-referenced data, i.e. the metricXHandler modules 125a-125n of S370, S380 and S390 on the client 122, which produce the target products.
Streaming reader module 123
Although not shown, the streaming reader module 123 may include, but is not limited to: a streamReader3D module 123a, a streamReader2D module 123b, a streamReaderPOS module 123c and a system configuration reader 123d (sysConfigReader). The streaming reader module 123 may interface with the imaging module (the 3D sensor 111 and the 2D sensor 112) and with the POS 130, respectively. After the 3D image data is transmitted from the 3D sensor 111 to the streamReader3D module 123a, the 2D image data is transmitted from the 2D sensor 112 to the streamReader2D module 123b, and the position and/or orientation related data is transmitted from the POS 130 to the streamReaderPOS module 123c, the streaming reader module 123 may receive or collect the 3D image data, the 2D image data, the position and/or orientation related data and the system installation information (S350).
Data geo-reference processor module 124
The data geo-reference processor module 124 may perform the actual geo-referencing of the 3D image data and the 2D image data (S360). Through geo-referencing, coordinates in the sensor frame are converted to coordinates in the mapping frame, such as ECEF coordinates. In some embodiments, LiDAR equations and/or photogrammetric equations may be used to perform the conversion. In some embodiments, the data geo-reference processor module 124 may perform automatic post-processing of the trajectory using the Applanix cloud service or a POSPac script (not shown).
Although not shown, another sub-module may be added at the end to perform strip alignment of the 3D image data and the 2D image data. Strip alignment may also compensate for any inaccurate boresight angle estimation. In some embodiments, the data geo-reference processor module 124 may perform the strip alignment.
It will be appreciated that the coordinate transformation described above is applicable to post-processing as well as real-time or near real-time processing.
Metric 1 processor module 125a for classifier (Metrics1Handler module 125a)
This is the first stage of data processing that occurs after the geo-referencing (S370). The metric 1 processor module 125a may act as a classifier for the 3D image data and the 2D image data.
For the 2D image data, in the case of a multispectral imager, after the image data is first radiometrically corrected and then geometrically corrected, some indices, such as NDVI, SR and PRI, are extracted or generated from band ratios; in most cases, the 2D image data is then processed using remote sensing image analysis to segment it.
For the 3D image data, geometric and radiometric corrections may be applied to the intensity values of the point cloud. Thereafter, the octree is calculated so that the plurality of cells have the same size regardless of the number of points contained in each cell. Thereafter, a normal is calculated for each data point, and eigenvalues and/or eigenvectors are created for each cell of the plurality of cells based on the calculated normals. The normal may be a vector perpendicular to a best-fit plane of the set of data points. In some embodiments, the eigenvalues and/or eigenvectors of each cell are computed from a covariance matrix created from the neighborhood of each data point. Thereafter, the eigenvalues are normalized and at least one geometric attribute is calculated for each of the plurality of cells. The geometric attributes may include, but are not limited to: linearity, planarity, scattering, total variance, anisotropy, eigenentropy, local curvature, normal value, intensity, elevation Above Ground Elevation (AGE) and delta elevation.
The point cloud is then segmented according to the geometric attributes. It is to be appreciated that the point cloud can be segmented using at least one of the geometric attributes, in particular elevation, normal, intensity, homogeneity/variation or eigenvalues. The data points in each cell are classified according to a preset set of categories, such as the ASPRS LiDAR classes. Thereafter, the data points may be gridded and used as the zero level for AGE evaluation. Thus, each data point may have a class and an AGE elevation value.
Metric 2 processor module 125b (Metrics2Handler module 125b) for data fusion
The 2D image data and the 3D image data are then merged together (S380). The metric 2 processor module 125b may recalculate at least one geometric property for each of the plurality of cells and/or reclassify each data point based on the merged image data. The metric 2 processor module 125b may select and perform a recalculation of the geometric property or a reclassification of each data point.
With respect to the recalculation of geometric attributes, the metric 2 processor module 125b may add other attributes, such as vegetation health, chlorophyll concentration, and the like. Since the 2D image data comes from the 2D sensor 112, which is a passive sensor with no canopy penetration capability, adding such attributes to data points is limited to first-return points.
For the reclassification, i.e., classification refinement, of the data points, the vegetation segments in the point cloud are enhanced and refined using the vegetation-index 2D image data. The fusion of the 2D image data with the 3D image data may rely on photogrammetry and ray-tracing principles.
In some embodiments, this step of S380 may be omitted if the 2D image data is absent.
Metrics N processor module 125n (MetricsNHandler module 125n) for specific (ad-hoc) processing
The metrics N processor module 125n applies each data point, with its class and AGE elevation value, to the generation of geographic information (S390). In this way, the metrics N processor module 125n can generate the geographic information desired by the user.
It is to be appreciated that the system 100 may generate geographic information without the 2D image data. Without the 2D image data, the 2D sensor 112 and the elements 320a, 360c and 370b may be omitted from fig. 4.
Although real-time or near real-time data processing is described throughout the description, it will be appreciated that the processor 120 may also operate as a post-processor. In case the processor 120 is a post-processor, the 2D image data and/or the 3D image data is received from one or more image files (e.g. 2D image files and/or 3D image files) instead of from the 2D sensor 112 and/or the 3D sensor 111.
In some embodiments, the 3D image file, such as a raw LiDAR file, may be an RXP file and may be accessed through a predetermined library. The POS file may be a post-processed trajectory in Smoothed Best Estimate of Trajectory (SBET) format, or may be in the "rnav_miss1.out" format generated by the Applanix POSPac software. The latter format may be used with real-time positioning services, such as RTK (real-time corrections from a base station) or RTX (real-time corrections delivered by satellite). The latter file may have the same structure as the SBET file format, but does not need to be post-processed. The 2D image file may depend on the type of the 2D sensor 112. If the 2D sensor 112 is a push-broom sensor, such as a micro-CASI, the raw 2D image may first be post-processed using the SBET trajectory and then fed into the processor 120. If a CMOS sensor is used as the 2D sensor 112, the 2D image with its time tag can be fed directly to the processor 120.
FIG. 5 shows a flow chart of the generation of geographic information in some embodiments of the invention.
Geographic information may be generated by analyzing the point cloud. The processor 120 receives a selection of a type of geographic information to be generated by the user. Although not shown, the processor 120 may recommend various types of geographic information, and the user may select at least one geographic information that the user wishes to generate.
The processor 120 extracts at least one feature associated with the selected geographic information. The features may include, but are not limited to: at least one of a geographic feature or an object feature.
The geographic information may include, but is not limited to: high-definition maps derived from 3D feature extraction, classification and segmentation; road and pavement condition monitoring and maintenance; detection of changes in natural and built environments; or plant information (e.g., tree health). In some embodiments, the geographic information may also include, but is not limited to: under-canopy mapping information or emergency response information.
HD map
The processor 120 may generate an HD map of geographic information as follows. The generation process may correspond to S260 in fig. 2.
The processor 120 extracts at least one feature associated with the selected geographic information. The features to be extracted may be related to road conditions and may include, but are not limited to: road center lines, road signs, curbs, traffic light posts, signs, cables, etc. The extraction process may correspond to S250 in fig. 2. In some embodiments, the processor 120 may be used with a dual scanner mobile mapping system. The dual scanner mobile mapping system may consist of two LiDAR sensors, a controller unit, a camera trigger box, a Global Navigation Satellite System (GNSS) receiver, and a 360 ° spherical camera. In particular, the processor 120 may be used with an Applanix AP60, a VMX system (Riegl's MMS VMX-1HA with 2 Riegl VUX-1 HA), and a Ladybug5 camera provided by FLIR.
The processor 120 may use a 360 ° rotation of the LiDAR sensor as the 3D contouring sensor (3D sensor 111). In some embodiments, the processor 120 may further use a 2D contour sensor (2D sensor 112). The contour sensor can acquire 2D slices. The 2D slices may be stacked to generate the 3D image data.
For example, for the extraction of the road centerline, the contour sensor can be used to extract a near-vertical 2D slice behind the vehicle, allowing centerline extraction. To achieve the extraction, the point cloud may be analyzed using a piecewise-linear time-series segmentation method, which uses a sliding window to compute a moving average in order to find corners or breakpoints in the point cloud. The analysis process may correspond to S240 in fig. 2. Through semantic analysis, the relative positions of these breakpoints with respect to the vehicle and/or each other are determined, road edges and/or curbs are identified, and the centerline is derived. To enable the analysis of the point cloud, the point cloud may be generated using the image data; the generation process may correspond to S230 in fig. 2.
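A minimal sketch of the sliding-window moving-average idea for finding breakpoints in a cross-road height profile is given below; the window size and threshold are illustrative only.

```python
import numpy as np

def find_breakpoints(profile_z, window=15, threshold=0.05):
    """Flag indices where a point departs from the moving average of its window.

    profile_z : 1-D array of heights along a cross-road slice, ordered by lateral position.
    Returns the indices of candidate corners / curb edges.
    """
    kernel = np.ones(window) / window
    moving_avg = np.convolve(profile_z, kernel, mode="same")   # sliding-window mean
    residual = np.abs(profile_z - moving_avg)
    return np.flatnonzero(residual > threshold)
```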
In some embodiments, analysis of the accumulation of location and/or orientation related data generated by the POS130 (e.g., IMU) may assist in detecting a road centerline. In some embodiments, the detection of the centerline may provide a set of direction vectors that may direct the sliding analysis window to slide in the correct direction to reduce processing time. In some embodiments, the accumulation may also be converted to polylines, which may then be saved as a Shapefile vector file. The saved file may be reviewed later.
In some countries, curbstones follow a consistent pattern, such as alternating stripes of black and gray (the natural color of stone or concrete). Thus, the extraction of curbstones is based on their geometric and physical properties, such as color and reflectivity. The extraction may be performed after centerline detection, since each side of the road has already been extracted. Once the edge of the road is found, the reflectivity can be analyzed to determine the boundary of the curb.
The road markings may be extracted based on the intensity attribute. The road point cloud may be extracted from the entire point cloud using the curb boundary information. On this sub-point cloud, i.e. the extracted road point cloud, an intensity threshold may be applied, and the remaining points may be clustered using a Euclidean distance filter. Thereafter, the shape and label of each cluster are analyzed.
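As an illustrative sketch of this pipeline, DBSCAN is used below as a stand-in for the Euclidean distance filter; the intensity threshold and clustering parameters are placeholders and would need tuning for real data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_marking_clusters(road_points, road_intensity,
                             intensity_threshold=200, eps=0.15, min_points=20):
    """Keep high-intensity road points and group them into candidate marking clusters."""
    bright = road_points[road_intensity > intensity_threshold]   # retro-reflective paint is bright
    if len(bright) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(bright[:, :2])
    return [bright[labels == k] for k in set(labels) if k != -1]  # label -1 is noise
```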
It is understood that the extracted columnar features may be artificial objects, such as lamp posts, traffic lights, etc., or may be trees. A slice of data between 1 meter and 1.3 meters above ground is extracted. This slice is then divided into smaller portions to reduce processing time; for example, the slice is clustered using a Euclidean distance segmentation algorithm. For each cluster, a cylinder is fitted to the clustered slice of data and a fit score is output. The fit score is used to evaluate the shape type. In some embodiments, the fit score may be derived from the residual norm, e.g., the RMSE (root mean square error) of the orthogonal distances between the points and the best-fit cylinder.
If the fit score is high, the cluster appears cylindrical, meaning either a tree or some artificial pole. If the fit score is low, the cluster is not a cylinder and can be set aside for further processing. For each fit, the processor 120 may then extract further data slices every 1 or 2 meters upwards (until no more points are available) and perform a cylinder fit on each. If subsequent fits show high scores and similar cylinder characteristics, all centered on the same X, Y geographic center and with a consistent number of points within each cluster, the processor 120 assumes that the feature is an artificial feature. Further processing is then performed by extracting a buffer around the centroid of the lowest cluster. The properties of the sequence of slices may then be used with machine learning methods to establish the type of object (based on height, diameter, etc.).
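The per-slice fitting step can be illustrated with an algebraic circle fit on the horizontal cross-section of a cluster; this is a simplification of a full 3D cylinder fit, and the residual returned here is low for a good cylindrical fit (the fit "score" referred to above may be defined with the opposite sense).

```python
import numpy as np

def fit_circle_xy(slice_points):
    """Least-squares circle fit to the XY footprint of one height slice.

    slice_points : (N, 3) points of one cluster slice.
    Returns (center_x, center_y, radius, rmse); a low rmse suggests a pole- or
    trunk-like cross-section, a high rmse suggests the cluster is not cylindrical.
    """
    x, y = slice_points[:, 0], slice_points[:, 1]
    # Algebraic (Kasa) fit: x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    residuals = np.hypot(x - cx, y - cy) - radius   # orthogonal distance to the fitted circle
    return cx, cy, radius, float(np.sqrt(np.mean(residuals ** 2)))
```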
In the presence of a tree, a Quantitative Structure Model (QSM) algorithm is applied to the buffered point cloud. The algorithm may generate a watertight mesh of the trunk and branch network. From the mesh, the processor 120 may evaluate the bounding box and adjust the size of the buffer. For this new buffer, the processor 120 may evaluate the height of the tree relative to the AGE and the crown diameter by fitting a circle or ellipsoid.
In some embodiments, the fitted object may be determined based on a fit score, e.g., a residual norm. The fitted objects may include, but are not limited to: cylinders, circles and/or ellipsoids. In some embodiments, the library performing the fitting may provide multiple types of objects for fitting and output a fit score.
The processor 120 may then evaluate the volume of the tree (trunk and branch network) using the mesh generated by the QSM algorithm. From the volume and the tree species information, for example obtained from third-party sources, the processor 120 may estimate the weight and/or carbon content of the tree.
Each object is then stored in a geographic information system database where it can be further displayed and analyzed.
The real-time or near-real-time results are stored in the IMU/body reference frame with an appropriate timestamp. This may allow the final position of the feature to be refined: if the user wants a more precise position, he may post-process the trajectory to obtain an SBET file and apply it to the points of interest.
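A minimal sketch of that refinement step is given below: a feature stored in the body frame with its timestamp is re-georeferenced with a pose interpolated from the post-processed trajectory. The pose_lookup helper and the angle conventions are assumptions of the sketch; the SBET file format itself is not parsed here.

    # Minimal sketch (assumed pose_lookup helper and angle conventions).
    import numpy as np
    from scipy.spatial.transform import Rotation

    def refine_feature_position(feature_xyz_body, timestamp, pose_lookup):
        """pose_lookup(t) is assumed to interpolate the post-processed trajectory and
        return (position_xyz, roll_deg, pitch_deg, heading_deg) at time t."""
        position, roll, pitch, heading = pose_lookup(timestamp)
        # Body-to-mapping-frame rotation built from the refined attitude angles.
        R = Rotation.from_euler("ZYX", [heading, pitch, roll], degrees=True).as_matrix()
        return np.asarray(position) + R @ np.asarray(feature_xyz_body)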
Tree health (as shown in fig. 5)
The processor 120 may generate tree health information as the geographic information. The generation process may correspond to S260 in fig. 2.
The processor 120 may perform a mapping of trees along curbs, in parks and in residential areas, and assess their health. An arborist may use the geographic information to determine at least one of the following: the tree species derived from the images, the tree circumference measured at a predetermined height (in meters) above the ground, the estimated tree height and crown size, or a tree location map of all trees, in which each point feature refers to the centroid of the trunk at a predetermined height above the ground.
For such applications, two types of mobile system integration may be used. The first system may be vehicle-based, for example mounted on a car, and the other system may be backpack-based. In some embodiments, the processor 120 may be used with a Riegl MMS VMX-1HA (dual-laser-head mobile mapping system). The Riegl MMS VMX-1HA may consist of two Riegl VUX-1HA scanners, a controller unit and a camera trigger box, an Applanix AP60, a 360° spherical Ladybug5 camera from FLIR, and one or two micro-CASI 1920 units from Itres. In other embodiments, the second system may consist of the processor 120 (with a smaller footprint), a Riegl miniVUX-1UAV, an APX20 from Applanix, a Theta V 360° spherical camera from Ricoh, and a micro-CASI 1920 from Itres.
The processing workflow may be the same as for the high-definition map described above. This means that 3D and 2D data are acquired from the 3D and 2D imagers and then geo-referenced using the trajectory information (S501 and S505). The point cloud may be corrected geometrically (S503) and radiometrically (S504). It will be appreciated that the point cloud may be generated in the same manner as in figs. 3 and 4. S503 and S504 may correspond to S240 in fig. 2. However, for this application, an additional 2D sensor 112, such as a hyperspectral imager, may help identify vegetation in the scene by analyzing a hyperspectral vegetation index, such as NDVI, SRI, etc. (S511).
First, the LiDAR data, which constitutes the 3D image data, may be classified using the methods proposed for the HD map as described above. Thereafter, the 2D image data from the hyperspectral imager may be radiometrically corrected using the Itres RCX library (S507) and then geometrically projected using the trajectory information (S508), yielding calibrated and geo-referenced 2D image data (S509) on which an atmospheric correction may be performed (S510). However, in this application the distance between the sensor and the objects is minimal, since the trees are on the ground, so the atmospheric correction is not used.
Subsequently, the value of NDVI is calculated (S511). Based on the calculated value of NDVI, features may be extracted and/or determined. The extraction and/or determination process may correspond to S250 in fig. 2. Low values of NDVI, such as 0.1 and below, may indicate non-vegetation areas. Moderate values of NDVI, such as 0.2 to 0.3, may indicate shrubs and grass. High values of NDVI, for example 0.6 to 0.8, may indicate trees and forests. The reference value may be determined by a user and/or the processor 120.
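A minimal sketch of the NDVI computation and of the coarse thresholding suggested by the example values above is given below; the band inputs and the exact breakpoints are assumptions of the sketch.

    # Minimal sketch (assumed band inputs and example thresholds).
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index, computed per pixel."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / np.maximum(nir + red, 1e-9)     # guard against division by zero

    def classify_ndvi(ndvi_map):
        """0 = non-vegetation, 1 = shrubs/grass, 2 = trees/forest (coarse example classes)."""
        classes = np.zeros(ndvi_map.shape, dtype=np.uint8)
        classes[(ndvi_map >= 0.2) & (ndvi_map < 0.6)] = 1
        classes[ndvi_map >= 0.6] = 2
        return classes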
At this point, the 3D and 2D data (S510, S511, S506 and S504) may be merged together based on the timestamp information. The fusion process may include point cloud coloring (S521), classification based on geometric, color and intensity attributes (S522), and a final segmentation (S523). In the case of tree health, a tree may be extracted from the point cloud, and a mesh of the tree is generated for volume estimation using the QSM algorithm (S524). From the mesh, the volume is estimated (S525), and the carbon content may be estimated using the tree volume and the tree species information provided by local arborists (S526). Finally, the vegetation index from S511 may be used to estimate a tree health index.
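For the point cloud coloring part of the fusion (S521), the following sketch matches each point to the image closest in time and samples a pixel color; the project_to_pixel camera-projection helper is a hypothetical placeholder, since the camera calibration details are not reproduced here.

    # Minimal sketch of timestamp-based coloring (project_to_pixel is a hypothetical helper).
    import numpy as np

    def color_point_cloud(points_xyzt, image_times, images, project_to_pixel):
        """points_xyzt: (N, 4) array of x, y, z, time. Returns per-point RGB colors."""
        image_times = np.asarray(image_times)
        colors = np.zeros((len(points_xyzt), 3), dtype=np.uint8)
        for i, (x, y, z, t) in enumerate(points_xyzt):
            k = int(np.argmin(np.abs(image_times - t)))      # image closest in time
            uv = project_to_pixel(np.array([x, y, z]), k)    # assumed to return (u, v) or None
            if uv is not None:
                u, v = uv
                colors[i] = images[k][v, u]                  # sample the pixel color
        return colors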
Those skilled in the art will appreciate that variations and combinations of the above-described features, other than alternatives or substitutes for one another, may be combined to form further embodiments within the scope of the invention.

Claims (19)

1. A system that facilitates geographic information generation, comprising:
an imaging module for acquiring image data; and
a processor to receive the image data and generate a point cloud having a plurality of data points using the image data,
wherein the processor is further operable to analyze the point cloud to extract at least one feature associated with the geographic information, and to generate the geographic information using the extracted feature.
2. The system of claim 1, wherein the imaging module includes at least one 3D sensor for generating 3D image data as the image data.
3. The system of claim 1 or 2, wherein the generation of the point cloud comprises geo-referencing the image data.
4. The system of any one of claims 1-3, further comprising a Position and Orientation System (POS) operable to send position- and/or orientation-related data to the processor.
5. The system of claim 4, wherein the processor is operable to parse the transmitted data and perform geo-referencing of the image data using the parsed information.
6. The system of any one of claims 1-5, wherein the processor is operable to radiometrically and/or geometrically correct the point cloud.
7. The system of any one of claims 1-6, wherein the processor is operable to compute an octree and separate the point cloud into a plurality of cells based on the computed octree, such that each cell of the plurality of cells has the same size.
8. The system of claim 7, wherein the processor is operable to calculate a normal for each of the plurality of data points and create a feature value for each of the plurality of cells based on the calculated normal.
9. The system of claim 8, wherein the processor is operable to calculate at least one geometric attribute of each of the plurality of cells by normalizing the feature values of each of the plurality of cells.
10. The system of claim 9, wherein the processor is operable to segment the point cloud according to the at least one geometric attribute.
11. The system of claim 10, wherein the processor is operable to classify each of the plurality of data points based on the point cloud segmented according to the at least one geometric attribute.
12. The system of any one of claims 1-11, wherein the imaging module further comprises at least one 2D sensor for generating 2D image data.
13. The system of claim 12, wherein the processor is operable to combine the 2D image data and the 3D image data.
14. The system of any one of claims 9-11, wherein the processor is operable to recalculate the at least one geometric attribute for each of the plurality of cells and/or reclassify each of the plurality of data points based on the combined image data.
15. The system of any one of claims 1-14, wherein the processor is operable to receive a selected type of the geographic information that needs to be generated and extract the at least one feature associated with the geographic information that needs to be generated.
16. The system of any one of claims 1-15, wherein the geographic information comprises at least one of high definition maps, vegetation information, or public infrastructure information.
17. The system of any one of claims 1-16, wherein the feature comprises at least one of a geographic feature or an object feature.
18. The system of any one of claims 1-17, wherein the imaging module is operable to provide the image data previously obtained to the processor.
19. A method that facilitates geographic information generation, comprising:
acquiring image data on an imaging module;
receiving, on a processor, the image data from the imaging module;
generating, on the processor, a point cloud having a plurality of data points using the image data;
analyzing, on the processor, the point cloud;
extracting, on the processor, at least one feature associated with the geographic information; and
generating, on the processor, the geographic information using the extracted features.
CN201980004687.7A 2018-10-04 2019-09-18 System and method for facilitating geographic information generation Pending CN113168714A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10201808791P 2018-10-04
SG10201808791P 2018-10-04
PCT/SG2019/050466 WO2020072001A1 (en) 2018-10-04 2019-09-18 System and method for facilitating generation of geographical information

Publications (1)

Publication Number Publication Date
CN113168714A true CN113168714A (en) 2021-07-23

Family

ID=70055977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980004687.7A Pending CN113168714A (en) 2018-10-04 2019-09-18 System and method for facilitating geographic information generation

Country Status (8)

Country Link
JP (1) JP2022511147A (en)
KR (1) KR20210067979A (en)
CN (1) CN113168714A (en)
AU (1) AU2019352559A1 (en)
GB (1) GB2589024A (en)
SG (1) SG11202009873RA (en)
TW (1) TW202022808A (en)
WO (1) WO2020072001A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504543B (en) * 2021-06-16 2022-11-01 国网山西省电力公司电力科学研究院 Unmanned aerial vehicle LiDAR system positioning and attitude determination system and method
CN117710590A (en) * 2022-09-06 2024-03-15 北京图森智途科技有限公司 Parameterization and map construction method for point cloud data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2014254426B2 (en) * 2013-01-29 2018-05-10 Andrew Robert Korb Methods for analyzing and compressing multiple images
US9803985B2 (en) * 2014-12-26 2017-10-31 Here Global B.V. Selecting feature geometries for localization of a device
US9830706B2 (en) * 2015-09-17 2017-11-28 Skycatch, Inc. Generating georeference information for aerial images
CN106688017B (en) * 2016-11-28 2019-03-01 深圳市大疆创新科技有限公司 Generate method, computer system and the device of point cloud map
KR102647351B1 (en) * 2017-01-26 2024-03-13 삼성전자주식회사 Modeling method and modeling apparatus using 3d point cloud

Also Published As

Publication number Publication date
KR20210067979A (en) 2021-06-08
GB202019895D0 (en) 2021-01-27
GB2589024A (en) 2021-05-19
TW202022808A (en) 2020-06-16
AU2019352559A1 (en) 2020-12-17
SG11202009873RA (en) 2020-11-27
JP2022511147A (en) 2022-01-31
WO2020072001A1 (en) 2020-04-09

Similar Documents

Publication Publication Date Title
Torres-Sánchez et al. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards
Iglhaut et al. Structure from motion photogrammetry in forestry: A review
Singh et al. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications
Liu et al. LiDAR-derived high quality ground control information and DEM for image orthorectification
Sona et al. UAV multispectral survey to map soil and crop for precision farming applications
Radoux et al. A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery
Vo et al. Processing of extremely high resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS data fusion contest—Part B: 3-D contest
Li et al. Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Heenkenda et al. Mangrove tree crown delineation from high-resolution imagery
Stone et al. Alternatives to LiDAR-derived canopy height models for softwood plantations: a review and example using photogrammetry
Kuzmin et al. Automatic segment-level tree species recognition using high resolution aerial winter imagery
US20220004740A1 (en) Apparatus and Method For Three-Dimensional Object Recognition
US20220366605A1 (en) Accurate geolocation in remote-sensing imaging
CN109492606A (en) Multispectral vector picture capturing method and system, three dimensional monolithic method and system
Demir Using UAVs for detection of trees from digital surface models
Ramli et al. Homogeneous tree height derivation from tree crown delineation using Seeded Region Growing (SRG) segmentation
CN110675448A (en) Ground light remote sensing monitoring method, system and storage medium based on civil aircraft
Tiwari et al. UAV remote sensing for campus monitoring: a comparative evaluation of nearest neighbor and rule-based classification
CN113168714A (en) System and method for facilitating geographic information generation
Parmehr et al. Mapping urban tree canopy cover using fused airborne lidar and satellite imagery data
Fol et al. Evaluating state-of-the-art 3D scanning methods for stem-level biodiversity inventories in forests
Yurtseven et al. Using of high-resolution satellite images in object-based image analysis
Pahlavani et al. 3D reconstruction of buildings from LiDAR data considering various types of roof structures
Wijesingha Geometric quality assessment of multi-rotor unmanned aerial vehicle borne remote sensing products for precision agriculture
Benjamin et al. Assessment of Structure from Motion (SfM) processing parameters on processing time, spatial accuracy, and geometric quality of unmanned aerial system derived mapping products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination