CN115265519A - Online point cloud map construction method and device - Google Patents

Info

Publication number
CN115265519A
Authority
CN
China
Prior art keywords
point cloud
dynamic scene
cloud data
characteristic points
line
Prior art date
Legal status
Pending
Application number
CN202210812140.3A
Other languages
Chinese (zh)
Inventor
任勇
张广鹏
何贝
刘鹤云
张岩
Current Assignee
Beijing Sinian Zhijia Technology Co., Ltd.
Original Assignee
Beijing Sinian Zhijia Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Sinian Zhijia Technology Co., Ltd.
Priority to CN202210812140.3A
Publication of CN115265519A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3807: Creation or updating of map data characterised by the type of data
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863: Structures of map data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The application provides an online point cloud map construction method and apparatus, an electronic device, and a machine-readable storage medium, applied to an unmanned vehicle in a dynamic scene. The method comprises: acquiring a point cloud data set of the dynamic scene at a current time node; acquiring line feature points and/or plane feature points of the point cloud data set; matching those line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data sets of the dynamic scene at historical time nodes, and determining the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene; and constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.

Description

Online point cloud map construction method and device
Technical Field
The present disclosure relates to the field of point cloud map construction technologies, and in particular, to a method and an apparatus for constructing an online point cloud map, an electronic device, and a machine-readable storage medium.
Background
In a port scene, internal container trucks mainly carry containers between the yard and the quay. Making these trucks unmanned requires solving the problem of automatic positioning: the vehicle must be able to obtain its current position accurately and in real time, usually through on-board sensors such as GPS and lidar. Because a port scene contains occluding structures such as bridge cranes and containers, GPS alone cannot provide effective and reliable position information, so lidar-based positioning is needed. However, because the containers in a port scene are constantly moved during handling, the conventional approach of building a preset point cloud map in advance and matching the current lidar data against it does not work.
Disclosure of Invention
The application provides an online point cloud map construction method, applied to an unmanned vehicle in a dynamic scene, the method comprising the following steps:
acquiring a point cloud data set of the dynamic scene at a current time node;
acquiring line feature points and/or plane feature points of the point cloud data set;
matching the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data sets of the dynamic scene at historical time nodes, and determining the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
Optionally, the acquiring line feature points and/or plane feature points of the point cloud data set includes:
calculating the curvature of each point cloud datum based on that datum and at least one adjacent point cloud datum in the point cloud data set;
and determining, among the calculated curvatures, the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene includes:
selecting the point cloud data whose curvature is greater than a line feature point threshold or smaller than a plane feature point threshold, and among them determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene includes:
fitting the line feature points and/or plane feature points of at least one time node of the dynamic scene to obtain a fitted line and/or a fitted plane, and constructing the online point cloud map of the dynamic scene from the fitted line and/or fitted plane.
Optionally, the adding the online point cloud map to a preset point cloud map of the dynamic scene includes:
projecting the online point cloud map of the current time node into the global coordinate system through the relative relation between the pose of the unmanned vehicle and the global coordinate system, and adding it to the preset point cloud map of the dynamic scene.
Optionally, the dynamic scene is a port scene.
The application further provides an online point cloud map construction apparatus, applied to an unmanned vehicle in a dynamic scene, the apparatus comprising:
a data acquisition module, configured to acquire a point cloud data set of the dynamic scene at a current time node;
a feature extraction module, configured to acquire line feature points and/or plane feature points of the point cloud data set;
a pose determination module, configured to match the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data sets of the dynamic scene at historical time nodes, and to determine the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and a map updating module, configured to construct an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and to add the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
Optionally, the acquiring line feature points and/or plane feature points of the point cloud data set includes:
calculating the curvature of each point cloud datum based on that datum and at least one adjacent point cloud datum in the point cloud data set;
and determining, among the calculated curvatures, the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene includes:
selecting the point cloud data whose curvature is greater than a line feature point threshold or smaller than a plane feature point threshold, and among them determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene includes:
fitting the line feature points and/or plane feature points of at least one time node of the dynamic scene to obtain a fitted line and/or a fitted plane, and constructing the online point cloud map of the dynamic scene from the fitted line and/or fitted plane.
Optionally, the adding the online point cloud map to a preset point cloud map of the dynamic scene includes:
projecting the online point cloud map of the current time node into the global coordinate system through the relative relation between the pose of the unmanned vehicle and the global coordinate system, and adding it to the preset point cloud map of the dynamic scene.
The present application further provides an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the above method by executing the executable instructions.
The present application also provides a machine-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the above-described method.
Through the above embodiments, an online point cloud map can be constructed from the point cloud data and used to update the preset point cloud map, so that the updated map matches the pose of the unmanned vehicle at the current time node, reducing the positioning error and improving the positioning accuracy of the unmanned vehicle.
Drawings
FIG. 1 is a flow diagram of a method for online point cloud mapping, shown in an exemplary embodiment;
FIG. 2 is a block diagram of an online point cloud mapping apparatus, shown in an exemplary embodiment;
FIG. 3 is a hardware structure diagram of the electronic device in which an online point cloud mapping apparatus is located, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In order to make those skilled in the art better understand the technical solution in the embodiment of the present disclosure, the following briefly describes the related art of positioning an unmanned vehicle according to the embodiment of the present disclosure.
Point cloud data: a lidar emits laser pulses outward; the pulses reflect from the ground or object surfaces and return to the lidar sensor as multiple echoes. The processed reflection data are called point cloud data.
Pose: the transformation matrix corresponding to the relative relation between the unmanned vehicle and the global coordinate system, describing the position and orientation of the unmanned vehicle.
Application scenario overview
In a port scene, internal container trucks mainly carry containers between the yard and the quay. Making these trucks unmanned requires solving the problem of automatic positioning: the vehicle must be able to obtain its current position accurately and in real time, generally through on-board sensors such as GPS and lidar. Because a port scene contains occluding structures such as bridge cranes and containers, GPS alone cannot provide effective and reliable position information, so lidar-based positioning is needed. However, because the containers in a port scene are constantly moved during handling, the conventional approach of building a preset point cloud map in advance and matching the current lidar data against it does not work.
In practice, the unmanned vehicle usually scans the scene with a lidar to obtain point cloud data. The moving containers occupy a large share of the space; if their points were simply removed, too little point cloud data would remain to produce an accurate match against the preset map, introducing a large positioning error and failing the lane-level positioning requirement of unmanned port operation.
Inventive concept
As described above, in a dynamic scene the environment changes frequently, so matching the unmanned vehicle's scan against a fixed preset point cloud map clearly cannot yield an accurate positioning result, and the vehicle's positioning accumulates error.
In view of this, the present specification aims to provide a technical solution that constructs an online point cloud map from point cloud data and uses it to update the preset point cloud map.
The core concept of the specification is as follows:
and constructing an online point cloud map according to the line and surface features of the extracted point cloud data set, adding the online point cloud map into a preset point cloud map based on the pose of the unmanned vehicle, and updating the preset point cloud map.
Through the method, the preset point cloud map can be continuously updated along with scene change, the matching accuracy of the pose of the unmanned vehicle and the preset point cloud map is improved, more accurate point location information can be output, and therefore the accuracy of the point location of the unmanned vehicle is improved.
The present application is described below with reference to specific embodiments and specific application scenarios.
Referring to FIG. 1, FIG. 1 is a flowchart of an online point cloud map construction method according to an exemplary embodiment. The method performs the following steps:
acquiring a point cloud data set of a dynamic scene at a current time node;
acquiring line feature points and/or plane feature points of the point cloud data set;
matching the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data sets of the dynamic scene at historical time nodes, and determining the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
Step 102: acquiring a point cloud data set of the dynamic scene at the current time node.
Step 104: acquiring line feature points and/or plane feature points of the point cloud data set.
Step 106: matching the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data sets of the dynamic scene at historical time nodes, and determining the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene.
Step 108: constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
An unmanned vehicle usually carries a lidar to meet its positioning needs. The lidar's multiple scan beams sweep the scene the vehicle is currently in to obtain its point cloud data: the lidar emits laser pulses outward, the pulses reflect from the ground or object surfaces and return to the sensor as multiple echoes, and the processed reflection data form the point cloud data.
After the point cloud data are acquired, features can be identified in them, namely line feature points and plane feature points, to facilitate matching against the preset point cloud map.
In one embodiment, the curvature of each point can be computed from that point and at least one adjacent point in the point cloud data set; among the computed curvatures, the point cloud data corresponding to the M largest curvatures can be taken as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points. As each scan beam of the lidar sweeps the current scene, the curvature at every point on the beam can be computed from the information of its nearby points; after sorting the curvatures along each beam, the m points with the larger curvatures and the n points with the smaller curvatures can be selected as the line and plane feature points of the cloud, respectively.
For example, when a beam scans a planar region of the scene, such as a wall, the curvature is small; when it scans an edge, such as the corner of a wall or a lamp post, the curvature is large. Line and plane features can therefore be separated by curvature magnitude: for instance, the points corresponding to the 20 largest curvatures can be taken as line feature points and the points corresponding to the 20 smallest curvatures as plane feature points.
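A minimal sketch of this step (illustrative code, not from the patent), assuming the points of one scan beam arrive as an ordered (P, 3) array and using made-up defaults of k = 5 neighbours on each side and M = N = 20 feature points:

```python
import numpy as np

def extract_features(ring, k=5, m=20, n=20):
    """Split one scan beam into line (edge) and plane feature points by curvature.

    ring: (P, 3) array of points ordered along the beam.
    k, m and n are illustrative values, not taken from the patent.
    """
    P = len(ring)
    curvature = np.full(P, np.inf)
    for i in range(k, P - k):
        # Sum of offsets from point i to its 2k neighbours: a large norm means
        # the beam bends sharply at i (an edge); a small norm means the
        # neighbourhood is locally flat (a plane).
        diff = (ring[i - k:i + k + 1] - ring[i]).sum(axis=0)
        curvature[i] = diff @ diff
    order = np.argsort(curvature[k:P - k]) + k   # indices by ascending curvature
    plane_points = ring[order[:n]]               # N smallest curvatures -> planes
    line_points = ring[order[-m:]]               # M largest curvatures  -> lines
    return line_points, plane_points
```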
In another embodiment, the point cloud data can first be screened by curvature value, keeping as candidate feature points only those that satisfy a threshold condition: among the points whose curvature is greater than a line feature point threshold, the M with the largest curvature are determined as line feature points of the dynamic scene, and/or, among the points whose curvature is smaller than a plane feature point threshold, the N with the smallest curvature are determined as plane feature points.
Because a point cloud contains many points, computing curvature for all of them consumes considerable resources and time, which can prevent the unmanned vehicle from obtaining positioning information in real time.
In one embodiment, the line and plane feature points of the point cloud can instead be extracted by random sampling or uniform grid sampling. For example, the space scanned by the lidar is divided into grids, and one feature point is extracted per grid, as in the sketch below.
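A sketch of such uniform grid sampling under the same assumptions (the 2 m cell size is an arbitrary example):

```python
import numpy as np

def grid_sample(points, cell=2.0):
    """Keep one representative point per cell of a uniform 3-D grid.

    points: (P, 3) array; cell: grid edge length, an illustrative value.
    """
    keys = np.floor(points / cell).astype(np.int64)
    # np.unique over rows gives the index of one point per occupied cell.
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]
```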
In the traditional approach of matching against a preset point cloud map, once the line and plane feature points are obtained they are matched against the preset point cloud data to determine the position and orientation of the unmanned vehicle in the preset map. In the solution provided in this specification, to improve positioning accuracy, an online point cloud map is additionally constructed from the line and plane feature points and used to update the preset point cloud map.
In one embodiment, the line feature points and/or plane feature points can be matched against the line feature points and/or plane feature points corresponding to the point cloud data of the dynamic scene in the point cloud data sets of historical time nodes, and the pose of the unmanned vehicle determined from adjacent line and/or plane feature points in the dynamic scene.
For each extracted line or plane feature point, preset initial pose information determines the correspondence between the unmanned vehicle's initial pose and the preset point cloud map. The line and plane feature points of several time nodes of the dynamic scene can then be matched: adjacent feature points are selected to fit a line or a plane, and the distance from each line feature point to its fitted line and from each plane feature point to its fitted plane serve as constraints under which the pose is solved by optimization. For example, the distance from a line feature point i to the line fitted through points j and l is:
$$d_{\varepsilon} = \frac{\left| (p_i - p_j) \times (p_i - p_l) \right|}{\left| p_j - p_l \right|}$$
The magnitude of the cross product in the numerator, between (p_i − p_j) and (p_i − p_l), equals twice the area of the triangle ijl; dividing it by the length of the base (j, l) gives the height of i above (j, l), which is the distance from the line feature point to the fitted line.
The distance from a plane feature point i to the plane fitted through points j, l and m is:
$$d_{\mathcal{H}} = \frac{\left| (p_i - p_j) \cdot \left( (p_l - p_j) \times (p_m - p_j) \right) \right|}{\left| (p_l - p_j) \times (p_m - p_j) \right|}$$
The cross product of (j, l) with (j, m) gives a normal vector of the fitted plane; the distance of the feature point from the plane is then obtained by dotting the unit normal with the vector connecting the feature point to any point on the plane.
A residual can be constructed for each feature point; all residuals are then added as factors to a nonlinear optimization, and the optimal pose is solved for iteratively, for example as in the sketch below.
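A minimal sketch of that optimization, reusing the two distance functions above. The pose is parameterized as a rotation vector plus a translation, and scipy's general least-squares solver stands in for whatever iterative solver an actual implementation would use:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(line_corr, plane_corr, x0=None):
    """Solve the pose minimizing all point-to-line and point-to-plane residuals.

    line_corr:  list of (pi, pj, pl) tuples, pi from the current scan and
                pj, pl spanning the fitted line from historical features.
    plane_corr: list of (pi, pj, pl, pm) tuples for fitted planes.
    Returns (R, t). Illustrative only, not the patent's exact solver.
    """
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = [point_to_line_dist(R @ pi + t, pj, pl)
               for pi, pj, pl in line_corr]
        res += [point_to_plane_dist(R @ pi + t, pj, pl, pm)
                for pi, pj, pl, pm in plane_corr]
        return res

    sol = least_squares(residuals, np.zeros(6) if x0 is None else x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```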
After the pose is solved, the correspondence between the pose and the preset point cloud map can be used to add the online point cloud map built from the line and plane features of the current time node into the preset point cloud map, updating it.
In one embodiment, the online point cloud map of the current time node can be projected into the global coordinate system through the relative relation between the unmanned vehicle's pose and that coordinate system, and then added to the preset point cloud map of the dynamic scene. The pose of the unmanned vehicle can be represented by its position and attitude in the global coordinate system: specifically, the position is the translation of the vehicle relative to the origin of the global frame, and the attitude is its rotation relative to that frame. Through this correspondence, the online point cloud map of the current time node can be projected into the global coordinate system and added to the preset point cloud map of the dynamic scene.
For example, the position of the unmanned vehicle in the global coordinate system can be represented by a three-dimensional vector t = (x, y, z) and the attitude by a 3×3 rotation matrix R. A point P = (px, py, pz) in the online point cloud map of the current time node is transformed into the point Q = (qx, qy, qz) in the global frame through the relation Q = R·P + t; applying the same transform to every point moves the whole current online map into the global coordinate system, after which it is added to the point cloud map.
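As a sketch, the same relation applied to a whole (N, 3) array of online-map points; the yaw angle and translation below are made-up example values:

```python
import numpy as np

def to_global(points, R, t):
    """Apply Q = R @ P + t to every row of an (N, 3) point array."""
    return points @ R.T + t

# Illustrative pose: a 30-degree yaw and an arbitrary translation.
yaw = np.deg2rad(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([12.0, -3.5, 0.0])
online_map = np.random.rand(1000, 3)          # stand-in for the current online map
global_points = to_global(online_map, R, t)   # ready to merge into the preset map
```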
If the online point cloud map contains too many points, it can be downsampled to reduce memory usage, for example with voxel sampling filtering: the space is divided into small cubes and only one point is kept per cube, reducing the point count and preserving real-time performance during matching.
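A sketch of such a voxel filter; here each cube keeps the centroid of its points rather than an arbitrary member, and the 0.5 m voxel size is an illustrative choice:

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Replace all points inside each voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                  # one voxel id per input point
    counts = np.bincount(inverse).astype(float)
    out = np.empty((len(counts), 3))
    for d in range(3):                         # average x, y, z per voxel
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out
```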
Referring to FIG. 2, FIG. 2 is a block diagram of an online point cloud map construction apparatus according to an exemplary embodiment. The apparatus includes:
a data acquisition module 210, configured to acquire a point cloud data set of the dynamic scene at a current time node;
a feature extraction module 220, configured to acquire line feature points and/or plane feature points of the point cloud data set;
a pose determination module 230, configured to match the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data of the dynamic scene in the point cloud data sets of historical time nodes, and to determine the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and a map updating module 240, configured to construct an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and to add the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
Optionally, the acquiring line feature points and/or plane feature points of the point cloud data set includes:
calculating the curvature of each point cloud datum based on that datum and at least one adjacent point cloud datum in the point cloud data set;
and determining, among the calculated curvatures, the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene includes:
selecting the point cloud data whose curvature is greater than a line feature point threshold or smaller than a plane feature point threshold, and among them determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
Optionally, the constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene includes:
fitting the line feature points and/or plane feature points of at least one time node of the dynamic scene to obtain a fitted line and/or a fitted plane, and constructing the online point cloud map of the dynamic scene from the fitted line and/or fitted plane.
Optionally, the adding the online point cloud map to a preset point cloud map of the dynamic scene includes:
projecting the online point cloud map of the current time node into the global coordinate system through the relative relation between the pose of the unmanned vehicle and the global coordinate system, and adding it to the preset point cloud map of the dynamic scene.
Referring to FIG. 3, FIG. 3 is a hardware structure diagram of the electronic device in which an online point cloud map construction apparatus according to an exemplary embodiment is located. At the hardware level, the device includes a processor 302, an internal bus 304, a network interface 306, a memory 308, and a non-volatile memory 310, and may of course also include other hardware required by the service. One or more embodiments of this specification can be implemented in software, for example by the processor 302 reading the corresponding computer program from the non-volatile memory 310 into the memory 308 and then running it. Of course, besides a software implementation, the one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are only illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the present specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may be in the form of a personal computer, laptop, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of this specification to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be called second information and, similarly, second information may also be called first information without departing from the scope of one or more embodiments of this specification. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (13)

1. An online point cloud map construction method, applied to an unmanned vehicle in a dynamic scene, characterized by comprising:
acquiring a point cloud data set of the dynamic scene at a current time node;
acquiring line feature points and/or plane feature points of the point cloud data set;
matching the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data of the dynamic scene in the point cloud data sets of historical time nodes, and determining the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and constructing an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
2. The method of claim 1, wherein the acquiring line feature points and/or plane feature points of the point cloud data set comprises:
calculating the curvature of each point cloud datum based on that datum and at least one adjacent point cloud datum in the point cloud data set;
and determining, among the calculated curvatures, the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
3. The method of claim 2, wherein the determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene comprises:
selecting the point cloud data whose curvature is greater than a line feature point threshold or smaller than a plane feature point threshold, and among them determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
4. The method of claim 1, wherein the constructing an online point cloud map of the dynamic scene from line feature points and/or plane feature points of the dynamic scene comprises:
fitting the line feature points and/or plane feature points of at least one time node of the dynamic scene to obtain a fitted line and/or a fitted plane, and constructing the online point cloud map of the dynamic scene from the fitted line and/or fitted plane.
5. The method of claim 1, wherein the adding the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle comprises:
projecting the online point cloud map of the current time node into the global coordinate system through the relative relation between the pose of the unmanned vehicle and the global coordinate system, and adding it to the preset point cloud map of the dynamic scene.
6. The method of claim 1, wherein the dynamic scenario is a port scenario.
7. An online point cloud map construction apparatus, applied to an unmanned vehicle in a dynamic scene, characterized in that the apparatus comprises:
a data acquisition module, configured to acquire a point cloud data set of the dynamic scene at a current time node;
a feature extraction module, configured to acquire line feature points and/or plane feature points of the point cloud data set;
a pose determination module, configured to match the line feature points and/or plane feature points against the line feature points and/or plane feature points corresponding to the point cloud data of the dynamic scene in the point cloud data sets of historical time nodes, and to determine the pose of the unmanned vehicle from adjacent line feature points and/or plane feature points in the dynamic scene;
and a map updating module, configured to construct an online point cloud map of the dynamic scene from the line feature points and/or plane feature points of the dynamic scene, and to add the online point cloud map to a preset point cloud map of the dynamic scene based on the pose of the unmanned vehicle.
8. The apparatus of claim 7, wherein the acquiring line feature points and/or plane feature points of the point cloud data set comprises:
calculating the curvature of each point cloud datum based on that datum and at least one adjacent point cloud datum in the point cloud data set;
and determining, among the calculated curvatures, the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
9. The apparatus of claim 8, wherein the determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene comprises:
selecting the point cloud data whose curvature is greater than a line feature point threshold or smaller than a plane feature point threshold, and among them determining the point cloud data corresponding to the M largest curvatures as line feature points of the dynamic scene, and/or the point cloud data corresponding to the N smallest curvatures as plane feature points of the dynamic scene.
10. The apparatus of claim 7, wherein the constructing an online point cloud map of the dynamic scene from line feature points and/or plane feature points of the dynamic scene comprises:
fitting the line feature points and/or plane feature points of at least one time node of the dynamic scene to obtain a fitted line and/or a fitted plane, and constructing the online point cloud map of the dynamic scene from the fitted line and/or fitted plane.
11. The apparatus of claim 7, wherein the adding the online point cloud map to a preset point cloud map of the dynamic scene comprises:
projecting the online point cloud map of the current time node into the global coordinate system through the relative relation between the pose of the unmanned vehicle and the global coordinate system, and adding it to the preset point cloud map of the dynamic scene.
12. A machine readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any one of claims 1-6.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the method of any one of claims 1-6 by executing the executable instructions.
CN202210812140.3A 2022-07-11 2022-07-11 Online point cloud map construction method and device Pending CN115265519A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210812140.3A 2022-07-11 2022-07-11 Online point cloud map construction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210812140.3A 2022-07-11 2022-07-11 Online point cloud map construction method and device

Publications (1)

Publication Number Publication Date
CN115265519A 2022-11-01

Family

ID=83764757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210812140.3A Pending Online point cloud map construction method and device

Country Status (1)

Country Link
CN (1) CN115265519A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030212A (en) * 2023-03-28 2023-04-28 北京集度科技有限公司 Picture construction method, device, vehicle and program product
CN117213469A (en) * 2023-11-07 2023-12-12 中建三局信息科技有限公司 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination