CN117213469A - Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall - Google Patents


Info

Publication number
CN117213469A
CN117213469A
Authority
CN
China
Prior art keywords
point cloud; three-dimensional point cloud model; subway station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311465173.6A
Other languages
Chinese (zh)
Inventor
王浩
裴以军
朱紫威
汪咏琳
刘博林
Current Assignee
China Construction Third Engineering Bureau Information Technology Co ltd
Original Assignee
China Construction Third Engineering Bureau Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by China Construction Third Engineering Bureau Information Technology Co ltd filed Critical China Construction Third Engineering Bureau Information Technology Co ltd
Priority to CN202311465173.6A
Publication of CN117213469A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a synchronous positioning and mapping method, system, device and storage medium for a subway station hall. The method comprises the following steps: acquiring an engineering project model and a robot model of the subway station hall, wherein the engineering project model comprises characteristic objects of the subway station hall together with the position information and semantic descriptions of those characteristic objects; controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic descriptions of the characteristic objects with the three-dimensional point cloud model to obtain a prior three-dimensional point cloud model; controlling the robot model to acquire a multi-frame local three-dimensional point cloud model of the subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model; and realizing positioning and map construction of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model. The application uses the prior three-dimensional point cloud model to enhance the SLAM algorithm, thereby improving the positioning and mapping accuracy of the robot model.

Description

Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall
Technical Field
The application relates to the technical field of positioning and navigation, and in particular to a synchronous positioning and mapping method, system, device and storage medium for a subway station hall.
Background
Laser radar (lidar) has extremely high perception capability. Lidar-based simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM) can be described as follows: starting from an unknown position in an unknown environment, a robot localizes itself during movement according to sensor information and a map, and at the same time incrementally constructs the map on the basis of its own localization, thereby realizing autonomous positioning and navigation. Using the laser point cloud reflected by objects, lidar can perform high-precision mapping and positioning, thereby realizing functions such as obstacle avoidance and navigation.
However, subway station hall scenes contain large areas of transparent glass doors. Because transparent glass reflects little of the laser light, optical sensors such as lidar have difficulty effectively detecting these doors, so the robot cannot build map information for such objects; this degrades the positioning accuracy of the robot and may even cause accidents such as the robot colliding with the object. In addition, current subway stations commonly have multiple floors, which challenges existing lidar SLAM algorithms: it is difficult in the hall to accurately distinguish the position of the stairs and the specific floor where the robot is located, causing confusion in map construction and loading and thus large errors in the positioning and navigation of the robot.
Therefore, there is an urgent need to provide a method, a system, a device and a storage medium for synchronous positioning and mapping of subway station halls, so as to solve the above technical problems.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, a system, a device and a storage medium for synchronous positioning and mapping in a subway station hall, which are used for solving the technical problems that the synchronous positioning and mapping method in the prior art cannot adapt to the scene of the subway station hall including glass doors and floors, and the positioning and mapping accuracy in the subway station hall is low.
In one aspect, the application provides a synchronous positioning and mapping method for subway station halls, which comprises the following steps:
acquiring an engineering project model and a robot model of a subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a priori three-dimensional point cloud model;
controlling the robot model to acquire a multi-frame local three-dimensional point cloud model of the subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and realizing the positioning and map construction of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model.
In some possible implementations, the multi-frame local three-dimensional point cloud model includes a first frame local three-dimensional point cloud model, and the multi-frame fused three-dimensional point cloud model includes a first frame fused three-dimensional point cloud model; the determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model comprises the following steps:
aligning the first frame local three-dimensional point cloud model and the prior three-dimensional point cloud model to obtain a first frame local three-dimensional aligned point cloud model and a prior three-dimensional aligned point cloud model;
and supplementing the first frame local three-dimensional aligned point cloud model based on the prior aligned point cloud model to obtain the first frame fusion three-dimensional point cloud model.
In some possible implementations, the multi-frame fused three-dimensional point cloud model further includes a plurality of historical frame fused three-dimensional point cloud models before the first frame fused three-dimensional point cloud model and a second frame fused three-dimensional point cloud model after the first frame fused three-dimensional point cloud model; the positioning and map construction of the robot model in the subway station hall are realized based on the multi-frame fusion three-dimensional point cloud model, and the method comprises the following steps:
determining a first edge line characteristic point and a first plane characteristic point of the first frame fusion three-dimensional point cloud model;
determining a second edge line characteristic point and a second plane characteristic point of the second frame fusion three-dimensional point cloud model;
determining pose information of the robot model in a second frame fusion three-dimensional point cloud model based on the first edge line feature points, the first plane feature points, the second edge line feature points and the second plane feature points;
updating the pose information based on the historical line characteristic point set and the historical surface characteristic point set of the three-dimensional point cloud model fused by the historical frames to obtain pose updating information;
and updating the map constructed based on the first frame fusion three-dimensional point cloud model based on the pose updating information.
In some possible implementations, the first frame fusion three-dimensional point cloud model includes a current feature point and a plurality of peripheral detection points; the determining the first edge line feature point and the first plane feature point of the first frame fusion three-dimensional point cloud model comprises the following steps:
determining a first curvature of the current feature point based on a line feature point discrimination model, the current feature point and the plurality of peripheral detection points;
when the first curvature is larger than a first curvature threshold value, the current feature point is the first edge line feature point;
determining a second curvature of the current feature point based on a face feature point discrimination model, the current feature point and the plurality of peripheral detection points;
and when the second curvature is smaller than a second curvature threshold value, the current characteristic point is the first plane characteristic point.
In some possible implementations, the line feature point discrimination model is:

$$c_1=\frac{1}{|S|\cdot\|p_i\|}\sqrt{\Bigl(\sum_{j\in S,\,j\neq i}(p_i-p_j)\Bigr)^{\mathsf T}\Bigl(\sum_{j\in S,\,j\neq i}(p_i-p_j)\Bigr)}$$

and the surface feature point discrimination model is:

$$c_2=\frac{1}{|S'|\cdot\|p_i\|}\sqrt{\Bigl(\sum_{j\in S',\,j\neq i}(p_i-p_j)\Bigr)^{\mathsf T}\Bigl(\sum_{j\in S',\,j\neq i}(p_i-p_j)\Bigr)}$$

where $c_1$ is the first curvature; $p_i$ is the current feature point; $p_j$ is the $j$-th peripheral detection point; $S$ and $S'$ are the sets of peripheral detection points used by the line and surface models respectively; $c_2$ is the second curvature; $(\cdot)^{\mathsf T}$ is the transpose symbol; and $\|\cdot\|$ is the modulus operator.
In some possible implementations, before the controlling the robot model to move in the subway station hall, obtaining the three-dimensional point cloud model of the subway station hall, the method further includes:
planning a walking path of the robot model based on the engineering project model to obtain a planned path;
the robot model is controlled to move in the subway station hall, and the three-dimensional point cloud model of the subway station hall is obtained specifically as follows:
and controlling the robot model to move along the planned path in the subway station hall, and obtaining a three-dimensional point cloud model of the subway station hall.
In some possible implementations, the feature objects include glass doors and stairs.
On the other hand, the application also provides a synchronous positioning and mapping system of the subway station hall, which comprises the following steps:
the model acquisition unit is used for acquiring an engineering project model and a robot model of the subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
the prior three-dimensional point cloud model determining unit is used for controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a prior three-dimensional point cloud model;
the three-dimensional point cloud model fusion unit is used for controlling the robot model to acquire a multi-frame local three-dimensional point cloud model of the subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and the positioning and map building unit is used for realizing the positioning and map building of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model.
In another aspect, the present application also provides a robot navigation device comprising a memory and a processor, wherein,
the memory is used for storing programs;
the processor is coupled to the memory, and is configured to execute the program stored in the memory, so as to implement the steps in the method for synchronous positioning and mapping of a subway station hall in any one of the possible implementation manners.
In another aspect, the present application further provides a computer readable storage medium, configured to store a computer readable program or instructions, where the program or instructions, when executed by a processor, implement the steps in the method for synchronous positioning and mapping of a subway station hall in any one of the possible implementation manners described above.
The beneficial effects of this implementation are as follows. In the synchronous positioning and mapping method for a subway station hall provided by the application, an engineering project model of the subway station hall is obtained. Because the engineering project model contains the position information and semantic descriptions of the characteristic objects in the subway station hall, prior information can be added to the three-dimensional point cloud model based on the engineering project model to obtain the prior three-dimensional point cloud model, which provides prior information for positioning and mapping in the subsequent real scene. This improves the positioning and mapping precision in the subway station hall, improves the accuracy with which the robot model identifies the characteristic objects, and reduces potential safety hazards such as collisions during autonomous inspection by the robot model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly explain the drawings needed in the description of the embodiments, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an embodiment of a method for synchronous positioning and mapping of subway station halls provided by the application;
FIG. 2 is a flowchart illustrating the step S103 of FIG. 1 according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating the step S104 of FIG. 1 according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating the step S301 of FIG. 3 according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a synchronous positioning and mapping system for subway station halls provided by the present application;
fig. 6 is a schematic structural diagram of an embodiment of a robotic navigation device provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be appreciated that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor systems and/or microcontroller systems.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
If any, the terms "first," "second," etc. are used merely for distinguishing between technical features, they are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated or implicitly indicating the precedence of the technical features indicated.
The embodiment of the application provides a synchronous positioning and mapping method, a system, equipment and a storage medium for a subway station hall, which are respectively described below.
Fig. 1 is a schematic flow chart of an embodiment of a method for synchronous positioning and mapping of a subway station hall, where the method for synchronous positioning and mapping of a subway station hall shown in fig. 1 includes:
s101, acquiring an engineering project model and a robot model of a subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
s102, controlling the robot model to move in a subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a priori three-dimensional point cloud model;
s103, controlling a robot model to obtain a multi-frame local three-dimensional point cloud model of a subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
s104, positioning and map construction of the robot model in the subway station hall are achieved based on the multi-frame fusion three-dimensional point cloud model.
Compared with the prior art, the synchronous positioning and mapping method for a subway station hall provided by the embodiment of the application obtains an engineering project model of the subway station hall, adds prior information to the three-dimensional point cloud model based on the engineering project model to obtain the prior three-dimensional point cloud model, and uses the prior three-dimensional point cloud model to enhance the lidar SLAM algorithm. This improves the positioning and mapping precision of the robot model and the accuracy of identifying characteristic objects, greatly reduces potential safety hazards such as collisions during autonomous inspection, and provides accurate and reliable position information for the autonomous inspection of the robot.
The feature objects in step S101 include, but are not limited to, glass doors and stairs; the semantic description of a glass door includes, but is not limited to, the reflectivity and the size of the glass, and the semantic description of a staircase includes, but is not limited to, the floor number and the stair length.
In a specific embodiment of the application, the engineering project model is a building information model (Building Information Modeling, BIM). A BIM is a building model established from the various related information data of a construction engineering project. Through digital information simulation it models the real information of the building, and it has the five characteristics of visualization, coordination, simulation, optimization and drawing generation. Through realistic simulation and building visualization, the BIM makes the specific construction conditions of the station hall available.
Because a BIM is already built when the subway station hall is constructed, setting the engineering project model to the BIM means the model of the subway station hall does not need to be rebuilt, which improves the speed of synchronous positioning and mapping of the subway station hall.
It should be noted that a BIM is usually drawn with the Revit three-dimensional modeling software, but Revit does not support a robot model, so the robot model needs to be built in the Gazebo three-dimensional physics engine. Because the execution of steps S101-S104 uses the BIM and the robot model at the same time, the two need to be made compatible within one piece of software.
Specifically: the BIM in the Revit software is imported into Gazebo to be converted into a file format, firstly, a Lumino plug-in is required to be used for converting a model file in a rvt format into a model file in a dae format and a model material sticker, and then the model file and the model material sticker are loaded into a Gazebo engine through scripts. The robot model is constructed through XML format model files such as URDF and XACRO, and various simulation sensors such as radar, cameras and the like can be added in the robot model to simulate the robot model in the real world.
In some embodiments of the present application, the multi-frame local three-dimensional point cloud model comprises a first frame local three-dimensional point cloud model, and the multi-frame fused three-dimensional point cloud model comprises a first frame fused three-dimensional point cloud model; then, as shown in fig. 2, step S103 includes:
s201, aligning a first frame local three-dimensional point cloud model and a priori three-dimensional point cloud model to obtain a first frame local three-dimensional pair Ji Dian cloud model and a priori three-dimensional pair Ji Dian cloud model;
s202, based on the prior pair Ji Dian cloud model, the first frame local three-dimensional pair Ji Dian cloud model is supplemented, and a first frame fusion three-dimensional point cloud model is obtained.
It should be understood that the first frame local three-dimensional point cloud model and the prior three-dimensional point cloud model may be aligned in several ways: the first frame local three-dimensional point cloud model may be kept fixed while the prior three-dimensional point cloud model is rotated, translated and otherwise transformed to align with it; the prior three-dimensional point cloud model may be kept fixed while the first frame local three-dimensional point cloud model is rotated and translated to align with it; or a target position may be constructed and the first frame local three-dimensional point cloud model and the prior three-dimensional point cloud model each translated and rotated to align with that target position.
A specific method for aligning the first frame local three-dimensional point cloud model and the prior three-dimensional point cloud model in step S201 is as follows:
with the goal of minimizing the error residual, the translation matrix and the rotation matrix are adjusted to obtain a target translation matrix and a target rotation matrix, which align the first frame local three-dimensional point cloud model and the prior three-dimensional point cloud model.
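The residual-minimizing adjustment of the rotation and translation matrices can be sketched with a closed-form Kabsch step over matched point pairs. This is a hypothetical illustration: the patent does not specify the solver, and a full ICP-style alignment would alternate this step with nearest-neighbour matching.

```python
import numpy as np

def align(source: np.ndarray, target: np.ndarray):
    """One Kabsch step: find rotation R and translation t minimizing the
    residual ||R @ source_i + t - target_i|| over matched point pairs.
    Assumes correspondences are already known."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Applied to the aligned clouds of step S201, `R` and `t` play the role of the target rotation matrix and target translation matrix.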
The supplementing operation in step S202 is:

$$P_1^{\text{fuse}} = P_1^{\text{align}} \cup P_{\text{prior}}^{\text{align}}$$

where $P_1^{\text{fuse}}$ is the first frame fusion three-dimensional point cloud model; $P_1^{\text{align}}$ is the first frame local three-dimensional aligned point cloud model; $P_{\text{prior}}^{\text{align}}$ is the prior aligned point cloud model; and $\cup$ is the union operation.
In the embodiment of the application, the first frame fusion three-dimensional point cloud model is the union of the point clouds of the first frame local three-dimensional aligned point cloud model and the prior aligned point cloud model. Therefore, when the robot model is at the edge of a low-reflectivity object such as a glass door of the subway station hall, or at a staircase, where the first frame local three-dimensional aligned point cloud model has large point cloud gaps, the point cloud data of the first frame fusion three-dimensional point cloud model remain more complete.
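A minimal sketch of the union-style supplementation, assuming a simple voxel grid to merge duplicate points where the local and prior clouds overlap (the voxel size is an illustrative parameter, not from the patent):

```python
import numpy as np

def fuse_frames(local_pts: np.ndarray, prior_pts: np.ndarray,
                voxel: float = 0.05) -> np.ndarray:
    """Union of an aligned local-frame cloud and the prior cloud.
    Points falling in the same voxel are merged so overlapping regions
    are not double-counted; prior points fill holes (e.g. glass doors)
    that the lidar missed."""
    merged = np.vstack([local_pts, prior_pts])
    keys = np.floor(merged / voxel).astype(np.int64)   # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]
```

Each incoming frame would be fused against the prior cloud this way before feature extraction, matching the note below about fusing every local frame.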
It should be noted that: the multi-frame local three-dimensional point cloud model can further comprise other frame local three-dimensional point cloud models except the first frame local three-dimensional point cloud model, and each frame local three-dimensional point cloud model needs to be fused with the prior three-dimensional point cloud model so as to enhance the obtained point cloud data of the fused three-dimensional point cloud model.
In some embodiments of the present application, the multi-frame fused three-dimensional point cloud model further comprises a plurality of historical frame fused three-dimensional point cloud models before the first frame fused three-dimensional point cloud model and a second frame fused three-dimensional point cloud model after the first frame fused three-dimensional point cloud model; then, as shown in fig. 3, step S104 includes:
s301, determining a first edge line characteristic point and a first plane characteristic point of a first frame fusion three-dimensional point cloud model;
s302, determining a second edge line characteristic point and a second plane characteristic point of a second frame fusion three-dimensional point cloud model;
s303, determining pose information of a robot model in the second frame fusion three-dimensional point cloud model based on the first edge line feature points, the first plane feature points, the second edge line feature points and the second plane feature points;
s304, updating pose information based on a history line characteristic point set and a history surface characteristic point set of a three-dimensional point cloud model fused by a plurality of history frames to obtain pose updating information;
and S305, updating the map constructed based on the first frame fusion three-dimensional point cloud model based on the pose updating information.
The step S305 specifically includes: and determining a plurality of characteristic points under the current pose based on the pose updating information, expanding the map constructed based on the first frame fusion three-dimensional point cloud model based on the plurality of characteristic points, and updating the map constructed based on the first frame fusion three-dimensional point cloud model.
Because the pose updating information is obtained by updating the pose information based on the historical line feature point set and the historical surface feature point set of the three-dimensional point cloud models fused over several historical frames, the accumulated spatial error in the pose information is eliminated and the accuracy of the pose updating information is improved. More accurate feature points can therefore be obtained from the accurate pose updating information, and an accurate map can be obtained.
In some embodiments of the present application, if the first frame fusion three-dimensional point cloud model includes a current feature point and a plurality of peripheral detection points, as shown in fig. 4, step S301 includes:
s401, determining a first curvature of a current feature point based on a line feature point discrimination model, the current feature point and a plurality of peripheral detection points;
s402, when the first curvature is larger than a first curvature threshold, the current feature point is a first edge line feature point;
s403, determining a second curvature of the current feature point based on the surface feature point discrimination model, the current feature point and a plurality of peripheral detection points;
s404, when the second curvature is smaller than a second curvature threshold value, the current characteristic point is a first plane characteristic point.
It should be noted that: the process of determining the second edge line feature point and the second plane feature point in step S302 is the same as the process of determining the first edge line feature point and the first plane feature point, and reference may be made to steps S401 to S404, which are not described herein.
It should be understood that: the first curvature threshold value and the second curvature threshold value may be set or adjusted according to actual application scenarios and experience, and are not specifically limited herein.
Specifically, the line feature point discrimination model is:

$$c_1=\frac{1}{|S|\cdot\|p_i\|}\sqrt{\Bigl(\sum_{j\in S,\,j\neq i}(p_i-p_j)\Bigr)^{\mathsf T}\Bigl(\sum_{j\in S,\,j\neq i}(p_i-p_j)\Bigr)}$$

and the surface feature point discrimination model is:

$$c_2=\frac{1}{|S'|\cdot\|p_i\|}\sqrt{\Bigl(\sum_{j\in S',\,j\neq i}(p_i-p_j)\Bigr)^{\mathsf T}\Bigl(\sum_{j\in S',\,j\neq i}(p_i-p_j)\Bigr)}$$

where $c_1$ is the first curvature; $p_i$ is the current feature point; $p_j$ is the $j$-th peripheral detection point; $S$ and $S'$ are the sets of peripheral detection points used by the line and surface models respectively; $c_2$ is the second curvature; $(\cdot)^{\mathsf T}$ is the transpose symbol; and $\|\cdot\|$ is the modulus operator.
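A sketch of the curvature-based discrimination in steps S401-S404, in the spirit of LOAM-style smoothness; the neighbourhood size and both thresholds are assumed values, not taken from the patent:

```python
import numpy as np

def classify_point(points: np.ndarray, i: int, k: int = 2,
                   edge_thr: float = 0.3, plane_thr: float = 0.1):
    """Classify scan point i as an edge line feature point, a plane
    feature point, or neither, using the k preceding and k following
    peripheral detection points on the same scan line.
    Assumes k <= i < len(points) - k."""
    p = points[i]
    nbrs = np.vstack([points[i - k:i], points[i + 1:i + k + 1]])
    diff = (p - nbrs).sum(axis=0)                 # sum of (p_i - p_j)
    c = np.linalg.norm(diff) / (len(nbrs) * np.linalg.norm(p))
    if c > edge_thr:
        return "edge", c      # high curvature: edge line feature point
    if c < plane_thr:
        return "plane", c     # low curvature: plane feature point
    return "none", c
```

On a straight scan segment the neighbour differences cancel and the curvature is near zero (plane point); at a sharp corner they reinforce and the curvature exceeds the edge threshold.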
In a specific embodiment of the present application, the pose information includes position information and pose information, the position information being:
the gesture information is:
in the method, in the process of the application,is position information; />Is gesture information; />Fusing historical position information in the three-dimensional point cloud model for the first frame; />Fusing historical posture information in the three-dimensional point cloud model for the first frame; />The three-dimensional point cloud model is fused for the first frame to the three-dimensional point cloud model is fused for the second frame; />And fusing the position rotation matrix of the three-dimensional point cloud model for the first frame to the second frame.
Specifically:

(rotation-matrix formula; rendered as an image in the original)

where I is the identity matrix; φ is the Euler angle of the change in the robot model's attitude; and φ^ is the antisymmetric matrix of φ.
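The legend above lists exactly the ingredients of the Rodrigues rotation formula (the identity matrix, a rotation angle, and its antisymmetric matrix), so the following is a sketch under that assumption:

```python
import numpy as np

def skew(phi):
    """Antisymmetric (skew-symmetric) matrix of a 3-vector phi."""
    x, y, z = phi
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def rotation_from_axis_angle(phi):
    """Rodrigues formula: R = I + sin(|phi|) K + (1 - cos(|phi|)) K^2,
    where K is the antisymmetric matrix of the unit rotation axis."""
    phi = np.asarray(phi, dtype=float)
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)  # no attitude change
    K = skew(phi / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# A 90-degree rotation about z maps the x-axis onto the y-axis.
R = rotation_from_axis_angle([0.0, 0.0, np.pi / 2])
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))
```

The resulting matrix is orthogonal with determinant 1, as a rotation matrix must be, which makes it a valid attitude update for the pose formulas above.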
To avoid safety incidents such as collisions while the robot model moves in the subway station hall during step S102, in some embodiments of the present application, the method further includes, before step S102:
planning a walking path of the robot model based on the engineering project model to obtain a planned path;
Step S102 is then specifically:
controlling the robot model to move along the planned path in the subway station hall to obtain the three-dimensional point cloud model of the subway station hall.
In the embodiment of the application, the planned path is obtained by planning the walking path of the robot model based on the engineering project model. When the three-dimensional point cloud model of the subway station hall is obtained, the robot model is controlled to move along the planned path, which avoids collisions and other safety problems and improves the safety of the robot model moving in the subway station hall. Furthermore, the three-dimensional point cloud model can be constructed efficiently from the engineering project model, improving the efficiency of synchronous positioning and mapping of the subway station hall.
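The patent does not specify the planning algorithm, so the sketch below uses breadth-first search over an occupancy grid as a stand-in for any grid planner derived from the engineering project model; the grid, start, and goal are illustrative assumptions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []          # reconstruct by walking predecessors back
            node = goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Toy station-hall grid with one wall; the planner routes around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
print(path)  # routes around the wall from (0, 0) to (2, 0)
```

In practice the occupancy grid would be rasterized from the engineering project model, with cells overlapping feature objects such as glass doors marked as obstacles.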
In order to better implement the synchronous positioning and mapping method of the subway station hall in the embodiment of the application, correspondingly, the embodiment of the application also provides a synchronous positioning and mapping system of the subway station hall, as shown in fig. 5, the synchronous positioning and mapping system 500 of the subway station hall comprises:
a model obtaining unit 501, configured to obtain an engineering project model and a robot model of a subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
the prior three-dimensional point cloud model determining unit 502 is used for controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a prior three-dimensional point cloud model;
the three-dimensional point cloud model fusion unit 503 is configured to control the robot model to obtain a multi-frame local three-dimensional point cloud model of the subway station hall, and determine a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and the positioning and map building unit 504 is used for realizing the positioning and map building of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model.
The synchronous positioning and mapping system 500 of the subway station hall provided in the foregoing embodiment may implement the technical solutions described in the foregoing embodiment of the synchronous positioning and mapping method of the subway station hall, and the specific implementation principles of the foregoing modules or units may refer to the corresponding contents in the foregoing embodiment of the synchronous positioning and mapping method of the subway station hall, which are not described herein again.
As shown in fig. 6, the present application also provides a robot navigation device 600 accordingly. The robotic navigation device 600 includes a processor 601, a memory 602, and a display 603. Fig. 6 shows only some of the components of the robotic navigation device 600, but it should be understood that not all of the illustrated components need be implemented, and that more or fewer components may be implemented instead.
The processor 601 may in some embodiments be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 602, such as the synchronized locating and mapping method of the subway station hall of the present application.
In some embodiments, the processor 601 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processor 601 may be local or remote. In some embodiments, the processor 601 may be implemented on a cloud platform. In an embodiment, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
The memory 602 may be an internal storage unit of the robot navigation device 600 in some embodiments, such as a hard disk or memory of the robot navigation device 600. The memory 602 may also be an external storage device of the robot navigation device 600 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the robot navigation device 600.
Further, the memory 602 may also include both internal storage units and external storage devices of the robotic navigation device 600. The memory 602 is used for storing application software and various types of data for installing the robot navigation device 600.
The display 603 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like in some embodiments. The display 603 is used for displaying information at the robotic navigation device 600 and for displaying a visualized user interface. The components 601-603 of the robotic navigation device 600 communicate with each other via a system bus.
In some embodiments of the present application, when the processor 601 executes the synchronized positioning and mapping program of the subway station hall in the memory 602, the following steps may be implemented:
acquiring an engineering project model and a robot model of a subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a priori three-dimensional point cloud model;
controlling a robot model to obtain a multi-frame local three-dimensional point cloud model of a subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and positioning and map construction of the robot model in the subway station hall are realized based on multi-frame fusion three-dimensional point cloud models.
It should be understood that: the processor 601 may perform other functions in addition to the above functions when executing the synchronized positioning and mapping procedure of the subway station hall in the memory 602, and in particular, reference may be made to the description of the related method embodiments.
Correspondingly, the embodiment of the application also provides a computer readable storage medium, and the computer readable storage medium is used for storing a computer readable program or instruction, and when the program or instruction is executed by a processor, the steps or functions in the synchronous positioning and mapping method of the subway station hall provided by the embodiments of the method can be realized.
Those skilled in the art will appreciate that all or part of the flow of the methods of the embodiments described above may be accomplished by a computer program, stored in a computer readable storage medium, that instructs related hardware (e.g., a processor, a controller, etc.). The computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The method, system, device and storage medium for synchronous positioning and mapping of a subway station hall provided by the application have been described in detail above. Specific examples are used herein to explain the principle and implementation of the application, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope in light of the ideas of the application; in summary, this description should not be construed as limiting the application.

Claims (10)

1. The synchronous positioning and mapping method for the subway station hall is characterized by comprising the following steps of:
acquiring an engineering project model and a robot model of a subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a priori three-dimensional point cloud model;
controlling the robot model to acquire a multi-frame local three-dimensional point cloud model of the subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and realizing the positioning and map construction of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model.
2. The method for synchronously positioning and mapping a subway station hall according to claim 1, wherein the multi-frame local three-dimensional point cloud model comprises a first-frame local three-dimensional point cloud model, and the multi-frame fusion three-dimensional point cloud model comprises a first-frame fusion three-dimensional point cloud model; the determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model comprises the following steps:
aligning the first-frame local three-dimensional point cloud model with the prior three-dimensional point cloud model to obtain a first-frame local aligned three-dimensional point cloud model and a prior aligned three-dimensional point cloud model;
and supplementing the first-frame local aligned three-dimensional point cloud model based on the prior aligned three-dimensional point cloud model to obtain the first-frame fusion three-dimensional point cloud model.
3. The synchronized locating and mapping method of subway station halls of claim 2, wherein the multi-frame fusion three-dimensional point cloud model further comprises a plurality of history frame fusion three-dimensional point cloud models before the first frame fusion three-dimensional point cloud model and a second frame fusion three-dimensional point cloud model after the first frame fusion three-dimensional point cloud model; the positioning and map construction of the robot model in the subway station hall are realized based on the multi-frame fusion three-dimensional point cloud model, and the method comprises the following steps:
determining a first edge line characteristic point and a first plane characteristic point of the first frame fusion three-dimensional point cloud model;
determining a second edge line characteristic point and a second plane characteristic point of the second frame fusion three-dimensional point cloud model;
determining pose information of the robot model in a second frame fusion three-dimensional point cloud model based on the first edge line feature points, the first plane feature points, the second edge line feature points and the second plane feature points;
updating the pose information based on the historical line characteristic point set and the historical surface characteristic point set of the three-dimensional point cloud model fused by the historical frames to obtain pose updating information;
and updating the map constructed based on the first frame fusion three-dimensional point cloud model based on the pose updating information.
4. The synchronized locating and mapping method of a subway station hall of claim 3, wherein the first frame fusion three-dimensional point cloud model includes a current feature point and a plurality of peripheral detection points; the determining the first edge line feature point and the first plane feature point of the first frame fusion three-dimensional point cloud model comprises the following steps:
determining a first curvature of the current feature point based on a line feature point discrimination model, the current feature point and the plurality of peripheral detection points;
when the first curvature is larger than a first curvature threshold value, the current feature point is the first edge line feature point;
determining a second curvature of the current feature point based on a face feature point discrimination model, the current feature point and the plurality of peripheral detection points;
and when the second curvature is smaller than a second curvature threshold value, the current characteristic point is the first plane characteristic point.
5. The method for synchronously positioning and mapping a subway station hall according to claim 4, wherein the line feature point discrimination model is:

(line feature point discrimination model; the formula appears as an image in the original)

and the surface feature point discrimination model is:

(surface feature point discrimination model; the formula appears as an image in the original)

where c1 is the first curvature; p is the current feature point; p5 is the fifth peripheral detection point; p1 is the first peripheral detection point; c2 is the second curvature; p3 is the third peripheral detection point; the superscript T denotes the transpose; and |·| is the modulus operator.
6. The synchronized locating and mapping method of a subway station hall of claim 1, further comprising, prior to said controlling the movement of the robot model in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall:
planning a walking path of the robot model based on the engineering project model to obtain a planned path;
the controlling the robot model to move in the subway station hall to obtain the three-dimensional point cloud model of the subway station hall is specifically:
controlling the robot model to move along the planned path in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall.
7. The synchronized locating and mapping method of a subway station hall of any one of claims 1-6, wherein the feature objects include glass doors and stairways.
8. The synchronous positioning and mapping system of subway station hall is characterized by comprising:
the model acquisition unit is used for acquiring an engineering project model and a robot model of the subway station hall; the engineering project model comprises characteristic objects of the subway station hall, and position information and semantic description of the characteristic objects;
the prior three-dimensional point cloud model determining unit is used for controlling the robot model to move in the subway station hall to obtain a three-dimensional point cloud model of the subway station hall, and fusing the position information and semantic description of the characteristic object with the three-dimensional point cloud model to obtain a prior three-dimensional point cloud model;
the three-dimensional point cloud model fusion unit is used for controlling the robot model to acquire a multi-frame local three-dimensional point cloud model of the subway station hall, and determining a multi-frame fusion three-dimensional point cloud model based on the multi-frame local three-dimensional point cloud model and the prior three-dimensional point cloud model;
and the positioning and map building unit is used for realizing the positioning and map building of the robot model in the subway station hall based on the multi-frame fusion three-dimensional point cloud model.
9. A robot navigation device comprising a memory and a processor, wherein,
the memory is used for storing programs;
the processor is coupled to the memory for executing the program stored in the memory to implement the steps in the synchronized locating and mapping method of the subway station hall of any one of the preceding claims 1 to 7.
10. A computer readable storage medium storing a computer readable program or instructions which when executed by a processor enable the steps in the synchronized locating and mapping method of a subway station hall according to any one of the preceding claims 1 to 7.
CN202311465173.6A 2023-11-07 2023-11-07 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall Pending CN117213469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311465173.6A CN117213469A (en) 2023-11-07 2023-11-07 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311465173.6A CN117213469A (en) 2023-11-07 2023-11-07 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall

Publications (1)

Publication Number Publication Date
CN117213469A true CN117213469A (en) 2023-12-12

Family

ID=89041216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311465173.6A Pending CN117213469A (en) 2023-11-07 2023-11-07 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall

Country Status (1)

Country Link
CN (1) CN117213469A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108759840A (en) * 2018-05-25 2018-11-06 北京建筑大学 A kind of indoor and outdoor integrated three-dimensional navigation path planning method
CN111006676A (en) * 2019-11-14 2020-04-14 广东博智林机器人有限公司 Map construction method, device and system
CN113341983A (en) * 2021-06-15 2021-09-03 上海有个机器人有限公司 Escalator autonomous avoidance early warning method for robot
CN113360987A (en) * 2021-06-16 2021-09-07 北京建筑大学 Spatial topological relation data organization model for indoor navigation and construction method
CN113888691A (en) * 2020-07-03 2022-01-04 上海大界机器人科技有限公司 Method, device and storage medium for building scene semantic map construction
CN114041170A (en) * 2019-06-06 2022-02-11 高通技术公司 Model retrieval for objects in an image using field descriptors
CN114683290A (en) * 2022-05-31 2022-07-01 深圳鹏行智能研究有限公司 Method and device for optimizing pose of foot robot and storage medium
CN115265519A (en) * 2022-07-11 2022-11-01 北京斯年智驾科技有限公司 Online point cloud map construction method and device
CN115294287A (en) * 2022-08-05 2022-11-04 太原理工大学 Laser SLAM mapping method for greenhouse inspection robot
CN115461262A (en) * 2020-06-03 2022-12-09 伟摩有限责任公司 Autonomous driving using surface element maps
CN115866238A (en) * 2021-09-24 2023-03-28 苹果公司 Feedback using coverage for object scanning
CN115979243A (en) * 2022-12-05 2023-04-18 北京航空航天大学 Mobile robot navigation map conversion method and system based on BIM information
CN116878504A (en) * 2023-09-07 2023-10-13 兰笺(苏州)科技有限公司 Accurate positioning method for building outer wall operation unmanned aerial vehicle based on multi-sensor fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Airui Technology (IRUIKEJI.COM): "Subway Station Hall Inspection Robot", pages 1 - 2, Retrieved from the Internet <URL:http://www.iruikeji.com/a/product/a/product/tzjc/2017/0802/9.html> *

Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
Diakite et al. Automatic geo-referencing of BIM in GIS environments using building footprints
CN111442722B (en) Positioning method, positioning device, storage medium and electronic equipment
CN109345596B (en) Multi-sensor calibration method, device, computer equipment, medium and vehicle
CN112650255B (en) Robot positioning navigation method based on visual and laser radar information fusion
CN108827249B (en) Map construction method and device
CN109141446A (en) For obtaining the method, apparatus, equipment and computer readable storage medium of map
CN112146682B (en) Sensor calibration method and device for intelligent automobile, electronic equipment and medium
CN109186596A (en) IMU measurement data generation method, system, computer installation and readable storage medium storing program for executing
CN111744199B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN115578433A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114818065A (en) Three-dimensional roadway model building method and device, electronic equipment and storage medium
CN111724485B (en) Method, device, electronic equipment and storage medium for realizing virtual-real fusion
CN117213469A (en) Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall
WO2024036984A1 (en) Target localization method and related system, and storage medium
CN116105712A (en) Road map generation method, reinjection method, computer device and medium
KR101733681B1 (en) Mobile device, system and method for provinding location information using the same
CN116805047A (en) Uncertainty expression method and device for multi-sensor fusion positioning and electronic equipment
CN114396959B (en) Lane matching positioning method, device, equipment and medium based on high-precision map
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN111105480B (en) Building semantic map building method, medium, terminal and device
CN114571460A (en) Robot control method, device and storage medium
CN114299192A (en) Method, device, equipment and medium for positioning and mapping
Kawecki et al. Ar tags based absolute positioning system
CN113513983A (en) Precision detection method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination