CN115655261B - Map generation method, map generation device, robot, and storage medium

Info

Publication number: CN115655261B
Application number: CN202211575911.8A
Authority: CN (China)
Prior art keywords: nodes, map, topological, target area, topology
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115655261A
Inventors: 刘洪剑, 肖志光, 古明辉
Applicant and current assignee: Shenzhen Pengxing Intelligent Research Co Ltd

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of the present application disclose a map generation method and device, a robot, and a storage medium. The generated map can support the robot in performing various tasks such as positioning, navigation, and path planning without switching back and forth between different maps, effectively improving the reusability and application efficiency of the map. The method of the embodiments of the present application comprises the following steps: acquiring an initial map to be processed, wherein the initial map comprises environment semantic information corresponding to a target environment; determining a target area in the initial map according to the environment semantic information; generating a plurality of topological nodes in the target area; obtaining topological data corresponding to the target area according to the connection relation among the plurality of topological nodes; and generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, wherein the semantic topological map comprises the environment semantic information.

Description

Map generation method, map generation device, robot, and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a map generation method, apparatus, robot, and storage medium.
Background
A robot can perform positioning, navigation, and path planning tasks by means of maps, so map generation is crucial for robot applications. A robot map can take various forms, such as a grid map or a semantic map.
At present, in most indoor scenes a robot can perform path planning and navigation based on a grid map; in outdoor scenes, however, a grid map occupies a large amount of memory and its path planning efficiency is low. A semantic map preserves objects with semantic features, such as traffic lights and billboards, and can be used for positioning, but it cannot be used directly for path planning, so different maps have to be switched for different tasks. Therefore, how to generate a robot map and improve the reusability and application efficiency of the map has become a technical problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a map generation method and device, a robot, and a storage medium. The generated map can be used by a robot to perform various tasks such as positioning, navigation, and path planning, so that the reusability and application efficiency of the map are effectively improved.
The present application provides, from a first aspect, a map generation method, including:
acquiring an initial map to be processed, wherein the initial map comprises environment semantic information corresponding to a target environment;
determining a target area in the initial map according to the environment semantic information;
generating a plurality of topological nodes in the target area;
obtaining topological data corresponding to the target area according to the connection relation among the plurality of topological nodes;
generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, wherein the semantic topological map comprises the environment semantic information.
Optionally, the map generation method further includes:
acquiring environmental information collected during movement, and updating the sparse map according to the environmental information to obtain an updated sparse map;
and taking the updated sparse map as the initial map, returning to the step of determining a target area in the initial map according to the environment semantic information, and generating a semantic topological map corresponding to the updated sparse map.
The present application provides, from a second aspect, a map generating apparatus, the apparatus comprising:
the map acquisition unit is used for acquiring an initial map to be processed, and the initial map comprises environment semantic information corresponding to a target environment;
the area determining unit is used for determining a target area in the initial map according to the environment semantic information;
the topological data generating unit is used for generating a plurality of topological nodes in the target area and obtaining topological data corresponding to the target area according to the connection relation among the plurality of topological nodes;
and the map generation unit is used for generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, and the semantic topological map comprises the environment semantic information.
The present application provides in a third aspect a robot comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the method of any of the first aspect and the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having a program stored thereon, which when executed on a computer performs the method of any one of the first aspect and the first aspect.
According to the method, an initial map comprising environment semantic information corresponding to a target environment is obtained, and a target area in the initial map is determined according to the environment semantic information. A plurality of topological nodes are generated in the target area, topological data are obtained from the connection relations among the generated topological nodes, and a semantic topological map corresponding to the target environment is generated by combining the topological data with the initial map. Because the semantic topological map contains both the environment semantic information corresponding to the target environment and the topological data, it can support the robot in executing various tasks such as positioning, navigation, and path planning without switching between different maps, effectively improving the reusability and application efficiency of the map.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of a robot provided by the present application;
fig. 2 is a schematic mechanical structure diagram of a robot provided by the present application;
FIG. 3 is a flowchart illustrating a map generation method according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of an initial map provided by the present application;
FIG. 5 is a schematic diagram of a map containing topology data provided herein;
FIG. 6 is a schematic diagram of a semantic topological map provided herein;
FIG. 7 is a schematic illustration of a targeted area map provided herein;
FIG. 8 is a schematic diagram of a map including a movement trajectory provided by the present application;
FIG. 9 is a schematic diagram of a map including an extended line provided by the present application;
FIG. 10 is a schematic diagram of a map including line segments to be sampled according to the present application;
FIG. 11 is a schematic illustration of a map provided herein that includes dense nodes;
FIG. 12 is a schematic diagram of a map including a plurality of initial topological links as provided herein;
FIG. 13 is a schematic diagram of an initial topological link process provided herein;
FIG. 14 is another schematic of a map including topology data provided herein;
FIG. 15 is a schematic structural diagram of an embodiment of a map generation apparatus provided in the present application;
fig. 16 is a schematic structural diagram of another embodiment of a map generation apparatus provided in the present application;
fig. 17 is a schematic structural diagram of an embodiment of a robot provided by the present application.
Detailed Description
In the embodiments of the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are used only for explaining relative positional relationships between the respective members or components, and do not particularly limit specific mounting orientations of the respective members or components.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," and "connected" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or a unitary construction; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intervening medium, or internal communication between two devices, elements, or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as the case may be.
In addition, the structures, proportions, sizes, and the like illustrated in the accompanying drawings and described in the present application are intended to be illustrative rather than restrictive. Since those skilled in the art will understand the present application upon reading it, any modification of the structures, change of the proportions, or adjustment of the sizes that does not affect the efficacy or the purpose of the present application shall still fall within the scope of the present disclosure.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The present application can be applied to a robot. Types of robots include, but are not limited to, legged robots, wheeled robots, tracked robots, crawling robots, peristaltic robots, and other mobile robots. Legged robots include, but are not limited to, one-legged robots, two-legged robots, multi-legged robots, and the like. A robot is a machine capable of performing work semi-autonomously or fully autonomously; it is not limited to a humanoid robotic device and may, for example, take the shape of a dog, a horse, a snake, a fish, and the like. In particular, the robot may be a quadruped robotic horse. Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a robot 100 according to an embodiment of the present disclosure. The robot 100 may be any one of various robots; for example, the robot 100 may specifically be a legged robot, or a combination of a legged robot and a wheeled robot. In the embodiment shown in fig. 1, the robot 100 comprises a mechanical unit 101, a communication unit 102, a sensing unit 103, an interface unit 104, a storage unit 105, a display unit 106, an input unit 107, a control module 108, and a power supply 109.
The various components of the robot 100 may be connected in any manner, including wired or wireless connections. Those skilled in the art will appreciate that the specific structure of the robot 100 shown in fig. 1 does not constitute a limitation on the robot 100: the robot 100 may include more or fewer components than those shown, some components are not essential to the robot 100, and components may be omitted, added, or combined as needed in the actual application scenario without changing the essence of the invention.
Fig. 2 is a schematic mechanical structure diagram of a robot according to an embodiment of the present application, and the following describes each component of the robot 100 in detail with reference to fig. 1 and fig. 2:
the mechanical unit 101 is hardware of the robot 100. As shown in fig. 1, the mechanical unit 101 may specifically include a drive plate 1011, a motor 1012, and a mechanical structure 1013. Wherein the driving board 1011 and the motor 1012 can be used to provide power for the robot 100, and the mechanical structure 1013 can include at least one structure for forming the robot 100, and the specific structure corresponds to the practical application requirement of the robot 100.
As shown in fig. 2, in some embodiments, the robotic machine structure 1013 may include a fuselage body 1014, extendable legs 1015, feet 1016, and in other embodiments, the machine structure 1013 may further include extendable robotic arms, a rotatable head structure, a shakable tail structure, a payload structure, a saddle structure, a camera structure, and the like. It should be noted that each component module of the mechanical unit 101 may be one or multiple, and may be configured according to specific situations, for example, the number of the legs 1015 may be 4, each leg 1015 may be configured with 3 motors 1012, and the number of the corresponding motors 1012 is 12, which is not completely listed here for the mechanical structure of the robot.
The communication unit 102 may be used for receiving and transmitting signals, and may also communicate with other devices via a network. For example, it may receive instruction information sent by a remote controller or another robot instructing the robot to move in a specific direction at a specific speed with a specific gait, and transmit the information to the control module 108 for processing. The communication unit 102 includes, for example, a Wi-Fi module, a 4G module, a 5G module, a Bluetooth module, or an infrared module.
The sensing unit 103 is configured to obtain information about the environment around the robot 100 and parameter data of the components inside the robot 100, and to send this data to the control module 108. The sensing unit 103 includes various sensors. Sensors for acquiring surrounding environment information include: laser radar (for long-range object detection, distance determination, and/or velocity determination), millimeter wave radar (for short-range object detection, distance determination, and/or velocity determination), a camera, an infrared camera, a Global Navigation Satellite System (GNSS), and the like. Sensors for monitoring the components inside the robot 100 include: an Inertial Measurement Unit (IMU) (for measuring velocity, acceleration, and angular velocity), sole sensors (for monitoring the sole contact point position, sole attitude, and the magnitude and direction of the ground contact force), and temperature sensors (for detecting component temperatures). Other sensors that the robot 100 may also be configured with, such as a load sensor, a touch sensor, a motor angle sensor, and a torque sensor, are not described in detail here.
The interface unit 104 may be used to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the robot 100, or may be used to output to an external device (e.g., data information, power, etc.). The interface unit 104 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 105 is used to store software programs and various data. The storage unit 105 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system program, a motion control program, an application program (such as a text editor), and the like; the data storage area may store data generated by the robot 100 in use (such as various sensing data acquired by the sensing unit 103 and log file data), and the like. Further, the storage unit 105 may include high-speed random access memory, and may also include non-volatile memory, such as disk memory, flash memory, or other non-volatile solid-state memory.
The display unit 106 is used to output information input by a user or information provided to the user. The information provided to the user may be any of several types, such as text, audio, image, or video information. For example, the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The display unit 106 may also include an audio output unit, such as a speaker.
The input unit 107 may be used to receive input numeric or character information. Specifically, the input unit 107 may include a touch panel or other input device 1072. The touch panel, also called a touch screen, may collect a touch operation of a user (e.g., an operation of the user on or near the touch panel using a palm, a finger, or a matching device), and drive a corresponding connection device according to a preset program. The touch panel may include two parts, a touch detection device 1073 and a touch controller 1074. The touch detection device 1073 detects the touch orientation of the user, detects a signal caused by a touch operation, and transmits the signal to the touch controller 1074; the touch controller 1074 receives touch information from the touch sensing device 1073, converts it into touch point coordinates, transmits them to the control module 108, and receives and executes commands from the control module 108. The input unit 107 may include other input devices 1072 in addition to the touch panel. In particular, other input devices 1072 may include, but are not limited to, one or more of a remote control joystick, a mouse, a keyboard, a computer, a mobile terminal, or a microphone, etc., which are not limited to these specific examples.
In one embodiment, the touch panel may cover the display panel 1061. When the touch panel detects a touch operation on or near it, it transmits the operation to the control module 108 to determine the type of the touch event, and the control module 108 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel and the display panel 1061 are shown in fig. 1 as two separate components to implement input and output functions, in some embodiments the touch panel and the display panel 1061 may be integrated to implement the input and output functions, which is not limited here.
The control module 108 is a control center of the robot 100, connects various components of the entire robot 100 using various interfaces and lines, and performs overall control of the robot 100 by running or executing software programs stored in the storage unit 105 and calling data stored in the storage unit 105.
The power supply 109 is used to supply power to the various components, and the power supply 109 may include a battery and a power control board that controls the battery charging, discharging, and power management functions. In the embodiment shown in fig. 1, the power supply 109 is electrically connected to the control module 108, and in other embodiments, the power supply 109 may also be electrically connected to the sensing unit 103 (e.g., a camera, a radar, a speaker, etc.) or the motor 1012, respectively. It should be noted that each component may be connected to a different power source 109 or powered by the same power source 109.
On the basis of the above embodiments, in some embodiments, the robot 100 may be communicatively connected with a terminal device. When the terminal device communicates with the robot 100, the terminal device may send instruction information to the robot 100; the robot 100 may receive the instruction information through the communication unit 102 and, upon receiving it, transmit it to the control module 108, so that the control module 108 may process the target speed value according to the instruction information. Terminal devices include, but are not limited to: a mobile phone, a tablet computer, a server, a personal computer, a wearable smart device, or other electrical equipment with an image capturing function.
The instruction information may be determined according to a preset condition. In one embodiment, the robot 100 may include a sensing unit 103, and the sensing unit 103 may generate instruction information according to a current environment in which the robot 100 is located. The control module 108 may determine whether the current speed value of the robot 100 satisfies a corresponding preset condition according to the instruction information. If yes, the current speed value and the current gait movement of the robot 100 are maintained; if not, the target speed value and the corresponding target gait are determined according to the corresponding preset conditions, so that the robot 100 can be controlled to move with the target speed value and the corresponding target gait. The environmental sensors may include temperature sensors, air pressure sensors, visual sensors, sound sensors. The instruction information may include temperature information, air pressure information, image information, and sound information. The communication mode between the environmental sensor and the control module 108 may be wired communication or wireless communication. The manner of wireless communication includes, but is not limited to: wireless network, mobile communication network (3G, 4G, 5G, etc.), bluetooth, infrared.
The hardware configuration and the mechanical configuration of the robot according to the present application are explained above, and the map generation method according to the present application is explained below.
The robot can perform positioning, navigation, or path planning tasks by means of a map, and currently, in most indoor scenes, a mobile robot performs positioning, path planning, and navigation based on a grid map. It should be noted, however, that the grid map scheme works well in indoor scenes, whereas in large outdoor scenes a grid map occupies a relatively large amount of memory and its path planning efficiency is low; consequently, a robot working outdoors often uses a lightweight semantic map. A lightweight semantic map can be used for positioning, but it cannot be directly used for path planning of the robot. The map generation method provided by the present application yields a map that can be used both for positioning and for path planning in the corresponding scene, which effectively improves the reusability and application efficiency of the map and saves map storage resources.
In an embodiment, please refer to fig. 3, and fig. 3 is a flowchart illustrating a map generation method according to an embodiment of the present application. The map generation method provided by the present application may be applied to the robot shown in fig. 1 or fig. 2, and may also be applied to other electronic devices, such as a server, a computer, a mobile phone, or a wearable electronic device. For convenience of description, the following embodiments are exemplified with a robot as an execution subject. Specifically, the map generation method includes:
and S10, acquiring an initial map to be processed, wherein the initial map comprises environment semantic information corresponding to a target environment.
In some specific embodiments, the initial map to be processed may specifically be a map established based on a target environment, and the initial map includes environment semantic information corresponding to the target environment. The initial map may specifically be a 2D map, a 2.5D map, or a 3D map. The environment semantic information included in the initial map is hierarchically divided into a visual layer, an object layer, and a concept layer. The visual layer carries low-level feature semantics such as color, texture, shape, and edge; the object layer carries middle-level feature semantics and generally comprises attribute features, such as the state presented by a certain object at a certain moment; the concept layer carries high-level feature semantics and generally represents a certain object as a whole.
The target environment may be an environment within a preset range where the robot has been, is currently located, or is about to enter, and the preset range may be determined according to the actual application requirements or the requirements of the robot. For example, it may be an outdoor area of 100 meters by 100 meters in which the robot is to perform a patrol task. The type of map created for the target environment may be a metric map, and among metric maps an abstracted sparse semantic map may be selected. As shown in fig. 4, fig. 4 is a schematic diagram of an initial map in one embodiment. The initial map is specifically a sparse semantic map corresponding to the target environment, and the sparse semantic map may include environment semantic information such as traffic lights, billboards, buildings, lawns, or road edge lines, which is represented by the black area blocks in fig. 4 and is not limited here.
Specifically, the robot may obtain an initial map to be processed, where the initial map may be generated by the robot as it currently travels in the target environment, or may be a map established in advance. For example, a pre-established global map of a certain area may be used as the initial map to be processed. When the robot generates the map itself, it may build the map from data acquired while traveling, or it may first store the data acquired by tracking during travel and then generate the map in one pass; the processing method is not specifically limited here. The robot may adopt a stable tracking system, such as ARCore/ARKit, which provides a six-degree-of-freedom position and attitude, so as to continuously track the target environment.
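For illustration only, the sketch below shows one possible in-memory representation of such an initial sparse semantic map: a container of semantic elements (each with a label, a semantic layer, and a point/line/polygon geometry) together with the mapping trajectory. The class names, fields, and labels are assumptions introduced for this example and are not part of the disclosure.

```python
# Hypothetical sketch of an initial sparse semantic map; names and fields are
# illustrative assumptions, not the patent's data format.
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in map coordinates


@dataclass
class SemanticElement:
    label: str                 # e.g. "traffic_light", "lawn", "road_edge"
    layer: str                 # "visual", "object", or "concept" layer
    geometry: List[Point3D]    # point/line/polygon vertices of the element


@dataclass
class InitialMap:
    elements: List[SemanticElement] = field(default_factory=list)
    trajectory: List[Point3D] = field(default_factory=list)  # mapping trajectory

    def edge_lines(self) -> List[List[Point3D]]:
        """Return geometries that can serve as obstacle edge lines."""
        return [e.geometry for e in self.elements
                if e.label in {"lawn", "building", "road_edge"}]


if __name__ == "__main__":
    m = InitialMap(
        elements=[SemanticElement("lawn", "concept",
                                  [(0, 0, 0), (5, 0, 0), (5, 5, 0), (0, 5, 0)])],
        trajectory=[(6, 0, 0), (6, 2, 0), (6, 4, 0)],
    )
    print(len(m.edge_lines()), "obstacle edge line(s)")
```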
S20, determining a target area in the initial map according to the environment semantic information.
In some embodiments, the obtained initial map is a sparse map. A sparse map is used for visual localization and selectively ignores part of the information: all objects are abstracted to some extent so that only the meaningful part of the map is retained, and therefore only the positions of point features and line features, such as arcs and nodes, can be seen on the sparse map. If the robot is not to stray into an impassable area while performing a task, for example a predetermined impassable area such as a traffic lane or a lawn, or into an unexplored area, the movement area of the robot can be further restricted on the initial map.
Specifically, the robot may determine the target area in the initial map according to the environment semantic information in the initial map. The target area may be a passable area in which the robot can move in the target environment, such as a sidewalk or a zebra crossing. In some embodiments, a movement track of the robot may be acquired based on the obtained initial map and associated with the line features on the initial map, so that the area between the movement track and the line features is set as the target area and the remaining areas are set as impassable or unexplored areas for the robot. The passable area of the robot may also be determined according to the acquired environment semantic information; for example, the environment semantic information may include semantic information corresponding to the passable area. Alternatively, the passable area of the robot may already have been determined when the initial map was established; for example, the target area may be marked manually in the initial map. The specific target area is not limited here.
S30, generating a plurality of topological nodes in the target area.
A topology is an arrangement of point, line, and polygon elements that share geometry: points are independent of one another, points are connected into lines, and lines are connected into planes; each line starts at a starting node, ends at an ending node, and is adjacent to the polygons on its left and right. The topology is used to represent the connection relations and distances between different positions.
The vision system of the robot may comprise one camera or at least two cameras, for example monocular, binocular, multi-ocular, or panoramic stereo vision. Therefore, in some embodiments, a certain number of topological nodes may be generated on the obtained initial map, and additional geographic information may then be represented on the basis of arcs and nodes by means of the topological structure, so that the robot can conveniently determine the passable area within its field of view according to the topological structure and then plan a movement path within that area.
Specifically, the topological nodes may include general nodes and may also include special nodes indicating specific locations; for example, certain passage entrances and elevator access points may be set as special nodes.
The topological nodes may be generated in various orders. For example, the topological nodes may be generated directly in the target area, or a number of nodes may first be generated on the obtained initial map and the nodes falling in the target area then screened out as topological nodes; this is not limited here. The topological nodes may also be generated in various ways, for example randomly, or according to a preset node generation rule; the way of generating the topological nodes is not limited here either.
In some embodiments, the topological nodes may be generated by a preset node generation rule. Specifically, a point (x, y, z) may be set in the target area, and extension lines may be projected outward on the horizontal plane at height z with this point as the origin, where the extension lines may be virtual. If an extension line intersects an impassable or unexplored area, the intersection point is taken as the end point of that extension line, the line segment between the origin and the intersection point is sampled, and the sampled points are taken as topological nodes. The geographic data on the initial map may also be stored as sequences of X, Y coordinate pairs representing points, lines, and polygons respectively, with associated attribute data stored in tables and linked to semantic features by internal identification numbers, and so on.
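The node generation rule described above can be sketched as follows, assuming the impassable or unexplored regions are given as boundary segments on the horizontal plane at height z; the number of extension lines, the sampling step, and the maximum range are illustrative assumptions rather than values from the disclosure.

```python
# Illustrative sketch only: cast virtual extension lines from an origin point on a
# horizontal plane, stop each line at the first impassable-region boundary it meets,
# and sample topological nodes along the resulting segment. Parameters are assumptions.
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]


def ray_segment_hit(o: Point, d: Point, a: Point, b: Point) -> Optional[float]:
    """Return distance t >= 0 along ray o + t*d where it crosses segment ab, else None."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = d[0] * ey - d[1] * ex
    if abs(denom) < 1e-12:                      # ray parallel to the segment
        return None
    t = ((a[0] - o[0]) * ey - (a[1] - o[1]) * ex) / denom
    u = ((a[0] - o[0]) * d[1] - (a[1] - o[1]) * d[0]) / denom
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else None


def generate_nodes(origin: Point, blocked_edges: List[Tuple[Point, Point]],
                   n_rays: int = 36, step: float = 0.5,
                   max_range: float = 30.0) -> List[Point]:
    nodes: List[Point] = []
    for k in range(n_rays):
        ang = 2.0 * math.pi * k / n_rays
        d = (math.cos(ang), math.sin(ang))
        # end the extension line at the nearest impassable/unexplored boundary
        hits = [t for a, b in blocked_edges
                if (t := ray_segment_hit(origin, d, a, b)) is not None]
        end_t = min(hits) if hits else max_range
        # sample the segment between the origin and the intersection point
        t = step
        while t < end_t:
            nodes.append((origin[0] + t * d[0], origin[1] + t * d[1]))
            t += step
    return nodes


if __name__ == "__main__":
    lawn = [((5, -5), (5, 5))]                  # one boundary edge of a blocked area
    print(len(generate_nodes((0.0, 0.0), lawn)))
```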
S40, obtaining topological data corresponding to the target area according to the connection relation among the plurality of topological nodes.
When navigating by means of a map, a robot that encounters an obstacle exceeding a certain height threshold, which it cannot cross, must avoid the obstacle using its sensors and repeatedly recalculate the navigation path in order to bypass the obstacle and finally reach the target position. Such obstacle avoidance inevitably affects the movement time and movement efficiency of the robot.
In the embodiments of the present application, the robot can obtain the topological data corresponding to the target area according to the connection relations among the plurality of topological nodes. The topological data can reflect spatial elements and the relations among them, where the spatial elements refer to topological nodes and topological line segments. The topological data can thus support path planning that optimizes the robot's movement duration and guarantees its safety during movement.
In some specific embodiments, the manner of obtaining the topology data corresponding to the target area according to the connection relationship of the topology nodes may include multiple manners, for example, the corresponding topology data may be obtained through the connection between the topology nodes, the corresponding topology data may be obtained through the connection between the nodes and the arc line segments, or the corresponding topology data may be obtained through the connection between the nodes and the polygons, which is not limited herein.
When analyzing the connection relations between topological nodes, either local nodes or global nodes may be selected for analysis. When local nodes are analyzed, point cloud assistance can be adopted. For example, semantic features in the target area are sampled in advance to obtain corresponding point cloud data, and the point cloud data can be regarded as obstacles; a number of topological nodes within a certain local range are then connected with one another, the map effect of the connected nodes being shown in fig. 5, and finally the corresponding topological data are obtained according to the positional relations between the connecting line segments and the point cloud data. When global nodes are analyzed, certain conditional constraints can be applied to nodes and polygons; for example, when the positional relation between a node and a polygon satisfies an intersection condition, the topological nodes can be connected according to a preset rule to obtain the topological data corresponding to the target area. The specific manner of obtaining the topological data is not limited here.
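As an illustration of the point-cloud-assisted local analysis, the sketch below connects topological nodes within a local radius and keeps a connecting line only if no obstacle point of the point cloud lies within a clearance distance of that segment; the radius and clearance values are assumptions for the example.

```python
# Illustrative sketch: connect nearby topological nodes and reject connections that
# pass too close to point-cloud obstacles. Thresholds are assumed values.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def point_segment_dist(p: Point, a: Point, b: Point) -> float:
    """Distance from point p to segment ab."""
    ax, ay, bx, by = a[0], a[1], b[0], b[1]
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(p[0] - ax, p[1] - ay)
    t = max(0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(p[0] - cx, p[1] - cy)


def build_topology(nodes: List[Point], cloud: List[Point],
                   connect_radius: float = 3.0,
                   clearance: float = 0.5) -> List[Tuple[int, int]]:
    """Return index pairs of topological connecting lines that clear the point cloud."""
    edges: List[Tuple[int, int]] = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) > connect_radius:
                continue        # only connect nodes within the local range
            if all(point_segment_dist(q, nodes[i], nodes[j]) >= clearance
                   for q in cloud):
                edges.append((i, j))
    return edges


if __name__ == "__main__":
    nodes = [(0, 0), (2, 0), (2, 2)]
    cloud = [(1.0, 0.1)]        # one obstacle point near the (0,0)-(2,0) segment
    print(build_topology(nodes, cloud))   # the blocked connection is filtered out
```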
S50, generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, wherein the semantic topological map comprises the environment semantic information.
Based on the topological data and the initial map, a semantic topological map comprising the environment semantic information is generated, which provides a feasible path network for the robot; the extent of the semantic topological map corresponds to the extent of the initial map. In the embodiments of the present application, a semantic topological map refers to a map that represents the environment with semantic features (such as points, lines, and planes) and that carries topological data consisting of topological nodes and the associated connecting lines. As shown in fig. 6, fig. 6 is a schematic diagram of a semantic topological map in one embodiment. The semantic topological map can be used for tasks such as positioning, navigation, and path planning of the robot.
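Purely as an illustration of combining the two sources, the following sketch attaches the topological data (nodes and connecting lines) to the semantic content of the initial map in a single container that can then serve positioning and path-planning queries alike; the structure and field names are assumptions, not the storage format of the disclosure.

```python
# Hypothetical container for a semantic topological map: it keeps the environment
# semantic information of the initial map and adds the generated topology data.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]


@dataclass
class SemanticTopologicalMap:
    semantic_elements: List[dict] = field(default_factory=list)  # from the initial map
    topo_nodes: List[Point] = field(default_factory=list)
    topo_edges: List[Tuple[int, int]] = field(default_factory=list)

    @classmethod
    def build(cls, initial_map: Dict, nodes: List[Point],
              edges: List[Tuple[int, int]]) -> "SemanticTopologicalMap":
        """Merge initial-map semantics with topology data into one map object."""
        return cls(semantic_elements=list(initial_map.get("elements", [])),
                   topo_nodes=nodes, topo_edges=edges)

    def neighbors(self, i: int) -> List[int]:
        """Adjacent topological nodes, usable for path planning on the same map."""
        return [b if a == i else a for a, b in self.topo_edges if i in (a, b)]


if __name__ == "__main__":
    initial = {"elements": [{"label": "traffic_light", "xy": (3.0, 4.0)}]}
    smap = SemanticTopologicalMap.build(initial, [(0, 0), (1, 0)], [(0, 1)])
    print(smap.neighbors(0), len(smap.semantic_elements))
```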
In this embodiment, after an initial map including environment semantic information corresponding to a target environment is acquired, the target area is determined according to the environment semantic information, topological nodes are generated in the target area, passable topological data are obtained by processing the connection relations among the plurality of topological nodes, and a semantic topological map corresponding to the target environment is generated based on the topological data and the initial map, where the semantic topological map includes both the environment semantic information corresponding to the target environment and the topological data. The semantic topological map obtained in the embodiments of the present application can support the robot in executing various tasks such as positioning, navigation, and path planning without switching between different maps, effectively improving the reusability and application efficiency of the map.
It should be noted that, when the initial map is a sparse map, the topological data obtained based on the sparse map has no real-time requirement and does not affect the performance of the semantic topological map. If the obtained topological data turns out to be unsatisfactory, the relevant parameters of the sparse map can be readjusted and the topology regenerated; there is no need to maintain the positions of all existing topological points, recalculate the relations with the positions of the existing topological points, or delete duplicated topological points, and the map does not need to be re-created.
In one embodiment, determining the target area in the initial map according to the environment semantic information in step S20 may include:
s201, obtaining obstacle edge lines from the initial map according to the environment semantic information.
S202, obtaining a moving track in the initial map.
S203, determining an area between the obstacle edge line and the movement track as the target area in the initial map.
The environment semantic information in the initial map may be represented by line features in the map. An obstacle edge line refers to a line along the periphery of an obstacle in the target environment, where the obstacle corresponding to the obstacle edge line is an object that the robot must not enter or must avoid colliding with; it may specifically be a dynamic obstacle, such as a moving human body, or a static obstacle, such as a lawn or a building, which is not limited here.
The environment semantic information can be semantic information of a two-dimensional or three-dimensional scene, and can be further subdivided into semantic information of indoor and outdoor scenes such as hospitals, restaurants, hotels, and parks. The movement track refers to the track along which the robot moves in the initial map; it may specifically be the mapping track along which the initial map was established, or the movement track of the robot in the target environment, and the movement track may include a plurality of track points.
The target area is a passable area in which the robot can move in the target environment, such as a sidewalk or a zebra crossing, and may also be an area that the robot has already passed through. In some embodiments, as shown in fig. 7, the environment semantic information may be used to represent the black areas in fig. 7, which may, for example, be buildings in the target environment. According to the environment semantic information, the robot can take the lines forming the black areas in fig. 7 as obstacle edge lines, and curve a is the movement track of the robot in the initial map.
The robot can obtain the obstacle edge lines in the initial map according to the environment semantic information, and the areas bounded by the obstacle edge lines are taken as impassable areas. However, the obstacle edge line alone cannot determine which side is the impassable area. The robot may therefore acquire the movement track in the initial map: the movement track indicates that the robot has moved in the corresponding area, that is, the area where the movement track is located is a passable area. The robot can thus determine the area between the obstacle edge line and the movement track as the target area in the initial map, i.e., the passable area corresponding to the robot. As shown in fig. 7, in the initial map corresponding to fig. 7, the white area between the movement track a and the obstacle edge lines is the target area in the initial map, and the black areas are areas through which the robot cannot pass.
In the embodiment, the obstacle edge line is acquired from the initial map according to the environment semantic information, the moving track in the initial map is acquired, and the area between the obstacle edge line and the moving track is determined as the target area, so that the passable area in the initial map is determined quickly and accurately, and the efficiency and the accuracy of determining the target area are effectively improved.
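One simple way to realize "the area between the obstacle edge line and the movement track" is sketched below, under the assumption that the movement track and the obstacle edge line can be joined into a closed boundary polygon of the passable region; a query point then belongs to the target area if a standard point-in-polygon test succeeds. The polygon construction is an assumption made for this example.

```python
# Illustrative sketch: treat the passable target area as the region bounded by the
# movement track on one side and the obstacle edge line on the other, and test
# whether a query point lies inside it (ray-casting point-in-polygon).
from typing import List, Tuple

Point = Tuple[float, float]


def point_in_polygon(p: Point, poly: List[Point]) -> bool:
    """Standard even-odd ray-casting test."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = (y1 > p[1]) != (y2 > p[1])
        if crosses and p[0] < (x2 - x1) * (p[1] - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside


def target_area_polygon(track: List[Point], edge_line: List[Point]) -> List[Point]:
    """Close the boundary: movement track forward, then obstacle edge line backward."""
    return track + list(reversed(edge_line))


if __name__ == "__main__":
    track = [(0, 0), (10, 0)]               # where the robot has actually moved
    edge_line = [(0, 4), (10, 4)]           # e.g. a building edge
    area = target_area_polygon(track, edge_line)
    print(point_in_polygon((5, 2), area))   # True: between track and obstacle edge
    print(point_in_polygon((5, 6), area))   # False: beyond the obstacle edge line
```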
In one embodiment, generating the plurality of topological nodes in the target area in step S30 may include:
s301, obtaining a moving track corresponding to the initial map.
S302, carrying out node sampling on the moving track to obtain a plurality of track nodes on the moving track.
S303, acquiring obstacle edge lines from the initial map according to the environment semantic information.
S304, determining a plurality of line segments to be sampled between at least one track node and the obstacle edge line.
S305, carrying out topology sampling on the line segments to be sampled to obtain a plurality of topology nodes in the target area.
The robot can perform node sampling on the moving track to obtain a plurality of track nodes on the moving track. Sampling the moving track into nodes makes the track data on the initial map easier to process, saves the computing power of the robot, and allows the robot's vision to make better use of the geographic information on the initial map. The nodes obtained by node sampling are track nodes, and a track node can be any position point at which the robot has been located. The node sampling may be uniform sampling or random sampling, and random sampling can be further divided into dense sampling, sparse sampling, and the like; the specific sampling manner is not limited here. As shown in fig. 8, fig. 8 is a map diagram including a movement track in one embodiment. In fig. 8, the robot performs node sampling on the moving track to obtain a plurality of track nodes on the moving track.
From the plurality of track nodes obtained by sampling, the robot can determine a plurality of line segments to be sampled between at least one track node and the obstacle edge line. A line segment to be sampled is a line segment in the target area on which topological nodes can be sampled, and it connects a track node and the obstacle edge line. The line segment to be sampled may be a straight line segment or a polyline segment, and it may or may not be the shortest line segment from the track node to the obstacle edge line. One, two, or more line segments to be sampled may exist between one track node and any obstacle edge line.
Specifically, the robot may take the plurality of track nodes as starting points and establish connections between each track node and the obstacle edge line, thereby obtaining a plurality of line segments to be sampled. The line segments to be sampled all lie in the target area, and the robot can perform topology sampling on them to obtain a plurality of topological nodes of the target area. The two endpoints of each line segment to be sampled lie respectively on a track node and on an obstacle edge line, and the line segment to be sampled can represent a path that the robot can plan and travel. The topology sampling may likewise be performed in various ways, such as uniform sampling at a preset interval or random sampling.
In this embodiment, node sampling of the moving track saves computation on the moving track, and topology sampling of the line segments to be sampled saves computation on those segments while showing the connection relations and distances between different positions in the target area more accurately and clearly. In addition, because the robot samples nodes around the moving track and performs topology sampling on the line segments to be sampled obtained around those track nodes, the resulting topological nodes surround the moving track more closely.
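A minimal sketch of the node sampling in step S302 is given below, assuming uniform sampling at a fixed spacing along the recorded moving track; the spacing value is an assumption.

```python
# Illustrative sketch: uniformly sample track nodes along the moving track.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def sample_track_nodes(track: List[Point], spacing: float = 1.0) -> List[Point]:
    """Walk along the track polyline and emit a node every `spacing` meters."""
    if not track:
        return []
    nodes = [track[0]]
    carried = 0.0                       # distance already covered since the last node
    for a, b in zip(track, track[1:]):
        seg = math.dist(a, b)
        while carried + seg >= spacing:
            t = (spacing - carried) / seg
            a = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            nodes.append(a)
            seg = math.dist(a, b)
            carried = 0.0
        carried += seg
    return nodes


if __name__ == "__main__":
    track = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0)]
    print(sample_track_nodes(track, spacing=1.0))
```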
In one embodiment, the step S304 of determining a plurality of line segments to be sampled between at least one trajectory node and the obstacle edge line may include:
s3041, projecting extension lines to the periphery by taking at least one track node as an end point to obtain a plurality of target intersection points where the barrier edge lines and the extension lines intersect.
S3042, determining a connection line between each of the at least one track node and the plurality of target intersections as a line segment to be sampled between the track node and the obstacle edge line.
An extension line is a line extending outward with a track node as its end point, and the extension line may be virtual. The robot can take at least one track node as an end point and project a plurality of extension lines to the periphery to obtain a plurality of target intersection points where the obstacle edge lines and the extension lines intersect. The extension line may be a straight line, a curve, a polyline, or the like. The robot can project extension lines to the periphery at a preset angular interval, which can be determined according to the actual application requirements; for example, extension lines may be projected every 10°. An extension line can be extended indefinitely until it intersects an obstacle edge line, yielding a plurality of target intersection points where the obstacle edge lines and the extension lines intersect. As shown in fig. 9, fig. 9 is a map diagram including extension lines in one embodiment. A target intersection point does not have to be an actually occurring intersection: even if an extension line and an obstacle edge line do not actually intersect, the projection intersection point determined for that extension line can still be called a target intersection point as long as its height lies within a preset range.
The robot can determine a connecting line between at least one track node and each of the plurality of target intersection points as a line segment to be sampled between the track node and the obstacle edge line.
In one embodiment, when the initial map is a three-dimensional map, the robot may project a plurality of extension lines to the periphery with a track node as the end point, where the extension lines are parallel to the horizontal plane, i.e., the extension lines are projected horizontally. It will be appreciated that when the initial map is a two-dimensional map there is no data in the vertical direction, and the robot can project the extension lines on the plane in which the initial map lies. When the initial map is a three-dimensional map, it contains data in the vertical direction; the robot can obtain its own height at the track node and project on the plane corresponding to that robot height, obtaining a plurality of extension lines at the robot height. The robot height may differ at different track nodes, and it may specifically be the total height of the robot, or the center height or center-of-mass height of the robot, or the height of any point on or around the robot.
The robot can project the extension lines and the obstacle edge lines onto the same plane to obtain a plurality of projection intersection points between them on that plane, which ensures that intersection points between the extension lines and the obstacle edge lines exist when the initial map is three-dimensional and avoids the situation where no intersection point is found because of height errors. The robot obtains a first intersection height corresponding to the projection intersection point on the extension line and a second intersection height corresponding to the projection intersection point on the obstacle edge line, where the first intersection height corresponds to the height of the extension line and the second intersection height corresponds to the height of the obstacle edge line. The robot can determine the height difference between the first intersection height and the second intersection height; if the height difference is smaller than a preset height threshold, the point corresponding to the projection intersection point on the extension line is determined as a target intersection point. If the height difference is greater than or equal to the preset height threshold, the point corresponding to the projection intersection point on the extension line is filtered out and not taken as a target intersection point. The preset height threshold may be determined according to the actual application requirements. For example, the three-dimensional map may contain a hollow table top, and the actual robot height may be slightly higher than the table top, but due to errors in practical application the detected robot height may be lower than the table top. If the extension line is projected on the plane of the detected robot height, there is no intersection point between the extension line and the table top, which differs from the actual scene and makes the robot likely to collide with the obstacle in the target environment. With the above method, the target intersection point between the extension line and the table top can still be determined, preventing the robot from colliding with the obstacle and effectively improving the safety of the robot.
As shown in fig. 10, fig. 10 is a map diagram including line segments to be sampled in one embodiment. The robot can traverse the plurality of track nodes to obtain the target intersection points between the extension lines of each track node and the obstacle edge lines, and determine the connecting line between the track node and each target intersection point as a line segment to be sampled. The connecting line between a track node and a target intersection point may be a straight line, a curve, or a polyline.
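The height-difference check described above can be sketched as follows: the extension line (cast at the robot height) and a three-dimensional obstacle edge segment are projected onto the horizontal plane, and a projected crossing is accepted as a target intersection point only when the interpolated heights of the two lines at that crossing differ by less than the preset height threshold. The threshold value used here is an assumption.

```python
# Illustrative sketch of the height-difference check: an extension line (at robot
# height) and an obstacle edge segment (3D) are projected onto the horizontal plane;
# if they cross there, the crossing is accepted as a target intersection point only
# when the two interpolated heights differ by less than a preset threshold (assumed).
from typing import Optional, Tuple

P3 = Tuple[float, float, float]


def projected_target_intersection(node: P3, direction: Tuple[float, float],
                                  edge_a: P3, edge_b: P3,
                                  height_threshold: float = 0.3) -> Optional[P3]:
    ox, oy, oz = node
    dx, dy = direction
    ex, ey = edge_b[0] - edge_a[0], edge_b[1] - edge_a[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                                     # parallel in the projection
    t = ((edge_a[0] - ox) * ey - (edge_a[1] - oy) * ex) / denom
    u = ((edge_a[0] - ox) * dy - (edge_a[1] - oy) * dx) / denom
    if t < 0.0 or not 0.0 <= u <= 1.0:
        return None                                     # no projected crossing
    h_line = oz                                         # extension line stays at robot height
    h_edge = edge_a[2] + u * (edge_b[2] - edge_a[2])    # edge height at the crossing
    if abs(h_line - h_edge) >= height_threshold:
        return None                                     # too far apart vertically: filter out
    return (ox + t * dx, oy + t * dy, h_line)


if __name__ == "__main__":
    # A table edge at 0.8 m crossed by an extension line cast at 0.7 m robot height.
    print(projected_target_intersection((0, 0, 0.7), (1, 0),
                                         (2, -1, 0.8), (2, 1, 0.8)))
```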
In an embodiment, in step S305, performing topology sampling on a plurality of line segments to be sampled to obtain a plurality of topology nodes in the target area, which may include:
s3051, sampling the line segments to be sampled to obtain a plurality of dense nodes in the target area.
S3052, performing down-sampling processing on the plurality of dense nodes to obtain a plurality of sparse nodes.
S3053, performing disturbance processing on the sparse nodes to obtain a plurality of out-of-order nodes in the target area.
S3054, determining out-of-order nodes with the distance between the out-of-order nodes and the obstacle edge line larger than or equal to a first distance threshold value as topological nodes in the target area.
The robot may sample the plurality of line segments to be sampled to obtain a plurality of dense nodes in the target area, as shown in fig. 11; the sampling of dense nodes may be uniform or random. The robot can then perform down-sampling on the dense nodes to obtain sparse nodes; for example, the robot may randomly delete dense nodes in the target area, resulting in a plurality of sparse nodes, with the degree of down-sampling determined according to the actual application requirements. The robot can apply disturbance processing to the sparse nodes to obtain a plurality of out-of-order nodes in the target area; the disturbance may specifically be random disturbance, which increases the randomness of the generated topological nodes. The robot may then determine the out-of-order nodes whose distance to the obstacle edge line is greater than or equal to a first distance threshold as topological nodes in the target area. The first distance threshold may be preset and may be set according to the target environment. Screening out the out-of-order nodes whose distance to the obstacle edge line is at least the first distance threshold prevents nodes from being too close to the obstacle edge line and reserves a certain movement space for the robot, thereby effectively improving the safety of the robot when it performs tasks based on the resulting map.
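The four sub-steps S3051 to S3054 can be strung together as in the sketch below, assuming uniform dense sampling along each line segment to be sampled, random down-sampling, small random disturbance, and removal of nodes closer than the first distance threshold to the obstacle edge sampling points; all numeric parameters are assumptions.

```python
# Illustrative sketch of S3051-S3054; sampling step, keep ratio, disturbance size,
# and the first distance threshold are assumed values, not from the disclosure.
import math
import random
from typing import List, Tuple

Point = Tuple[float, float]


def dense_sample(segments: List[Tuple[Point, Point]], step: float = 0.2) -> List[Point]:
    nodes = []
    for a, b in segments:
        n = max(1, int(math.dist(a, b) / step))
        nodes += [(a[0] + (b[0] - a[0]) * k / n, a[1] + (b[1] - a[1]) * k / n)
                  for k in range(n + 1)]
    return nodes


def topology_nodes(segments: List[Tuple[Point, Point]], edge_points: List[Point],
                   keep_ratio: float = 0.3, jitter: float = 0.1,
                   first_distance_threshold: float = 0.5,
                   seed: int = 0) -> List[Point]:
    rng = random.Random(seed)
    dense = dense_sample(segments)                                   # S3051: dense nodes
    sparse = [p for p in dense if rng.random() < keep_ratio]         # S3052: down-sampling
    shuffled = [(p[0] + rng.uniform(-jitter, jitter),                # S3053: disturbance
                 p[1] + rng.uniform(-jitter, jitter)) for p in sparse]
    return [p for p in shuffled                                      # S3054: distance filter
            if min(math.dist(p, q) for q in edge_points) >= first_distance_threshold]


if __name__ == "__main__":
    segments = [((0, 0), (4, 0))]             # one line segment to be sampled
    edge_points = [(4, 0)]                    # a sampled point of an obstacle edge line
    print(topology_nodes(segments, edge_points))
```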
In an embodiment, in step S40, obtaining topology data corresponding to the target area according to a connection relationship between the plurality of topology nodes may include:
s401, connecting a plurality of topological nodes to obtain a plurality of initial topological connecting lines.
S402, sampling the initial topological connecting lines to obtain sampling points on the initial topological connecting lines.
S403, determining the object distance between the sampling point and the obstacle edge line.
S404, screening the initial topological connecting lines whose object distance is greater than or equal to a second distance threshold to obtain the topological data corresponding to the target area, wherein the topological data comprise the plurality of topological nodes and the target topological connecting lines obtained by the screening.
After the plurality of topological nodes in the target area are generated, the robot may connect them to obtain a plurality of initial topological connecting lines, as shown in fig. 12. The robot can sample the initial topological connecting lines to obtain sampling points on them, which saves computation when determining object distances. The sampling of the initial topological connecting lines may be point sampling, and the point sampling may be uniform sampling, grid sampling, and the like; the specifics are not limited here.
The object distance is the distance between a sampling point and a point on the obstacle edge line, and may be a straight-line distance. The second distance threshold represents a recognizable or safe distance, which may be determined according to practical application requirements, for example the critical value of the robot's field of view.
In some embodiments, the robot may perform dense point sampling on all obstacle edge lines in the initial map to obtain at least two edge sampling points on each obstacle edge line, determine the field of view of the applied robot, and set the field-of-view critical value of the applied robot as the second distance threshold, so as to ensure that when the subsequent topology data is used for path planning, no path is placed in the robot's field-of-view blind area. Suppose that, at a sampling point B on an initial topological connecting line, the robot connects point B to a plurality of edge sampling points on the obstacle edge line in sequence to determine the object distances between the two points, as shown in fig. 13. If the object distance between point B and any edge sampling point is smaller than the second distance threshold, the initial topological connecting line is considered too close to the obstacle, falling within the robot's blind area, and is shielded, where shielding may be direct deletion or hiding. The robot may traverse all the sampling points in this way and finally connect the initial topological connecting lines that remain in the display state in sequence to obtain the topology data corresponding to the target area, as shown in fig. 14.
In this embodiment, beyond ensuring that the topological nodes themselves are not close to obstacles, the connecting line segments between topological nodes are further guaranteed not to approach or pass through obstacles, which improves the reliability of the topological data in applications and the safety of the robot.
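As a rough illustration of steps S401 to S404, the sketch below connects every pair of topological nodes, samples points along each candidate connecting line, and keeps a line only if all of its sampled points stay at least the second distance threshold away from the sampled obstacle edge points. The pairwise connection strategy and the default values are assumptions for illustration only.

```python
import math

def filter_topology_edges(topo_nodes, obstacle_edge_points,
                          second_dist_thresh=1.5, samples_per_edge=10):
    def min_obstacle_dist(p):
        return min(math.hypot(p[0] - ox, p[1] - oy) for ox, oy in obstacle_edge_points)

    kept_edges = []
    # S401: connect every pair of topological nodes into an initial topological connecting line
    for i, a in enumerate(topo_nodes):
        for b in topo_nodes[i + 1:]:
            # S402: sample points along the candidate connecting line
            points = [(a[0] + (b[0] - a[0]) * t / samples_per_edge,
                       a[1] + (b[1] - a[1]) * t / samples_per_edge)
                      for t in range(samples_per_edge + 1)]
            # S403/S404: keep the line only if every sampled point stays far enough
            # away from the obstacle edge line
            if all(min_obstacle_dist(p) >= second_dist_thresh for p in points):
                kept_edges.append((a, b))
    return kept_edges
```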
In one embodiment, step S30 of generating a plurality of topology nodes in the target area may further include:
S306, acquiring a moving track corresponding to the initial map.
S307, determining a sampling range corresponding to the moving track according to the target area.
S308, sampling based on the sampling range and the moving track to obtain a plurality of topological nodes in the target area.
The sampling range is the range of the initial map within which topological nodes can be sampled, and it corresponds to the movement track of the robot. The sampling range may be determined according to actual application requirements: it may be the entire target area of the initial map, a part of the target area such as a preset region around the movement track, or the movement track itself.
The robot can obtain a moving track corresponding to the initial map, determine a sampling range corresponding to the moving track according to the target area, and perform sampling based on the sampling range and the moving track to obtain a plurality of topological nodes in the target area. The topology nodes may include topology nodes sampled within a sampling range.
In this embodiment, the moving track corresponding to the initial map is obtained, the sampling range corresponding to the moving track is determined according to the target area, and sampling is performed based on the sampling range and the moving track, so that the obtained plurality of topological nodes surround the moving track. This ensures that the nodes are positions the robot can actually reach and effectively improves the safety of the sampled topological nodes.
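A minimal sketch of steps S306 to S308 is given below, assuming the sampling range is a disc of fixed radius around each recorded trajectory pose; the band width and the number of nodes drawn per pose are illustrative parameters, not values specified in this application.

```python
import math
import random

def sample_nodes_around_trajectory(trajectory, band_width=2.0, nodes_per_point=3):
    """trajectory: list of (x, y) poses recorded while building the initial map."""
    nodes = []
    for x, y in trajectory:
        for _ in range(nodes_per_point):
            # draw a candidate uniformly inside a disc of radius band_width around the pose
            r = band_width * math.sqrt(random.random())
            theta = random.uniform(0.0, 2.0 * math.pi)
            nodes.append((x + r * math.cos(theta), y + r * math.sin(theta)))
    return nodes
```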
In one embodiment, step S30 of generating a plurality of topology nodes in the target area may further include:
S309, acquiring the environment object in the target environment according to the environment semantic information.
S310, determining the object type corresponding to the environment object.
S311, acquiring a preset node corresponding to the environment object in the target area according to the object type, and determining the preset node as a topological node in the target area.
An environment object is an object in the target environment, such as a building, an elevator, or a trash can. The object type is the type corresponding to the environment object and may include, for example, a passable passage, which may in turn include a passable building entrance, an elevator entrance, and the like.
The preset node is set in advance according to the actual application requirements and may be a position marking point of the environment object on the initial map, or an additional special position point of the environment object. For example, when the environment object is a building, the preset node may be the building's location marking point, or special position points such as passable passage entrances and elevator doors inside the building.
The robot may determine the object type of the environment object obtained from the environment semantic information, acquire the preset node corresponding to the environment object according to the object type, and determine the preset node as a topological node in the target area.
In this embodiment, besides the topological nodes obtained by sampling the initial map, preset nodes can be determined according to the environment objects in the target environment and used as topological nodes in the target area. Adding such special positions as topological nodes allows the robot to travel to them based on the topology, which effectively improves the accuracy and flexibility of executing tasks based on the map.
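The sketch below illustrates steps S309 to S311 with a hypothetical lookup table from object types to preset-node rules; the object types, the dictionary-based semantic format, and the field names are assumptions made purely for illustration.

```python
# illustrative mapping from object type to a rule that extracts its preset node
PRESET_NODE_TABLE = {
    "building_entrance": lambda obj: obj["entrance_position"],
    "elevator": lambda obj: obj["door_position"],
}

def preset_nodes_from_semantics(environment_objects):
    """environment_objects: list of dicts such as
    {"type": "elevator", "door_position": (x, y)} taken from the semantic information."""
    nodes = []
    for obj in environment_objects:
        rule = PRESET_NODE_TABLE.get(obj["type"])
        if rule is not None:
            # the preset node of this object type is used directly as a topological node
            nodes.append(rule(obj))
    return nodes
```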
In one embodiment, step S30 of generating a plurality of topology nodes in the target area may further include:
S312, generating a plurality of candidate nodes in the initial map.
S313, acquiring a moving track corresponding to the initial map and obstacle edge lines in the initial map.
S314, acquiring track connecting lines between the plurality of candidate nodes and the moving track, respectively.
S315, determining the candidate nodes whose track connecting lines have no intersection point with the obstacle edge lines as topological nodes in the target area.
In some embodiments, the robot may first perform candidate node generation on the initial map to obtain a plurality of candidate nodes, and then perform position screening on the candidate nodes to obtain the topological nodes in the target area. For example, candidate nodes located in areas of the target area that the robot cannot observe can be filtered out with logic similar to the extension-line logic described above, and the remaining candidate nodes serve as the topological nodes in the target area. The candidate nodes are topology nodes to be processed; they may be generated densely or sparsely, and each of these may further be uniform or random generation, which is not limited herein.
It should be added that when the number of generated candidate nodes is large and their display is dense, down-sampling may be used for filtering to obtain uniformly distributed sparse candidate nodes, and random disturbance may then be added so that the sparse candidate nodes are arranged irregularly.
The robot may acquire the track connecting lines between the plurality of candidate nodes and the moving track, determine the candidate nodes whose track connecting lines have no intersection point with the obstacle edge lines as topological nodes in the target area, and hide or delete the remaining candidate nodes. In this way, repeated judgment of candidate nodes is avoided and each candidate node is traversed exactly once; track connecting lines that pass through obstacles are filtered out, that is, candidate nodes in impassable areas are filtered out, which saves computation in the topological node determination process to a certain extent, effectively improves the accuracy of the topological nodes, and further improves the accuracy of the semantic topological map.
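A possible implementation of the screening in steps S312 to S315 is sketched below using a standard 2D segment-intersection test; connecting each candidate to its nearest trajectory pose and representing the obstacle edge lines as polyline segments are simplifying assumptions rather than requirements of this application.

```python
import math

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)) and (_ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def filter_candidates_by_trajectory(candidates, trajectory, obstacle_edges):
    """trajectory: list of (x, y) poses; obstacle_edges: list of ((x1, y1), (x2, y2))
    segments approximating the obstacle edge lines."""
    kept = []
    for node in candidates:
        # connect the candidate to its nearest trajectory pose (a simplifying assumption)
        nearest = min(trajectory, key=lambda p: math.hypot(p[0] - node[0], p[1] - node[1]))
        # S315: keep the candidate only if its track connecting line crosses no obstacle edge
        blocked = any(segments_intersect(node, nearest, e1, e2) for e1, e2 in obstacle_edges)
        if not blocked:
            kept.append(node)
    return kept
```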
In one embodiment, step S30 of generating a plurality of topology nodes in the target area may further include:
S316, randomly generating a plurality of topological nodes in the target area of the initial map.
In one embodiment, after step S50, the map generation method may further include:
S60, acquiring environmental information collected in the moving process, and updating the sparse map according to the environmental information to obtain an updated sparse map;
S70, marking the updated sparse map as the initial map, returning to the step of determining a target area in the initial map according to the environment semantic information, and generating a semantic topological map corresponding to the updated sparse map.
The environment information may be new environment semantic information identified by the robot during movement. After this new semantic information is merged into the sparse map, the updated sparse map may be marked as the initial map, and the step of determining the target area in the initial map according to the environment semantic information is executed again to generate a semantic topological map corresponding to the updated sparse map.
With respect to real-time map updating, after the semantic topological map corresponding to the target environment has been generated for the first time, whenever the robot moves to a new position in the target environment it can directly update the sparse map with the environment information collected during movement, and a new semantic topological map can then be generated from the updated semantic information. When this semantic topological map is used for path planning, the efficiency of path planning and navigation of the robot is ensured.
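The update loop of steps S60 and S70 could be organized as in the following sketch, where update_sparse_map and build_semantic_topological_map are placeholder callables standing in for the steps described earlier; they are not APIs defined by this application.

```python
def regenerate_on_move(sparse_map, new_environment_info,
                       update_sparse_map, build_semantic_topological_map):
    # S60: fold the newly observed semantic information into the sparse map
    updated_sparse_map = update_sparse_map(sparse_map, new_environment_info)

    # S70: treat the updated sparse map as the new initial map and rebuild the
    # semantic topological map from its semantic information
    initial_map = updated_sparse_map
    semantic_topological_map = build_semantic_topological_map(initial_map)
    return updated_sparse_map, semantic_topological_map
```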
Referring to fig. 15, the present application provides an embodiment of a map generating apparatus applied to a robot, including:
a map obtaining unit 801, configured to obtain an initial map to be processed, where the initial map includes environment semantic information corresponding to a target environment;
an area determining unit 802, configured to determine a target area in the initial map according to the environment semantic information;
a topology data generating unit 803, configured to generate a plurality of topology nodes in a target area, and obtain topology data corresponding to the target area according to a connection relationship between the plurality of topology nodes;
and a map generating unit 804, configured to generate a semantic topological map corresponding to the target environment based on the topological data and the initial map, where the semantic topological map includes environment semantic information.
In the embodiment of the application, first, an initial map to be processed is acquired through a map acquisition unit 801 to determine environment semantic information corresponding to a target environment, where the target environment is an environment area where a robot needs to execute a task; then, the area determining unit 802 determines a target area of the initial map from the environmental semantic information in the initial map; next, a plurality of topology nodes are generated in the target area by the topology data generation unit 803, and topology data corresponding to the target area is obtained according to the connection relationship among the plurality of topology nodes; finally, the map generating unit 804 generates a semantic topological map corresponding to the target environment according to the initial map acquired by the map acquiring unit 801 and the topological data acquired by the topological data generating unit 803, and the semantic topological map can be used for planning a path for the robot. The map obtained by the method can be used for supporting the robot to execute various tasks such as positioning, navigation and path planning, different maps do not need to be switched, and the reusability and the application efficiency of the map are effectively improved.
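As an illustration of how the four units might be wired together, the sketch below chains them into a single pipeline; the class and method names are assumptions and do not correspond to actual interfaces of the apparatus.

```python
class MapGenerationPipeline:
    def __init__(self, map_acquirer, area_determiner, topology_generator, map_builder):
        self.map_acquirer = map_acquirer              # unit 801
        self.area_determiner = area_determiner        # unit 802
        self.topology_generator = topology_generator  # unit 803
        self.map_builder = map_builder                # unit 804

    def run(self):
        initial_map = self.map_acquirer.acquire()                               # initial map + semantics
        target_area = self.area_determiner.determine(initial_map)               # target area
        topology = self.topology_generator.generate(initial_map, target_area)   # nodes + connecting lines
        return self.map_builder.build(initial_map, topology)                    # semantic topological map
```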
Referring to fig. 16, the present application provides another embodiment of a map generating apparatus applied to a robot, including:
a map obtaining unit 901, configured to obtain an initial map to be processed, where the initial map includes environment semantic information corresponding to a target environment;
an area determining unit 902, configured to determine a target area in the initial map according to the environment semantic information;
a topology data generating unit 903, configured to generate a plurality of topology nodes in a target area, and obtain topology data corresponding to the target area according to a connection relationship between the plurality of topology nodes;
a map generating unit 904, configured to generate a semantic topological map corresponding to the target environment based on the topological data and the initial map, where the semantic topological map includes environment semantic information;
the map updating unit 905 is configured to, when the initial map is a sparse map, acquire environmental information acquired in a moving process, update the sparse map according to the environmental information to obtain an updated sparse map, mark the updated sparse map as the initial map, return to the step of determining a target area in the initial map according to environmental semantic information, and generate a semantic topological map corresponding to the updated sparse map.
In this embodiment of the application, the area determining unit 902 is further configured to obtain an obstacle edge line from the initial map according to the environment semantic information, obtain a movement track in the initial map, and determine an area between the obstacle edge line and the movement track as a target area in the initial map.
In this embodiment of the application, the topological data generating unit 903 is further configured to obtain a moving track corresponding to the initial map, perform node sampling on the moving track to obtain a plurality of track nodes on the moving track, obtain an obstacle edge line from the initial map according to the environment semantic information, determine a plurality of to-be-sampled line segments between at least one track node and the obstacle edge line, and perform topological sampling on the plurality of to-be-sampled line segments to obtain a plurality of topological nodes in the target area.
In this embodiment of the application, the topology data generating unit 903 is further configured to project an extension line to the periphery by using at least one track node as an endpoint, obtain a plurality of target intersections where the obstacle edge line intersects with the extension line, and determine a line segment between the at least one track node and each of the plurality of target intersections as a line segment to be sampled between the track node and the obstacle edge line.
In this embodiment, the topology data generating unit 903 is further configured to horizontally project multiple extension lines to the periphery with the track node as an end point, project the extension lines and the barrier edge lines onto the same plane to obtain multiple projection intersection points, obtain a first intersection point height corresponding to the projection intersection points on the extension lines and a second intersection point height corresponding to the projection intersection points on the barrier edge lines, determine a height difference between the first intersection point height and the second intersection point height, and if the height difference is smaller than a preset height threshold, determine a point corresponding to the projection intersection point on the extension line as the target intersection point.
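The extension-line test performed by this unit can be pictured with the following sketch, which casts horizontal rays from a 3D track node, intersects them with obstacle edge segments in the projection plane, and accepts an intersection as a target intersection only when the height difference is below the preset threshold; the ray count, maximum range, and linear interpolation of the edge height are illustrative assumptions.

```python
import math

def target_intersections(track_node, obstacle_segments, height_thresh=0.3,
                         n_rays=16, max_range=20.0):
    """track_node: (x, y, z); obstacle_segments: list of ((x1, y1, z1), (x2, y2, z2))."""
    nx, ny, nz = track_node
    hits = []
    for k in range(n_rays):
        # horizontal extension line, treated as a 2D ray in the projection plane
        ang = 2.0 * math.pi * k / n_rays
        dx, dy = math.cos(ang), math.sin(ang)
        for (x1, y1, z1), (x2, y2, z2) in obstacle_segments:
            ex, ey = x2 - x1, y2 - y1
            denom = dx * ey - dy * ex
            if abs(denom) < 1e-9:
                continue  # parallel in the projection plane
            t = ((x1 - nx) * ey - (y1 - ny) * ex) / denom   # distance along the ray
            u = ((x1 - nx) * dy - (y1 - ny) * dx) / denom   # position along the segment
            if not (0.0 <= t <= max_range and 0.0 <= u <= 1.0):
                continue
            first_height = nz                               # extension line is horizontal
            second_height = z1 + u * (z2 - z1)              # obstacle edge height at the crossing
            if abs(first_height - second_height) < height_thresh:
                hits.append((nx + t * dx, ny + t * dy, nz)) # target intersection on the extension line
    return hits
```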
In this embodiment of the application, the topological data generating unit 903 is further configured to sample a plurality of line segments to be sampled to obtain a plurality of dense nodes in the target area, perform downsampling on the plurality of dense nodes to obtain a plurality of sparse nodes, perform perturbation processing on the plurality of sparse nodes to obtain a plurality of out-of-order nodes in the target area, and determine the out-of-order nodes whose distance from the obstacle edge line is greater than or equal to a first distance threshold as the topological nodes in the target area.
In this embodiment of the application, the topological data generating unit 903 is further configured to obtain a moving track corresponding to the initial map, determine a sampling range corresponding to the moving track according to the target area, and perform sampling based on the sampling range and the moving track to obtain a plurality of topological nodes in the target area.
In this embodiment of the application, the topology data generating unit 903 is further configured to obtain an environment object in the target environment according to the environment semantic information, determine an object type corresponding to the environment object, obtain a preset node corresponding to the environment object according to the object type, and determine the preset node as a topology node in the target area.
In this embodiment of the application, the topology data generating unit 903 is further configured to generate a plurality of candidate nodes in the initial map, acquire a moving track corresponding to the initial map and obstacle edge lines in the initial map, acquire track connecting lines between the plurality of candidate nodes and the moving track, respectively, and determine a candidate node corresponding to a track connecting line where there is no intersection with the obstacle edge line as a topology node in the target area.
In the embodiment of the present application, the topology data generation unit 903 is further configured to randomly generate a plurality of topology nodes in the target area of the initial map.
In this embodiment of the application, the topology data generating unit 903 is further configured to connect a plurality of topology nodes to obtain a plurality of initial topology connection lines, sample the plurality of initial topology connection lines to obtain sampling points on the plurality of initial topology connection lines, determine an object distance between the sampling point and an obstacle edge line, screen out an initial topology connection line whose object distance is greater than or equal to a second distance threshold, and obtain topology data corresponding to a target area, where the topology data includes the plurality of topology nodes and a target topology connection line obtained by screening.
Referring to fig. 17, fig. 17 provides an embodiment of a robot for the present application, the robot comprising:
a processor 1001, a memory 1002, an input/output unit 1003, and a bus 1004;
the processor 1001 is connected to the memory 1002, the input-output unit 1003, and the bus 1004;
the processor 1001 specifically performs the following operations:
acquiring an initial map to be processed, wherein the initial map comprises environment semantic information corresponding to a target environment;
determining a target area in the initial map according to the environment semantic information;
generating a plurality of topological nodes in a target area;
obtaining topology data corresponding to a target area according to the connection relation among the plurality of topology nodes;
and generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, wherein the semantic topological map comprises environment semantic information.
In this embodiment, the functions of the processor 1001 correspond to the steps in the embodiment of fig. 3, and are not described herein again.
The present application provides a computer-readable storage medium on which a program is stored; when the program is executed on a computer, it performs steps corresponding to the embodiment shown in fig. 3, which are not described herein again.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various media capable of storing program codes.

Claims (24)

1. A map generation method, characterized in that the method comprises:
acquiring an initial map to be processed and a moving track corresponding to the initial map, wherein the initial map comprises environment semantic information corresponding to a target environment, and the environment semantic information comprises barrier edge lines;
determining a target area in the initial map according to the environment semantic information;
generating a plurality of topological nodes in the target area, wherein track connecting lines between the plurality of topological nodes and the moving track, respectively, have no intersection point with the obstacle edge line;
obtaining topological data corresponding to the target area according to the connection relation among the plurality of topological nodes;
generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, wherein the semantic topological map comprises the environment semantic information.
2. The method of claim 1, wherein determining the target area in the initial map according to the environmental semantic information comprises:
acquiring barrier edge lines from the initial map according to the environment semantic information;
acquiring a moving track in the initial map;
and determining the area between the obstacle edge line and the movement track as a target area in the initial map.
3. The method of claim 1, wherein the generating the plurality of topological nodes in the target region comprises:
acquiring a moving track corresponding to the initial map;
carrying out node sampling on the moving track to obtain a plurality of track nodes on the moving track;
acquiring barrier edge lines from the initial map according to the environment semantic information;
determining a plurality of line segments to be sampled between at least one track node and the barrier edge line;
and carrying out topology sampling on the plurality of line segments to be sampled to obtain a plurality of topology nodes in the target area.
4. The method of claim 3, wherein said determining a plurality of line segments to be sampled between at least one of said trajectory nodes and said obstacle edge line comprises:
projecting extension lines to the periphery by taking at least one track node as an end point to obtain a plurality of target intersection points where the barrier edge lines and the extension lines are intersected;
and determining a connecting line between at least one track node and the plurality of target intersection points as a line segment to be sampled between the track node and the obstacle edge line.
5. The method of claim 4, wherein said projecting an extension line around at least one of said trajectory nodes as an end point to obtain a plurality of target intersections where said barrier edge line intersects said extension line, comprises:
projecting a plurality of extension lines to the periphery by taking the track nodes as end points, wherein the extension lines are parallel to the horizontal plane;
projecting the extension lines and the barrier edge lines onto the same plane to obtain a plurality of projection intersection points;
acquiring a first intersection point height corresponding to the projection intersection point on the extension line and a second intersection point height corresponding to the projection intersection point on the barrier edge line;
determining a height difference between the first intersection height and the second intersection height;
and if the height difference is smaller than a preset height threshold, determining that the point, corresponding to the projection intersection point, on the extension line is a target intersection point.
6. The method according to claim 3, wherein the topologically sampling the plurality of line segments to be sampled to obtain a plurality of topological nodes in the target area includes:
sampling the line segments to be sampled to obtain a plurality of dense nodes in the target area;
performing down-sampling processing on the plurality of dense nodes to obtain a plurality of sparse nodes;
performing disturbance processing on the sparse nodes to obtain a plurality of out-of-order nodes of the target area;
and determining out-of-order nodes with the distance to the obstacle edge line larger than or equal to a first distance threshold value as topological nodes in the target area.
7. The method of claim 1, wherein the generating the plurality of topological nodes in the target region comprises:
acquiring a moving track corresponding to the initial map;
determining a sampling range corresponding to the moving track according to the target area;
and sampling based on the sampling range and the moving track to obtain a plurality of topological nodes in the target area.
8. The method of claim 1, wherein the generating the plurality of topological nodes in the target region comprises:
acquiring an environment object in the target environment according to the environment semantic information;
determining an object type corresponding to the environment object;
and acquiring a preset node corresponding to the environment object in a target area according to the object type, and determining the preset node as a topological node in the target area.
9. The method of claim 1, wherein the generating the plurality of topological nodes in the target region comprises:
generating a plurality of candidate nodes in the initial map;
obtaining a moving track corresponding to the initial map and an obstacle edge line in the initial map;
acquiring track connecting lines between the candidate nodes and the moving track respectively;
and determining candidate nodes corresponding to the track connecting lines without the intersection points with the obstacle edge lines as topological nodes in the target area.
10. The method of claim 1, wherein the generating the plurality of topological nodes in the target region comprises:
randomly generating a plurality of topological nodes in the target area of the initial map.
11. The method according to claim 2, wherein the obtaining topology data corresponding to the target area according to the connection relationship among the plurality of topology nodes comprises:
connecting a plurality of the topology nodes to obtain a plurality of initial topology connecting lines;
sampling the initial topological connecting lines to obtain sampling points on the initial topological connecting lines;
determining an object distance between the sampling point and the obstacle edge line;
screening the initial topological connecting lines with the object distance larger than or equal to a second distance threshold value to obtain topological data corresponding to the target area, wherein the topological data comprise a plurality of topological nodes and the screened target topological connecting lines.
12. A map generation apparatus, characterized in that the apparatus comprises:
the map acquisition unit is used for acquiring an initial map to be processed and a moving track corresponding to the initial map, wherein the initial map comprises environment semantic information corresponding to a target environment, and the environment semantic information comprises barrier edge lines;
the area determining unit is used for determining a target area in the initial map according to the environment semantic information;
a topology data generating unit, configured to generate a plurality of topology nodes in the target area, and further configured to obtain topology data corresponding to the target area according to a connection relationship among the plurality of topology nodes, wherein track connecting lines between the plurality of topology nodes and the moving track, respectively, have no intersection point with the obstacle edge line;
and the map generation unit is used for generating a semantic topological map corresponding to the target environment based on the topological data and the initial map, and the semantic topological map comprises the environment semantic information.
13. The apparatus according to claim 12, wherein the area determining unit is specifically configured to obtain an obstacle edge line from the initial map according to the environment semantic information, obtain a movement track in the initial map, and determine an area between the obstacle edge line and the movement track as the target area in the initial map.
14. The apparatus according to claim 12, wherein the topology data generating unit is specifically configured to obtain a moving trajectory corresponding to the initial map, perform node sampling on the moving trajectory to obtain a plurality of trajectory nodes on the moving trajectory, obtain an obstacle edge line from the initial map according to the environment semantic information, determine a plurality of line segments to be sampled between at least one of the trajectory nodes and the obstacle edge line, and perform topology sampling on the plurality of line segments to be sampled to obtain a plurality of topology nodes in the target area.
15. The apparatus according to claim 14, wherein the topology data generating unit is specifically configured to project an extension line to the periphery with at least one of the trajectory nodes as an endpoint, obtain a plurality of target intersections where the obstacle edge line intersects with the extension line, and determine a line segment between at least one of the trajectory nodes and each of the plurality of target intersections as the line segment to be sampled between the trajectory node and the obstacle edge line.
16. The apparatus according to claim 15, wherein the topology data generating unit is further configured to horizontally project a plurality of extension lines to the periphery with the trajectory node as an end point, project the extension lines and the obstacle edge line onto the same plane, obtain a plurality of projection intersection points, obtain a first intersection point height corresponding to the projection intersection points on the extension lines and a second intersection point height corresponding to the projection intersection points on the obstacle edge line, determine a height difference between the first intersection point height and the second intersection point height, and determine, if the height difference is smaller than a preset height threshold, that a point corresponding to the projection intersection point on the extension line is a target intersection point.
17. The apparatus according to claim 14, wherein the topology data generation unit is specifically configured to sample the plurality of line segments to be sampled to obtain a plurality of dense nodes in the target area, perform downsampling processing on the plurality of dense nodes to obtain a plurality of sparse nodes, perform perturbation processing on the plurality of sparse nodes to obtain a plurality of out-of-order nodes in the target area, and determine an out-of-order node having a distance from the obstacle edge line that is greater than or equal to a first distance threshold as the topology node in the target area.
18. The apparatus according to claim 12, wherein the topology data generating unit is further specifically configured to acquire a movement trajectory corresponding to the initial map, determine a sampling range corresponding to the movement trajectory according to the target area, and perform sampling based on the sampling range and the movement trajectory to obtain a plurality of topology nodes in the target area.
19. The apparatus according to claim 12, wherein the topology data generating unit is further configured to obtain an environment object in the target environment according to the environment semantic information, determine an object type corresponding to the environment object, obtain a preset node corresponding to the environment object according to the object type, and determine the preset node as the topology node in the target area.
20. The apparatus according to claim 12, wherein the topology data generating unit is further configured to generate a plurality of candidate nodes in the initial map, acquire a moving track corresponding to the initial map and obstacle edge lines in the initial map, acquire track connecting lines between the plurality of candidate nodes and the moving track, respectively, and determine a candidate node corresponding to a track connecting line where no intersection exists with the obstacle edge line as the topology node in the target area.
21. The apparatus according to claim 12, wherein the topology data generation unit is further configured to randomly generate a plurality of topology nodes in the target area of the initial map.
22. The apparatus according to claim 13, wherein the topology data generating unit is further configured to connect a plurality of topology nodes to obtain a plurality of initial topology connection lines, sample the plurality of initial topology connection lines to obtain sampling points on the plurality of initial topology connection lines, determine an object distance between the sampling points and the obstacle edge line, screen out an initial topology connection line whose object distance is greater than or equal to a second distance threshold, obtain topology data corresponding to the target area, where the topology data includes the plurality of topology nodes, and screen out a target topology connection line.
23. A robot, characterized in that the robot comprises:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the method of any of claims 1 to 11.
24. A computer-readable storage medium having a program stored thereon, which when executed on a computer performs the method of any one of claims 1 to 11.
CN202211575911.8A 2022-12-09 2022-12-09 Map generation method, map generation device, robot, and storage medium Active CN115655261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211575911.8A CN115655261B (en) 2022-12-09 2022-12-09 Map generation method, map generation device, robot, and storage medium

Publications (2)

Publication Number Publication Date
CN115655261A CN115655261A (en) 2023-01-31
CN115655261B true CN115655261B (en) 2023-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant