CN116931557A - Method and device for controlling movement of robot, storage medium and electronic device

Method and device for controlling movement of robot, storage medium and electronic device

Info

Publication number
CN116931557A
CN116931557A (application number CN202210366407.0A)
Authority
CN
China
Prior art keywords
target
robot
point cloud
cloud data
clusters
Prior art date
Legal status
Pending
Application number
CN202210366407.0A
Other languages
Chinese (zh)
Inventor
张陆涵
曹蒙
崔凌
Current Assignee
Dreame Technology Suzhou Co ltd
Original Assignee
Dreame Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Dreame Technology Suzhou Co ltd filed Critical Dreame Technology Suzhou Co ltd
Priority to CN202210366407.0A priority Critical patent/CN116931557A/en
Priority to PCT/CN2023/080705 priority patent/WO2023193567A1/en
Publication of CN116931557A publication Critical patent/CN116931557A/en
Pending legal-status Critical Current

Classifications

    • G - Physics
    • G05 - Controlling; Regulating
    • G05D - Systems for controlling or regulating non-electric variables
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • Y - General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 - Technologies or applications for mitigation or adaptation against climate change
    • Y02P - Climate change mitigation technologies in the production or processing of goods
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a method and a device for controlling movement of a robot, a storage medium and an electronic device, wherein the method comprises the following steps: acquiring point cloud data of the space environment where a target robot is located to obtain target point cloud data, wherein the target point cloud data comprises a plurality of target points; performing a clustering operation on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in the plurality of target points; selecting a target notch from the notches between each pair of adjacent class clusters in the group of class clusters, wherein the target notch is a notch allowing the target robot to pass through; and controlling the target robot to move in the direction of the target notch. This technical scheme solves the problem in the related art that robot movement control is time-consuming because the notch is searched for using full-map information.

Description

Method and device for controlling movement of robot, storage medium and electronic device
[Technical Field]
The present application relates to the field of robots, and in particular, to a method and apparatus for controlling movement of a robot, a storage medium, and an electronic apparatus.
[Background Art]
At present, when an unknown environment is autonomously explored by a robot, environment data can be acquired first, a two-dimensional grid map is generated according to the environment data, gaps are searched through boundary information of the two-dimensional grid map, and a moving path is planned according to the searched gaps, so that environment exploration is performed.
However, in the above-described method of searching for a gap based on the boundary information of the two-dimensional grid map to control the movement of the robot, since the gap is searched for using the whole-image information, the time required for searching increases as the two-dimensional grid map increases.
As can be seen from the above, the movement control method of the robot in the related art has a problem in that it takes a long time to perform movement control due to finding a gap using the full-view information.
[Summary of the Application]
The application aims to provide a movement control method and device of a robot, a storage medium and an electronic device, which at least solve the problem that the movement control method of the robot in the related art has long time consumption caused by searching for a gap by using full-image information.
The application aims at realizing the following technical scheme:
according to an aspect of an embodiment of the present application, there is provided a movement control method of a robot, including: acquiring point cloud data of a space environment where a target robot is located, and obtaining target point cloud data, wherein the target point cloud data comprises a plurality of target points; clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in the plurality of target points; selecting a target notch from among the notches between each adjacent cluster in the group of clusters, wherein the target notch is a notch allowing the target robot to pass through; and controlling the target robot to move towards the direction of the target notch.
In an exemplary embodiment, the acquiring the point cloud data of the spatial environment in which the target robot is located to obtain the target point cloud data includes: in the process of controlling the target robot to rotate, acquiring point cloud data of the space environment where the target robot is located through a laser sensor on the target robot, to obtain the target point cloud data.
In an exemplary embodiment, the performing a clustering operation on the target point cloud data, to obtain a set of clusters includes: determining a distance between each of the plurality of target points and each of a set of reference rays, wherein each of the reference rays corresponds to a reference angle; determining target points matched with each reference ray according to the distance between each target point and each reference ray; and respectively determining the target points matched with each reference ray as a cluster, and obtaining the group of clusters.
In an exemplary embodiment, the determining a distance between each of the plurality of target points and each of a set of reference rays comprises: and respectively projecting each target point onto each reference ray to obtain the distance between each target point and each reference ray.
In an exemplary embodiment, the performing a clustering operation on the target point cloud data, to obtain a set of clusters includes: downsampling the target point cloud data onto a two-dimensional grid map to obtain two-dimensional points corresponding to each of the plurality of target points; and clustering the plurality of target points according to the two-dimensional points corresponding to each target point to obtain the group of clusters.
In an exemplary embodiment, the performing clustering operation on the plurality of target points according to the two-dimensional points corresponding to each target point to obtain the group of class clusters includes: determining grids of the two-dimensional points corresponding to each target point in the two-dimensional grid map to obtain a group of target grids, wherein each target grid in the group of target grids comprises two-dimensional points corresponding to at least one target point of the plurality of target points; and clustering the target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid to obtain the group of clusters.
In an exemplary embodiment, the selecting a target notch from among the notches between each adjacent cluster in the group of clusters includes: determining the distance between each adjacent cluster as the size of a gap between each adjacent cluster; and selecting a notch with the largest size from among the notches between each adjacent class of clusters to obtain the target notch.
According to another aspect of the embodiment of the present application, there is also provided a movement control apparatus of a robot, including: the acquisition unit is used for acquiring point cloud data of a space environment where the target robot is located to obtain target point cloud data, wherein the target point cloud data comprises a plurality of target points; a clustering unit, configured to perform a clustering operation on the target point cloud data to obtain a set of class clusters, where each class cluster in the set of class clusters includes at least one target point in the plurality of target points; a selecting unit, configured to select a target notch from a notch between each adjacent cluster in the group of clusters, where the target notch is a notch allowing the target robot to pass through; and the control unit is used for controlling the target robot to move towards the direction of the target notch.
In an exemplary embodiment, the acquisition unit includes: an acquisition module, configured to acquire point cloud data of the space environment where the target robot is located through a laser sensor on the target robot in the process of controlling the target robot to rotate, so as to obtain the target point cloud data.
In an exemplary embodiment, the clustering unit includes: a first determining module for determining a distance between each of the plurality of target points and each of a set of reference rays, wherein each of the reference rays corresponds to a reference angle; a second determining module, configured to determine, according to a distance between each target point and each reference ray, a target point that matches each reference ray; and the third determining module is used for determining target points matched with each reference ray as a cluster respectively to obtain the group of clusters.
In one exemplary embodiment, the first determining module includes: and the projection submodule is used for respectively projecting each target point onto each reference ray to obtain the distance between each target point and each reference ray.
In an exemplary embodiment, the clustering unit includes: the downsampling module is used for downsampling the target point cloud data onto a two-dimensional grid map to obtain two-dimensional points corresponding to each target point in the plurality of target points; and the clustering module is used for executing clustering operation on the plurality of target points according to the two-dimensional points corresponding to each target point to obtain the group of clusters.
In one exemplary embodiment, the clustering module includes: a determining submodule, configured to determine a grid to which a two-dimensional point corresponding to each target point in the two-dimensional grid map belongs, to obtain a set of target grids, where each target grid in the set of target grids includes a two-dimensional point corresponding to at least one target point of the plurality of target points; and the execution submodule is used for executing clustering operation on the plurality of target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid to obtain the group of clusters.
In an exemplary embodiment, the selecting unit includes: a fourth determining module, configured to determine a distance between each adjacent cluster as a size of a gap between each adjacent cluster; and the selecting module is used for selecting the notch with the largest size from the notches among the adjacent clusters to obtain the target notch.
According to still another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-described movement control method of a robot when run.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the movement control method of the robot through the computer program.
In the embodiment of the application, the point cloud data of the space environment where the robot is located is clustered, and a notch allowing the robot to pass through is selected from the notches between adjacent class clusters. Specifically, point cloud data of the space environment where the target robot is located is acquired to obtain target point cloud data, wherein the target point cloud data comprises a plurality of target points; a clustering operation is performed on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in the plurality of target points; a target notch is selected from the notches between each pair of adjacent class clusters in the group of class clusters, wherein the target notch is a notch allowing the target robot to pass through; and the target robot is controlled to move in the direction of the target notch. Because the clustering is carried out based on the point cloud data of the space environment where the robot is located, and the notch allowing the robot to pass through is selected from the notches between adjacent class clusters instead of being searched for using the full-image information, the search range of the robot is reduced, the time consumption of movement control is reduced, and the space exploration efficiency is improved. This solves the problem in the related art that the movement control of the robot is time-consuming because the notch is searched for using the full-image information.
[Description of the Drawings]
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an alternative method of controlling movement of a robot according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of controlling movement of a robot according to an embodiment of the present application;
FIG. 3 is a flow chart of another alternative method of controlling movement of a robot according to an embodiment of the present application;
FIG. 4 is a block diagram of an alternative movement control device of a robot according to an embodiment of the present application;
fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the application.
[Detailed Description of the Application]
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
According to an aspect of an embodiment of the present application, there is provided a movement control method of a robot. Alternatively, in the present embodiment, the above-described movement control method of the robot may be applied to a hardware environment constituted by the robot 102 and the server 104 as shown in fig. 1. As shown in fig. 1, the robot 102 may be connected to a server 104 (e.g., an internet of things platform or cloud server) through a network to control the robot 102.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity), Bluetooth, infrared. The robot 102 may include, but is not limited to, a cleaning robot, such as a sweeping robot, a washing robot, an automatic mop washing robot, a self-cleaning robot, and the like. The server 104 may be a server of an internet of things platform.
The movement control method of the robot according to the embodiment of the present application may be executed by the robot 102 and the server 104 separately, or may be executed by both the robot 102 and the server 104 together. The method for controlling the movement of the robot 102 according to the embodiment of the present application may be performed by a client installed thereon.
Taking the movement control method of the robot in this embodiment performed by the robot 102 as an example, fig. 2 is a schematic flow chart of an alternative movement control method of the robot according to an embodiment of the present application, and as shown in fig. 2, the flow of the method may include the following steps:
step S202, obtaining point cloud data of a space environment where a target robot is located, and obtaining target point cloud data, wherein the target point cloud data comprises a plurality of target points.
The movement control method of the robot in this embodiment may be applied to a scenario in which rapid exploration of an unknown environment is implemented by performing movement control on the robot. The robot may be a cleaning robot (i.e., a robot with a cleaning function, such as a sweeping robot or a floor washing robot), in which case the corresponding unknown environment may be an area to be cleaned; it may be a flying robot, in which case the unknown environment may be an area to be detected; or it may be another type of robot, which is not limited here. The rapid exploration may be a mapping operation that the robot performs on the space environment where it is located, so as to avoid obstacles and the like.
Optionally, the robot is provided with a laser radar, that is, a radar system that detects the position, speed and other characteristic quantities of a target by emitting a laser beam. Its working principle is as follows: a detection signal (laser beam) is transmitted towards the target, and the target reflects a return signal (target echo); the received return signal is compared with the transmitted signal and, after appropriate processing, information about the target, such as distance, azimuth, altitude, speed, attitude and even shape, can be obtained, so that the target can be detected, tracked and identified. The laser radar may include a laser transmitter, an optical receiver, a turntable, an information processing system, and the like. The laser transmitter converts an electrical pulse into an optical pulse and transmits it; the optical receiver restores the optical pulse reflected from the target into an electrical pulse; by processing the restored electrical pulses, point cloud data of the target can be obtained, and the obtained point cloud data can be sent to a display (e.g., displayed in point cloud form).
In this embodiment, the target robot may emit a laser beam to the spatial environment where the target robot is located, and the optical receiver on the target robot may restore the optical pulse reflected from the target to an electrical pulse, and analyze the acquired electrical pulse to acquire point cloud data of the spatial environment where the target robot is located, that is, target point cloud data, where the target point cloud data may include a plurality of target points.
Step S204, clustering operation is performed on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in a plurality of target points.
In this embodiment, clustering may be performed on the target point cloud data to obtain a set of class clusters, where each class cluster in the set of class clusters may include at least one target point of the plurality of target points. Alternatively, the obtained set of clusters may comprise at least two clusters, each cluster corresponding to at least one obstacle in the spatial environment in which the target robot is located.
Clustering the target point cloud data may be: clustering the plurality of target points based on the distances between different target points in the target point cloud data. One or more clustering modes may be adopted, and the clustering modes may include, but are not limited to: clustering modes in which the number of clusters is specified (for example, K-means clustering), and clustering modes in which the number of clusters is not specified (for example, hierarchical clustering).
For example, K-means clustering selects the number of clusters for the scattered point set of the target point cloud data, randomly initializes the center points, and then iteratively assigns each point to its nearest center and updates the centers, so that points within the same class cluster become close together while different class clusters are separated, thereby forming a group of class clusters.
For another example, hierarchical clustering may first treat each target point as a separate class cluster, and if the target point cloud data includes X target points, X class clusters may be obtained; then, in the process of each iteration, two class clusters meeting the merging condition are merged into one according to the distance between the two class clusters. And when the iteration ending condition is met, the obtained plurality of class clusters are a group of class clusters.
Alternatively, the clustering operation may be implemented based on grids: a two-dimensional space is divided into grids, the target point cloud data is mapped into the grids, the point density in each grid is calculated, and the grid cells are classified according to a preset threshold value and grouped with their adjacent grid cells to form class clusters.
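For illustration only, the sketch below shows one way the first, K-means-style option could be realised on 2D target points; Python, the function names and the specific parameter values are assumptions, not part of the patent:

```python
import numpy as np

def kmeans_clusters(points, k=3, iters=20, seed=0):
    """Minimal K-means sketch: group 2D target points into k class clusters."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # Randomly initialize k centers from the data (number of clusters chosen up front).
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign every target point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels  # labels[i] is the class cluster of points[i]
```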
Step S206, selecting a target notch from among the notches between each adjacent cluster in the group of clusters, wherein the target notch is a notch allowing the target robot to pass through.
In this embodiment, a notch exists between adjacent class clusters in the group of class clusters. A notch may correspond to the space between obstacles, that is, an opening in the space environment, such as the space between two obstacles or the open door of a room. To facilitate environment exploration, a notch allowing the robot to pass through may be selected from the notches between each pair of adjacent class clusters in the group of class clusters, to obtain a target notch, where the selected target notch is the notch through which the target robot is to pass. The target notch may be the largest notch among all the notches through which the target robot can pass, any notch among all the notches through which the target robot can pass, or any notch through which the target robot has not yet passed, which is not limited in this embodiment.
For example, the robot may be located in a first room, after acquiring point cloud data of an environmental space where the robot is located through a laser radar carried by the robot, clustering the acquired point cloud data to obtain a group of clusters, and selecting a largest notch among notches between any two adjacent clusters in the group of clusters, where the selected notch may be an open door connecting the first room and the second room.
Step S208, the target robot is controlled to move towards the direction of the target notch.
In this embodiment, after the target gap is determined, the target robot may move in the direction of the target gap. Further, after the target robot moves to the target gap, point cloud data of the current space environment of the target robot can be re-acquired, clustering operation and the like are performed on the acquired point cloud data, and the gap is re-selected to control the target robot to move, so that gradual exploration of the global space is realized.
For example, after the robot is located in the first room and the target gap direction is determined to be the open door of the first room and the second room, the robot may calculate a movement track moving to the target gap and move to the target gap direction according to the calculated movement track. After the robot moves to the door, whether to start exploration of the second room can be determined according to actual requirements, if so, point cloud data of the second room can be obtained through a laser radar, and the operation of selecting the notch is executed again, so that exploration of the second room is conducted.
Through the steps S202 to S208, point cloud data of the spatial environment where the target robot is located is obtained, and target point cloud data is obtained, where the target point cloud data includes a plurality of target points; clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in a plurality of target points; selecting a target notch from among the notches between each adjacent cluster in a group of clusters, wherein the target notch is a notch allowing a target robot to pass through; the method for controlling the target robot to move towards the direction of the target notch solves the problem that the moving control method of the robot in the related art has long time consumption due to the fact that the full-image information is used for searching the notch, reduces the time consumption of moving control, and improves the space exploration efficiency.
In an exemplary embodiment, acquiring point cloud data of a spatial environment in which a target robot is located, to obtain the target point cloud data includes:
S11, in the process of controlling the target robot to rotate, acquiring point cloud data of the space environment where the target robot is located through a laser sensor on the target robot, to obtain the target point cloud data.
In this embodiment, the point cloud data of the spatial environment where the target robot is located may be obtained by a laser sensor set on the target robot, where the laser sensor may be a laser radar. The method for acquiring the point cloud data can be as follows: the target robot obtains point cloud data of the spatial environment in situ. In this case, the movement speed of the target robot may be zero, or the movement may be performed at a low speed, and the movement may be a movement in the front-rear direction or the left-right direction, or may be an in-situ rotation.
The protection device of the laser sensor in the target robot may be a pillar or a transparent baffle. When the laser sensor is protected by a pillar, the target robot needs to be controlled to rotate; when the laser sensor is protected by a transparent baffle, the target robot does not need to be controlled to rotate. In either case, all point cloud data of the space environment where the target robot is located can be obtained.
Alternatively, for a scenario in which the protection device of the laser sensor is a pillar, during the control of the target robot to rotate (e.g., rotate in place at least one revolution, i.e., at least 360 degrees), the point cloud data of the spatial environment in which the laser sensor is located may be acquired by the laser sensor on the target robot in a similar manner as in the foregoing embodiment, and thus the target point cloud data displayed by the laser sensor on the display may be obtained.
According to this embodiment, the point cloud data of the space environment where the robot is located is acquired by the laser sensor while the robot rotates, which ensures the completeness of the acquired point cloud data and thus improves the accuracy of exploring the space environment where the robot is located.
In an exemplary embodiment, a clustering operation is performed on target point cloud data to obtain a set of class clusters, including:
s21, determining the distance between each target point in the plurality of target points and each reference ray in a group of reference rays, wherein each reference ray corresponds to a reference angle;
s22, determining a target point matched with each reference ray according to the distance between each target point and each reference ray;
s23, respectively determining target points matched with each reference ray as a class cluster, and obtaining a group of class clusters.
In this embodiment, a set of reference rays at certain angles may be preset, and different reference rays may correspond to different angles, where a reference ray may be a ray in a reference coordinate system that takes a preset point on the target robot as the coordinate origin and a preset direction as the coordinate axis direction. When the clustering operation is performed on the target point cloud data, the point cloud of the environment where the robot is located can be mapped onto the reference ray of each angle, and the angle information can be used to assist the clustering.
Alternatively, the distance between each of the plurality of target points and each of a set of reference rays may be determined first, where each reference ray corresponds to a reference angle, and where a set of reference rays corresponds to all preset angles in a reference coordinate system; and determining a reference ray matched with each target point (the reference ray matched with each target point can be the reference ray closest to each target point) according to the distance between each target point and each reference ray, further determining the target point matched with each reference ray, and respectively determining the target point matched with each reference ray as a class cluster, thereby obtaining a group of class clusters.
For example, suppose there are 10 rays (i.e., 10 reference rays) in the reference coordinate system, each corresponding to an angle. The point cloud data acquired by the laser sensor of the robot is mapped onto the ray of each angle, and the points matched to each angle's ray are determined, so that a plurality of class clusters are obtained. If a ray has no matching point, that ray may simply be excluded from the current clustering.
According to this embodiment, the surrounding point cloud is mapped onto the rays of each angle and the angle information assists the clustering, so that the accuracy of clustering the point cloud data can be improved.
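A possible realisation of this reference-ray grouping is sketched below; the number of rays, the robot-centred coordinate frame and the function names are illustrative assumptions rather than details fixed by the patent:

```python
import numpy as np

def cluster_by_reference_rays(points, num_rays=10):
    """Assign each 2D target point to its nearest reference ray.

    points: (N, 2) array in the robot-centred reference frame.
    Returns {ray_index: (M, 2) array of matched points}; rays with no
    matching point are simply absent, mirroring their removal above.
    """
    points = np.asarray(points, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit directions
    # Perpendicular distance from each point to each ray through the origin.
    proj = points @ dirs.T                                      # (N, num_rays) projections
    foot = proj[:, :, None] * dirs[None, :, :]                  # projected point on each ray
    dist = np.linalg.norm(points[:, None, :] - foot, axis=2)
    # A point only matches rays it projects onto in the forward direction.
    dist[proj < 0] = np.inf
    nearest = dist.argmin(axis=1)
    return {r: points[nearest == r] for r in range(num_rays) if np.any(nearest == r)}
```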
In one exemplary embodiment, determining a distance between each of a plurality of target points and each of a set of reference rays comprises:
s31, each target point is projected onto each reference ray, and the distance between each target point and each reference ray is obtained.
In this embodiment, in determining the distance between each target point and each reference ray, each target point may be first projected onto each reference ray, respectively, and the distance between each target point and the corresponding projected point (i.e., the length of the line between the two) may be determined as the distance between each target point and each ray.
According to this embodiment, the surrounding point cloud is mapped onto the rays of each angle, and the distance from each point in the point cloud data to each ray is determined based on projection, so that the accuracy of clustering the point cloud data can be improved.
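In symbols (a standard point-to-ray projection, stated here as an assumption about how the projection may be realised rather than a formula taken from the patent): for a target point $p$ and a reference ray from the origin with unit direction $u$,

$$\hat p = (p \cdot u)\,u, \qquad d(p, u) = \lVert p - \hat p \rVert,$$

where $\hat p$ is the projection of $p$ onto the ray and $d(p, u)$ is the distance between the target point and that reference ray.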
In an exemplary embodiment, a clustering operation is performed on target point cloud data to obtain a set of class clusters, including:
s41, downsampling target point cloud data onto a two-dimensional grid map to obtain two-dimensional points corresponding to each of a plurality of target points;
s42, clustering operation is carried out on the plurality of target points according to the two-dimensional points corresponding to each target point, and a group of clusters is obtained.
In general, a laser point cloud map stores the original scanning point cloud of the laser sensor over the environmental space; it has the advantage of retaining complete information, but the disadvantages of a large computation amount and the inability to be used directly for navigation, obstacle avoidance and the like. The core idea of laser point cloud rasterization is to process the area scanned by the laser radar with grids and to downsample the point cloud data into a two-dimensional grid map, so that each two-dimensional grid represents a small area of the space. Point cloud rasterization can be two-dimensional or three-dimensional; two-dimensional rasterization first projects the three-dimensional point cloud onto a plane.
In this embodiment, when performing clustering operation on the point cloud data, the point cloud of the environment where the robot is located may be downsampled into the two-dimensional grid map, and clustered based on the information of the two-dimensional grid. For the target point cloud data, the target point cloud data can be downsampled onto a two-dimensional grid map, for example, each target point in a plurality of target points is downsampled into a certain grid in the two-dimensional grid map, so that two-dimensional points corresponding to each target point are obtained, and the number of the point clouds required to be processed can be reduced.
And when the two-dimensional points corresponding to each target point are obtained, clustering operation can be performed on the two-dimensional points corresponding to each target point to obtain a clustering result of the two-dimensional points corresponding to a plurality of target points, wherein the clustering result can be a group of reference clusters, each reference cluster comprises at least part of the two-dimensional points corresponding to the target points in the plurality of target points, and the target points corresponding to the two-dimensional points contained in each reference cluster are determined as a cluster, so that the group of clusters is obtained.
Through the embodiment, the point cloud data is downsampled onto the two-dimensional grid map, so that the data volume required to be processed by clustering operation can be reduced, the efficiency of clustering the point cloud data is improved, and the efficiency of carrying out movement control on a robot in the follow-up process is further improved.
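A minimal sketch of the downsampling step, assuming a square grid with a chosen resolution; the resolution value and the function name are illustrative assumptions:

```python
import numpy as np

def downsample_to_grid(points_3d, resolution=0.05):
    """Project 3D target points onto a 2D grid map (one cell = resolution metres).

    Returns a dict mapping each occupied grid cell (ix, iy) to the indices of
    the target points whose 2D projection falls inside that cell.
    """
    xy = np.asarray(points_3d, dtype=float)[:, :2]     # drop the height component
    cells = np.floor(xy / resolution).astype(int)      # grid cell of each 2D point
    grid = {}
    for idx, cell in enumerate(map(tuple, cells)):
        grid.setdefault(cell, []).append(idx)
    return grid
```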
In an exemplary embodiment, clustering is performed on a plurality of target points according to two-dimensional points corresponding to each target point, so as to obtain a group of clusters, including:
s51, determining grids of two-dimensional points corresponding to each target point in the two-dimensional grid map to obtain a group of target grids, wherein each target grid in the group of target grids comprises two-dimensional points corresponding to at least one target point of a plurality of target points;
S52, clustering operation is carried out on the plurality of target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid, and a group of clusters is obtained.
The two-dimensional grid map may be divided into a plurality of grids, each of which may have an adjacent grid, and each of the plurality of grids may or may not include a two-dimensional point corresponding to a part of the target points. In this embodiment, when performing the clustering operation on the plurality of target points according to the two-dimensional point corresponding to each target point, the grid to which the two-dimensional point corresponding to each target point belongs in the two-dimensional grid map may be first determined, thereby obtaining a set of target grids, where each target grid in the set of target grids includes the two-dimensional point corresponding to at least one target point of the plurality of target points.
After obtaining a set of target grids, a clustering operation can be performed on the plurality of target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid, so as to obtain a group of class clusters. The neighborhood information may be: after the grid cell with the maximum local density is determined according to the two-dimensional points contained in each target grid, the number of two-dimensional points contained in the neighboring grids within a given neighborhood radius and their distances from that grid cell.
For example, a neighborhood grid clustering algorithm may be used to cluster the target points. The raw data (i.e., the target point cloud data) is mapped into the grid subspace (i.e., the two-dimensional grid map) to obtain a set of target grids. Taking the target grid with the maximum local density as the starting point, the target grids within its neighborhood radius are searched and marked, and the search keeps expanding outwards from the newly added target grids until no new target grid is added, so that the target grids belonging to the same class cluster, and hence the target points belonging to the same class cluster, are determined. The remaining target grid with the maximum local density is then selected and the above process is repeated, until a group of class clusters is finally determined.
Through this embodiment, the point cloud is downsampled into grids and clustered using grid neighborhood information, which improves the efficiency of point cloud clustering and thus the efficiency of the subsequent movement control of the robot.
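The neighbourhood expansion can be sketched as a region-growing pass over the occupied cells produced by the downsampling sketch above; the 8-neighbourhood and the densest-cell starting rule are assumptions consistent with the description, not requirements of the patent:

```python
from collections import deque

def grid_neighborhood_clusters(grid):
    """Group occupied grid cells into class clusters by 8-neighbourhood expansion.

    grid: {(ix, iy): [point indices]} as returned by downsample_to_grid.
    Returns a list of clusters, each a list of target-point indices.
    """
    unvisited = set(grid)
    clusters = []
    while unvisited:
        # Start from the remaining cell with the highest local density.
        start = max(unvisited, key=lambda c: len(grid[c]))
        queue, members = deque([start]), []
        unvisited.discard(start)
        while queue:
            ix, iy = queue.popleft()
            members.extend(grid[(ix, iy)])
            # Expand into the 8 neighbouring cells that also contain points.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (ix + dx, iy + dy)
                    if nb in unvisited:
                        unvisited.discard(nb)
                        queue.append(nb)
        clusters.append(members)
    return clusters
```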
In one exemplary embodiment, selecting a target notch from among the notches between each adjacent cluster in the set of clusters includes:
s61, determining the distance between each adjacent cluster as the size of a gap between each adjacent cluster;
S62, selecting a notch with the largest size from among the notches between each two adjacent clusters to obtain a target notch.
In this embodiment, after a set of clusters is obtained, the distance between each adjacent cluster (i.e., the inter-cluster distance) may be determined as the size of the gap between each adjacent cluster. After determining the size of the gap between each adjacent cluster, the largest gap (i.e., the gap between the adjacent clusters with the largest inter-class distance) may be selected from the gaps between each adjacent cluster, thereby obtaining the target gap.
Alternatively, the inter-class distance may be calculated in one or more ways. For example, the inter-class distance between adjacent class clusters may be calculated as a Euclidean distance, where the Euclidean distance may be the distance between the last point of one of the adjacent class clusters and the first point of the other (these endpoints may be determined from the angle information).
For example, the cluster 1 and the cluster 2 are two adjacent clusters, the distance between the last point of the cluster 1 and the first point of the cluster 2 can be regarded as the distance between the cluster 1 and the cluster 2, if there are a plurality of clusters, the distances between the adjacent clusters are sequentially calculated, so as to obtain a group of inter-cluster distances (i.e. the size of the notch), and by sorting the group of inter-cluster distances, the largest inter-cluster distance, i.e. the notch with the largest size, can be determined, so as to obtain the target notch.
According to the embodiment, the size of the gap between the adjacent clusters is determined based on the distance between the adjacent clusters, and the gap with the largest size is selected as the gap to be passed, so that the feasibility of robot movement control can be improved.
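A hedged sketch of this notch selection: the clusters are assumed to be ordered by scan angle, each notch is sized as the Euclidean distance between the last point of one cluster and the first point of the next, and the widest notch that exceeds the robot width is returned. The robot-width check and the default width value are added assumptions for illustration:

```python
import numpy as np

def select_target_gap(clusters, robot_width=0.35):
    """Pick the target notch among the notches between adjacent class clusters.

    clusters: list of (M_i, 2) arrays, each already ordered by scan angle.
    Returns (index, gap_size) of the widest notch the robot can pass through,
    or None if no notch is wide enough.
    """
    best = None
    for i in range(len(clusters)):
        a = clusters[i]
        b = clusters[(i + 1) % len(clusters)]        # wrap around the scan
        # Notch size: last point of one cluster to first point of the next.
        gap = float(np.linalg.norm(a[-1] - b[0]))
        if gap >= robot_width and (best is None or gap > best[1]):
            best = (i, gap)
    return best
```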
The movement control method of the robot in the present embodiment is explained below in conjunction with an alternative example. In this alternative example, the robot is an LDS robot (Laser Direct Structuring, laser direct structuring technique).
In this optional example, a scheme is provided that enables a robot to quickly explore an indoor environment through a local point cloud. As shown in fig. 3, the flow of the movement control method of the robot in this optional example may include the following steps:
step S302, acquiring a surrounding environment point cloud.
The LDS robot obtains the environmental point cloud (i.e., point cloud data) of the surrounding environment in which it is located in situ through the laser radar (an LDS without a pillar does not need to rotate, while an LDS with a pillar rotates in place).
Step S304, clustering the point clouds.
After the environmental point cloud is obtained, the LDS robot can cluster it. The clustering may either project the environmental point cloud onto rays of each angle and use the angle information to assist the clustering, or first downsample the environmental point cloud into grids and then cluster through grid neighborhood information.
Step S306, calculating the inter-class distance between each adjacent class cluster after clustering.
The LDS robot may calculate the inter-class distance between each adjacent cluster after clustering, identify the gap by the inter-class distance, i.e., determine the gap between two adjacent clusters with the largest inter-class distance as the optimal gap (i.e., the target gap).
And step S308, controlling the robot to search towards the optimal notch direction.
The robot is controlled to move in the direction of the gap with the largest size (i.e., the optimal gap direction).
Through this example, the acquired surrounding point cloud is clustered to identify the notch direction in which the robot should move, which can improve the efficiency with which the robot quickly explores an unknown indoor environment.
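Tying the optional example together, one exploration step could be driven as sketched below, reusing the earlier sketches; get_scan_points and move_towards are purely hypothetical stand-ins for the robot's own acquisition and motion interfaces, and the bearing-based ordering is an added assumption:

```python
import numpy as np

def explore_step(get_scan_points, move_towards, resolution=0.05):
    """One exploration step: scan, cluster, pick the widest passable notch, move.

    get_scan_points() -> (N, 3) array of surrounding points (hypothetical interface).
    move_towards(xy)  -> commands motion towards a 2D target (hypothetical interface).
    """
    def bearing(p):
        return np.arctan2(p[..., 1], p[..., 0])

    points = np.asarray(get_scan_points(), dtype=float)    # step S302: surrounding point cloud
    grid = downsample_to_grid(points, resolution)           # step S304: downsample and cluster
    # Order points inside each cluster, and the clusters themselves, by bearing
    # so that "adjacent" clusters really are neighbours in the scan.
    clusters = [points[idx, :2][np.argsort(bearing(points[idx, :2]))]
                for idx in grid_neighborhood_clusters(grid)]
    clusters.sort(key=lambda c: bearing(c[0]))
    gap = select_target_gap(clusters)                       # step S306: inter-class distances
    if gap is None:
        return False                                        # no passable notch found
    i, _ = gap
    # Step S308: head towards the midpoint of the selected notch.
    target = 0.5 * (clusters[i][-1] + clusters[(i + 1) % len(clusters)][0])
    move_towards(target)
    return True
```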
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to each embodiment of the present application.
According to still another aspect of the embodiment of the present application, there is also provided a movement control apparatus of a robot for implementing the movement control method of the robot. Fig. 4 is a block diagram of a movement control device of an alternative robot according to an embodiment of the present application, and as shown in fig. 4, the device may include:
An obtaining unit 402, configured to obtain point cloud data of a spatial environment where a target robot is located, to obtain target point cloud data, where the target point cloud data includes a plurality of target points;
the clustering unit 404 is connected to the obtaining unit 402, and is configured to perform a clustering operation on the target point cloud data to obtain a set of class clusters, where each class cluster in the set of class clusters includes at least one target point of the plurality of target points;
a selecting unit 406, connected to the clustering unit 404, for selecting a target notch from the notch between each adjacent cluster in the group of clusters, where the target notch is a notch allowing the target robot to pass through;
and the control unit 408 is connected with the selection unit 406 and is used for controlling the target robot to move towards the direction of the target notch.
It should be noted that, the acquiring unit 402 in this embodiment may be used to perform the step S202, the clustering unit 404 in this embodiment may be used to perform the step S204, the selecting unit 406 in this embodiment may be used to perform the step S206, and the control unit 408 in this embodiment may be used to perform the step S208.
Acquiring point cloud data of a space environment where a target robot is located through the module to obtain target point cloud data, wherein the target point cloud data comprises a plurality of target points; clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in a plurality of target points; selecting a target notch from among the notches between each adjacent cluster in a group of clusters, wherein the target notch is a notch allowing a target robot to pass through; the method for controlling the target robot to move towards the direction of the target notch solves the problem that the moving control method of the robot in the related art has long time consumption due to the fact that the full-image information is used for searching the notch, reduces the time consumption of moving control, and improves the space exploration efficiency.
In one exemplary embodiment, the acquisition unit includes:
and the acquisition module is used for acquiring point cloud data of the space environment where the target robot is located through a laser sensor on the target robot in the process of controlling the target robot to rotate, so as to obtain the target point cloud data.
In one exemplary embodiment, the clustering unit includes:
a first determining module for determining a distance between each of a plurality of target points and each of a set of reference rays, wherein each reference ray corresponds to a reference angle, respectively;
a second determining module, configured to determine a target point matched with each reference ray according to a distance between each target point and each reference ray;
and the third determining module is used for determining target points matched with each reference ray as a class cluster respectively to obtain a group of class clusters.
In one exemplary embodiment, the first determination module includes:
and the projection submodule is used for respectively projecting each target point onto each reference ray to obtain the distance between each target point and each reference ray.
In one exemplary embodiment, the clustering unit includes:
the downsampling module is used for downsampling the target point cloud data onto a two-dimensional grid map to obtain two-dimensional points corresponding to each of a plurality of target points;
And the clustering module is used for performing clustering operation on the plurality of target points according to the two-dimensional points corresponding to each target point to obtain a group of clusters.
In one exemplary embodiment, the clustering module includes:
the determining submodule is used for determining grids of two-dimensional points corresponding to each target point in the two-dimensional grid map to obtain a group of target grids, wherein each target grid in the group of target grids comprises two-dimensional points corresponding to at least one target point of the plurality of target points;
and the execution submodule is used for executing clustering operation on a plurality of target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid to obtain a group of clusters.
In an exemplary embodiment, the selection unit comprises:
a fourth determining module, configured to determine a distance between each adjacent cluster as a size of a gap between each adjacent cluster;
the selecting module is used for selecting the notch with the largest size from the notches between each two adjacent clusters to obtain the target notch.
It should be noted that the above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to yet another aspect of an embodiment of the present application, there is also provided a storage medium. Alternatively, in the present embodiment, the above-described storage medium may be used to execute the program code of the movement control method of any one of the robots described above in the embodiment of the present application.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
s1, acquiring point cloud data of a space environment where a target robot is located, and obtaining target point cloud data, wherein the target point cloud data comprises a plurality of target points;
s2, clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in a plurality of target points;
s3, selecting a target notch from the notch between each adjacent cluster in the group of clusters, wherein the target notch is a notch allowing the target robot to pass through;
s4, controlling the target robot to move towards the direction of the target notch.
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the movement control method of the robot described above, which may be a server, a terminal, or a combination thereof.
Fig. 5 is a block diagram of an alternative electronic device, according to an embodiment of the present application, including a processor 502, a communication interface 504, a memory 506, and a communication bus 508, as shown in fig. 5, wherein the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508, wherein,
a memory 506 for storing a computer program;
the processor 502 is configured to execute the computer program stored in the memory 506, and implement the following steps:
s1, acquiring point cloud data of a space environment where a target robot is located, and obtaining target point cloud data, wherein the target point cloud data comprises a plurality of target points;
s2, clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in a plurality of target points;
S3, selecting a target notch from the notch between each adjacent cluster in the group of clusters, wherein the target notch is a notch allowing the target robot to pass through;
s4, controlling the target robot to move towards the direction of the target notch.
Alternatively, in the present embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic device and other equipment.
The memory may include RAM or nonvolatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
As an example, the memory 506 may include, but is not limited to, the acquisition unit 402, the clustering unit 404, the selection unit 406, and the control unit 408 of the above movement control apparatus of the robot. In addition, other module units of the movement control apparatus may also be included, which are not described in detail in this example.
The processor may be a general purpose processor and may include, but is not limited to: CPU (Central Processing Unit ), NP (Network Processor, network processor), etc.; but also DSP (Digital Signal Processing, digital signal processor), ASIC (Application Specific Integrated Circuit ), FPGA (Field-Programmable Gate Array, field programmable gate array) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be understood by those skilled in the art that the structure shown in fig. 5 is only schematic. The device implementing the movement control method of the robot may be a terminal device, and the terminal device may be a smart phone (such as an Android mobile phone, an iOS mobile phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. The structure shown in fig. 5 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 5, or have a different configuration from that shown in fig. 5.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing hardware related to a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in each embodiment of the present application.
In the foregoing embodiments of the present application, the description of each embodiment has emphasis, and for a portion of a certain embodiment that is not described in detail, reference may be made to the related description of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely exemplary. For example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A movement control method of a robot, comprising:
acquiring point cloud data of a space environment where a target robot is located, and obtaining target point cloud data, wherein the target point cloud data comprises a plurality of target points;
clustering operation is carried out on the target point cloud data to obtain a group of class clusters, wherein each class cluster in the group of class clusters comprises at least one target point in the plurality of target points;
selecting a target notch from among the notches between each adjacent cluster in the group of clusters, wherein the target notch is a notch allowing the target robot to pass through;
and controlling the target robot to move towards the direction of the target notch.
2. The method according to claim 1, wherein the acquiring the point cloud data of the spatial environment in which the target robot is located, and obtaining the target point cloud data, comprises:
in the process of controlling the target robot to rotate, acquiring, by a laser sensor on the target robot, point cloud data of the space environment where the target robot is located, to obtain the target point cloud data.
3. The method according to claim 1, wherein the performing a clustering operation on the target point cloud data to obtain a group of class clusters comprises:
determining a distance between each of the plurality of target points and each of a set of reference rays, wherein each of the reference rays corresponds to a reference angle;
determining target points matched with each reference ray according to the distance between each target point and each reference ray;
and determining the target points matched with each reference ray as one class cluster respectively, to obtain the group of clusters.
4. The method according to claim 3, wherein the determining a distance between each of the plurality of target points and each of a set of reference rays comprises:
projecting each target point onto each reference ray respectively, to obtain the distance between each target point and each reference ray.
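The following is a minimal sketch of the ray-based grouping described in claims 3 and 4, assuming the reference rays are defined by evenly spaced reference angles through the origin of the robot frame, and that the distance between a point and a ray is the perpendicular distance to that ray (points lying behind a ray are ignored). The number of rays and these geometric conventions are assumptions of the sketch, not fixed by the claims.

```python
import numpy as np

def cluster_by_reference_rays(points, num_rays=36):
    """Group 2-D points (N, 2) by the reference ray they are closest to.
    Returns a dict mapping ray index -> array of matched points."""
    points = np.asarray(points, dtype=float)
    angles = np.linspace(-np.pi, np.pi, num_rays, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (R, 2) unit rays

    # Signed projection of every point onto every ray direction.
    along = points @ dirs.T                                      # (N, R)
    # Perpendicular distance of every point to every ray through the origin.
    perp = np.abs(points[:, 0, None] * dirs[None, :, 1]
                  - points[:, 1, None] * dirs[None, :, 0])       # (N, R)
    # Only rays that actually point towards the target point are candidates.
    perp = np.where(along >= 0, perp, np.inf)

    matched_ray = np.argmin(perp, axis=1)                        # best ray per point
    return {int(r): points[matched_ray == r] for r in np.unique(matched_ray)}
```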
5. The method according to claim 1, wherein the performing a clustering operation on the target point cloud data to obtain a group of class clusters comprises:
downsampling the target point cloud data onto a two-dimensional grid map to obtain two-dimensional points corresponding to each of the plurality of target points;
and clustering the plurality of target points according to the two-dimensional points corresponding to each target point to obtain the group of clusters.
6. The method according to claim 5, wherein the clustering the plurality of target points according to the two-dimensional points corresponding to each target point to obtain the group of clusters comprises:
determining grids of the two-dimensional points corresponding to each target point in the two-dimensional grid map to obtain a group of target grids, wherein each target grid in the group of target grids comprises two-dimensional points corresponding to at least one target point of the plurality of target points;
and clustering the target points according to the two-dimensional points contained in each target grid and the neighborhood information of each target grid to obtain the group of clusters.
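One way to realise the grid-based clustering of claims 5 and 6 is sketched below: the points are downsampled onto a two-dimensional grid, and occupied grid cells are merged by a flood fill over their neighbourhood. The cell size and the 8-connected neighbourhood are assumptions chosen for illustration; the claims do not prescribe them.

```python
import numpy as np
from collections import deque

def grid_cluster(points, cell_size=0.05):
    """Downsample 2-D points (N, 2) onto a grid map and cluster occupied
    cells by 8-connected flood fill. Returns a list of point arrays."""
    points = np.asarray(points, dtype=float)
    cells = np.floor(points / cell_size).astype(int)       # grid index per point
    occupied = {}                                           # cell -> point indices
    for i, c in enumerate(map(tuple, cells)):
        occupied.setdefault(c, []).append(i)

    seen, clusters = set(), []
    for start in occupied:
        if start in seen:
            continue
        # Flood fill over the neighbourhood of occupied grid cells.
        queue, members = deque([start]), []
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            members.extend(occupied[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in occupied and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(points[members])
    return clusters
```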
7. The method according to any one of claims 1 to 6, wherein the selecting a target notch from among the notches between each adjacent cluster in the group of clusters comprises:
determining the distance between each adjacent cluster as the size of the notch between each adjacent cluster;
and selecting a notch with the largest size from among the notches between each adjacent cluster, to obtain the target notch.
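A minimal sketch of the selection step in claim 7 follows. It assumes that "adjacent" means consecutive in the given cluster ordering and that the distance between two clusters is the smallest point-to-point distance between them; both readings, as well as the use of the closest point pair's midpoint as a steering target, are assumptions of the sketch.

```python
import numpy as np

def widest_notch(clusters, robot_width):
    """Return (notch size, notch midpoint) for the widest notch between
    adjacent clusters that the robot can pass through, or None."""
    best = None
    for a, b in zip(clusters, clusters[1:]):
        # Distance between adjacent clusters = smallest point-to-point distance.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        gap = d.min()
        if gap > robot_width and (best is None or gap > best[0]):
            # Midpoint of the closest point pair gives a target to steer at.
            i, j = np.unravel_index(d.argmin(), d.shape)
            best = (float(gap), (a[i] + b[j]) / 2.0)
    return best
```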
8. A movement control device for a robot, comprising:
an acquisition unit, configured to acquire point cloud data of a space environment where a target robot is located, to obtain target point cloud data, wherein the target point cloud data comprises a plurality of target points;
a clustering unit, configured to perform a clustering operation on the target point cloud data to obtain a set of class clusters, where each class cluster in the set of class clusters includes at least one target point in the plurality of target points;
a selecting unit, configured to select a target notch from among the notches between each adjacent cluster in the group of clusters, where the target notch is a notch allowing the target robot to pass through;
and the control unit is used for controlling the target robot to move towards the direction of the target notch.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method according to any of claims 1 to 7 by means of the computer program.
CN202210366407.0A 2022-04-08 2022-04-08 Method and device for controlling movement of robot, storage medium and electronic device Pending CN116931557A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210366407.0A CN116931557A (en) 2022-04-08 2022-04-08 Method and device for controlling movement of robot, storage medium and electronic device
PCT/CN2023/080705 WO2023193567A1 (en) 2022-04-08 2023-03-10 Movement control method and apparatus for robot, and storage medium and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210366407.0A CN116931557A (en) 2022-04-08 2022-04-08 Method and device for controlling movement of robot, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116931557A true CN116931557A (en) 2023-10-24

Family

ID=88243951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210366407.0A Pending CN116931557A (en) 2022-04-08 2022-04-08 Method and device for controlling movement of robot, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN116931557A (en)
WO (1) WO2023193567A1 (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101907081B1 (en) * 2011-08-22 2018-10-11 삼성전자주식회사 Method for separating object in three dimension point clouds
KR102063534B1 (en) * 2017-11-30 2020-01-09 주식회사 모빌테크 Method for building map using LiDAR
CN109993192B (en) * 2018-01-03 2024-07-19 北京京东乾石科技有限公司 Target object identification method and device, electronic equipment and storage medium
CN108460779B (en) * 2018-02-12 2021-09-24 浙江大学 Mobile robot image visual positioning method in dynamic environment
CN110390346A (en) * 2018-04-23 2019-10-29 北京京东尚科信息技术有限公司 Recongnition of objects method, apparatus, electronic equipment and storage medium
CN110244743B (en) * 2019-07-03 2022-02-01 浙江大学 Mobile robot autonomous escaping method fusing multi-sensor information
CN111429574B (en) * 2020-03-06 2022-07-15 上海交通大学 Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN111723866A (en) * 2020-06-19 2020-09-29 新石器慧通(北京)科技有限公司 Point cloud clustering method and device, unmanned vehicle and readable storage medium
CN113925390B (en) * 2021-10-19 2022-09-09 珠海一微半导体股份有限公司 Cross-regional channel identification method based on map image, robot and chip
CN114187425A (en) * 2021-12-13 2022-03-15 河北工业大学 Point cloud clustering and surrounding method based on binary occupied grids
CN114266801A (en) * 2021-12-23 2022-04-01 内蒙古工业大学 Ground segmentation method for mobile robot in cross-country environment based on three-dimensional laser radar

Also Published As

Publication number Publication date
WO2023193567A1 (en) 2023-10-12

Similar Documents

Publication Publication Date Title
CN109540142B (en) Robot positioning navigation method and device, and computing equipment
US11328429B2 (en) Method and apparatus for detecting ground point cloud points
CN109059902B (en) Relative pose determination method, device, equipment and medium
US10710579B2 (en) Collision prediction system
CN109993192B (en) Target object identification method and device, electronic equipment and storage medium
JP7239703B2 (en) Object classification using extraterritorial context
CN109521757B (en) Static obstacle identification method and device
CN108509820B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN111311925B (en) Parking space detection method and device, electronic equipment, vehicle and storage medium
Sless et al. Road scene understanding by occupancy grid learning from sparse radar clusters using semantic segmentation
EP3624055B1 (en) Ground detection method, apparatus, electronic device, vehicle and storage medium
CN111380510B (en) Repositioning method and device and robot
US10613546B2 (en) Stochastic map-aware stereo vision sensor model
CN111094895A (en) System and method for robust self-repositioning in pre-constructed visual maps
CN112171675B (en) Obstacle avoidance method and device for mobile robot, robot and storage medium
CN115273002A (en) Image processing method, device, storage medium and computer program product
CN114091515A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN115346192A (en) Data fusion method, system, equipment and medium based on multi-source sensor perception
CN112558035B (en) Method and device for estimating the ground
US20210190901A1 (en) Reducing Radar Signal Interference based on Semi-random and Random Configuration
CN116931557A (en) Method and device for controlling movement of robot, storage medium and electronic device
CN113440054B (en) Method and device for determining range of charging base of sweeping robot
CN114384911A (en) Multi-unmanned system collaborative autonomous exploration method and device based on boundary guide points
CN113313654A (en) Laser point cloud filtering and denoising method, system, equipment and storage medium
WO2022217522A1 (en) Target sensing method and device, detection system, movable platform and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination