CN111352430B - Path planning method and device and robot - Google Patents

Path planning method and device and robot

Info

Publication number
CN111352430B
Authority
CN
China
Prior art keywords
point
target
path
pixel
points
Prior art date
Legal status
Active
Application number
CN202010445762.8A
Other languages
Chinese (zh)
Other versions
CN111352430A (en)
Inventor
支涛
安吉斯
Current Assignee
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority to CN202010445762.8A
Publication of CN111352430A
Application granted
Publication of CN111352430B

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0088: Control of position, course, altitude or attitude characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to the technical field of robots, and in particular to a path planning method, a path planning device, and a robot. The path planning method provided by the embodiment of the application comprises the following steps: acquiring a scene map of a target environment, wherein the pixel value of each pixel point in the scene map is used for representing the minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map; and acquiring a deflection path between the first target point and a second target point according to a deflection angle and the pixel value of each pixel point in the scene map, the deflection path serving as an edge-approaching driving route of the robot. With the path planning method, the path planning device, and the robot, the robot can travel close to one side; compared with the prior-art scheme in which the robot travels along the shortest path, this better conforms to people's customary travel rules, so the normal travel of other people nearby is not disturbed.

Description

Path planning method and device and robot
Technical Field
The application relates to the technical field of robots, in particular to a path planning method, a path planning device and a robot.
Background
A robot is a machine that executes work automatically: it can accept human commands, run pre-programmed routines, and act according to principles formulated with artificial-intelligence techniques, and it can be applied in the service, manufacturing, and construction industries to assist or replace human work. In the prior art, after a source point and an end point of a robot are determined, the shortest path between the source point and the end point is generally acquired as the driving route of the robot. However, a robot following such a driving route does not conform to people's customary travel rules (for example, pedestrians keeping to the right or keeping to the left), and therefore disturbs the normal travel of other people around the robot.
Disclosure of Invention
An object of the embodiments of the present application is to provide a path planning method, a path planning device, and a robot, so as to solve the above problem.
In a first aspect, a path planning method provided in an embodiment of the present application includes:
acquiring a scene map of a target environment, wherein the pixel value of each pixel point in the scene map is used for representing the minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map;
and acquiring a deflection path between the first target point and a second target point according to a deflection angle and the pixel value of each pixel point in the scene map, the deflection path serving as an edge-approaching driving route of the robot.
In the embodiment of the application, a scene map of the target environment is obtained, in which the pixel value of each pixel point represents the minimum weighted distance between that pixel point and the first target point located in the scene map; a deflection path between the first target point and the second target point is then obtained according to the deflection angle and the pixel values of the pixel points in the scene map and used as the edge-approaching driving route of the robot. Because the route keeps to one side, it conforms better to people's customary travel rules than the prior-art shortest-path scheme, so the normal travel of other people nearby is not disturbed.
With reference to the first aspect, an embodiment of the present application provides a first optional implementation manner of the first aspect, and acquiring a scene map of a target environment includes:
acquiring a scene image of a target environment, wherein the scene image comprises an obstacle area for representing obstacles in the target environment;
setting the weighted width of each pixel point in the scene image according to the minimum spatial distance between the pixel point and the obstacle area;
acquiring the minimum weighted distance between a pixel point and a first target point by combining the weighted widths of at least part of other pixel points in the scene image;
and calculating the pixel value of the pixel point according to the minimum weighted distance between each pixel point and the first target point in the scene image so as to obtain a scene map of the target environment.
With reference to the first aspect, an embodiment of the present application provides a second optional implementation manner of the first aspect, where obtaining a deflection path between a first target point and a second target point according to a deflection angle and a pixel value of each pixel point in a scene map includes:
according to the feasible path direction from the second target point to the first target point and taking the second target point as a starting point, selecting, from a plurality of feature points, the feature point whose pixel value differs least from that of the starting point as an original path point, wherein the plurality of feature points are other pixel points around the starting point;
deflecting the original path point based on the deflection angle to obtain a target path point that is closer to an obstacle area in the scene map than the original path point, then taking the target path point as a new starting point and continuing to obtain the next target path point, so as to obtain a plurality of target path points;
and connecting the plurality of target path points to obtain the deflection path between the first target point and the second target point.
With reference to the second optional implementation manner of the first aspect, an embodiment of the present application provides a third optional implementation manner of the first aspect, where the deflecting the original path point based on the deflection angle to obtain a target path point that is closer to the obstacle area in the scene map than the original path point includes:
obtaining an original deflection vector according to the coordinate difference value between the original path point and the starting point in a preset coordinate system;
obtaining a target deflection vector according to the original deflection vector and the deflection angle;
and deflecting the original path point according to the target deflection vector to obtain a target path point that is closer to the obstacle area in the scene map than the original path point.
With reference to the second or third optional implementation manner of the first aspect, an embodiment of the present application provides a fourth optional implementation manner of the first aspect, where before the original path point is deflected based on the deflection angle to obtain a target path point that is closer to the obstacle area in the scene map than the original path point, the path planning method further includes:
acquiring the minimum spatial distance between an original path point and an obstacle area in a scene map;
and setting a deflection angle corresponding to the original path point according to the minimum space distance between the original path point and the obstacle area in the scene map.
With reference to the first aspect, an embodiment of the present application provides a fifth optional implementation manner of the first aspect, where after the deflection path between the first target point and the second target point is obtained according to the deflection angle and the pixel value of each pixel point in the scene map and used as the edge-approaching driving route of the robot, the path planning method further includes:
selecting a plurality of speed limit adjusting points from a plurality of pixel points covered by the edge-approaching driving route;
aiming at each speed-limiting adjusting point in the plurality of speed-limiting adjusting points, acquiring the maximum driving speed corresponding to the speed-limiting adjusting point according to the maximum speed limit of the robot and the environmental safety factor around the speed-limiting adjusting point;
and adjusting the running speed of the robot at the speed limit adjusting point according to the maximum running speed.
In the above embodiment, after the deflection path between the first target point and the second target point is obtained, the path planning method further includes selecting a plurality of speed limit adjusting points from the plurality of pixel points covered by the deflection path and, for each speed limit adjusting point, acquiring the maximum driving speed corresponding to the speed limit adjusting point according to the maximum speed limit of the robot and the environmental safety factor around the speed limit adjusting point, so as to adjust the driving speed of the robot at the speed limit adjusting point according to the maximum driving speed. The driving speed of the robot is thereby adjusted along the whole edge-approaching driving route, ensuring that the robot drives safely.
With reference to the fifth optional implementation manner of the first aspect, an embodiment of the present application provides the sixth optional implementation manner of the first aspect, where before acquiring the maximum driving speed corresponding to the speed-limit adjustment point according to the maximum speed limit of the robot and an environmental safety factor around the speed-limit adjustment point, the method for path planning further includes:
dividing, with the speed limit adjusting point as a central point, a region to be convolved that includes other pixel points around the speed limit adjusting point;
performing convolution processing on the area to be convolved to obtain a safety coefficient;
and obtaining the environmental safety factor around the speed limit adjusting point based on the safety factor and the minimum space distance between the speed limit adjusting point and the obstacle area in the scene map.
In a second aspect, a path planning apparatus provided in an embodiment of the present application includes:
the map acquisition module is used for acquiring a scene map of a target environment, wherein in the scene map, the pixel value of each pixel point is used for representing the minimum weighted distance between the pixel point and a first target point, and the first target point is positioned in the scene map;
the target point determining module is used for determining a second target point from the scene map;
and the path planning module is used for acquiring a deflection path between the first target point and the second target point according to the deflection angle and the pixel value of each pixel point in the scene map, and the deflection path is used as an edge-approaching driving route of the robot.
The path planning apparatus provided in the embodiment of the present application has the same beneficial effects as the path planning method provided in the first aspect, or any optional implementation manner of the first aspect, and details are not described here.
In a third aspect, the robot provided in the embodiments of the present application includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the computer program to implement the first aspect, or the path planning method provided in any optional implementation manner of the first aspect.
The robot provided in the embodiment of the present application has the same beneficial effects as the path planning method provided in the first aspect, or any optional implementation manner of the first aspect, and details are not repeated here.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed, the path planning method provided in the first aspect or any optional implementation manner of the first aspect is implemented.
The computer-readable storage medium provided in the embodiment of the present application has the same beneficial effects as the path planning method provided in the first aspect, or any optional implementation manner of the first aspect, and details are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic structural block diagram of a robot according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating steps of a path planning method according to an embodiment of the present disclosure.
Fig. 3 is a scene image of a target environment according to an embodiment of the present disclosure.
Fig. 4 is a scene map of a target environment according to an embodiment of the present disclosure.
Fig. 5 is a scene map of a target environment according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural block diagram of a path planning apparatus according to an embodiment of the present application.
Reference numerals: 100-a robot; 110-a processor; 120-a memory; 200-a path planning device; 210-a map acquisition module; 220-path planning module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. Furthermore, it should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1, a schematic structural block diagram of a robot 100 applying a path planning method and apparatus according to an embodiment of the present application is shown. In the embodiment of the present application, the robot 100 may be, but is not limited to, a service robot, an industrial robot, and structurally, the robot 100 may include a processor 110 and a memory 120.
The processor 110 and the memory 120 are electrically connected, directly or indirectly, to enable data transmission or interaction; for example, these components may be electrically connected to each other via one or more communication buses or signal lines. The path planning device 200 includes at least one software module, which may be stored in the form of software or firmware in the memory 120 or solidified in the Operating System (OS) of the robot 100. The processor 110 is configured to execute executable modules stored in the memory 120, such as the software functional modules and computer programs included in the path planning device 200, so as to implement the path planning method. The processor 110 may execute the computer program upon receiving an execution instruction.
The processor 110 may be an integrated circuit chip having signal processing capabilities. The Processor 110 may also be a general-purpose Processor, for example, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a discrete gate or transistor logic device, a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. Further, a general purpose processor may be a microprocessor or any conventional processor or the like.
The Memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), and an electrically Erasable Programmable Read-Only Memory (EEPROM). The memory 120 is used for storing a program, and the processor 110 executes the program after receiving the execution instruction.
It should be understood that the structure shown in fig. 1 is merely illustrative, and the robot 100 provided in the embodiments of the present application may have fewer or more components than those shown in fig. 1, or may have a different configuration than that shown in fig. 1. Further, the components shown in fig. 1 may be implemented by software, hardware, or a combination thereof.
Referring to fig. 2, fig. 2 is a schematic flow chart of a path planning method according to an embodiment of the present disclosure, applied to the robot 100 shown in fig. 1. It should be noted that the path planning method provided in the embodiment of the present application is not limited to the sequence shown in fig. 2 and described below; the specific flow and steps of the path planning method are described with reference to fig. 2.
Step S100, a scene map of the target environment is obtained, in the scene map, the pixel value of each pixel point is used for representing the minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map.
In this embodiment of the application, the target environment may be a working environment of the robot, for example, an office building, a mall, a hotel, a restaurant, a factory, and the like, and the scene map may be used to represent environment information of the target environment, and in the scene map, a pixel value of each pixel point is used to represent a minimum weighted distance between the pixel point and the first target point, and the first target point is located in the scene map. In addition, regarding step S100, as an optional implementation manner, in the embodiment of the present application, step S110, step S120, step S130, and step S140 may be included.
Step S110, a scene image of the target environment is obtained, wherein the scene image comprises an obstacle area used for representing obstacles in the target environment.
In the embodiment of the application, the scene image can be understood as a grid map of the target environment. In practical implementation, the grid map of the target environment can be constructed through a Simultaneous Localization And Mapping (SLAM) algorithm: based on the SLAM algorithm, a robot placed at an unknown position in an unknown environment can, while moving, gradually draw a complete map of the unknown environment as the grid map. In addition, it can be understood that in the embodiment of the present application, the scene image includes an obstacle area for representing obstacles in the target environment, and a travelable area other than the obstacle area, where the obstacles may be walls, partitions, tables and chairs, and the like. Taking the scene image shown in fig. 3 as an example, the black grid represents the obstacle area, and the gray grid represents the travelable area other than the obstacle area.
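As an illustration only (the patent itself provides no code), the scene image can be pictured as an occupancy grid. The following Python sketch, under assumed conventions (0 = obstacle pixel, 255 = travelable pixel, mirroring the black and gray grids of fig. 3), sets up a small grid that the later sketches in this description reuse:

```python
import numpy as np

# Assumed 8-bit representation of the scene image of step S110:
# 0 marks the obstacle area, 255 marks the travelable area.
scene_image = np.full((20, 20), 255, dtype=np.uint8)
scene_image[0, :] = 0        # a wall along the top edge (obstacle area)
scene_image[:, 0] = 0        # a wall along the left edge
scene_image[10:14, 5:9] = 0  # an interior obstacle, e.g. a table
```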
Step S120, aiming at each pixel point in the scene image, setting the weighted width of the pixel point according to the minimum space distance between the pixel point and the obstacle area.
For step S120, in this embodiment of the application, as a first optional implementation manner, for each pixel point in the scene image, the weighted width of the pixel point may be set to be inversely proportional to the minimum spatial distance between the pixel point and the obstacle area; that is, the greater the minimum spatial distance between the pixel point and the obstacle area, the smaller the weighted width is set to, and the smaller the minimum spatial distance, the greater the weighted width is set to. Continuing with the scene image shown in fig. 3, the minimum spatial distance between pixel point a and the obstacle area is L1, and that between pixel point b and the obstacle area is L2; since L1 is greater than L2, the weighted width of pixel point a is set smaller than that of pixel point b. The specific setting logic is not limited in this embodiment of the application.
For step S120, as a second optional implementation manner, in this embodiment of the application, all pixel points whose minimum spatial distance from the obstacle area lies in a preset distance interval may also be determined from the scene image, the weighted widths of all the determined pixel points may be uniformly set to a first width value, and the weighted widths of the other pixel points may be uniformly set to a second width value smaller than the first width value. Continuing with the scene image shown in fig. 3, the preset distance interval may be [0, L3]; the weighted widths of all the pixel points between the dashed line and the obstacle area in the figure may then be uniformly set to the first width value, and the weighted widths of the other pixel points uniformly set to the second width value.
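Both optional implementations of step S120 can be sketched as below; the inverse-proportional form 1/(d + 1), the interval bound L3, and the concrete width values are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_widths(scene_image, mode="inverse", L3=3.0,
                    first_width=4.0, second_width=1.0):
    # Minimum spatial distance from every pixel point to the obstacle area
    # (pixels with value 0 in the assumed representation).
    dist_to_obstacle = distance_transform_edt(scene_image != 0)
    if mode == "inverse":
        # First implementation: the greater the distance, the smaller the width.
        return 1.0 / (dist_to_obstacle + 1.0)
    # Second implementation: a larger first width inside the interval [0, L3],
    # a smaller second width everywhere else.
    return np.where(dist_to_obstacle <= L3, first_width, second_width)
```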
Step S130, combining the weighted widths of at least some other pixel points in the scene image, to obtain the minimum weighted distance between the pixel point and the first target point.
After the weighted width of each pixel point in the scene image is set, for each pixel point, the minimum weighted distance between the pixel point and the first target point can be obtained by combining the weighted widths of at least some of the other pixel points in the scene image. The following describes the specific procedure for executing step S130 using the Dijkstra shortest path algorithm.
(I) A set S and a set U are created. The first target point is added to the set S as the intermediate point, and the other pixel points in the scene image, except the first target point, are added to the set U. For each pixel point in the set U, if the pixel point is a pixel point around the intermediate point, its reference distance is set to the weighted distance between it and the intermediate point; if not, its reference distance is set to infinity.
(II) The pixel point with the minimum reference distance is selected from the set U, denoted pixel point c, and moved from the set U to the set S.
(III) Taking pixel point c as the new intermediate point, the reference distance of each remaining pixel point in the set U is updated: for each remaining pixel point around the new intermediate point, if the weighted distance of reaching the first target point through the new intermediate point is less than its current reference distance, that weighted distance is taken as the new reference distance of the pixel point; the reference distances of the other pixel points remain unchanged.
(IV) Step (II) and step (III) are repeated until no pixel points remain in the set U.
It can be understood that, in the embodiment of the present application, by the Dijkstra shortest path algorithm, for each pixel point in the final set S, its reference distance may be used as the minimum weighted distance between the pixel point and the first target point; that is, the minimum weighted distance between every pixel point in the scene image and the first target point is finally obtained.
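A minimal sketch of steps (I) through (IV) follows, implementing the sets S and U with a priority queue; taking the cost of stepping onto a pixel as its weighted width multiplied by the step length is an assumption, since the patent does not spell out the exact definition of the weighted distance between neighboring pixels:

```python
import heapq
import numpy as np

def min_weighted_distances(widths, first_target):
    rows, cols = widths.shape
    dist = np.full((rows, cols), np.inf)          # reference distances (initially infinity)
    dist[first_target] = 0.0
    heap = [(0.0, first_target)]
    settled = np.zeros((rows, cols), dtype=bool)  # membership in the set S
    while heap:
        d, (r, c) = heapq.heappop(heap)           # point with minimum reference distance
        if settled[r, c]:
            continue
        settled[r, c] = True                      # move it from the set U into the set S
        for dr in (-1, 0, 1):                     # the 8 pixel points around the intermediate point
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not settled[nr, nc]:
                    step = (dr * dr + dc * dc) ** 0.5
                    nd = d + step * widths[nr, nc]
                    if nd < dist[nr, nc]:         # the relaxation of step (III)
                        dist[nr, nc] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return dist
```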
Step S140, calculating a pixel value of each pixel point according to the minimum weighted distance between each pixel point and the first target point in the scene image, so as to obtain a scene map of the target environment.
In this embodiment of the application, after the minimum weighted distance between each pixel point in the scene image and the first target point is obtained, normalization processing may be performed on these minimum weighted distances so that they all fall within the interval [0, 255]; then, for each pixel point in the scene image, the normalized minimum weighted distance corresponding to the pixel point is taken as the pixel value of the pixel point. Based on this, it can be understood that, for each pixel point in the scene image, the larger the minimum weighted distance between the pixel point and the first target point, the larger the pixel value of the pixel point.
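The normalization of step S140 might then look like the sketch below; clamping unreachable pixels (infinite distance) to 255 is an added assumption:

```python
import numpy as np

def to_scene_map(dist):
    finite = np.isfinite(dist)
    scale = dist[finite].max() or 1.0       # guard against a trivial all-zero map
    scene_map = np.full(dist.shape, 255.0)  # unreachable pixels clamped to 255
    scene_map[finite] = dist[finite] / scale * 255.0
    return scene_map.astype(np.uint8)
```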
And S200, acquiring a deflection path between the first target point and the second target point according to the deflection angle and the pixel value of each pixel point in the scene map, and using the deflection path as an edge-approaching driving route of the robot.
First, it should be noted that in the embodiment of the present application, the edge-approaching driving route of the robot may be a right-side driving route or a left-side driving route, which may be set according to actual requirements, for example, according to people's customary travel rules. In addition, in the embodiment of the present application, step S200 may include three substeps, namely step S210, step S220, and step S230.
Step S210, according to the feasible path direction from the second target point to the first target point and taking the second target point as a starting point, selecting, from a plurality of feature points, the feature point whose pixel value differs least from that of the starting point as an original path point, where the plurality of feature points are other pixel points around the starting point.
In this embodiment, the first target point may be the source point of the robot, and the second target point may be the driving target point of the robot. It should be noted that the plurality of feature points may be the 8 pixel points adjacent to the starting point; however, to improve path planning efficiency, the plurality of feature points may also be 8 pixel points spaced away from the starting point. For example, if the coordinate value of the starting point in the preset coordinate system is (X, Y), the plurality of feature points may include the pixel points with coordinate values (X-N, Y-N), (X-N, Y), (X-N, Y+N), (X, Y-N), (X, Y+N), (X+N, Y-N), (X+N, Y), and (X+N, Y+N), where the value of N may be set according to actual requirements.
After the plurality of feature points are determined, for each feature point, the pixel value of the starting point can be subtracted from the pixel value of the feature point to obtain the pixel-value difference between the feature point and the starting point; finally, the feature point with the minimum pixel-value difference from the starting point is selected from the plurality of feature points as the original path point. Taking the scene map shown in fig. 4 as an example, with the second target point d as the starting point, the feature point with the minimum pixel-value difference among the 8 feature points is e, so the original path point is the feature point e.
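A sketch of the selection of step S210 follows. Interpreting the "minimum difference" as the signed pixel-value difference (most negative first) is an assumption; it reconciles the claim wording with the fig. 4 example, since pixel values encode distance to the first target point and the signed minimum therefore steps toward that target:

```python
def pick_original_path_point(scene_map, start, N=1):
    x, y = start
    best_point, best_diff = None, None
    for dx in (-N, 0, N):       # the 8 feature points spaced N pixels from the start
        for dy in (-N, 0, N):
            if dx == 0 and dy == 0:
                continue
            fx, fy = x + dx, y + dy
            if 0 <= fx < scene_map.shape[0] and 0 <= fy < scene_map.shape[1]:
                # Pixel-value difference between the feature point and the starting point.
                diff = int(scene_map[fx, fy]) - int(scene_map[x, y])
                if best_diff is None or diff < best_diff:
                    best_point, best_diff = (fx, fy), diff
    return best_point
```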
Step S220, performing a deflection process on the original route point based on the deflection angle, obtaining a target route point closer to the obstacle area in the scene map than the original route point, and taking the target route point as a new starting point to continue obtaining a next target route point, so as to obtain a plurality of target route points.
In step S220, the original path point is deflected based on the deflection angle to obtain the target path point that is closer to the obstacle area in the scene map than the original path point. In this embodiment, an original deflection vector may be obtained from the coordinate difference value between the original path point and the starting point in the preset coordinate system, a target deflection vector may then be obtained from the original deflection vector and the deflection angle, and finally the original path point is deflected according to the target deflection vector to obtain the target path point.
In practical implementation, the process of obtaining the target deflection vector according to the original deflection vector and the deflection angle can be represented by the following calculation logic:
Z1 = [ΔX·cos(θ) - ΔY·sin(θ), ΔX·sin(θ) + ΔY·cos(θ)] …… (1)
or Z2 = [ΔX·cos(θ) - ΔY·sin(θ), -(ΔX·sin(θ) + ΔY·cos(θ))] …… (2)
where Z1 and Z2 are target deflection vectors and (ΔX, ΔY) is the original deflection vector; that is, ΔX is the difference between the coordinate values of the original path point and of the starting point on the X axis of the preset coordinate system, ΔY is the corresponding difference on the Y axis, and θ is the deflection angle.
In the embodiment of the present application, if the target deflection vector is obtained through the above calculation logic (1), the element corresponding to the Y axis in the obtained target deflection vector Z1 is a positive value; if the target deflection vector is obtained through the above calculation logic (2), the element corresponding to the Y axis in the obtained target deflection vector Z2 is a negative value.
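Calculation logics (1) and (2) are plain two-dimensional rotations of the original deflection vector, with the second component negated in logic (2). The sketch below also translates from the starting point by the target deflection vector to obtain the target path point, which is one assumed reading of the deflection processing described next:

```python
import math

def deflect(start, original_point, theta, right=True):
    dX = original_point[0] - start[0]  # original deflection vector (ΔX, ΔY)
    dY = original_point[1] - start[1]
    if right:  # calculation logic (1): Z1, target path point to the right
        z = (dX * math.cos(theta) - dY * math.sin(theta),
             dX * math.sin(theta) + dY * math.cos(theta))
    else:      # calculation logic (2): Z2, target path point to the left
        z = (dX * math.cos(theta) - dY * math.sin(theta),
             -(dX * math.sin(theta) + dY * math.cos(theta)))
    # Translate by the target deflection vector (assumed interpretation).
    return (start[0] + z[0], start[1] + z[1])
```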
It can be understood that, in the embodiment of the present application, deflecting the original path point according to the target deflection vector specifically means translating the original path point according to the target deflection vector. Based on this, it can be further understood that if the target deflection vector Z1 is obtained, the original path point is deflected according to Z1, and the resulting target path point, which is closer to the obstacle area in the scene map than the original path point, is located on the right side of the original path point with respect to the feasible path direction from the second target point to the first target point (in fig. 5, the target path point f is located on the right of the original path point e); if the target deflection vector Z2 is obtained, the original path point is deflected according to Z2, and the resulting target path point is located on the left side of the original path point with respect to that direction (in fig. 5, the target path point g is located on the left of the original path point e).
Step S230, connecting the plurality of target path points, and obtaining a deflection path between the first target point and the second target point.
It can be further understood that if the target deflection vector Z1 is used, each target path point lies on the right side of its original path point with respect to the feasible path direction from the second target point to the first target point, so the deflection path is a right-side driving route with respect to the direction from the first target point to the second target point (in fig. 5, driving route h-f-d); if the target deflection vector Z2 is used, each target path point lies on the left side of its original path point, so the deflection path is a left-side driving route with respect to the direction from the first target point to the second target point (in fig. 5, driving route h-g-d).
In addition, in the embodiment of the present application, the deflection angle may be a preset value or a variable value. If it is a preset value, it may be set according to actual requirements. If it is a variable value, after step S210 is executed, the minimum spatial distance between the original path point and the obstacle area in the scene map may be obtained, and the deflection angle corresponding to the original path point may then be set according to that minimum spatial distance. For example, the deflection angle corresponding to the original path point may be set in positive proportion to the minimum spatial distance between the original path point and the obstacle area in the scene map; that is, the larger the minimum spatial distance, the larger the deflection angle is set to, and the smaller the minimum spatial distance, the smaller the deflection angle is set to. The specific setting logic is not limited in this embodiment of the application.
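One way to realize the positive-proportion setting of a variable deflection angle is sketched below; the angle bounds theta_min/theta_max and the saturation distance d_max are illustrative assumptions:

```python
def deflection_angle(dist_to_obstacle_at_point,
                     theta_min=0.05, theta_max=0.6, d_max=20.0):
    # Far from the obstacle area: deflect strongly toward it.
    # Close to the obstacle area: deflect only slightly.
    d = min(dist_to_obstacle_at_point, d_max)
    return theta_min + (theta_max - theta_min) * d / d_max
```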
Further, in order to ensure safe driving of the robot, the path planning method provided in the embodiment of the present application may further include, after step S200, step S300, step S400, and step S500.
Step S300, selecting a plurality of speed limit adjusting points from a plurality of pixel points covered by the side driving route.
In this embodiment of the application, after step S200 is executed to obtain the edge-approaching driving route of the robot, a plurality of speed limit adjusting points may be selected from the plurality of pixel points covered by the deflection path. For example, taking the first target point as a starting point, one speed limit adjusting point may be selected every predetermined number of pixel points until the second target point is reached, so as to select the plurality of speed limit adjusting points.
And step S400, aiming at each speed-limiting adjusting point in the plurality of speed-limiting adjusting points, acquiring the maximum driving speed corresponding to the speed-limiting adjusting point according to the maximum speed limit of the robot and the environmental safety factor around the speed-limiting adjusting point.
In the embodiment of the application, for each speed limit adjusting point among the plurality of speed limit adjusting points, a region to be convolved that takes the speed limit adjusting point as its central point and includes other pixel points around the speed limit adjusting point may be divided; the region to be convolved is convolved to obtain a safety factor, and finally the environmental safety factor around the speed limit adjusting point is obtained based on the safety factor and the minimum spatial distance between the speed limit adjusting point and the obstacle area in the scene map.
In this embodiment of the application, the divided convolution region may be a 10 × 10 convolution region including 100 pixel points, and the process of obtaining the safety factor by performing convolution processing on the region to be convolved may be represented as:
Dcon=P(x,y)*Q(x,y)
where Dcon is the safety factor, P(x, y) is the region to be convolved, and Q(x, y) is the convolution kernel. In practical implementation, the convolution kernel may be a 4 × 4 matrix including 16 elements each having a value of 1/16, or a 10 × 10 matrix including 100 elements each having a value of 1/100; based on this, it can be understood that the convolution kernel may be any M × M matrix including M² elements, each having a value of 1/M².
After the safety factor is obtained, the environmental safety factor around the speed limit adjusting point can be obtained based on the safety factor and the minimum spatial distance between the speed limit adjusting point and the obstacle area in the scene map, and the process can be expressed as follows:
Denv=1/exp(α*Dmin+β*Dcon)
the Denv is an environmental safety factor, α and β are safety parameters, and the specific size can be set according to actual requirements, for example, α can be 0.8, β can be 0.2, Dmin is a minimum space distance between a speed limit adjusting point and a scene map in an obstacle area, and Dcon is a safety factor.
After obtaining the environmental safety factor around the speed-limiting adjustment point, the step S400 may be executed to obtain the maximum driving speed corresponding to the speed-limiting adjustment point according to the maximum speed limit of the robot and the environmental safety factor around the speed-limiting adjustment point, where the process may be represented as:
Venv=Denv×Vmax
wherein, Venv is the maximum driving speed corresponding to the speed limit adjusting point, Denv is the environmental safety factor around the speed limit adjusting point, and Vmax is the maximum speed limit of the robot, which can be determined according to the basic performance of the robot.
And step S500, adjusting the running speed of the robot at the speed limit adjusting point according to the maximum running speed.
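The speed-limit chain of steps S300 to S500 can be sketched end to end as below, reusing the scene_image grid from the earlier sketch. Convolving a uniform 1/M² kernel at the central point reduces to taking the mean of the region; feeding the free-space indicator (1 = travelable, 0 = obstacle) into P(x, y) is an assumption, since the patent does not state which pixel values the region to be convolved carries:

```python
import math
from scipy.ndimage import distance_transform_edt

def max_driving_speed(scene_image, point, v_max, M=10, alpha=0.8, beta=0.2):
    free = (scene_image != 0).astype(float)  # assumed contents of P(x, y)
    dist_to_obstacle = distance_transform_edt(scene_image != 0)
    x, y = point
    h = M // 2
    region = free[max(0, x - h):x + h, max(0, y - h):y + h]  # region to be convolved
    d_con = region.mean()             # uniform 1/M^2 kernel == mean of the region
    d_min = dist_to_obstacle[x, y]    # minimum spatial distance to the obstacle area
    d_env = 1.0 / math.exp(alpha * d_min + beta * d_con)  # Denv
    return d_env * v_max              # Venv = Denv * Vmax
```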
Based on the same inventive concept as the path planning method, an embodiment of the present application further provides a path planning apparatus 200, and referring to fig. 6, the path planning apparatus 200 provided in the embodiment of the present application includes a map obtaining module 210 and a path planning module 220.
The map obtaining module 210 is configured to obtain a scene map of the target environment, where in the scene map, a pixel value of each pixel point is used to represent a minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map.
The description of the map obtaining module 210 may refer to the above-mentioned embodiment of the path planning method, and the detailed description of step S100, that is, step S100 may be executed by the map obtaining module 210.
And the path planning module 220 is configured to obtain a deflection path between the first target point and the second target point according to the deflection angle and the pixel value of each pixel point in the scene map, and use the deflection path as an edge-approaching driving route of the robot.
The description of the path planning module 220 may refer to the detailed description of the step S200 in the above embodiment of the path planning method, that is, the step S200 may be executed by the path planning module 220.
In this embodiment, the map obtaining module 210 may include an image obtaining unit, a width setting unit, a distance obtaining unit, and a map obtaining unit.
The image acquisition unit is used for acquiring a scene image of the target environment, wherein the scene image comprises an obstacle area used for representing obstacles in the target environment.
For the description of the image obtaining unit, reference may be made to the above-mentioned embodiment of the path planning method, and the detailed description of step S110 is described, that is, step S110 may be executed by the image obtaining unit.
And the width setting unit is used for setting the weighted width of the pixel points according to the minimum spatial distance between the pixel points and the obstacle area aiming at each pixel point in the scene image.
The description of the width setting unit may refer to the above-mentioned embodiment of the path planning method, and the detailed description of step S120, that is, step S120 may be executed by the width setting unit.
And the distance acquisition unit is used for acquiring the minimum weighted distance between the pixel point and the first target point by combining the weighted widths of at least part of other pixel points in the scene image.
For the description of the distance obtaining unit, reference may be made to the above-mentioned embodiment of the path planning method, and the detailed description of step S130 is described, that is, step S130 may be executed by the distance obtaining unit.
And the map acquisition unit is used for calculating the pixel values of the pixel points according to the minimum weighted distance between each pixel point and the first target point in the scene image so as to obtain the scene map of the target environment.
The description of the map obtaining unit may refer to the above-mentioned embodiment of the path planning method, and the detailed description of step S140, that is, step S140 may be executed by the map obtaining unit.
In this embodiment, the path planning module 220 may include an original path point obtaining unit, a target path point obtaining unit, and a path planning unit.
And the original path point acquisition unit is used for, according to the feasible path direction from the second target point to the first target point and taking the second target point as a starting point, selecting from a plurality of feature points the feature point whose pixel value differs least from that of the starting point as the original path point, the plurality of feature points being other pixel points around the starting point.
For the description of the original path point obtaining unit, reference may be made to the above-mentioned embodiment of the path planning method, and the detailed description of step S210 is described, that is, step S210 may be executed by the original path point obtaining unit.
And the target path point acquisition unit is used for deflecting the original path points based on the deflection angle, acquiring target path points which are closer to an obstacle area in the scene map relative to the original path points, taking the target path points as new starting points, and continuously acquiring next target path points to acquire a plurality of target path points.
The target path point obtaining unit is specifically configured to obtain an original deflection vector according to a coordinate difference value of the original path point and the starting point in a preset coordinate system, obtain a target deflection vector according to the original deflection vector and the deflection angle, and perform deflection processing on the original path point according to the target deflection vector to obtain a target path point which is closer to an obstacle area in a scene map than the original path point.
For the description of the target path point obtaining unit, reference may be made to the above-mentioned embodiment of the path planning method, and the detailed description of step S220, that is, step S220 may be executed by the target path point obtaining unit.
And the path planning unit is used for connecting the plurality of target path points and obtaining a deflection path between the first target point and the second target point.
For the description of the path planning unit, reference may be made to the above-mentioned path planning method embodiment, and the detailed description of step S230 is described, that is, step S230 may be executed by the path planning unit.
The path planning module 220 in the embodiment of the present application may further include a deflection angle obtaining unit.
And the deflection angle acquisition unit is used for acquiring the minimum spatial distance between the original path point and the obstacle area in the scene map, and setting the deflection angle corresponding to the original path point according to the minimum spatial distance between the original path point and the obstacle area in the scene map.
For the description of the deflection angle obtaining unit, reference may be made to the above embodiment of the path planning method, and detailed description of the deflection angle obtaining method is omitted in this embodiment of the present application.
The path planning apparatus 200 provided in the embodiment of the present application may further include a speed limit adjusting point acquisition module, a maximum driving speed acquisition module, and a running speed adjusting module.
And the speed limit adjusting point acquisition module is used for selecting a plurality of speed limit adjusting points from a plurality of pixel points covered by the deflection path.
For the description of the speed limit adjusting point acquisition module, reference may be made to the above-mentioned embodiment of the path planning method, and the detailed description of step S300, that is, step S300 may be executed by the speed limit adjusting point acquisition module.
And the maximum driving speed acquisition module is used for acquiring the maximum driving speed corresponding to the speed-limiting adjusting point according to the maximum speed limit of the robot and the environmental safety factor around the speed-limiting adjusting point aiming at each speed-limiting adjusting point in the plurality of speed-limiting adjusting points.
The description of the maximum driving speed acquisition module may refer to the above-mentioned embodiment of the path planning method, and the detailed description of step S400, that is, step S400 may be executed by the maximum driving speed acquisition module.
And the running speed adjusting module is used for adjusting the running speed of the robot at the speed limit adjusting point according to the maximum running speed.
The description of the running speed adjusting module may refer to the above-mentioned embodiment of the path planning method, and the detailed description of step S500, that is, step S500 may be executed by the running speed adjusting module.
The path planning apparatus 200 provided in the embodiment of the present application may further include an environmental safety factor obtaining module.
And the environmental safety factor acquisition module is used for dividing, with the speed limit adjusting point as a central point, a region to be convolved that includes other pixel points around the speed limit adjusting point, convolving the region to be convolved to obtain a safety factor, and obtaining the environmental safety factor around the speed limit adjusting point based on the safety factor and the minimum spatial distance between the speed limit adjusting point and the obstacle area in the scene map.
For the description of the environment safety factor obtaining module, reference may be made to the above embodiment of the path planning method, and for detailed description of the environment safety factor obtaining method, which is not described in detail in this embodiment of the present application.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed, the path planning method provided in the foregoing method embodiment may be implemented, which may be specifically referred to in the foregoing method embodiment, and details are not described here again.
To sum up, with the path planning method, the path planning device, and the robot provided in the embodiments of the present application, a scene map of the target environment is obtained, in which the pixel value of each pixel point represents the minimum weighted distance between that pixel point and a first target point located in the scene map; a second target point is determined from the scene map, and a deflection path between the first target point and the second target point is obtained according to the deflection angle and the pixel values of the pixel points in the scene map, to serve as the edge-approaching driving route of the robot. Compared with the prior-art scheme in which the robot travels along the shortest path, this better conforms to people's customary travel rules, so the normal travel of other people nearby is not disturbed.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, the functional modules in each embodiment of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
Further, the functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in each embodiment of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It is further noted that, herein, relational terms such as "first," "second," "third," and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

Claims (9)

1. A path planning method, comprising:
acquiring a scene map of a target environment, wherein the pixel value of each pixel point in the scene map represents the minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map;
determining a second target point from the scene map; and
acquiring a deflection path between the first target point and the second target point according to a deflection angle and the pixel value of each pixel point in the scene map, the deflection path serving as an edge-approaching driving route of the robot;
wherein acquiring the deflection path between the first target point and the second target point according to the deflection angle and the pixel value of each pixel point in the scene map comprises:
taking the second target point as a starting point and, according to the feasible path direction from the second target point to the first target point, selecting, from a plurality of feature points, the feature point whose pixel value differs least from that of the starting point as an original path point, wherein the plurality of feature points are other pixel points around the starting point;
deflecting the original path point based on the deflection angle to obtain a target path point that is closer to an obstacle area in the scene map than the original path point, taking the target path point as a new starting point, and continuing to obtain a next target path point, so as to obtain a plurality of target path points; and
connecting the plurality of target path points to obtain the deflection path between the first target point and the second target point.
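Read as an algorithm, claim 1 describes a guided descent: start at the second target point, pick the neighboring pixel whose value differs least from the current one along the feasible (descending) direction, then rotate that step toward the obstacle edge. The sketch below is one possible reading; the tie-breaking rule, the grid snapping of the rotated step, and the fallback to the undeflected point are assumptions the claim does not fix.

    import numpy as np

    def rotate(vec, angle_rad):
        """Rotate a 2-D step vector by angle_rad (counter-clockwise)."""
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        return np.array([c * vec[0] - s * vec[1], s * vec[0] + c * vec[1]])

    def trace_deflected_path(scene_map, obstacle_mask, start, goal, angle_rad=0.3):
        """Trace waypoints from the second target point (start) to the
        first target point (goal) over a map of weighted distances."""
        path, current = [start], start
        while current != goal and len(path) < scene_map.size:
            r, c = current
            # Feasible direction: neighbors whose stored value is strictly
            # smaller, i.e. closer to the first target point.
            cands = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < scene_map.shape[0]
                     and 0 <= c + dc < scene_map.shape[1]
                     and scene_map[r + dr, c + dc] < scene_map[r, c]]
            if not cands:
                break  # dead end; the claim assumes a feasible path exists
            # Original path point: minimum pixel-value difference from the
            # current starting point.
            orig = min(cands, key=lambda p: abs(scene_map[p] - scene_map[current]))
            # Deflect the step vector by the deflection angle, then snap
            # the result back onto the pixel grid.
            step = rotate(np.array(orig) - np.array(current), angle_rad)
            tgt = (int(round(current[0] + step[0])),
                   int(round(current[1] + step[1])))
            # Accept the deflected point only if it stays on the map, is
            # obstacle-free, and still makes progress toward the goal.
            ok = (0 <= tgt[0] < scene_map.shape[0]
                  and 0 <= tgt[1] < scene_map.shape[1]
                  and not obstacle_mask[tgt]
                  and scene_map[tgt] < scene_map[current])
            current = tgt if ok else orig
            path.append(current)
        return path

Because each accepted step strictly decreases the stored distance value, the loop makes monotone progress toward the first target point while the deflection biases the route toward the obstacle edge.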
2. The path planning method according to claim 1, wherein acquiring the scene map of the target environment comprises:
acquiring a scene image of the target environment, wherein the scene image comprises an obstacle area used for representing obstacles in the target environment;
setting the weighted width of each pixel point in the scene image according to the minimum spatial distance between the pixel point and the obstacle area;
for each pixel point, combining the weighted widths of at least some of the other pixel points in the scene image to obtain the minimum weighted distance between that pixel point and the first target point; and
calculating the pixel value of each pixel point according to its minimum weighted distance to the first target point, so as to obtain the scene map of the target environment.
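Claim 2 builds the scene map in two stages: a per-pixel "weighted width" derived from the pixel's clearance to the obstacle area, then an accumulation of those widths into a minimum weighted distance to the first target point. A Dijkstra-style accumulation is one natural fit; the sketch below assumes that, plus a made-up weighting formula, since the claim fixes neither.

    import heapq
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def build_scene_map(obstacle_mask, first_target):
        """Assign every free pixel its minimum weighted distance to the
        first target point, where a pixel's step cost (its 'weighted
        width') grows as the pixel gets closer to an obstacle."""
        # Minimum spatial distance from each pixel to the obstacle area.
        clearance = distance_transform_edt(~obstacle_mask)
        # Hypothetical weighting: cost rises near obstacles. The claim
        # gives no formula; this is one plausible choice.
        weight = 1.0 + 1.0 / (clearance + 1.0)
        dist = np.full(obstacle_mask.shape, np.inf)
        dist[first_target] = 0.0
        heap = [(0.0, first_target)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > dist[r, c]:
                continue  # stale queue entry
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0):
                        continue
                    if not (0 <= nr < dist.shape[0] and 0 <= nc < dist.shape[1]):
                        continue
                    if obstacle_mask[nr, nc]:
                        continue
                    nd = d + np.hypot(dr, dc) * weight[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return dist

The resulting array can be normalized into pixel values; paths traced over it are automatically pushed away from high-cost regions near obstacles.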
3. The path planning method according to claim 1, wherein deflecting the original path point based on the deflection angle to obtain the target path point closer to the obstacle area in the scene map than the original path point comprises:
obtaining an original deflection vector according to the coordinate difference between the original path point and the starting point in a preset coordinate system;
obtaining a target deflection vector according to the original deflection vector and the deflection angle;
deflecting the original path point according to the target deflection vector to obtain the target path point closer to the obstacle area in the scene map than the original path point.
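The operation in claim 3 is a plain 2-D vector rotation: the original deflection vector is the coordinate difference between the original path point and the starting point, and the target deflection vector is that vector turned by the deflection angle. A minimal sketch follows; the sign convention of the angle is an assumption.

    import math

    def deflect_point(start, original_point, angle_rad):
        """Rotate the original deflection vector (original_point - start)
        by angle_rad to get the target deflection vector, then apply it
        to the start point to obtain the target path point."""
        vx = original_point[0] - start[0]
        vy = original_point[1] - start[1]
        c, s = math.cos(angle_rad), math.sin(angle_rad)
        tx = c * vx - s * vy  # target deflection vector, x component
        ty = s * vx + c * vy  # target deflection vector, y component
        return (start[0] + tx, start[1] + ty)

    # Example: a unit step to the right, deflected by 30 degrees.
    print(deflect_point((0.0, 0.0), (1.0, 0.0), math.radians(30)))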
4. The path planning method according to claim 1 or 3, wherein before the original path point is deflected based on the deflection angle to obtain the target path point closer to the obstacle area in the scene map than the original path point, the path planning method further comprises:
acquiring the minimum spatial distance between the original path point and the obstacle area in the scene map; and
setting the deflection angle corresponding to the original path point according to the minimum spatial distance between the original path point and the obstacle area in the scene map.
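Claim 4 ties the deflection angle to the original path point's clearance but gives no formula. One hedged possibility is to taper the angle to zero as the path point nears the obstacle area, so the pull toward the edge weakens before the robot can clip it; everything below, including the cap and radius, is an illustrative assumption.

    import math

    def deflection_angle(clearance, max_angle_rad=math.radians(30),
                         influence_radius=10.0):
        """Full deflection when far from the obstacle area, tapering
        linearly to zero as the clearance (in pixels) shrinks."""
        if clearance >= influence_radius:
            return max_angle_rad
        return max_angle_rad * clearance / influence_radius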
5. The path planning method according to claim 1, wherein after the deflection path between the first target point and the second target point is obtained according to the deflection angle and the pixel value of each pixel point in the scene map and is used as the edge-approaching driving route of the robot, the path planning method further comprises:
selecting a plurality of speed limit adjustment points from the pixel points covered by the edge-approaching driving route;
for each of the plurality of speed limit adjustment points, acquiring a maximum driving speed corresponding to the speed limit adjustment point according to the maximum speed limit of the robot and an environmental safety factor around the speed limit adjustment point; and
adjusting the driving speed of the robot at the speed limit adjustment point according to the maximum driving speed.
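Claim 5 leaves open how the robot's maximum speed limit and the environmental safety factor combine. A simple multiplicative rule, with the factor clamped to [0, 1], is sketched below as an assumption, not as the claimed formula.

    def max_driving_speed(robot_speed_limit, safety_factor):
        """Scale the robot's maximum speed limit by the environmental
        safety factor around the speed limit adjustment point."""
        return robot_speed_limit * max(0.0, min(1.0, safety_factor))

    # e.g. a 1.2 m/s robot in a region with safety factor 0.5 -> 0.6 m/s
    print(max_driving_speed(1.2, 0.5))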
6. The path planning method according to claim 5, wherein before the maximum driving speed corresponding to the speed limit adjustment point is acquired according to the maximum speed limit of the robot and the environmental safety factor around the speed limit adjustment point, the path planning method further comprises:
dividing out an area to be convolved, the center point of which is the speed limit adjustment point and which includes other pixel points around the speed limit adjustment point;
performing convolution processing on the area to be convolved to obtain a safety coefficient; and
obtaining the environmental safety factor around the speed limit adjustment point based on the safety coefficient and the minimum spatial distance between the speed limit adjustment point and the obstacle area in the scene map.
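With a uniform kernel, the single convolution output centered on the speed limit adjustment point reduces to the mean occupancy of the window, which is what the sketch below computes. The kernel, the window size, and the multiplicative clearance scaling are all illustrative assumptions; the claim specifies none of them.

    import numpy as np

    def environment_safety_factor(occupancy, point, clearance, window=5):
        """Convolve a window centered on the speed limit adjustment point
        with a uniform kernel to measure local obstacle density, then
        fold in the point's clearance from the obstacle area."""
        r, c = point
        h = window // 2
        region = occupancy[max(0, r - h):r + h + 1,
                           max(0, c - h):c + h + 1].astype(float)
        kernel = np.full(region.shape, 1.0 / region.size)
        # Safety coefficient: 1.0 means the window is fully free.
        safety_coeff = 1.0 - float((region * kernel).sum())
        # Scale by clearance: closer to the obstacle area -> lower factor.
        return safety_coeff * min(1.0, clearance / 10.0)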
7. A path planning apparatus, comprising:
a map acquisition module, configured to acquire a scene map of a target environment, wherein the pixel value of each pixel point in the scene map represents the minimum weighted distance between the pixel point and a first target point, and the first target point is located in the scene map; and
a path planning module, configured to acquire a deflection path between the first target point and a second target point determined from the scene map, according to a deflection angle and the pixel value of each pixel point in the scene map, the deflection path serving as an edge-approaching driving route of the robot;
the path planning module comprises an original path point acquisition unit, a target path point acquisition unit and a path planning unit;
the original path point acquisition unit is configured to take the second target point as a starting point and, according to the feasible path direction from the second target point to the first target point, select, from a plurality of feature points, the feature point whose pixel value differs least from that of the starting point as an original path point, wherein the plurality of feature points are other pixel points around the starting point;
the target path point acquisition unit is configured to deflect the original path point based on the deflection angle to obtain a target path point closer to an obstacle area in the scene map than the original path point, take the target path point as a new starting point, and continue to obtain a next target path point, so as to obtain a plurality of target path points; and
the path planning unit is configured to connect the plurality of target path points to obtain a deflection path between the first target point and the second target point.
8. A robot, comprising a processor and a memory, wherein the memory stores a computer program and the processor is configured to execute the computer program to implement the path planning method of any one of claims 1 to 6.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements a path planning method as claimed in any one of claims 1 to 6.
CN202010445762.8A 2020-05-25 2020-05-25 Path planning method and device and robot Active CN111352430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010445762.8A CN111352430B (en) 2020-05-25 2020-05-25 Path planning method and device and robot

Publications (2)

Publication Number Publication Date
CN111352430A CN111352430A (en) 2020-06-30
CN111352430B (en) 2020-09-25

Family

ID=71195188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010445762.8A Active CN111352430B (en) 2020-05-25 2020-05-25 Path planning method and device and robot

Country Status (1)

Country Link
CN (1) CN111352430B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113108794B (en) * 2021-03-30 2022-09-16 北京深睿博联科技有限责任公司 Position identification method, device, equipment and computer readable storage medium
CN113741431A (en) * 2021-08-17 2021-12-03 嘉兴市敏硕智能科技有限公司 Obstacle avoidance path determining method, obstacle avoidance device and storage medium
CN114326462A (en) * 2021-11-16 2022-04-12 深圳市普渡科技有限公司 Robot system, method, computer device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007007802A (en) * 2005-07-01 2007-01-18 Toyota Motor Corp Legged robot and control method thereof
CN101813475A (en) * 2010-04-24 2010-08-25 上海交通大学 Method for adaptively detecting remote obstacle
CN107491070A (en) * 2017-08-31 2017-12-19 成都通甲优博科技有限责任公司 A kind of method for planning path for mobile robot and device
CN109300155A (en) * 2018-12-27 2019-02-01 常州节卡智能装备有限公司 A kind of obstacle-avoiding route planning method, device, equipment and medium
CN111006652A (en) * 2019-12-20 2020-04-14 深圳无境智能机器人有限公司 Method for running robot close to edge

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11579298B2 (en) * 2017-09-20 2023-02-14 Yujin Robot Co., Ltd. Hybrid sensor and compact Lidar sensor


Also Published As

Publication number Publication date
CN111352430A (en) 2020-06-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing

Patentee after: Beijing Yunji Technology Co.,Ltd.

Address before: Room 201, building 4, courtyard 8, Dongbeiwang West Road, Haidian District, Beijing

Patentee before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.