CN116079717A - Robot control method, robot, and computer-readable storage medium - Google Patents


Info

Publication number
CN116079717A
Authority
CN
China
Prior art keywords
robot
obstacle
control method
distance
collision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211614789.0A
Other languages
Chinese (zh)
Inventor
蒋云
杨辉
李通
王翰
徐壮
周恩德
吴松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Qinglang Intelligent Technology Co ltd
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Wuhan Qinglang Intelligent Technology Co ltd
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Qinglang Intelligent Technology Co Ltd and Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202211614789.0A
Publication of CN116079717A
Legal status: Pending

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 — Programme-controlled manipulators
    • B25J9/16 — Programme controls
    • B25J9/1656 — characterised by programming, planning systems for manipulators
    • B25J9/1664 — characterised by motion, path, trajectory planning
    • B25J9/1666 — Avoiding collision or forbidden zones
    • B25J9/1674 — characterised by safety, monitoring, diagnostic
    • B25J9/1676 — Avoiding collision or forbidden zones

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a robot control method, a robot, and a computer-readable storage medium. The robot has a field of view defined by a preset distance and viewing angle, and this field of view is divided into at least two sensing areas in a preset manner. The robot control method comprises the following steps: detecting obstacles separately within each of the at least two sensing areas; and, according to the obstacle information in the different sensing areas, triggering an anti-collision constraint and controlling the robot either to perform an obstacle-avoidance action or to maintain its current motion state. According to the embodiments of the invention, by dividing the robot's field of view into at least two different areas, detecting each area separately, and controlling the robot's motion according to the per-area detection results, the robot can avoid obstacles while the expansion coefficient is kept under control. This improves the robot's ability to pass through narrow spaces while ensuring safe operation, and allows it to cover as many positions within its movement range as possible.

Description

Robot control method, robot, and computer-readable storage medium
Technical Field
The present invention relates generally to the field of intelligent device control technology, and in particular, to a robot control method, a robot, and a computer readable storage medium.
Background
For robots that move in preset scenes, such as meal-delivery robots in restaurants and hotels or guide robots at exhibitions, collisions can occur during movement; the more complex the scene, the greater the possibility of collision, and people are generally present and moving within the robot's operating scene. To ensure the safety of the robot and of nearby people and equipment, robots are subject to strict safety and anti-collision requirements.
At present, mobile robots avoid obstacles during navigation through a local planning algorithm, but this cannot eliminate collisions, so the prior art adjusts the expansion coefficient of the expansion layer in the cost map to enlarge the map's expansion margin and better avoid collisions. Although the cost-map expansion layer can mitigate the collision problem, if the expansion coefficient is set too small, collisions can still occur due to perception and detection errors, localization problems, or defects in the local planning algorithm; and if the expansion coefficient is set too large, the robot's ability to pass through narrow passages may be impaired. In addition, in some boundary areas, such as the edge-following functions of commercial or household cleaning robots, an over-large expansion coefficient may prevent navigation from reaching the target position, or cause the robot to become trapped inside the cost map so that localization fails and no new navigation task can be started, which hinders the development and application of the robot's edge-following functions.
The matter described in this Background section reflects only what is known to the inventors and does not necessarily constitute prior art in the field.
Disclosure of Invention
In view of one or more drawbacks of the prior art, the present invention provides a robot control method. The robot has a field of view defined by a preset distance and viewing angle, and the field of view is divided into at least two sensing areas in a preset manner. The robot control method comprises:
respectively detecting obstacles in the respective ranges of the at least two sensing areas; and
triggering an anti-collision constraint and, according to the obstacle information in the different sensing areas, controlling the robot either to perform an obstacle-avoidance action or to maintain its current motion state.
According to one aspect of the invention, wherein the obstacle information includes a distance of the obstacle relative to the robot and an azimuth angle of the obstacle relative to the robot;
the anti-collision constraint comprises an anti-collision heading constraint; in the step of controlling the robot to perform an obstacle-avoidance action or to maintain its current motion state according to the obstacle information in the different sensing areas, when an obstacle is detected in any sensing area and its distance from the robot is smaller than a first threshold, the anti-collision heading constraint is triggered and the robot is controlled to rotate away from the detected obstacle.
According to an aspect of the present invention, in the step of controlling the robot to rotate away from the detected obstacle, the rotation angle of the robot is the complementary angle of the obstacle's azimuth relative to the robot's advancing direction, taken in the direction of rotation.
According to one aspect of the invention, the robot moves forward simultaneously during rotation and adjusts the rotation angle in real time according to the detected azimuth angle of the obstacle with respect to the robot.
According to one aspect of the invention, the visual field of the robot is divided by a center line in the advancing direction of the robot, and the sensing region of the robot includes a left sensing region and a right sensing region, which do not overlap each other.
According to one aspect of the invention, the robot detects obstacle information at a preset period, and when both the left and right sensing areas detect obstacles within one period, the obstacles are sorted from nearest to farthest by their distance from the robot, and the robot's rotation direction and angle are controlled in that order.
According to one aspect of the present invention, the step of controlling the robot to perform the action of avoiding the obstacle further includes:
continuously acquiring obstacle information in the visual field range of the robot;
and when obstacles are detected in different sensing areas and the distance between an obstacle and the robot is smaller than a second threshold, controlling the robot to stop moving or to rotate in place and perform a course search.
According to one aspect of the present invention, the step of controlling the robot to perform the action of avoiding the obstacle further includes:
continuously acquiring the distance of the current obstacle relative to the robot;
and when the distance between the obstacle and the robot is greater than a third threshold value, or the obstacle is separated from the visual field range of the robot, controlling the robot to jump out of the anti-collision course constraint.
According to one aspect of the invention, the sensing areas of the robot further include a forward sensing area, which is centered on the robot's forward direction, extends by a preset angle to each of the left and right sides, and partially overlaps the left and right sensing areas.
According to one aspect of the invention, the anti-collision constraints further include an anti-collision speed constraint that adaptively adjusts the movement speed of the robot according to the distance of the obstacle relative to the robot so that the movement speed of the robot is positively correlated with the distance value of the obstacle relative to the robot.
According to one aspect of the invention, the invention also includes a robot comprising:
a main body;
the motion device is arranged on the main body and can be driven to drive the main body to move;
the sensor is arranged on the main body and is configured to acquire obstacle information of the surrounding environment of the robot; and
a control system in communication with the sensor and the motion device and configured to be capable of performing the robotic control method as described above.
According to one aspect of the invention, the invention also includes a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the robot control method as described above.
Compared with the prior art, the embodiments of the present invention provide a robot control method in which the robot's field of view is divided into at least two different areas and each area is detected separately. The robot's motion consists mainly of moving forward and rotating left or right (some robots can also move backward), and existing obstacle-avoidance methods mainly rotate the robot to change its forward direction and steer it around the obstacle. By partitioning the robot's field of view and controlling the robot's motion according to the detection results of the different partitions, the robot can avoid obstacles while the expansion coefficient is kept under control, which improves the robot's passability while ensuring safe operation and allows it to cover as many positions within its movement range as possible. The invention also provides embodiments of a robot and a computer-readable storage medium capable of executing the foregoing robot control method.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a robot control method in one embodiment of the invention;
FIG. 2 is a flow diagram of a robot control method including an anti-collision heading constraint in one embodiment of the invention;
FIG. 3 is a flow chart of a robot control method including the case where an obstacle is too close to a robot in one embodiment of the invention;
FIG. 4 is a flow diagram of a robot control method including an anti-collision speed constraint in one embodiment of the invention;
FIG. 5 is a schematic illustration of a robot field of view division in one embodiment of the invention;
FIG. 6 is a schematic view of a robot in one embodiment of the invention;
fig. 7 is a block diagram of a robot in one embodiment of the invention.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be fixedly connected, detachably connected, or integrally connected, and may be mechanically connected, electrically connected, or may communicate with each other, for example; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials.
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Fig. 1 shows the specific flow of a robot control method 100 according to an embodiment of the present invention. The robot has a field of view defined by a preset distance and a preset viewing angle; information about the robot's surroundings, such as the position of an obstacle and its distance from the robot, is acquired, for example, by a sensor mounted on the robot. In the embodiment of the present invention, the robot's field of view is divided into at least two sensing areas in a preset manner; according to a preferred embodiment, the field of view is divided, for example, in the manner shown in Fig. 5, and the specific division is described in the following embodiments. The robot control method 100 is described below with reference to Fig. 1.
As shown in Fig. 1, in step S101 of the robot control method 100, obstacle detection is performed separately in each of the at least two sensing areas. The sensing areas may be realized by mounting several sensors on the robot, each acquiring obstacle information for its own area and controlled separately; preferably, however, the sensors first acquire information over the robot's complete field of view, and the obstacle information for the different sensing areas is then obtained through subsequent analysis. Furthermore, the sensing areas together make up the robot's complete field of view, with no gaps between them, so that obstacle information can be analyzed over the entire field of view.
Step S101 yields the sensing area in which each obstacle is located. In step S102, according to the obstacle information in the different sensing areas, an anti-collision constraint is triggered and the robot is controlled either to perform an obstacle-avoidance action or to maintain its current motion state. By selecting different avoidance logic for specific obstacle positions according to the detection results of the different sensing areas, the accuracy of the robot's obstacle avoidance is improved and the risk of collision is reduced, without any need to increase the expansion coefficient.
In a preferred embodiment of the present invention, the obstacle information acquired by the robot includes the obstacle's distance from the robot and its azimuth angle relative to the robot. For example, a robot coordinate system is established in which the obstacle's position is expressed in polar coordinates; since the robot's structural envelope is stored in advance, the obstacle's distance from the robot is known, and the azimuth can be simplified to the polar angle between the obstacle's center position and the robot's advancing direction.
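The polar representation described above can be sketched minimally in Python. This is an illustration, not code from the patent; the class and function names, the sign convention (x forward, y left, left azimuths positive), and the units are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Obstacle:
    """Obstacle pose in the robot's polar frame (illustrative names)."""
    distance: float  # metres from the robot centre
    azimuth: float   # degrees from the advancing direction; left is positive

def from_cartesian(x: float, y: float) -> Obstacle:
    """Convert a sensed point in the robot frame (x forward, y left)
    to the distance/azimuth form used by the control method."""
    return Obstacle(distance=math.hypot(x, y),
                    azimuth=math.degrees(math.atan2(y, x)))

# A point 1 m ahead and 1 m to the left sits 45 degrees left of the heading.
ob = from_cartesian(1.0, 1.0)
```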
In a preferred embodiment of the present invention, the anti-collision constraint includes an anti-collision heading constraint, that is, a constraint on the robot's direction of motion. To prevent collisions during actual movement, the robot's structural size cannot be ignored, so obstacles to the side of the robot's path must also be analyzed; for this purpose, the present embodiment introduces the azimuth angle of the obstacle relative to the robot. Under the anti-collision heading constraint, whenever an obstacle is detected in any sensing area, its collision risk must be assessed. An obstacle on the robot's path (directly ahead in the direction of motion) must be avoided, and its collision-distance limits differ from those of an obstacle that is not on the path, so the two cases are analyzed separately in this embodiment.
Fig. 2 shows a specific flow of a robot control method 200 including an anti-collision heading constraint according to a preferred embodiment of the present invention, wherein step S201 is substantially the same as step S101 in the robot control method 100, and will not be described again.
In step S202, the sensing area in which the obstacle was detected is determined; since the distance and azimuth limits differ between the areas of the robot's field of view, this embodiment identifies the specific area among the several sensing areas. Further, according to a preferred embodiment, when obstacles are detected in several sensing areas, the obstacles are sorted in step S203 by their distance from the robot and analyzed one by one from nearest to farthest; this reduces complex situations to simple ones, and the avoidance action is performed first for the obstacle closest to the robot.
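The nearest-first ordering of step S203 can be sketched as follows; the tuple layout is an assumption made for illustration, not a data structure from the patent.

```python
def prioritize(obstacles):
    """Sort detections nearest-first so the closest obstacle is
    avoided before any farther one.  Each obstacle is an assumed
    (distance_m, azimuth_deg) tuple."""
    return sorted(obstacles, key=lambda ob: ob[0])

# Three detections spread across the sensing areas:
detections = [(1.2, -30.0), (0.4, 15.0), (0.8, 60.0)]
ordered = prioritize(detections)
```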
In step S204, it is determined whether the distance between the obstacle and the robot is smaller than a first threshold. The first threshold may be preset for a specific robot, for example 15 cm; it may also be set separately for different sensing areas, or take different values depending on the robot's current speed. For example, the first threshold may increase with the robot's current speed, which provides a deceleration distance for the robot and avoids collisions.
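A speed-dependent first threshold of the kind just described could look like the sketch below. Only the 15 cm base value comes from the text; the linear form and the gain constant are assumptions for illustration.

```python
def first_threshold(speed_mps: float,
                    base_m: float = 0.15,
                    gain_s: float = 0.5) -> float:
    """Grow the trigger distance with the current speed so the robot
    always has room to decelerate.  base_m matches the 15 cm example
    in the text; gain_s (seconds of look-ahead) is an assumed tuning
    constant, and the linear relation is one simple choice."""
    return base_m + gain_s * max(speed_mps, 0.0)

# Standing still the threshold is the 15 cm base; at 1 m/s it grows.
t0 = first_threshold(0.0)
t1 = first_threshold(1.0)
```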
When the distance between the obstacle and the robot is not greater than the first threshold, the anti-collision heading constraint is triggered in step S205 and the robot is controlled to rotate away from the detected obstacle: for example, if the obstacle is detected in the sensing area on the robot's left, the robot is controlled to rotate to the right. Further, according to a preferred embodiment, the rotation angle is the complementary angle of the obstacle's azimuth relative to the robot's advancing direction, taken in the direction of rotation. For example, if the obstacle is 40° to the left of the robot's advancing direction, the robot is controlled to rotate to the right; the complement of that azimuth in the rotation direction is 50°, so in the preferred embodiment the robot is rotated 50° to the right. With this rotation angle, the obstacle ends up at 90° to the robot's side after the rotation. During the rotation, the robot's heading is updated in real time; as obstacle information continues to arrive, the relative position of the obstacle and the robot changes smoothly, and when the obstacle faced by the robot is relatively close, the current heading can be corrected and adjusted in real time so that the obstacle is kept to the robot's side as much as possible.
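The complementary-angle rule of step S205 reduces to a few lines; this is a sketch under the assumed convention that left azimuths are positive, not code from the patent.

```python
def avoidance_turn(azimuth_deg: float):
    """Return (direction, angle) for the anti-collision heading
    constraint: rotate away from the obstacle by the complement of
    its azimuth, which leaves the obstacle at 90 degrees on the
    robot's side.  Left azimuths are assumed positive."""
    direction = "right" if azimuth_deg > 0 else "left"
    angle = 90.0 - abs(azimuth_deg)
    return direction, angle

# The worked example from the text: obstacle 40 degrees to the left.
turn = avoidance_turn(40.0)
```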
Further, to prevent the robot from falling into cyclic rotation, when several obstacles are detected the robot avoids them one at a time in the sorted order, so that it never attempts to rotate away from two obstacles at the same time.
According to a preferred embodiment of the invention, the robot can be controlled to move forward while it rotates, so that after the rotation the obstacle moves, relative to the robot, toward the rear of the robot's direction of motion and no longer disturbs its movement. As the robot rotates and advances, the obstacle's azimuth relative to the robot changes, so the robot keeps detecting the obstacle and adjusts its rotation angle in real time according to the detected azimuth. According to a preferred embodiment, the distance between the obstacle and the robot is acquired continuously until it exceeds a third threshold, or the obstacle leaves the robot's field of view, at which point the robot is controlled to exit the anti-collision heading constraint. The third threshold may be set larger than the first threshold to stabilize entry into and exit from the constraint during movement: when the robot's adjustment takes the obstacle beyond the first threshold on its left or right, the constraint is not exited immediately; instead there is a dead band (for example 5 cm), so that if the first threshold is 15 cm, the third threshold may be set to 20 cm, and the constraint is exited only when the obstacle's distance exceeds the third threshold.
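The first/third-threshold dead band described above is a classic hysteresis, sketched below with the 15 cm / 20 cm example values from the text; the function shape itself is an illustrative assumption.

```python
def heading_constraint_active(active: bool, distance_m: float,
                              first_m: float = 0.15,
                              third_m: float = 0.20) -> bool:
    """Hysteresis between the first (engage) and third (release)
    thresholds.  The 5 cm dead band between them keeps the heading
    constraint from chattering on and off as the distance estimate
    fluctuates around a single threshold."""
    if not active:
        return distance_m < first_m   # engage only below the first threshold
    return distance_m <= third_m      # once engaged, release only above the third

engaged = heading_constraint_active(False, 0.10)   # obstacle at 10 cm: engage
held = heading_constraint_active(True, 0.17)       # 17 cm, inside dead band: stay on
released = heading_constraint_active(True, 0.25)   # 25 cm: exit the constraint
```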
For an obstacle directly in front of the robot, the robot may be controlled to rotate to either side, or the side may be chosen according to the obstacle's specific shape. Since the obstacle's size cannot be ignored during avoidance, the robot is preferably controlled to rotate toward the side on which the obstacle occupies the smaller proportion.
When the distance between the obstacle and the robot is greater than the first threshold, the robot is controlled in step S206 to continue in its current motion state. Provided there is no occlusion, the robot's field-of-view distance exceeds the first threshold, so a detected obstacle at such a distance poses no collision risk to the robot's motion: it is far enough away not to affect the robot's current movement.
In the embodiment of the invention, the correspondence between obstacle information and sensing areas follows the way the sensing areas are divided. The robot's field of view extends in its advancing direction and opens out to both sides at a preset angle, giving an approximately fan-shaped or semicircular field of view, which is divided along the robot's central axis. Specifically, as shown in Fig. 5, according to a preferred embodiment the field of view is divided by the center line of the robot's advancing direction into a left sensing area and a right sensing area; for distinction, the left sensing area is filled with diagonal stripes and the right with horizontal stripes in Fig. 5, and in this embodiment the two areas do not overlap. When an obstacle is detected in the left sensing area, the robot is controlled to rotate to the right; when an obstacle is detected in the right sensing area, the robot rotates in the opposite direction. This changes the obstacle's position relative to the robot and removes its influence on the robot's movement.
The extents of the left and right sensing areas depend on the robot's specific field of view; they may be symmetric about the robot's forward direction, or the two sides may be asymmetric. Preferably, the robot has a 180° field of view centered on its advancing direction; taking the advancing direction as 0°, the two side sensing areas then span 0° to 90° and -90° to 0°.
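Classifying a detection into the left or right sensing area of the symmetric 180° field of view above is a one-line sector test. The sketch is illustrative; assigning the 0° centerline to the left area is an arbitrary tie-break, and left azimuths are assumed positive.

```python
def sensing_region(azimuth_deg: float) -> str:
    """Map an obstacle azimuth onto the two side sensing areas of a
    180-degree field of view split along the heading: 0..90 degrees
    is the left area, -90..0 the right (0 assigned to 'left' as an
    arbitrary tie-break)."""
    if not -90.0 <= azimuth_deg <= 90.0:
        return "outside"          # behind the field of view: not sensed
    return "left" if azimuth_deg >= 0.0 else "right"

r1 = sensing_region(45.0)
r2 = sensing_region(-10.0)
r3 = sensing_region(120.0)
```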
Fig. 3 shows the specific flow of a robot control method 300 according to a preferred embodiment of the present invention, which covers the case where the obstacle is too close to the robot. Specifically, steps S301, S302, S304, S305, S306 and S308 of the robot control method 300 are substantially the same as steps S201, S202, S203, S204, S205 and S206 of the robot control method 200, respectively, and are not described again.
In step S303, it is determined whether the distance between the obstacle and the robot is greater than a second threshold. In this embodiment the second threshold is smaller than the first and may be set to a distance quite close to the robot, for example the limit beyond which the collision risk can no longer be avoided by rotating. When the distance is greater than the second threshold, the method continues with step S304. When it is not greater than the second threshold, the robot is controlled in step S307 to stop moving or to rotate in place and perform a course search. When an obstacle is this close, such as one that appears suddenly or one that can itself move, the robot cannot escape its influence by turning, so the robot is controlled to stop moving and avoid the collision; it may then perform a course search, for example rotating in place to acquire obstacle information about its surroundings and, based on that information, finding a movement path out of the obstacle's influence.
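The threshold ladder formed by the second and first thresholds can be sketched as a single decision function. The 5 cm second-threshold value is an assumption (the text only requires it to be smaller than the first threshold); the 15 cm first threshold is the example from the text.

```python
def react(distance_m: float,
          second_m: float = 0.05,
          first_m: float = 0.15) -> str:
    """Choose the avoidance action from the obstacle distance:
    below the second threshold the robot is too close to escape by
    turning, so it stops and searches for a heading; between the
    thresholds it rotates away; beyond the first it keeps moving."""
    if distance_m <= second_m:
        return "stop_and_search_heading"
    if distance_m < first_m:
        return "rotate_away"      # anti-collision heading constraint
    return "keep_moving"

a = react(0.03)
b = react(0.10)
c = react(0.50)
```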
According to various embodiments of the present invention, step S303 is not limited to the order shown in Fig. 3. Specifically, obstacle information within the robot's field of view may be acquired continuously, for example by capturing images of the surrounding environment at a preset frequency and detecting obstacles in the different sensing areas. Meanwhile, the relative position between the obstacle and the robot changes as the robot moves; whenever obstacles are detected in the robot's different sensing areas and the distance between an obstacle and the robot is smaller than the second threshold, the robot is controlled to stop moving or rotate in place and perform a course search.
Fig. 4 shows a specific flow of the robot control method 400 according to a preferred embodiment of the present invention, wherein step S401, step S402, step S403, step S404, step S405, step S406, step S408 and step S409 in the robot control method 400 are substantially the same as step S301, step S302, step S303, step S304, step S305, step S306, step S307 and step S308 in the robot control method 300, respectively, and are not described in detail.
In step S407, the motion speed of the robot is adjusted according to the distance between the robot and the obstacle. The anti-collision constraint further includes an anti-collision speed constraint, which adaptively adjusts the robot's motion speed according to the distance of the obstacle relative to the robot, so that the motion speed is positively correlated with that distance. Further, for obstacles in different sensing areas, the obstacle distance influences the robot's speed differently.
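One minimal way to realise a speed positively correlated with obstacle distance is a clamped linear ramp. This is a sketch under assumed parameters (`d_stop`, `d_free`, `v_max` are not specified in the text), not the patent's formula.

```python
def anti_collision_speed(distance, d_stop=0.3, d_free=1.5, v_max=0.8):
    """Anti-collision speed constraint sketch: scale the allowed speed
    linearly with obstacle distance so speed is positively correlated
    with distance. Below d_stop the robot halts; beyond d_free it may
    run at v_max. All numeric values are illustrative assumptions."""
    if distance <= d_stop:
        return 0.0
    if distance >= d_free:
        return v_max
    return v_max * (distance - d_stop) / (d_free - d_stop)
```

Per-area weighting (the "different influence" for different sensing areas) could be added by using a smaller `d_free` for side areas than for the forward area.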
According to a preferred embodiment of the present invention, the sensing region of the robot further includes a forward sensing area, such as the solid-color filled region in Fig. 5. Specifically, the forward sensing area may be centered on the robot's forward direction and extend by a preset angle to the left and right sides (from the robot's point of view), as shown in Fig. 5; the forward sensing area partially overlaps the left and right sensing areas. The preset angle is, for example, plus or minus 30 degrees; it may be calculated from the robot's detection distance and the width of the robot's structure, or it may be set manually. An obstacle in the forward sensing area lies on the robot's movement path, so even when its distance is still fairly large, for example 100 cm, the robot's speed can be reduced to avoid a collision, and it is reduced further as the distance between the robot and the obstacle shrinks.
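The text says the preset angle may be calculated from the detection distance and the robot's structural width without giving the formula. One plausible geometric reading, offered purely as an assumption, is the half-angle subtended at the detection distance by half the robot's width:

```python
import math

def forward_halfangle_deg(robot_width, detect_distance):
    """Illustrative derivation (an assumption, not the patent's
    formula): the forward sensing half-angle is the angle subtended,
    at the detection distance, by half the robot's structural width,
    i.e. the corridor the robot body will sweep through."""
    return math.degrees(math.atan2(robot_width / 2.0, detect_distance))
```

For a robot 0.6 m wide checking at a 0.5 m distance this gives roughly 31°, in the neighbourhood of the ±30° example in the text.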
The invention also includes an embodiment of a robot 1, which comprises a main body 10, a motion device 20, a sensor 30 and a control system 40, as shown for example in Fig. 6. The main body 10 is the main structural frame of the robot 1, and the components of the robot 1 are all mounted on it. The main body 10 may be made of alloy or organic materials to form a rigid frame, and the various components of the robot 1 are arranged at corresponding positions within the main body 10 to realize their specific functions.
The motion device 20 is arranged on the main body 10 and drives the robot 1 to move. For example, the motion device 20 includes differential wheels on the two sides of the bottom of the main body 10; driving the differential wheels to rotate moves the robot 1, and driving the wheels on the two sides at different speeds steers the robot 1.
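The differential-drive steering described here follows standard kinematics: a body linear speed and turning rate map to two wheel speeds, and a nonzero speed difference produces a turn. A minimal sketch, with the track width as an assumed parameter:

```python
def wheel_speeds(v, omega, track_width=0.4):
    """Differential-drive kinematics sketch: convert a body linear
    speed v (m/s) and angular speed omega (rad/s) into left/right
    wheel rim speeds. track_width (distance between the two drive
    wheels) is an assumed value, not taken from the patent."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right
```

Equal wheel speeds give straight-line motion; opposite speeds of equal magnitude rotate the robot in place, which is the motion used for the course search described earlier.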
The sensor 30 is provided on the main body 10 and can acquire obstacle information within the field of view in the movement direction of the robot 1. The sensor 30 may be an image acquisition device such as a camera, or a device capable of acquiring point cloud information such as a lidar.
As shown in fig. 7, the control system 40 communicates with the sensor 30 and is capable of executing the robot control method as described in the foregoing embodiment to acquire obstacle information of the surrounding environment of the robot 1. Meanwhile, the control system 40 also communicates with the movement device 20, performs the aforementioned robot control method, and controls the movement state of the robot 1 according to the acquired obstacle information.
Specifically, the robot 1 further includes a mobile chassis, and the control system 40 provides a function controller for user operation, an underlying controller for map generation and path planning, and an element controller for controlling the mobile unit and the environment detection unit.
Wherein the movement device 20 is provided with at least two groups of driving wheels, each group of driving wheels is respectively positioned at one side of the movable chassis, and the robot 1 is driven to move by the driving wheels. The element controller is used for controlling the rotation speed of the driving wheel so as to drive the robot 1 to move, preferably, the driving wheels at different sides of the movable chassis are respectively controlled, and the robot steering is realized by controlling the driving wheels to rotate at different rotation speeds.
Specifically, the motion device 20 is provided with two sets of driving wheels, one serving as a left driving wheel and the other as a right driving wheel, positioned on opposite sides of the chassis; when the robot 1 needs to turn, the left and right driving wheels are driven at different rotation speeds so that the robot 1 turns. Optionally, the motion device 20 may further include at least two sets of driven wheels, one set of driven wheels corresponding to each set of driving wheels, with at least one set serving as a left driven wheel and at least one set as a right driven wheel. The left and right driven wheels assist the driving wheels in carrying the main body 10 and the mobile chassis of the robot 1, which reduces the load on the driving wheels and improves the stability of the robot 1. The driven wheels may be placed on the center line between the left and right driving wheels so as to distribute the weight of the robot 1 as evenly as possible.
The bottom of the mobile chassis is provided with at least one turn signal lamp unit, each comprising at least one turn signal lamp. When the robot 1 turns, the element controller controls the turn signal lamps in the turn signal lamp units to light up in a preset manner. According to this embodiment of the invention, lighting the turn signal when the robot 1 turns alerts nearby pedestrians and improves the movement safety of the robot 1.
In combination with the foregoing embodiment, optionally, the element controller controls the turn signal lamp in the turn signal lamp unit to be turned on in a preset manner when a speed difference of the driving wheels on both sides of the moving chassis is greater than a preset value. Further, the robot 1 further includes a voice module electrically connected to the element controller, and when the robot 1 turns, the voice module is controlled to send out voice prompt information.
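The wheel-speed-difference trigger can be sketched as follows. The threshold value and the convention that a faster right wheel means a left turn are illustrative assumptions:

```python
def should_signal_turn(v_left, v_right, threshold=0.1):
    """Trigger the turn signal (and optionally a voice prompt) when
    the speed difference between the two drive wheels exceeds a
    preset value, as described in the text. The threshold (m/s) is an
    illustrative assumption. Returns which side's lamp to light, or
    None when the robot is effectively going straight; a faster right
    wheel is assumed to mean a left turn."""
    diff = v_right - v_left
    if abs(diff) <= threshold:
        return None
    return "left" if diff > 0 else "right"
```

In the architecture above, this check would live in the element controller, which already owns both wheel speeds.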
The lidar system may be disposed on the main body 10, and the lidar system may rotate along a set plane, so that a photoelectric receiving array of the lidar system forms a scan cylinder.
Specifically, the lidar system may be arranged in a slot of the housing of the robot 1 and emits laser signals to detect surrounding objects. The lidar system comprises a photoelectric receiving array and an array of laser emitting units, so that when the lidar system rotates along a set plane, the photoelectric receiving array forms a scanning cylindrical surface. This yields a larger scanning range, makes it easier to capture details of object shapes, and helps the robot 1 avoid colliding with obstacles. Optionally, the set plane may be horizontal, so that the robot 1 can detect objects while travelling; other set planes, such as a vertical plane, may also be selected according to specific user requirements, which is not limited in this embodiment.
According to an embodiment of the invention, the robot path planning method can be applied to any robot. It should be noted that, when the robot is operated, the steps may be performed sequentially or simultaneously according to the actual situation, without limitation here. The robot path planning method provided by the application comprises the following steps:
First, a map is built within the robot's range of movement and a positioning map of the current working area is determined; the positioning map is constructed by the robot mapping the environment in which it is located. Specifically, the robot is equipped with acquisition sensors and a modeling processor, which builds the environment map from the data the sensors acquire. In this embodiment the acquisition sensors include a lidar, an ultrasonic sensor and an infrared sensor, which collect data about the robot's working area. The modeling processor uses the collected data to create the map; during map creation, different sensors generate different map layers, such as a static layer, a dynamic obstacle layer, an ultrasonic layer and a visual layer, and these layers are fused to obtain the positioning map used for the robot's localization and navigation.
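The layer fusion step can be sketched as combining per-sensor cost grids into one map. The patent does not specify the fusion operator; the cell-wise maximum used below is a common convention in layered costmaps and is an assumption here.

```python
def fuse_layers(*layers):
    """Fuse per-sensor costmap layers (static, dynamic-obstacle,
    ultrasonic, visual, ...) into one positioning/navigation grid by
    taking the cell-wise maximum cost. The max operator is an assumed
    fusion rule, not taken from the patent. Each layer is a 2-D list
    of equal shape with cell costs (e.g. 0 = free, 255 = lethal)."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[max(layer[r][c] for layer in layers) for c in range(cols)]
            for r in range(rows)]
```

Taking the maximum keeps a cell marked as an obstacle if any one sensor flags it, which is the conservative choice for collision avoidance.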
And secondly, planning a path according to the positioning map.
Further, the current position and the target position of the robot are determined from the positioning map, the position of the obstacle is determined from the positioning map, and a path is planned according to the current position, the target position and the obstacle position.
Specifically, the target position is a position set by a user or a position to be moved determined by a processing system of the robot, wherein the target position may be a position to be moved next determined in the moving process or a position to be finally reached by the robot. The current position is real-time position information of the robot determined by the position sensor.
The position of the obstacle on the positioning map is determined from the positioning map. In this way, the robot can determine the obstacle position and plan its route without compromising navigation accuracy.
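Planning from a current position to a target position while avoiding obstacle cells can be sketched with breadth-first search over the grid. BFS is an illustrative choice; the patent does not name a planning algorithm.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Plan a path from the current position to the target position on
    a grid built from the positioning map (True = obstacle cell).
    Breadth-first search is used as an illustrative planner; the
    patent does not specify the algorithm. Returns a list of
    (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:             # reconstruct path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in parents):
                parents[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

On a 4-connected grid with uniform step costs, BFS already returns a shortest path; a weighted planner such as A* would be the natural replacement for non-uniform costs.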
And finally, controlling the robot to move according to the planned path.
According to a preferred embodiment of the present invention, there is also provided a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the robot control method as described above.
Finally, it should be noted that the foregoing describes only preferred embodiments of the present invention, and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A robot control method, the robot having a field of view range of a preset distance and field of view angle, and the field of view range being divided into at least two sensing areas in a preset manner, the robot control method comprising:
respectively detecting obstacles in the respective ranges of the at least two sensing areas; and
triggering anti-collision constraint and controlling the robot to execute the action of avoiding the obstacle or continuously keeping the current motion state according to the obstacle information in different sensing areas.
2. The robot control method according to claim 1, wherein the obstacle information includes a distance of the obstacle with respect to the robot and an azimuth angle of the obstacle with respect to the robot;
the anti-collision constraint comprises anti-collision course constraint, and in the step of controlling the robot to execute the action of avoiding the obstacle or continuously keeping the current motion state according to the obstacle information in different sensing areas, when the obstacle is detected in any sensing area and the distance between the obstacle and the robot is smaller than a first threshold value, the anti-collision course constraint is triggered, and the robot is controlled to rotate in the opposite direction of the detected obstacle.
3. The robot control method according to claim 2, wherein in the step of controlling the robot to rotate in the opposite direction in which the obstacle is detected, the rotation angle of the robot is a complementary angle of an azimuth angle of the obstacle with respect to the advancing direction of the robot in the rotation direction.
4. A robot control method according to claim 3, wherein the robot moves forward simultaneously during rotation and adjusts the rotation angle in real time according to the detected azimuth angle of the obstacle with respect to the robot.
5. The robot control method according to any one of claims 2 to 4, wherein the visual field of the robot is divided by a center line in a forward direction of the robot, and the sensing region of the robot includes a left sensing region and a right sensing region, which do not overlap each other.
6. The robot control method according to claim 5, wherein the robot detects the obstacle information at a preset period, and when both the left sensing area and the right sensing area detect obstacles within one period, the obstacles are sorted from near to far according to their distance relative to the robot, and the rotation direction and the rotation angle of the robot are controlled in that order.
7. The robot control method according to any one of claims 2 to 4, wherein the step of controlling the robot to perform the action of avoiding the obstacle further comprises:
continuously acquiring obstacle information in the visual field range of the robot;
and when the obstacles are detected in different sensing areas and the distance between the obstacle and the robot is smaller than a second threshold value, controlling the robot to stop moving or rotate in situ, and performing course searching.
8. The robot control method according to any one of claims 2 to 4, wherein the step of controlling the robot to perform the action of avoiding the obstacle further comprises:
continuously acquiring the distance of the current obstacle relative to the robot;
and when the distance between the obstacle and the robot is greater than a third threshold value, or the obstacle is separated from the visual field range of the robot, controlling the robot to jump out of the anti-collision course constraint.
9. The robot control method according to claim 5, wherein the sensing region of the robot further comprises a forward sensing region which is centered on the forward direction of the robot and extends to the left and right sides by a predetermined angle, the forward sensing region partially overlapping with the left and right sensing regions.
10. The robot control method according to any one of claims 1 to 4, wherein the collision avoidance constraint further includes a collision avoidance speed constraint that adaptively adjusts a movement speed of the robot according to a distance of the obstacle with respect to the robot so that the movement speed of the robot is positively correlated with a distance value of the obstacle with respect to the robot.
11. A robot, comprising:
a main body;
the motion device is arranged on the main body and can be driven to drive the main body to move;
the sensor is arranged on the main body and is configured to acquire obstacle information of the surrounding environment of the robot; and
a control system in communication with the sensor and the motion device and configured to be capable of performing the robotic control method of any one of claims 1-10.
12. A computer readable storage medium comprising computer executable instructions stored thereon, which when executed by a processor implement the robot control method of any of claims 1-10.
CN202211614789.0A 2022-12-15 2022-12-15 Robot control method, robot, and computer-readable storage medium Pending CN116079717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211614789.0A CN116079717A (en) 2022-12-15 2022-12-15 Robot control method, robot, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116079717A true CN116079717A (en) 2023-05-09

Family

ID=86207296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211614789.0A Pending CN116079717A (en) 2022-12-15 2022-12-15 Robot control method, robot, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116079717A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117207176A (en) * 2023-07-21 2023-12-12 北京小米机器人技术有限公司 Robot control method, device, robot and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination