CN111857126A - Robot obstacle avoidance method, robot and storage medium - Google Patents

Robot obstacle avoidance method, robot and storage medium

Info

Publication number: CN111857126A
Authority: CN (China)
Prior art keywords: robot, obstacle, virtual, collision, collision area
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010478923.3A
Other languages: Chinese (zh)
Inventor: 缪昭侠
Current Assignee: Shenzhen Silver Star Intelligent Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shenzhen Silver Star Intelligent Technology Co Ltd
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd
Priority to CN202010478923.3A
Publication of CN111857126A
Priority to PCT/CN2020/142473 (WO2021238222A1)

Classifications

    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/28 Floor-scrubbing machines, motor-driven
    • A47L11/4002 Installations of electric equipment
    • A47L11/4011 Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • A47L11/4061 Steering means; means for avoiding obstacles; details related to the place where the driver is accommodated
    • A47L11/4072 Arrangement of castors or wheels
    • A47L9/009 Carrying-vehicles; arrangements of trollies or wheels; means for avoiding mechanical obstacles
    • A47L9/2805 Parameters or conditions being sensed
    • A47L9/2852 Elements for displacement of the vacuum cleaner or the accessories therefor, e.g. wheels, casters or nozzles
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the field of robots and discloses a robot obstacle avoidance method, a robot and a storage medium. The robot obstacle avoidance method first establishes a coordinate system based on the cleaning environment in which the robot is located, then obtains the positions of the robot and of an obstacle in the coordinate system together with the advancing direction of the robot, the obstacle including a virtual obstacle. It then determines an effective collision area according to the position and advancing direction of the robot and a first safety distance, and judges whether the virtual obstacle at least partially falls into the effective collision area; if so, it controls the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance. The method can therefore control the robot to walk along a virtual obstacle of any shape using only the position of the robot, the position of the virtual obstacle and the first safety distance; the shape of the virtual obstacle is not limited, a better obstacle avoidance effect is achieved, and user experience is further improved.

Description

Robot obstacle avoidance method, robot and storage medium
Technical Field
The present invention relates to the field of robots, and in particular, to a robot obstacle avoidance method, a robot, and a storage medium.
Background
With the continuous iterative upgrading of smart homes, cleaning robots can realize more and more functions. If the user does not want the cleaning robot to enter a certain area, for example a bedroom or a bathroom, a forbidden zone or virtual wall function can be set so that the cleaning robot bypasses that zone or wall.
However, in the current conventional technology, a forbidden zone can only be set as a rectangular area whose sides run parallel to the axes of the map on which the robot walks, and a virtual wall must likewise run parallel or perpendicular to those axes. The robot can therefore only walk along such rectangular zones and cannot follow forbidden zones or virtual walls of other shapes, for example a curved virtual wall or an arbitrary polygonal forbidden zone, resulting in a poor cleaning effect and a poor user experience.
Disclosure of Invention
The present invention solves at least one of the above technical problems to a certain extent. To this end, the present invention provides a robot obstacle avoidance method, a robot and a storage medium that enable the robot to walk along virtual obstacles of various shapes, achieving a better obstacle avoidance effect and improving user experience.
In one aspect, an embodiment of the present invention provides an obstacle avoidance method for a robot, including:
establishing a coordinate system based on the cleaning environment where the robot is located;
acquiring the position and the advancing direction of the robot in the coordinate system and the position of an obstacle in the coordinate system, wherein the obstacle comprises a virtual obstacle;
determining an effective collision area according to the position of the robot, the advancing direction and a first safety distance;
judging whether the virtual obstacle at least partially falls into the effective collision area;
if yes, controlling the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance.
In some embodiments, the determining an effective collision area from the position of the robot, the advancing direction and the first safety distance comprises:
determining a virtual collision area located on the peripheral side of the robot, wherein the distance between the boundary of the virtual collision area and the robot is equal to the first safety distance;
and the part of the virtual collision area close to the advancing direction of the robot forms the effective collision area, wherein the angle between the advancing direction and the line connecting the center of the robot to any point on the boundary of the effective collision area is less than or equal to a first preset angle.
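As an illustrative sketch of this construction (not part of the claims; the function names are hypothetical, and the 90-degree default for the first preset angle is taken from the embodiment described later), membership of a map point in the effective collision area reduces to a distance test plus an angle test:

```python
import math

def angle_diff(a, b):
    """Signed difference a - b between two angles, wrapped into (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def in_effective_collision_area(robot_pos, heading, point,
                                first_safe_dist, first_preset_angle=math.pi / 2):
    """True if `point` falls inside the effective collision area.

    The virtual collision area is the disc of radius `first_safe_dist`
    around the robot; the effective part keeps only the directions whose
    angle to the advancing direction is <= `first_preset_angle`.
    """
    dx = point[0] - robot_pos[0]
    dy = point[1] - robot_pos[1]
    if math.hypot(dx, dy) > first_safe_dist:
        return False                      # outside the virtual collision area
    bearing = math.atan2(dy, dx)          # direction of the line robot center -> point
    return abs(angle_diff(bearing, heading)) <= first_preset_angle
```

Wrapping the angle difference into (-pi, pi] ensures that a point directly behind the robot yields an angle of pi and is correctly excluded.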
In some embodiments, the effective collision area comprises a left side collision area, a front side collision area and a right side collision area, and the judging whether the virtual obstacle at least partially falls into the effective collision area comprises:
judging whether the virtual obstacle has an intersection point with the boundary of the effective collision area according to the position of the robot and the position of the virtual obstacle;
if so, determining that the virtual obstacle at least partially falls into the effective collision area;
if it is judged that the virtual obstacle at least partially falls into the effective collision area, determining a virtual collision point according to the position of the part of the virtual obstacle falling into the effective collision area and the position of the robot, wherein the virtual collision point is the point of the virtual obstacle, among the points falling into the effective collision area, that is closest to the center of the robot;
and determining the position at which the virtual obstacle at least partially falls into the effective collision area according to the virtual collision point, the left side collision area, the front side collision area and the right side collision area.
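A hedged sketch of the virtual collision point determination, assuming the virtual obstacle is stored as sampled coordinate points (as in the representation given in the detailed description) and that the first preset angle defaults to 90 degrees; all names are illustrative:

```python
import math

def virtual_collision_point(robot_pos, heading, obstacle_pts,
                            first_safe_dist, first_preset_angle=math.pi / 2):
    """Point of the virtual obstacle, inside the effective collision area,
    that is closest to the robot center; None if no point falls inside."""
    best, best_d = None, float("inf")
    for (px, py) in obstacle_pts:
        dx, dy = px - robot_pos[0], py - robot_pos[1]
        d = math.hypot(dx, dy)
        if d > first_safe_dist:
            continue                                  # outside the virtual collision area
        ang = (math.atan2(dy, dx) - heading) % (2 * math.pi)
        ang = min(ang, 2 * math.pi - ang)             # unsigned angle to the heading
        if ang > first_preset_angle:
            continue                                  # outside the effective part
        if d < best_d:
            best, best_d = (px, py), d
    return best
```

A linear scan over the sampled points suffices here because the virtual obstacle is a short polyline; a spatial index would only matter for very large maps.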
In some embodiments, the determining the position at which the virtual obstacle at least partially falls into the effective collision area according to the virtual collision point, the left side collision area, the front side collision area and the right side collision area comprises:
marking the direction of the line connecting the virtual collision point and the center of the robot as the obstacle direction;
if the angle between the obstacle direction and the advancing direction is larger than a second preset angle and the obstacle direction lies on the left side of the advancing direction, the virtual obstacle at least partially falls into the left side collision area;
if the angle between the obstacle direction and the advancing direction is larger than the second preset angle and the obstacle direction lies on the right side of the advancing direction, the virtual obstacle at least partially falls into the right side collision area;
if the angle between the obstacle direction and the advancing direction is smaller than or equal to the second preset angle, whether the obstacle direction lies on the left side or the right side of, or coincides with, the advancing direction, the virtual obstacle at least partially falls into the front side collision area,
wherein the second preset angle is smaller than the first preset angle.
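The left/front/right determination above can be sketched as follows; the sign convention (counterclockwise angles positive, so a positive signed angle means the obstacle direction lies to the left of the advancing direction in a standard x-right, y-up map frame) is an assumption, as are the names:

```python
import math

def classify_zone(robot_pos, heading, collision_point, second_preset_angle):
    """Classify which part of the effective collision area the virtual
    collision point lies in: 'left', 'front' or 'right'.

    The obstacle direction is the line from the robot center to the
    virtual collision point; angles larger than the second preset angle
    map to the side areas, smaller or equal ones to the front area.
    """
    dx = collision_point[0] - robot_pos[0]
    dy = collision_point[1] - robot_pos[1]
    diff = (math.atan2(dy, dx) - heading) % (2 * math.pi)
    if diff > math.pi:
        diff -= 2 * math.pi          # signed: > 0 means left of heading, < 0 right
    if abs(diff) <= second_preset_angle:
        return "front"
    return "left" if diff > 0 else "right"
```

Because the second preset angle is smaller than the first, every point already known to be inside the effective collision area is assigned to exactly one of the three sub-areas.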
In some embodiments, the determining whether the virtual obstacle at least partially falls into the effective collision area further comprises:
if the virtual obstacle does not fall into the right side collision area, controlling the robot to rotate until the virtual obstacle at least partially falls into the right side collision area.
In some embodiments, the virtual obstacle comprises a first virtual obstacle and a second virtual obstacle, the method further comprising, when the robot walks along the first virtual obstacle:
determining whether the second virtual obstacle at least partially falls into the left side collision area or the front side collision area;
if so, controlling the robot to rotate until the second virtual obstacle at least partially falls into the right side collision area, and controlling the robot to walk along the second virtual obstacle according to the position of the robot, the position of the second virtual obstacle and the first safety distance.
In some embodiments, the controlling the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance comprises:
generating a virtual detection signal which takes the position of the robot as an origin and extends in a preset direction to the position of the virtual obstacle, wherein the preset direction forms an acute angle with the advancing direction of the robot;
determining the intersection point of the virtual detection signal and the virtual obstacle as a virtual intersection point;
determining the distance between the robot and the virtual intersection point as a first actual distance;
and controlling the robot to walk along the virtual obstacle according to whether the first actual distance satisfies the first safety distance.
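A minimal sketch of the virtual detection signal, under the assumption that the virtual obstacle is stored as a polyline of line segments; the ray direction is the advancing direction plus an acute preset offset angle. This is standard ray-segment intersection, not code from the patent:

```python
import math

def first_actual_distance(robot_pos, heading, offset, segments):
    """Cast a virtual detection signal (a ray) from the robot position in
    direction heading + offset (offset is the acute preset angle) and
    return the distance to the nearest intersection with the virtual
    obstacle, or None if the ray misses every segment."""
    ox, oy = robot_pos
    d = heading + offset
    rx, ry = math.cos(d), math.sin(d)          # unit ray direction
    best = None
    for (x1, y1), (x2, y2) in segments:
        sx, sy = x2 - x1, y2 - y1
        denom = rx * sy - ry * sx
        if abs(denom) < 1e-12:
            continue                            # ray parallel to this segment
        # Solve robot + t*r == p1 + u*s for t (along ray) and u (along segment).
        t = ((x1 - ox) * sy - (y1 - oy) * sx) / denom
        u = ((x1 - ox) * ry - (y1 - oy) * rx) / denom
        if t >= 0 and 0 <= u <= 1 and (best is None or t < best):
            best = t                            # t is the distance (r has unit length)
    return best
```

Because the ray direction has unit length, the smallest valid parameter t is directly the first actual distance from the robot to the virtual intersection point.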
In some embodiments, the obstacle further comprises a real obstacle, and if the real obstacle and the virtual obstacle exist at the same time, the method further comprises:
acquiring a first difference value between the first actual distance and the first safety distance;
acquiring a second actual distance from the robot to the real obstacle;
acquiring a second difference value between the second actual distance and a second safety distance;
judging whether the first difference value is smaller than the second difference value;
if so, controlling the robot to walk along the virtual obstacle according to whether the first actual distance satisfies the first safety distance;
if not, controlling the robot to walk along the real obstacle according to whether the second actual distance satisfies the second safety distance.
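The arbitration between the real and the virtual obstacle can be sketched as below; the patent does not specify whether the difference values are signed or absolute, so signed margins are assumed here, and the names are illustrative:

```python
def choose_obstacle_to_follow(first_actual, first_safe, second_actual, second_safe):
    """Decide which obstacle the robot should follow when a virtual and a
    real obstacle are present simultaneously: follow whichever obstacle's
    actual distance exceeds its safety distance by the smaller margin."""
    first_diff = first_actual - first_safe     # margin to the virtual obstacle
    second_diff = second_actual - second_safe  # margin to the real obstacle
    return "virtual" if first_diff < second_diff else "real"
```

Intuitively, the obstacle with the smaller margin is the one the robot would reach first, so its safety distance governs the walking behaviour.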
In a second aspect, an embodiment of the present invention provides a robot, including: a main body;
the driving wheel component is arranged on the main body and used for driving the robot to move;
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a robot obstacle avoidance method as described above.
In a third aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, which stores computer-executable instructions for causing a robot to perform the robot obstacle avoidance method described above.
Compared with the prior art, the invention has at least the following beneficial effects. The robot obstacle avoidance method first establishes a coordinate system based on the cleaning environment in which the robot is located, then obtains the position and advancing direction of the robot in the coordinate system and the position of an obstacle in the coordinate system, the obstacle including a virtual obstacle. It then determines an effective collision area according to the position and advancing direction of the robot and a first safety distance, judges whether the virtual obstacle at least partially falls into the effective collision area, and if so, controls the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance. Therefore, whatever the shape of the virtual obstacle, as long as it enters the effective collision area the method can control the robot to walk along it using the position of the robot, the position of the virtual obstacle and the first safety distance; the shape of the virtual obstacle is not limited, a better obstacle avoidance effect is achieved, and user experience is further improved.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
Fig. 1a is a schematic perspective view of a robot according to an embodiment of the present invention;
FIG. 1b is a bottom schematic view of the robot of FIG. 1 a;
fig. 2 is a schematic representation diagram of an exclusion zone and a virtual wall according to an embodiment of the present invention;
fig. 3 is a flowchart of a robot obstacle avoidance method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of step S34 in FIG. 3;
FIG. 5 is a schematic diagram of a method for determining an effective collision zone according to an embodiment of the present invention;
fig. 6 is a flowchart of a robot obstacle avoidance method according to another embodiment of the present invention;
FIG. 7 is a schematic flow chart of step S35 in FIG. 6;
FIG. 8 is a schematic diagram of determining a virtual intersection point according to an embodiment of the present invention;
fig. 9 is a flowchart of a robot obstacle avoidance method according to yet another embodiment of the present invention;
FIG. 10 is a schematic diagram of a method for determining an actual distance from a robot to a real obstacle and an actual distance from a virtual obstacle according to an embodiment of the present invention;
Fig. 11 is a schematic view of an obstacle avoidance apparatus for a robot according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a controller according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, provided they do not conflict, the various features of the embodiments of the invention may be combined with each other within the protection scope of the invention. Additionally, although functional modules are divided in the device schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division or the flowchart sequence. The terms "first", "second", "third" and the like used in the present invention do not limit data or execution order, but merely distinguish items that are substantially the same or similar in function and effect.
Referring to fig. 1a and 1b, the robot 100 according to the embodiment of the present invention may be configured in any suitable shape to implement a specific service function operation, for example, the robot 100 according to the embodiment of the present invention may be a cleaning robot, a pet robot, a handling robot, a nursing robot, etc. The cleaning robot includes, but is not limited to, a sweeping robot, a dust collecting robot, a mopping robot, or a floor washing robot.
The robot 100 includes a main body 11 and a driving wheel part 12, and a controller. The body 11 may be generally oval, triangular, D-shaped or otherwise shaped in profile. The controller is arranged in the main body 11, the driving wheel component 12 is mounted in the main body 11 and is used for driving the robot 100 to move, and if the robot 100 is a cleaning robot, the driving wheel component 12 drives the robot 100 to move on a surface to be cleaned, wherein the surface to be cleaned can be a relatively smooth floor surface, a surface paved with a carpet, or other surfaces needing to be cleaned.
In some embodiments, the driving wheel assembly 12 includes a left driving wheel 121, a right driving wheel 122, and an omni-directional wheel 123, and the left driving wheel 121 and the right driving wheel 122 are respectively mounted to opposite sides of the main body 11. The omni-directional wheel 123 is installed at a front position of the bottom of the main body 11, and the omni-directional wheel 123 is a movable caster wheel and can horizontally rotate 360 degrees, so that the robot 100 can flexibly steer. The left driving wheel 121, the right driving wheel 122 and the omni-directional wheel 123 are installed to form a triangle, so that the walking stability of the robot 100 is improved.
In some embodiments, the omni-directional wheel 123 may be omitted, and only the left driving wheel 121 and the right driving wheel 122 may be left to drive the robot 100 to normally walk.
In some embodiments, a controller is disposed inside the main body 11, and the controller is electrically connected to the left driving wheel 121, the right driving wheel 122, and the omni-wheel 123, respectively. The controller serves as a control core of the robot 100 and is used for controlling the robot 100 to walk, go backwards and perform some business logic processing.
In some embodiments, the controller may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single chip, an ARM (Acorn RISC machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the controller may be any conventional processor, controller, microcontroller, or state machine. A controller may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
In some embodiments, during the movement of the robot 100, the controller uses SLAM (simultaneous localization and mapping) technology to construct a map and locate the robot from environmental data. The controller instructs the robot 100 to fully traverse an environmental space based on the established map and the position of the robot 100. In addition, the user may preset forbidden zones or virtual walls that the robot 100 must not enter; each set forbidden zone or virtual wall corresponds to the position of a virtual obstacle, and the map indicates the area the robot 100 needs to traverse and the coordinates of any obstacles located in that area. It will be appreciated that the controller may also identify traversed locations or areas, or identify obstacles, in a variety of ways to develop a control strategy that meets product needs.
Since a virtual obstacle is marked in the map by its coordinates, a virtual wall can be represented as a line segment formed by continuous points spaced no more than 1 cm apart; the segment may be straight or curved. Similarly, a forbidden zone can be set to the shape of any polygon, and the edges of the polygon may themselves be curved. As shown in fig. 2, the area contains a curved virtual wall a1, a first forbidden zone a2 formed by a five-pointed-star area, and a second forbidden zone A3 formed by a polygon. In the conventional technology, when avoiding a virtual obstacle, the robot 100 can only walk along a rectangular or straight-line virtual obstacle and cannot walk along a virtual obstacle of arbitrary shape, so the obstacle avoidance effect is poor.
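For illustration only (the patent does not prescribe an implementation), the representation described above, a virtual wall as continuous points spaced no more than 1 cm apart, could be produced by resampling the user-drawn polyline; the function name and the metre units are assumptions:

```python
import math

def sample_virtual_wall(vertices, max_gap=0.01):
    """Resample a polyline (virtual wall or forbidden-zone edge) into
    points spaced at most `max_gap` metres (1 cm) apart."""
    pts = []
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]):
        seg_len = math.hypot(x2 - x1, y2 - y1)
        n = max(1, math.ceil(seg_len / max_gap))   # subdivisions for this segment
        for i in range(n):
            t = i / n
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    pts.append(vertices[-1])                        # close with the final vertex
    return pts
```

The same resampling applies to a polygonal forbidden zone by listing its vertices with the first vertex repeated at the end, so the boundary is closed.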
Referring to fig. 3 and fig. 5 together, an embodiment of the invention provides a robot obstacle avoidance method, as shown in fig. 3, the robot obstacle avoidance method S30 includes:
S31, establishing a coordinate system based on the cleaning environment where the robot is located;
S32, acquiring the position and the advancing direction of the robot in the coordinate system and the position of an obstacle in the coordinate system, wherein the obstacle comprises a virtual obstacle;
S33, determining an effective collision area according to the position of the robot, the advancing direction and a first safety distance;
S34, judging whether the virtual obstacle at least partially falls into the effective collision area;
S35, if yes, controlling the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance.
The robot 100 encounters obstacles during walking or cleaning, including real obstacles and virtual obstacles. For a real obstacle, obstacle avoidance consists of detecting a collision with a collision sensor and then performing subsequent processing. A virtual obstacle, such as a forbidden zone or a virtual wall, never physically collides with the robot 100, so no sensor detects a collision; the embodiment of the present invention therefore simulates a virtual collision between the robot 100 and the virtual obstacle. If a virtual collision occurs, the robot 100 takes corresponding measures, such as braking or walking along the obstacle. The first safety distance is set as the effective braking range of the robot 100: within this range the robot 100 can brake in time to prevent a collision with the obstacle.
To simulate a virtual collision, the method first establishes a coordinate system based on the clean environment in which the robot 100 is located, sets effective collision zones in the coordinate system, such as a4, a5, and a6 in fig. 5, and considers that the robot 100 has a virtual collision with the virtual obstacle L1 as long as the virtual obstacle L1 at least partially falls within the effective collision zones. While the effective collision zones a4, a5 and a6 are determined according to the position, the advancing direction and the first safe distance of the robot 100, in some embodiments, the effective collision zones a4, a5 and a6 are determined by firstly acquiring the position and the advancing direction of the robot 100 in the coordinate system and the position of the virtual obstacle L1 in the coordinate system, and according to the position, the advancing direction and the first safe distance of the robot 100, firstly determining a virtual collision zone on the peripheral side of the robot 100, wherein the boundary of the virtual collision zone is equal to the first safe distance from the robot 100, only the zone within the first safe distance is a virtual collision zone, and the zone outside the first safe distance can be disregarded even if the virtual obstacle L1 exists, because the zone is farther from the robot 100. In the virtual collision area, a part of the virtual collision area, which is close to the advancing direction of the robot 100, forms an effective collision area, and the included angle between the connecting line direction of any point on the boundary of the effective collision area and the center of the robot 100 and the advancing direction is smaller than or equal to a first preset angle, namely, the virtual collision area is in the area within the left and right preset ranges of the advancing direction. 
In the embodiment of the present invention, the first preset angle may be 90 degrees, so that the angle between the advancing direction and the line connecting any point on the boundary of the effective collision zone to the center of the robot 100 is smaller than or equal to 90 degrees. For obstacle avoidance, only the area in the forward direction of the robot 100 needs to be considered; obstacles in the areas behind the robot 100 can be ignored, because when the robot 100 needs to move backwards it first rotates towards the rear and then walks forwards. An effective collision zone can therefore be determined from the position of the robot 100, the advancing direction and the first safety distance.
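The zone-membership test described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function name, the point-based obstacle representation, and the degree-based angle handling are assumptions made for illustration.

```python
import math

def in_effective_collision_zone(robot_pos, heading_deg, safe_dist, point,
                                half_angle_deg=90.0):
    """True if `point` lies in the effective collision zone: within
    `safe_dist` of the robot centre (the virtual collision zone) AND
    within `half_angle_deg` of the heading on either side (the first
    preset angle, 90 degrees in the embodiment)."""
    dx = point[0] - robot_pos[0]
    dy = point[1] - robot_pos[1]
    if math.hypot(dx, dy) > safe_dist:
        return False  # beyond the first safe distance: disregarded
    # angle between the line robot->point and the advancing direction
    ang = math.degrees(math.atan2(dy, dx)) - heading_deg
    ang = (ang + 180.0) % 360.0 - 180.0  # normalise to [-180, 180)
    return abs(ang) <= half_angle_deg
```

A point directly behind the robot fails the angle test even when it is closer than the first safe distance, matching the rule that areas behind the robot are ignored.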
If the virtual obstacle L1 at least partially falls into the effective collision zones a4, a5 and a6, a virtual collision is considered to have occurred between the robot 100 and the virtual obstacle L1, and the robot 100 is controlled to walk along the virtual obstacle L1 according to the position of the robot 100 and the first safety distance. Specifically, with the first safe distance as the target value, the robot 100 is controlled to walk along the virtual obstacle L1 while keeping the first safe distance from it. During walking, the actual distance between the robot 100 and the virtual obstacle L1 is acquired in real time, and the robot 100 is steered according to whether the actual distance satisfies the first safe distance: if the robot 100 has drifted away from the virtual obstacle L1, it is steered slightly to the right until the first safe distance is restored; if it has drifted towards the virtual obstacle L1, it is steered slightly to the left until the first safe distance is restored. In this way the robot 100 keeps the first safe distance from the virtual obstacle L1 while walking along it.
In summary, when the virtual obstacle L1 at least partially falls into the effective collision zone of the robot 100, the robot 100 is controlled to walk along the virtual obstacle L1 according to the position of the robot 100, the position of the virtual obstacle L1 and the first safety distance. The robot obstacle avoidance method does not limit the shape and type of the virtual obstacle L1, achieves a better obstacle avoidance effect, and further improves user experience.
In some embodiments, referring to fig. 4 and 5 together, the effective collision zone includes a left collision zone a5, a front collision zone a4 and a right collision zone a6, and referring to fig. 4, step S34 includes:
s341, judging whether the virtual obstacle and the boundary of the effective collision area have an intersection point according to the position of the robot and the position of the virtual obstacle;
s342, if yes, determining that at least part of the virtual barrier falls into the effective collision area;
s343, if the virtual obstacle is judged to at least partially fall into the effective collision area, determining a virtual collision point according to the position of the part of the virtual obstacle falling into the effective collision area and the position of the robot, wherein the virtual collision point is the point on the virtual obstacle falling into the effective collision area that is closest to the center of the robot;
s344, determining the position of the virtual barrier, at least part of which falls into the effective collision area, according to the virtual collision point, the left side collision area, the front side collision area and the right side collision area.
The virtual obstacle L1 is set in advance by the user, for example through software that interacts with the robot 100. Once set, the virtual obstacle L1 exists on the map of the space in which the robot 100 travels, and information such as its coordinate position can be acquired. From the coordinate information of the virtual obstacle L1 and of the effective collision zone, it can be determined whether the virtual obstacle L1 intersects the boundary of the effective collision zone. Specifically, if the virtual obstacle L1 is a virtual wall, it can be represented in the coordinate system by a linear equation in two variables, and the boundary of the effective collision zone by a quadratic equation in two variables; solving the two equations simultaneously yields the coordinates of any intersection points. If the virtual obstacle L1 has an intersection point with the boundary of the effective collision zone, it is determined that the virtual obstacle L1 at least partially falls into the effective collision zone.
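Solving the wall's linear equation together with the zone boundary's quadratic equation amounts to a standard line-circle intersection, assuming the boundary is a circular arc of radius equal to the first safe distance. A minimal sketch under that assumption (the function name and the `a*x + b*y + c = 0` wall representation are illustrative choices, not from the patent):

```python
import math

def wall_circle_intersections(a, b, c, centre, r):
    """Intersection points of the virtual-wall line a*x + b*y + c = 0
    with the circular zone boundary of radius r about `centre`.
    Returns [] (no virtual collision), one point (tangent), or two."""
    x0, y0 = centre
    # perpendicular distance from the circle centre to the line
    d = abs(a * x0 + b * y0 + c) / math.hypot(a, b)
    if d > r:
        return []
    # foot of the perpendicular from the centre onto the line
    k = (a * x0 + b * y0 + c) / (a * a + b * b)
    fx, fy = x0 - a * k, y0 - b * k
    if math.isclose(d, r):
        return [(fx, fy)]
    # half-chord length along the line's unit direction (-b, a)
    h = math.sqrt(r * r - d * d) / math.hypot(a, b)
    return [(fx - b * h, fy + a * h), (fx + b * h, fy - a * h)]
```

An empty result means the virtual wall stays outside the zone boundary, i.e. no virtual collision is detected.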
If it is determined that the virtual obstacle L1 at least partially falls into the effective collision zone, the next step is to determine which effective collision zone it falls into, i.e. whether the virtual obstacle lies in the left collision zone a5, the front collision zone a4 or the right collision zone a6. The effective collision zone comprises a plurality of zones whose ranges are not limited and whose sizes need not be the same. For example, as shown in fig. 5, with the advancing direction of the robot 100 indicated by the solid arrow, the angle between the two boundary lines of the front collision zone a4 is the first preset angle, which may be 90 degrees, the zone straddling the left and right sides of the advancing direction; the angle between the two boundary lines of the left collision zone a5 is a second preset angle, which may be 45 degrees, the zone lying to the left of the advancing direction; and the angle between the two boundary lines of the right collision zone a6 is likewise the second preset angle, which may be 45 degrees, the zone lying to the right of the advancing direction. Of course, in other embodiments the first preset angle and the second preset angle are not limited to the above examples and may be set flexibly according to actual needs.
If it is determined that the virtual obstacle L1 at least partially falls within the effective collision zone, the robot 100 and the virtual obstacle L1 are considered to have had a virtual collision, and the virtual collision point C is determined from the position of the part of the virtual obstacle falling within the effective collision zone and the position of the robot 100. The virtual collision point C is defined as the point on the virtual obstacle L1 falling within the effective collision zone that is closest to the center point a of the robot 100. This step can also be implemented by coordinate calculation. For example, if the virtual obstacle L1 has two intersection points with the boundary of the effective collision zone, the coordinates of all points on the virtual obstacle L1 between the two intersection points are taken, the distance from each point to the center point a of the robot 100 is calculated, and the point with the shortest distance is the virtual collision point C. If the virtual obstacle L1 has more than two intersection points with the boundary of the effective collision zone, the same procedure is applied to all points on the virtual obstacle L1 between those intersection points. If the virtual obstacle L1 has only one intersection point with the boundary of the effective collision zone, that intersection point is the point on the virtual obstacle L1 closest to the center point a of the robot 100 and is therefore the virtual collision point C.
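If the part of the virtual obstacle inside the zone is sampled as discrete coordinate points, the virtual collision point C is simply the nearest sampled point. A minimal illustrative sketch (the sampling-based representation and the function name are assumptions):

```python
import math

def virtual_collision_point(samples, robot_centre):
    """Among sampled points of the virtual obstacle falling inside the
    effective collision zone, return the one closest to the robot
    centre point a: the virtual collision point C."""
    ax, ay = robot_centre
    return min(samples, key=lambda p: math.hypot(p[0] - ax, p[1] - ay))
```

With one sample (the single-intersection case) the function trivially returns that point, matching the text.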
Finally, from the virtual collision point C, the left collision zone a5, the front collision zone a4 and the right collision zone a6, the specific position of the effective collision zone into which the virtual obstacle L1 at least partially falls is determined, i.e. which effective collision zone the virtual obstacle L1 falls into. There are many ways to make this determination. In some embodiments, the direction of the line connecting the virtual collision point C and the center a of the robot 100 is marked as the obstacle direction. If the included angle between the obstacle direction and the advancing direction is greater than a second preset angle and the obstacle direction lies to the left of the advancing direction, the virtual obstacle L1 at least partially falls into the left collision zone a5, where the second preset angle is smaller than the first preset angle; its specific value can be set by the user as needed and is 45 degrees in the embodiment of the present invention. If the included angle between the obstacle direction and the advancing direction is greater than the second preset angle and the obstacle direction lies to the right of the advancing direction, the virtual obstacle at least partially falls into the right collision zone a6. If the included angle between the obstacle direction and the advancing direction is smaller than or equal to the second preset angle, whether the obstacle direction lies to the left of, to the right of, or along the advancing direction, the virtual obstacle L1 at least partially falls into the front collision zone a4.
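The angle test above can be sketched with a signed angle between the obstacle direction and the advancing direction. The 45-degree second preset angle matches the embodiment; the function name and the degree-based coordinate conventions are illustrative assumptions.

```python
import math

def classify_zone(robot_centre, heading_deg, collision_point,
                  second_preset_deg=45.0):
    """Classify the virtual collision point C into the left (a5),
    front (a4) or right (a6) collision zone by the signed angle
    between the obstacle direction (centre -> C) and the heading."""
    dx = collision_point[0] - robot_centre[0]
    dy = collision_point[1] - robot_centre[1]
    ang = math.degrees(math.atan2(dy, dx)) - heading_deg
    ang = (ang + 180.0) % 360.0 - 180.0  # signed: positive = left side
    if ang > second_preset_deg:
        return "left"    # a5
    if ang < -second_preset_deg:
        return "right"   # a6
    return "front"       # a4, includes the on-heading case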
In some embodiments, which effective collision zone the virtual obstacle L1 at least partially falls into may also be determined directly from the coordinates of the virtual obstacle L1 and the coordinates of each effective collision zone. Specifically, the coordinates of the virtual obstacle L1 are compared against the coordinates of each effective collision zone to see whether there is an overlapping area: if the coordinates of the virtual obstacle L1 overlap those of the left collision zone a5, it is determined that the virtual obstacle L1 at least partially falls into the left collision zone a5; if they overlap those of the right collision zone a6, it at least partially falls into the right collision zone a6; and if they overlap those of the front collision zone a4, it at least partially falls into the front collision zone a4.
Thus, the location at which the virtual obstacle falls at least partially within the effective collision zone may be determined by a variety of methods as described above. In some embodiments, if the virtual obstacle does not fall into the right collision zone, the robot 100 is controlled to rotate until the virtual obstacle at least partially falls into the right collision zone.
Therefore, if the virtual obstacle L1 does not fall into the right collision zone a6 but at least partially falls into the left collision zone a5 or the front collision zone a4, the robot 100 is controlled to rotate until the virtual obstacle L1 at least partially falls into the right collision zone a6, a virtual collision point of the virtual obstacle L1 with the right collision zone a6 is determined, and then the virtual collision is responded according to the virtual collision point.
In some embodiments, the robot 100 further detects the obstacle situation in the left area and the forward direction area simultaneously during the process of walking along the obstacle, specifically, the virtual obstacle L1 includes a first virtual obstacle and a second virtual obstacle, and when the robot 100 walks along the first virtual obstacle, referring to fig. 6, the robot obstacle avoidance method S60 further includes:
s61, judging whether the second virtual obstacle at least partially falls into the left side collision area or the front side collision area;
and S62, if so, controlling the robot to rotate until at least part of the second virtual obstacle falls into the right collision area, and controlling the robot to walk along the second virtual obstacle according to the position of the robot, the position of the second virtual obstacle and the first safety distance.
While the robot 100 walks along the first virtual obstacle, the first virtual obstacle at least partially falls into the right collision zone a6 of the robot 100. At the same time, the robot 100 detects whether the second virtual obstacle partially falls into the left collision zone a5 or the front collision zone a4. If it does, the robot 100 has virtually collided with the second virtual obstacle in the left collision zone a5 or the front collision zone a4, responds to the virtual collision immediately, and then walks along the second virtual obstacle. Specifically, when the virtual collision occurs, the robot 100 is controlled to rotate until the second virtual obstacle at least partially falls into the right collision zone a6, and the robot 100 is then controlled to walk along the second virtual obstacle according to the position of the robot 100, the position of the second virtual obstacle and the first safe distance, performing a new edgewise movement. This allows the robot 100 to detect virtual obstacles in other directions during obstacle avoidance, achieving a better obstacle avoidance effect.
In summary, when the virtual obstacle L1 at least partially falls into the effective collision zone of the robot 100, the robot 100 is controlled to walk along the virtual obstacle L1 according to the position of the robot 100, the position of the virtual obstacle L1 and the first safety distance. The robot obstacle avoidance method does not limit the shape and type of the virtual obstacle L1, achieves a better obstacle avoidance effect, and further improves user experience.
In some embodiments, referring to fig. 7 and 8, step S35 includes:
s351, generating a virtual detection signal which takes the position of the robot as an original point and a preset direction as a direction and extends to the position of the virtual obstacle, wherein the preset direction and the advancing direction of the robot form an acute angle;
After it is determined that a virtual collision has occurred between the virtual obstacle L1 and the effective collision zone, the robot 100 is controlled to walk along the virtual obstacle L1, taking the distance from the robot 100 to the virtual collision point C as the initial actual distance and the first safe distance as the target distance. During walking, a virtual detection signal is generated that takes the position of the robot 100 as its origin and extends in a preset direction towards the position of the virtual obstacle L1, the preset direction forming an acute angle with the advancing direction of the robot 100. The origin of the virtual detection signal is the position of the robot 100, specifically the center point a of the robot 100, and its direction is the preset direction pointing towards the virtual obstacle. Generally, the preset direction is the direction in which the origin a points towards the ToF sensor of the robot 100. Taking the forward direction of the robot 100 as the positive direction, the ToF sensor is disposed at the front-right of the robot 100 and can detect the distance between a real obstacle and the robot 100. Mounting it at the front-right rather than directly to the right allows the distance to an obstacle on the right side to be judged in advance, so that obstacle avoidance can be prepared early and the robot can be controlled to travel along the obstacle more smoothly. The acute angle between the preset direction and the advancing direction of the robot 100 is therefore the angle between the ToF sensor direction and the advancing direction of the robot 100.
S352, determining the intersection point of the virtual detection signal and the virtual obstacle as a virtual intersection point;
s353, determining the distance between the robot and the virtual intersection point as a first actual distance;
s354, controlling the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance.
The virtual detection signal points towards the virtual obstacle; extending it, its intersection point with the virtual obstacle L1 is determined as the virtual intersection point C, and the distance from the robot 100 to the virtual intersection point C is determined as the first actual distance, i.e. the actual distance from the position of the robot 100 to the position of the virtual wall. For example, as shown in fig. 8, the virtual wall L1 is a curve, shown as a dashed line; the advancing direction of the robot 100 is shown by the solid arrow; the origin of the robot 100 is point a; the position of the ToF sensor of the robot 100 is point B; and the direction from point a to point B is the preset direction. A virtual detection signal extending towards the virtual wall L1 is generated, represented by the straight line L. The intersection point of the straight line L and the virtual wall is point C, the virtual intersection point, and the length of the line segment BC is taken as the first actual distance, i.e. the distance from the robot 100 to the virtual intersection point C.
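If the virtual wall is approximated by a line segment, the virtual intersection point C can be found by a standard ray-segment intersection. An illustrative sketch, not the patented implementation (the segment representation and names are assumptions):

```python
def ray_wall_intersection(origin, direction, w1, w2):
    """Intersection of the virtual detection ray (origin + t*direction,
    t >= 0) with the virtual-wall segment w1-w2; None if they miss."""
    ox, oy = origin
    dx, dy = direction
    x1, y1 = w1
    x2, y2 = w2
    ex, ey = x2 - x1, y2 - y1          # wall segment direction
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                    # parallel: no single intersection
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom  # along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom  # along the wall
    if t < 0 or not (0.0 <= u <= 1.0):
        return None
    return (ox + t * dx, oy + t * dy)
```

The Euclidean distance from the sensor point B to the returned point then gives the first actual distance.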
In some embodiments, after the first actual distance is determined, the robot 100 is controlled to walk along the virtual obstacle L1 according to whether the first actual distance satisfies the first safe distance. The first actual distance is in effect the actual distance between the right side of the robot 100 and the virtual wall L1: if the first actual distance is greater than the first safety distance, the robot 100 is steered to the right until its distance from the virtual wall L1 returns to the first safety distance; if the first actual distance is less than the first safety distance, the robot 100 is steered to the left until its distance from the virtual wall L1 returns to the first safety distance. The first safety distance may be set to 5 cm and can be adjusted by the user as desired; this step may be implemented with a PID control algorithm.
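The steering rule above is, at its simplest, a proportional correction on the distance error, which a full PID controller generalizes. A minimal sketch under that reading (the gain and sign convention are assumptions for illustration):

```python
def edge_follow_correction(actual_dist, safe_dist, kp=1.0):
    """Proportional steering correction for walking along the virtual
    wall on the robot's right: positive output steers right (towards
    the wall), negative steers left. A full implementation would add
    integral and derivative terms (PID) for smoother tracking."""
    error = actual_dist - safe_dist  # > 0 means too far from the wall
    return kp * error
```

At exactly the first safe distance the correction is zero and the robot holds its course.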
Therefore, no matter which part of the effective collision zone the virtual obstacle L1 falls at least partially into, the first actual distance of the robot 100 from the virtual obstacle L1 is determined by the actual distance of the right side of the robot 100 from the virtual obstacle L1. Even if the virtual obstacle L1 is detected to at least partially fall into the effective collision area in the left collision area A5 or the front collision area A4, the obstacle avoidance method of the robot can well achieve obstacle avoidance and achieve a better obstacle avoidance effect.
In some embodiments, if the left collision zone a5, the right collision zone a6 and the front collision zone a4 each detect that the virtual obstacle L1 at least partially falls into them, the virtual collisions are responded to according to a preset priority. Specifically, the preset priority may be: left collision zone a5 response priority > front collision zone a4 response priority > right collision zone a6 response priority. That is, the virtual collision in the left collision zone a5 is responded to first; if the virtual obstacle L1 does not at least partially fall into the left collision zone a5, the virtual collision in the front collision zone a4 is responded to; and if the virtual obstacle L1 falls into neither the left collision zone a5 nor the front collision zone a4, the virtual collision in the right collision zone a6 is responded to.
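The preset priority can be sketched as a simple ordered check (function and zone names are illustrative assumptions):

```python
def respond_zone(hit_left, hit_front, hit_right):
    """Pick which virtual collision to respond to when several zones
    are hit at once: left (a5) before front (a4) before right (a6)."""
    if hit_left:
        return "left"
    if hit_front:
        return "front"
    if hit_right:
        return "right"
    return None  # no zone hit: no virtual collision to respond to
```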
In summary, when the virtual obstacle L1 at least partially falls into the effective collision zone of the robot 100, the robot 100 is controlled to walk along the virtual obstacle L1 according to the position of the robot 100, the position of the virtual obstacle L1 and the first safety distance. The robot obstacle avoidance method does not limit the shape and type of the virtual obstacle L1, achieves a better obstacle avoidance effect, and further improves user experience.
In some embodiments, the obstacle further includes a real obstacle, and if the real obstacle and the virtual obstacle L1 exist at the same time, referring to fig. 9, the robot obstacle avoidance method S90 includes:
S91, acquiring a first difference value between the first actual distance and the first safe distance;
s92, acquiring a second actual distance between the robot and the real obstacle;
s93, acquiring a second difference value between the second actual distance and the second safe distance;
s94, judging whether the first difference value is smaller than the second difference value;
s95, if yes, controlling the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance;
and S96, if not, controlling the robot to walk along the real obstacle according to whether the second actual distance meets the second safety distance.
While walking along an obstacle, the robot 100 may encounter a situation in which a real obstacle and the virtual obstacle L1 exist simultaneously. For example, while the robot 100 walks along the virtual obstacle L1, the ToF sensor disposed at the front-right of the robot 100 may detect a real obstacle on the right side, so that both the virtual obstacle L1 and a real obstacle exist in the space to the right of the robot 100. For a real obstacle, the safe distance of the robot 100 is a second safe distance: the robot 100 keeps within the second safe distance of the real obstacle while walking along it, performing an edgewise movement along the real obstacle. In the embodiment of the present invention, the second safe distance is 1 cm, and the user may set it as needed.
During edgewise walking, when a real obstacle and the virtual obstacle L1 exist simultaneously in the space to the right of the robot 100, the robot 100 measures the distance between itself and the real obstacle in real time with the ToF sensor; this distance is called the second actual distance. A first difference between the first actual distance and the first safe distance is obtained, then a second difference between the second actual distance and the second safe distance, and different measures are taken according to the magnitudes of the two differences: if the first difference is smaller than the second difference, the robot 100 is controlled to walk along the virtual obstacle L1 according to the first actual distance and the first safe distance; if the first difference is larger than the second difference, the robot 100 is controlled to walk along the real obstacle according to the second actual distance and the second safe distance. That is, the smaller difference is selected as the feedback value for motion control: while the robot 100 walks along the obstacle, the control target value may be set to zero, and the smaller difference is used as the feedback value to control the robot 100 to walk along the obstacle corresponding to that difference. For example, as shown in fig. 10, the advancing direction of the robot 100 is indicated by the arrow in the figure; the actual distance from the robot 100 to the virtual obstacle L1 is S1 and its safe distance is S2; the actual distance from the robot 100 to the first real obstacle L2 is S3 and its safe distance is S4; and the actual distance from the robot 100 to the second real obstacle L3 is S5 with the same safe distance S4 (not shown in the figure). The difference between each actual distance and the corresponding safe distance is obtained. For illustration, let S2 equal S4, both set to 1 cm; in other embodiments the two may differ, for example S2 = 5 cm and S4 = 1 cm. With S2 = S4, the difference for the virtual wall L1 is ΔS1 = S1 − S2 = 0, the difference for the first real obstacle L2 is ΔS2 = S3 − S4 < 0, and the difference for the second real obstacle L3 is ΔS3 = S5 − S4 > 0. The difference ΔS2 is therefore fed back to the robot 100, which, with ΔS2 as the feedback value and zero as the target value, is controlled to walk along the first real obstacle L2 in an edgewise movement.
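Selecting the smaller difference as the feedback value can be sketched as follows, reproducing the fig. 10 example with S2 = S4 = 1 cm. The list-of-tuples interface and the obstacle labels are illustrative assumptions, not from the patent.

```python
def select_follow_target(candidates):
    """candidates: list of (name, actual_dist, safe_dist) tuples.
    Returns the obstacle whose difference (actual - safe) is smallest,
    together with that difference, which is then used as the feedback
    value of a controller whose target value is zero."""
    name, actual, safe = min(candidates, key=lambda c: c[1] - c[2])
    return name, actual - safe
```

With S1 = 1, S3 = 0.5 and S5 = 2 (cm), the first real obstacle L2 has the smallest difference and is selected, matching the example in the text.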
When the robot walks and a real obstacle and a virtual obstacle exist at the same time, this method seamlessly switches the obstacle the robot follows along its edge, controlling the robot to walk along the appropriate obstacle according to the actual situation, thereby achieving a better obstacle avoidance effect.
In conclusion, when the virtual obstacle at least partially falls into the effective collision area of the robot, the robot is controlled to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance. The robot obstacle avoidance method does not limit the shape and type of the virtual obstacle, achieves a better obstacle avoidance effect, and further improves user experience.
It should be noted that, in the foregoing embodiments, a certain order does not necessarily exist between the foregoing steps, and it can be understood by those skilled in the art from the description of the embodiments of the present invention that, in different embodiments, the foregoing steps may have different execution orders, that is, may be executed in parallel, may also be executed in an exchange manner, and the like.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a robot obstacle avoidance device. The robot obstacle avoidance device may be a software module comprising a plurality of instructions stored in a memory of the robot; the processor may access the memory and call and execute the instructions to complete the robot obstacle avoidance method described in each of the above embodiments.
In some embodiments, the robot obstacle avoidance device may also be built from hardware devices; for example, it may be built from one or more chips that work in coordination with each other to complete the robot obstacle avoidance method described in each embodiment. As another example, the obstacle avoidance device may be built from various logic devices, such as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components.
Referring to fig. 11, fig. 11 is a diagram illustrating a robot obstacle avoidance device 200 according to an embodiment of the present invention. The robot obstacle avoidance device 200 includes an establishing module 21, a first obtaining module 22, a first determining module 23, a first judging module 24, and a first control module 25.
The establishing module 21 is used for establishing a coordinate system based on the clean environment where the robot is located;
the first acquiring module 22 is configured to acquire a position and a heading direction of the robot in the coordinate system, and a position of an obstacle in the coordinate system, where the obstacle includes a virtual obstacle;
The first determining module 23 is configured to determine an effective collision zone according to the position of the robot, the forward direction, and a first safety distance;
the first judging module 24 is configured to judge whether the virtual obstacle at least partially falls into the effective collision area;
the first control module 25 is configured to control the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle, and the first safety distance.
Therefore, when the virtual obstacle at least partially falls into the effective collision area of the robot, the robot obstacle avoidance device controls the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safe distance. The device does not limit the shape or type of the virtual obstacle, achieves a better obstacle avoidance effect, and further improves the user experience.
In some embodiments, the first determining module 23 is specifically configured to: determining a virtual collision area positioned on the periphery side of the robot, wherein the distance between the boundary of the virtual collision area and the robot is equal to the first safety distance; and a part of the virtual collision area, which is close to the advancing direction of the robot, forms the effective collision area, and the included angle between the connecting line direction of any point on the boundary of the effective collision area and the center of the robot and the advancing direction is smaller than or equal to a first preset angle.
In some embodiments, the effective collision zones include a left collision zone, a front collision zone, and a right collision zone, and the first determining module 23 is further specifically configured to: judge whether the virtual obstacle has an intersection point with the boundary of the effective collision area according to the position of the robot and the position of the virtual obstacle; if so, determine that at least part of the virtual obstacle falls into the effective collision area; if the virtual obstacle is judged to at least partially fall into the effective collision area, determine a virtual collision point according to the position of the part of the virtual obstacle falling into the effective collision area and the position of the robot, wherein the virtual collision point is the point on the virtual obstacle falling into the effective collision area that is closest to the center of the robot; and determine the position of the virtual obstacle at least partially falling into the effective collision area according to the virtual collision point, the left side collision area, the front side collision area and the right side collision area.
In some embodiments, the first determining module 23 is further specifically configured to: mark the direction of the line connecting the virtual collision point and the center of the robot as an obstacle direction;
if the included angle between the obstacle direction and the advancing direction is greater than a second preset angle and the obstacle direction is located on the left side of the advancing direction, the virtual obstacle at least partially falls into the left side collision area;
if the included angle between the obstacle direction and the advancing direction is greater than the second preset angle and the obstacle direction is located on the right side of the advancing direction, the virtual obstacle at least partially falls into the right side collision area; and
if the included angle between the obstacle direction and the advancing direction is smaller than or equal to the second preset angle, whether the obstacle direction is located on the left side of, on the right side of, or along the advancing direction, the virtual obstacle at least partially falls into the front side collision area,
wherein the second preset angle is smaller than the first preset angle.
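The angle-based classification above reduces to a signed-angle comparison. The sketch below is one possible reading of the embodiment (angles in radians, names illustrative; the patent does not specify an implementation):

```python
import math

def classify_zone(robot_pos, heading, collision_pt, second_preset_angle):
    """Classify the virtual collision point into the left, front or right
    part of the effective collision area by the angle between the obstacle
    direction and the advancing direction."""
    rx, ry = robot_pos
    px, py = collision_pt
    obstacle_dir = math.atan2(py - ry, px - rx)
    # signed angle from the heading to the obstacle direction, in (-pi, pi]
    diff = (obstacle_dir - heading) % (2 * math.pi)
    if diff > math.pi:
        diff -= 2 * math.pi
    if abs(diff) <= second_preset_angle:
        return "front"              # near the heading, on either side
    return "left" if diff > 0 else "right"
```

For a robot at the origin heading east with a 30-degree second preset angle, a point at 45 degrees to the left classifies as "left", its mirror image as "right", and a point straight ahead as "front".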
In some embodiments, the first determining module 24 is specifically configured to: if the virtual obstacle does not fall into the right side collision area, control the robot to rotate until the virtual obstacle at least partially falls into the right side collision area.
In some embodiments, the virtual obstacles include a first virtual obstacle and a second virtual obstacle. Referring still to fig. 11, when the robot walks along the first virtual obstacle, the robot obstacle avoidance apparatus 200 further includes a second determining module 26, configured to determine whether the second virtual obstacle at least partially falls into the left side collision area or the front side collision area; and a second control module 27, configured to control the robot to rotate until the second virtual obstacle at least partially falls into the right side collision area, and then control the robot to walk along the second virtual obstacle according to the position of the robot, the position of the second virtual obstacle and the first safety distance.
In some embodiments, the first control module 25 is specifically configured to: generate a virtual detection signal that originates at the position of the robot and extends toward the position of the virtual obstacle along a preset direction, wherein the preset direction forms an acute angle with the advancing direction of the robot; determine the intersection point of the virtual detection signal and the virtual obstacle as a virtual intersection point; determine the distance between the robot and the virtual intersection point as a first actual distance; and control the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance.
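The virtual detection signal amounts to a ray cast from the robot's position, intersected with the virtual obstacle. A minimal sketch, assuming the virtual obstacle is (or can be decomposed into) a line segment and using a standard ray-segment intersection; the names and the segment representation are assumptions, not taken from the patent:

```python
import math

def first_actual_distance(robot_pos, preset_dir, wall_a, wall_b):
    """Cast the virtual detection ray from robot_pos along preset_dir and
    return the distance to its intersection with the obstacle segment
    wall_a -> wall_b, or None when the ray misses the segment."""
    ox, oy = robot_pos
    dx, dy = math.cos(preset_dir), math.sin(preset_dir)
    ax, ay = wall_a
    sx, sy = wall_b[0] - ax, wall_b[1] - ay
    denom = dx * sy - dy * sx
    if abs(denom) < 1e-12:              # ray parallel to the wall segment
        return None
    # solve robot_pos + t * (dx, dy) == wall_a + u * (sx, sy)
    t = ((ax - ox) * sy - (ay - oy) * sx) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    if t < 0.0 or not 0.0 <= u <= 1.0:  # behind the robot, or off the segment
        return None
    return t                            # (dx, dy) is a unit vector, so t is the distance
```

The wall-following control loop would compare this first actual distance against the first safety distance each cycle and steer toward or away from the obstacle accordingly.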
In some embodiments, if a real obstacle and the virtual obstacle exist at the same time, referring still to fig. 11, the robot obstacle avoidance apparatus 200 further includes a second obtaining module 28, configured to obtain a first difference between the first actual distance and the first safety distance, obtain a second actual distance from the robot to the real obstacle, and obtain a second difference between the second actual distance and a second safety distance; and a third control module 29, configured to judge whether the first difference is smaller than the second difference; if so, control the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance; if not, control the robot to walk along the real obstacle according to whether the second actual distance meets the second safety distance.
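The arbitration between a real and a virtual obstacle described above is a comparison of the two safety margins. A sketch under the same illustrative naming assumptions as before:

```python
def choose_obstacle_to_follow(first_actual, first_safety,
                              second_actual, second_safety):
    """Decide whether to wall-follow the virtual or the real obstacle by
    comparing how close each one is to its own safety distance."""
    first_diff = first_actual - first_safety      # margin to the virtual obstacle
    second_diff = second_actual - second_safety   # margin to the real obstacle
    # the smaller margin marks the more pressing obstacle
    return "virtual" if first_diff < second_diff else "real"
```

For example, with a virtual obstacle 0.3 m away under a 0.2 m safety distance and a real obstacle 0.8 m away under the same safety distance, the virtual obstacle has the smaller margin and is followed.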
The robot obstacle avoidance device described above can execute the robot obstacle avoidance method provided by the embodiments of the present invention, and has the functional modules corresponding to the executed method together with its beneficial effects. For technical details not described in detail in the embodiment of the robot obstacle avoidance device, reference may be made to the robot obstacle avoidance method provided in the embodiments of the present invention.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a controller according to an embodiment of the present invention. As shown in fig. 12, the controller 300 includes one or more processors 31 and a memory 32. In fig. 12, one processor 31 is taken as an example.
The processor 31 and the memory 32 may be connected by a bus or other means, and fig. 12 illustrates the connection by a bus as an example.
The memory 32 is a non-volatile computer-readable storage medium, and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the robot obstacle avoidance method in the embodiment of the present invention. The processor 31 executes various functional applications and data processing of the robot obstacle avoidance device by running the nonvolatile software program, instructions and modules stored in the memory 32, that is, the functions of the robot obstacle avoidance method provided by the above method embodiment and the modules or units of the above device embodiment are realized.
The memory 32 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 32 may optionally include memory located remotely from the processor 31, and these remote memories may be connected to the processor 31 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 32 and, when executed by the one or more processors 31, perform the robot obstacle avoidance method of any of the above-described method embodiments.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium, which stores computer-executable instructions, which are executed by one or more processors, such as one of the processors 31 in fig. 12, so that the one or more processors can execute the robot obstacle avoidance method in any of the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a computer program stored on a non-transitory computer readable storage medium, where the computer program includes program instructions, and when the program instructions are executed by an electronic device, the electronic device is caused to execute any one of the robot obstacle avoidance methods.
In summary, when the virtual obstacle at least partially falls into the effective collision area of the robot, the robot obstacle avoidance device controls the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance. The robot obstacle avoidance method places no limit on the shape or type of the virtual obstacle, achieves a better obstacle avoidance effect, and thus improves the user experience.
The apparatus and device embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general-purpose hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes over the related art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the method of each embodiment or of some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, the technical features of the above embodiments or of different embodiments may be combined, the steps may be implemented in any order, and many other variations of the different aspects of the invention exist as described above, which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A robot obstacle avoidance method is characterized by comprising the following steps:
establishing a coordinate system based on the cleaning environment in which the robot is located;
acquiring the position and the advancing direction of the robot in the coordinate system and the position of an obstacle in the coordinate system, wherein the obstacle comprises a virtual obstacle;
determining an effective collision area according to the position of the robot, the advancing direction and a first safety distance;
judging whether the virtual obstacle at least partially falls into the effective collision area; and
if so, controlling the robot to walk along the virtual obstacle according to the position of the robot, the position of the virtual obstacle and the first safety distance.
2. The method of claim 1, wherein determining an effective collision zone based on the position of the robot, the heading, and the first safe distance comprises:
determining a virtual collision area located on the peripheral side of the robot, wherein the distance between the boundary of the virtual collision area and the robot is equal to the first safety distance;
and the part of the virtual collision area close to the advancing direction of the robot forms the effective collision area, wherein the included angle between the advancing direction and the direction of the line connecting any point on the boundary of the effective collision area with the center of the robot is smaller than or equal to a first preset angle.
3. The method of claim 2, wherein the effective collision zones include a left side collision zone, a front side collision zone, and a right side collision zone, and wherein determining whether the virtual obstacle at least partially falls within the effective collision zone comprises:
judging, according to the position of the robot and the position of the virtual obstacle, whether the virtual obstacle intersects the boundary of the effective collision area;
if so, determining that the virtual obstacle at least partially falls into the effective collision area;
if it is determined that the virtual obstacle at least partially falls into the effective collision area, determining a virtual collision point according to the position of the robot and the position of the part of the virtual obstacle falling into the effective collision area, wherein the virtual collision point is the point on the virtual obstacle within the effective collision area that is closest to the center of the robot;
and determining, according to the virtual collision point, the left side collision zone, the front side collision zone and the right side collision zone, the position at which the virtual obstacle at least partially falls into the effective collision area.
4. The method of claim 3, wherein determining the location at which the virtual obstacle at least partially falls within the effective collision zone based on the virtual collision point, the left side collision zone, the front side collision zone, and the right side collision zone comprises:
marking the direction of the line connecting the virtual collision point and the center of the robot as an obstacle direction;
if the included angle between the obstacle direction and the advancing direction is greater than a second preset angle and the obstacle direction is located on the left side of the advancing direction, the virtual obstacle at least partially falls into the left side collision zone;
if the included angle between the obstacle direction and the advancing direction is greater than the second preset angle and the obstacle direction is located on the right side of the advancing direction, the virtual obstacle at least partially falls into the right side collision zone; and
if the included angle between the obstacle direction and the advancing direction is smaller than or equal to the second preset angle, whether the obstacle direction is located on the left side of, on the right side of, or along the advancing direction, the virtual obstacle at least partially falls into the front side collision zone,
wherein the second preset angle is smaller than the first preset angle.
5. The method of claim 3, wherein said determining whether the virtual obstacle falls at least partially within the effective collision zone further comprises:
and if the virtual obstacle does not fall into the right side collision zone, controlling the robot to rotate until the virtual obstacle at least partially falls into the right side collision zone.
6. The method of claim 3, wherein the virtual obstacle comprises a first virtual obstacle and a second virtual obstacle, the method further comprising, when the robot walks along the first virtual obstacle:
determining whether the second virtual obstacle at least partially falls into the left side collision zone or the front side collision zone;
if so, controlling the robot to rotate until the second virtual obstacle at least partially falls into the right side collision zone, and controlling the robot to walk along the second virtual obstacle according to the position of the robot, the position of the second virtual obstacle and the first safety distance.
7. The method of any one of claims 1-6, wherein said controlling the robot to walk along the virtual obstacle based on the position of the robot, the position of the virtual obstacle, and the first safe distance comprises:
generating a virtual detection signal that originates at the position of the robot and extends toward the position of the virtual obstacle along a preset direction, wherein the preset direction forms an acute angle with the advancing direction of the robot;
determining the intersection point of the virtual detection signal and the virtual obstacle as a virtual intersection point;
determining the distance between the robot and the virtual intersection point as a first actual distance;
and controlling the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance.
8. The method of claim 7, wherein the obstacle further comprises a real obstacle, and wherein if the real obstacle and the virtual obstacle are present at the same time, the method further comprises:
acquiring a first difference between the first actual distance and the first safety distance;
acquiring a second actual distance from the robot to the real obstacle;
acquiring a second difference between the second actual distance and a second safety distance;
judging whether the first difference is smaller than the second difference;
if so, controlling the robot to walk along the virtual obstacle according to whether the first actual distance meets the first safety distance;
if not, controlling the robot to walk along the real obstacle according to whether the second actual distance meets the second safety distance.
9. A robot, characterized in that the robot comprises:
a main body;
a driving wheel component arranged on the main body and configured to drive the robot to move;
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robotic obstacle avoidance method of any of claims 1-8.
10. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions for causing a robot to perform the robot obstacle avoidance method of any of claims 1-8.
CN202010478923.3A 2020-05-29 2020-05-29 Robot obstacle avoidance method, robot and storage medium Pending CN111857126A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010478923.3A CN111857126A (en) 2020-05-29 2020-05-29 Robot obstacle avoidance method, robot and storage medium
PCT/CN2020/142473 WO2021238222A1 (en) 2020-05-29 2020-12-31 Obstacle avoidance method for robot, obstacle avoidance device for robot, robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478923.3A CN111857126A (en) 2020-05-29 2020-05-29 Robot obstacle avoidance method, robot and storage medium

Publications (1)

Publication Number Publication Date
CN111857126A true CN111857126A (en) 2020-10-30

Family

ID=72985691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478923.3A Pending CN111857126A (en) 2020-05-29 2020-05-29 Robot obstacle avoidance method, robot and storage medium

Country Status (2)

Country Link
CN (1) CN111857126A (en)
WO (1) WO2021238222A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112698654A (en) * 2020-12-25 2021-04-23 珠海市一微半导体有限公司 Single-point TOF-based mapping and positioning method, chip and mobile robot
CN113064437A (en) * 2021-03-31 2021-07-02 成都莱洁科技有限公司 Automatic collision avoidance system and method for robot
WO2021238222A1 (en) * 2020-05-29 2021-12-02 深圳市银星智能科技股份有限公司 Obstacle avoidance method for robot, obstacle avoidance device for robot, robot, and storage medium
CN114983293A (en) * 2022-06-30 2022-09-02 深圳银星智能集团股份有限公司 Self-moving robot
WO2024001799A1 (en) * 2022-06-27 2024-01-04 华为技术有限公司 Anti-collision method for virtual reality (vr) device, and electronic device

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN115268470B (en) * 2022-09-27 2023-08-18 深圳市云鼠科技开发有限公司 Obstacle position marking method, device and medium for cleaning robot
CN115443795B (en) * 2022-09-29 2024-01-30 宁波东贝智能科技有限公司 Mower collision detection method, mower collision detection system, storage medium and intelligent terminal
CN117294025B (en) * 2023-11-27 2024-02-27 泰安众诚自动化设备股份有限公司 Mining high-voltage switch protection control method, system, controller and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US20090234527A1 (en) * 2008-03-17 2009-09-17 Ryoko Ichinose Autonomous mobile robot device and an avoidance method for that autonomous mobile robot device
CN107688342A (en) * 2017-03-27 2018-02-13 平安科技(深圳)有限公司 The obstruction-avoiding control system and method for robot
CN107807641A (en) * 2017-10-25 2018-03-16 上海思岚科技有限公司 method for mobile robot obstacle avoidance
CN108908331A (en) * 2018-07-13 2018-11-30 哈尔滨工业大学(深圳) The barrier-avoiding method and system, computer storage medium of super redundancy flexible robot

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US10180683B1 (en) * 2015-10-29 2019-01-15 Fellow Robotics Ltd. Robotic platform configured to identify obstacles and follow a user device
CN105717929B (en) * 2016-04-29 2018-06-15 中国人民解放军国防科学技术大学 Mobile robot mixed path planing method under a kind of multiresolution obstacle environment
EP3627250B1 (en) * 2018-09-21 2023-12-06 Tata Consultancy Services Limited Method and system for free space detection in a cluttered environment
CN110353573B (en) * 2019-06-05 2021-08-20 深圳市杉川机器人有限公司 Poverty-escaping method of sweeping robot, computing equipment and storage medium
CN111857126A (en) * 2020-05-29 2020-10-30 深圳市银星智能科技股份有限公司 Robot obstacle avoidance method, robot and storage medium


Also Published As

Publication number Publication date
WO2021238222A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
CN111857126A (en) Robot obstacle avoidance method, robot and storage medium
KR102492242B1 (en) Terrain Aware Step Planning System
CN109984689B (en) Cleaning robot and path optimization method thereof
US9844876B2 (en) Robot cleaner and control method thereof
EP3552072B1 (en) Robotic cleaning device with operating speed variation based on environment
KR20190022435A (en) Robot Obstacle Avoidance Control System, Method, Robot and Storage Medium
US9764472B1 (en) Methods and systems for automated robotic movement
CN108733065B (en) Obstacle avoidance method and device for robot and robot
CN111158353A (en) Movement control method for a plurality of robots and system thereof
JP7322276B2 (en) swing trajectory of the leg
CN113287991B (en) Control method and control device for cleaning robot
EP4203760B1 (en) Cleaning control method and device, cleaning robot and storage medium
CN117795445A (en) Alternate route discovery for waypoint-based navigation maps
CN114355887B (en) Narrow-lane passage method and device for robot, robot and storage medium
WO2024067852A1 (en) Ground medium detection method and apparatus and cleaning device
CN114019951A (en) Robot control method and device, robot and readable storage medium
Lunenburg et al. A representation method based on the probability of collision for safe robot navigation in domestic environments
KR20200054694A (en) Cleaning apparatus and controlling method thereof
CN114967695A (en) Robot and its escaping method, device and storage medium
CN114690755A (en) Cleaning robot and obstacle detouring method thereof
CN114587208A (en) Control method of cleaning robot
WO2024022452A1 (en) Method for exploring ground material, cleaning robot, and storage medium
CN117158828A (en) Cleaning method of cleaning robot and cleaning robot
CN117958689A (en) Cleaning device control method and device, storage medium and cleaning device
CN114237220A (en) Robot, robot control method, robot control device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518000 building A1, Yinxing hi tech Industrial Park, Guanlan street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.
