CN110850885A - Autonomous robot - Google Patents

Autonomous robot

Info

Publication number
CN110850885A
CN110850885A (application CN201911333329.9A)
Authority
CN
China
Prior art keywords
obstacle
main body
robot
robot main
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911333329.9A
Other languages
Chinese (zh)
Inventor
杨勇
吴泽晓
郑志帆
罗志佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 3irobotix Co Ltd
Original Assignee
Shenzhen 3irobotix Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen 3irobotix Co Ltd filed Critical Shenzhen 3irobotix Co Ltd
Priority application: CN201911333329.9A
Publication: CN110850885A
PCT application: PCT/CN2020/131393 (WO2021120999A1)

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an autonomous robot comprising a robot main body, a depth sensor, and a control module. The depth sensor is arranged on the robot main body and is used to identify obstacles on the advancing route of the robot main body and to scan their outlines. The control module, arranged on the robot main body and electrically connected with the depth sensor, constructs a motion map of the robot main body based on the detection signals of the depth sensor. The control module further acquires the scanned outline and constructs a corresponding virtual obstacle in the motion map; when the robot main body approaches the virtual obstacle in the motion map, it controls the robot main body to execute an obstacle avoidance action. The autonomous robot thus achieves high obstacle avoidance precision.

Description

Autonomous robot
Technical Field
The invention relates to the technical field of robots, in particular to an autonomous robot.
Background
Existing sweeping robots use an infrared sensor to avoid obstacles, but infrared sensing is strongly affected by the surface color of an obstacle. For example, a black surface absorbs most infrared light, so little or none is reflected back; ranging then fails and obstacle avoidance is compromised. In short, current sweeping robots suffer from a low obstacle avoidance success rate.
Disclosure of Invention
The main object of the invention is to provide an autonomous robot that addresses the low obstacle avoidance success rate of existing sweeping robots.
To achieve the above object, the present invention provides an autonomous robot, comprising:
a robot main body;
the depth sensor is arranged on the robot main body and used for identifying obstacles on the advancing route of the robot main body and scanning the outlines of the obstacles;
the control module is arranged on the robot main body and is electrically connected with the depth sensor;
the control module is used for constructing a motion map of the robot main body based on the detection signals of the depth sensor;
the control module is further used for acquiring the contour and constructing a virtual obstacle in the motion map according to the contour; and
when the robot main body approaches the virtual obstacle in the motion map, control the robot main body to execute an obstacle avoidance action.
Optionally, the control module is configured to obtain a contour of the obstacle when the height of the obstacle is within a preset obstacle height detection interval, and construct the virtual obstacle in the motion map according to the contour.
Optionally, the upper limit height of the preset obstacle height detection interval is the installation height of the depth sensor plus a first preset distance, and the lower limit height of the preset obstacle height detection interval is the installation height of the depth sensor minus a second preset distance.
Optionally, the obstacle crossing height of the robot main body is higher than a lower limit height of the preset obstacle height detection interval, and the control module is further configured to control the robot main body to continue to perform a forward movement when the obstacle height is not higher than the obstacle crossing height of the robot main body.
Optionally, the autonomous robot further includes a camera module, and the camera module is configured to acquire image data on a forward route of the robot main body;
the control module is further electrically connected with the camera shooting module and used for acquiring the image data and comparing the image data with an image of a preset obstacle, if the depth sensor detects that the height of the obstacle is lower than the obstacle crossing height and the preset obstacle appears in the image data, the control module acquires the outline of the preset obstacle from the local or cloud end and constructs a virtual obstacle matched with the preset obstacle in an area corresponding to the movement map and the preset obstacle according to a detection signal of the depth sensor.
Optionally, the depth sensor is further configured to scan a profile of a junction between the recessed environment and the traveling road surface when it is detected that a recessed environment occurs on the traveling road surface of the robot main body and a height difference between the recessed environment and the traveling road surface is greater than the obstacle crossing height.
Optionally, the autonomous robot further includes an edge sensor, and the edge sensor is disposed on a side of the robot main body;
the control module is further used for determining a rotation angle of the robot main body according to a detection signal of the depth sensor when an obstacle avoidance action is executed, and controlling the robot main body to rotate by the rotation angle, so that the edge sensor faces the virtual obstacle and the robot main body walks around the periphery of the obstacle guided by the edge sensor.
Optionally, the control module is further configured to:
when the obstacle avoidance action is executed, the robot main body is controlled to decelerate to stop when the robot main body is away from the obstacle by a preset distance;
after the robot main body stops, calculating an included angle between the obstacle and the edge sensor, taking the included angle as the rotation angle, and controlling the robot main body to rotate by that angle.
Optionally, the depth sensor is disposed at the front side of the robot body, or the depth sensor is disposed near the top of the robot body at the upper edge of the front side of the robot body.
Optionally, the depth sensor is a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, or a millimeter wave sensor.
Optionally, the autonomous robot is a sweeping robot.
According to the technical scheme, the outline of the obstacle is obtained through the depth sensor, the virtual obstacle corresponding to the actual obstacle is established in the motion map of the robot main body, the robot main body performs accurate obstacle avoidance action on the virtual obstacle in the simulated motion map through calculation of the control module, the robot main body can accurately avoid the obstacle on the actual traveling route, and the obstacle avoidance precision of the autonomous robot is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a block diagram of an autonomous robot according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of the embodiment shown in fig. 1.
The reference numerals illustrate:

10 — Robot main body
20 — Depth sensor
30 — Control module
40 — Camera module
50 — Edge sensor
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back … …) are involved in the embodiment of the present invention, the directional indications are only used to explain the relative positional relationship between the components, the movement situation, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indications are changed accordingly.
In addition, if there is a description of "first", "second", etc. in an embodiment of the present invention, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the meaning of "and/or" appearing throughout is to include three juxtapositions, exemplified by "A and/or B" including either scheme A, or scheme B, or a scheme in which both A and B are satisfied. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
The invention provides an autonomous robot. In this embodiment, the autonomous robot is a sweeping robot that can navigate across a floor surface. In some examples, the autonomous robot cleans the surface while traversing it: debris is swept up and/or collected by applying a negative pressure (e.g., a partial vacuum) above the surface. To move, the robot includes a robot main body 10 supported by a drive system that can drive and steer the robot across the floor. The robot main body 10 has a forward portion and a rearward portion. The robot main body 10 may be circular, although other shapes are possible, including but not limited to a square, a rectangle, a square forward portion with a circular rearward portion, or a circular forward portion with a square rearward portion. The drive system includes a right drive wheel module and a left drive wheel module, each comprising a drive wheel and a drive motor that drives the corresponding wheel.
In the embodiment of the present invention, as shown in fig. 1 and 2, the autonomous robot includes a robot main body 10, a depth sensor 20, and a control module 30. Wherein, the depth sensor 20 is arranged on the robot main body 10, and is used for identifying an obstacle on the advancing route of the robot main body 10 and scanning the outline of the obstacle; the control module 30 is arranged on the robot main body 10 and is electrically connected with the depth sensor 20, and the control module 30 is used for constructing a motion map of the robot main body 10 based on a detection signal of the depth sensor 20; the control module 30 is further configured to obtain the contour, and construct a virtual obstacle in the motion map with the contour as an object; and when the robot main body 10 approaches the virtual obstacle in the motion map, controlling the robot main body 10 to perform obstacle avoidance action.
It is worth noting that existing sweeping robots use an infrared sensor to avoid obstacles, but infrared sensing is strongly affected by the surface color of an obstacle. For example, when the surface of an obstacle is black, its low reflectivity means almost no light is returned, so the infrared sensor fails to measure distance; obstacle avoidance is therefore affected, which is why existing sweeping robots have a low obstacle avoidance success rate.
To address these shortcomings of existing sweeping robots, in the technical scheme of the invention the depth sensor 20 is arranged on the robot main body 10, and obstacles on the advancing route of the robot main body 10 can be identified through the depth sensor 20. Because the depth sensor 20 measures depth, the profile of an obstacle can also be obtained through it, including but not limited to the obstacle's width and height and, where detectable, its length. Meanwhile, the depth sensor 20 can detect the distance between the obstacle and the robot main body 10, and the control module 30 can construct a motion map of the robot main body 10 from the detection signal of the depth sensor 20. The control module 30 then acquires the contour of the obstacle captured by the depth sensor 20 and constructs a virtual obstacle with that contour in the motion map. When the robot main body 10 approaches the virtual obstacle in the motion map, the control module 30 controls the robot main body 10 to perform an obstacle avoidance operation, so that the robot main body 10 accurately avoids the obstacle in the actual scene. Moreover, after the motion map is constructed, the robot main body 10 can store it, and the virtual obstacles constructed in it are retained. When the autonomous robot encounters the same obstacle again in later movement, it can avoid it directly without re-acquiring its outline, which improves the obstacle avoidance efficiency of the autonomous robot.
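The mapping step just described can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the `MotionMap` class, the 50 mm grid resolution, and the contour representation are all assumptions for the example.

```python
# Illustrative sketch (assumed names and grid resolution): marking a scanned
# obstacle contour as a "virtual obstacle" in a 2D occupancy-grid motion map.
from dataclasses import dataclass, field

@dataclass
class MotionMap:
    resolution_mm: int = 50                      # assumed grid cell size
    obstacles: set = field(default_factory=set)  # occupied (col, row) cells

    def _cell(self, x_mm, y_mm):
        # Convert a world-frame point (mm) to a grid cell index.
        return (int(x_mm // self.resolution_mm), int(y_mm // self.resolution_mm))

    def add_virtual_obstacle(self, contour_mm):
        """Mark every (x, y) point of a scanned contour as occupied."""
        for x, y in contour_mm:
            self.obstacles.add(self._cell(x, y))

    def is_blocked(self, x_mm, y_mm):
        return self._cell(x_mm, y_mm) in self.obstacles

m = MotionMap()
m.add_virtual_obstacle([(100, 0), (150, 0), (200, 0)])  # a short wall segment
print(m.is_blocked(120, 10))   # True: inside an occupied cell
print(m.is_blocked(500, 500))  # False: free space
```

Because the occupied cells persist in the map, re-encountering the same obstacle later requires no new contour scan, matching the efficiency argument above.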
It can be seen that, compared with the existing sweeping robot, the autonomous robot of the present application can obtain the outline of the obstacle through the depth sensor 20, and establish the virtual obstacle corresponding to the actual obstacle in the motion map of the robot main body 10, and through the calculation of the control module 30, the robot main body 10 performs an accurate obstacle avoidance action for the virtual obstacle in the simulated motion map, so that the robot main body 10 accurately avoids the obstacle on the actual traveling route, and the obstacle avoidance accuracy of the autonomous robot is greatly improved.
If the depth sensor 20 does not find an obstacle at the position where the obstacle originally exists during the movement of the autonomous robot, the control module 30 removes the virtual obstacle in the movement map; if the obstacle detected by the depth sensor 20 at the position where the obstacle originally exists changes during the movement of the autonomous robot, the depth sensor 20 will rescan the new obstacle and obtain the contour of the new obstacle, and the control module 30 will correspondingly update the virtual obstacle in the movement map.
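The removal/update rule in the preceding paragraph can be expressed compactly. The function name and the dict-of-contours representation are assumptions for illustration, not the patent's data model.

```python
# Sketch of the map-maintenance rule described above (illustrative names):
def update_virtual_obstacle(map_obstacles, position, new_contour):
    """map_obstacles: dict mapping a position to the contour of a known
    virtual obstacle. new_contour: freshly scanned contour at `position`,
    or None if the depth sensor no longer sees anything there."""
    if new_contour is None:
        map_obstacles.pop(position, None)       # obstacle gone: remove it
    elif map_obstacles.get(position) != new_contour:
        map_obstacles[position] = new_contour   # obstacle changed: replace it
    return map_obstacles

obstacles = {(3, 4): [(0, 0), (1, 0)]}
update_virtual_obstacle(obstacles, (3, 4), None)
print(obstacles)  # {} – the stale virtual obstacle was removed
```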
Specifically, in the present embodiment, the depth sensor 20 may be a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, a millimeter wave sensor, or the like. Further, in order to improve the detection accuracy, in the present embodiment, when the depth sensor 20 has a light emitting source (e.g., a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, etc.), the emitted light is laser light. Of course, the design of the present application is not limited thereto, and in other embodiments of the present application, the light emitted by the depth sensor 20 may also be infrared light.
Further, the control module 30 is configured to obtain the contour of the obstacle when the height of the obstacle is within a preset obstacle height detection interval, and to construct the virtual obstacle in the motion map according to the contour. In other words, the preset obstacle height detection interval limits which obstacle heights the depth sensor 20 reacts to. When the low point of the obstacle is higher than the upper limit of the interval, or the high point of the obstacle is lower than its lower limit, the control module 30 does not place a virtual obstacle at the corresponding position of the motion map, and the robot main body 10 continues its forward movement. This arrangement lets the autonomous robot adapt its cleaning work to different environments.
For example, take a table as a case where the low point of the obstacle is higher than the upper limit of the preset obstacle height detection interval. The depth sensor 20 can detect the table from a distance while the autonomous robot is traveling, i.e., an obstacle is found on the advancing route of the robot main body 10; but a tabletop is generally high, and the robot main body 10 can pass under it freely, so the tabletop is clearly not an obstacle avoidance object of the robot main body 10. By setting the upper limit of the interval below the height of the tabletop's low point, the control module 30 builds no virtual obstacle in the motion map, and the robot main body 10 can pass through and clean the area under the table. The table is only an example; in practical scenarios chairs, sofas, beds, and the like may likewise be handled. Conversely, considering that different kinds of debris differ in height, the lower limit of the interval can be adjusted so that debris whose high point falls below the lower limit is not treated as an obstacle, and the robot main body 10 passes over it normally and cleans it up.
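The filtering rule of the two paragraphs above can be sketched as a single predicate. Function and parameter names are illustrative assumptions; the numeric interval in the example anticipates the 0–180 mm figures given later in this embodiment.

```python
def needs_virtual_obstacle(low_mm, high_mm, lower_limit_mm, upper_limit_mm):
    """Per the rule above: skip the obstacle (keep driving) when its low
    point clears the upper limit (e.g. a tabletop the robot fits under),
    or when its high point is under the lower limit (e.g. low debris)."""
    if low_mm > upper_limit_mm:    # robot can pass underneath
        return False
    if high_mm < lower_limit_mm:   # robot can pass over
        return False
    return True

# Tabletop whose underside is at 600 mm, interval 0–180 mm: drive under it.
print(needs_virtual_obstacle(600, 750, 0, 180))   # False
# A 120 mm-tall box inside the interval: build a virtual obstacle.
print(needs_virtual_obstacle(0, 120, 0, 180))     # True
```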
The obstacle crossing height of the robot main body 10 refers to the height of a raised object or recessed drop that the robot main body 10 can pass over freely in either direction. Controlling the robot main body 10 to avoid obstacles it cannot cross prevents the autonomous robot from colliding with them while maximizing the area it can clean.
Specifically, in this embodiment the upper limit of the preset obstacle height detection interval is the installation height of the depth sensor 20 plus a first preset distance, and the lower limit is the installation height of the depth sensor 20 minus a second preset distance. The depth sensor 20 has a certain scanning angle in the vertical direction; therefore, to widen its scanning range as much as possible, its installation position is raised so that its installation height is close to the height of the robot main body 10. Determining the interval by adding the first preset distance to, and subtracting the second preset distance from, the installation height of the depth sensor 20 enlarges the interval as much as possible and thus increases the number of obstacles that can be detected. Of course, the design of the present application is not limited to this; in other embodiments the interval may run from a lower limit of 0 (the height of the traveling road surface of the robot main body 10) to an upper limit of 0 plus a preset value, the preset value being the maximum obstacle height to be detected above the traveling road surface.
Specifically, in the present embodiment both the first preset distance and the second preset distance are 90 mm; that is, the preset obstacle height detection interval is the installation height of the depth sensor 20 plus or minus 90 mm. If the installation height of the depth sensor 20 is 90 mm (with the ground as the reference surface), the maximum obstacle height the depth sensor 20 attends to is 180 mm; in other words, when the low point of an obstacle is 180 mm or more above the ground, the robot main body 10 continues its forward movement. A sweeping robot is generally less than 18 cm tall, so this setting suits most sweeping robots. Of course, the design of the present application is not so limited; in other implementations the depth sensor 20 may be adapted to the actual usage conditions, and the first and second preset distances can be adjusted accordingly.
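The interval computation above is simple enough to state directly. This is a sketch under the stated 90 mm defaults; flooring the lower limit at ground level (0) is an added assumption, not something the patent specifies.

```python
def detection_interval(install_height_mm, first_preset_mm=90, second_preset_mm=90):
    """Upper limit = mounting height + first preset distance;
    lower limit = mounting height - second preset distance.
    Flooring at 0 (ground level) is an illustrative assumption."""
    lower = max(0, install_height_mm - second_preset_mm)
    upper = install_height_mm + first_preset_mm
    return lower, upper

print(detection_interval(90))  # (0, 180): matches the 90 mm example above
```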
Further, in the present embodiment the obstacle crossing height of the robot main body 10 is higher than the lower limit of the preset obstacle height detection interval, and the control module 30 is further configured to control the robot main body 10 to continue its forward movement when the obstacle height is not higher than the obstacle crossing height of the robot main body 10. Because the lower limit of the interval is below the obstacle crossing height, when an obstacle is lower than the obstacle crossing height the robot main body 10 keeps advancing and crosses it, cleaning it in passing, which preserves the cleaning effect of the autonomous robot. And since such an obstacle is lower than the obstacle crossing height, even if it is not debris to be cleaned, it does not hinder the movement of the robot main body 10.
In one embodiment, the autonomous robot further comprises a camera module 40, which acquires image data on the advancing route of the robot main body 10. The control module 30 is further electrically connected to the camera module 40 and is configured to acquire the image data and compare it with images of preset obstacles; if the depth sensor 20 detects that the height of an obstacle is lower than the obstacle crossing height and a preset obstacle appears in the image data, the control module 30 acquires the contour of the preset obstacle from local storage or the cloud and, according to the detection signal of the depth sensor 20, constructs a virtual obstacle matching the preset obstacle in the corresponding area of the motion map. The point is that some obstacles encountered during cleaning are lower than the obstacle crossing height of the robot main body 10 (e.g., socks or wires), yet the robot main body 10 should not drive over them, precisely to avoid damaging them. The camera module 40 is therefore added to capture image data along the advancing route, and the control module 30 compares that data with images of preset obstacles, which may be stored locally or in the cloud; in either case the stored images can be updated, added to, or deleted. The comparison between images captured by the camera module 40 and the preset obstacles can follow the AI image recognition functionality that is already widely used on mobile terminals, which is not repeated here.
When the control module 30 identifies a preset obstacle in the image acquired by the camera module 40, the depth detection function of the depth sensor 20 can be used to locate the preset obstacle in the motion map; the contour of the preset obstacle, acquired locally or from the cloud, is then constructed at the corresponding position of the motion map, so that the robot main body 10 can be controlled to perform an obstacle avoidance operation, i.e., to avoid the obstacle that should be avoided. Illustratively, the preset obstacle may be a wire, a sock, a carpet, a socket, or the like.
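The decision flow for these low preset obstacles can be sketched as follows. The `PRESET_CONTOURS` cache, the label strings, and the list-based map are stand-ins for whatever recognition model and storage the robot actually uses; none of them come from the patent.

```python
# Hedged sketch of the preset-obstacle flow (all names are assumptions).
PRESET_CONTOURS = {"wire": [(0, 0), (5, 0)], "sock": [(0, 0), (3, 2)]}  # local cache

def handle_low_obstacle(label, obstacle_height_mm, crossing_height_mm, motion_map):
    """Even though an obstacle is low enough to cross, avoid it when its
    image matches a preset obstacle (wires, socks, ...), to avoid damage."""
    if obstacle_height_mm < crossing_height_mm and label in PRESET_CONTOURS:
        contour = PRESET_CONTOURS[label]       # or fetched from the cloud
        motion_map.append((label, contour))    # place a matching virtual obstacle
        return "avoid"
    return "proceed"

m = []
print(handle_low_obstacle("wire", 10, 20, m))  # avoid
print(handle_low_obstacle("dust", 10, 20, m))  # proceed
```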
Specifically, in the present embodiment, the camera module 40 is a color camera. In other embodiments, the camera module 40 can also be a black and white camera or an infrared camera.
Further, the depth sensor 20 is also configured to scan the profile of the boundary between a recessed environment and the traveling road surface when it detects that a recess appears on the traveling road surface of the robot main body 10 and the height difference between the recess and the road surface exceeds the obstacle crossing height. For a sweeping robot, once the drop from the traveling road surface into the recess exceeds the obstacle crossing height of the robot main body 10, a small drop leaves the robot main body 10 unable to climb back out of the recess, interrupting its cleaning task, while a large drop can damage the robot main body 10 when it falls from the road surface. Therefore, after the depth sensor 20 scans the profile of the boundary between the recess and the traveling road surface, the control module 30 acquires that profile and constructs a corresponding virtual obstacle in the motion map of the robot main body 10, so as to control the robot main body 10 to avoid the recessed environment. Note that the obstacle crossing height of the robot main body 10 varies with the actual robot and is not specifically limited here.
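The cliff test described above reduces to one comparison. This is an illustrative sketch; the sign convention (recess heights negative relative to the road surface) is an assumption for the example.

```python
def is_cliff(road_height_mm, recess_height_mm, crossing_height_mm):
    """Treat a recess as an obstacle when the drop from the traveling
    surface exceeds the robot's obstacle-crossing height."""
    return (road_height_mm - recess_height_mm) > crossing_height_mm

print(is_cliff(0, -120, 20))  # True: a 120 mm drop the robot must avoid
print(is_cliff(0, -10, 20))   # False: a 10 mm dip it can cross freely
```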
Further, the autonomous robot further includes an edge sensor 50, and the edge sensor 50 is disposed on a side of the robot main body 10; the control module 30 is further configured to determine a rotation angle of the robot main body 10 according to a detection signal of the depth sensor 20 when performing an obstacle avoidance operation, and to control the robot main body 10 to rotate by that angle, so that the edge sensor 50 faces the virtual obstacle and the robot main body 10 walks around the periphery of the obstacle guided by the edge sensor.
It should be noted that a conventional sweeping robot performing edgewise cleaning after encountering an obstacle usually relies on a collision sensor mounted on the edge of the robot body. Specifically, after detecting the obstacle, such a robot rotates by a fixed angle and moves forward again, rotating by the same fixed angle after each collision with the obstacle until the edgewise sensor 50 is aligned with the obstacle, and only then performs edgewise cleaning. The robot therefore collides with the obstacle many times, making the process long and tedious and easily damaging the robot. In the autonomous robot of the present invention, the contour of the obstacle is acquired in advance by the depth sensor 20, and a virtual obstacle is constructed at the corresponding position in the motion map of the robot main body 10. The relative positional relationship between the robot main body 10 and the obstacle can then be obtained in real time, and since the mounting position of the edgewise sensor 50 on the robot main body 10 is fixed, the control module 30 can calculate the rotation angle the robot main body 10 requires for the edgewise sensor 50 to face the obstacle. Because the relative positional relationship is available in real time, this rotation angle can also be calculated in real time. Consequently, after encountering an obstacle, the robot main body 10 needs to rotate only once to point the edgewise sensor 50 at the obstacle and begin edgewise cleaning, and it does not collide with the obstacle while turning. Compared with a conventional sweeping robot, the autonomous robot therefore offers higher working efficiency and a longer service life.
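The single-turn alignment described above reduces to simple bearing arithmetic: since the edgewise sensor's mounting bearing on the body is fixed, the required turn is the obstacle's bearing in the robot frame minus the sensor's mounting bearing. The sketch below is an assumed geometry (a left-mounted sensor at +90°, counterclockwise-positive angles), not the patent's actual computation.

```python
import math

# Assumed fixed mounting bearing of the edgewise sensor in the robot frame
# (+90 deg = pointing out of the robot's left side).
EDGE_SENSOR_BEARING_DEG = 90.0

def rotation_to_face_sensor(robot_xy, robot_heading_deg, obstacle_xy,
                            sensor_bearing_deg=EDGE_SENSOR_BEARING_DEG):
    """Signed turn (deg) that makes the edgewise sensor face the obstacle."""
    dx = obstacle_xy[0] - robot_xy[0]
    dy = obstacle_xy[1] - robot_xy[1]
    obstacle_bearing = math.degrees(math.atan2(dy, dx))  # world frame
    relative = obstacle_bearing - robot_heading_deg      # robot frame
    turn = relative - sensor_bearing_deg
    return (turn + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)

# Obstacle to the robot's left, where the left-mounted sensor already
# points, so no turn is needed.
turn = rotation_to_face_sensor((0.0, 0.0), 0.0, (0.0, 1.0))
```

Because the virtual obstacle's position is known from the motion map, this angle can be recomputed every control cycle without any physical contact with the obstacle.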
Specifically, in this embodiment, the control module 30 is further configured to: when the obstacle avoidance action is executed, control the robot main body 10 to decelerate to a stop at a preset distance from the obstacle; and, after the robot main body 10 stops, calculate the included angle between the obstacle and the edgewise sensor 50, take the included angle as the rotation angle, and control the robot main body 10 to rotate by the rotation angle. That is, in this embodiment, the obstacle avoidance action of the robot main body 10 consists of decelerating to a stop at the preset distance from the obstacle and then rotating by the rotation angle before continuing forward. Stopping the robot before turning simplifies both the calculation of the rotation angle and the execution of the turning action by the robot main body 10. Of course, the design of the present application is not limited thereto; in other embodiments, the control module 30 may control the robot main body 10 to turn while decelerating, or to turn directly without decelerating.
Optionally, in this embodiment, the preset distance is at least 10 cm. It can be understood that if the preset distance is set too short, for example less than 10 cm, the braking distance of the robot main body 10 may be insufficient: the robot main body 10 may stop too close to the obstacle, which hinders its turning, or it may run directly into the obstacle and fail to complete the obstacle avoidance action.
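The stop-then-turn sequence can be sketched as a small per-tick state decision. Only the 10 cm minimum preset distance comes from the text; the state names, deceleration rate, and speed step are illustrative assumptions, not the patent's control law.

```python
MIN_PRESET_DISTANCE_M = 0.10  # per the text: preset distance is at least 10 cm

def avoidance_step(distance_to_obstacle_m: float, speed_mps: float,
                   preset_distance_m: float = MIN_PRESET_DISTANCE_M,
                   decel_mps2: float = 0.5):
    """Return (state, commanded_speed) for one control tick.

    states: 'cruise' -> keep speed, 'brake' -> decelerate toward a stop,
    'turn' -> stopped at the preset distance, rotate in place.
    """
    if preset_distance_m < MIN_PRESET_DISTANCE_M:
        raise ValueError("preset distance must be at least 0.10 m")
    # Distance needed to brake from the current speed: v^2 / (2a).
    braking_distance = speed_mps ** 2 / (2.0 * decel_mps2)
    if distance_to_obstacle_m <= preset_distance_m and speed_mps == 0.0:
        return "turn", 0.0
    if distance_to_obstacle_m <= preset_distance_m + braking_distance:
        return "brake", max(0.0, speed_mps - 0.05)  # assumed speed step per tick
    return "cruise", speed_mps
```

Braking begins once the remaining distance drops below the preset distance plus the speed-dependent braking distance, which is exactly why a too-small preset distance can leave the robot unable to stop short of the obstacle.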
Specifically, in the present embodiment, the depth sensor 20 is disposed at the upper edge of the front side of the robot main body 10, near its top. It can be understood that this arrangement enlarges the viewing-angle range of the depth sensor 20. Of course, the design of the present application is not limited thereto; in other embodiments, the depth sensor 20 may be provided on the front side, the rear side, or a lateral side of the robot main body 10.
It should be noted that, to ensure that the camera module 40 and the depth sensor 20 share the same viewing angle, in the present embodiment the camera module 40 and the depth sensor 20 are mounted at an interval on the same side of the robot main body 10.
The above description is only an optional embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and the accompanying drawings, or any direct or indirect application in other related technical fields, is likewise included in the scope of protection of the present invention.

Claims (11)

1. An autonomous robot, comprising:
a robot main body;
a depth sensor arranged on the robot main body and configured to identify an obstacle on the advancing route of the robot main body and to scan the contour of the obstacle; and
a control module arranged on the robot main body and electrically connected with the depth sensor;
wherein the control module is configured to construct a motion map of the robot main body based on detection signals of the depth sensor;
the control module is further configured to acquire the contour and to construct a virtual obstacle in the motion map based on the contour; and
the control module is configured to, when the robot main body approaches the virtual obstacle in the motion map, control the robot main body to execute an obstacle avoidance action.
2. The autonomous robot of claim 1, wherein the control module is configured to obtain the contour of the obstacle when the height of the obstacle is within a preset obstacle height detection interval, and to construct the virtual obstacle from the contour in the motion map.
3. The autonomous robot of claim 2, wherein the upper limit height of the preset obstacle height detection interval is the mounting height of the depth sensor plus a first preset distance, and the lower limit height of the preset obstacle height detection interval is the mounting height of the depth sensor minus a second preset distance.
4. The autonomous robot of claim 2, wherein the obstacle crossing height of the robot main body is higher than the lower limit height of the preset obstacle height detection interval, and the control module is further configured to control the robot main body to continue moving forward when the obstacle height is not higher than the obstacle crossing height of the robot main body.
5. The autonomous robot of claim 4, further comprising a camera module for capturing image data on a path of travel of the robot body;
the control module is further electrically connected with the camera module and configured to acquire the image data and compare it with an image of a preset obstacle; if the depth sensor detects that the height of the obstacle is lower than the obstacle crossing height and the preset obstacle appears in the image data, the control module acquires the contour of the preset obstacle locally or from the cloud and constructs a virtual obstacle matching the preset obstacle in the region of the motion map corresponding to the preset obstacle according to the detection signal of the depth sensor.
6. The autonomous robot of claim 2, wherein the depth sensor is further configured to scan the profile of the boundary between a recessed environment and the traveling road surface of the robot main body when the recessed environment is detected on the traveling road surface and the height difference between the recessed environment and the traveling road surface is greater than the obstacle crossing height.
7. The autonomous robot of any one of claims 1 to 6, further comprising an edgewise sensor provided at a side of the robot main body;
the control module is further configured to determine a rotation angle of the robot main body according to a detection signal of the depth sensor when an obstacle avoidance action is executed, and to control the robot main body to rotate by the rotation angle, so that the edgewise sensor faces the virtual obstacle and the robot main body walks around the periphery of the obstacle using the edgewise sensor.
8. The autonomous robot of claim 7, wherein the control module is further configured to:
when the obstacle avoidance action is executed, control the robot main body to decelerate to a stop at a preset distance from the obstacle; and
after the robot main body stops, calculate the included angle between the obstacle and the edgewise sensor, take the included angle as the rotation angle, and control the robot main body to rotate by the rotation angle.
9. The autonomous robot of claim 1, wherein the depth sensor is provided at a front side of the robot body, or wherein the depth sensor is provided near a top of the robot body at an upper edge of the front side of the robot body.
10. The autonomous robot of claim 1, wherein the depth sensor is a TOF sensor, a 3D structured light sensor, a binocular sensor, a lidar sensor, or a millimeter wave sensor.
11. The autonomous robot of claim 1, wherein the autonomous robot is a sweeping robot.
CN201911333329.9A 2019-12-20 2019-12-20 Autonomous robot Pending CN110850885A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911333329.9A CN110850885A (en) 2019-12-20 2019-12-20 Autonomous robot
PCT/CN2020/131393 WO2021120999A1 (en) 2019-12-20 2020-11-25 Autonomous robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911333329.9A CN110850885A (en) 2019-12-20 2019-12-20 Autonomous robot

Publications (1)

Publication Number Publication Date
CN110850885A true CN110850885A (en) 2020-02-28

Family

ID=69610207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911333329.9A Pending CN110850885A (en) 2019-12-20 2019-12-20 Autonomous robot

Country Status (2)

Country Link
CN (1) CN110850885A (en)
WO (1) WO2021120999A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857155A (en) * 2020-08-02 2020-10-30 珠海市一微半导体有限公司 Robot control method
CN111897335A (en) * 2020-08-02 2020-11-06 珠海市一微半导体有限公司 Obstacle avoidance control method and control system for robot walking in a Chinese-character 'gong' pattern
CN112327841A (en) * 2020-10-29 2021-02-05 广东杜尼智能机器人工程技术研究中心有限公司 Optimal edgewise path planning and sorting method for sweeping robot
CN112489239A (en) * 2020-09-09 2021-03-12 北京潞电电气设备有限公司 Inspection system
CN112540612A (en) * 2020-09-28 2021-03-23 深圳市银星智能科技股份有限公司 Virtual wall signal adjusting method, virtual wall equipment, robot and navigation system thereof
CN112698654A (en) * 2020-12-25 2021-04-23 珠海市一微半导体有限公司 Single-point TOF-based mapping and positioning method, chip and mobile robot
CN112783158A (en) * 2020-12-28 2021-05-11 广州辰创科技发展有限公司 Method, equipment and storage medium for fusing multiple wireless sensing identification technologies
CN112987725A (en) * 2021-02-07 2021-06-18 珠海市一微半导体有限公司 Obstacle-based avoidance method, chip and cleaning robot
WO2021120999A1 (en) * 2019-12-20 2021-06-24 深圳市杉川机器人有限公司 Autonomous robot
CN113552589A (en) * 2020-04-01 2021-10-26 杭州萤石软件有限公司 Obstacle detection method, robot, and storage medium
WO2023216555A1 (en) * 2022-05-10 2023-11-16 丰疆智能(深圳)有限公司 Obstacle avoidance method and apparatus based on binocular vision, and robot and medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117958664B (en) * 2024-04-02 2024-06-11 追觅创新科技(苏州)有限公司 Active obstacle surmounting control method and system for cleaning robot and cleaning robot

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US5502638A (en) * 1992-02-10 1996-03-26 Honda Giken Kogyo Kabushiki Kaisha System for obstacle avoidance path planning for multiple-degree-of-freedom mechanism
JP2796949B2 (en) * 1995-06-19 1998-09-10 川崎重工業株式会社 Automatic guided vehicle and its non-contact type obstacle detection method
US9630319B2 (en) * 2015-03-18 2017-04-25 Irobot Corporation Localization and mapping using physical features
CN105700525B (en) * 2015-12-07 2018-09-07 沈阳工业大学 Method is built based on Kinect sensor depth map robot working environment uncertainty map
CN108459596A (en) * 2017-06-30 2018-08-28 炬大科技有限公司 A kind of method in mobile electronic device and the mobile electronic device
CN110850885A (en) * 2019-12-20 2020-02-28 深圳市杉川机器人有限公司 Autonomous robot


Also Published As

Publication number Publication date
WO2021120999A1 (en) 2021-06-24

Similar Documents

Publication Publication Date Title
CN110850885A (en) Autonomous robot
CN211559963U (en) Autonomous robot
CN112415998B (en) Obstacle classification obstacle avoidance control system based on TOF camera
CN107765688B (en) Autonomous mobile robot and automatic docking control method and device thereof
JP2022546289A (en) CLEANING ROBOT AND AUTOMATIC CONTROL METHOD FOR CLEANING ROBOT
EP3104194B1 (en) Robot positioning system
CN112327878B (en) Obstacle classification and obstacle avoidance control method based on TOF camera
CN110852312B (en) Cliff detection method, mobile robot control method, and mobile robot
CN112004645A (en) Intelligent cleaning robot
CN110908378B (en) Robot edge method and robot
CN110554696B (en) Robot system, robot and robot navigation method based on laser radar
CN113841098A (en) Detecting objects using line arrays
CN112051844A (en) Self-moving robot and control method thereof
CN113848944A (en) Map construction method and device, robot and storage medium
CN111505652A (en) Map establishing method, device and operation equipment
CN114052561A (en) Self-moving robot
CN114594482A (en) Obstacle material detection method based on ultrasonic data and robot control method
CN112493926B (en) A robot of sweeping floor for scanning furniture bottom profile
US20230225580A1 (en) Robot cleaner and robot cleaner control method
CN113741441A (en) Operation method and self-moving equipment
KR20180080877A (en) Robot cleaner
CN118161087A (en) TOF optical system for sweeping robot and sweeping robot
CN105511468B (en) A kind of double reflection method of discrimination of light beam of laser radar and line-structured light vision system
EP4349234A1 (en) Self-moving device
CN112445208A (en) Robot, method and device for determining travel route, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Yong

Inventor after: Wu Zexiao

Inventor after: Zheng Zhifan

Inventor after: Luo Zhijia

Inventor before: Yang Yong

Inventor before: Wu Zexiao

Inventor before: Zheng Zhifan

Inventor before: Luo Zhijia