WO2023142711A1 - Robot control method and device, and robot and storage medium - Google Patents

Robot control method and device, and robot and storage medium

Info

Publication number
WO2023142711A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
scene
sensor
difficult
detected
Application number
PCT/CN2022/137587
Other languages
French (fr)
Chinese (zh)
Inventor
顾一休
Original Assignee
追觅创新科技(苏州)有限公司
Application filed by 追觅创新科技(苏州)有限公司
Publication of WO2023142711A1 publication Critical patent/WO2023142711A1/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/08 - Programme-controlled manipulators characterised by modular constructions
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J9/1674 - Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 - Avoiding collision or forbidden zones
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J13/00 - Controls for manipulators
    • B25J13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087 - Controls for manipulators by means of sensing devices for sensing other physical parameters, e.g. electrical or chemical properties

Definitions

  • the invention belongs to the technical field of robots, and in particular relates to a robot control method, device, robot and storage medium.
  • the present invention proposes a robot control method, device, robot and storage medium. This method can judge the current scene, and control the robot to perform different escape actions based on different scenes, thereby improving the cleaning effect.
  • a robot control method comprising:
  • the robot is controlled to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
  • the difficult scene types include one or a combination of full drop scenarios, partial drop scenarios, false trigger scenarios, and edge change scenarios.
  • the signal triggered by the sensor includes the look-down height of the obstacle below the robot and the drop position data used to indicate the location of the robot;
  • the methods for determining the complete drop scenario and the partial drop scenario include:
  • when the look-down height is less than the second height threshold, the current difficult scene is determined to be a partial fall scene in combination with the drop position data and the surrounding environment information.
  • when the difficult scene type is any single difficult scene, controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal includes:
  • if the difficult scene type is a complete fall scene, controlling the robot to leave the current difficult scene until it is detected that the signal triggered by the sensor returns to normal; and/or
  • if the difficult scene type is a partial fall scene, evaluating the safety of the current difficult scene, and controlling the robot to leave the current difficult scene when it is determined to be unsafe, until it is detected that the signal triggered by the sensor returns to normal; and/or
  • if the difficult scene type is a false trigger scene or an edge change scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal.
  • when the difficult scene type is a combination of multiple single difficult scenes, controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal includes:
  • the robot is controlled to perform the escape actions corresponding to each difficult scene in accordance with the priority order of each difficult scene until it is detected that the signal triggered by the sensor returns to normal.
  • if the difficult scene type is a false trigger scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal includes:
  • if the difficult scene type is an edge change scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal includes:
  • the robot is controlled to move forward or backward until it is detected that the signal triggered by the sensor returns to normal.
  • controlling the robot to leave the current difficult scene includes:
  • the safe side includes the boundary of the obstacle.
  • controlling the robot to leave the current difficult scene further includes:
  • the robot is controlled to back away from the current difficult scene based on the historical trajectory.
  • a robot control device comprising:
  • Detection module: used to detect the signal triggered by the sensor while the robot is travelling;
  • Processing module: used to determine the type of difficult scene the robot is currently in, in combination with the surrounding environment information acquired in real time, when it is detected that the signal triggered by the sensor is abnormal;
  • Acquisition module: used to acquire the corresponding escape strategy based on the difficult scene type;
  • Control module: used to control the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
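The four modules above map naturally onto a small control loop. The sketch below is illustrative only: the class, field and scene names are assumptions made for this example and are not taken from the disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List


class Scene(Enum):
    FULL_DROP = auto()
    PARTIAL_DROP = auto()
    FALSE_TRIGGER = auto()
    EDGE_CHANGE = auto()


@dataclass
class SensorFrame:
    look_down_height_mm: float  # reading from the downward-looking sensors
    wheel_loaded: bool          # wheel pressure / limit-switch state
    abnormal: bool              # True when any triggered signal is out of range


class RobotControlDevice:
    """Detection, processing, acquisition and control modules in one object."""

    def __init__(self,
                 read_sensors: Callable[[], SensorFrame],
                 classify: Callable[[SensorFrame], List[Scene]],
                 strategies: Dict[Scene, Callable[[], None]]) -> None:
        self.read_sensors = read_sensors  # detection module
        self.classify = classify          # processing module (folds in environment info)
        self.strategies = strategies      # acquisition module's lookup table

    def step(self) -> None:
        frame = self.read_sensors()                     # detect the triggered signals
        if not frame.abnormal:
            return                                      # nothing to do on normal signals
        scenes = self.classify(frame)                   # determine the difficult scene type(s)
        actions = [self.strategies[s] for s in scenes]  # acquire the escape strategies
        for act in actions:                             # control: execute until recovery
            while self.read_sensors().abnormal:
                act()
```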
  • a robot including a memory, a processor, and a computer program stored in the memory and operable on the processor.
  • when the processor executes the computer program, the method described in the first aspect is implemented.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the method according to the first aspect is implemented.
  • the present invention uses the sensor as a switch for abnormal signal detection.
  • after the sensor triggers an abnormal signal, the surrounding environment information is combined to determine the scene the robot is currently in, and the escape strategy corresponding to the current scene is determined so as to complete the escape action.
  • the present invention realizes the identification of different scenes, and helps the robot get out of trouble based on the algorithms corresponding to different scenes. The whole process is fast, efficient and adaptable;
  • the present invention can identify scenes in which stairs or cliffs may cause the robot to fall completely, scenes in which grooves or hollowed-out furniture may cause the robot to fall partially, scenes in which sensors are falsely triggered by dark materials and the like, and scenes in which changes at the edge of a carpet cause the bottom of the robot to roll up and leave it suspended;
  • the present invention distinguishes the complete drop scene and the partial drop scene mainly through different height thresholds combined with the drop position, surrounding environment information, etc., ensuring the correct identification of the two scenes;
  • the present invention realizes different escape methods for different scenes.
  • a complete fall scene cannot be crossed, so the robot is directly controlled to leave; a partial fall scene needs to be analysed in combination with the surrounding environment information, and if it cannot be crossed the robot is controlled to leave;
  • for a false trigger scene or an edge change scene, the falsely triggered signal is restored to normal by controlling the robot to rotate, move forward or backward;
  • the present invention also realizes the control when multiple scenes are combined, that is, the escape actions corresponding to each scene can be executed in accordance with the priority order of each scene, so as to help the robot escape smoothly;
  • when a false trigger scene caused by a dark carpet or the like is encountered, the present invention rotates the robot by a preset angle and controls it to move forward or backward according to the position of the sensor that triggered the abnormal signal, so that the false triggering of the sensor can be cleared;
  • when an edge change scene such as the edge of a carpet is encountered, the present invention determines the rolled-up position according to the position of the sensor that triggered the abnormal signal and controls the two wheels of the robot at different speeds until the robot safely returns to flat ground, which makes the whole process more stable;
  • when controlling the robot to leave the current scene, the present invention first judges whether the robot can rotate; when the robot cannot rotate, the robot is controlled to leave based on the point cloud data of the safe side behind it, and when there is no safe side behind it the robot is controlled to leave along its historical trajectory, which ensures that the robot backs away smoothly.
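As a concrete illustration of this fallback (rotate if possible, otherwise back out along a rear safe side seen in the point cloud, otherwise retrace the entry trajectory), here is a minimal sketch; the function names, the waypoint representation and the retreat planner are assumptions, not the actual implementation.

```python
from typing import Callable, List, Optional, Sequence, Tuple

Point = Tuple[float, float]  # (x, y) in the map frame


def leave_difficult_scene(can_rotate: bool,
                          rear_safe_side: Optional[Sequence[Point]],
                          entry_history: Sequence[Point],
                          drive_to: Callable[[Point], None],
                          rotate_and_leave: Callable[[], None]) -> None:
    """Back the robot out of the current difficult scene."""
    if can_rotate:
        rotate_and_leave()  # normal case: turn around and drive away
        return
    if rear_safe_side:
        # Retreat along waypoints derived from the obstacle-boundary points behind the robot.
        for waypoint in plan_retreat_along_boundary(rear_safe_side):
            drive_to(waypoint)
        return
    # No safe side behind: retrace the recorded entry trajectory in reverse.
    for waypoint in reversed(entry_history):
        drive_to(waypoint)


def plan_retreat_along_boundary(boundary: Sequence[Point],
                                margin_m: float = 0.05) -> List[Point]:
    # Placeholder planner: keep a small margin from each boundary point; a real
    # planner would use the full point cloud geometry.
    return [(x, y - margin_m) for (x, y) in boundary]
```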
  • FIG. 1 is an exemplary flowchart of a robot control method according to an embodiment of the present disclosure
  • Fig. 2 is a schematic diagram of a cliff scene according to an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of a track groove scene according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of a carpet edge scene according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a robot control device according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
  • the sweeper encounters various obstacles when cleaning. Usually, when encountering obstacles such as tables, chairs, stairs, or cliffs, it can identify and select a corresponding escape method.
  • the applicant of the present application creatively thought of classifying the difficult scenes in the household working area and setting a processing method corresponding to each difficult scene, so that during subsequent cleaning the robot can quickly judge which scene it is in and execute the corresponding escape method, reducing the risk of the robot getting stuck.
  • FIG. 1 shows an exemplary flowchart of a robot control method according to an embodiment of the present disclosure, and the details of the robot control method are as follows:
  • Step 101 detecting the signal triggered by the sensor when the robot is moving.
  • the above-mentioned sensors are used to collect motion parameters of the robot and various data about the environment, and can be one or more of various sensors such as a laser radar, a camera, an infrared sensor or a pressure sensor. It should be understood that the sensors are not limited thereto, and those skilled in the art can select appropriate sensors according to actual needs. Moreover, those skilled in the art can also install different sensors at different positions of the robot (such as the front, the sides, the bottom, and so on) according to actual needs to obtain data from different directions/positions.
  • in this embodiment, in order for the robot to recognize scenes better, the following processing step is also included before step 101:
  • before the robot cleans, it first needs to obtain the approximate zoning information of the home area and build a home map model. Specifically, the robot can be driven around each room/cleaning area to collect the relevant data and build the map model, and subsequent cleaning work can then be carried out better on the basis of this model.
  • Step 102 when it is detected that the signal triggered by the sensor is abnormal, combined with the surrounding environment information acquired in real time, determine the type of difficult scene where the robot is currently located.
  • the signal triggered by the above sensor includes the look-down height of the obstacle below the robot and the drop position data used to indicate the location of the robot.
  • the sensor for detecting the look-down height may be an infrared sensor, and the infrared sensor is installed at the bottom of the robot for height scanning during the driving of the robot.
  • the infrared sensor includes a transmitter and a receiver. The transmitter emits an infrared signal of a specific frequency, and the receiver receives infrared signals of that frequency. When the infrared detection direction encounters an obstacle, the infrared signal is reflected back and received by the receiver, and the height of the robot above the obstacle can then be calculated.
  • the number of infrared sensors installed can be determined according to the cleaning environment and the structure of the sensor itself. For example, four infrared sensors are usually installed at the bottom of the robot, namely: two sides in front of the bottom and two behind the bottom. The four infrared sensors can detect various scenes such as stairs and cliffs to help the robot not fall in such scenes.
  • in this embodiment, in order to better recognize the various scenes, in addition to the two infrared sensors at the front sides of the bottom and the two at the rear of the bottom, an infrared sensor is also installed on each of the two sides of the bottom of the robot, that is, six infrared sensors in total are used to obtain the height of the robot above the obstacles below it at different positions.
  • the above-mentioned triggered drop position data can be obtained after being identified and processed by sensors that sense whether the robot is suspended in the air or whether it is hit by a collision. Sensors for sensing whether the robot is suspended/knocked can be installed on the two wheels at the bottom of the robot, so that it can sense whether the wheels are under pressure.
  • the aforementioned sensors installed on the wheels of the robot may be pressure sensors.
  • when the robot is travelling normally, its wheels are under pressure and the pressure sensor detects pressure data; when the robot encounters a scene such as a cliff or stairs, the wheels may lose contact with the ground, so the pressure sensor installed at the bottom of the robot detects no pressure data. In that case the robot quickly determines its current position on the map, that is, it records the drop position data, which is then used to determine whether the robot is in a difficult scene.
  • the above-mentioned sensors installed on the wheels of the robot may also be mechanical switches, such as limit switches.
  • the limit switch includes an operating head and a contact system. When the wheel is under pressure, the operating head is pressed and drives the contact system to output a connection signal; when the wheel is not under pressure, the circuit is disconnected. Once the circuit is disconnected, the drop position data can be recorded, and this data can likewise be used later to determine whether the robot is in a difficult scene.
  • the surrounding environment information acquired in real time includes: map information, point cloud information of obstacles, room partition information, and the like.
  • the above map information and room partition information can be obtained before the robot cleans for the first time. The room partition information only describes the general layout of the rooms and gives the robot an initial understanding of the cleaning area, but the robot does not yet know about the furniture and furnishings inside the rooms. Because of this, it is also necessary to obtain the point cloud information of obstacles in real time during the cleaning process, so that better cleaning can be achieved.
  • the laser radar is installed directly in front of the robot and obtains obstacle information through continuous scanning.
  • the specific process of using the laser radar to obtain point cloud data is as follows:
  • Lidar includes a laser and a receiving system.
  • the laser generates and emits light pulses.
  • the light pulse will hit the obstacle and reflect back, and finally received by the receiver.
  • the receiver can accurately measure the travel time of the light pulse from when it is emitted to when it is reflected back. Given that the speed of light is known, the distance to the obstacle can be calculated. Combined with the height of the laser and the laser scanning angle, the three-dimensional coordinates of the light spot indicated by each obstacle can be accurately calculated, that is, the point cloud data.
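For illustration, a minimal sketch of the computation just described (range from the pulse's round-trip time, then a 3-D point from the scan angle and the mounting height of the laser) is given below; the coordinate convention and the example numbers are assumptions.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s


def lidar_point(travel_time_s: float, scan_angle_rad: float,
                sensor_height_m: float) -> tuple:
    """Convert one echo into an (x, y, z) point in the robot frame."""
    distance = SPEED_OF_LIGHT * travel_time_s / 2.0   # out-and-back path, so halve it
    x = distance * math.cos(scan_angle_rad)           # forward component
    y = distance * math.sin(scan_angle_rad)           # lateral component
    z = sensor_height_m                               # assumed planar scan at the laser's height
    return (x, y, z)


# Example: an echo returning after 20 ns at a 10-degree scan angle, laser mounted 9 cm high
print(lidar_point(20e-9, math.radians(10.0), 0.09))
```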
  • the AI sensor is also installed directly in front of the robot to take pictures of the scene, such as real-time images of various furniture in the room, and process them.
  • the line laser sensor can be installed directly in front of the robot near the bottom to capture some scenes that are slightly higher than the machine wheels, such as steps and so on.
  • the type of difficult scene where the robot is currently located can be determined.
  • the type of difficult scene includes one or a combination of a full drop scene, a partial drop scene, a false trigger scene, and an edge change scene.
  • the full fall scene is a scene where stairs or cliffs may cause the robot to fall completely, refer to Figure 2, which is a schematic diagram of the cliff scene;
  • Partial drop scenarios are scenarios where grooves or hollowed-out furniture may cause the robot to partially fall.
  • Figure 3 is a schematic diagram of a track groove scenario
  • the false triggering scene is the scene where the sensor is falsely triggered due to dark materials or long-haired carpets;
  • the edge change scene refers to the scene where the change of the edge of the carpet causes the bottom of the robot to roll up and cause the robot to hang in the air.
  • Figure 4 is a schematic diagram of the scene of the edge of the carpet.
  • the methods for determining the complete drop scenario and the partial drop scenario include:
  • when the look-down height is less than the second height threshold, the current difficult scene is determined to be a partial fall scene by combining the drop position data and the surrounding environment information.
  • in a complete fall scene the robot cannot cross over, and once it falls it cannot climb back up automatically.
  • the drop in a partial fall scene is usually lower than that of stairs or a cliff, and the robot may be able to cross it. Therefore, in order to correctly distinguish the two different scenes, different height thresholds can be set.
  • Figure 2 and Figure 3 are the schematic diagrams of the complete drop scene and the partial drop scene respectively.
  • the look-down height can first be compared with the first height threshold and the second height threshold, and, combined with the surrounding environment data, the scene in front of the travelling robot can be determined.
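A hedged sketch of this threshold comparison is shown below; the threshold values and the way the drop-position and environment checks are folded in are assumptions, since the disclosure only specifies "greater than the first threshold" and "less than the second threshold".

```python
from typing import Optional


def classify_drop(look_down_height_mm: float,
                  first_threshold_mm: float,
                  second_threshold_mm: float,
                  drop_confirmed_by_position_and_env: bool) -> Optional[str]:
    """Return 'full_drop', 'partial_drop' or None from the look-down height."""
    assert first_threshold_mm >= second_threshold_mm
    if not drop_confirmed_by_position_and_env:
        return None                 # drop position / environment data do not confirm a drop
    if look_down_height_mm > first_threshold_mm:
        return "full_drop"          # e.g. stairs or a cliff: too deep to cross
    if look_down_height_mm < second_threshold_mm:
        return "partial_drop"       # e.g. a rail groove or hollowed-out furniture
    return None                     # heights in between are left undecided in this sketch
```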
  • Step 103 obtain the corresponding escape strategy based on the type of difficult scene.
  • the escape algorithm corresponding to each scene is stored in the robot controller; after the type of difficult scene is determined, the corresponding escape strategy can be obtained from the correspondence table between difficult scene types and escape strategies, as sketched below.
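A minimal sketch of such a correspondence table is given below; the strategy function names are placeholders for the behaviours described in this disclosure, not the controller's actual routines.

```python
def leave_scene() -> None:
    ...  # complete fall scene: drive away from the drop


def evaluate_then_leave_or_cross() -> None:
    ...  # partial fall scene: assess safety, then leave or cross


def rotate_then_nudge() -> None:
    ...  # false trigger scene: rotate, then move forward or backward


def rotate_then_differential_drive() -> None:
    ...  # edge change scene: rotate, then drive the wheels at different speeds


ESCAPE_STRATEGIES = {
    "full_drop": leave_scene,
    "partial_drop": evaluate_then_leave_or_cross,
    "false_trigger": rotate_then_nudge,
    "edge_change": rotate_then_differential_drive,
}


def get_escape_strategy(scene_type: str):
    """Look up the escape strategy for a determined difficult scene type."""
    return ESCAPE_STRATEGIES[scene_type]
```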
  • Step 104: controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
  • step 104 includes:
  • if the difficult scene type is a complete fall scene, control the robot to leave the current difficult scene until it is detected that the signal triggered by the sensor returns to normal;
  • if the difficult scene type is a partial fall scene, evaluate the safety of the current difficult scene; if it is determined to be unsafe, control the robot to leave the current difficult scene until it is detected that the signal triggered by the sensor returns to normal, and if it is determined to be safe, control the robot to cross the drop area until it is detected that the signal triggered by the sensor returns to normal;
  • if the difficult scene type is a false trigger scene or an edge change scene, control the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal.
  • the present invention realizes different methods of getting out of trouble in different scenes.
  • a complete fall scene cannot be crossed, so the robot is directly controlled to leave; a partial fall scene needs to be analysed in combination with the surrounding environment information, and if it cannot be crossed the robot is controlled to leave;
  • for false trigger scenes or edge change scenes, the robot is controlled to rotate, move forward or backward to bring it back onto flat ground and restore the falsely triggered signal to normal.
  • controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal specifically includes:
  • because of the colour or material of the surface, the signal triggered by the sensor may become abnormal, and the current location may be mistakenly judged to be, for example, a staircase or a cliff.
  • in this case, first control the robot to rotate by the preset angle and then control the robot to move forward or backward. For example, if the sensor that triggered the abnormal signal is at the rear, the robot can be controlled to move forward a short distance; if the front sensor triggered the abnormal signal, the robot can be controlled to retreat a short distance.
  • in this way, the false triggering of the sensor is cleared by rotating, moving forward or retreating.
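A hedged sketch of this recovery move is shown below; the robot interface (rotate/move), the preset angle and the nudge distance are assumptions for illustration.

```python
def clear_false_trigger(robot, triggered_sensor_position: str,
                        preset_angle_deg: float = 30.0,
                        nudge_m: float = 0.05) -> None:
    """Rotate by a preset angle, then nudge away from the falsely triggered sensor."""
    robot.rotate(preset_angle_deg)        # angle chosen from the surrounding environment info
    if triggered_sensor_position.startswith("front"):
        robot.move(-nudge_m)              # a front sensor fired: back up a little
    else:
        robot.move(+nudge_m)              # a rear or side sensor fired: creep forward
    # The caller repeats this until the sensor signal reads normal again.
```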
  • if the difficult scene type is an edge change scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal includes:
  • controlling the robot to move forward or backward at the respective speeds of the wheel on the suspended side and the wheel on the non-suspended side until it is detected that the signal triggered by the sensor returns to normal.
  • the rolled-up position is determined according to the position of the sensor that triggered the abnormal signal, and the two wheels of the robot are driven at different speeds until the robot safely returns to flat ground, which makes the whole process more stable.
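The per-wheel control can be sketched as below; which side gets the higher speed, the speed values and the drive API are assumptions made for this example.

```python
def drive_off_carpet_edge(robot, suspended_side: str,
                          fast_mps: float = 0.15, slow_mps: float = 0.05) -> None:
    """Drive the two wheels at different speeds until the chassis is flat again."""
    # Assumption: the grounded wheel is driven faster so the chassis pivots back
    # onto flat ground; the disclosure only says the two wheels get different speeds.
    left, right = (slow_mps, fast_mps) if suspended_side == "left" else (fast_mps, slow_mps)
    while robot.sensor_abnormal():
        robot.set_wheel_speeds(left, right)   # assumed two-wheel differential-drive API
    robot.set_wheel_speeds(0.0, 0.0)          # stop once the signal is back to normal
```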
  • step 104 includes:
  • the robot is controlled to execute the escape actions corresponding to the respective difficult scenes in the priority order of the difficult scenes until it is detected that the signal triggered by the sensor returns to normal.
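A minimal sketch of this priority-ordered execution is given below; the particular ranking and the retry limit are assumptions, since the disclosure only states that a priority order is followed.

```python
from typing import Callable, Dict, Iterable

# Assumed ranking: deeper/riskier scenes are handled first (lower number = higher priority).
PRIORITY: Dict[str, int] = {
    "full_drop": 0, "partial_drop": 1, "edge_change": 2, "false_trigger": 3,
}


def escape_combined(scene_types: Iterable[str],
                    strategies: Dict[str, Callable[[], None]],
                    signal_abnormal: Callable[[], bool],
                    max_attempts: int = 5) -> bool:
    """Execute each scene's escape action in priority order until signals recover."""
    for scene in sorted(scene_types, key=lambda s: PRIORITY.get(s, 99)):
        attempts = 0
        while signal_abnormal() and attempts < max_attempts:
            strategies[scene]()   # one escape action for this scene
            attempts += 1
        if not signal_abnormal():
            return True           # all triggered signals are back to normal
    return False                  # still abnormal after trying every detected scene
```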
  • controlling the robot to leave the current difficult scene includes:
  • the safe side includes the boundary of the obstacle.
  • the present invention uses the sensor as a switch for abnormal signal detection.
  • after the sensor triggers an abnormal signal, the surrounding environment information is combined to determine the scene the robot is currently in, and the escape strategy corresponding to the current scene is determined so as to complete the escape action.
  • the present invention realizes the identification of different scenes, and helps the robot get out of trouble based on algorithms corresponding to different scenes, and the whole process is fast, efficient and adaptable.
  • as an implementation of the method shown in FIG. 1 above, an embodiment of a robot control device is provided;
  • the device embodiment corresponds to the method embodiment shown in FIG. 1, as shown in FIG. 5.
  • the robot control device of this embodiment includes:
  • Detection module 501: used to detect the signal triggered by the sensor while the robot is travelling;
  • Processing module 502: used to determine the type of difficult scene the robot is currently in, in combination with the surrounding environment information acquired in real time, when it is detected that the signal triggered by the sensor is abnormal;
  • Obtaining module 503: used to obtain the corresponding escape strategy based on the difficult scene type;
  • Control module 504: used to control the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
  • the aforementioned difficult scene types include a combination of one or more of a complete drop scene, a partial drop scene, a false trigger scene, and an edge change scene.
  • the signal triggered by the above sensor includes the height of the obstacle below the robot and the drop position data used to indicate the position of the robot; the above processing module 502 is specifically used for:
  • when the look-down height is less than the second height threshold, the current difficult scene is determined to be a partial fall scene by combining the drop position data and the surrounding environment information.
  • control module 504 is specifically configured to:
  • if the difficult scene type is a complete fall scene, control the robot to leave the current difficult scene until it is detected that the signal triggered by the sensor returns to normal; and/or is used to:
  • if the difficult scene type is a partial fall scene, evaluate the safety of the current difficult scene, and control the robot to leave the current difficult scene when it is determined to be unsafe, until it is detected that the signal triggered by the sensor returns to normal; and/or is used to:
  • if the difficult scene type is a false trigger scene or an edge change scene, control the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal.
  • control module 504 is specifically configured to:
  • the robot is controlled to execute the escape actions corresponding to the respective difficult scenes in the priority order of the difficult scenes until it is detected that the signal triggered by the sensor returns to normal.
  • the control module 504 is further specifically configured to:
  • if the difficult scene type is an edge change scene, control the robot to rotate by the preset angle according to the surrounding environment information;
  • control the robot to move forward or backward at the respective speeds of the wheel on the suspended side and the wheel on the non-suspended side until it is detected that the signal triggered by the sensor returns to normal.
  • control module 504 is further specifically configured to:
  • the safe side includes the boundary of the obstacle.
  • control module 504 is further specifically configured to:
  • Fig. 6 discloses a schematic diagram of a robot provided by an embodiment of the present invention.
  • the robot includes: a memory 61, a processor 62, and a computer program 63 stored in the memory 61 and operable on the processor 62, such as a program of a robot control method.
  • when the processor 62 executes the computer program 63, the steps in the above-mentioned embodiment of the robot control method are implemented, for example steps 101 to 103 shown in FIG. 1.
  • when the processor 62 executes the computer program 63, the functions of the modules in the above-mentioned robot control device embodiment, such as the functions of modules 501 to 504 shown in FIG. 5, are realized.
  • the above-mentioned robot also includes a measuring element 64 and a motion unit 65 .
  • the measuring element 64 can be a radar, a sensor, etc.; wherein the radar can be a laser radar or an infrared radar, and the laser radar can be a single-line radar or a multi-line radar.
  • the motion unit 65 is used to control the motion of the robot.
  • the processor 62 can be a central processing unit (Central Processing Unit, CPU), and can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 61 may be an internal storage unit of the robot, such as a hard disk or internal memory of the robot. The memory 61 may also be an external storage device of the robot, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the robot. Further, the memory 61 may also include both an internal storage unit of the robot and an external storage device. The memory 61 is used to store the computer program and other programs and data required by the robot, and may also be used to temporarily store data that has been output or will be output.
  • Fig. 6 is only an example of a robot and does not constitute a limitation on the robot, which may include more or fewer components than those shown in the figure, or combine certain components, or have different components;
  • the robot may also include input and output devices, network access devices, buses, and the like.
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • the program part of the technology may be regarded as a "product" or "article" existing in the form of executable code and/or related data, which is embodied in or realized through a computer-readable medium.
  • Tangible, permanent storage media may include the internal memory or storage used by any computer, processor, or similar device or related modules. For example, various semiconductor memories, tape drives, disk drives, or any similar device that provides storage for software.
  • All or portions of the Software may from time to time communicate over a network, such as the Internet or other communication network. Such communications may load software from one computer device or processor to another. Therefore, another medium that can transmit software elements can also be used as a physical connection between local devices, such as light waves, radio waves, electromagnetic waves, etc., and can be transmitted through cables, optical cables, or air.
  • the physical medium used for carrier waves, such as electrical cables, wireless links, or fiber optic cables, may also be considered software-carrying medium.
  • unless limited to a tangible "storage" medium, other terms referring to a computer or machine "readable medium" mean media that participate in the execution of any instructions by a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A robot control method, comprising detecting a signal triggered by a sensor during travelling of a robot; when it is detected that the signal triggered by the sensor is abnormal, determining, on the basis of surrounding environment information obtained in real time, the type of a trap scenario where the robot is currently located; obtaining a corresponding escape strategy on the basis of the type of the trap scenario; and controlling the robot to execute an escape action according to the escape strategy corresponding to the trap scenario until it is detected that the signal triggered by the sensor is recovered to be normal. Recognition of different scenarios can be realized, and the robot is helped to escape on the basis of algorithms corresponding to different scenarios. The present invention further relates to a robot control device, a robot, and a storage medium.

Description

Robot control method, device, robot and storage medium
This disclosure claims the priority of the Chinese patent application filed with the China Patent Office on January 27, 2022, with application number 202210098722.X and entitled "Robot control method, device, robot and storage medium"; the entire content of that application is incorporated into this disclosure by reference.
Technical Field
The invention belongs to the technical field of robots, and in particular relates to a robot control method, device, robot and storage medium.
Background
With the development of the social economy and the improvement of people's living standards, people's requirements for their living and working environments are getting higher and higher. In order to reduce the burden of cleaning homes and workplaces and to ease the fatigue of the cleaning process, various floor cleaning products have emerged, such as sweeping robots.
A sweeping robot encounters all kinds of obstacles while cleaning. When it encounters obstacles such as tables, chairs, stairs or cliffs, it can identify them well and move away from them by detouring or backing up. In real life, however, there are not only stairs and cliffs but also scenes such as slide-rail grooves, hollowed-out furniture, and various dark or long-pile carpets. In scenes such as grooves, hollows and tapered furniture that are slightly higher than the robot's wheels, the robot easily triggers a suspension alarm and moves away from the current scene; in addition, in scenes such as dark carpets and long-pile carpets, the robot's sensors are easily falsely triggered. Therefore, in these scenes the existing sweeping-robot control algorithms often fail to identify the situation correctly, and the resulting misjudgments affect the cleaning effect.
Summary of the Invention
In order to solve the problems of the prior art, the present invention proposes a robot control method, device, robot and storage medium. The method can judge the scene the robot is currently in and control the robot to perform different escape actions for different scenes, thereby improving the cleaning effect.
The specific technical solutions provided by the embodiments of the present invention are as follows:
In a first aspect, a robot control method is provided, the method comprising:
detecting the signal triggered by a sensor while the robot is travelling;
when it is detected that the signal triggered by the sensor is abnormal, determining, in combination with the surrounding environment information acquired in real time, the type of difficult scene the robot is currently in;
acquiring a corresponding escape strategy based on the difficult scene type; and
controlling the robot to execute an escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
In some embodiments, the difficult scene type includes one or a combination of a complete fall scene, a partial fall scene, a false trigger scene and an edge change scene.
In some embodiments, the signal triggered by the sensor includes the look-down height of the obstacle below the robot and drop position data used to indicate the location of the robot;
the method for determining the complete fall scene and the partial fall scene includes:
obtaining the results of comparing the look-down height with a first height threshold and a second height threshold respectively, wherein the first height threshold is greater than or equal to the second height threshold;
when the look-down height is greater than the first height threshold, determining, in combination with the drop position data and the surrounding environment information, that the current difficult scene is a complete fall scene;
when the look-down height is less than the second height threshold, determining, in combination with the drop position data and the surrounding environment information, that the current difficult scene is a partial fall scene.
In some embodiments, when the difficult scene type is any single difficult scene, controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal includes:
if the difficult scene type is a complete fall scene, controlling the robot to leave the current difficult scene until it is detected that the signal triggered by the sensor returns to normal; and/or
if the difficult scene type is a partial fall scene, evaluating the safety of the current difficult scene, and controlling the robot to leave the current difficult scene when it is determined to be unsafe, until it is detected that the signal triggered by the sensor returns to normal; and/or
if the difficult scene type is a false trigger scene or an edge change scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal.
In some embodiments, when the difficult scene type is a combination of multiple single difficult scenes, controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal includes:
controlling the robot to execute the escape actions corresponding to the respective difficult scenes in the priority order of the difficult scenes until it is detected that the signal triggered by the sensor returns to normal.
In some embodiments, if the difficult scene type is a false trigger scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal includes:
controlling the robot to rotate by a preset angle according to the surrounding environment information;
determining the position of the sensor that triggered the abnormal signal, and controlling the robot to move forward or backward based on the position information until it is detected that the signal triggered by the sensor returns to normal.
In some embodiments, if the difficult scene type is an edge change scene, controlling the robot to rotate, move forward or backward until it is detected that the signal triggered by the sensor returns to normal includes:
controlling the robot to rotate by a preset angle according to the surrounding environment information;
determining the suspended side and the non-suspended side of the robot based on the position of the sensor that triggered the abnormal signal;
controlling the robot to move forward or backward at the respective speeds of the wheel on the suspended side and the wheel on the non-suspended side until it is detected that the signal triggered by the sensor returns to normal.
In some embodiments, controlling the robot to leave the current difficult scene includes:
judging whether the robot is able to rotate;
when it is determined that the robot cannot rotate, acquiring point cloud data of the safe side behind the robot;
controlling the robot to back away from the current difficult scene based on the point cloud data of the safe side;
wherein the safe side includes the boundary of an obstacle.
In some embodiments, controlling the robot to leave the current difficult scene further includes:
when there is no safe side behind the robot, acquiring the historical trajectory along which the robot entered the current difficult scene;
controlling the robot to back away from the current difficult scene based on the historical trajectory.
In a second aspect, a robot control device is provided, the device comprising:
a detection module, used to detect the signal triggered by the sensor while the robot is travelling;
a processing module, used to determine, in combination with the surrounding environment information acquired in real time, the type of difficult scene the robot is currently in when it is detected that the signal triggered by the sensor is abnormal;
an acquisition module, used to acquire the corresponding escape strategy based on the difficult scene type; and
a control module, used to control the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until it is detected that the signal triggered by the sensor returns to normal.
In a third aspect, a robot is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the method according to the first aspect is implemented when the processor executes the computer program.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect.
The embodiments of the present invention have the following beneficial effects:
1. The present invention uses the sensor as a switch for abnormal signal detection: after the sensor triggers an abnormal signal, the surrounding environment information is combined to determine the scene the robot is currently in, and the escape strategy corresponding to the current scene is determined so as to complete the escape action. Compared with the prior art, the present invention realizes the identification of different scenes and helps the robot get out of trouble based on the algorithm corresponding to each scene; the whole process is fast, efficient and adaptable.
2. The present invention can identify scenes in which stairs or cliffs may cause the robot to fall completely, scenes in which grooves or hollowed-out furniture may cause the robot to fall partially, scenes in which sensors are falsely triggered by dark materials and the like, and scenes in which changes at the edge of a carpet cause the bottom of the robot to roll up and leave it suspended.
3. The present invention distinguishes the complete fall scene from the partial fall scene mainly through different height thresholds combined with the drop position, the surrounding environment information and the like, ensuring the correct identification of the two scenes.
4. The present invention realizes different escape methods for different scenes. A complete fall scene cannot be crossed, so the robot is directly controlled to leave; a partial fall scene needs to be analysed in combination with the surrounding environment information, and if it cannot be crossed the robot is controlled to leave; for a false trigger scene or an edge change scene, the falsely triggered signal is restored to normal by controlling the robot to rotate, move forward or backward.
5. The present invention also realizes control when multiple scenes are combined, that is, the escape actions corresponding to the respective scenes can be executed in the priority order of the scenes, which also helps the robot escape smoothly.
6. When encountering a false trigger scene caused by a dark carpet or the like, the present invention rotates the robot by a preset angle and controls it to move forward or backward according to the position of the sensor that triggered the abnormal signal, so that the false triggering of the sensor can be cleared.
7. When encountering an edge change scene such as the edge of a carpet, the present invention determines the rolled-up position according to the position of the sensor that triggered the abnormal signal and controls the two wheels of the robot at different speeds until the robot safely returns to flat ground, which makes the whole process more stable.
8. When controlling the robot to leave the current scene, the present invention first judges whether the robot can rotate; when the robot cannot rotate, the robot is controlled to leave based on the point cloud data of the safe side behind it, and when there is no safe side behind it the robot is controlled to leave along its historical trajectory, which ensures that the robot backs away smoothly.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is an exemplary flowchart of a robot control method according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a cliff scene according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a track groove scene according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a carpet edge scene according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a robot control device according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
Detailed Description of the Embodiments
In order to make the purpose, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As described in the background, a sweeping robot encounters various obstacles when cleaning; it can usually identify obstacles such as tables, chairs, stairs or cliffs and select a corresponding way to avoid or escape them. In real life, however, because the working areas of different households vary widely, there are not only stairs and cliffs but also scenes such as slide-rail grooves, hollowed-out furniture, and various dark or long-pile carpets. For these scenes, the prior art has no corresponding control algorithm to help the robot get out of trouble, which affects the cleaning effect.
To solve the above problem, the applicant of the present application creatively thought of classifying the difficult scenes in the household working area and setting a processing method corresponding to each difficult scene, so that during subsequent cleaning the robot can quickly judge which scene it is in and execute the corresponding escape method, reducing the risk of the robot getting stuck.
Fig. 1 shows an exemplary flowchart of a robot control method according to an embodiment of the present disclosure; the robot control method is detailed as follows:
Step 101: detect the signal triggered by the sensor while the robot is travelling.
The above sensors are used to collect motion parameters of the robot and various data about the environment, and may be one or more of various sensors such as a laser radar, a camera, an infrared sensor or a pressure sensor. It should be understood that the sensors are not limited thereto, and those skilled in the art can select appropriate sensors according to actual needs. Moreover, those skilled in the art can also install different sensors at different positions of the robot (such as the front, the sides, the bottom, and so on) according to actual needs, so as to obtain data from different directions/positions.
In this embodiment, in order for the robot to recognize scenes better, the following processing step is also included before step 101:
acquire room partition information and build a map model.
Before the robot cleans, it first needs to obtain the approximate zoning information of the home area and build a home map model. Specifically, the robot can be driven around each room/cleaning area to collect the relevant data and build the map model, and subsequent cleaning work can then be carried out better on the basis of this model.
步骤102、当检测到传感器触发的信号处于异常时,结合实时获取的周围环境信息,确定机器人当前所处的困难场景类型。Step 102, when it is detected that the signal triggered by the sensor is abnormal, combined with the surrounding environment information acquired in real time, determine the type of difficult scene where the robot is currently located.
在一些实施例中,上述传感器触发的信号包括机器人下方障碍物的下视高度以及用于指示机器人所在位置的跌落位置数据。In some embodiments, the signal triggered by the above sensor includes the look-down height of the obstacle below the robot and the drop position data used to indicate the location of the robot.
其中,检测下视高度的传感器可以为红外传感器,红外传感器安装在机器人底部,用于在机器人行驶过程中进行高度扫描。具体的,红外传感器包括发射器与接收器,发射器发射特定频率的红外信号,接收器接收这种频率的红外信号,当红外的检测方向遇到障碍物时,红外信号反射回来被接收器接收,进而可以计算得到机器人距障碍物的高度。Wherein, the sensor for detecting the look-down height may be an infrared sensor, and the infrared sensor is installed at the bottom of the robot for height scanning during the driving of the robot. Specifically, the infrared sensor includes a transmitter and a receiver. The transmitter emits an infrared signal of a specific frequency, and the receiver receives the infrared signal of this frequency. When the infrared detection direction encounters an obstacle, the infrared signal is reflected back and received by the receiver. , and then the height of the robot from the obstacle can be calculated.
红外传感器的安装数量可以根据清扫环境、传感器自身结构等来决定。如,通常在机器人底部设置四个红外传感器,分别为:底部前方两侧、底部后方两个,通过四个红外传感器可以检测楼梯、悬崖等各种场景,帮助机器人不在此类场景中跌落。The number of infrared sensors installed can be determined according to the cleaning environment and the structure of the sensor itself. For example, four infrared sensors are usually installed at the bottom of the robot, namely: two sides in front of the bottom and two behind the bottom. The four infrared sensors can detect various scenes such as stairs and cliffs to help the robot not fall in such scenes.
在本实施例中,为了更好实现各种场景的识别,除了底部前方两侧、底部后方两个红外传感器之外,还在机器人底部的两个侧面分别安装一个红外传感器,即总共安装六个红外传感器来获取不同位置下的机器人距底部障碍物的高度信息。In this embodiment, in order to better recognize various scenes, in addition to the two infrared sensors at the front sides of the bottom and the two at the rear of the bottom, one infrared sensor is also installed on each of the two lateral sides of the robot's bottom; that is, a total of six infrared sensors are installed to obtain the height of the robot above obstacles below it at different positions.
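For illustration only, the following Python sketch shows how readings from the six downward-facing infrared sensors might be polled and screened; the `read_heights` callback, the position constants and the 35 mm threshold are assumptions and are not taken from the disclosure.

```python
# Minimal sketch: poll six downward-facing IR sensors and flag abnormal look-down heights.
FRONT_LEFT, FRONT_RIGHT, SIDE_LEFT, SIDE_RIGHT, REAR_LEFT, REAR_RIGHT = range(6)

def check_look_down(read_heights, normal_max_mm=35.0):
    """Return the set of sensor positions whose look-down height is abnormal."""
    heights = read_heights()  # hypothetical callback returning {position: height_in_mm}
    return {pos for pos, h in heights.items() if h > normal_max_mm}

# Example with stand-in data in place of real sensor readings:
fake_read = lambda: {FRONT_LEFT: 12.0, FRONT_RIGHT: 180.0, SIDE_LEFT: 11.0,
                     SIDE_RIGHT: 12.5, REAR_LEFT: 13.0, REAR_RIGHT: 12.0}
print(check_look_down(fake_read))  # {1} -> the front-right sensor sees a drop
```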
上述触发的跌落位置数据可以由感测机器人是否悬空/是否受到碰撞等传感器进行识别、处理后得到。用于感测机器人是否悬空/是否受到碰撞的传感器可以安装在机器人底部的两个轮子上,从而可以感测轮子是否受压。The above-mentioned triggered drop position data can be obtained after being identified and processed by sensors that sense whether the robot is suspended in the air or whether it is hit by a collision. Sensors for sensing whether the robot is suspended/knocked can be installed on the two wheels at the bottom of the robot, so that it can sense whether the wheels are under pressure.
例如,上述安装在机器人轮子上的传感器可以为压力传感器。在机器人正常行驶时,其轮子是处于受压状态,压力传感器会检测到压力数据;当遇到悬崖、楼梯等场景时,机器人轮子部分可能与地面不接触,那么安装在机器人底部的压力传感器就检测不到压力数据,此时迅速判断当前机器人在地图中的位置,即记录得到跌落位置数据,后续便可根据该数据来确定机器人是否处于困难场景。For example, the aforementioned sensors installed on the wheels of the robot may be pressure sensors. When the robot travels normally, its wheels are under pressure and the pressure sensors detect pressure data; when the robot encounters a scene such as a cliff or stairs, part of its wheels may lose contact with the ground, so the pressure sensors installed at the bottom of the robot detect no pressure data. At this moment, the robot's current position on the map is quickly determined and recorded as the drop position data, which can later be used to determine whether the robot is in a difficult scene.
此外,除了压力传感器之外,上述安装在机器人轮子上的传感器还可以为机械开关,如限位开关。限位开关包括操作头和触点系统,当轮子受压时,操作头被挤压,带动触点系统动作从而输出接通信号;当轮子不受压时,电路断开,一旦电路断开即可记录得到跌落位置数据,后续便同样可根据该数据来确定机器人是否处于困难场景。In addition, besides pressure sensors, the sensors installed on the wheels of the robot may also be mechanical switches, such as limit switches. A limit switch includes an operating head and a contact system: when the wheel is under pressure, the operating head is pressed and drives the contact system to output a closed-circuit signal; when the wheel is not under pressure, the circuit is opened. Once the circuit opens, the drop position data can be recorded and likewise used later to determine whether the robot is in a difficult scene.
除了压力传感器、机械开关之外,还可以由其他类型的传感器来获取跌落位置数据,本方案对传感器的类型不加以限制。在检测到传感器触发的信号处于异常时,便可结合周围环境信息,确定机器人当前所处的困难场景类型。In addition to pressure sensors and mechanical switches, other types of sensors can also be used to obtain drop position data, and this solution does not limit the type of sensors. When it is detected that the signal triggered by the sensor is abnormal, it can combine the surrounding environment information to determine the type of difficult scene the robot is currently in.
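As a minimal sketch of how the drop position data described above could be produced, the snippet below assumes hypothetical `wheel_loaded` and `current_pose` callbacks standing in for the wheel pressure sensor (or limit switch) and the localisation module; neither name comes from the disclosure.

```python
# Minimal sketch: record the drop position when a wheel sensor stops reporting load.
def detect_drop(wheel_loaded, current_pose):
    """Return the (x, y) drop position if any wheel is unloaded, else None."""
    left_ok, right_ok = wheel_loaded("left"), wheel_loaded("right")
    if left_ok and right_ok:
        return None           # both wheels on the ground: no drop event
    return current_pose()     # at least one wheel hangs: log the position on the map

# Example with stand-in callbacks (left wheel loaded, right wheel hanging):
print(detect_drop(lambda side: side == "left", lambda: (1.2, 3.4)))  # (1.2, 3.4)
```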
在一些实施例中,上述实时获取的周围环境信息包括:地图信息、障碍物的点云信息、房间分区信息等。In some embodiments, the surrounding environment information acquired in real time includes: map information, point cloud information of obstacles, room partition information, and the like.
具体的,上述地图信息、房间分区信息在机器人第一次清扫前即可获得。由于房间分区信息只是描述了房间的大致情况,让机器人初步了解清洁区域,但是房间内部的一些家具、摆设等情况机器人并不了解,基于此,还需要在清扫过程中实时获取障碍物的点云信息,由此才能实现更好地清扫。Specifically, the above map information and room partition information can be obtained before the robot cleans for the first time. Since the room partition information only describes the general layout of the rooms and gives the robot a preliminary understanding of the cleaning area, while the robot knows nothing about the furniture, furnishings and the like inside the rooms, point cloud information of obstacles also needs to be acquired in real time during cleaning so that better cleaning can be achieved.
在本实施例中,为了获取详尽的障碍物信息,通过三种不同的传感器来实现。分别为:安装在机器人正前方的激光雷达、AI摄像头、线激光传感器。In this embodiment, detailed obstacle information is obtained through three different sensors, namely a lidar, an AI camera and a line laser sensor installed directly in front of the robot.
其中,激光雷达安装于机器人的正前方并通过不断扫描获取障碍物信息,上述利用激光雷达获取点云数据的具体过程如下:Among them, the laser radar is installed directly in front of the robot and obtains obstacle information through continuous scanning. The specific process of using the laser radar to obtain point cloud data is as follows:
激光雷达包括激光器和接收系统,激光器产生并发射光脉冲,当存在障碍物时,光脉冲会打在障碍物表面并反射回来,最终被接收器所接收。接收器可以准确地测量光脉冲从发射到被反射回的传播时间。鉴于光速已知,因此可以计算得到距障碍物的距离,结合激光器的高度,激光扫描角度,从而可以准确地计算出每一个障碍物表面光斑的三维坐标,即点云数据。The lidar includes a laser and a receiving system. The laser generates and emits light pulses; when there is an obstacle, a light pulse hits the obstacle's surface, is reflected back, and is finally received by the receiver. The receiver can accurately measure the travel time of the light pulse from emission to reception of the reflection. Since the speed of light is known, the distance to the obstacle can be calculated, and combined with the height of the laser and the scanning angle, the three-dimensional coordinates of the light spot on each obstacle's surface can be accurately calculated, i.e. the point cloud data.
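The time-of-flight geometry described above can be summarised in a short sketch; the angle conventions and parameter names below are illustrative assumptions rather than the actual lidar interface.

```python
# Minimal sketch of converting one lidar echo into a 3D point in the robot frame.
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_point(round_trip_s, azimuth_rad, elevation_rad, sensor_height_m):
    """Convert one echo (round-trip time plus scan angles) into an (x, y, z) point."""
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0   # one-way range to the surface
    horiz = distance * math.cos(elevation_rad)
    x = horiz * math.cos(azimuth_rad)
    y = horiz * math.sin(azimuth_rad)
    z = sensor_height_m + distance * math.sin(elevation_rad)
    return (x, y, z)

# Example: an echo returning after about 13.3 ns corresponds to roughly 2 m of range.
print(tof_to_point(13.3e-9, math.radians(30), 0.0, 0.10))
```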
由于激光雷达安装角度或者其结构限制,可能会存在扫描盲区,为了弥补盲区遗憾,还利用AI摄像头、线激光传感器进行补充。Due to the installation angle of the lidar or its structural limitations, there may be scanning blind zones. To compensate for these blind zones, an AI camera and a line laser sensor are used as supplements.
AI摄像头同样安装在机器人正前方用于拍摄场景图片,如房间内的各种家具的实时图像,并进行处理。The AI camera is likewise installed directly in front of the robot and is used to capture scene images, such as real-time images of the various pieces of furniture in a room, and to process them.
线激光传感器可安装于机器人正前方靠近底部位置,用于获取一些比机器轮子稍高的场景,如台阶等等。The line laser sensor can be installed directly in front of the robot near the bottom, and is used to capture scenes that are slightly higher than the robot's wheels, such as steps.
在获取上述周围环境信息后,即可确定机器人当前所处的困难场景类型。After obtaining the above surrounding environment information, the type of difficult scene where the robot is currently located can be determined.
在一些实施例中,困难场景类型包括完全跌落场景、局部跌落场景、误触发场景、边缘变化场景中一种或者多种的组合。In some embodiments, the type of difficult scene includes one or a combination of a full drop scene, a partial drop scene, a false trigger scene, and an edge change scene.
其中,完全跌落场景为楼梯或悬崖可能导致机器人完全跌落的场景,参考图2,图2为悬崖场景示意图;Among them, the full fall scene is a scene where stairs or cliffs may cause the robot to fall completely, refer to Figure 2, which is a schematic diagram of the cliff scene;
局部跌落场景为凹槽或镂空家具等可能导致机器人局部跌落的场景,参考图3,图3为轨道凹槽场景示意图;Partial drop scenarios are scenarios where grooves or hollowed-out furniture may cause the robot to partially fall. Refer to Figure 3, which is a schematic diagram of a track groove scenario;
误触发场景为由于深色材质或长毛地毯等造成传感器误触发的场景;The false triggering scene is the scene where the sensor is falsely triggered due to dark materials or long-haired carpets;
边缘变化场景为在地毯边缘变化导致机器人底部卷起造成悬空等场景,参考图4,图4为地毯边缘场景示意图。An edge change scene is a scene in which a change at the edge of a carpet causes the bottom of the robot to ride up and hang in the air; refer to FIG. 4, which is a schematic diagram of a carpet edge scene.
当传感器触发的信号包括机器人下方障碍物的下视高度以及用于指示机器人所在位置的跌落位置数据时,上述完全跌落场景和局部跌落场景的确定方法包括:When the signal triggered by the sensor includes the look-down height of the obstacle below the robot and the drop position data used to indicate the location of the robot, the methods for determining the complete drop scenario and the partial drop scenario include:
获取下视高度分别与第一高度阈值和第二高度阈值比较后的比较结果;其中,第一高度阈值大于等于第二高度阈值;Obtain the comparison result after comparing the look-down height with the first height threshold and the second height threshold respectively; wherein, the first height threshold is greater than or equal to the second height threshold;
当下视高度大于第一高度阈值时,结合跌落位置数据、周围环境信息,确定当前困难场景为完全跌落场景;When the downward looking height is greater than the first height threshold, combined with the data of the falling position and the surrounding environment information, it is determined that the current difficult scene is a complete falling scene;
当下视高度小于第二高度阈值时,结合跌落位置数据、周围环境信息,确定当前困难场景为局部跌落场景。When the downward looking height is less than the second height threshold, the current difficult scene is determined to be a partial falling scene by combining the data of the falling position and the surrounding environment information.
通常来说,像楼梯、悬崖等场景,机器人是无法跨越的,一旦跌落就不可能自动爬起,比较危险,会对机器人造成一定冲击,甚至损坏机器人;对于凹槽或镂空家具等可能导致机器人局部跌落的场景,其在高度上通常会比楼梯、悬崖的高度要低,机器人有可能跨越过去,因此为了正确区分两种不同场景,可以通过设置不同的高度阈值来实现。Generally speaking, the robot cannot cross scenes such as stairs and cliffs; once it falls, it cannot climb back up by itself, which is dangerous and will subject the robot to a certain impact or even damage it. Scenes such as grooves or hollowed-out furniture that may cause a partial fall are usually lower in height than stairs or cliffs, and the robot may be able to cross them. Therefore, in order to correctly distinguish the two kinds of scenes, different height thresholds can be set.
图2和图3分别为完全跌落场景、局部跌落场景示意图,当机器人在两个场景中行驶时,当检测到传感器触发的下视高度以及跌落位置数据异常时,首先可以将下视高度与第一高度阈值、第二高度阈值进行比较,再结合周围环境数据,便可确定其行驶前方到底属于哪一种场景。FIG. 2 and FIG. 3 are schematic diagrams of a complete fall scene and a partial fall scene respectively. When the robot travels in either scene and the sensor-triggered look-down height and drop position data are detected to be abnormal, the look-down height is first compared with the first height threshold and the second height threshold, and then, combined with the surrounding environment data, the type of scene ahead of the robot can be determined.
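A minimal sketch of the threshold comparison is given below; the threshold values are invented for illustration, and the further combination with the drop position data and the surrounding environment information is left out of this simplified version.

```python
# Minimal sketch: distinguish full and partial drops by comparing the look-down height
# against two thresholds (first threshold >= second threshold).
from enum import Enum, auto

class Scene(Enum):
    FULL_DROP = auto()      # stairs or cliff: cannot be crossed
    PARTIAL_DROP = auto()   # groove or hollowed-out furniture: possibly crossable
    UNDECIDED = auto()      # needs drop position and environment data to decide

FIRST_THRESHOLD_MM = 80.0   # illustrative value only
SECOND_THRESHOLD_MM = 30.0  # illustrative value only

def classify_by_height(look_down_mm):
    if look_down_mm > FIRST_THRESHOLD_MM:
        return Scene.FULL_DROP
    if look_down_mm < SECOND_THRESHOLD_MM:
        return Scene.PARTIAL_DROP
    return Scene.UNDECIDED

print(classify_by_height(120.0))  # Scene.FULL_DROP
```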
步骤103、基于困难场景类型获取相对应的脱困策略。Step 103, obtain the corresponding escape strategy based on the type of difficult scene.
由于机器人控制器中存有每一场景对应的算法,因此在确定得到困难场景类型后,便可从困难场景类型与脱困策略的对应关系表中得到脱困算法。Since the algorithm corresponding to each scene is stored in the robot controller, once the difficult scene type has been determined, the escape algorithm can be obtained from the correspondence table between difficult scene types and escape strategies.
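Purely as an illustration of such a correspondence table, the snippet below maps scene types to strategy identifiers; the keys and strategy names are assumptions, not the actual table stored in the controller.

```python
# Minimal sketch: look up the escape strategy for a recognised difficult scene type.
ESCAPE_STRATEGIES = {
    "full_drop": "leave_scene",
    "partial_drop": "evaluate_then_cross_or_leave",
    "false_trigger": "rotate_then_nudge",
    "edge_change": "rotate_then_differential_drive",
}

def get_strategy(scene_type: str) -> str:
    """Return the strategy identifier associated with the given scene type."""
    return ESCAPE_STRATEGIES[scene_type]

print(get_strategy("partial_drop"))  # evaluate_then_cross_or_leave
```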
步骤104、控制机器人按照与困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常。Step 104, controlling the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until the signal triggered by the sensor is detected and returns to normal.
由于现实生活场景复杂,有可能出现单场景(如楼梯、悬崖)等情况,也有可能出现多场景(如在楼梯上铺上深色地毯)等情况,因此对于单场景和多场景有不同的处理方法,具体如下:Since real-life scenes are complex, a single scene (such as stairs or a cliff) may be encountered, or multiple scenes may be combined (such as stairs covered with a dark carpet); therefore, single scenes and combined scenes are handled differently, as follows:
当困难场景类型为任一单一困难场景时,步骤104包括:When the difficult scene type is any single difficult scene, step 104 includes:
若困难场景类型为完全跌落场景,控制机器人离开当前困难场景直至检测到传感器触发的信号恢复正常;和/或If the difficult scene type is a complete fall scene, control the robot to leave the current difficult scene until the signal triggered by the sensor is detected to return to normal; and/or
若困难场景类型为局部跌落场景,评估当前困难场景的安全性,在确定非安全的情况下控制机器人离开当前困难场景直至检测到传感器触发的信号恢复正常,在确定安全的情况下控制机器人跨越跌落区域直至检测到传感器触发的信号恢复正常;和/或If the difficult scene type is a partial fall scene, the safety of the current difficult scene is evaluated; if it is determined to be unsafe, the robot is controlled to leave the current difficult scene until the signal triggered by the sensor is detected to return to normal, and if it is determined to be safe, the robot is controlled to cross the fall area until the signal triggered by the sensor is detected to return to normal; and/or
若困难场景类型为误触发场景或边缘变化场景,控制机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常。If the difficult scene type is a false trigger scene or an edge change scene, control the robot to rotate, move forward or backward until the signal triggered by the sensor is detected and returns to normal.
本发明实现了不同场景的不同脱困方法,对于完全跌落场景来说由于无法跨越,因此直接控制机器人离开;对于局部跌落场景来说需要结合周围环境信息具体分析,如果无法跨越则控制机器人离开;对于误触发场景或边缘变化场景来说,通过控制机器人旋转、前进或后退来使机器人恢复至平地并使得误触发的信号恢复正常。The present invention realizes different escape methods for different scenes. A complete fall scene cannot be crossed, so the robot is directly controlled to leave; a partial fall scene needs to be analyzed in combination with the surrounding environment information, and the robot is controlled to leave if the scene cannot be crossed; for a false trigger scene or an edge change scene, the robot is controlled to rotate, move forward or move backward so that it returns to flat ground and the falsely triggered signal returns to normal.
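The branching just described might be organised as in the following sketch, which repeats the per-scene action until the sensor signals return to normal; the `robot` interface, its method names and the attempt limit are all assumptions, and the `_FakeRobot` stub exists only to make the example executable.

```python
# Minimal sketch: execute the escape action for a single difficult scene until the
# sensor-triggered signals return to normal (or an attempt limit is reached).
def escape_single_scene(robot, scene_type, max_attempts=20):
    for _ in range(max_attempts):
        if robot.signals_normal():
            return True                      # escaped successfully
        if scene_type == "full_drop":
            robot.leave_scene()              # cannot be crossed: back away
        elif scene_type == "partial_drop":
            if robot.crossing_is_safe():
                robot.cross_gap()
            else:
                robot.leave_scene()
        else:                                # false trigger or edge change
            robot.rotate(15)                 # degrees, illustrative
            robot.nudge()                    # short forward/backward move
    return False                             # report failure after too many attempts

class _FakeRobot:
    """Toy stand-in used only to make the sketch executable."""
    def __init__(self): self._tries = 0
    def signals_normal(self): self._tries += 1; return self._tries > 3
    def leave_scene(self): pass
    def crossing_is_safe(self): return False
    def cross_gap(self): pass
    def rotate(self, deg): pass
    def nudge(self): pass

print(escape_single_scene(_FakeRobot(), "partial_drop"))  # True
```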
在一些实施例中,若困难场景类型为误触发场景,控制机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常具体包括:In some embodiments, if the difficult scene type is a false trigger scene, controlling the robot to rotate, move forward or backward until the signal triggered by the sensor is detected and returns to normal specifically includes:
根据周围环境信息控制机器人旋转预设角度;Control the robot to rotate the preset angle according to the surrounding environment information;
确定触发异常信号的传感器的位置,基于位置信息控制机器人前进或后退直至检测到传感器触发的信号恢复正常。Determine the position of the sensor that triggers the abnormal signal, and control the robot to move forward or backward based on the position information until the signal triggered by the sensor is detected and returns to normal.
在遇到由深色地毯或者长毛地毯等造成的误触发场景时,由于颜色或材质关系,传感器触发的信号可能会异常,可能会误认为当前处于楼梯或悬崖等位置,在这种情况下,首先控制机器人旋转预设角度,随后再控制机器人前行或后退。具体的,若后方传感器触发异常信号,那么可控制机器人前行一小段距离,若前方传感器触发异常信号,那么可控制机器人后退一小段距离。通过旋转、前行、后退等方式使得传感器误触发被解除。When a false trigger scene caused by a dark or long-pile carpet is encountered, the signal triggered by the sensor may be abnormal because of the colour or material, and the robot may mistakenly assume that it is at a staircase or a cliff. In this case, the robot is first controlled to rotate by a preset angle and then controlled to move forward or backward. Specifically, if a rear sensor triggers the abnormal signal, the robot can be controlled to move forward a short distance; if a front sensor triggers the abnormal signal, the robot can be controlled to move backward a short distance. The false triggering of the sensor is thus cleared by rotating, moving forward or moving backward.
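A minimal sketch of this recovery sequence is shown below as a function that returns motion commands; the sensor naming convention, the 30-degree preset angle and the 5 cm nudge distance are illustrative assumptions.

```python
# Minimal sketch: rotate by a preset angle, then nudge forward or backward depending on
# whether a rear or a front sensor produced the abnormal signal.
def false_trigger_commands(triggered_sensor, preset_angle_deg=30, nudge_m=0.05):
    """Return the ordered motion commands for clearing a suspected false trigger."""
    direction = +nudge_m if triggered_sensor.startswith("rear") else -nudge_m
    return [("rotate_deg", preset_angle_deg), ("move_m", direction)]

print(false_trigger_commands("front_left"))  # [('rotate_deg', 30), ('move_m', -0.05)]
```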
在一些实施例中,若困难场景类型为边缘变化场景,控制机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常包括:In some embodiments, if the difficult scene type is an edge change scene, controlling the robot to rotate, move forward or backward until the signal triggered by the sensor is detected and returns to normal includes:
根据周围环境信息控制机器人旋转预设角度;Control the robot to rotate the preset angle according to the surrounding environment information;
基于触发异常信号的传感器的位置确定机器人的悬空侧和非悬空侧;Determining the dangling and non-dangling sides of the robot based on the position of the sensor that triggered the anomaly signal;
按照悬空侧的轮子和非悬空侧的轮子各自对应的速度,控制机器人前进或后退直至检测到传感器触发的信号恢复正常。According to the corresponding speeds of the wheels on the suspended side and the wheels on the non-suspended side, control the robot to move forward or backward until the signal triggered by the sensor is detected and returns to normal.
在遇到地毯边缘等边缘变化场景时,根据触发异常信号的传感器的位置确定卷起位置,并通过不同的控制速度来控制机器人的两个轮子,直至机器人安全恢复至平地,整个过程更加平稳。When an edge change scene such as a carpet edge is encountered, the lifted position is determined according to the position of the sensor that triggered the abnormal signal, and the two wheels of the robot are driven at different control speeds until the robot safely returns to flat ground, making the whole process smoother.
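As a rough illustration, the sketch below chooses a wheel-speed pair from the dangling side; the speed values, and the choice of driving the grounded wheel faster, are assumptions rather than parameters given in the disclosure.

```python
# Minimal sketch: pick per-wheel speeds for recovering from a carpet-edge (edge change) scene.
def edge_change_wheel_speeds(dangling_side, slow=0.05, fast=0.12):
    """Return (left_speed, right_speed) in m/s so the lifted side is eased back down."""
    if dangling_side == "left":
        return (slow, fast)   # drive the grounded (right) wheel faster
    return (fast, slow)

print(edge_change_wheel_speeds("left"))  # (0.05, 0.12)
```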
当困难场景类型为多个单一困难场景的组合时,步骤104包括:When the difficult scenario type is a combination of multiple single difficult scenarios, step 104 includes:
控制机器人按照各困难场景的优先级别顺序执行各困难场景对应的脱困动作直至检测到传感器触发的信号恢复正常。The robot is controlled to perform the escape actions corresponding to each difficult scene in accordance with the priority order of each difficult scene until the signal triggered by the sensor is detected and returns to normal.
例如,若当前遇到楼梯和深色地毯场景的组合时,首先可以基于误触发场景使机器人旋转、前行或后退,在进行上述步骤后发现传感器触发的信号仍然异常,那么可以直接控制机器离开。For example, when a combination of a staircase and a dark carpet is encountered, the robot can first be rotated, moved forward or moved backward on the basis of the false trigger scene; if the signal triggered by the sensor is still abnormal after these steps, the robot can be directly controlled to leave.
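The priority-ordered handling could be expressed as in the following sketch; the priority values assigned to each scene type are assumptions made only for illustration.

```python
# Minimal sketch: attempt the escape actions of a scene combination in priority order.
SCENE_PRIORITY = {"false_trigger": 0, "edge_change": 1, "partial_drop": 2, "full_drop": 3}

def ordered_escape_plan(detected_scenes):
    """Return the scene types in the order their escape actions should be attempted."""
    return sorted(detected_scenes, key=SCENE_PRIORITY.get)

# A stairs + dark-carpet combination: try the false-trigger recovery first,
# and fall back to leaving the scene if the signals stay abnormal.
print(ordered_escape_plan({"full_drop", "false_trigger"}))  # ['false_trigger', 'full_drop']
```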
在一些实施例中,上述控制机器人离开当前困难场景,包括:In some embodiments, the control robot to leave the current difficult scene includes:
判断机器人是否能够旋转;Determine whether the robot can rotate;
当确定机器人无法旋转时,获取机器人后方安全侧的点云数据;When it is determined that the robot cannot rotate, obtain the point cloud data of the safe side behind the robot;
控制机器人基于安全侧的点云数据后退离开当前困难场景;Control the robot to retreat from the current difficult scene based on the point cloud data on the safe side;
当机器人后方不存在安全侧时,获取机器人进入当前困难场景的历史轨迹;When there is no safe side behind the robot, obtain the historical trajectory of the robot entering the current difficult scene;
控制机器人基于历史轨迹后退离开当前困难场景。Control the robot to retreat from the current difficult scene based on the historical trajectory.
其中,安全侧包括障碍物的边界。Wherein, the safe side includes the boundary of the obstacle.
由于市面上大部分机器人不存在后退算法,因此,在判断机器人需要离开时,首先可以确定机器人能否旋转,如果可以旋转,那么便可旋转相应角度后离开。然而,市面上除了圆形机,其实还有相当一部分机器是异形机,对于异形机来说,只有当其旋转半径小于通道半径时才能旋转。基于此,本方案增加了后退算法来控制机器人离开,并且,在离开时可以根据安全侧的点云数据行驶,保证机器人后退时的平稳性。Since most robots on the market have no reversing algorithm, when it is judged that the robot needs to leave, it is first determined whether the robot can rotate; if it can, it rotates by the corresponding angle and then leaves. However, besides circular robots, quite a few machines on the market are irregularly shaped, and such a machine can only rotate when its turning radius is smaller than the radius of the passage. For this reason, the present solution adds a reversing algorithm to control the robot to leave, and while leaving, the robot can travel according to the point cloud data of the safe side to ensure that it reverses smoothly.
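A minimal sketch of this retreat logic is given below; the decision inputs and the returned command labels are illustrative assumptions.

```python
# Minimal sketch: rotate if possible, otherwise reverse along the safe-side point cloud,
# or along the recorded entry trajectory when no safe side exists.
def plan_retreat(can_rotate, safe_side_points, entry_trajectory):
    if can_rotate:
        return ("rotate_then_leave", None)
    if safe_side_points:                      # follow the obstacle boundary while reversing
        return ("reverse_along_points", safe_side_points)
    return ("reverse_along_history", list(reversed(entry_trajectory)))

print(plan_retreat(False, [], [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3)]))
# ('reverse_along_history', [(1.0, 0.3), (0.5, 0.1), (0.0, 0.0)])
```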
本发明将传感器作为异常信号检测的开关,当传感器触发异常信号后,结合周围环境信息来确定机器人当前所处场景,并确定与当前场景对应的脱困策略进而完成脱困动作,相比于现有技术,本发明实现了不同场景的识别,基于不同场景对应的算法帮助机器人脱困,整个过程快速、高效、适应性强。The present invention uses the sensor as a switch for abnormal signal detection. When the sensor triggers the abnormal signal, it combines the surrounding environment information to determine the current scene of the robot, and determines the escape strategy corresponding to the current scene to complete the escape action. Compared with the prior art , the present invention realizes the identification of different scenes, and helps the robot get out of trouble based on algorithms corresponding to different scenes, and the whole process is fast, efficient and adaptable.
继续参见图5,作为对上述图1所示方法的实现,提供了一种机器人控制装置的一个实施例,该装置实施例与图1所示的方法实施例相对应,如图5所示,本实施例的机器人控制装置包括:Continuing to refer to FIG. 5, as an implementation of the method shown in FIG. 1 above, an embodiment of a robot control device is provided. This device embodiment corresponds to the method embodiment shown in FIG. 1. As shown in FIG. 5, the robot control device of this embodiment includes:
检测模块501:用于检测机器人行进时由传感器触发的信号;Detection module 501: used to detect the signal triggered by the sensor when the robot is moving;
处理模块502:用于当检测到传感器触发的信号处于异常时,结合实时获取的周围环境信息,确定机器人当前所处的困难场景类型;Processing module 502: used to determine the type of difficult scene where the robot is currently located in combination with the surrounding environment information acquired in real time when it is detected that the signal triggered by the sensor is abnormal;
获取模块503:用于基于困难场景类型获取相对应的脱困策略;Obtaining module 503: used to obtain the corresponding escape strategy based on the type of difficult scene;
控制模块504:用于控制机器人按照与困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常。Control module 504: used to control the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until the signal triggered by the sensor is detected and returns to normal.
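For illustration, the four modules could be wired together roughly as in the following sketch; the class, its method names and the toy stand-ins are assumptions and do not describe the actual implementation of the device.

```python
# Minimal sketch of composing the detection, processing, acquisition and control modules.
class RobotController:
    def __init__(self, detector, classifier, strategy_table, executor):
        self.detect = detector            # module 501: read sensor-triggered signals
        self.classify = classifier        # module 502: determine the difficult scene type
        self.strategies = strategy_table  # module 503: scene type -> escape strategy
        self.execute = executor           # module 504: run the escape action

    def step(self, environment):
        signal = self.detect()
        if signal.get("abnormal"):
            scene = self.classify(signal, environment)
            self.execute(self.strategies[scene])

# Example wiring with trivial stand-ins:
ctrl = RobotController(lambda: {"abnormal": True},
                       lambda s, e: "full_drop",
                       {"full_drop": "leave_scene"},
                       print)
ctrl.step(environment={})  # prints "leave_scene"
```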
在本实施例的一些可选的实现方式中,上述困难场景类型包括完全跌落场景、局部跌落场景、误触发场景、边缘变化场景中一种或者多种的组合。In some optional implementation manners of this embodiment, the aforementioned difficult scene types include a combination of one or more of a complete drop scene, a partial drop scene, a false trigger scene, and an edge change scene.
在本实施例的一些可选的实现方式中,上述传感器触发的信号包括机器人下方障碍物的下视高度以及用于指示机器人所在位置的跌落位置数据;上述处理模块502具体用于:In some optional implementations of this embodiment, the signal triggered by the above sensor includes the height of the obstacle below the robot and the drop position data used to indicate the position of the robot; the above processing module 502 is specifically used for:
获取下视高度分别与第一高度阈值和第二高度阈值比较后的比较结果;其中,第一高度阈值大于等于第二高度阈值;Obtain the comparison result after comparing the look-down height with the first height threshold and the second height threshold respectively; wherein, the first height threshold is greater than or equal to the second height threshold;
当下视高度大于第一高度阈值时,结合跌落位置数据、周围环境信息,确定当前困难场景为完全跌落场景;When the downward looking height is greater than the first height threshold, combined with the data of the falling position and the surrounding environment information, it is determined that the current difficult scene is a complete falling scene;
当下视高度小于第二高度阈值时,结合跌落位置数据、周围环境信息,确定当前困难场景为局部跌落场景。When the downward looking height is less than the second height threshold, the current difficult scene is determined to be a partial falling scene by combining the data of the falling position and the surrounding environment information.
在本实施例的一些可选的实现方式中,上述控制模块504具体用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is specifically configured to:
若困难场景类型为完全跌落场景,控制机器人离开当前困难场景直至检测到传感器触发的信号恢复正常;以及用于/或者用于If the difficult scene type is a complete fall scene, control the robot to leave the current difficult scene until the signal triggered by the sensor is detected to return to normal; and/or used for
若困难场景类型为局部跌落场景,评估当前困难场景的安全性,在确定非安全的情况下控制所述机器人离开当前困难场景直至检测到传感器触发的信号恢复正常;以及用于/或者用于If the type of difficult scene is a local drop scene, evaluate the safety of the current difficult scene, and control the robot to leave the current difficult scene when it is determined to be unsafe until the signal triggered by the sensor is detected and returns to normal; and/or used for
若困难场景类型为误触发场景或边缘变化场景,控制机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常。If the difficult scene type is a false trigger scene or an edge change scene, control the robot to rotate, move forward or backward until the signal triggered by the sensor is detected and returns to normal.
在本实施例的一些可选的实现方式中,上述控制模块504具体用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is specifically configured to:
当所述困难场景类型为多个单一困难场景的组合时,控制机器人按照各 困难场景的优先级别顺序执行各困难场景对应的脱困动作直至检测到传感器触发的信号恢复正常。When the difficult scene type is a combination of multiple single difficult scenes, the robot is controlled to perform the corresponding escape actions of each difficult scene in accordance with the priority order of each difficult scene until the signal triggered by the sensor is detected and returns to normal.
在本实施例的一些可选的实现方式中,上述控制模块504具体还用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is further specifically configured to:
若困难场景类型为误触发场景,根据周围环境信息控制机器人旋转预设角度;If the difficult scene type is a false trigger scene, control the robot to rotate the preset angle according to the surrounding environment information;
确定触发异常信号的传感器的位置,基于位置信息控制机器人前进或后退直至检测到传感器触发的信号恢复正常。Determine the position of the sensor that triggers the abnormal signal, and control the robot to move forward or backward based on the position information until the signal triggered by the sensor is detected and returns to normal.
在本实施例的一些可选的实现方式中,上述控制模块504具体还用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is further specifically configured to:
若困难场景类型为边缘变化场景,根据周围环境信息控制机器人旋转预设角度;If the difficult scene type is an edge change scene, control the robot to rotate the preset angle according to the surrounding environment information;
基于触发异常信号的传感器的位置确定机器人的悬空侧和非悬空侧;Determining the dangling and non-dangling sides of the robot based on the position of the sensor that triggered the anomaly signal;
按照悬空侧的轮子和非悬空侧的轮子各自对应的速度,控制机器人前进或后退直至检测到传感器触发的信号恢复正常。According to the corresponding speeds of the wheels on the suspended side and the wheels on the non-suspended side, control the robot to move forward or backward until the signal triggered by the sensor is detected and returns to normal.
在本实施例的一些可选的实现方式中,上述控制模块504具体还用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is further specifically configured to:
判断机器人是否能够旋转;Determine whether the robot can rotate;
当确定机器人无法旋转时,获取机器人后方安全侧的点云数据;When it is determined that the robot cannot rotate, obtain the point cloud data of the safe side behind the robot;
控制机器人基于安全侧的点云数据后退离开当前困难场景;Control the robot to retreat from the current difficult scene based on the point cloud data on the safe side;
其中,安全侧包括障碍物的边界。Wherein, the safe side includes the boundary of the obstacle.
在本实施例的一些可选的实现方式中,上述控制模块504具体还用于:In some optional implementation manners of this embodiment, the above-mentioned control module 504 is further specifically configured to:
当机器人后方不存在安全侧时,获取机器人进入当前困难场景的历史轨迹;When there is no safe side behind the robot, obtain the historical trajectory of the robot entering the current difficult scene;
控制机器人基于历史轨迹后退离开当前困难场景。Control the robot to retreat from the current difficult scene based on the historical trajectory.
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be understood that the sequence numbers of the steps in the above embodiments do not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
图6公开了本发明一实施例提供的一种机器人的示意图。如图6所示,机器人包括:存储器61、处理器62以及存储在存储器61中并可在处理器62上运行的计算机程序63,例如一种机器人控制方法的程序。处理器62执行计算机程序63时实现上述一种机器人控制方法实施例中的步骤,例如图1所示的步骤101至步骤104。或者,处理器62执行计算机程序63时实现上述一种机器人控制装置实施例中各模块的功能,例如图5所示模块501至504的功能。此外,上述机器人还包括测量元件64、运动单元65。FIG. 6 discloses a schematic diagram of a robot provided by an embodiment of the present invention. As shown in FIG. 6, the robot includes a memory 61, a processor 62, and a computer program 63 stored in the memory 61 and executable on the processor 62, for example a program implementing the robot control method. When the processor 62 executes the computer program 63, the steps in the above robot control method embodiment are implemented, for example steps 101 to 104 shown in FIG. 1. Alternatively, when the processor 62 executes the computer program 63, the functions of the modules in the above robot control device embodiment are implemented, for example the functions of the modules 501 to 504 shown in FIG. 5. In addition, the robot further includes a measuring element 64 and a motion unit 65.
测量元件64可以是雷达、传感器等;其中雷达可以为激光雷达或红外雷达,激光雷达可以是单线雷达或多线雷达。The measuring element 64 can be a radar, a sensor, etc.; wherein the radar can be a laser radar or an infrared radar, and the laser radar can be a single-line radar or a multi-line radar.
运动单元65用于控制机器人运动。The motion unit 65 is used to control the motion of the robot.
所述处理器62可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。The processor 62 can be a central processing unit (Central Processing Unit, CPU), and can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
所述存储器61可以是一种机器人的内部存储单元,例如一种机器人的硬盘或内存。所述存储器61也可以是一种机器人的外部存储设备,例如一种机器人上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(SecureDigital,SD)卡,闪存卡(Flash Card)等。进一步地,所述存储器61还可以既包括所述一种机器人的内部存储单元也包括外部存储设备。所述存储器61用于存储所述计算机程序以及所述一种机器人所需的其他程序和数据。所述存储器61还可以用于暂时地存储已经输出或者将要输出的数据。The memory 61 may be an internal storage unit of the robot, such as a hard disk or internal memory of the robot. The memory 61 may also be an external storage device of the robot, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the robot. Further, the memory 61 may include both an internal storage unit of the robot and an external storage device. The memory 61 is used to store the computer program and other programs and data required by the robot. The memory 61 may also be used to temporarily store data that has been output or is to be output.
本领域技术人员可以理解,图6仅仅是一种机器人的示例,并不构成对一种机器人的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述一种机器人还可以包括输入输出设备、网络接入设备、总线等。Those skilled in the art can understand that FIG. 6 is only an example of a robot and does not constitute a limitation on the robot; the robot may include more or fewer components than shown, or combine certain components, or use different components. For example, the robot may further include input/output devices, network access devices, a bus, and the like.
需要说明的是,上述装置/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。It should be noted that, since the information exchange and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and the technical effects they bring can be found in the method embodiment section and will not be repeated here.
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the device is divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现可实现上述各个方法实施例中的步骤。The embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
技术中的程序部分可以被认为是以可执行的代码和/或相关数据的形式而存在的"产品"或"制品",通过计算机可读的介质所参与或实现的。有形的、永久的储存介质可以包括任何计算机、处理器、或类似设备或相关的模块所用到的内存或存储器。例如,各种半导体存储器、磁带驱动器、磁盘驱动器或者类似任何能够为软件提供存储功能的设备。The program portion of the technology may be regarded as a "product" or "article of manufacture" existing in the form of executable code and/or associated data, which is embodied in or realized through a computer-readable medium. A tangible, permanent storage medium may include the memory or storage used by any computer, processor or similar device, or by a related module, for example various semiconductor memories, tape drives, disk drives, or any similar device capable of providing a storage function for software.
所有软件或其中的一部分有时可能会通过网络进行通信,如互联网或其他通信网络。此类通信可以将软件从一个计算机设备或处理器加载到另一个。因此,另一种能够传递软件元素的介质也可以被用作局部设备之间的物理连接,例如光波、电波、电磁波等,通过电缆、光缆或者空气等实现传播。用来载波的物理介质如电缆、无线连接或光缆等类似设备,也可以被认为是承载软件的 介质。在这里的用法除非限制了有形的“储存”介质,其他表示计算机或机器“可读介质”的术语都表示在处理器执行任何指令的过程中参与的介质。All or portions of the Software may from time to time communicate over a network, such as the Internet or other communication network. Such communications may load software from one computer device or processor to another. Therefore, another medium that can transmit software elements can also be used as a physical connection between local devices, such as light waves, radio waves, electromagnetic waves, etc., and can be transmitted through cables, optical cables, or air. The physical medium used for carrier waves, such as electrical cables, wireless links, or fiber optic cables, may also be considered software-carrying medium. As used herein, unless restricted to tangible "storage" media, other terms referring to computer or machine "readable media" mean media that participate in the execution of any instructions by a processor.
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be regarded as falling within the scope described in this specification.
以上实施例仅表达了本发明的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变形和改进,这些都属于本发明的保护范围。因此,本发明专利的保护范围应以所附权利要求为准。The above examples only express several implementation modes of the present invention, and the description thereof is relatively specific and detailed, but it should not be construed as limiting the scope of the patent for the invention. It should be noted that, for those skilled in the art, several modifications and improvements can be made without departing from the concept of the present invention, and these all belong to the protection scope of the present invention. Therefore, the protection scope of the patent for the present invention should be based on the appended claims.

Claims (13)

  1. 一种机器人控制方法,其特征在于,所述方法包括:A method for controlling a robot, characterized in that the method comprises:
    检测机器人行进时由传感器触发的信号;Detect signals triggered by sensors as the robot travels;
    当检测到传感器触发的信号处于异常时,结合实时获取的周围环境信息,确定所述机器人当前所处的困难场景类型;When it is detected that the signal triggered by the sensor is abnormal, combined with the surrounding environment information obtained in real time, determine the type of difficult scene where the robot is currently in;
    基于所述困难场景类型获取相对应的脱困策略;Obtaining a corresponding escape strategy based on the difficult scene type;
    控制所述机器人按照与所述困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常。The robot is controlled to execute the escape action according to the escape strategy corresponding to the difficult scene type until the signal triggered by the sensor is detected and returns to normal.
  2. 根据权利要求1所述的方法,其特征在于,所述困难场景类型包括完全跌落场景、局部跌落场景、误触发场景、边缘变化场景中一种或者多种的组合。The method according to claim 1, wherein the difficult scene types include a combination of one or more of complete drop scenarios, partial drop scenarios, false trigger scenarios, and edge change scenarios.
  3. 根据权利要求2所述的方法,其特征在于,所述传感器触发的信号包括所述机器人下方障碍物的下视高度以及用于指示所述机器人所在位置的跌落位置数据;The method according to claim 2, wherein the signal triggered by the sensor includes the height of the obstacle below the robot and the drop position data used to indicate the location of the robot;
    所述完全跌落场景和局部跌落场景的确定方法包括:The methods for determining the complete drop scenario and the partial drop scenario include:
    获取所述下视高度分别与第一高度阈值和第二高度阈值比较后的比较结果;其中,所述第一高度阈值大于等于所述第二高度阈值;Obtain the comparison result after comparing the down-looking height with the first height threshold and the second height threshold respectively; wherein, the first height threshold is greater than or equal to the second height threshold;
    当所述下视高度大于第一高度阈值时,结合所述跌落位置数据、周围环境信息,确定当前困难场景为完全跌落场景;When the look-down height is greater than the first height threshold, combined with the drop position data and surrounding environment information, determine that the current difficult scene is a complete drop scene;
    当所述下视高度小于第二高度阈值时,结合所述跌落位置数据、周围环境信息,确定当前困难场景为局部跌落场景。When the look-down height is less than the second height threshold, it is determined that the current difficult scene is a partial fall scene in combination with the drop position data and surrounding environment information.
  4. 根据权利要求2所述的方法,其特征在于,当所述困难场景类型为任一单一困难场景时,所述控制所述机器人按照与所述困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常,包括:The method according to claim 2, characterized in that, when the type of difficult scene is any single difficult scene, the robot is controlled to perform the action of getting out of trouble according to the strategy corresponding to the type of difficult scene until it detects Signals triggered by the sensor returned to normal, including:
    若所述困难场景类型为完全跌落场景,控制所述机器人离开当前困难场景直至检测到传感器触发的信号恢复正常;和/或If the difficult scene type is a complete fall scene, control the robot to leave the current difficult scene until the signal triggered by the sensor is detected to return to normal; and/or
    若所述困难场景类型为局部跌落场景,评估当前困难场景的安全性,在确定非安全的情况下控制所述机器人离开当前困难场景直至检测到传感器触 发的信号恢复正常;和/或If the type of difficult scene is a local drop scene, evaluate the safety of the current difficult scene, and control the robot to leave the current difficult scene if it is determined to be unsafe until the signal triggered by the sensor is detected and returns to normal; and/or
    若所述困难场景类型为误触发场景或边缘变化场景,控制所述机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常。If the type of difficult scene is a false trigger scene or an edge change scene, control the robot to rotate, move forward or backward until the signal triggered by the sensor is detected and returns to normal.
  5. 根据权利要求4所述的方法,当所述困难场景类型为多个单一困难场景的组合时,所述控制所述机器人按照与所述困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常,包括:According to the method according to claim 4, when the type of difficult scene is a combination of multiple single difficult scenes, the robot is controlled to perform an escape action according to the escape strategy corresponding to the type of difficult scene until a sensor trigger is detected The signals returned to normal, including:
    控制所述机器人按照各困难场景的优先级别顺序执行各困难场景对应的脱困动作直至检测到传感器触发的信号恢复正常。The robot is controlled to perform the escape actions corresponding to each difficult scene in accordance with the priority order of each difficult scene until it is detected that the signal triggered by the sensor returns to normal.
  6. 根据权利要求4或5所述的方法,其特征在于,若所述困难场景类型为误触发场景,控制所述机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常包括:The method according to claim 4 or 5, wherein if the type of difficult scene is a false trigger scene, controlling the robot to rotate, move forward or backward until the signal triggered by the sensor is detected to return to normal includes:
    根据周围环境信息控制所述机器人旋转预设角度;controlling the robot to rotate at a preset angle according to surrounding environment information;
    确定触发异常信号的传感器的位置,基于位置信息控制所述机器人前进或后退直至检测到传感器触发的信号恢复正常。Determine the position of the sensor that triggers the abnormal signal, and control the robot to move forward or backward based on the position information until the signal triggered by the detected sensor returns to normal.
  7. 根据权利要求6所述的方法,其特征在于,所述确定触发异常信号的传感器的位置,基于位置信息控制所述机器人前进或后退直至检测到传感器触发的信号恢复正常,包括:The method according to claim 6, wherein the determining the position of the sensor that triggers the abnormal signal, and controlling the robot to move forward or backward based on the position information until the signal triggered by the sensor is detected and returns to normal, comprises:
    当所述机器人的前方传感器触发异常信号时,控制所述机器人后退直至检测到传感器触发的信号恢复正常;和/或,When the front sensor of the robot triggers an abnormal signal, the robot is controlled to back up until the detected signal triggered by the sensor returns to normal; and/or,
    当所述机器人的后方传感器触发异常信号时,控制所述机器人前进直至检测到传感器触发的信号恢复正常。When the rear sensor of the robot triggers an abnormal signal, the robot is controlled to move forward until the detected signal triggered by the sensor returns to normal.
  8. 根据权利要求4-7中任一项所述的方法,其特征在于,若所述困难场景类型为边缘变化场景,控制所述机器人旋转、前行或后退直至检测到传感器触发的信号恢复正常包括:The method according to any one of claims 4-7, wherein if the difficult scene type is an edge change scene, controlling the robot to rotate, move forward or backward until the signal triggered by the sensor is detected to return to normal includes :
    根据周围环境信息控制所述机器人旋转预设角度;controlling the robot to rotate at a preset angle according to surrounding environment information;
    基于触发异常信号的传感器的位置确定机器人的悬空侧和非悬空侧;Determining the dangling and non-dangling sides of the robot based on the position of the sensor that triggered the anomaly signal;
    按照悬空侧的轮子和非悬空侧的轮子各自对应的速度,控制所述机器人 前进或后退直至检测到传感器触发的信号恢复正常。According to the respective speeds of the wheels on the suspended side and the wheels on the non-suspended side, the robot is controlled to move forward or backward until the signal triggered by the sensor is detected and returns to normal.
  9. 根据权利要求4-7中任一项的方法,其特征在于,所述控制所述机器人离开当前困难场景,包括:The method according to any one of claims 4-7, wherein the controlling the robot to leave the current difficult scene comprises:
    判断所述机器人是否能够旋转;judging whether the robot can rotate;
    当确定所述机器人无法旋转时,获取所述机器人后方安全侧的点云数据;When it is determined that the robot cannot rotate, acquiring point cloud data of the safe side behind the robot;
    控制所述机器人基于安全侧的点云数据后退离开当前困难场景;Controlling the robot to retreat from the current difficult scene based on the point cloud data on the safe side;
    其中,所述安全侧包括障碍物的边界。Wherein, the safe side includes the boundary of the obstacle.
  10. 根据权利要求9所述的方法,其特征在于,所述控制所述机器人离开当前困难场景还包括:The method according to claim 9, wherein the controlling the robot to leave the current difficult scene further comprises:
    当所述机器人后方不存在安全侧时,获取所述机器人进入当前困难场景的历史轨迹;When there is no safe side behind the robot, obtain the historical trajectory of the robot entering the current difficult scene;
    控制所述机器人基于历史轨迹后退离开当前困难场景。The robot is controlled to back away from the current difficult scene based on the historical trajectory.
  11. 一种机器人控制装置,其特征在于,所述装置包括:A robot control device, characterized in that the device comprises:
    检测模块:用于检测机器人行进时由传感器触发的信号;Detection module: used to detect the signal triggered by the sensor when the robot is moving;
    处理模块:用于当检测到传感器触发的信号处于异常时,结合实时获取的周围环境信息,确定机器人当前所处的困难场景类型;Processing module: used to determine the type of difficult scene that the robot is currently in when it detects that the signal triggered by the sensor is abnormal, combined with the surrounding environment information obtained in real time;
    获取模块:用于基于所述困难场景类型获取相对应的脱困策略;An acquisition module: used to acquire a corresponding escape strategy based on the type of difficult scene;
    控制模块:用于控制所述机器人按照与所述困难场景类型对应的脱困策略执行脱困动作直至检测到传感器触发的信号恢复正常。Control module: used to control the robot to execute the escape action according to the escape strategy corresponding to the difficult scene type until the signal triggered by the sensor is detected and returns to normal.
  12. 一种机器人,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1~10任意一项所述的方法。A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the computer program, the method according to any one of claims 1 to 10 is implemented.
  13. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1~10任意一项所述的方法。A computer-readable storage medium, the computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1-10 is implemented.
PCT/CN2022/137587 2022-01-27 2022-12-08 Robot control method and device, and robot and storage medium WO2023142711A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210098722.X 2022-01-27
CN202210098722.XA CN116551663A (en) 2022-01-27 2022-01-27 Robot control method, device, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2023142711A1 true WO2023142711A1 (en) 2023-08-03

Family

ID=87470347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137587 WO2023142711A1 (en) 2022-01-27 2022-12-08 Robot control method and device, and robot and storage medium

Country Status (2)

Country Link
CN (1) CN116551663A (en)
WO (1) WO2023142711A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108873878A (en) * 2017-06-22 2018-11-23 北京石头世纪科技有限公司 Autonomous robot and its control method, device, system and computer-readable medium
CN111195105A (en) * 2018-11-16 2020-05-26 北京奇虎科技有限公司 Abnormal processing method, device and equipment for sweeping robot and readable storage medium
CN111775151A (en) * 2020-06-28 2020-10-16 河南工业职业技术学院 Intelligent control system of robot
CN111984014A (en) * 2020-08-24 2020-11-24 上海高仙自动化科技发展有限公司 Robot control method, device, robot and storage medium
CN112526984A (en) * 2020-09-30 2021-03-19 深圳市银星智能科技股份有限公司 Robot obstacle avoidance method and device and robot
WO2021208530A1 (en) * 2020-04-14 2021-10-21 北京石头世纪科技股份有限公司 Robot obstacle avoidance method, device, and storage medium
CN113961007A (en) * 2021-10-22 2022-01-21 追觅创新科技(苏州)有限公司 Self-moving device, obstacle information acquisition method, and storage medium

Also Published As

Publication number Publication date
CN116551663A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
KR102670610B1 (en) Robot for airport and method thereof
CN110622085A (en) Mobile robot and control method and control system thereof
CN112415998B (en) Obstacle classification obstacle avoidance control system based on TOF camera
CN108052097B (en) Method for training heterogeneous sensing system and heterogeneous sensing system
US10350762B2 (en) Autonomously moving body, movement controlling method, and recording medium storing movement controlling program
CN113189614B (en) Obstacle recognition method, obstacle recognition device and storage medium
WO2021120999A1 (en) Autonomous robot
CN110403528A (en) A kind of method and system improving cleaning coverage rate based on clean robot
KR20190134969A (en) A plurality of robot cleaner and a controlling method for the same
KR102560462B1 (en) Moving robot
CN112386177A (en) Cleaning device motion control method, storage medium and cleaning device
CN116211168A (en) Operation control method and device of cleaning equipment, storage medium and electronic device
CN113786125B (en) Operation method, self-mobile device, and storage medium
WO2023142711A1 (en) Robot control method and device, and robot and storage medium
US20230116867A1 (en) Autonomous mobile device and control method thereof
CN114587210B (en) Cleaning robot control method and control device
JP6342169B2 (en) Object detection sensor and program
CN111880532B (en) Autonomous mobile device, method, apparatus, device, and storage medium thereof
CN113741441A (en) Operation method and self-moving equipment
WO2023142710A1 (en) Robot navigation method and apparatus, robot and storage medium
CN113531839A (en) Control method and device of cabinet air conditioner and cabinet air conditioner
CN112882472A (en) Autonomous mobile device
US20240197135A1 (en) Detection method for autonomous mobile device, autonomous mobile device and storage medium
CN113331744B (en) Traveling control method of cleaning robot, obstacle avoidance module and cleaning robot
US20230277024A1 (en) Cleaning robot and controlling method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923501

Country of ref document: EP

Kind code of ref document: A1