CN111618854A - Task segmentation and collaboration method for security robot

Task segmentation and collaboration method for security robot

Info

Publication number
CN111618854A
Authority
CN
China
Prior art keywords
robot
module
task
layer
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010457573.2A
Other languages
Chinese (zh)
Inventor
唐玉华
易伟
李明龙
蔡中轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202010457573.2A
Publication of CN111618854A
Legal status: Pending (current)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones
    • B25J9/1679: Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a task segmentation and collaboration method for a security robot. The method first segments the security robot's overall task and establishes a containment-based task collaboration model according to the priority of each task: when a higher-level task is triggered, it automatically preempts lower-level tasks to control the robot, and when the higher-level task is suspended or terminated, the lower-level tasks are reactivated. The method has a simple principle, strong environmental adaptability, and high control robustness.

Description

Task segmentation and collaboration method for security robot
Technical Field
The invention mainly relates to the technical field of robots, in particular to a task segmentation and collaboration method for a security robot.
Background
Robotics, as a multi-domain interdisciplinary subject, has emerged from the combined development of science and technology. In application fields such as industry, medicine, and exploration, and in dangerous environments such as underwater and outer space, robots are gradually replacing humans. Robotics advances the social economy; it is a natural consequence of the development of productivity and of the human need to extend our own abilities. The original purpose of developing robots was to free people from heavy, repetitive work and to replace them in hazardous environments, such as those with radiation. Robots were therefore first widely used in the automotive and nuclear industries. As robotics developed, robots spread to industrial applications such as welding, painting, transportation, assembly, and casting. Robots are also used in the military, marine exploration, aerospace, medicine, agriculture, and forestry, and even in the service and entertainment industries. By use, robots fall into three main categories: industrial robots, service robots, and special-purpose robots. Industrial robots work mainly on factory lines; for example, large robotic arms assist workers in assembling vehicles, and transfer robots handle and deliver goods. Service robots are closely tied to people's daily lives, such as restaurant waiter robots, stage performance robots, and hospital care robots. Special-purpose robots execute specific tasks in specific application scenarios, such as fire-fighting robots and natural-disaster search-and-rescue robots. Robots have thus been developed for and applied in many different fields.
The market-based auction method is one approach to task collaboration. It imitates human bidding behavior: the overall task is divided into subtasks, the subtasks are auctioned, and each subtask is assigned, on the highest-bidder principle, to one of the robot's task modules for execution; the method can be used in robot design. However, the auction method has several disadvantages. First, it is not adaptive, so fault-tolerant, adaptive task switching is difficult to achieve. Second, it requires a central arbitration node, so it lacks robustness. Finally, it depends heavily on module-to-module communication.
The finite-state-machine method is another approach to task collaboration. After the overall task is divided into subtasks, each subtask is defined as a state in a state machine, and predefined transition conditions trigger state transitions, thereby switching between subtasks and achieving subtask collaboration. However, if a state node fails, its process crashes; once the machine transitions into the failed node, no further transition can occur, the whole system suffers an avalanche collapse, and there is no fault tolerance.
The prior art has two shortcomings. On the one hand, it performs only simple, flat task segmentation, and the set of subtasks produced cannot support an adaptive software architecture design. On the other hand, it lacks an adaptive, fault-tolerant task collaboration framework to support the software design of a security robot. Robots designed with such techniques therefore have low robustness, poor adaptability to the environment, and low task-execution efficiency.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the technical problems in the prior art, the invention provides a task segmentation and collaboration method for a security robot that has a simple principle, strong environmental adaptability, and high control robustness.
In order to solve the technical problems, the invention adopts the following technical scheme:
A task segmentation and collaboration method for a security robot: first, the security robot's task is segmented, and a containment-based task collaboration model is established according to the priority of each task; when a higher-level task is triggered, it automatically preempts lower-level tasks to control the robot, and when the higher-level task is suspended or terminated, the lower-level tasks are reactivated.
As a further improvement of the method of the invention: the control outputs of the high-level tasks and the low-level tasks are coordinated by an elimination module and a replacement module.
As a further improvement of the method of the invention: the task is divided into the following six tasks; each task occupies one layer, and a lower-layer task has lower priority than a higher-layer task:
Layer 0: the robot's self-protection task: when a target approaches the robot, the robot avoids it;
Layer 1: a random-roaming movement task with obstacle avoidance;
Layer 2: an exploration task: the robot explores open areas and actively moves toward them;
Layer 3: a patrol task: the robot navigates among designated target points using a pre-built map;
Layer 4: a chase task: the robot detects a suspicious target, tracks it, and raises an alarm;
Layer 5: a manual task: the images seen by the robot are monitored on a remote terminal, from which suspicious targets can be designated and the robot's movement manually controlled.
As a further improvement of the method of the invention: in layer 0, the "perception" module is the driver module of the robot's vision sensor; it senses RGB-D information about the surrounding environment, i.e., a color image with depth information, and publishes two messages, a depth-image message and a color-image message. The "avoidance" module receives the depth-image message and judges the distance between surrounding obstacles and the robot; if the distance is below a set threshold, the robot escapes in the opposite direction by sending a forward message to the "speed smoothing" module, which smooths the forward-speed signal and passes it to the "chassis" driver, thereby controlling the robot's movement.
As a further improvement of the method of the invention: in layer 1, the "collision detection" module is the driver of a collision sensor; it senses collision information and sends a collision message when a collision occurs. The "roaming" module sends forward messages at random speeds, and the "obstacle-avoidance roaming" module fuses the two messages to compute the final forward message. To coordinate with the forward message of the layer-0 task, a replacement module is inserted into layer 0: whenever the layer-1 task produces a forward message, it replaces the output of the layer-0 "avoidance" module. This gives the robot the ability to execute the higher-level task, so that it performs random roaming with obstacle avoidance.
As a further improvement of the method of the invention: in layer 2, the "exploration" module receives the depth-image message sent by the "perception" module, explores open areas, and sends forward messages to control the robot's motion. A replacement module inserted into layer 1 replaces the output of the lower "roaming" module whenever layer 2 produces output, so that the robot performs the open-area roaming task.
As a further improvement of the method of the invention: in layer 3, the "patrol" module is responsible for sending target messages to the "navigation" module. Using a preset environment map, the "navigation" module receives the depth-image message and the movement-feedback message, navigates the robot to the target point, and sends forward messages; when the target point is reached, it sends an arrival message to the "patrol" module, which then sends the next target-point message. A replacement module inserted into layer 2 replaces the output of the "exploration" module, so that the robot performs the patrol task among the designated target points.
As a further improvement of the method of the invention: in layer 4, the "target" module is responsible for detecting suspicious targets in the color-image message and sending an alarm message to the "alarm" module, whereupon the robot immediately starts alarming through the "sounder" module; the "target" module also sends the suspicious target's bounding-box message to the "tracking" module. The "tracking" module receives the color-image message, tracks the suspicious target in the color images captured by the sensor, and sends the tracked bounding-box message to the "chase" module, which converts the target's position and size in the field of view into robot motion-control commands and sends forward messages, following the principle that a target appears larger when near and smaller when far. A replacement module inserted into layer 3 replaces the output of the "navigation" module, giving the robot the task of detecting and tracking suspicious targets and automatically overriding the lower-layer task when a suspicious target is found.
As a further improvement of the method of the invention: in layer 5, the "remote" module represents a manually operated remote computer terminal. The remote terminal communicates with the robot and runs a program whose interface displays the color image captured by the robot in real time; with a mouse, an operator can select in that image the target the robot should track, and the robot then tracks the designated target and displays it on the remote terminal. The operator can also toggle a button to control the robot manually, overriding the motion-control commands the robot issues itself. A replacement module inserted into layer 4 suppresses the output of the "tracking" module, and an elimination module inserted at the output of the layer-3 "navigation" module allows the robot's autonomous action to be terminated manually, so that a manual control task can be performed when a suspicious target is detected.
Compared with the prior art, the invention has the following advantages:
1. The task segmentation and collaboration method for a security robot is based on a bionic idea: task capabilities are graded into different behavior layers that contain and suppress one another layer by layer, and the task is segmented on this principle, providing support for an adaptive, robust software architecture. The invention designs the elimination module and the replacement module, organizes the task modules into different behavior layers for the adaptive collaboration of high-level and low-level tasks, and provides fault tolerance.
2. The task segmentation and collaboration method for a security robot disclosed by the invention contributes two points. The first is the layered task segmentation itself; the second is a hierarchical, adaptive software-framework design method based on the elimination module and the replacement module. A software system designed with this method is fault tolerant, with a certain tolerance for software or hardware errors: for example, when a software process crashes or a sensor fails, the robot as a whole can still work. The system is adaptive, reflected in its responsiveness to environmental changes and its ability to adjust its own behavior. The system is also robust and not prone to collapse: even if a software module crashes, the whole system does not fail in an avalanche and can still carry out other tasks.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
FIG. 2 is a schematic diagram of the functional layering of the present invention in a specific application example.
Detailed Description
The invention will be described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1, in the task segmentation and collaboration method for a security robot, the security robot's task is first segmented, and a containment-based task collaboration model is established according to the priority of each task: when a higher-level task is triggered, it automatically preempts lower-level tasks to control the robot, and when the higher-level task is suspended or terminated, the lower-level tasks are reactivated. To realize this collaboration between tasks, the invention designs an elimination module and a replacement module, which coordinate the control outputs of the high-level and low-level tasks.
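By way of illustration, the two coordination mechanisms can be sketched as small message gates. The following is a minimal Python sketch, not taken from the patent: the class names, the timeout parameter, and the timestamp-based activity test are illustrative assumptions. A replacement module substitutes the higher layer's message for the lower layer's while the higher layer is publishing; an elimination module simply discards the gated stream while its controlling layer is active.

    import time

    class ReplacementModule:
        """While the higher layer has published within `timeout` seconds,
        its message replaces the lower layer's; otherwise the lower layer's
        message passes through (the lower-level task is reactivated)."""
        def __init__(self, timeout=0.5):
            self.timeout = timeout
            self._high_msg = None
            self._high_stamp = float("-inf")

        def feed_high(self, msg):
            self._high_msg = msg
            self._high_stamp = time.monotonic()

        def feed_low(self, msg):
            if time.monotonic() - self._high_stamp < self.timeout:
                return self._high_msg   # higher-level task preempts
            return msg                  # higher layer idle: lower task resumes

    class EliminationModule:
        """While the controlling layer is active, the gated stream is
        dropped entirely instead of being replaced."""
        def __init__(self, timeout=0.5):
            self.timeout = timeout
            self._stamp = float("-inf")

        def inhibit(self):
            self._stamp = time.monotonic()

        def feed(self, msg):
            if time.monotonic() - self._stamp < self.timeout:
                return None             # output eliminated
            return msg

Because control reverts automatically once the higher layer stops publishing, for instance because its process has crashed, the lower layers are reactivated without any central arbitration node; this is the fault-tolerance property emphasized above.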
Specifically, the security robot's task is divided into the following six tasks; each occupies one layer, and a lower-layer task has lower priority than a higher-layer task (a sketch of the resulting priority stack follows the list):
Layer 0: the robot's self-protection task: when a target approaches the robot, the robot avoids it.
Layer 1: a random-roaming movement task with obstacle avoidance.
Layer 2: an exploration task: the robot explores open areas and actively moves toward them.
Layer 3: a patrol task: the robot navigates among designated target points using a pre-built map.
Layer 4: a chase task: the robot detects a suspicious target, tracks it, and raises an alarm.
Layer 5: a manual task: the images seen by the robot are monitored on a remote terminal, from which suspicious targets can be designated and the robot's movement manually controlled.
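A minimal Python sketch of this priority stack (the layer labels and the dictionary-based interface are illustrative assumptions): the highest layer currently producing a command controls the robot, and when it falls silent, control drops back to the next active layer.

    LAYERS = ["protect", "roam", "explore", "patrol", "chase", "manual"]  # layers 0..5

    def arbitrate(commands):
        """commands: dict mapping layer index -> motion command, or None
        when that layer is silent. The highest active layer wins."""
        for level in range(len(LAYERS) - 1, -1, -1):
            if commands.get(level) is not None:
                return level, commands[level]
        return None, None

    # Patrol (layer 3) controls the robot until a suspicious target
    # activates the chase layer (4); when chasing ends, patrol resumes.
    print(arbitrate({1: "wander", 3: "goto waypoint"}))
    print(arbitrate({1: "wander", 3: "goto waypoint", 4: "follow target"}))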
In a specific application example, the design of each layer is shown in fig. 2, where squares represent functional modules, arrows represent message passing between modules, and circles represent the elimination and replacement modules:
in the 0 th layer, a perception module is a driving module of a robot vision sensor and can perceive RGB-D information of the surrounding environment, namely a color image with depth information, the perception module respectively sends out two messages including a depth image message and a color image message, an avoidance module receives the depth image message and judges the distance between a surrounding obstacle and the robot, if the distance is smaller than a set threshold value, the robot escapes in the opposite direction, a forward message is sent to a speed smoothing module, a forward speed signal is smoothed and then sent to a chassis to drive, and the robot is controlled to move.
In layer 1, the "collision detection" module is the driver of a collision sensor; it senses collision information and sends a collision message when a collision occurs. The "roaming" module sends forward messages at random speeds, and the "obstacle-avoidance roaming" module fuses the two messages to compute the final forward message. To coordinate with the forward message of the layer-0 task, the invention inserts a replacement module into layer 0: whenever the layer-1 task produces a forward message, it replaces the output of the layer-0 "avoidance" module. This gives the robot the ability to execute the higher-level task, so that it performs random roaming with obstacle avoidance.
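A sketch of the layer-1 fusion (Python; the speeds and the escape maneuver are illustrative assumptions): the "obstacle-avoidance roaming" module passes the random wander command through until a collision message arrives, then backs off and turns away.

    import random

    def roam():
        """'Roaming' module: a forward message at a random speed and heading."""
        return (random.uniform(0.1, 0.4), random.uniform(-0.5, 0.5))

    def fuse(bumped, wander_cmd):
        """'Obstacle-avoidance roaming' module: fuse the collision message
        with the wander command into the final forward message."""
        if bumped:
            return (-0.2, 1.2)    # collision: reverse and spin away
        return wander_cmd

    print(fuse(False, roam()))    # free space: random wander
    print(fuse(True, roam()))     # bump sensor fired: escape maneuver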
In layer 2, the "exploration" module receives the depth-image message sent by the "perception" module, explores open areas, and sends forward messages to control the robot's motion. A replacement module inserted into layer 1 replaces the output of the lower "roaming" module whenever layer 2 produces output, so that the robot performs the open-area roaming task.
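The exploration behavior can be sketched as steering toward the most open sector of the depth image (Python with NumPy; the sector count, speeds, and gain are illustrative assumptions):

    import numpy as np

    def explore(depth_image, fov_rad=1.0):
        """'Exploration' module: score each angular sector of the depth image
        by its mean depth and steer toward the deepest (most open) one."""
        sectors = np.array_split(depth_image, 5, axis=1)
        scores = [float(np.nanmean(s)) for s in sectors]
        best = int(np.argmax(scores))
        angular = (best - 2) * (fov_rad / 5)   # offset from the center sector
        return (0.3, angular)                  # forward message toward open space

    frame = np.full((4, 10), 1.0)
    frame[:, 6:] = 4.0                         # open area on one side
    print(explore(frame))                      # steers toward the open side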
In layer 3, the "patrol" module is responsible for sending target messages to the "navigation" module. Using a preset environment map, the "navigation" module receives the depth-image message and the movement-feedback message, navigates the robot to the target point, and sends forward messages; when the target point is reached, it sends an arrival message to the "patrol" module, which then sends the next target-point message. A replacement module inserted into layer 2 replaces the output of the "exploration" module, so that the robot performs the patrol task among the designated target points.
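The patrol/navigation handshake reduces to a waypoint cycler (Python sketch; the class interface and the waypoints are illustrative assumptions): the "patrol" module issues one target at a time and advances on each arrival message.

    import itertools

    class Patrol:
        """'Patrol' module: hand the 'navigation' module one target point at
        a time; on an arrival message, issue the next, looping forever."""
        def __init__(self, waypoints):
            self._cycle = itertools.cycle(waypoints)
            self.current = next(self._cycle)

        def on_arrival(self):
            self.current = next(self._cycle)
            return self.current

    p = Patrol([(0, 0), (5, 0), (5, 5), (0, 5)])   # corners of a patrol loop
    print(p.current)        # first target message sent to 'navigation'
    print(p.on_arrival())   # arrival message received -> next target point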
In layer 4, the "target" module is responsible for detecting suspicious targets in the color-image message and sending an alarm message to the "alarm" module, whereupon the robot immediately starts alarming through the "sounder" module; the "target" module also sends the suspicious target's bounding-box message to the "tracking" module. The "tracking" module receives the color-image message, tracks the suspicious target in the color images captured by the sensor, and sends the tracked bounding-box message to the "chase" module, which converts the target's position and size in the field of view into robot motion-control commands and sends forward messages, following the principle that a target appears larger when near and smaller when far. A replacement module inserted into layer 3 replaces the output of the "navigation" module, giving the robot the task of detecting and tracking suspicious targets and automatically overriding the lower-layer task when a suspicious target is found.
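The near-large-far-small rule maps directly onto a proportional controller on the tracked bounding box (Python sketch; the gains, image width, and desired box area are illustrative assumptions):

    def chase(box, image_width=640, desired_area=20000.0):
        """'Chase' step: steer to center the target horizontally, and set the
        forward speed from its apparent size (a nearer target looks larger)."""
        x, y, w, h = box                              # tracked target frame
        cx = x + w / 2.0
        angular = -0.002 * (cx - image_width / 2.0)   # center the target
        linear = 0.00002 * (desired_area - w * h)     # small box = far = speed up
        linear = max(-0.5, min(0.5, linear))
        return (linear, angular)

    print(chase((400, 120, 80, 160)))   # target right of center, fairly close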
In layer 5, the "remote" module represents a manually operated remote computer terminal. The remote terminal communicates with the robot and runs a program whose interface displays the color image captured by the robot in real time; with a mouse, an operator can select in that image the target the robot should track, and the robot then tracks the designated target and displays it on the remote terminal. The operator can also toggle a button to control the robot manually, overriding the motion-control commands the robot issues itself. A replacement module inserted into layer 4 suppresses the output of the "tracking" module, and an elimination module inserted at the output of the layer-3 "navigation" module allows the robot's autonomous action to be terminated manually, so that a manual control task can be performed when a suspicious target is detected.
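Manual override can be sketched as a single routing gate (Python; the class, timeout, and command tuples are illustrative assumptions, and gating the layer-4 motion command rather than the raw tracker output is a simplification): while the operator publishes, the teleop command replaces the layer-4 output and the layer-3 navigation output is discarded.

    import time

    class Override:
        """Layer-5 gate: a teleop command published within the last `timeout`
        seconds replaces the chase command and eliminates the navigation
        command; when the operator stops, autonomy passes through again."""
        def __init__(self, timeout=0.5):
            self.timeout = timeout
            self.cmd = None
            self.stamp = float("-inf")

        def teleop(self, cmd):
            self.cmd = cmd
            self.stamp = time.monotonic()

        def route(self, chase_cmd, nav_cmd):
            if time.monotonic() - self.stamp < self.timeout:
                return self.cmd                # manual control wins
            if chase_cmd is not None:
                return chase_cmd               # layer 4 preempts layer 3
            return nav_cmd                     # autonomous patrol continues

    gate = Override()
    gate.teleop((0.2, 0.0))                    # operator takes over
    print(gate.route((0.5, -0.3), (0.4, 0.1))) # -> (0.2, 0.0)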
Through this six-layer task segmentation and layered structure, the robot can execute a complex security patrol function with suspicious-target detection, and when a high-level task is triggered, the outputs of the tasks at each level are coordinated automatically so that the high-level task is executed. The task segmentation also provides good fault tolerance. For example, if the RGB-D sensor is damaged and the color-image and depth-image messages can no longer be published, the robot's original navigation and tracking functions cannot be completed; under the layered task structure, however, the robot does not fail completely but degrades to the lower-level roaming task, which amounts to a basic patrol function at a reduced capability level. Likewise, if the collision sensor is damaged and cannot detect collisions, or the chassis driver module cannot feed movement information back to the "navigation" module, or even the "exploration" module runs abnormally and stops sending forward messages, the robot can still avoid approaching targets through the bottom-level "avoidance" module and fulfill its self-protection function. The robot system as a whole therefore has good adaptability and robustness.
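This degradation behavior follows from treating a silent layer as failed (Python sketch; the heartbeat interface and timeout are illustrative assumptions): when a layer's messages stop, arbitration simply falls back to the highest layer still publishing.

    import time

    class Watchdog:
        """Mark a layer alive while its messages keep arriving; a layer that
        stops publishing (crashed process, failed sensor) is treated as dead."""
        def __init__(self, timeout=1.0):
            self.timeout = timeout
            self.last = {}

        def heartbeat(self, layer):
            self.last[layer] = time.monotonic()

        def alive(self, layer):
            return time.monotonic() - self.last.get(layer, float("-inf")) < self.timeout

    wd = Watchdog()
    wd.heartbeat(1)     # roaming still publishes; navigation (layer 3) is silent
    active = [layer for layer in (3, 2, 1, 0) if wd.alive(layer)]
    print(active[0] if active else "all layers failed")   # -> 1: degrade to roaming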
The above is only a preferred embodiment of the invention, and the scope of protection is not limited to the embodiment described; all technical solutions within the idea of the invention belong to its scope of protection. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the invention, and these also fall within the scope of protection.

Claims (9)

1. A task segmentation and collaboration method for a security robot, characterized in that the security robot's task is first segmented and a containment-based task collaboration model is established according to the priority of each task; when a higher-level task is triggered, it automatically preempts lower-level tasks to control the robot, and when the higher-level task is suspended or terminated, the lower-level tasks are reactivated.
2. The task segmentation and collaboration method for a security robot according to claim 1, characterized in that the control outputs of the high-level tasks and the low-level tasks are coordinated through an elimination module and a replacement module.
3. The task segmentation and collaboration method for a security robot according to claim 1 or 2, characterized in that the task is segmented into the following six tasks; each task occupies one layer, and a lower-layer task has lower priority than a higher-layer task:
Layer 0: the robot's self-protection task: when a target approaches the robot, the robot avoids it;
Layer 1: a random-roaming movement task with obstacle avoidance;
Layer 2: an exploration task: the robot explores open areas and actively moves toward them;
Layer 3: a patrol task: the robot navigates among designated target points using a pre-built map;
Layer 4: a chase task: the robot detects a suspicious target, tracks it, and raises an alarm;
Layer 5: a manual task: the images seen by the robot are monitored on a remote terminal, from which suspicious targets can be designated and the robot's movement manually controlled.
4. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 0, the "perception" module is the driver module of the robot's vision sensor and senses RGB-D information about the surrounding environment, i.e., a color image with depth information; the "perception" module publishes two messages, a depth-image message and a color-image message; the "avoidance" module receives the depth-image message and judges the distance between surrounding obstacles and the robot; if the distance is below a set threshold, the robot escapes in the opposite direction by sending a forward message to the "speed smoothing" module, which smooths the forward-speed signal and passes it to the "chassis" driver, thereby controlling the robot's movement.
5. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 1, the "collision detection" module is the driver of a collision sensor; it senses collision information and sends a collision message when a collision is found; the "roaming" module sends forward messages at random speeds, and the "obstacle-avoidance roaming" module fuses the two messages to compute the final forward message; to coordinate with the forward message of the layer-0 task, a replacement module is inserted into layer 0, and whenever the layer-1 task produces a forward message it replaces the output of the layer-0 "avoidance" module, so that the robot can execute the higher-level task, namely random roaming with obstacle avoidance.
6. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 2, the "exploration" module receives the depth-image message sent by the "perception" module, explores open areas, and sends forward messages for robot motion control; a replacement module inserted into layer 1 replaces the output of the lower "roaming" module whenever layer 2 produces output, so that the robot performs the open-area roaming task.
7. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 3, the "patrol" module is responsible for sending target messages to the "navigation" module; using a preset environment map, the "navigation" module receives the depth-image message and the movement-feedback message, navigates the robot to the target point, and sends forward messages; when the target point is reached, it sends an arrival message to the "patrol" module, which then sends the next target-point message; a replacement module inserted into layer 2 replaces the output of the "exploration" module, so that the robot performs the patrol task among the designated target points.
8. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 4, the "target" module is responsible for detecting suspicious targets in the color-image message and sending an alarm message to the "alarm" module, whereupon the robot immediately starts alarming through the "sounder" module; the "target" module also sends the suspicious target's bounding-box message to the "tracking" module; the "tracking" module receives the color-image message, tracks the suspicious target in the color images captured by the sensor, and sends the tracked bounding-box message to the "chase" module, which converts the target's position and size in the field of view into robot motion-control commands and sends forward messages, following the principle that a target appears larger when near and smaller when far; a replacement module inserted into layer 3 replaces the output of the "navigation" module, giving the robot the task of detecting and tracking suspicious targets and automatically overriding the lower-layer task when a suspicious target is found.
9. The task segmentation and collaboration method for a security robot according to claim 3, characterized in that in layer 5, the "remote" module represents a manually operated remote computer terminal; the remote terminal communicates with the robot and runs a program whose interface displays the color image captured by the robot in real time; with a mouse, an operator can select in that image the target the robot should track, and the robot then tracks the designated target and displays it on the remote terminal; the operator can also toggle a button to control the robot manually, overriding the motion-control commands the robot issues itself; a replacement module inserted into layer 4 suppresses the output of the "tracking" module, and an elimination module inserted at the output of the layer-3 "navigation" module allows the robot's autonomous action to be terminated manually, so that a manual control task can be performed when a suspicious target is detected.
CN202010457573.2A 2020-05-26 2020-05-26 Task segmentation and collaboration method for security robot Pending CN111618854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457573.2A CN111618854A (en) 2020-05-26 2020-05-26 Task segmentation and collaboration method for security robot


Publications (1)

Publication Number Publication Date
CN111618854A true CN111618854A (en) 2020-09-04

Family

ID=72256177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457573.2A Pending CN111618854A (en) 2020-05-26 2020-05-26 Task segmentation and collaboration method for security robot

Country Status (1)

Country Link
CN (1) CN111618854A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3570008A (en) * 1963-12-31 1971-03-09 Bell Telephone Labor Inc Telephone switching system
CN105786605A (en) * 2016-03-02 2016-07-20 中国科学院自动化研究所 Task management method and system in robot
CN110058592A (en) * 2019-04-25 2019-07-26 重庆大学 A kind of mobile robot control method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112454369A (en) * 2021-01-27 2021-03-09 苏州盈科电子有限公司 Robot control method and device
CN113433941A (en) * 2021-06-29 2021-09-24 之江实验室 Multi-modal knowledge graph-based low-level robot task planning method
CN117260688A (en) * 2023-10-23 2023-12-22 北京小米机器人技术有限公司 Robot, control method and device thereof, and storage medium

Similar Documents

Publication Publication Date Title
CN111618854A (en) Task segmentation and collaboration method for security robot
AU2018271237B2 (en) Method for building a map of probability of one of absence and presence of obstacles for an autonomous robot
Chen et al. Occlusion-based cooperative transport with a swarm of miniature mobile robots
US8725273B2 (en) Situational awareness for teleoperation of a remote vehicle
US20240153314A1 (en) Engagement Detection and Attention Estimation for Human-Robot Interaction
US10035264B1 (en) Real time robot implementation of state machine
Bruemmer et al. Collaborative tools for mixed teams of humans and robots
CN113618731A (en) Robot control system
Mitsou et al. Visuo-haptic interface for teleoperation of mobile robot exploration tasks
Doki et al. AR video presentation using 3D LiDAR information for operator support in mobile robot teleoperation
CN107363831B (en) Teleoperation robot control system and method based on vision
Hietanen et al. Proof of concept of a projection-based safety system for human-robot collaborative engine assembly
Jing et al. Remote live-video security surveillance via mobile robot with raspberry pi IP camera
CN114888768A (en) Mobile duplex robot cooperative grabbing system and method based on multi-sensor fusion
Yang et al. Research into the application of AI robots in community home leisure interaction
Cleary et al. Canonical targets for mobile robot control by deictic visual servoing
Gustafson et al. Swarm technology for search and rescue through multi-sensor multi-viewpoint target identification
Park Supervisory control of robot manipulator for gross motions
Chen et al. Workspace Modeling: Visualization and Pose Estimation of Teleoperated Construction Equipment from Point Clouds
Kamezaki et al. Video presentation based on multiple-flying camera to provide continuous and complementary images for teleoperation
US20230384788A1 (en) Information processing device, information processing system, information processing method, and recording medium storing program
Jena et al. Chaos to Control: Human Assisted Scene Inspection
Liu et al. Laser slam-based autonomous navigation for fire patrol robots
Pieskä et al. Multilayered dynamic safety for high-payload collaborative robotic applications
Bhat et al. Night Vision Based Optimum Robot Path Planning in Rescue Operations

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200904)