CN109333532B - Patrol robot control method based on man-machine cooperation system - Google Patents

Patrol robot control method based on man-machine cooperation system

Info

Publication number
CN109333532B
CN109333532B (application CN201811186640.0A)
Authority
CN
China
Prior art keywords
task
robot
parameters
user
setting
Prior art date
Legal status
Active
Application number
CN201811186640.0A
Other languages
Chinese (zh)
Other versions
CN109333532A (en)
Inventor
邹林
刘旭
王颂
朱小舟
林清霖
刘国梁
李辉
Current Assignee
China South Power Grid International Co ltd
China Southern Power Grid Co Ltd
Original Assignee
CSG Electric Power Research Institute
China Southern Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by CSG Electric Power Research Institute and China Southern Power Grid Co Ltd
Priority to CN201811186640.0A
Publication of CN109333532A
Application granted
Publication of CN109333532B

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1679 - Programme controls characterised by the tasks executed
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 - Drag-and-drop
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 - Computing arrangements using knowledge-based models
    • G06N 5/04 - Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a patrol robot control method based on a man-machine cooperation system, comprising task-level and instruction-level control, with the following specific steps: at the task level, the user gives the robot system a task description; an inference mechanism based on the task description quantities derives the target state of the robot system, from which the robot's current characteristic parameters are obtained and stored in the background as target instructions. At the instruction level, the user plans the execution process of the task according to the feedback from the task-level planning, and the system judges the rationality of the user's plan in the background and gives prompts and a recommended task path. The method combines human planning and decision-making ability with the robot's autonomous planning, so that the robot can adapt to environmental changes; using the task-level and instruction-level parameters together makes the robot's motion trajectory more accurate and the completion of intelligent tasks more accurate and reliable.

Description

Patrol robot control method based on man-machine cooperation system
Technical Field
The invention relates to the field of robots, in particular to a patrol robot control method based on a man-machine cooperation system.
Background
At present, smart grid construction and the enhancement of power supply reliability have been elevated to a national strategy, and condition detection and monitoring of power equipment, an industry that has emerged in recent years, shows huge growth potential and room for development. Smart grid construction planning will greatly boost market demand for intelligent inspection robots and become a long-term driving force for the continued growth of the inspection robot industry. Demand for indoor robots can be analyzed by analogy with outdoor robots, which are mainly applied in substations. A substation is an electrical facility in a power system that transforms voltage, receives and distributes electrical energy, controls the flow of power, and regulates voltage. In addition, according to the Action Plan for Distribution Network Construction and Transformation (2015-2020) published by the National Energy Administration, domestic distribution automation coverage is to reach 90% by 2020. If 20% of automated distribution stations adopt intelligent inspection equipment, domestic annual demand for indoor robots will exceed 10,000 units within five years.
However, prior-art inspection robots all operate on parameters entered once by the user; when the field environment is complex and changeable, the robots often keep working on the old parameters without adapting to changes in the environment, so manual operation of the robots is neither complete nor reliable. Especially for inspection tasks on large key facilities, the requirements on the accuracy and safety of the inspection robot are higher. At present, it is difficult to fully guarantee the safety and reliability of inspection tasks executed without manual intervention, or the flexibility to correct errors in time.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a patrol robot control method based on a man-machine cooperation system, comprising task-level and instruction-level control and specifically comprising the following steps:
at the task level, the user gives the robot system a task description; an inference mechanism based on the task description quantities derives the target state of the robot system, from which the robot's current characteristic parameters are obtained and stored in the background as target instructions;
at the instruction level, the user plans the execution process of the task according to the feedback from the task-level planning, and the system judges the rationality of the user's plan in the background and gives a prompt and a recommended task plan;
after the user confirms the recommended task plan, the system controls the robot to execute the task.
Further, the user's task description to the robot system at the task level includes:
importing a scene file that contains the scene object model information and the inspection robot model information involved in the task, and constructing a robot simulation environment in a virtual space;
setting the task entities, which determine the physical devices involved in system operation;
and setting the task ontology, which distinguishes the qualitative and quantitative descriptions of the task.
Further, the step of the user setting the task entities for the robot includes:
setting the active entities: entities that have a degree of autonomy and can actively initiate interaction, identifying the devices that carry out the action;
and setting the passive entities: the objects operated on by the active entities, which cannot actively initiate interaction, identifying the devices the action is applied to.
Further, the step of the user setting the task ontology for the robot includes:
setting the task attributes, which present the multi-dimensional qualitative description and static constraints of the task: the task target, the task rules, and the current state;
and setting the task parameters, which describe the task quantitatively and significantly influence the task result once the task attributes are determined.
Further, the step of the user setting the task parameters for the robot includes:
setting the input parameters, which constrain the task execution process by limiting the movement speed, time, or path;
and setting the output parameters, which feed back the current state of the system and comprise the chassis motor angle, the end pose parameters, and the pan-tilt joint angle.
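To make the element model concrete, here is a minimal sketch of how the task entities, task attributes, and input/output parameters described above could be organized; all class and field names are assumptions for illustration, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TaskEntity:
    active: list   # devices that act autonomously and initiate interaction (e.g. the patrol robot)
    passive: list  # objects operated on, which never initiate interaction (e.g. an instrument)

@dataclass
class TaskAttributes:      # qualitative description: static constraints of the task
    goal: str              # task target
    rules: list            # task rules
    current_state: str     # current state

@dataclass
class TaskParameters:      # quantitative description of the task
    inputs: dict = field(default_factory=dict)   # constraints: movement speed, time, path
    outputs: dict = field(default_factory=dict)  # feedback: chassis motor angle, end pose, pan-tilt joint angle

@dataclass
class TaskOntology:        # distinguishes qualitative from quantitative description
    attributes: TaskAttributes
    parameters: TaskParameters
```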
Further, the imported scene file includes the task scene and the spatial layout:
the task scene is independent of the task target and consists of the external factors that affect task execution, including obstacles and space limitations;
the spatial layout is the set of spatial positions of the physical objects involved in task execution, including the standing-position coordinates of the robot, the initial position of the tool, the spatial position of the task object, and the positions of obstacles;
the relative positions of the entities in the task space information are described by homogeneous transformation matrices, as illustrated below.
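A homogeneous transformation matrix packs a rotation and a translation into a single 4x4 matrix, so relative poses can be composed by matrix multiplication. The following standard construction (not specific to the patent; the example coordinates are made up) shows how the task object's pose in the robot frame follows from two world-frame poses:

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# World-frame poses of the robot standing position and the task object:
T_world_robot = homogeneous(np.eye(3), np.array([2.0, 1.0, 0.0]))
T_world_object = homogeneous(np.eye(3), np.array([2.5, 1.2, 0.8]))

# Pose of the task object relative to the robot, by composition:
T_robot_object = np.linalg.inv(T_world_robot) @ T_world_object
```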
Further, the user's planning of the task execution process for the robot system at the instruction level includes:
setting the control modes, to which different motion instructions are issued so that the robot is controlled precisely and completes the specified task;
setting the control parameters, i.e. the parameters under each control mode, including specified joint angles and end pose parameters;
and setting the timing relation, which describes the order among the different control modes so that the robot executes in the planned sequence and its actions are accurate.
Furthermore, when the user sets the timing relation for the robot, it is divided into a parallel relation and a serial relation: control modes in a parallel relation can control the robot and issue execution instructions simultaneously, while control modes in a serial relation must be executed strictly in order.
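One simple way to represent such a timing relation in software is a serial list of stages, where each stage holds the control modes that may run in parallel; the structure below is a sketch under that assumption, with made-up mode names and values:

```python
# A plan is a serial sequence of stages; control modes inside one stage
# are in a parallel relation and may be issued simultaneously.
plan = [
    [  # stage 1 (serial with respect to stage 2)
        {"mode": "chassis_move", "params": {"speed": 0.5, "target": (2.0, 1.0)}},
    ],
    [  # stage 2: these two control modes are in a parallel relation
        {"mode": "pan_tilt", "params": {"joint_angle": 30.0}},
        {"mode": "end_pose", "params": {"pose": [0.4, 0.1, 0.9, 0.0, 0.0, 0.0]}},
    ],
]
```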
Further, the step in which the system judges the rationality of the user's plan in the background and gives a prompt and a recommended task path includes:
the system checks the reasonableness of each programming step against the parameters the user enters at the instruction level and judges whether the plan is reasonable;
if it is judged unreasonable, a prompt and a recommended task path are given and fed back to the user, and the judgment is executed again after the user resets the parameters;
if it is judged reasonable, the edited parameters are stored in the background;
the system then judges whether all parameters required by the task have been set;
if not, the user is prompted to drag a control mode into the planning area, set its parameters, and connect it with the preceding and following modes; after the user finishes setting the parameters, the system continues the rationality judgment;
if all parameters are judged to be set, the parameters stored in the background form a task flow file.
Further, the system judges the rationality of the plan by checking that the parameters edited by the user gradually approach the target instruction parameters, as sketched below.
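Putting the checking steps above together, the background loop might look like the following sketch; every method on `system` is a hypothetical stand-in for the patent's background components, not a real API:

```python
def plan_instruction_level(system, target_params):
    """Background planning loop: check each user step, prompt on failure,
    and form the task flow file once all parameters are set (assumed API)."""
    saved = []
    while not system.all_parameters_set():
        # The user drags a control mode into the planning area and edits its parameters.
        step = system.next_user_step()
        if system.approaches_target(step, saved, target_params):   # rationality check
            saved.append(step)                                     # store in the background
        else:
            # Unreasonable: give a prompt and a recommended task path, then re-judge.
            system.prompt(step, system.recommended_path(target_params))
    return system.write_task_flow(saved)
```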
Compared with the prior art, the invention has the following beneficial effects:
1) The work content is divided according to the patrol characteristics of the patrol robot, the respective strengths of human and robot are brought into full play, the cooperative patrol content is clearly defined, and under-patrolling or over-patrolling is avoided.
2) Fusion with the production business closes the business loop between the condition-based maintenance system for power transmission and transformation equipment and the production management system; specifically, tasks are classified by inspection equipment and inspection content, task feedback is realized while the business loop is maintained, and task scheduling and distribution are optimized.
3) The service flow and information flow of 'inspection task issuing', 'real-time analysis of inspection data' and 'automatic generation of maintenance strategies' are integrated, and a man-machine cooperation information model is established.
4) Human planning and decision-making ability is combined with the robot's autonomous planning, so that the robot can adapt to environmental changes; using the task-level and instruction-level parameters together makes the robot's motion trajectory more accurate and the completion of intelligent tasks more accurate and reliable.
Drawings
FIG. 1: structure of the system based on man-machine cooperation;
FIG. 2: steps of the control method in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a patrol robot control method based on a man-machine cooperation system, comprising task-level and instruction-level control, with the following specific steps:
(1) importing the task model: a scene file is imported into the virtual space to construct the robot simulation environment;
(2) inputting information such as the task entities and task ontology parameters, and searching the background database to generate the target instruction parameters;
(3) by analyzing the data input by the user, the robot system autonomously infers the target state on task completion and feeds it back to the user;
(4) the user autonomously plans the execution flow of the task by dragging control modes and editing action parameters;
(5) the robot system judges the rationality of the user's plan against the entered parameters in the background; if they are unreasonable, it feeds back a prompt to re-enter the action parameters; if they are reasonable, it gives a prompt and a recommended task path;
(6) after the parameters are judged reasonable, the user stores the edited parameters in the background, where they are saved as XML files forming a task flow file (an example follows);
(7) the inspection robot reads the task flow file and executes the actions.
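The patent specifies XML as the storage form of the task flow file but no schema; a hypothetical file matching steps (4) to (6) might look like this:

```xml
<!-- Hypothetical schema for illustration only; element and attribute
     names are assumptions, not defined by the patent. -->
<taskflow name="substation_patrol">
  <step order="1" mode="chassis_move">
    <param name="speed" value="0.5"/>
    <param name="target" value="2.0,1.0"/>
  </step>
  <step order="2" mode="pan_tilt" timing="parallel">
    <param name="joint_angle" value="30.0"/>
  </step>
  <step order="2" mode="end_pose" timing="parallel">
    <param name="pose" value="0.4,0.1,0.9,0,0,0"/>
  </step>
</taskflow>
```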
As shown in fig. 1 and fig. 2, a control task is divided into a task level and an instruction level. Task-level planning is macro-planning of the task at a certain level of abstraction and does not involve the robot's low-level motion instructions, whereas instruction-level planning is the integration of instructions the robot can execute. The two levels are not isolated: task-level planning guides and constrains instruction-level planning, and instruction-level planning is the concrete implementation of task-level planning at a finer granularity.

At the task level, the user is responsible for entering his intentions into the system, expressing the task to be executed and the related constraints; by analyzing these intentions, the robot system autonomously infers the target state on task completion and feeds it back to the user. The user describes the tasks and scenes to be executed to the robot system, specifies the task objects and task purposes, and constructs a virtual scene from the scene file; an inference mechanism based on the task description quantities derives the target state of the robot system, from which the robot's current characteristic parameters are obtained and stored in the background as target instructions.

At the instruction level, the user plans the execution process of the task according to the feedback from the task-level planning, and the system judges the rationality of the user's plan in the background and gives a prompt and a recommended task path; the whole instruction-level planning process is completed through the cooperation of the user and the robot system. The user combines control modes, sets the control parameters, and connects the control modes in order; by planning each control mode the user issues intermediate motion instructions to the robot, and through instruction analysis and prejudgment the system comprehensively analyzes the intermediate instructions, the target instructions, and the current state of the robot system, and judges the rationality of the intermediate instructions. Reasonable means that every intermediate instruction should, in some measure, keep trending toward the target instruction; otherwise the planning result is unreasonable. When the judgment is reasonable, the robot can execute the motion instructions.
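The prejudgment criterion, that every intermediate instruction keeps trending toward the target instruction, can be read as a monotone-decrease condition on some distance to the target. A minimal sketch under that reading (the distance measure is an assumption):

```python
import numpy as np

def intermediate_is_reasonable(intermediate, previous, target):
    """Reasonable if the new intermediate instruction parameters are closer
    to the target instruction parameters than the previous ones were."""
    d_new = np.linalg.norm(np.asarray(target) - np.asarray(intermediate))
    d_old = np.linalg.norm(np.asarray(target) - np.asarray(previous))
    return d_new < d_old
```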
In an embodiment, the task entities comprise the active and passive entities in the task. The robot, as the active entity in the task system, interacts with the passive entities under the control of the controller and changes their states or attributes. The object operated on by the robot is a passive entity; when the robot executes a patrol task, for example, the instrument is the passive entity. The task ontology comprises two element sets, the task parameters and the task attributes, and serves to distinguish the qualitative and quantitative descriptions of the task. The task attributes are the inherent characteristics of the task and represent its multi-dimensional qualitative description and static constraints, including the task target, the task rules, and the current state. The task parameters are the quantitative description of the task, comprising input and output parameters: quantities that significantly influence the task result once the task attributes are determined, such as movement speed and movement time. The input parameters constrain the task execution process, limiting the movement speed, time, or path; the output parameters feed back the current state of the system and comprise the chassis motor angle, the end pose parameters, and the pan-tilt joint angle. The task space comprises two sets, the task scene and the spatial layout. The task scene is independent of the task target and consists of the external factors that affect task execution, such as obstacles and space limitations. The spatial layout is the set of spatial positions of the physical objects involved in task execution, including the standing-position coordinates of the robot, the initial position of the tool, the spatial position of the task object, and the positions of obstacles. All task space information is recorded in the scene file, and the relative positions of the entities are described by homogeneous transformation matrices.
Task-level planning obtains, from the macroscopic task parameters input into the system, the target instruction-level parameters capable of completing the task, and so provides the basis for instruction-level planning. It takes the three task element sets (task entity, task ontology, and task space) as the input quantity, denoted by the symbol >t, and the state parameters of the manipulator on task completion as the final output quantity, denoted by the symbol t<, specifically:

>t = E ∪ B ∪ L;  t< = {JA, EP, FJ},

in the formula: E, B and L are respectively the task entity, the task ontology, and the task space; JA, EP and FJ respectively represent the chassis motor angle, the end pose parameters, and the pan-tilt joint angle instruction.
The robot system stores the task parameters input by the user in the background in the language of predicate logic; by parsing the predicate logic it obtains the positions of all entities in the scene from the scene file, and computes quantities such as the manipulator joint angles from the position of the task object. Taking the simple task of reading a patrol instrument as an example, the predicate logic expresses the reading action; analysis of this action determines that the motion bodies involved are the chassis and the pan-tilt head, and the pose of the patrol robot on the map and the pan-tilt angle are then obtained from the position information set.
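A predicate-logic task description and its resolution against the scene file could be sketched as follows; the predicate name, entity identifiers, and coordinates are all invented for illustration:

```python
# Predicate logic stored in the background: Read(robot, instrument)
task = ("Read", "patrol_robot", "instrument_03")

# Entity positions parsed from the scene file:
scene = {
    "patrol_robot": (2.0, 1.0, 0.0),
    "instrument_03": (2.5, 1.2, 0.8),
}

predicate, agent, obj = task
agent_pos, obj_pos = scene[agent], scene[obj]
# From these positions the robot's pose on the map and the pan-tilt
# angle needed to read the instrument can be computed (see the
# inference sketch above).
```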
Instruction-level planning adopts a modular planning mode, with the planning modules divided according to the robot's different motion instruction types. The instruction-level plan is expressed as:

t = {C, P, S},

in the formula: C is the set of control modes; P is the set of parameters corresponding to the control modes; and S is the timing relation among the control modes.
In an embodiment, joint angle control requires the user to input the specified joint angles, and end pose control requires the specified end pose parameters. Since a single control mode can achieve only limited functionality, multiple control modes must be combined to accomplish the task. The timing relation describes the order among the different control modes and is roughly divided into a parallel and a serial relation: control modes in a parallel relation can control the robot and issue execution instructions simultaneously, while control modes in a serial relation must be executed in sequence.
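Executing t = {C, P, S} then amounts to walking the serial stages in order and issuing the control modes of each stage together; a runnable sketch, with `dispatch` standing in for the real robot interface:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(mode, params):
    """Stand-in for issuing one motion instruction to the robot."""
    print(f"issue {mode} with {params}")

def execute_plan(plan):
    for stage in plan:                      # serial relation: stages run strictly in order
        with ThreadPoolExecutor() as pool:  # parallel relation: steps of one stage run together
            list(pool.map(lambda s: dispatch(s["mode"], s["params"]), stage))

execute_plan([
    [{"mode": "chassis_move", "params": {"speed": 0.5}}],
    [{"mode": "pan_tilt", "params": {"joint_angle": 30.0}},
     {"mode": "end_pose", "params": {"pose": [0.4, 0.1, 0.9]}}],
])
```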
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.

Claims (9)

1. A patrol robot control method based on a man-machine cooperation system is characterized by comprising task level and instruction level control, and specifically comprising the following steps:
a user carries out task description on a robot system at a task level, obtains a target state of the robot system when a task is completed through an inference mechanism based on task description quantity, and obtains current characteristic parameters of the robot from the target state to be stored in a background as target instructions; the task description quantity comprises user intentions, wherein the user intentions comprise tasks to be executed, task scenes, task objects and task purposes;
in the instruction level, a user plans the execution process of a task by himself according to a feedback result in task level planning, the system judges the rationality of user planning in the background and gives a prompt and a recommended task plan; the rationality refers to the fact that the user plan should in some way be continually trending towards the target instructions; after the user confirms the recommended task plan, the system controls the robot to execute the task;
the task level planning method comprises the following steps of inputting macroscopic task parameters into a system to obtain target instruction level parameters capable of completing tasks, providing basis for the instruction level planning, taking three task element sets of a task entity, a task body and a task space as input quantities, expressing the input quantities by using a logic symbol > t, taking state parameters of a mechanical arm when the tasks are completed as final output quantities, expressing the output quantities by using the logic symbol t < and specifically expressing the output quantities as follows:
>t=E∪B∪L;t<={JA,EP,FJ},
in the formula: e, B and L are respectively a task entity, a task body and a task space; JA, EP and FJ respectively represent the angle of a chassis motor, a terminal pose parameter and a tripod head joint angle instruction;
the execution process of the planning task performed by the user to the robot system at the instruction level comprises the following steps:
setting a control mode, and issuing different motion instructions to various control modes to realize accurate control of the robot so as to complete a specified task;
setting control parameters, namely setting parameters in different control modes, wherein the parameters comprise input of specified joint angles and terminal pose parameters;
and setting a time sequence relation for describing the sequence among different control modes, so that the robot executes according to the planned sequence to ensure the accuracy of actions.
2. The patrol robot control method according to claim 1, wherein the user performs task description to the robot system at a task level, and the task description comprises:
importing a scene file containing scene object model information and inspection robot model information related in a task, and constructing a robot simulation environment in a virtual space;
setting a task entity for determining an entity device of system operation;
and setting a task ontology, distinguishing the qualitative description and quantitative description of the task.
3. The inspection robot control method according to claim 2, the step of the user setting a task entity to the robot, comprising:
setting an active entity, setting an entity which has a certain degree of autonomy and can actively send out interaction so as to make clear a specific implementation device;
and setting a passive entity, namely setting an object operated by the active entity and an entity which can not actively send out interaction by the active entity so as to clearly implement the device of the action.
4. The inspection robot control method according to claim 2, wherein the step of the user setting a task ontology for the robot includes:
setting task attributes, presenting multi-dimensional qualitative description and static constraint conditions of a task, and setting a task target, a task rule and a current state;
and setting task parameters, quantitatively describing the task, which significantly influence the task result on the premise of determining the task attributes.
5. The inspection robot control method according to claim 4, the step of the user setting the task parameters for the robot includes:
setting input parameters, restricting the task execution process, and limiting the movement speed, time or path;
and setting output parameters, feeding back the current state of the system, wherein the output parameters comprise the chassis motor angle, the end pose parameters and the pan-tilt joint angle.
6. The inspection robot control method according to claim 2, wherein the imported scene file includes a task scene and a spatial layout:
the task scene is irrelevant to the task target and is an external factor which influences the execution process of the task, wherein the external factor comprises obstacles and space limitation;
the spatial layout is an information set of the spatial position of an entity object involved in task execution, and comprises a standing position coordinate of the robot, an initial position of a tool, the spatial position of a task object and the position of an obstacle;
the relative position relation of each entity in the task space information is described by a homogeneous transformation matrix.
7. The inspection robot control method according to claim 1, wherein the user sets a time sequence relationship for the robot, the time sequence relationship is divided into a parallel relationship and a serial relationship, the control mode in the parallel relationship can control the robot and issue an execution instruction at the same time, and the control mode in the serial relationship must be executed in sequence.
8. The inspection robot control method according to claim 1, wherein the step of the system judging the reasonableness of user planning in the background and giving a prompt and recommended task path includes:
the system checks the programming reasonability of each step according to the parameters input by the user on the instruction level, and simultaneously makes a judgment on whether the programming is reasonable or not;
if the judgment is not reasonable, prompting and recommended task paths are given, and the prompting and recommended task paths are fed back to the user to execute the judgment again after the parameters are reset;
if the judgment is reasonable, storing the edited corresponding parameters into a background;
the system judges whether all parameters required by the task are set or not;
if the setting is not finished, prompting the user to drag the control mode to the planning area for parameter setting and connecting the front mode and the rear mode, and after the user finishes the parameter setting, continuing to execute the judgment action of the planning rationality by the system;
and if the setting is judged to be finished, all the parameters stored in the background are stored to form a task flow file.
9. The patrol robot control method according to claim 8, wherein the rationality of the plan is judged by the system based on that the parameter edited by the user should gradually approach the target instruction parameter.
CN201811186640.0A 2018-10-12 2018-10-12 Patrol robot control method based on man-machine cooperation system Active CN109333532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811186640.0A CN109333532B (en) 2018-10-12 2018-10-12 Patrol robot control method based on man-machine cooperation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811186640.0A CN109333532B (en) 2018-10-12 2018-10-12 Patrol robot control method based on man-machine cooperation system

Publications (2)

Publication Number Publication Date
CN109333532A CN109333532A (en) 2019-02-15
CN109333532B true CN109333532B (en) 2022-05-06

Family

ID=65309159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811186640.0A Active CN109333532B (en) 2018-10-12 2018-10-12 Patrol robot control method based on man-machine cooperation system

Country Status (1)

Country Link
CN (1) CN109333532B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110379247B (en) * 2019-07-19 2021-12-07 武汉理工大学 Multitask and multi-role ROV (remote operated vehicle) collaborative training simulation system and method
CN110647110B (en) * 2019-08-19 2023-04-28 广东电网有限责任公司 Inspection instruction generation system and method for power grid dispatching inspection robot
CN110716567A (en) * 2019-10-18 2020-01-21 上海快仓智能科技有限公司 Mobile equipment and control method and control device thereof
CN111399513B (en) * 2020-03-27 2023-09-19 拉扎斯网络科技(上海)有限公司 Robot motion planning method, apparatus, electronic device and storage medium
CN112053050A (en) * 2020-08-27 2020-12-08 北京云迹科技有限公司 Assessment method, device and system suitable for cooperation efficiency between robots
CN113340231B (en) * 2021-05-21 2024-05-24 武汉中观自动化科技有限公司 Scanner automatic control system and method
CN115577775A (en) * 2022-09-06 2023-01-06 华南理工大学 System success path planning method, system, equipment and medium for target function realization

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102645220A (en) * 2012-05-21 2012-08-22 诚迈科技(南京)有限公司 Intelligent trip mode real-time planning recommendation method
CN103971311A (en) * 2014-05-09 2014-08-06 北京化工大学 Reasoning drill method and system based on man-machine coordination
US9880553B1 (en) * 2015-04-28 2018-01-30 Hrl Laboratories, Llc System and method for robot supervisory control with an augmented reality user interface
CN105892994B (en) * 2016-04-05 2018-04-24 东南大学 A kind of mobile robot mission planning is with performing abnormal conditions processing method and processing device
CN106003050B (en) * 2016-07-13 2018-03-09 广东奥讯智能设备技术有限公司 A kind of implementation method based on dining room service robot man-machine interface

Also Published As

Publication number Publication date
CN109333532A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109333532B (en) Patrol robot control method based on man-machine cooperation system
Lv et al. A digital twin-driven human-robot collaborative assembly approach in the wake of COVID-19
He et al. Digital twin-based sustainable intelligent manufacturing: a review
Zhang et al. A reinforcement learning method for human-robot collaboration in assembly tasks
Kanazawa et al. Adaptive motion planning for a collaborative robot based on prediction uncertainty to enhance human safety and work efficiency
Tabuada et al. Motion feasibility of multi-agent formations
CN111210184B (en) Digital twin workshop material on-time distribution method and system
Luo et al. Human–robot shared control based on locally weighted intent prediction for a teleoperated hydraulic manipulator system
Luders et al. Bounds on tracking error using closed-loop rapidly-exploring random trees
Aehnelt et al. Information Assistance for Smart Assembly Stations.
Ma et al. Can robots replace human beings?—Assessment on the developmental potential of construction robot
Zhao et al. Dynamic and unified modelling of sustainable manufacturing capability for industrial robots in cloud manufacturing
Zheng et al. Knowledge-based program generation approach for robotic manufacturing systems
Fusaro et al. A human-aware method to plan complex cooperative and autonomous tasks using behavior trees
Liu et al. Digital twin-driven robotic disassembly sequence dynamic planning under uncertain missing condition
Escobar-Naranjo et al. Applications of Artificial Intelligence Techniques for trajectories optimization in robotics mobile platforms
Wang et al. Digital twin-based design and operation of human-robot collaborative assembly
Li et al. A unified trajectory planning and tracking control framework for autonomous overtaking based on hierarchical mpc
Zhang et al. A digital twin-driven flexible scheduling method in a human–machine collaborative workshop based on hierarchical reinforcement learning
Zhu et al. Automatic Control System Design for Industrial Robots Based on Simulated Annealing and PID Algorithms
Wang et al. GMAW welding procedure expert system based on machine learning
Cubuktepe et al. Shared control with human trust and workload models
Rani et al. A Human–Machine Interaction Mechanism: Additive Manufacturing for Industry 5.0—Design and Management
Kang et al. Application of PID control and improved ant colony algorithm in path planning of substation inspection robot
Wu et al. Production automation and financial cost control based on intelligent control technology in sustainable manufacturing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191213

Address after: 510000 Guangdong city of Guangzhou province Luogang District Science City Kexiang Road No. 11

Applicant after: CHINA SOUTHERN POWER GRID Co.,Ltd.

Applicant after: China South Power Grid International Co.,Ltd.

Address before: 510000 3 building, 3, 4, 5 and J1 building, 11 building, No. 11, Ke Xiang Road, Luogang District Science City, Guangzhou, Guangdong.

Applicant before: China South Power Grid International Co.,Ltd.

Applicant before: SINOMACH INTELLIGENCE TECHNOLOGY CO.,LTD.

GR01 Patent grant