Patrol robot control method based on man-machine cooperation system
Technical Field
The invention relates to the field of robots, in particular to a patrol robot control method based on a man-machine cooperation system.
Background
At present, the construction of the smart power grid and the enhancement of power supply reliability have been raised to the level of national strategy, and state detection and monitoring of power equipment, as an emerging industry of recent years, presents huge growth potential and development space. Smart grid construction planning will greatly increase the market demand for intelligent inspection robots and become a long-term driving force for the continued growth of the inspection robot industry. Taking the outdoor robot, which is mainly applied in transformer substations, as a reference for analysing the demand for indoor robots: a substation is an electrical facility in a power system that transforms voltage, receives and distributes electrical energy, controls the flow of electrical power and regulates voltage. In addition, according to the Action Plan for the Construction and Transformation of Power Distribution Networks (2015-2020) published by the National Energy Administration, the coverage rate of domestic distribution automation reached 90% in 2020. If 20% of automated power distribution stations adopt intelligent inspection equipment, the annual domestic demand for indoor robots will exceed 10,000 units within five years.
However, inspection robots in the prior art all operate on parameters entered once by the user; in a complex and changeable field environment the robot keeps working from these old parameters without adapting to changes in the environment, so unattended operation of the robot is neither complete nor reliable. Especially when inspection tasks for large key facilities are performed, the requirements on the accuracy and safety of the inspection robot are even higher. At present, the safety and reliability of inspection tasks executed by an inspection robot without manual intervention, and the flexibility of timely error correction, are difficult to guarantee completely.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a patrol robot control method based on a man-machine cooperation system, which comprises task-level and instruction-level control and specifically comprises the following steps:
at the task level, a user gives a task description to the robot system; the target state of the robot system is obtained through an inference mechanism based on the task description quantities, the current characteristic parameters of the robot are derived from the target state, and these parameters are stored in the background as the target instruction;
at the instruction level, the user plans the execution process of the task himself according to the feedback from the task-level planning, while the system judges the rationality of the user's plan in the background and gives prompts and a recommended task plan;
and after the user confirms the recommended task plan, the system controls the robot to execute the task.
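The task-level portion of the steps above can be sketched as follows. This is a minimal illustration only: the function and field names (`infer_target_state`, `chassis_angle` and so on) are hypothetical assumptions, not taken from any actual robot system, and the inference mechanism is stubbed as a scene-file lookup.

```python
# Illustrative sketch of task-level planning: task description -> target
# state -> target instruction stored in the background. All names here are
# hypothetical, not part of any real robot SDK.

def infer_target_state(task_description):
    """Task level: derive the target state from the task description
    quantities; the "inference mechanism" is stubbed as a lookup of the
    task object's recorded pose in the scene file."""
    return dict(task_description["scene"][task_description["object"]])

def target_instruction(target_state):
    """Extract the robot's current characteristic parameters from the
    target state; these are stored in the background as the target
    instruction."""
    return {k: target_state[k]
            for k in ("chassis_angle", "end_pose", "pan_tilt_angle")}
```

A usage example: the scene file records where the instrument to be read is, and the target instruction is derived from that entry.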
Further, the task description given by the user to the robot system at the task level includes:
importing a scene file that contains the scene object model information and the inspection robot model information involved in the task, and constructing a robot simulation environment in a virtual space;
setting a task entity, which determines the physical devices the system operates;
and setting a task body, which distinguishes the qualitative description and the quantitative description of the task.
Further, the step of the user setting a task entity for the robot includes:
setting an active entity, namely an entity that has a certain degree of autonomy and can actively initiate interaction, so as to make clear the device that carries out the action;
and setting a passive entity, namely the object operated by the active entity, which cannot itself initiate interaction, so as to make clear the device on which the action is carried out.
Further, the step of the user setting a task body for the robot includes:
setting task attributes, which present the multi-dimensional qualitative description and static constraint conditions of the task, including the task target, the task rules and the current state;
and setting task parameters, which quantitatively describe the task and significantly influence the task result once the task attributes are determined.
Further, the step of the user setting the task parameters of the robot includes:
setting input parameters, which constrain the task execution process and limit the movement speed, time or path;
and setting output parameters, which feed back the current state of the system and include the chassis motor angle, the end pose parameters and the pan-tilt head joint angle.
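For illustration only, the input and output parameter sets described above can be represented as simple data structures. The field names and units are assumptions, since the invention does not fix a concrete encoding.

```python
# Hypothetical representation of the task parameters: input parameters
# constrain execution, output parameters feed back the system state.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InputParameters:
    """Constraints on the task execution process (illustrative fields)."""
    max_speed: float                     # movement speed limit
    time_limit: Optional[float] = None   # movement time limit, seconds
    path: Optional[List[str]] = None     # waypoint names restricting the path

@dataclass
class OutputParameters:
    """Feedback on the current state of the system (illustrative fields)."""
    chassis_motor_angle: float   # degrees
    end_pose: List[float]        # [x, y, z, roll, pitch, yaw]
    pan_tilt_joint_angle: float  # degrees
```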
Further, the imported scene file includes a task scene and a spatial layout:
the task scene is independent of the task target and consists of the external factors that influence the execution of the task, including obstacles and space limitations;
the spatial layout is the set of information on the spatial positions of the physical objects involved in task execution, including the standing-position coordinates of the robot, the initial position of the tool, the spatial position of the task object and the positions of obstacles;
the relative position relation of each entity in the task space information is described by a homogeneous transformation matrix.
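The homogeneous transformation description can be illustrated as follows: a pure-Python sketch, under the simplifying assumption of rotations about the z-axis only, of composing world-frame poses from the spatial layout to obtain the pose of a task object relative to the robot's standing position. The numeric poses are made up for the example.

```python
# Sketch: relative pose between entities via 4x4 homogeneous transforms.
import math

def hT(theta_z, x, y, z):
    """Homogeneous transform: rotation theta_z about the z-axis combined
    with translation (x, y, z); 4x4 row-major list of lists."""
    c, s = math.cos(theta_z), math.sin(theta_z)
    return [[c, -s, 0.0, x],
            [s,  c, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert(T):
    """Invert a rigid-body transform: rotation part R^T, translation -R^T p."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]   # transpose of R
    p = [T[i][3] for i in range(3)]
    t = [-sum(Rt[i][j] * p[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [t[0]], Rt[1] + [t[1]], Rt[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

# Made-up world-frame poses of the robot and a task object (an instrument):
T_world_robot = hT(math.pi / 2, 1.0, 0.0, 0.0)
T_world_instr = hT(0.0, 1.0, 2.0, 0.0)

# Pose of the instrument relative to the robot:
# T_robot_instr = inverse(T_world_robot) . T_world_instr
T_robot_instr = matmul(invert(T_world_robot), T_world_instr)
```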
Further, the planning by the user of the task execution process for the robot system at the instruction level includes:
setting control modes, whereby different motion instructions are issued in the various control modes to achieve accurate control of the robot and complete the specified task;
setting control parameters, namely the parameters under the different control modes, including specified joint angles, end pose parameters and the like;
and setting a time sequence relation, which describes the order among the different control modes so that the robot executes in the planned order, ensuring the accuracy of the actions.
Further, the user sets a time sequence relationship for the robot, which is divided into a parallel relationship and a serial relationship: control modes in a parallel relationship can control the robot and issue execution instructions simultaneously, while control modes in a serial relationship must be executed one after another in order.
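The parallel/serial time sequence relation can be sketched as a small dispatcher. The `run_plan` function and the command format are hypothetical; the sketch only shows that commands in one parallel group are issued within the same step, while the serial steps themselves run strictly in order.

```python
# Sketch of the time sequence relation: a plan is a serial list of steps,
# and each step is a parallel group of control-mode commands issued together.

def run_plan(plan, issue):
    """Dispatch a plan. `issue` is a callback (hypothetical) that delivers
    one command to the robot. Returns a log of (step_number, command) so
    the serial ordering can be inspected."""
    log = []
    for step_no, group in enumerate(plan):
        for command in group:           # parallel: same step number
            issue(command)
            log.append((step_no, command))
    return log
```

For example, a plan might first move the chassis alone, then drive the arm joints and the pan-tilt head in parallel.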
Further, the step in which the system judges the rationality of the user's planning in the background and gives prompts and a recommended task path comprises:
the system checks the rationality of each planning step according to the parameters input by the user at the instruction level, and judges whether the plan is reasonable;
if the plan is judged unreasonable, prompts and a recommended task path are given and fed back to the user, and the judgment is executed again after the user resets the parameters;
if the plan is judged reasonable, the edited parameters are stored in the background;
the system then judges whether all parameters required by the task have been set;
if the setting is not finished, the user is prompted to drag a control mode into the planning area, set its parameters and connect it with the preceding and following modes, after which the system continues the rationality judgment;
and if the setting is judged to be finished, all the parameters stored in the background are assembled into a task flow file.
Further, the system judges the rationality of the planning according to whether the parameters edited by the user gradually approach the target instruction parameters.
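This rationality criterion can be illustrated as follows: a sketch that treats instruction parameters as numeric vectors and checks that the distance to the target instruction parameters decreases at every intermediate step. The Euclidean metric is an assumption, since the invention only requires that the parameters "gradually approach" the target.

```python
# Sketch of the background rationality judgment: every successive
# intermediate instruction must move closer to the target instruction.

def approaches_target(intermediate, target, tol=1e-9):
    """Return True if the sequence of intermediate parameter vectors has
    non-increasing Euclidean distance to the target vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d = [dist(p, target) for p in intermediate]
    return all(d[i + 1] <= d[i] + tol for i in range(len(d) - 1))
```

A plan whose steps drift away from the target at any point would be flagged as unreasonable and fed back to the user for re-planning.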
Compared with the prior art, the invention has the following beneficial effects:
1) The work content is divided according to the inspection characteristics of man and machine, the respective strengths of each are fully exploited, the content of cooperative inspection is clearly defined, and insufficient or excessive inspection is avoided.
2) Integration with the production business can close the business loop between the condition-based maintenance system and the production management system of power transmission and transformation equipment; specifically, tasks are classified according to the inspected equipment and the inspection content, task feedback is realized while the business loop is maintained, and task scheduling and distribution are optimized.
3) The integration of the service flow and the information flow of 'inspection task issuing', 'real-time inspection data analysis' and 'automatic generation of maintenance strategies' is realized, and a man-machine cooperation information model is established.
4) Human planning and decision-making capability is combined with the autonomous planning of the robot, giving the robot the capability to adapt to environmental changes; the combined use of task-level and instruction-level parameters makes the motion trajectory of the robot more accurate and the completion of intelligent tasks more accurate and reliable.
Drawings
FIG. 1: a system structure based on man-machine cooperation;
FIG. 2: steps of the control method in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a patrol robot control method based on a man-machine cooperative system, which comprises task-level and instruction-level control and specifically comprises the following steps:
(1) importing a task model: a scene file is imported into a virtual space to construct the robot simulation environment;
(2) inputting information such as the task entities and task body parameters, and searching for and generating the target instruction parameters in a background database;
(3) the robot system autonomously infers the target state at task completion by analysing the data input by the user, and feeds this target state back to the user;
(4) the user autonomously plans the execution flow of the task by dragging control modes and editing action parameters;
(5) the robot system judges in the background the rationality of the user's plan based on the input parameters; if unreasonable, it feeds back prompts and a recommended task path and asks for the action parameters to be input again; if reasonable, the plan is accepted;
(6) after the parameters are judged reasonable, the user stores the edited parameters in the background, where they are saved in the form of an xml file to form the task flow file;
(7) the inspection robot reads the task flow file and executes the actions.
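Step (6) above, forming the task flow file, might look as follows. The tag and attribute names (`task_flow`, `step`, `param`) are assumptions, since the embodiment only states that the parameters are stored in xml form.

```python
# Sketch of serializing a confirmed plan as an xml task flow file.
import xml.etree.ElementTree as ET

def build_task_flow(steps):
    """Turn a list of planned steps (control mode plus its parameters)
    into an xml string; the inspection robot would later read this file
    back to execute the actions."""
    root = ET.Element("task_flow")
    for i, step in enumerate(steps):
        node = ET.SubElement(root, "step", index=str(i), mode=step["mode"])
        for name, value in step["params"].items():
            ET.SubElement(node, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")
```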
As shown in fig. 1 and 2, the control task is divided into a task level and an instruction level. Task-level planning is a macroscopic planning of the task at a certain level of abstraction and does not involve the bottom-level motion instructions of the robot, whereas instruction-level planning is the integration of robot-executable instructions. The two levels are not isolated: task-level planning guides and constrains instruction-level planning, and instruction-level planning is the concrete implementation of task-level planning at a finer granularity. At the task level, the user is responsible for entering his intentions into the system and expressing the tasks to be executed and the related constraints; by analysing these intentions, the robot system autonomously infers the target state at task completion and feeds it back to the user. At the instruction level, the user plans the execution process of the task himself according to the feedback from the task-level planning, while the system judges the rationality of the user's plan in the background and gives prompts and a recommended task path; the whole instruction-level planning process is completed jointly through the cooperation of the user and the robot system. The user describes the tasks and scenes to be executed to the robot system at the task level, appointing the task objects, task purposes and so on; a virtual scene is constructed from the scene file, the target state of the robot system is obtained through an inference mechanism based on the task description quantities, the current characteristic parameters of the robot are derived from the target state, and these are stored in the background as the target instruction.
At the instruction level, the user combines control modes, sets the control parameters, and connects the preceding and following control modes. By planning each control mode the user issues intermediate motion instructions to the robot; through instruction analysis and prejudgment, the intermediate instructions, the target instruction and the current state of the robot system are comprehensively analysed and the rationality of the intermediate instructions is judged. 'Reasonable' here means that all intermediate instructions should, in some measure, continually approach the target instruction; otherwise the planning result is unreasonable. Only when the judgment result is reasonable will the robot execute the motion instructions.
In an embodiment, the task entities comprise the active and passive entities of the task: the robot, as the active entity in the task system, interacts with the passive entity under the control of the controller and changes the passive entity's state or attributes. The object operated by the robot is the passive entity; if the robot executes an inspection task, the instrument is the passive entity. The task body comprises the two element sets of task parameters and task attributes, and serves to distinguish the qualitative and quantitative descriptions of the task. The task attributes are the inherent characteristics of the task and represent its multi-dimensional qualitative description and static constraint conditions, including the task target, the task rules, the current state and so on. The task parameters are the quantitative description of the task and comprise input and output parameters, quantities that significantly influence the task result once the task attributes are determined, such as the movement speed and movement time. The input parameters constrain the task execution process and limit the movement speed, time or path; the output parameters feed back the current state of the system and comprise the chassis motor angle, the end pose parameters and the pan-tilt head joint angle. The task space comprises the two sets of task scene and spatial layout. The task scene is independent of the task target and consists of the external factors that affect task execution, such as obstacles and space limitations. The spatial layout is the set of information on the spatial positions of the physical objects involved in task execution, including the standing-position coordinates of the robot, the initial position of the tool, the spatial position of the task object and the positions of obstacles.
All the task space information is recorded in the scene file, and the relative position relation of each entity is described by a homogeneous transformation matrix.
Task-level planning obtains the target instruction-level parameters capable of completing the task by inputting macroscopic task parameters into the system, and provides the basis for instruction-level planning. It takes the three task element sets of task entity, task body and task space as the input quantity, denoted by the logic symbol >t, and takes the state parameters of the mechanical arm at task completion as the final output quantity, denoted by the logic symbol t<, specifically:
>t = E ∪ B ∪ L;  t< = {JA, EP, FJ},
in the formula: E, B and L are respectively the task entity, the task body and the task space; JA, EP and FJ respectively denote the chassis motor angle, the end pose parameters and the pan-tilt head joint angle instruction.
The robot system stores the task parameters input by the user in the background in the language form of predicate logic; by parsing the predicate logic it obtains the positions of the entities in the scene from the scene file, and calculates the joint angle information of the mechanical arm and the like from the position of the task object. Taking the simple task of reading an inspection instrument as an example, the predicate logic expresses the reading of the instrument; analysis of the inspection action determines that the motion subjects involved are the chassis and the pan-tilt head, and the pose of the inspection robot on the map and the pan-tilt angle can then be obtained from the position information set.
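The predicate-logic lookup described above can be sketched like this. The predicate format `read(object)`, the scene field names and the returned keys are all hypothetical, chosen only to illustrate resolving a task statement into motion subjects and target parameters.

```python
# Sketch: resolve a task stated as predicate logic against the scene file.

def resolve_task(predicate, scene):
    """Parse a predicate such as read(instrument_3), look the passive
    entity up in the spatial layout, and derive the motion subjects plus
    the chassis pose and pan-tilt angle of the target instruction."""
    action, obj = predicate.rstrip(")").split("(")
    if action != "read":
        raise ValueError("unknown action: " + action)
    pose = scene[obj]
    return {"subjects": ["chassis", "pan_tilt"],
            "chassis_pose": pose["map_pose"],      # (x, y, heading) on the map
            "pan_tilt_angle": pose["view_angle"]}  # degrees
```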
The instruction level planning adopts a modular planning mode, and various planning modules are divided according to different motion instruction types of the robot.
Further, instruction-level planning is expressed by the formula: T = {C, P, S},
in the formula: c is a control mode set; p is a parameter set corresponding to the control mode; and S is the time sequence relation among the control modes.
In an embodiment, joint angle control requires the user to input the specified joint angles, and end pose control requires the input of the specified end pose parameters. Since a single control mode can only achieve limited functionality, multiple control modes must be combined to accomplish the task. The time sequence relation describes the order among the different control modes and is roughly divided into parallel and serial: control modes in a parallel relationship can control the robot and issue execution instructions simultaneously, while control modes in a serial relationship must be executed one after another in order.
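The per-mode parameter requirements can be sketched as a validation table. The two entries follow the examples in the text (joint angle control needs specified joint angles, end pose control needs the end pose parameters); the dictionary layout and names are otherwise assumptions.

```python
# Sketch of the background check that each control mode C in the plan
# carries the parameter set P it requires.

REQUIRED = {                  # parameters each control mode needs (illustrative)
    "joint_angle": {"joint_angles"},
    "end_pose":    {"end_pose"},
}

def validate_plan(modes, params):
    """Return (ok, offending_mode, missing_params); ok is False as soon
    as one planned mode lacks a required parameter."""
    for mode in modes:
        missing = REQUIRED[mode] - set(params.get(mode, {}))
        if missing:
            return False, mode, missing
    return True, None, None
```

A failed check corresponds to the system prompting the user to reset the parameters before the rationality judgment is run again.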
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the invention, may occur to those skilled in the art and are intended to be included within the scope of the invention.