CN112346419B - Human-computer safe interaction method, robot and computer readable storage medium - Google Patents

Human-computer safe interaction method, robot and computer readable storage medium Download PDF

Info

Publication number
CN112346419B
CN112346419B CN202011200616.5A CN202011200616A CN112346419B CN 112346419 B CN112346419 B CN 112346419B CN 202011200616 A CN202011200616 A CN 202011200616A CN 112346419 B CN112346419 B CN 112346419B
Authority
CN
China
Prior art keywords
information
target
robot
preset
control information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011200616.5A
Other languages
Chinese (zh)
Other versions
CN112346419A (en
Inventor
刁思勉
陈再励
钟震宇
谭鹏辉
李娜
李锡康
雷欢
李志谋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yejiawei Technology Co ltd
Original Assignee
Shenzhen Yejiawei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yejiawei Technology Co ltd filed Critical Shenzhen Yejiawei Technology Co ltd
Priority to CN202011200616.5A priority Critical patent/CN112346419B/en
Publication of CN112346419A publication Critical patent/CN112346419A/en
Application granted granted Critical
Publication of CN112346419B publication Critical patent/CN112346419B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/4184Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by fault tolerance, reliability of production system
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/31From computer integrated manufacturing till monitoring
    • G05B2219/31088Network communication between supervisor and cell, machine group
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a human-computer safety interaction method, a robot and a computer readable storage medium, wherein the human-computer safety interaction method is applied to the robot and comprises the following steps: acquiring current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the environment information comprises operation information of the robot to be detected and activity state information of human, and the robot to be detected is different from the target robot; determining control information according to the environment information, the current operation information and preset target operation information; determining target control parameters according to the control information and an error function, wherein the error function is the error function of the current operation information and the target operation information; and controlling the running state of the target robot according to the target control parameters. The accuracy of controlling the robot is improved.

Description

Human-computer safe interaction method, robot and computer readable storage medium
Technical Field
The invention relates to the field of industrial manufacturing, in particular to a human-computer safe interaction method and a robot computer readable storage medium.
Background
In the field of industrial manufacturing, unexpected collision may occur between a robot and a human in a human-computer interaction process, the collision may generate a safety risk, in order to reduce the collision, a potential field method is adopted in the prior art to perform collision avoidance, that is, the robot defines the human as an obstacle, determines a collision occurrence area according to a generated virtual field, and generates a corresponding collision avoidance strategy to complete collision avoidance.
Disclosure of Invention
The invention mainly aims to provide a human-computer safety interaction method, a robot and a computer readable storage medium, and aims to solve the technical problem of potential safety hazards caused by inaccurate control in human-computer safety interaction.
In order to achieve the above object, the present invention provides a human-machine safe interaction method, which is applied to a robot, and includes:
acquiring current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the environment information comprises operation information of the robot to be detected and activity state information of human, and the robot to be detected is different from the target robot;
determining control information according to the environment information, the current operation information and preset target operation information;
determining target control parameters according to the control information and an error function, wherein the error function is the error function of the current operation information and the target operation information;
and controlling the running state of the target robot according to the target control parameters.
Preferably, the step of determining control information according to the environment information, the current operation information, and preset target operation information includes:
determining a state equation of the target robot according to the current operation information and the environment information;
and determining the control information according to the environment information, the state equation and the target operation information.
Preferably, the step of determining the state equation of the target robot according to the current operation information and the environment information includes:
constructing the current operation information, the environment information, the control information of the target robot, the control information of the robot to be detected, the control information of the human and a system state equation of system noise interference;
and decoupling the system state equation to obtain the decoupled state equation of the target robot.
Preferably, the step of determining the control information according to the environment information, the state equation and the target operation information includes:
determining an observation equation according to the environment information, the current operation information and preset observation noise;
and determining the control information according to the observation equation, the state equation and the target operation information.
Preferably, after the step of determining control information according to the environment information, the current operation information, and preset target operation information, the method further includes:
detecting the state and the current operation time of the target robot;
and when the state is a non-safety state or the current running time is more than the starting time, executing the step of determining the target control parameter according to the control information and the error function.
Preferably, the step of determining the target control parameter according to the control information and the error function comprises:
acquiring a preset control information range, wherein the preset control information range indicates a range of preset control information of the target robot in a preset safety state, and the safety state indicates that the position of the target robot is in a preset position range or the moving track of the target robot is a preset moving track;
when the control information is within the range of the preset control information, determining the preset control information as the target control parameter;
and when the control information is not in the preset control information range, determining the error function according to the control information and the preset control information range, and determining the target control parameter according to the error function.
Preferably, the step of determining the error function according to the control information and the preset control information range, and determining the target control parameter according to the error function includes:
determining the error function according to the Lyapunov theorem, the control information and the preset control information range;
determining the target control parameter according to the error function, wherein the target control parameter is
Figure BDA0002753394800000031
Wherein, the
Figure BDA0002753394800000032
For the target control parameter, the
Figure BDA0002753394800000033
For the preset control information, M is a first preset coefficient, N is a second preset coefficient, and Q is a positive definite matrix.
Preferably, before the step of determining control information according to the environment information, the current operation information, and preset target operation information, the method further includes:
acquiring execution information of a target task sent by terminal equipment, wherein the execution information is used for indicating the target robot to change operation information;
and determining the preset target operation information according to the execution information.
In addition, in order to achieve the above object, the present invention further provides a robot, including a memory, a processor, and a human-computer safe interaction program stored on the memory and operable on the processor, wherein the human-computer safe interaction program, when executed by the processor, implements the steps of the human-computer safe interaction program according to any one of the above aspects.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, where a human-computer security interaction program is stored, and when executed by a processor, the computer readable storage medium implements the steps of the human-computer security interaction program according to any one of the above aspects.
The embodiment of the invention provides a man-machine safe interaction method, a robot and a computer readable storage medium, which comprises the steps of firstly obtaining current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the current operation information is information that the robot maintains the operation state or the motion state of the robot, the environment information is operation information of a robot to be detected in the environment where the robot is located and activity state information of human, the robot determines control information according to the detected environment information, the current operation information and preset target operation information, and further determines target control parameters according to the control information and an error function, the error function is an error function of the current operation information and the target operation information, the operation state of the target robot is controlled according to the target control parameters, and the corresponding target control parameters under the condition of the minimum error are determined through the error function, and environmental information has been obtained to make the target robot can carry out the degree of accuracy of controlling to the target robot according to the timely target control parameter of adjustment of the change of environment, in addition, because the degree of accuracy of target control parameter is higher, make at man-machine safety interactive in-process, can further avoid the possibility of bumping, promote man-machine interaction's security.
Drawings
FIG. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a human-computer security interaction method according to the present invention;
FIG. 3 is a flowchart illustrating a man-machine interaction method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a human-computer secure interaction method according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a man-machine interaction method according to a fourth embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
As shown in fig. 1, fig. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention.
As shown in fig. 1, the robot may include: a processor 1001 such as a CPU, a communication interface 1002, a memory 1003, and a communication bus 1004. Wherein a communication bus 1004 is used to enable connective communication between these components. The memory 1003 may be a high-speed RAM memory or a non-volatile memory (e.g., a disk memory). The memory 1003 may alternatively be a storage device separate from the processor 1001.
Optionally, the robot may further comprise a sensor. Wherein, sensor such as laser sensor, vision sensor, temperature sensor etc. laser sensor can be used for the range finding, and vision sensor can be used for acquireing the image in the environment, and temperature sensor is used for detecting ambient temperature.
Those skilled in the art will appreciate that the configuration of the robot shown in fig. 1 does not constitute a limitation of the robot, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a communication module, and a human-computer secure interaction application.
In the robot shown in fig. 1, the processor 1001 may be configured to call a human-computer security interaction application stored in the memory 1003, and perform the following operations:
acquiring current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the environment information comprises operation information of the robot to be detected and activity state information of human, and the robot to be detected is different from the target robot;
determining control information according to the environment information, the current operation information and preset target operation information;
determining target control parameters according to the control information and an error function, wherein the error function is the error function of the current operation information and the target operation information;
and controlling the running state of the target robot according to the target control parameters.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
determining a state equation of the target robot according to the current operation information and the environment information;
and determining the control information according to the environment information, the state equation and the target operation information.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
constructing the current operation information, the environment information, the control information of the target robot, the control information of the robot to be detected, the control information of the human and a system state equation of system noise interference;
and decoupling the system state equation to obtain the decoupled state equation of the target robot.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
determining an observation equation according to the environment information, the current operation information and preset observation noise;
and determining the control information according to the observation equation, the state equation and the target operation information.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
detecting the state and the current operation time of the target robot;
and when the state is a non-safety state or the current running time is more than the starting time, executing the step of determining the target control parameter according to the control information and the error function.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
acquiring a preset control information range, wherein the preset control information range indicates a range of preset control information of the target robot in a preset safety state, and the safety state indicates that the position of the target robot is in a preset position range or the moving track of the target robot is a preset moving track;
when the control information is within the range of the preset control information, determining the preset control information as the target control parameter;
and when the control information is not in the preset control information range, determining the error function according to the control information and the preset control information range, and determining the target control parameter according to the error function.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
determining the error function according to the Lyapunov theorem, the control information and the preset control information range;
determining the target control parameter according to the error function, wherein the target control parameter is
Figure BDA0002753394800000061
Wherein, the
Figure BDA0002753394800000062
For the target control parameter, the
Figure BDA0002753394800000063
For the preset control information, M is a first preset coefficient, N is a second preset coefficient, and Q is a positive definite matrix.
Further, the processor 1001 may call the human-computer secure interaction application stored in the memory 1003, and further perform the following operations:
acquiring execution information of a target task sent by terminal equipment, wherein the execution information is used for indicating the target robot to change operation information;
and determining the preset target operation information according to the execution information.
Referring to fig. 2, a first embodiment of the present invention provides a human-computer secure interaction method, where the human-computer secure interaction method includes:
step S10, acquiring current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the environment information comprises operation information of the robot to be detected and activity state information of human, and the robot to be detected is different from the target robot;
in this embodiment, the executing body is a robot, and at the same time, the executing body is also a target robot, that is, the robot acquires current operation information of the target robot as its current operation information, the current operation information refers to operation information or motion information of the target robot in a current period of time, the operation information or motion information is, for example, current position information, moving speed information, and moving direction information, the current operation information is used to describe an operation state or a motion state of the target robot, the environment information is information used to describe an environment state in an area where the target robot is located detected by a sensor, the environment information includes operation information of the robot to be detected and activity state information of a human, the operation information of the robot to be detected is used to describe an operation state or a motion state of the robot to be detected, the operation information of the robot to be inspected such as position information and moving speed information of the robot to be inspected, since the target robot and the robot to be detected are both robots, the operation information may include the same type of data, for example, the current operation information and the operation information of the robot to be inspected may both include the position, the moving speed, etc. thereof, and in addition, the position information may be coordinate information of the robot, longitude and latitude information, or distance and orientation with respect to a reference object, it can be understood that the target robot and the robot to be detected are not the same robot, and the activity state of the human mainly refers to a moving state in which the human changes its own position in a certain area, and may also refer to a state in which the human changes its own limb activity, such as staying, walking, running, etc., and a state in which the limb activity is such as waving a hand.
The target robot can detect environmental information in real time or regularly through the sensor to know the influence that other objects probably produced to self in the environment that self was located, can add the sensor according to actual need and detect corresponding environmental information, for example detect through laser sensor and the distance that is detected the thing, detect the image information who is detected the thing through visual sensor, detect the temperature information who is detected the thing through temperature sensor, can also add other sensors, do not do the restriction here.
The robot can be applied to various scenes, wherein in the field of industrial intelligent manufacturing, the robot is often required to interact with human to complete various operations, and for the scenes, the safety of the robot or the human is particularly important.
Step S20, determining control information according to the environment information, the current operation information and preset target operation information;
after obtaining the environmental information and the current operation information, further combining preset target operation information to obtain control information, where the preset target operation information is target operation information pre-stored in a memory of the robot, the target operation information refers to operation information that the target robot needs to finally reach, for example, the operation information that the robot needs to finally reach is to maintain a certain fixed position or a relative position or to maintain a certain moving speed, the control information refers to an instruction for controlling the target robot, and the control information refers to, for example, an instruction for controlling a motion state (stop, move) of the target robot, or an instruction for controlling an operation state (operating current, operating voltage) of the robot; for example, the target operation information is information that maintains a specific relative position with respect to the human being, such as information that maintains a distance within 10m from the human being, and in order to make the current operation information close to the target operation information, the target robot needs to obtain control information in combination with the environmental information, change various operation states or motion states, and finally reach various states indicated by the target operation information.
In addition, the target robot can also obtain the execution information of the target task sent by the terminal device, the execution information is used for indicating the target robot to change the operation information, and the preset target operation information is determined according to the execution information, the execution information is computer data, the execution information can be a computer instruction, the execution information comprises the operation information used for indicating the target robot to change the operation information, the changed operation information is determined according to the execution information, and the changed operation information is used as the preset target operation information.
Step S30, determining target control parameters according to the control information and an error function, wherein the error function is the error function of the current operation information and the target operation information;
after the control information is obtained, in order to improve the efficiency of the robot interacting with human in industrial manufacturing, the control information needs to be further accurate to meet the requirement of more refined operation, at the moment, a target control parameter needs to be further determined by combining an error function, the target control parameter is the optimal solution of the control information calculated by combining the error function, namely the value of the control information under the condition that the value of the error function is minimum, the error function is the error function of the current operation information and the target operation information, and the error function is set as
Li(ui) Then there is
Figure BDA0002753394800000091
Wherein Q is a positive definite matrix, uiIn order to control the information, it is,
Figure BDA0002753394800000092
control information is preset, wherein,
ui=gi(yi,Gi),
yiis an observation equation of the robot passing through sensor environment information, GiIs the target operation information that is the information of the target operation,
yi=hi(x,vi),
wherein x comprises the current operation information and the environment information of the robot, so that u in the error function can be obtained by combining the current operation information, the target operation information and other parametersiAnd obtaining error functions related to the current operation information and the target operation information in one step.
And step S40, controlling the operation state of the target robot according to the target control parameters.
After the target control parameters are obtained, the operation state of the target robot is controlled according to the target control parameters, wherein the operation state can be an internal operation state of the robot or an external operation state, the internal operation state can be current or voltage, and the external operation state can be position.
In this embodiment, first, current operation information of the target robot and environment information detected by a sensor of the target robot are obtained, wherein the current operation information is information that the robot maintains its operation state or motion state, the environment information is operation information of the robot to be detected and activity state information of a human in an environment where the robot is located, the robot determines control information according to the detected environment information, the current operation information and preset target operation information, and further determines target control parameters according to the control information and an error function, the error function is an error function of the current operation information and the target operation information, and controls the operation state of the target robot according to the target control parameters, since the corresponding target control parameters in the case of minimum error are determined by the error function and the environment information is obtained, thereby make the target robot can be according to the timely target control parameter of adjustment self of the change of environment to the degree of accuracy that the improvement is controlled the target robot.
Referring to fig. 3, a second embodiment of the present invention provides a method for man-machine secure interaction, based on the first embodiment shown in fig. 2, where the step S20 includes:
step S21, determining a state equation of the target robot according to the current operation information and the environment information;
after the target robot obtains the current operation information and the environment information, the target robot first obtains overall state information formed by the current operation information and the environment information according to the operation information of the robot and the activity information of the human in the current operation information and the environment information, the overall state information describes the overall state of a human-computer interaction system formed by the target robot, the robot to be detected and the human, the overall state is a set of the state of each individual in the human-computer interaction system, the individual indicates the target robot, the robot to be detected or the human, the state of each individual indicates the operation state of the robot or the activity state of the human, each individual has corresponding control information, and the control information is used for controlling the behavior of each individual in the human-computer interaction system, so that each individual achieves a safe state, in order to achieve such a safe state, a system state equation of the human-computer interaction system needs to be constructed first, and the system state equation is used for describing the relation between the state of the individual and the control information corresponding to the individual in the whole human-computer interaction system. The method comprises the steps of firstly constructing a system state equation according to current operation information, control information of a target robot, environment information, control information of a robot to be detected, control information of human beings and system noise interference, combining the current operation information and the environment information to obtain system state information x, combining the control information of the target robot, the control information of the robot to be detected and the control information of the human beings to obtain a total control information set u of a system, wherein the system noise is noise interference quantity in the system, the system noise is used for simulating various types of noise possibly existing in a man-machine interaction system, the adaptability of the system state equation to various uncertain factors can be improved by introducing the system noise, the system noise can be Gaussian noise, the system noise is set as w, and the system state equation is set as a state equation
Figure BDA0002753394800000101
In some human-computer interaction scenes, each individual is independent, so that the state equation of each individual is obtained through decoupling, wherein the state equation of the target robot is included, the system state equation is decoupled to obtain the state equation of the decoupled target robot, and the state equation is set as
Figure BDA0002753394800000102
The current operation information is xiThe control information of the target robot is uiNoise w of the target robotiThen, then
Figure BDA0002753394800000103
Step S22, determining the control information according to the environmental information, the state equation, and the target operation information.
For each individual such as target in the human-computer interaction systemFor a robot, the robot needs to observe the activity state or the operation state of other individuals in a system so as to adjust the state of the robot, environment information detected by a target robot through a sensor is the activity state or the operation state of other individuals, and an observation equation of the target robot is set as yiThen, then
yi=hi(x,vi),
Wherein v isiThe method is characterized in that the preset observation noise is artificially introduced mathematical quantity and is used for simulating environment noise which may exist. The target robot detects environmental information through a sensor, such as a vision sensor, a laser sensor, a distance sensor, a radar, and the like, and obtains x by combining current operation information. Therefore, an observation equation can be obtained according to the environmental information, the current operation information and the preset observation noise; after the observation equation is obtained, because in an actual application scenario, there is a common situation that direct communication connection is not established between different individuals in the human-computer interaction system, at this time, the robot can only detect environmental information including information of other individuals through the sensor, at this time, in order to determine control information, a scheme adopted is to determine the control information according to the observation equation, the state equation and target operation information, specifically, u is setiFor control information, GiFor the target operation information, can obtain
ui=gi(yi,Gi)=gi(hi(x,vi),Gi),
For the human-computer interaction system as a whole, the final goal is to make each individual of the human-computer interaction system reach their own target state, i.e. safe state, which can also be based on
ui=gi(yi,Gi)=gi(hi(x,vi),Gi)
Further describing the overall system state equation of the human-computer interaction system, the system state equation is
Figure BDA0002753394800000111
In addition, the present embodiment uses the target robot as the main body for observing the environmental information, and further obtains the control information through the environmental information observed by the target robot, because in the human-computer interaction system, the behavior of each individual affects each other, and each individual needs to change its own behavior according to the behavior of other individuals, because the human perception organ has relatively low accuracy, and the response speed is low when reacting to various kinds of external information, therefore, when coordinating the behavior of each individual in the human-computer interaction system to finally reach the safe state, and controlling the human to reach the safe state, the response speed of the human perception organ is low, the efficiency of controlling the human is not high, and in order to improve the efficiency of controlling the individual, the present embodiment controls the target robot, the sensor of the target robot has high reaction speed and can adapt to the environmental information more quickly, so that the sensor is more suitable for being used as a party for executing the control information.
In this embodiment, the state equation of the target robot is determined according to the current operation information and the environment information, and the control information is determined according to the environment information, the state equation, and the target operation information, thereby making the target robot reach the control information, because the target robot is determining the control information of itself, the environment information is detected through the sensor, making it able to further obtain the control information according to the environment information in combination with the current operation information of itself, and because the reaction speed of the robot is fast, the efficiency of detecting the environment information and further obtaining the control information through the sensor of the robot is higher than that of controlling the human, in addition, because noise is introduced, making the target robot have stronger adaptability to noise that may exist in the environment.
Referring to fig. 4, a third embodiment of the present invention provides a method for man-machine secure interaction, based on the first embodiment shown in fig. 2, after step S20, the method includes:
step S50, detecting the state and the current operation time of the target robot;
step S60, when the state is an unsafe state or the current running time is greater than the starting time, executing the step of determining the target control parameter according to the control information and the error function.
In a human-computer interaction system, it is necessary to ensure safety of all individuals even if a robot and a human being are in a safe state such as a state where different individuals do not collide with each other, and in the case of a target robot, it may be in a safe state such as no collision with other individuals or may be in a non-safe state at different times, in order to put the target robot in the safe state, the target robot first detects whether its state is in the safe state, and in the case where the state is in the non-safe state, it is necessary to determine target control information based on the control information and an error function and adjust the target robot to the safe state based on the target control information, and further, since the target robot should be in the safe state at a time after its activation, by judging whether the current operation time is greater than the activation time, in the case where the current operation time is greater than the activation time, it is indicated that the target robot is operating, and in case the target robot is operating, the target control information should be determined according to the control information and the error function, so as to control the operating state according to the target control information, so that the robot returns to or maintains a safe state, and more accurate control of the robot is achieved.
Further, the above-described scheme may also be described by an expression in which the state of the target robot is set to
Figure BDA0002753394800000121
Suppose that
Figure BDA0002753394800000122
When the target robot is in a non-safe state,
Figure BDA0002753394800000123
is that
Figure BDA0002753394800000124
The control strategy designed so that it is satisfied when the robot is in an unsafe state
Figure BDA0002753394800000125
Under the conditions of
Figure BDA0002753394800000126
The system is gradually returned to the safe state along with the time, and more generally, the starting time of the target robot is t0The current running time can be set to t, then at t>t0When 1) is
Figure BDA0002753394800000127
When the target robot is in a safe state; 2) if the target robot is in an unsafe state
Figure BDA0002753394800000128
Then it is required to
Figure BDA0002753394800000131
The two conditions are met, the system can be ensured to gradually converge and return to a safe state, and the safety control of human-computer interaction is realized.
In this embodiment, by detecting the state and the current operation time of the target robot, the target control information can be determined according to the control information and the error function when the state is an unsafe state or the current operation time is greater than the start time, so that the target robot returns to the safe state through the target control information when the target robot is unsafe, or the target robot is in the safe state according to the target control information in real time after the target robot is started, thereby improving the accuracy of controlling the target robot.
Referring to fig. 5, a fourth embodiment of the present invention provides a method for man-machine secure interaction, based on the first embodiment shown in fig. 2, where the step S30 includes:
step S31, acquiring a preset control information range, wherein the preset control information range indicates the range of preset control information of the target robot in a preset safety state, and the safety state indicates that the position of the target robot is in a preset position range or the moving track of the target robot is a preset moving track;
the target robot firstly acquires a preset control information range, wherein the preset control information range is a range or a set of control information pre-stored in the robot, the preset control information range indicates a range of the preset control information of the target robot in a safe state, the safe state indicates that the position of the target robot is within a preset position range, or the moving track of the target robot is a preset moving track, the preset position is within 20 meters for example, and the moving track avoids the track of an obstacle for example; is provided with
Figure BDA0002753394800000132
If the control information range is preset, then there is
Figure BDA0002753394800000133
Wherein the content of the first and second substances,
Figure BDA0002753394800000134
Figure BDA0002753394800000135
ηiis a parameter for ensuring the safety margin and combines preset control parameters
Figure BDA0002753394800000136
Determining an error function according to the Lyapunov theorem, the control information and the preset control information range to obtain the error function
Figure BDA0002753394800000137
Q is a positive definite matrix.
Step S32, when the control information is within the preset control information range, determining that the preset control information is the target control parameter;
set the target control parameter as
Figure BDA0002753394800000138
Then
Figure BDA0002753394800000141
At this time, there are two cases, the first is that the control information is within the preset control information range, then
Figure BDA0002753394800000142
Namely, the preset control information is the target control parameter.
And step S33, when the control information is not in the preset control information range, determining the error function according to the control information and the preset control information range, and determining the target control parameter according to the error function.
In the second case, the control information is not in the preset control information range, which indicates that different individuals interfere with each other, and an error function is determined according to the control information and the preset control information range, wherein the error function is
Figure BDA0002753394800000143
Wherein λ is Lagrange multiplier, and defines a first preset coefficient
Figure BDA0002753394800000144
Second predetermined coefficient N ═ ηiz(φ0) Then, then
Figure BDA0002753394800000145
Target control parameters, i.e. optimal solution fulfilment
Figure BDA0002753394800000146
Namely, it is
Figure BDA0002753394800000147
Solved to obtain
Figure BDA0002753394800000148
Figure BDA0002753394800000149
Finally, the target control parameter is obtained as
Figure BDA00027533948000001410
Wherein, the
Figure BDA00027533948000001411
For the target control parameter, the
Figure BDA00027533948000001412
For the preset control information, M is a first preset coefficient, N is a second preset coefficient, and Q is a positive definite matrix.
In this embodiment, at first, the preset control information range is obtained, and when the control information is in the preset control information range, the preset control information range is adopted as the target control parameter, when the control information is not in the preset control information range, more accurate target control information is obtained by adopting error function calculation, thereby different controls can be performed in different application scenes, on one hand, faster control in the rough range can be realized by the preset control information range to adapt to safer scenes, on the other hand, more accurate target control parameters can be calculated by combining the error function, so as to adapt to scenes in which interference is generated between a target robot and a robot to be detected or between robots, and more accurate control of the robot is realized.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a robot to perform the methods according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A human-computer safe interaction method is applied to a robot, and comprises the following steps:
acquiring current operation information of a target robot and environment information detected by a sensor of the target robot, wherein the environment information comprises operation information of the robot to be detected and activity state information of human, and the robot to be detected is different from the target robot;
determining a state equation of the target robot according to the current operation information and the environment information;
determining control information according to the environment information and preset target operation information;
acquiring a preset control information range, wherein the preset control information range indicates a range of preset control information of the target robot in a preset safety state, and the safety state indicates that the position of the target robot is in a preset position range or the moving track of the target robot is a preset moving track;
when the control information is not in the preset control information range, determining an error function according to the Lyapunov theorem, the control information and the preset control information range;
determining the target control parameter according to the error function, wherein the target control parameter is
Figure 575131DEST_PATH_IMAGE001
Wherein, the
Figure 554588DEST_PATH_IMAGE002
For the target control parameter, the
Figure 396642DEST_PATH_IMAGE003
For the preset controlPreparing information, wherein M is a first preset coefficient, N is a second preset coefficient, and Q is a positive definite matrix, wherein the error function is an error function of the current operation information and the target operation information;
and controlling the running state of the target robot according to the target control parameters.
2. A human-machine-safe interaction method according to claim 1, wherein the step of determining the state equation of the target robot based on the current operation information and the environment information comprises:
constructing the current operation information, the environment information, the control information of the target robot, the control information of the robot to be detected, the control information of the human and a system state equation of system noise interference;
and decoupling the system state equation to obtain the decoupled state equation of the target robot.
3. The human-computer security interaction method of claim 1, wherein the step of determining the control information according to the environment information and preset target operation information comprises:
determining an observation equation according to the environment information, the current operation information and preset observation noise;
and determining the control information according to the observation equation and the target operation information.
4. The human-computer security interaction method of claim 1, wherein after the step of determining the control information according to the environment information and the target operation information, further comprising:
detecting the state and the current operation time of the target robot;
and when the state is a non-safety state or the current running time is more than the starting time, executing the step of acquiring the range of the preset control information.
5. The human-computer security interaction method of claim 1, wherein after the step of obtaining the preset control information range, the method further comprises:
and when the control information is within the range of the preset control information, determining the preset control information as the target control parameter.
6. The human-computer security interaction method of claim 1, wherein before the step of determining the control information according to the environment information and the preset target operation information, the method further comprises:
acquiring execution information of a target task sent by terminal equipment, wherein the execution information is used for indicating the target robot to change operation information;
and determining the preset target operation information according to the execution information.
7. A robot, characterized in that the robot comprises a memory, a processor and a human-computer safe interaction program stored on the memory and operable on the processor, the human-computer safe interaction program, when executed by the processor, implementing the steps of the human-computer safe interaction method according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a human-computer security interaction program is stored, which, when executed by a processor, implements the steps of the human-computer security interaction method according to any one of claims 1 to 6.
CN202011200616.5A 2020-10-30 2020-10-30 Human-computer safe interaction method, robot and computer readable storage medium Active CN112346419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011200616.5A CN112346419B (en) 2020-10-30 2020-10-30 Human-computer safe interaction method, robot and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011200616.5A CN112346419B (en) 2020-10-30 2020-10-30 Human-computer safe interaction method, robot and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112346419A CN112346419A (en) 2021-02-09
CN112346419B true CN112346419B (en) 2021-12-31

Family

ID=74356701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011200616.5A Active CN112346419B (en) 2020-10-30 2020-10-30 Human-computer safe interaction method, robot and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112346419B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569491A (en) * 2016-10-31 2017-04-19 江苏华航威泰机器人科技有限公司 Robot obstacle avoidance trajectory planning method
WO2017132905A1 (en) * 2016-02-03 2017-08-10 华为技术有限公司 Method and apparatus for controlling motion system
CN108153309A (en) * 2017-12-22 2018-06-12 安徽农业大学 For the control method and caterpillar robot of caterpillar robot
CN109062224A (en) * 2018-09-06 2018-12-21 深圳市三宝创新智能有限公司 Robot food delivery control method, device, meal delivery robot and automatic food delivery system
CN110231823A (en) * 2019-06-13 2019-09-13 中山大学 A kind of direct control method of two-wheel robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102039595B (en) * 2009-10-09 2013-02-27 泰怡凯电器(苏州)有限公司 Self-moving ground handling robot and facing ground handling control method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017132905A1 (en) * 2016-02-03 2017-08-10 华为技术有限公司 Method and apparatus for controlling motion system
CN106569491A (en) * 2016-10-31 2017-04-19 江苏华航威泰机器人科技有限公司 Robot obstacle avoidance trajectory planning method
CN108153309A (en) * 2017-12-22 2018-06-12 安徽农业大学 For the control method and caterpillar robot of caterpillar robot
CN109062224A (en) * 2018-09-06 2018-12-21 深圳市三宝创新智能有限公司 Robot food delivery control method, device, meal delivery robot and automatic food delivery system
CN110231823A (en) * 2019-06-13 2019-09-13 中山大学 A kind of direct control method of two-wheel robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于AS-R移动机器人的轨迹跟踪控制研究;李丽婷;《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》;20130115;第52-57页 *
机器人***中的状态反馈控制器设计;周景雷;《菏泽学院学报》;20090930;第31卷(第5期);第I140-249页 *

Also Published As

Publication number Publication date
CN112346419A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
Santos et al. A novel null-space-based UAV trajectory tracking controller with collision avoidance
US8886357B2 (en) Reinforcement learning apparatus, control apparatus, and reinforcement learning method
Huq et al. Behavior-modulation technique in mobile robotics using fuzzy discrete event system
Freire et al. A new mobile robot control approach via fusion of control signals
EP3725609B1 (en) Calibrating method for vehicle anti-collision parameters, vehicle controller and storage medium
Liu et al. Driving behavior model considering driver's over-trust in driving automation system
CN112346419B (en) Human-computer safe interaction method, robot and computer readable storage medium
Xu et al. A new robot collision detection method: A modified nonlinear disturbance observer based-on neural networks
Jo et al. Track fusion and behavioral reasoning for moving vehicles based on curvilinear coordinates of roadway geometries
Zhang et al. Adaptive event based predictive lateral following control for unmanned ground vehicle system
Xu et al. Potential gap: A gap-informed reactive policy for safe hierarchical navigation
Cho et al. Intent inference-based ship collision avoidance in encounters with rule-violating vessels
Shahriari et al. A novel predictive safety criteria for robust collision avoidance of autonomous robots
US20230271621A1 (en) Driving assistance device, learning device, driving assistance method, medium with driving assistance program, learned model generation method, and medium with learned model generation program
US20220153293A1 (en) Method and device for operating an automated vehicle
CN113227834A (en) Method and device for sensor data fusion of a vehicle
KR102595615B1 (en) Method for determining safety-critical output values by way of a data analysis device for a technical entity
Oishi Assessing information availability for user-interfaces of shared control systems under reference tracking
CN114777770A (en) Robot positioning method, device, control terminal and readable storage medium
Huq et al. Distributed fuzzy discrete event system for robotic sensory information processing
Ertl et al. Using a mediator to handle undesired feature interaction of automated driving
Liu et al. Research on lane change motion planning steering input based on optimal control theory
Wu et al. Multi-objective dynamic coordinated Adaptive Cruise Control for intelligent electric vehicle with sensors fusion
Gavigan et al. Quantifying the relationship between software design principles and performance in jason: a case study with simulated mobile robots
Amrouche et al. Vision based collision avoidance for multi-agent systems using avoidance functions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant