CN117950481A - Interactive information generation method, device and system, electronic equipment and medium - Google Patents

Interactive information generation method, device and system, electronic equipment and medium

Info

Publication number
CN117950481A
Authority
CN
China
Prior art keywords
information
interaction
target robot
capability
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211267235.8A
Other languages
Chinese (zh)
Inventor
李伟
杨明川
王羽培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202211267235.8A
Priority to PCT/CN2023/111705 (WO2024082781A1)
Publication of CN117950481A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/02: Knowledge representation; Symbolic representation
    • G06N 5/022: Knowledge engineering; Knowledge acquisition
    • G06N 5/025: Extracting rules from data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides an interaction information generation method, apparatus, system, electronic device, and storage medium, and relates to the technical field of artificial intelligence. The interaction information generation method includes: acquiring first attribute information from a first ontology knowledge base of a target robot and acquiring first basic information from a second ontology knowledge base of a target object, where the first attribute information is the attribute information of the target robot at a first moment and the first basic information is the basic information of the target object at the first moment; constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, where the standard interaction capability set is used to detect the capability information of the target robot; and inputting the first basic information of the target object into the interaction capability assessment model and outputting first interaction information of the target robot and the target object. The method and apparatus improve the efficiency and flexibility of interaction information generation.

Description

Interactive information generation method, device and system, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, and in particular to an interaction information generation method, apparatus, and system, an electronic device, and a storage medium.
Background
A smart space is a typical environment for studying the principles and technologies of harmonious human-machine interaction. In a smart space, users and robots can interact with a variety of information sources, including devices and data. With the development of artificial intelligence technology, the fusion of humans, machines, and objects in such spaces is becoming ever tighter. An intelligent service robot, which can interact intelligently with other devices and objects, is a key element of an intelligent service space. To accurately and efficiently schedule a suitable intelligent service robot to complete a task, it is essential to construct the interaction information between the robot and the other objects in the space.
Because a service-oriented smart space contains many elements and its environment is complex and changeable, the interaction information and rules between a robot and the other elements are currently built mainly by manual or semi-automatic means, which suffers from low construction efficiency, strong limitations, poor model extensibility, and insufficient flexibility.
On this basis, how to improve the efficiency of generating the interaction information between a robot and an object in a smart space is a technical problem that needs to be solved.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides an interaction information generation method, apparatus, and system, an electronic device, and a storage medium, which at least to some extent overcome the problem in the related art of low efficiency in generating interaction information between a robot and an object in a smart space.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided an interaction information generation method including: acquiring first attribute information from a first ontology knowledge base of a target robot and acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment and the first basic information is basic information of the target object at the first moment; constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot; and inputting the first basic information of the target object into the interaction capability assessment model and outputting first interaction information of the target robot and the target object, wherein the first interaction information is an interaction rule of the target robot and the target object at the first moment.
In one embodiment of the disclosure, the standard interaction capability set includes mapping relationships between various attribute information and capability information of the target robot; constructing an interaction capability assessment model of the target robot based on the first attribute information and the standard interaction capability set of the target robot includes: determining each piece of capability information possessed by the target robot based on the first attribute information and the mapping relationships between the various attribute information and capability information of the target robot; and constructing the interaction capability assessment model of the target robot based on each piece of capability information of the target robot.
In one embodiment of the disclosure, after inputting the first basic information of the target object into the interactive capability assessment model and outputting the first interactive information of the target robot and the target object, the method further includes: and respectively storing the first interaction information as interaction type information into the first ontology knowledge base and the second ontology knowledge base.
In one embodiment of the present disclosure, the method further comprises: acquiring second attribute information from a first ontology knowledge base of the target robot, wherein the second attribute information is attribute information of the target robot at a second moment; and when the second attribute information is inconsistent with the first attribute information and the attribute value in the second attribute information exceeds a preset attribute threshold value, adjusting an interactive capability evaluation model of the target robot based on the second attribute information.
In one embodiment of the present disclosure, the method further comprises: acquiring second basic information from a second ontology knowledge base of the target object, wherein the second basic information is basic information of the target object at a second moment; when the second basic information is inconsistent with the first basic information and the basic value in the second basic information exceeds a preset basic threshold, inputting the second basic information of the target object into the interactive capability evaluation model, and outputting second interactive information of the target robot and the target object, wherein the second interactive information is an interactive rule of the target robot and the target object at a second moment; and respectively storing the second interaction information as interaction type information into the first ontology knowledge base and the second ontology knowledge base.
According to another aspect of the present disclosure, there is provided an interactive information generating apparatus including: the information acquisition module is used for acquiring first attribute information from a first ontology knowledge base of the target robot and acquiring first basic information from a second ontology knowledge base of the target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of a target object at the first moment; the evaluation model construction module is used for constructing an interaction capability evaluation model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot; the interaction information output module is used for inputting the first basic information of the target object into the interaction capability assessment model, and outputting first interaction information of the target robot and the target object, wherein the first interaction information is an interaction rule of the target robot and the target object at a first moment.
In one embodiment of the disclosure, the standard interaction capability set includes a mapping relationship between various attribute information and capability information of the target robot; the evaluation model construction module is further used for determining each piece of capability information of the target robot based on the first attribute information and the mapping relation between various attribute information and capability information of the target robot; and constructing an interactive capability evaluation model of the target robot based on the capability information of each target robot.
In an embodiment of the disclosure, the interaction information output module is further configured to store the first interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
In an embodiment of the disclosure, the information obtaining module is further configured to obtain second attribute information from a first ontology knowledge base of the target robot, where the second attribute information is attribute information of the target robot at a second moment; the evaluation model construction module is further configured to adjust an interaction capability evaluation model of the target robot based on the second attribute information when the second attribute information is inconsistent with the first attribute information and an attribute value in the second attribute information exceeds a preset attribute threshold.
In an embodiment of the disclosure, the information obtaining module is further configured to obtain second basic information from a second ontology knowledge base of the target object, where the second basic information is basic information of the target object at a second moment; and the interaction information output module is further configured to, when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, input the second basic information of the target object into the interaction capability assessment model, output second interaction information of the target robot and the target object, where the second interaction information is an interaction rule of the target robot and the target object at the second moment, and store the second interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
According to still another aspect of the present disclosure, there is provided an interactive information generating system including: the interaction attribute management module is used for acquiring first attribute information from a first ontology knowledge base of the target robot, acquiring first basic information from a second ontology knowledge base of the target object, sending the first attribute information to the interaction capability assessment module and sending the first basic information to the interaction matching module, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of a target object at the first moment; the interaction capability evaluation module is used for constructing an interaction capability evaluation model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, and sending the obtained interaction capability evaluation model to the interaction matching module, wherein the standard interaction capability set is used for detecting capability information of the target robot; the interaction matching module is used for inputting first basic information of the target object into the interaction capability assessment model, outputting first interaction information of the target robot and the target object, and respectively storing the first interaction information as interaction type information into the first ontology knowledge base and the second ontology knowledge base, wherein the first interaction information is an interaction rule of the target robot and the target object at a first moment; the first ontology knowledge base is used for storing attribute information and interaction type information of the target robot; and the second ontology knowledge base is used for storing the basic information and interaction type information of the target object.
In one embodiment of the disclosure, the standard interaction capability set includes a mapping relationship between various attribute information and capability information of the target robot; the interaction capability assessment module is further used for determining each piece of capability information of the target robot based on the first attribute information and the mapping relation between various attribute information and capability information of the target robot; and constructing an interactive capability evaluation model of the target robot based on the capability information of each target robot.
In one embodiment of the disclosure, the interaction attribute management module is further configured to obtain second attribute information from a first ontology knowledge base of a target robot, and send the second attribute information to the interaction capability assessment module, where the second attribute information is attribute information of the target robot at a second moment; the interaction capability evaluation module is further configured to adjust an interaction capability evaluation model of the target robot based on the second attribute information when the second attribute information is inconsistent with the first attribute information and an attribute value in the second attribute information exceeds a preset attribute threshold.
In one embodiment of the disclosure, the interaction attribute management module is further configured to obtain second basic information from a second ontology knowledge base of the target object, and send the second basic information to the interaction matching module, where the second basic information is basic information of the target object at a second moment; the interaction matching module is further configured to input second basic information of the target object into the interaction capability assessment model when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, and output second interaction information of the target robot and the target object, where the second interaction information is an interaction rule of the target robot and the target object at a second moment, and store the second interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the interaction information generating method described above via execution of the executable instructions.
According to still another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the interactive information generation method described above.
The embodiment of the disclosure provides an interaction information generating method, an interaction information generating device, an interaction information generating system, electronic equipment and a storage medium, wherein the interaction information generating method comprises the following steps: acquiring first attribute information from a first ontology knowledge base of a target robot, acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of a target object at the first moment; constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot; and inputting the first basic information of the target object into the interactive capability evaluation model, and outputting the first interactive information of the target robot and the target object. The method and the device improve the generation efficiency and flexibility of the interaction information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 is a schematic diagram showing a configuration of a communication system in an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for generating interaction information in an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating another method of generating interaction information in an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an interaction information generating method according to an embodiment of the disclosure;
FIG. 5 is a flowchart illustrating another method of generating interaction information in an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating another method of generating interaction information in an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an interactive information generating apparatus according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram of an interactive information generation system according to an embodiment of the disclosure;
Fig. 9 shows a block diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The ontology knowledge base of an intelligent service robot contains at least basic-class information and interaction-class information: the basic-class information describes the robot's objective attributes, including shape, size, color, weight, functional components, and the like, while the interaction-class information defines the interaction rules between the robot and the other elements in the space.
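For concreteness, a minimal Python sketch of how such an ontology knowledge-base entry could be organized is given below; the class name, field names, and example values are illustrative assumptions and are not defined by this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyEntry:
    """Illustrative entry of an ontology knowledge base (assumed layout).

    basic_info stores objective attributes (shape, size, color, weight,
    functional components, ...); interaction_info stores the interaction
    rules derived between this element and other elements in the space.
    """
    element_id: str
    basic_info: dict = field(default_factory=dict)
    interaction_info: list = field(default_factory=list)

# Hypothetical entries for a service robot and an object in the smart space.
robot_entry = OntologyEntry(
    element_id="robot_01",
    basic_info={"weight_kg": 35.0, "functional_components": ["bionic_hand", "lidar"]},
)
cup_entry = OntologyEntry(
    element_id="cup_03",
    basic_info={"shape": "cylinder", "material": "ceramic"},
)
```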
As noted in the background section, because a service-oriented smart space contains many elements and its environment is complex and changeable, existing approaches build the interaction information and rules between a robot and the other elements mainly by manual or semi-automatic means, which suffers from low construction efficiency, strong limitations, poor model extensibility, and insufficient flexibility.
On this basis, the embodiments of the present disclosure provide an interaction information generation method, apparatus, and system, an electronic device, and a storage medium. An interaction capability assessment model of a target robot is constructed from first attribute information of the target robot obtained from a first ontology knowledge base and a standard interaction capability set; first basic information of a target object obtained from a second ontology knowledge base is then input into the interaction capability assessment model, which outputs the first interaction information of the target robot and the target object, thereby improving the efficiency and flexibility of interaction information generation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which an interactive information generation method or an interactive information generation apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105.
The network 104 is a medium for providing a communication link between the terminal devices 101, 102, 103 and the server 105, and may be a wired network or a wireless network.
Alternatively, the wireless or wired network described above uses standard communication technologies and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using technologies and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), virtual private networks (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
The terminal devices 101, 102, 103 may be a variety of electronic devices including, but not limited to, smartphones, tablet computers, laptop portable computers, desktop computers, wearable devices, augmented reality devices, virtual reality devices, and the like.
Alternatively, the clients of the applications installed in the different terminal devices 101, 102, 103 are the same or clients of the same type of application based on different operating systems. The specific form of the application client may also be different based on the different terminal platforms, for example, the application client may be a mobile phone client, a PC client, etc.
The server 105 may be a server providing various services, such as a background management server providing support for devices operated by users with the terminal devices 101, 102, 103. The background management server can analyze and process the received data such as the request and the like, and feed back the processing result to the terminal equipment.
Optionally, the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs), big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the present disclosure.
Those skilled in the art will appreciate that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative, and that any number of terminal devices, networks, and servers may be provided as desired. The embodiments of the present disclosure are not limited in this regard.
The present exemplary embodiment will be described in detail below with reference to the accompanying drawings and examples.
First, in the embodiments of the present disclosure, an interactive information generating method is provided, where the method may be performed by the system or a device in the system disclosed in fig. 1, or may be performed by any electronic device having a computing processing capability.
Fig. 2 shows a flowchart of an interaction information generating method in an embodiment of the present disclosure, and as shown in fig. 2, the interaction information generating method provided in the embodiment of the present disclosure includes the following steps:
S202, acquiring first attribute information from a first ontology knowledge base of a target robot and acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of a target object at the first moment.
The target robot is an intelligent machine capable of semi-autonomous or fully autonomous operation; it can perform tasks such as work or movement under program and automatic control, and may be one of various intelligent robots such as a service robot, an entertainment robot, a micro-manipulation robot, or an agricultural robot. The target object may be an object or element within the smart space. The first ontology knowledge base is used to store the attribute information of the target robot; the second ontology knowledge base is used to store the basic information of the target object. The first attribute information describes objective attributes of the target robot, such as its shape, size, color, weight, and functional components, and may be one or a combination of several pieces of component information such as bionic hand information and mechanical arm information of the target robot; the bionic hand information may include parameters such as arm motion parameters, the number of finger joints, and the arm working radius. The first basic information describes objective properties of the target object, such as its state, shape, position, and material, and may be one or a combination of several of the state information, position information, shape information, size information, volume information, material information, surface friction coefficient information, and the like of the target object. The first moment may be any moment, for example ten o'clock on day Z of month Y of year X, where X, Y, and Z are integers.
In one embodiment of the disclosure, the attribute information and interaction-class information of the target robot are stored in the first ontology knowledge base, and the basic-class information and interaction-class information of the target object are stored in the second ontology knowledge base, where the interaction-class information defines the interaction rules between an object and other objects or elements in the smart space. The first attribute information of the target robot can thus be obtained from the first ontology knowledge base, and the first basic information of the target object can be obtained from the second ontology knowledge base.
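Purely as an illustration of the terms just listed (all keys and values below are assumed, not taken from this disclosure), the first attribute information and first basic information retrieved in step S202 might look like the following.

```python
# Hypothetical first attribute information of the target robot at the first
# moment, built from the bionic hand parameters named above.
first_attribute_info = {
    "bionic_hand": {
        "arm_motion_params": {"max_speed_m_s": 0.5, "payload_kg": 2.0},
        "finger_joint_count": 12,
        "arm_work_radius_m": 0.85,
    },
}

# Hypothetical first basic information of the target object at the first moment.
first_basic_info = {
    "state": "static",
    "position": (1.20, 0.40, 0.90),   # metres, in the smart-space frame
    "size_m": (0.08, 0.08, 0.10),
    "weight_kg": 0.3,
    "material": "ceramic",
    "surface_friction_coeff": 0.6,
}
```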
S204, constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot.
The interaction capability assessment model is used for matching the relationship between the target robot and the target object to obtain interaction information, and the interaction capability assessment model can take basic information of the object as input and take interaction information of the target robot and the target object as output.
In one embodiment of the disclosure, the capability information of the target robot may be detected based on the first attribute information of the target robot and the standard interaction capability set, and the interaction capability assessment model of the target robot may then be constructed from that capability information.
S206, inputting the first basic information of the target object into the interaction capability evaluation model, and outputting the first interaction information of the target robot and the target object, wherein the first interaction information is the interaction rule of the target robot and the target object at the first moment.
It should be noted that, the interaction rule is used to define an interaction manner between the target robot and the target object, and the interaction manner may be some action, such as grabbing, touching, supporting, or knocking.
In one embodiment of the present disclosure, the interaction capability assessment model may include a plurality of capability information of the target robot, where the capability information includes a capability name and a capability attribute corresponding to the capability name, each of the capability attributes may include a range value, and by comparing whether a plurality of basic values in the first basic information of the target object are within the range value of the corresponding capability attribute, an interaction rule between the target robot and the target object is determined, so as to obtain first interaction information of the target robot and the target object.
According to the interaction information generation method provided by the embodiments of the disclosure, an interaction capability assessment model of the target robot is constructed from the first attribute information of the target robot, acquired from the first ontology knowledge base, and the standard interaction capability set; the first basic information of the target object, acquired from the second ontology knowledge base, is then input into the interaction capability assessment model, which outputs the first interaction information of the target robot and the target object, thereby improving the efficiency and flexibility of interaction information generation.
In one embodiment of the present disclosure, the standard interaction capability set contains mapping relationships between various attribute information and capability information of the target robot. Constructing the interaction capability assessment model of the target robot based on the first attribute information and the standard interaction capability set may be implemented through the steps disclosed in fig. 3. Referring to the flowchart of another interaction information generation method shown in fig. 3, the process may include the following steps:
S302, determining each piece of capability information of the target robot based on the first attribute information and the mapping relation between the various attribute information and the capability information of the target robot;
S304, based on each capability information of the target robot, constructing an interaction capability assessment model of the target robot.
It should be noted that the standard interaction capability set may further include capability information, where the capability information may include a capability name and capability attributes. The capability name may be one or a combination of several actions such as grabbing, touching, supporting, or knocking, and a capability attribute may be regarded as a dimension along which the capability is expressed; for example, when the capability name is grabbing, the corresponding capability attributes may define attributes such as the grabbing manner, grabbing load bearing, and grabbing flexibility. The specific parameters of a capability need not be defined in the standard interaction capability set, and the capability information may be expressed as capability name (capability attribute 1, capability attribute 2, …, capability attribute n), where n is the number of capability attributes of the capability name, for example grabbing (grabbing manner, grabbing load bearing, grabbing flexibility, grabbing target identification, grabbing path planning).
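One possible way to encode the expression form described above, a capability name followed by its capability attributes without concrete parameter values, is sketched below; the dictionary keys are assumptions made for illustration only.

```python
# Assumed encoding of a standard interaction capability set:
# capability name -> tuple of capability attributes (evaluation dimensions).
# Concrete parameter values are deliberately absent, mirroring the text above.
STANDARD_INTERACTION_CAPABILITY_SET = {
    "grab":    ("grab_manner", "grab_load_bearing", "grab_flexibility",
                "grab_target_identification", "grab_path_planning"),
    "touch":   ("touch_manner", "touch_force"),
    "support": ("support_load_bearing", "support_area"),
    "knock":   ("knock_force", "knock_frequency"),
}
```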
The first attribute information of the target robot and the capability information in the standard interaction capability set have a complex mapping relationship; for example, when the capability name is grabbing, the grabbing flexibility in the grabbing capability attributes is related to attribute information such as the arm motion parameters, the number of finger joints, and the arm working radius of the target robot. The interaction capability assessment model of the target robot may therefore be constructed based on the first attribute information and the mapping relationships between the various attribute information and capability information of the target robot.
In an embodiment of the present disclosure, each capability name of the target robot may be determined from a prior knowledge base and the first attribute information of the target robot, where the prior knowledge base is used to identify the capability names of the target robot, and the capability attributes corresponding to each capability name may then be determined based on the standard interaction capability set. For example, when the first attribute information includes the bionic hand information of the target robot, it is determined that the target robot has a grabbing capability, and the capability attributes of grabbing are determined in combination with the standard interaction capability set; the interaction capability assessment model of the target robot is then constructed from the capability names and their corresponding capability attributes.
In one embodiment of the disclosure, the interaction capability assessment model of the target robot may thus be constructed based on the first attribute information, the prior knowledge base, and the standard interaction capability set; identifying the capability names possessed by the target robot through the prior knowledge base first improves the construction efficiency of the interaction capability assessment model.
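A minimal sketch of this two-stage construction is given below: capability names are first recognized from the robot's attribute information via a prior knowledge base, and the capability attributes are then attached with range values derived from the attributes. All mappings, key names, and numbers are illustrative assumptions, not values defined by this disclosure.

```python
# Assumed prior knowledge base: which robot attribute implies which capability name.
PRIOR_KNOWLEDGE_BASE = {"bionic_hand": "grab"}

# Assumed slice of the standard interaction capability set used in this sketch.
CAPABILITY_ATTRIBUTES = {
    "grab": ("grab_load_bearing", "grab_flexibility", "grab_path_planning"),
}

def build_capability_model(first_attribute_info: dict) -> dict:
    """Return an assessment model of the form
    {capability name: {capability attribute: (low, high) range value}}.

    How each range value is derived from the robot's attribute information is
    an illustrative assumption, used only to make the model structure concrete.
    """
    model = {}
    for attribute_key, attribute_value in first_attribute_info.items():
        capability = PRIOR_KNOWLEDGE_BASE.get(attribute_key)
        if capability is None:
            continue
        ranges = {}
        for capability_attribute in CAPABILITY_ATTRIBUTES.get(capability, ()):
            if capability_attribute == "grab_load_bearing":
                # assumed: graspable weight is bounded by the arm payload
                payload = attribute_value.get("arm_motion_params", {}).get("payload_kg", 0.0)
                ranges[capability_attribute] = (0.0, payload)
            elif capability_attribute == "grab_flexibility":
                # assumed: acceptable surface-friction window widens with joint count
                joints = attribute_value.get("finger_joint_count", 0)
                ranges[capability_attribute] = (0.3, 0.3 + 0.05 * joints)
            elif capability_attribute == "grab_path_planning":
                # assumed: the object must lie within the arm working radius
                ranges[capability_attribute] = (0.0, attribute_value.get("arm_work_radius_m", 0.0))
        model[capability] = ranges
    return model
```

With the illustrative first_attribute_info from the earlier sketch, build_capability_model would yield, for example, {"grab": {"grab_load_bearing": (0.0, 2.0), "grab_flexibility": (0.3, 0.9), "grab_path_planning": (0.0, 0.85)}}.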
In an embodiment of the present disclosure, referring to the schematic diagram of an interaction information generation method shown in fig. 4, determining each piece of capability information possessed by the target robot based on the first attribute information and the mapping relationships between the various attribute information and capability information of the target robot may include the following. When the capability name in the capability information is grabbing, the bionic hand information of the target robot can be obtained from the first attribute information of the target robot, and the capability attributes corresponding to that capability name are determined from the bionic hand information; in this case the capability information consists of the grabbing capability attributes, such as grabbing load bearing, grabbing flexibility, grabbing target identification, and grabbing path planning. Each capability attribute may be a range value, and it can be judged whether each basic value in the first basic information of the target object lies within the range value of the corresponding capability attribute; if so, the interaction rule between the target robot and the target object is determined, yielding the interaction information of the target robot and the target object. For example, when the capability attribute is grabbing path planning, it is judged whether the position information of the target object lies on the planned grabbing path; if so, the interaction rule between the target robot and the target object with respect to grabbing path planning is determined. For another example, when the capability attribute is grabbing flexibility, it is judged whether the surface friction coefficient of the target object lies within the range value of the grabbing flexibility; if so, the interaction rule between the target robot and the target object with respect to grabbing flexibility is determined. When some basic value in the first basic information of the target object does not lie within the range value of the corresponding capability attribute of the target robot, it is determined that the target robot cannot interact with the target object, and an interaction result indicating that interaction is impossible is output.
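The range-check matching described above can be sketched as follows; the model layout and the pairing between capability attributes and basic values continue the assumptions of the previous sketches and are not mandated by this disclosure.

```python
import math

def match_interaction(capability_model: dict, basic_info: dict,
                      robot_position=(0.0, 0.0, 0.0)) -> list:
    """Compare each basic value of the target object against the range value of
    the corresponding capability attribute; return the interaction rules
    (capability names) the robot may apply, or an empty list if the two cannot
    interact. The attribute-to-basic-value pairing is an illustrative assumption."""
    rules = []
    for capability, ranges in capability_model.items():
        compatible = True
        for capability_attribute, (low, high) in ranges.items():
            if capability_attribute == "grab_load_bearing":
                value = basic_info.get("weight_kg", math.inf)
            elif capability_attribute == "grab_flexibility":
                value = basic_info.get("surface_friction_coeff", math.inf)
            elif capability_attribute == "grab_path_planning":
                # the object position must lie on the reachable grabbing path
                value = math.dist(robot_position,
                                  basic_info.get("position", (math.inf,) * 3))
            else:
                continue
            if not (low <= value <= high):
                compatible = False  # one basic value falls outside its range value
                break
        if compatible:
            rules.append(capability)
    return rules
```

Under these assumptions, the first interaction information reduces to the list of capability names whose range checks all pass, and an empty list corresponds to the "cannot interact" result described above.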
In one embodiment of the present disclosure, after inputting the first basic information of the target object into the interaction ability evaluation model and outputting the first interaction information of the target robot and the target object, the method further includes: and respectively storing the first interaction information as interaction type information into a first ontology knowledge base and a second ontology knowledge base.
In one embodiment of the present disclosure, the method may further include the steps disclosed in fig. 5, referring to another flowchart of an interaction information generating method shown in fig. 5, may include:
S502, acquiring second attribute information from a first ontology knowledge base of the target robot, wherein the second attribute information is the attribute information of the target robot at a second moment;
S504, when the second attribute information is inconsistent with the first attribute information and the attribute value in the second attribute information exceeds a preset attribute threshold value, adjusting the interaction ability evaluation model of the target robot based on the second attribute information.
The second time may be any time different from the first time, and the second attribute information may include a plurality of attribute values, where the attribute values are used to describe feature information of the target robot.
In an embodiment of the present disclosure, attribute information of a target robot at a current moment may be obtained from a first ontology knowledge base in real time or at each preset time interval, if the current moment is a second moment, second attribute information is obtained from the first ontology knowledge base of the target robot, whether the second attribute information is consistent with the first attribute information is determined, meanwhile, whether each attribute value in the second attribute information exceeds a preset attribute threshold value is determined, when the second attribute information is inconsistent with the first attribute information, and the attribute value in the second attribute information exceeds the preset attribute threshold value, an interaction capability evaluation model of the target robot may be adjusted based on the second attribute information, so as to obtain an adjusted interaction capability evaluation model, and accuracy of outputting the interaction information by the interaction capability model is improved. The preset time interval may be any duration in units of seconds, minutes, or hours, such as 5 seconds, ten minutes, one hour, etc.
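The consistency-and-threshold check of steps S502 and S504 could be sketched as follows; the flat numeric layout of the attribute information and thresholds is an assumption made purely for illustration.

```python
from typing import Callable, Optional

def maybe_adjust_model(first_attribute_info: dict,
                       second_attribute_info: dict,
                       attribute_thresholds: dict,
                       rebuild: Callable[[dict], dict]) -> Optional[dict]:
    """Adjust the interaction capability assessment model only when the attribute
    information has changed AND some attribute value exceeds its preset attribute
    threshold; otherwise return None and keep the existing model (assumed logic)."""
    if second_attribute_info == first_attribute_info:
        return None
    exceeds_threshold = any(
        isinstance(value, (int, float))
        and value > attribute_thresholds.get(key, float("inf"))
        for key, value in second_attribute_info.items()
    )
    if not exceeds_threshold:
        return None
    # Rebuild (adjust) the model from the second attribute information,
    # e.g. by re-running the construction step sketched earlier.
    return rebuild(second_attribute_info)
```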
In one embodiment of the present disclosure, the method may further include the steps disclosed in fig. 6, referring to another flowchart of an interaction information generating method shown in fig. 6, may include:
S602, acquiring second basic information from a second ontology knowledge base of the target object, wherein the second basic information is basic information of the target object at a second moment;
S604, when the second basic information is inconsistent with the first basic information and the basic value in the second basic information exceeds a preset basic threshold, inputting the second basic information of the target object into the interaction capability evaluation model, and outputting second interaction information of the target robot and the target object, wherein the second interaction information is an interaction rule of the target robot and the target object at a second moment;
S606, the second interaction information is used as interaction type information to be respectively stored in the first ontology knowledge base and the second ontology knowledge base.
It should be noted that the second basic information may include a plurality of basic values, where the basic values are used to describe feature information of the target object.
In one embodiment of the disclosure, the basic information of the target object at the current moment may be obtained from the second ontology knowledge base in real time or at preset time intervals. If the current moment is the second moment, the second basic information is obtained from the second ontology knowledge base of the target object, and it is determined whether the second basic information is consistent with the first basic information and whether each basic value in the second basic information exceeds a preset basic threshold. When the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds the preset basic threshold, the second basic information of the target object is input into the interaction capability assessment model, and the second interaction information of the target robot and the target object is output, so as to update the interaction rules between the target robot and the target object at different moments.
In an embodiment of the disclosure, the second basic information includes a plurality of basic values, and it may be further determined whether each basic value in the second basic information exceeds a preset basic threshold, and when the basic value in the second basic information exceeds the preset basic threshold, the second basic information of the target object is input into the interaction capability assessment model, and the second interaction information of the target robot and the target object is output.
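Analogously to the robot-side check, the object-side refresh of steps S602 through S606, including writing the second interaction information back into both ontology knowledge bases (modelled here as plain dicts, an assumed layout), might look like this.

```python
from typing import Callable

def refresh_interaction_info(first_basic_info: dict,
                             second_basic_info: dict,
                             basic_thresholds: dict,
                             assess: Callable[[dict], list],
                             first_ontology_kb: dict,
                             second_ontology_kb: dict) -> list:
    """If the target object's basic information changed and some basic value
    exceeds its preset basic threshold, re-run the interaction capability
    assessment model and store the resulting second interaction information
    into both ontology knowledge bases (assumed logic and storage layout)."""
    if second_basic_info == first_basic_info:
        return []
    exceeds_threshold = any(
        isinstance(value, (int, float))
        and value > basic_thresholds.get(key, float("inf"))
        for key, value in second_basic_info.items()
    )
    if not exceeds_threshold:
        return []
    second_interaction_info = assess(second_basic_info)  # e.g. match_interaction(...)
    first_ontology_kb.setdefault("interaction_info", []).append(second_interaction_info)
    second_ontology_kb.setdefault("interaction_info", []).append(second_interaction_info)
    return second_interaction_info
```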
Based on the same inventive concept, the embodiments of the present disclosure also provide an interactive information generating apparatus, as follows. Since the principle of solving the problem of the embodiment of the device is similar to that of the embodiment of the method, the implementation of the embodiment of the device can be referred to the implementation of the embodiment of the method, and the repetition is omitted.
Fig. 7 is a schematic diagram of an interactive information generating apparatus according to an embodiment of the disclosure, as shown in fig. 7, where the apparatus includes:
The information obtaining module 710 is configured to obtain first attribute information from a first ontology knowledge base of the target robot, and obtain first basic information from a second ontology knowledge base of the target object, where the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment;
An evaluation model construction module 720, configured to construct an interaction capability evaluation model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, where the standard interaction capability set is used to detect capability information of the target robot;
the interaction information output module 730 is configured to input first basic information of the target object into the interaction capability assessment model, and output first interaction information of the target robot and the target object, where the first interaction information is an interaction rule of the target robot and the target object at the first moment.
In one embodiment of the present disclosure, the standard interaction capability set contains a mapping relationship between various attribute information and capability information of the target robot; the evaluation model construction module 720 is further configured to determine each piece of capability information that the target robot has based on the first attribute information and a mapping relationship between each piece of attribute information and capability information of the target robot; and constructing an interactive capability evaluation model of the target robot based on the capability information of each target robot.
In an embodiment of the present disclosure, the interaction information output module 730 is further configured to store the first interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base, respectively.
In an embodiment of the disclosure, the information obtaining module 710 is further configured to obtain second attribute information from a first ontology knowledge base of the target robot, where the second attribute information is attribute information of the target robot at a second moment; the evaluation model construction module 720 is further configured to adjust the interaction capability evaluation model of the target robot based on the second attribute information when the second attribute information is inconsistent with the first attribute information and the attribute value in the second attribute information exceeds a preset attribute threshold.
In an embodiment of the disclosure, the information obtaining module 710 is further configured to obtain second basic information from a second ontology knowledge base of the target object, where the second basic information is basic information of the target object at a second moment; the interaction information output module 730 is further configured to input second basic information of the target object into the interaction capability assessment model when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, and output second interaction information of the target robot and the target object, where the second interaction information is an interaction rule of the target robot and the target object at a second moment, and store the second interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
Based on the same inventive concept, the embodiments of the present disclosure also provide an interactive information generating system, such as the following embodiments. Since the principle of solving the problem of the system embodiment is similar to that of the method embodiment, the implementation of the system embodiment can be referred to the implementation of the method embodiment, and the repetition is omitted.
Fig. 8 is a schematic diagram of an interactive information generating system according to an embodiment of the disclosure, as shown in fig. 8, where the system includes:
The interaction attribute management module 810 is configured to obtain first attribute information from the first ontology knowledge base 840 of the target robot, obtain first basic information from the second ontology knowledge base 850 of the target object, send the first attribute information to the interaction capability assessment module 820, and send the first basic information to the interaction matching module 830, where the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment;
The interaction capability assessment module 820 is configured to construct an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, and send the obtained interaction capability assessment model to the interaction matching module 830, where the standard interaction capability set is used for detecting capability information of the target robot;
The interaction matching module 830 is configured to input first basic information of the target object into the interaction capability assessment model, output first interaction information of the target robot and the target object, and store the first interaction information as interaction class information to the first ontology repository 840 and the second ontology repository 850 respectively, where the first interaction information is an interaction rule of the target robot and the target object at a first moment;
a first ontology knowledge base 840 for storing attribute information and interaction class information of the target robot;
the second ontology repository 850 is used for storing basic information and interaction information of the target object.
In one embodiment of the present disclosure, the standard interaction capability set contains a mapping relationship between various attribute information and capability information of the target robot; the interactive capability assessment module 820 is further configured to determine each capability information of the target robot based on the first attribute information and a mapping relationship between each attribute information and the capability information of the target robot; and constructing an interactive capability evaluation model of the target robot based on the capability information of each target robot.
In one embodiment of the present disclosure, the interaction attribute management module 810 is further configured to obtain second attribute information from the first ontology knowledge base 840 of the target robot, and send the second attribute information to the interaction capability assessment module, where the second attribute information is attribute information of the target robot at a second moment; the interaction ability evaluation module 820 is further configured to adjust an interaction ability evaluation model of the target robot based on the second attribute information when the second attribute information is inconsistent with the first attribute information and an attribute value in the second attribute information exceeds a preset attribute threshold.
In one embodiment of the present disclosure, the interaction attribute management module 810 is further configured to obtain second basic information from the second ontology knowledge base 850 of the target object, and send the second basic information to the interaction matching module, where the second basic information is basic information of the target object at a second moment; the interaction matching module 830 is further configured to input second basic information of the target object into the interaction capability assessment model when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, and output second interaction information of the target robot and the target object, where the second interaction information is an interaction rule of the target robot and the target object at a second moment, and store the second interaction information as interaction type information in the first ontology knowledge base 840 and the second ontology knowledge base 850 respectively.
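To make the division of labour among the modules of fig. 8 concrete, the skeleton below shows one possible wiring; the class, method, and field names are assumptions for illustration, not interfaces defined by this disclosure, and the assessment and matching steps are placeholders for the logic sketched earlier.

```python
class InteractionInformationGenerationSystem:
    """Illustrative wiring of the modules shown in fig. 8 (assumed interfaces)."""

    def __init__(self, first_ontology_kb: dict, second_ontology_kb: dict,
                 standard_capability_set: dict):
        self.first_ontology_kb = first_ontology_kb    # robot attribute + interaction info
        self.second_ontology_kb = second_ontology_kb  # object basic + interaction info
        self.standard_capability_set = standard_capability_set

    # Interaction attribute management module (810): fetch and dispatch information.
    def manage_attributes(self):
        return (self.first_ontology_kb.get("attribute_info", {}),
                self.second_ontology_kb.get("basic_info", {}))

    # Interaction capability assessment module (820): build the assessment model.
    def assess_capability(self, attribute_info: dict) -> dict:
        # Placeholder: one empty attribute-range table per capability name.
        return {name: {} for name in self.standard_capability_set}

    # Interaction matching module (830): match and persist the interaction info.
    def match_and_store(self, capability_model: dict, basic_info: dict) -> list:
        interaction_info = list(capability_model)     # placeholder matching step
        self.first_ontology_kb.setdefault("interaction_info", []).append(interaction_info)
        self.second_ontology_kb.setdefault("interaction_info", []).append(interaction_info)
        return interaction_info

    def run_once(self) -> list:
        attribute_info, basic_info = self.manage_attributes()
        capability_model = self.assess_capability(attribute_info)
        return self.match_and_store(capability_model, basic_info)
```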
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 900 according to such an embodiment of the present disclosure is described below with reference to fig. 9. The electronic device 900 shown in fig. 9 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general purpose computing device. Components of the electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one storage unit 920, and a bus 930 connecting the different system components (including the storage unit 920 and the processing unit 910).
The storage unit stores program code that can be executed by the processing unit 910, so that the processing unit 910 performs the steps according to various exemplary embodiments of the present disclosure described above in the "exemplary methods" section of this specification. For example, the processing unit 910 may perform the following steps of the method embodiment described above: acquiring first attribute information from a first ontology knowledge base of a target robot, and acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment; constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot; and inputting the first basic information of the target object into the interaction capability assessment model, and outputting first interaction information of the target robot and the target object, wherein the first interaction information is an interaction rule of the target robot and the target object at the first moment.
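As a purely illustrative wiring of these three steps, continuing the hypothetical sketches given earlier (and assuming OntologyKnowledgeBase and build_assessment_model from those sketches are in scope), the flow could look like this; it is not the processing unit's actual program code, and all identifiers and sample values are assumptions.

```python
# Illustrative end-to-end flow of the method, reusing the hypothetical helpers above.
robot_kb = OntologyKnowledgeBase(owner="target_robot")
object_kb = OntologyKnowledgeBase(owner="target_object")

# Step 1: acquire first attribute information and first basic information (first moment).
robot_kb.put_snapshot("t1", {"microphone": True, "camera": True})
object_kb.put_snapshot("t1", {"id": "object-001", "excluded_channels": ["visual_recognition"]})
first_attribute_info = robot_kb.get_snapshot("t1")
first_basic_info = object_kb.get_snapshot("t1")

# Step 2: construct the interaction capability assessment model from the attribute
# information and the (hypothetical) standard interaction capability set.
assess = build_assessment_model(first_attribute_info)

# Step 3: output the first interaction information (interaction rule at the first moment)
# and store it as interaction type information in both knowledge bases.
first_interaction_info = assess(first_basic_info)
robot_kb.add_interaction_info(first_interaction_info)
object_kb.add_interaction_info(first_interaction_info)
```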
The storage unit 920 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 9201 and/or cache memory 9202, and may further include Read Only Memory (ROM) 9203.
The storage unit 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205. Such program modules 9205 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 930 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 900 may also communicate with one or more external devices 940 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 950. Moreover, the electronic device 900 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 over the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, which may be a readable signal medium or a readable storage medium, and on which a program product capable of implementing the above-described method of the present disclosure is stored. In some possible implementations, various aspects of the present disclosure may also be implemented in the form of a program product comprising program code; when the program product is run on a terminal device, the program code is used to cause the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "exemplary methods" section of this specification.
More specific examples of the computer readable storage medium in the present disclosure may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Alternatively, the program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. An interactive information generation method, characterized by comprising:
acquiring first attribute information from a first ontology knowledge base of a target robot, and acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment;
constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot;
inputting the first basic information of the target object into the interaction capability assessment model, and outputting first interaction information of the target robot and the target object, wherein the first interaction information is an interaction rule of the target robot and the target object at the first moment.
2. The interactive information generation method according to claim 1, wherein the standard interaction capability set contains a mapping relationship between each type of attribute information and capability information of the target robot;
the constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and the standard interaction capability set comprises:
determining each piece of capability information of the target robot based on the first attribute information and the mapping relationship between each type of attribute information and capability information of the target robot;
and constructing the interaction capability assessment model of the target robot based on each piece of capability information of the target robot.
3. The interactive information generation method according to claim 1, wherein after inputting the first basic information of the target object into the interaction capability assessment model and outputting the first interaction information of the target robot and the target object, the method further comprises:
storing the first interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
4. The interactive information generation method according to claim 1, characterized in that the method further comprises:
acquiring second attribute information from the first ontology knowledge base of the target robot, wherein the second attribute information is attribute information of the target robot at a second moment;
and when the second attribute information is inconsistent with the first attribute information and an attribute value in the second attribute information exceeds a preset attribute threshold, adjusting the interaction capability assessment model of the target robot based on the second attribute information.
5. The interactive information generation method according to claim 1, characterized in that the method further comprises:
acquiring second basic information from the second ontology knowledge base of the target object, wherein the second basic information is basic information of the target object at a second moment;
when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, inputting the second basic information of the target object into the interaction capability assessment model, and outputting second interaction information of the target robot and the target object, wherein the second interaction information is an interaction rule of the target robot and the target object at the second moment;
and storing the second interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
6. An interactive information generating apparatus, comprising:
The information acquisition module is used for acquiring first attribute information from a first ontology knowledge base of a target robot and acquiring first basic information from a second ontology knowledge base of a target object, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment;
The evaluation model construction module is used for constructing an interaction capability evaluation model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, wherein the standard interaction capability set is used for detecting capability information of the target robot;
The interaction information output module is used for inputting the first basic information of the target object into the interaction capability assessment model, and outputting first interaction information of the target robot and the target object, wherein the first interaction information is an interaction rule of the target robot and the target object at the first moment.
7. An interactive information generation system, comprising:
The interaction attribute management module is used for acquiring first attribute information from a first ontology knowledge base of a target robot, acquiring first basic information from a second ontology knowledge base of a target object, sending the first attribute information to the interaction capability assessment module, and sending the first basic information to the interaction matching module, wherein the first attribute information is attribute information of the target robot at a first moment, and the first basic information is basic information of the target object at the first moment;
the interaction capability assessment module is used for constructing an interaction capability assessment model of the target robot based on the first attribute information of the target robot and a standard interaction capability set, and sending the obtained interaction capability assessment model to the interaction matching module, wherein the standard interaction capability set is used for detecting capability information of the target robot;
The interaction matching module is used for inputting the first basic information of the target object into the interaction capability assessment model, outputting first interaction information of the target robot and the target object, and storing the first interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively, wherein the first interaction information is an interaction rule of the target robot and the target object at the first moment;
The first ontology knowledge base is used for storing attribute information and interaction type information of the target robot;
and the second ontology knowledge base is used for storing the basic information and interaction type information of the target object.
8. The interactive information generation system according to claim 7, wherein
The interaction attribute management module is further configured to obtain second attribute information from a first ontology knowledge base of the target robot, and send the second attribute information to the interaction capability assessment module, where the second attribute information is attribute information of the target robot at a second moment;
the interaction capability assessment module is further configured to adjust the interaction capability assessment model of the target robot based on the second attribute information when the second attribute information is inconsistent with the first attribute information and an attribute value in the second attribute information exceeds a preset attribute threshold.
9. The interactive information generation system according to claim 7, wherein
The interaction attribute management module is further configured to obtain second basic information from a second ontology knowledge base of the target object, and send the second basic information to the interaction matching module, where the second basic information is basic information of the target object at a second moment;
The interaction matching module is further configured to input second basic information of the target object into the interaction capability assessment model when the second basic information is inconsistent with the first basic information and a basic value in the second basic information exceeds a preset basic threshold, and output second interaction information of the target robot and the target object, where the second interaction information is an interaction rule of the target robot and the target object at a second moment, and store the second interaction information as interaction type information in the first ontology knowledge base and the second ontology knowledge base respectively.
10. An electronic device, comprising:
A processor; and
A memory for storing executable instructions of the processor;
wherein the processor is configured to perform the interactive information generation method of any one of claims 1 to 5 via execution of the executable instructions.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the interactive information generation method of any one of claims 1 to 5.
CN202211267235.8A 2022-10-17 2022-10-17 Interactive information generation method, device and system, electronic equipment and medium Pending CN117950481A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211267235.8A CN117950481A (en) 2022-10-17 2022-10-17 Interactive information generation method, device and system, electronic equipment and medium
PCT/CN2023/111705 WO2024082781A1 (en) 2022-10-17 2023-08-08 Interaction information generation method and apparatus, and system, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211267235.8A CN117950481A (en) 2022-10-17 2022-10-17 Interactive information generation method, device and system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117950481A true CN117950481A (en) 2024-04-30

Family

ID=90736826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211267235.8A Pending CN117950481A (en) 2022-10-17 2022-10-17 Interactive information generation method, device and system, electronic equipment and medium

Country Status (2)

Country Link
CN (1) CN117950481A (en)
WO (1) WO2024082781A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794214B (en) * 2015-04-27 2018-06-26 广州大学 A kind of method for designing big data driving cloud robot
CN106378780A (en) * 2016-10-21 2017-02-08 遨博(北京)智能科技有限公司 Robot system and method and server for controlling robot
CN107291811B (en) * 2017-05-18 2019-11-29 浙江大学 A kind of sense cognition enhancing robot system based on cloud knowledge fusion
CN112364853B (en) * 2021-01-13 2021-03-30 之江实验室 Robot task execution method based on knowledge base and PDDL semantic design

Also Published As

Publication number Publication date
WO2024082781A1 (en) 2024-04-25

Similar Documents

Publication Publication Date Title
US10679133B1 (en) Constructing and utilizing a knowledge graph for information technology infrastructure
US10833928B1 (en) Exchanging information among system middleware and models
US11442704B2 (en) Computerized system and method for a distributed low-code / no-code computing environment
US10114676B2 (en) Building multimodal collaborative dialogs with task frames
CN107153599B (en) Method and equipment for recording and playing back user operation
JP2021103506A (en) Method and device for generating information
US11163586B1 (en) Automated configuration of application program instance
US20200167267A1 (en) Asynchronous consumer-driven contract testing in micro service architecture
Thiyagarajan et al. Integration in the physical world in IoT using android mobile application
US20240184642A1 (en) Integrating applications using containerized integration flow
CN111143408B (en) Event processing method and device based on business rule
CN117950481A (en) Interactive information generation method, device and system, electronic equipment and medium
CN110554892A (en) Information acquisition method and device
US11704095B2 (en) Dynamic API bot for robotic process automation
CN111176982B (en) Test interface generation method and device
CN115803729A (en) Direct data loading of middleware generated records
WO2018134680A1 (en) System and method for integrating disparate computer systems and applications
EP4333401A1 (en) Logic injection in messaging state machines
US20230376363A1 (en) Framework for digital workers
EP4020217A1 (en) Performance monitoring for osgi application with bundles
US11704174B1 (en) Prediction and automatic performance of computer-related activities
Manione User centered integration of Internet of Things devices
CN117021088A (en) Robot task state monitoring method and device and related equipment
CN118228823A (en) Training method and device for intelligent agent of chess system, storage medium and electronic equipment
CN117591637A (en) Content extension and question reply method, device, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination