CN110815238A - Robot expression implementation method and device, computer equipment and storage medium - Google Patents

Robot expression implementation method and device, computer equipment and storage medium

Info

Publication number
CN110815238A
Authority
CN
China
Prior art keywords
expression
interface
element content
input information
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911029797.7A
Other languages
Chinese (zh)
Inventor
覃健全
周贤林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United States (shenzhen) Information Technology Ltd By Share Ltd
Original Assignee
United States (shenzhen) Information Technology Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United States (shenzhen) Information Technology Ltd By Share Ltd filed Critical United States (shenzhen) Information Technology Ltd By Share Ltd
Priority to CN201911029797.7A priority Critical patent/CN110815238A/en
Publication of CN110815238A publication Critical patent/CN110815238A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/001 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a method and apparatus for implementing robot expressions, a computer device, and a storage medium. The method comprises the following steps: acquiring expression input information through a first interface, and reading the element content corresponding to the expression input information from an expression set; calling an expression generating function through a second interface to generate a target expression corresponding to the element content; and outputting the target expression to the display device of the robot by calling a third interface. With this scheme, the robot expression implementation is easy to maintain and development cost is reduced.

Description

Robot expression implementation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a method and an apparatus for implementing robot expressions, a computer device, and a storage medium.
Background
With the development of computer technology, robotics has advanced rapidly, and robots are gradually being applied in many fields (e.g., manufacturing, services). During interaction between a robot and a user, the robot's expression serves as an important bridge for human-computer interaction: it conveys the robot's feedback well and appropriately simulates emotions that the user can understand. At present, robot expressions can convey only simple emotions (smiling, sadness, and the like) and cannot convey richer emotions (joy, anger, sorrow, and the like); they cannot meet users' requirements for robot expressions, so expression diversification needs to be realized.
However, users' demands on robot expressions keep rising, and the currently adopted implementation methods are hard to maintain during development and costly to develop.
Disclosure of Invention
In view of the above, it is necessary to provide a robot expression implementation method, apparatus, computer device, and storage medium that reduce development cost and are easy to maintain.
A robot expression implementation method, the method comprising:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from an expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to a display device of the robot by calling a third interface, wherein the third interface is used for connecting the display device of the robot.
In one embodiment, before the obtaining of the expression input information through the first interface, the method further includes:
obtaining element content by performing picture modulus (dot-matrix extraction) processing on a prepared expression picture, and constructing an expression set according to the element content.
In one embodiment, the reading element content corresponding to the expression input information from an expression set according to the expression input information includes:
analyzing the expression input information to obtain an expression identifier;
and reading element content corresponding to the expression identifier from an expression set according to the expression identifier.
In one embodiment, after the reading of the element content corresponding to the expression input information from the expression set, the method further includes:
storing the element content in a preset cache region;
the step of calling an expression generating function through a second interface to generate a target expression corresponding to the element content includes:
and reading the element content from the cache region, and calling an expression generating function through a second interface to generate a target expression corresponding to the element content. In one embodiment, the method for generating the target expression corresponding to the element content by using the expression generation function includes:
sequentially reading element contents from the cache region according to a preset time interval;
and calling an expression generating function through a second interface to generate a target expression corresponding to the element content.
In one embodiment, the method further comprises:
acquiring an expression updating instruction through a first interface;
reading element content corresponding to the expression updating instruction from the expression set according to the expression updating instruction;
calling an expression generating function through a second interface to generate an updated expression corresponding to the element content;
and outputting the updated expression to a display device of the robot by calling a third interface.
In one embodiment, the expression updating instruction is generated by triggering a timer of the robot or receiving triggering information input through an input device.
A robotic expression-implementing apparatus, the apparatus comprising:
the obtaining module is used for obtaining the expression input information through the first interface;
the reading module is used for reading element content corresponding to the expression input information from an expression set according to the expression input information;
the expression generation module is used for calling an expression generation function through a second interface to generate a target expression corresponding to the element content;
and the output module is used for outputting the target expression to the display equipment of the robot by calling a third interface, and the third interface is used for connecting the display equipment of the robot.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from an expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to a display device of the robot by calling a third interface, wherein the third interface is used for connecting the display device of the robot.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from an expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to a display device of the robot by calling a third interface, wherein the third interface is used for connecting the display device of the robot.
According to the robot expression implementation method and apparatus, computer device, and storage medium above, expression input information is acquired through the first interface, and the element content corresponding to it is read from the expression set; an expression generating function is called through the second interface to generate the target expression corresponding to the element content; and the target expression is output to the robot's display device by calling the third interface. Only the element content corresponding to the expressions to be realized needs to be stored in the expression set in advance; the corresponding element content is fetched from the set according to the acquired expression input information, and the corresponding expression is displayed on the robot's display device. Because the robot expression is realized through interfaces, the implementation process is simple, the method is easy to maintain, and development cost is reduced.
Drawings
FIG. 1 is a diagram of an application environment of a method for implementing robot expressions in one embodiment;
FIG. 2 is a flowchart illustrating a method for implementing robot expressions according to an embodiment;
FIG. 3 is a schematic flow chart of a robot expression implementation method in another embodiment;
FIG. 4 is a flowchart illustrating a method for implementing expression updating of a robot according to an embodiment;
FIG. 5 is a diagram of hardware architecture for implementing robot expressions in one embodiment;
FIG. 6 is a block diagram showing the construction of an apparatus for realizing a robot expression according to an embodiment;
FIG. 7 is a block diagram of a robot expression implementing apparatus according to another embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The robot expression implementation method provided by the application can be applied in the environment shown in fig. 1, where the terminal 102 communicates with the terminal 104 via a network. The terminal 102 acquires expression input information sent by the terminal 104 through a first interface; reads the element content corresponding to the expression input information from the expression set; calls an expression generating function through a second interface to generate the target expression corresponding to the element content; and outputs the target expression to the robot's display device by calling a third interface, which is used to connect the display device. The terminals 102 and 104 may each be, but are not limited to, a personal computer, laptop, smartphone, tablet, or portable wearable device.
In one embodiment, as shown in fig. 2, a method for implementing a robot expression is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
step 202, obtaining expression input information through a first interface.
The expression input information may be a command instructing the terminal to output an expression. Expressions include static expressions (e.g., smiling, sadness, crying) and dynamic expressions; the expression input information may likewise be static or dynamic. The expression input information may be a voice instruction, or may be generated by triggering an expression output button on the user interface. For the voice case, features can be extracted from preset speech by a speech recognition tool (e.g., Kaldi) and a recognition model trained from the extracted features; when a voice instruction is obtained, it is recognized by the model and the corresponding expression input information is obtained from a voice library. For the button case, for example, the number "1" on the user-interface keyboard may represent the instruction to output a smile expression, and the number "2" the input information of another expression. An interface here is a class or function that transmits or receives data between modules and processes it; the first interface is used to acquire the expression input information entered by the user.
Specifically, the terminal obtains the expression input information through the first interface via a communication interface. The communication modes supported by the communication interface may include serial communication, Controller Area Network (CAN) communication, network communication, Inter-Integrated Circuit (IIC) communication, and the like, where serial communication may include RS232 communication, RS485 communication, and so on.
And step 204, reading element contents corresponding to the expression input information from the expression set according to the expression input information.
The element content may include the number of an expression and the components of the expression. The number may be a combination of digits or letters, for example hexadecimal numbers: 03 represents the letter group "welcome", 04 a halo, 05 a smile, 08 music note 1, 0a a heart, and so on. An expression may be composed of different components; a smiling face, for instance, comprises an eye portion, a nose portion, a mouth portion, and so on. The element content may be stored as an array or a structure: in a one-dimensional array, each variable stores the number of one element content; in a two-dimensional array, the first dimension stores the expression numbers and the second dimension the components. The expression set contains the element content of different expressions, for example static expressions such as heart, halo, smile, and sadness, and dynamic expressions such as electrocardiogram, charging, and one blink. A structure is a set of an object's attributes; when the object is an expression, its attributes include the expression number, the size of the expression, and the like, and the structure may be defined as struct Emoticon { int num; };.
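The numbered-element storage described above can be sketched as a small lookup table in C. This is an illustrative assumption, not the patent's actual code: the field names, the 8x8 dot-matrix payload, and the sample bit patterns are invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical expression set: each entry pairs an expression number
   with its element content (here an 8x8 dot-matrix, one byte per row,
   one bit per pixel, 1 = lit). Data is illustrative only. */
typedef struct {
    uint8_t num;        /* expression number, e.g. 0x05 = smile */
    uint8_t rows[8];    /* element content: glyph bitmap */
} Emoticon;

static const Emoticon expression_set[] = {
    { 0x05, { 0x00, 0x24, 0x24, 0x00, 0x42, 0x3C, 0x00, 0x00 } }, /* smile */
    { 0x0A, { 0x00, 0x66, 0xFF, 0xFF, 0x7E, 0x3C, 0x18, 0x00 } }, /* heart */
};

/* Read the element content for an expression number; NULL if absent. */
const Emoticon *find_expression(uint8_t num) {
    for (size_t i = 0; i < sizeof expression_set / sizeof expression_set[0]; i++)
        if (expression_set[i].num == num)
            return &expression_set[i];
    return NULL;
}
```

A caller resolving the input information "05" would get back the smile entry, or NULL for an unknown number.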
Specifically, after the terminal acquires the expression input information through the first interface, it calls the interface corresponding to the expression instruction and reads the element content corresponding to the input information from the expression set through the function corresponding to that interface. When the information acquired through the first interface is static expression input information, the element content corresponding to the static expression input information is read from the expression set; when it is dynamic expression input information, the element content corresponding to the dynamic expression input information is read.
And step 206, calling an expression generating function through the second interface to generate a target expression corresponding to the element content.
Specifically, after obtaining the element content corresponding to the expression input information, the terminal calls the corresponding expression generating function through the second interface and generates the target expression corresponding to the element content by function processing; the expression generating function may build the target expression from the acquired element content by calling drawing (Graphics), bitmap (Bitmap), and image (Image) methods.
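The generation step can be sketched without any graphics library: the element content (a bitmap) is expanded into a displayable frame. This is a minimal stand-in for the Graphics/Bitmap/Image processing named above; the function name and the '#'/'.' rendering are assumptions for illustration.

```c
#include <stdint.h>

/* Sketch of an expression-generating function: expand an 8x8 element
   content bitmap into a frame buffer, one character per pixel
   ('#' = lit, '.' = dark). Real code would call drawing/bitmap APIs;
   this only shows the data flow. */
void generate_expression(const uint8_t rows[8], char out[8][9]) {
    for (int r = 0; r < 8; r++) {
        for (int c = 0; c < 8; c++)
            out[r][c] = ((rows[r] >> (7 - c)) & 1) ? '#' : '.';
        out[r][8] = '\0';   /* each row is a printable string */
    }
}
```

The resulting rows can be handed to whatever display routine the third interface wraps.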
And 208, outputting the target expression to the display equipment of the robot by calling a third interface, wherein the third interface is used for connecting the display equipment of the robot.
The display device is used for displaying a target expression, and the display device may include a Light Emitting Diode (LED) dot matrix module, a Liquid Crystal Display (LCD) screen, an Organic Light Emitting Diode (OLED) display screen, a touch display screen, and the like.
Specifically, displaying the target expression requires character-model (glyph) data, so the acquired target expression must undergo glyph extraction; the glyph is the code corresponding to the character shown on the dot matrix. Graphics or characters are stored as a character model in which each point occupies one bit: a bit of 0 means the pixel is not displayed, and 1 means it is displayed. The generated target expression can be a picture or a letter, and glyph extraction can be performed by processing the target expression with glyph-extraction software. After the target expression is generated, a display function is called through the interface connected to the display screen to show it; the display function may be, for example, pictureshow(). When the target expression corresponding to the element content is a dynamic expression, it is displayed at regular intervals by triggering a timer, for example at intervals of 1 s.
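The "one bit per point" storage above can be made concrete with a single helper. The MSB-first bit ordering is an assumption; hardware may scan the matrix either way.

```c
#include <stdint.h>

/* Test whether pixel (row, col) of an 8x8 glyph is lit, given one byte
   per row and one bit per pixel (1 = displayed, 0 = not displayed).
   Assumes the most significant bit is the leftmost column. */
int pixel_on(const uint8_t rows[8], int row, int col) {
    return (rows[row] >> (7 - col)) & 1;
}
```

A display driver would iterate this over the matrix to decide which LEDs to switch on.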
In one embodiment, the terminal presents the target expression on the robot's display device: the mouth and eyes of the robot are set as display areas using LED matrices, and the LED lamps of those areas are switched on and off by the drive circuitry to display the generated target expression. The displayed target expressions may be static or dynamic; displayed static expressions include smiles, question marks, letter displays, and the like, and displayed dynamic expressions include blinking, an animated heart, a halo, and the like.
In the robot expression implementation method above, expression input information is acquired through the first interface, and the corresponding element content is read from the expression set; an expression generating function is called through the second interface to generate the target expression corresponding to the element content; and the target expression is output by calling the third interface, which connects the robot's display device. Only the element content of the expressions to be realized is stored in the expression set in advance; for any acquired expression input information, the corresponding element content can be fetched from the set and the corresponding target expression generated and displayed on the robot's display device. The implementation process is simple, the method is easy to maintain, and development cost is reduced.
In one embodiment, before the target expression is displayed on the screen through the interface, distortion compensation is applied by a picture compensation function: the length and width of the generated target expression and of the screen's display area are obtained, and the expression is scaled for display in proportion to them. This avoids distortion of the target expression caused by display screens of different resolutions.
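The proportional scaling above can be sketched as choosing one uniform scale factor so the image fills the display area without stretching. The function name and integer arithmetic are assumptions for illustration.

```c
/* Fit a target expression of size (w, h) into a display area (dw, dh)
   with a single uniform scale factor, so the aspect ratio (and hence
   the expression's shape) is preserved. */
void fit_to_display(int w, int h, int dw, int dh, int *out_w, int *out_h) {
    /* compare dw/w vs dh/h without division: pick the smaller ratio */
    if ((long)w * dh <= (long)h * dw) {  /* height is the limiting side */
        *out_h = dh;
        *out_w = w * dh / h;
    } else {                             /* width is the limiting side */
        *out_w = dw;
        *out_h = h * dw / w;
    }
}
```

For example, a 16x16 expression on a 32x64 matrix scales to 32x32 rather than being stretched to 32x64.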
In one embodiment, before obtaining the expression input information through the first interface, the method further comprises:
element content is obtained by performing picture modulus (dot-matrix extraction) processing on a prepared expression picture, and an expression set is constructed according to the element content.
Specifically, the prepared picture may be a still picture, a moving picture, a video, or the like. Picture modulus processing converts the prepared picture using modulus (dot-matrix extraction) software; the picture may first be preprocessed to remove irrelevant information. The prepared picture is imported into the modulus software, a dot-matrix format is selected (e.g., 8-dot or 16-dot), and after parameter setting the character model is generated with one click; the resulting character-model data is the element content of the prepared picture. Preprocessing may include graying the image, applying geometric transformations to the grayed image, and image enhancement. Obtaining element content by modulus processing of prepared expression pictures and constructing the expression set from it makes it easy to add element content and rich expressions to the set, and to combine expressions and port them to different robots.
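The modulus (dot-matrix extraction) step performed by the software can be sketched for a binarized image: pack each row of 0/1 pixels into one byte. Image sizes and the function name are assumptions; real modulus tools also handle preprocessing and larger formats.

```c
#include <stdint.h>

/* Pack an 8x8 binary image (0/1 per pixel, already preprocessed and
   thresholded) into one byte per row, one bit per pixel: the element
   content stored in the expression set. Leftmost pixel becomes the MSB. */
void image_to_glyph(const uint8_t img[8][8], uint8_t rows[8]) {
    for (int r = 0; r < 8; r++) {
        rows[r] = 0;
        for (int c = 0; c < 8; c++)
            rows[r] = (uint8_t)((rows[r] << 1) | (img[r][c] & 1));
    }
}
```

Running this over each prepared picture yields the character-model data from which the expression set is built.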
In one embodiment, after reading the element content corresponding to the expression input information from the expression set, the method further includes: storing the element content in a preset cache region. Generating the target expression corresponding to the element content then includes: reading the element content from the cache region and generating the target expression corresponding to it.
Specifically, the cache region is memory for storing real-time data; the memory may include Random Access Memory (RAM), Read-Only Memory (ROM), flash memory, SRAM, and the like. The element content is stored in the preset cache region and later read back from it to generate the corresponding target expression; because the element content can be read quickly from the cache region, the performance and stability of the expression generation program are improved.
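The preset cache region can be sketched as a small ring buffer in RAM holding looked-up element contents until the generator consumes them. The slot count, glyph size, and function names are assumptions for illustration.

```c
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 16   /* assumed capacity of the cache region */
#define GLYPH_BYTES 8    /* assumed element-content size (8x8 glyph) */

static uint8_t cache[CACHE_SLOTS][GLYPH_BYTES];
static int cache_head = 0;   /* next free slot */
static int cache_tail = 0;   /* next slot to read */

/* Store one element content in the cache region; -1 if full. */
int cache_put(const uint8_t *glyph) {
    if ((cache_head + 1) % CACHE_SLOTS == cache_tail)
        return -1;
    memcpy(cache[cache_head], glyph, GLYPH_BYTES);
    cache_head = (cache_head + 1) % CACHE_SLOTS;
    return 0;
}

/* Read one element content back for expression generation; -1 if empty. */
int cache_get(uint8_t *out) {
    if (cache_tail == cache_head)
        return -1;
    memcpy(out, cache[cache_tail], GLYPH_BYTES);
    cache_tail = (cache_tail + 1) % CACHE_SLOTS;
    return 0;
}
```

The lookup path writes with cache_put() and the generator drains with cache_get(), which also suits the sequential timed readout used for dynamic expressions.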
In one embodiment, expression input information is acquired through the first interface, the corresponding element content is read from the expression set, an expression generating function is called through the second interface to generate the target expression, and the third interface is loaded so that the generated target expression is displayed on the robot's display device by calling the corresponding expression display function. Because acquiring the input information, generating the target expression, and displaying it are each realized by loading an interface, the program is easy to extend and maintain and its development cost is reduced.
In one embodiment, as shown in fig. 3, another method for implementing a robot expression is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
step 302, obtaining the expression input information through the first interface.
Specifically, the expression input information is acquired through the first interface, where the communication mode supported by the communication interface may be serial communication, including RS232 communication, RS485 communication, and the like. The expression input information is transmitted in the format of a communication protocol, which may be the RS232 protocol, the RS485 protocol, the Modbus protocol, a user-defined protocol, and so on.
And step 304, analyzing the expression input information to obtain an expression identifier.
The expression identifiers identify different expressions; each expression corresponds to a unique identifier, which may be digits or letters. For example, 03 represents the letter group "welcome", 04 a halo, 05 a smile, 08 music note 1, 0a a heart, and so on.
Specifically, based on the communication protocol, when the data frame corresponding to the expression input information is acquired through the first interface, the value at the frame's data-register address is read, and the expression identifier is obtained from that value.
And step 306, reading element content corresponding to the expression identifier from the expression set according to the expression identifier.
Wherein, the expression mark and the element content are in one-to-one correspondence.
And 308, storing the element content in a preset cache region, reading the element content from the cache region, and calling an expression generating function through a second interface to generate a target expression corresponding to the element content.
And step 310, outputting the target expression to the display device of the robot by calling a third interface.
In the robot expression implementation method above, expression input information is acquired through the first interface based on a communication protocol and parsed to obtain the expression identifier; the element content corresponding to the identifier is read from the expression set and stored in a preset cache region; the element content is then read from the cache region, the corresponding target expression is generated, and the target expression is output to the robot's display device. Because the element content can be read directly from the cache region, its read speed increases and the robot's expression display performance improves.
In one embodiment, before obtaining the expression input information through the first interface, the method further comprises:
element content is obtained by performing picture modulus (dot-matrix extraction) processing on a prepared expression picture, and an expression set is constructed according to the element content.
In one embodiment, the method for generating the target expression corresponding to the element content by calling the expression generating function through the second interface includes: sequentially reading element contents from the cache region according to a preset time interval; and calling an expression generating function through the second interface to generate a target expression corresponding to the element content.
Specifically, when the expression input information is a dynamic-expression generation instruction, a timer is triggered and the element contents are read from the cache region sequentially at a preset time interval, generating the target expression for the element content read in each interval. For example, with a preset interval of 1 s and the target expression "welcome", the element contents of "welcome" are read in sequence and w, e, l, c, o, m, e are generated one after another. The element content may be stored as an array and read sequentially from the cache region at the preset interval, with the expression generating function called through the second interface to generate each target expression; in this way dynamic expressions can be displayed and emotions expressed more vividly.
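The timer-driven readout above can be sketched as a stepping function invoked once per tick. The timer itself is hardware-specific, so only the sequencing logic is shown; the type and function names are assumptions.

```c
#include <stddef.h>

/* State of a dynamic expression: a sequence of frames (here the letters
   of "welcome") and an index advanced on every timer tick. */
typedef struct {
    const char *frames;   /* cached element contents, one per frame */
    size_t count;
    size_t next;
} DynamicExpression;

/* Called from the timer callback, e.g. every 1 s: returns the next
   frame to generate and display, wrapping so the animation repeats. */
char dynamic_tick(DynamicExpression *e) {
    char frame = e->frames[e->next];
    e->next = (e->next + 1) % e->count;
    return frame;
}
```

Each returned frame would be passed through the second interface's expression generating function and then displayed.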
In one embodiment, as shown in fig. 4, a method for implementing robot expression updating is provided, which is described taking its application to the terminal 102 in fig. 1 as an example and includes the following steps:
Step 402, obtaining an expression updating instruction through the first interface.
The expression updating instruction indicates a command for updating the displayed expression.
Step 404, reading element content corresponding to the expression updating instruction from the expression set according to the expression updating instruction.
Specifically, after the expression updating instruction is obtained, the interface corresponding to the instruction is called, and the element content corresponding to the instruction is read from the expression set through the function associated with that interface. When the obtained instruction is a static expression updating instruction, the element content corresponding to the static update is read in this way; when it is a dynamic expression updating instruction, the element content corresponding to the dynamic update is read in the same way.
Step 406, calling an expression generating function through the second interface to generate an updated expression corresponding to the element content.
Specifically, after the element content corresponding to the expression updating instruction is acquired, the corresponding expression generating function is called through the interface, and the updated expression corresponding to the element content is generated by that function.
Step 408, outputting the updated expression to the display device of the robot by calling the third interface.
In this robot expression updating method, an expression updating instruction is acquired, the element content corresponding to the instruction is read from the expression set, the updated expression corresponding to that element content is generated, and the updated expression is output to the display device of the robot, so that the displayed expression can be updated according to the expression updating instruction.
In one embodiment, the expression updating instruction is generated when a timer of the robot is triggered or when trigger information input through an input device is received.
Timer triggering may mean updating the expression at a preset time point or at a preset time interval; for example, the expression may be preset to be updated once at 8 am and once at 12 o'clock, or refreshed every hour. When the expression updating instruction is generated from trigger information input through the input device, the input device may provide several keys: for example, key 1 represents a smiling face and key 2 represents crying, so that pressing key 1 generates an instruction to display the smiling face and pressing key 2 generates an instruction to display crying.
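The two trigger sources can be sketched as simple lookup tables; the key numbers and expression names follow the example above, while the scheduled time points are illustrative assumptions.

```python
# Sketch of the two trigger sources: a timer firing at preset hours and
# key presses on the input device. Hours and key mappings are illustrative.

KEY_EXPRESSIONS = {1: "smile", 2: "cry"}        # key 1 -> smiling face, key 2 -> crying
SCHEDULED_UPDATES = {8: "smile", 12: "smile"}   # hypothetical 8 o'clock / 12 o'clock updates

def update_instruction_from_key(key):
    """Map an input-device key press to an expression updating instruction."""
    return KEY_EXPRESSIONS.get(key)

def update_instruction_from_timer(hour):
    """Map a timer trigger at a preset time point to an updating instruction."""
    return SCHEDULED_UPDATES.get(hour)
```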
In one embodiment, the robot can capture the user's expression through face recognition, extract the features of that expression, and, after image processing, display the captured facial expression on the display device of the robot.
The following describes the robot expression updating method taking a human-computer interaction scene as an example.
In one embodiment, a visual sensor with a camera function is installed in the robot. The sensor captures the user's actions, expression features of the user are extracted, a corresponding expression updating instruction is obtained from those features, element content corresponding to the instruction is read from the expression set, an updated expression corresponding to the element content is generated, and the updated expression is output to the display device of the robot. For example, blinking once may trigger a smiling expression and blinking twice a fainting expression, while showing two fingers in a gesture may trigger a dynamic smiling expression and three fingers a dynamic "hi" character, and so on.
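The gesture-to-instruction mapping in this example can be sketched as follows; the feature extraction itself (visual sensor, recognition) is out of scope, so already-extracted blink and finger counts are taken as inputs, and the instruction names are illustrative.

```python
# Sketch of mapping extracted user features to expression updating instructions,
# following the example: blink counts and finger counts select the expression.

BLINK_EXPRESSIONS = {1: "smile", 2: "faint"}
FINGER_EXPRESSIONS = {2: "smile_dynamic", 3: "hi_dynamic"}

def instruction_from_features(blinks=0, fingers=0):
    """Return the expression updating instruction for the extracted features;
    a gesture (finger count) takes precedence over blinking here."""
    if fingers:
        return FINGER_EXPRESSIONS.get(fingers)
    return BLINK_EXPRESSIONS.get(blinks)
```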
It should be understood that although the steps in the flow charts of figs. 2-4 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a robot expression implementation system logic diagram 500, comprising: an input device 502, a display device 504, a memory 506 and a micro-processing unit 508; the micro-processing unit 508 includes a reading module 510, an expression generation module 512 and a display interface module 514, wherein:
the input device 502 is used for inputting information; the input information may be expression input information or an expression updating instruction.
The display device 504 is used for displaying the generated target expression.
The memory 506 is used for storing the expression set.
The reading module 510 is configured to read the corresponding element content from the expression set according to the obtained expression input information or expression updating instruction.
The expression generation module 512 is configured to generate the target expression corresponding to the read element content.
The display interface module 514 is used for connecting a display device.
In the robot expression implementation system of logic diagram 500, expression input information or an expression updating instruction is entered through the input device 502; the reading module 510 reads the corresponding element content from the memory 506; the expression generation module 512 generates the target expression corresponding to that element content; and the display interface module 514, connected to the display device 504, displays the target expression.
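The flow of logic diagram 500 can be sketched by wiring the modules together; all class, method and data names are illustrative stand-ins for the numbered modules.

```python
# Sketch of the fig. 5 flow: input -> reading module -> expression generation
# module -> display interface -> display device.

class RobotExpressionSystem:
    def __init__(self, expression_set, display_device):
        self.memory = expression_set          # memory 506 storing the expression set
        self.display_device = display_device  # display device 504 (a callback here)

    def read(self, instruction):
        """Reading module 510: fetch element content for the instruction."""
        return self.memory[instruction]

    def generate(self, element_content):
        """Expression generation module 512: build the target expression."""
        return {"expression": element_content}

    def handle_input(self, instruction):
        """Input device 502 feeds this; display interface module 514 outputs."""
        content = self.read(instruction)
        target = self.generate(content)
        self.display_device(target)

screen = []
system = RobotExpressionSystem({"smile": [6, 9, 9, 6]}, screen.append)
system.handle_input("smile")
```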
In one embodiment, as shown in fig. 6, there is provided a robot expression implementation apparatus 600, including: an obtaining module 602, a reading module 604, an expression generating module 606 and an output module 608, wherein:
the obtaining module 602 is configured to obtain the expression input information through the first interface.
In one embodiment, the obtaining module 602 is further configured to obtain an expression updating instruction.
The reading module 604 is configured to read element content corresponding to the expression input information from the expression set according to the expression input information.
In one embodiment, the reading module 604 is further configured to read element content corresponding to the expression update instruction from the expression set according to the expression update instruction.
In one embodiment, the reading module 604 is further configured to sequentially read the element contents from the buffer according to a preset time interval.
The expression generating module 606 is configured to call an expression generating function through the second interface to generate a target expression corresponding to the element content.
In one embodiment, the expression generating module 606 is further configured to call the expression generating function through the second interface to generate an updated expression corresponding to the element content.
The output module 608 is configured to output the target expression to the display device of the robot by calling the third interface.
In one embodiment, the output module 608 is further configured to output the updated expression to the display device of the robot by calling the third interface.
In the robot expression implementation device, expression input information is acquired through a first interface, and the element content corresponding to it is read from the expression set; an expression generating function is called through a second interface to generate the target expression corresponding to the element content; and the target expression is output by calling a third interface, which connects the display device of the robot. Only the element content corresponding to the expressions to be realized needs to be stored in the expression set in advance; the corresponding element content is then fetched according to the acquired expression input information, so that the corresponding expression can be shown on the display device of the robot. Because acquisition of expression input information, generation of the target expression and display of the target expression are separated into function modules reached through the first, second and third interfaces, each module can be improved and perfected independently, which eases maintenance and reduces development cost.
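The separation into three interfaces can be sketched as three independent callables, so that each function module can be replaced or improved on its own; all names and values below are illustrative.

```python
# Sketch of the three-interface separation: acquisition (first interface),
# generation (second interface) and display (third interface) are independent
# callables, so improving one does not touch the others.

def first_interface(raw_input):
    """Acquire expression input information (here: normalize an identifier)."""
    return raw_input.strip().lower()

def second_interface(expression_id, expression_set):
    """Generate the target expression: look up the element content in the set."""
    return expression_set[expression_id]

def third_interface(target_expression, display_device):
    """Connect the robot's display device and output the target expression."""
    display_device.append(target_expression)

expression_set = {"smile": [6, 9, 9, 6]}  # hypothetical dot-matrix rows
display_device = []
third_interface(second_interface(first_interface(" Smile "), expression_set),
                display_device)
```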
In another embodiment, as shown in fig. 7, there is provided a robot expression implementation apparatus 600 which, in addition to the obtaining module 602, the reading module 604, the expression generating module 606 and the output module 608, further includes: a dot-matrix extraction module 610, an analyzing module 612 and a storage module 614, wherein:
and the module taking processing module 610 is configured to obtain element content by performing image module taking processing on a pre-prepared expression picture, and construct an expression set according to the element content.
And the analyzing module 612 is configured to analyze the expression input information, obtain an expression identifier, and read element content corresponding to the expression identifier from the expression set according to the expression identifier.
The storage module 614 is configured to store the element content in a preset buffer.
In one embodiment, expression input information is acquired through the first interface and analyzed to obtain an expression identifier; the element content corresponding to the expression identifier is read from the expression set and stored in a preset cache region; the element content is then read from the cache region, an expression generating function is called through the second interface to generate the corresponding target expression, and the target expression is output to the display device of the robot by calling the third interface. The terminal may also acquire an expression updating instruction through the first interface, read the corresponding element content from the expression set, call the expression generating function through the second interface to generate the updated expression, and output it to the display device of the robot by calling the third interface. By obtaining updating instructions in this way, the displayed expression can be updated in real time.
For specific limitations of the robot expression implementation apparatus, reference may be made to the above limitations of the robot expression implementation method, which are not repeated here. All or part of the modules in the robot expression implementation apparatus can be implemented by software, hardware or a combination thereof. The modules can be embedded in hardware in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can call them and execute the corresponding operations.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a robotic expression implementation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from the expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to the display equipment of the robot by calling a third interface, wherein the third interface is used for connecting the display equipment of the robot.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
element content is obtained by performing dot-matrix extraction processing on pre-prepared expression pictures, and an expression set is constructed according to the element content.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
analyzing the expression input information to obtain an expression identifier;
and reading element content corresponding to the expression identifier from the expression set according to the expression identifier.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
storing the element content in a preset cache region;
wherein the calling of an expression generating function through the second interface to generate the target expression corresponding to the element content includes:
reading the element content from the cache region, and calling the expression generating function through the second interface to generate the target expression corresponding to the element content.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
sequentially reading element contents from the cache region according to a preset time interval;
and calling an expression generating function through the second interface to generate a target expression corresponding to the element content.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring an expression updating instruction through a first interface;
reading element content corresponding to the expression updating instruction from the expression set according to the expression updating instruction;
calling an expression generating function through a second interface to generate an updated expression corresponding to the element content;
and outputting the updated expression to the display device of the robot by calling the third interface.
In one embodiment, the processor, when executing the computer program, further implements the following:
the expression updating instruction is generated when a timer of the robot is triggered or when trigger information input through an input device is received.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from the expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to the display equipment of the robot by calling a third interface, wherein the third interface is used for connecting the display equipment of the robot.
In one embodiment, the computer program when executed by the processor further performs the steps of:
element content is obtained by performing dot-matrix extraction processing on pre-prepared expression pictures, and an expression set is constructed according to the element content.
In one embodiment, the computer program when executed by the processor further performs the steps of:
analyzing the expression input information to obtain an expression identifier;
and reading element content corresponding to the expression identifier from the expression set according to the expression identifier.
In one embodiment, the computer program when executed by the processor further performs the steps of:
storing the element content in a preset cache region;
wherein the calling of an expression generating function through the second interface to generate the target expression corresponding to the element content includes:
reading the element content from the cache region, and calling the expression generating function through the second interface to generate the target expression corresponding to the element content.
In one embodiment, the computer program when executed by the processor further performs the steps of:
sequentially reading element contents from the cache region according to a preset time interval;
and calling an expression generating function through the second interface to generate a target expression corresponding to the element content.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring an expression updating instruction through a first interface;
reading element content corresponding to the expression updating instruction from the expression set according to the expression updating instruction;
calling an expression generating function through a second interface to generate an updated expression corresponding to the element content;
and outputting the updated expression to the display device of the robot by calling the third interface.
In one embodiment, the computer program when executed by the processor further implements the following:
the expression updating instruction is generated when a timer of the robot is triggered or when trigger information input through an input device is received.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments above. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily; for brevity, not all possible combinations are described, but as long as there is no contradiction between the combined features, they should be considered within the scope of this specification.
The above examples express only several embodiments of the present application and are described in relatively specific detail, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A robot expression implementation method is characterized by comprising the following steps:
acquiring expression input information through a first interface;
reading element content corresponding to the expression input information from an expression set according to the expression input information;
calling an expression generating function through a second interface to generate a target expression corresponding to the element content;
and outputting the target expression to a display device of the robot by calling a third interface, wherein the third interface is used for connecting the display device of the robot.
2. The method of claim 1, wherein before the obtaining the expression input information through the first interface, the method further comprises:
obtaining element content by performing dot-matrix extraction processing on a pre-prepared expression picture, and constructing an expression set according to the element content.
3. The method of claim 1, wherein reading element contents corresponding to the expression input information from an expression set according to the expression input information comprises:
analyzing the expression input information to obtain an expression identifier;
and reading element content corresponding to the expression identifier from an expression set according to the expression identifier.
4. The method according to claim 1, wherein after the reading of element content corresponding to the expression input information from the expression set, the method further comprises:
storing the element content in a preset cache region;
the step of calling an expression generating function through a second interface to generate a target expression corresponding to the element content includes:
and reading the element content from the cache region, and calling an expression generating function through a second interface to generate a target expression corresponding to the element content.
5. The method of claim 4, wherein the expression input information is a dynamic expression generation instruction, and the reading of the element content from the cache region and the calling of an expression generating function through a second interface to generate a target expression corresponding to the element content comprise:
sequentially reading element contents from the cache region according to a preset time interval;
and calling an expression generating function through a second interface to generate a target expression corresponding to the element content.
6. The method of claim 1, further comprising:
acquiring an expression updating instruction through a first interface;
reading element content corresponding to the expression updating instruction from the expression set according to the expression updating instruction;
calling an expression generating function through a second interface to generate an updated expression corresponding to the element content;
and outputting the updated expression to a display device of the robot by calling a third interface.
7. The method of claim 6, wherein the expression updating instruction is generated when a timer of the robot is triggered or when trigger information input through an input device is received.
8. A robotic expression-implementing device, the device comprising:
the obtaining module is used for obtaining the expression input information through the first interface;
the reading module is used for reading element content corresponding to the expression input information from an expression set according to the expression input information;
the expression generation module is used for calling an expression generation function through a second interface to generate a target expression corresponding to the element content;
and the output module is used for outputting the target expression to the display equipment of the robot by calling a third interface, and the third interface is used for connecting the display equipment of the robot.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911029797.7A 2019-10-28 2019-10-28 Robot expression implementation method and device, computer equipment and storage medium Pending CN110815238A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911029797.7A CN110815238A (en) 2019-10-28 2019-10-28 Robot expression implementation method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN110815238A true CN110815238A (en) 2020-02-21

Family

ID=69550697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911029797.7A Pending CN110815238A (en) 2019-10-28 2019-10-28 Robot expression implementation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110815238A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105690407A (en) * 2016-04-27 2016-06-22 深圳前海勇艺达机器人有限公司 Intelligent robot with expression display function
CN105965513A (en) * 2016-04-15 2016-09-28 青岛克路德机器人有限公司 Implementation system for robot facial expressions
CN106448589A (en) * 2016-10-11 2017-02-22 塔米智能科技(北京)有限公司 Robot expression system based on double LCD (liquid crystal display) color screens
CN107116563A (en) * 2017-06-22 2017-09-01 国家康复辅具研究中心 Pet type robot and robot control system
CN107438503A (en) * 2017-04-13 2017-12-05 深圳市艾唯尔科技有限公司 Single master chip can realize the method for control Multiple-shower output
CN109773807A (en) * 2019-03-04 2019-05-21 昆山塔米机器人有限公司 Motion control method, robot
US20190248001A1 (en) * 2018-02-13 2019-08-15 Casio Computer Co., Ltd. Conversation output system, conversation output method, and non-transitory recording medium
KR20190116190A (en) * 2019-09-23 2019-10-14 엘지전자 주식회사 Robot



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221