CN110502110B - Method and device for generating feedback information of an interactive application program

Info

Publication number
CN110502110B
CN110502110B (application number CN201910726715.8A)
Authority
CN
China
Prior art keywords
interactive application
user
dimensional
application program
facial
Prior art date
Legal status
Active
Application number
CN201910726715.8A
Other languages
Chinese (zh)
Other versions
CN110502110A (en)
Inventor
马坤 (Ma Kun)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910726715.8A priority Critical patent/CN110502110B/en
Publication of CN110502110A publication Critical patent/CN110502110A/en
Application granted granted Critical
Publication of CN110502110B publication Critical patent/CN110502110B/en


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 - Input arrangements for video game devices
    • A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 - Input arrangements characterised by their sensors comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/55 - Controlling game characters or game objects based on the game progress
    • A63F 13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 - Strategy games; Role-playing games
    • A63F 2250/00 - Miscellaneous game characteristics
    • A63F 2250/30 - Miscellaneous game characteristics with a three-dimensional image
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 - Features of games characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1087 - Input arrangements comprising photodetecting means, e.g. a camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/174 - Facial expression recognition
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a method and a device for generating feedback information for an interactive application program, enabling a user who has downloaded or purchased the interactive application to feed back the experience of using it accurately. The method comprises the following steps: running the interactive application based on an input instruction of the user; when it is determined that the interactive application has run to any set node, invoking an image capturing device to capture facial images of the user and generate a corresponding facial image set; and generating, based on the facial image set, feedback information describing the user's feedback on the interactive application. In this way, a user who has downloaded or purchased an interactive application can show, more intuitively, how the user felt while playing it.

Description

Method and device for generating feedback information of interactive application program
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and a device for generating feedback information of an interactive application program.
Background
With the development of science and technology, interactive applications with multiple branching storylines have become increasingly popular, for example, interactive games that narrate the plot through text, interactive games that narrate the plot through comics, and interactive games that use a film script as the game plot.
Because interactive applications are so popular, users increasingly want to consult the comments of users who have already downloaded or purchased an interactive application before downloading or purchasing it from an application market. However, many users have limited powers of verbal expression and cannot describe their experience of the interactive application accurately enough; or, by the time a user comments on the interactive application, too long an interval has passed since the user last played it, the user has forgotten the experience of playing it for the first time, and the experience therefore cannot be expressed accurately. As a result, users who have not yet downloaded or purchased the interactive application find it difficult to grasp what using it is really like.
In view of the foregoing, there is a need to design a new method and apparatus for generating feedback information of an interactive application program to overcome the above-mentioned drawbacks.
Disclosure of Invention
The disclosure provides a method and a device for generating feedback information of an interactive application program, which enable a user who has downloaded or purchased the interactive application to feed back the experience of using it accurately. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a method for generating feedback information of an interactive application, including:
running an interactive application program based on an input instruction of a user, wherein a plurality of set nodes are configured in the interactive application program;
when the interactive application program runs to any set node, the image capturing device is called to capture facial images of a user, and a corresponding facial image set is generated;
and generating feedback information of the user to the interactive application program based on the facial image set.
Optionally, when the interactive application program runs to any set node, invoking the image capturing device to capture a facial image of the user, and generating a corresponding facial image set, including:
when it is determined that the interactive application has run to any set node, calling the image capturing device on the terminal to shoot a plurality of facial images of the user within a preset duration;
and deleting, from the plurality of facial images, the facial images showing no change in expression, and determining the remaining facial images as the user's facial image set.
Optionally, generating feedback information of the user to the interactive application program based on the face image set includes:
inputting the facial image set into a preset three-dimensional face model to generate a corresponding three-dimensional face expression;
synthesizing each obtained three-dimensional facial expression into a target three-dimensional facial expression;
and outputting the target three-dimensional facial expression as feedback information of the user to the interactive application program.
Optionally, inputting the face image set into a preset three-dimensional face model, and generating a corresponding three-dimensional face expression includes:
the following operations are performed for each face image in the face image set, respectively:
determining the coordinate positions of face key points of a face image, and generating a corresponding face key point set;
and mapping the coordinate position of each face key point in the face key point set to the corresponding coordinate position in the three-dimensional face model respectively to generate a corresponding three-dimensional face expression.
Optionally, synthesizing the obtained three-dimensional facial expressions into a target three-dimensional facial expression, including:
acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions respectively;
arranging the three-dimensional facial expressions according to the preset sequence numbers;
and splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
Optionally, synthesizing the obtained three-dimensional facial expressions into a target three-dimensional facial expression, including:
sending each three-dimensional facial expression to a server, and triggering the server to synthesize each three-dimensional facial expression;
and receiving the target three-dimensional facial expression returned by the server.
Optionally, after generating the feedback information of the user to the interactive application program based on the face image set, the method further includes:
and sending the target three-dimensional facial expression to an evaluation area of an application market, for display as the user's evaluation of the interactive application.
Optionally, after generating the feedback information of the user to the interactive application program based on the face image set, the method further includes:
and storing the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for generating feedback information of an interactive application, including:
a setting unit configured to run an interactive application program based on an input instruction of a user, wherein a plurality of set nodes are configured in the interactive application program;
an acquisition unit configured to invoke an image capturing device to capture a facial image of a user when the interactive application program runs to any one of the set nodes, and generate a corresponding facial image set;
and the generating unit is configured to generate feedback information of the user to the interactive application program based on the face image set.
Optionally, when the interactive application program runs to any one of the set nodes, the image capturing device is invoked to capture facial images of the user, and a corresponding facial image set is generated, and the acquisition unit is configured to:
when it is determined that the interactive application has run to any set node, calling the image capturing device on the terminal to shoot a plurality of facial images of the user within a preset duration;
and deleting, from the plurality of facial images, the facial images showing no change in expression, and determining the remaining facial images as the user's facial image set.
Optionally, based on the face image set, feedback information of the user to the interactive application is generated, and the generating unit is configured to:
inputting the facial image set into a preset three-dimensional face model to generate a corresponding three-dimensional face expression;
synthesizing each obtained three-dimensional facial expression into a target three-dimensional facial expression;
and outputting the target three-dimensional facial expression as feedback information of the user to the interactive application program.
Optionally, the facial image set is input into a preset three-dimensional face model to generate a corresponding three-dimensional face expression, and the generating unit is configured to:
the following operations are performed for each face image in the face image set, respectively:
determining the coordinate positions of face key points of a face image, and generating a corresponding face key point set;
and mapping the coordinate position of each face key point in the face key point set to the corresponding coordinate position in the three-dimensional face model respectively to generate a corresponding three-dimensional face expression.
Optionally, the obtained three-dimensional facial expressions are synthesized into a target three-dimensional facial expression, and the generating unit is configured to:
acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions respectively;
arranging the three-dimensional facial expressions according to the preset sequence numbers;
and splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
Optionally, the obtained three-dimensional facial expressions are synthesized into a target three-dimensional facial expression, and the generating unit is configured to:
sending each three-dimensional facial expression to a server, and triggering the server to synthesize each three-dimensional facial expression;
and receiving the target three-dimensional facial expression returned by the server.
Optionally, after generating the feedback information of the user to the interactive application based on the face image set, the generating unit is further configured to:
and sending the target three-dimensional facial expression to an evaluation area of an application market, for display as the user's evaluation of the interactive application.
Optionally, after generating the feedback information of the user to the interactive application based on the face image set, the generating unit is further configured to:
and storing the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a memory for storing executable instructions;
and a processor for reading and executing the executable instructions stored in the memory to implement any one of the methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, which when executed by a processor, enables the steps of any one of the methods described above to be performed.
The technical scheme provided by the embodiments of the disclosure brings at least the following beneficial effects:
In the embodiments of the disclosure, an interactive application is run based on an input instruction of a user; when the interactive application runs to any set node, an image capturing device is invoked to capture facial images of the user and generate a corresponding facial image set, and feedback information describing the user's feedback on the interactive application is generated based on the facial image set. In this way, the facial images of a user who has downloaded or purchased the interactive application are recorded accurately at each set node the user plays through, and combining the facial images displays, more vividly and intuitively, how the user felt while playing the interactive application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flow diagram illustrating a process for generating interactive application feedback information according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a set node X according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a three-dimensional facial expression characterizing a user's happiness, according to an example embodiment.
Fig. 4 is a diagram illustrating a three-dimensional facial expression characterizing a user confusion, according to an example embodiment.
Fig. 5 is a block diagram illustrating an apparatus for generating interactive application feedback information according to an exemplary embodiment.
Fig. 6 is a schematic diagram of an electronic device according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In order to enable a user who has downloaded or purchased an interactive application to feed back the experience of using it accurately, an embodiment of the present disclosure provides the following solution: run the interactive application based on an input instruction of the user; when it is determined that the interactive application has run to any set node, invoke the image capturing device to capture facial images of the user and generate a corresponding facial image set; and generate, based on the facial image set, feedback information describing the user's feedback on the interactive application.
The preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, in the embodiment of the disclosure, a detailed process of generating feedback information of an interactive application program is as follows:
s101, running an interactive application program based on an input instruction of a user, wherein a plurality of setting nodes are arranged in the interactive application program.
The user downloads or purchases the interactive application in the application market of the intelligent terminal, and stores the interactive application on the intelligent terminal. A plurality of key scenario points are arranged in the interactive application program, when a user sees the key scenario points, the user can make corresponding facial expressions, for example, when the user sees a certain terrorist scenario in a thrilling game, the user can show terrorist and fear facial expressions; when a user sees a certain emotion in a temperament game, the user can cry. If the key scenario points are set as the set nodes, all facial expression data of the key scenario points seen by the user may not be obtained in time, so that the embodiment of the application sets a certain time point before the key scenario points as the set nodes, thereby being convenient for obtaining all facial expression data of the key scenario points seen by the user.
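The following is a minimal sketch of this node placement, assuming a fixed five-second lead time; the names (KeyScenarioPoint, SetNode, LEAD_SECONDS, build_set_nodes) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass

LEAD_SECONDS = 5.0  # assumed lead time before each key scenario point

@dataclass
class KeyScenarioPoint:
    seq_no: int         # unique serial number of the key scenario point
    timestamp_s: float  # when the scenario point occurs in the storyline

@dataclass
class SetNode:
    seq_no: int         # carries the scenario point's serial number
    trigger_s: float    # capture starts here, before the scenario point

def build_set_nodes(scenario_points):
    """Derive one set node per key scenario point, LEAD_SECONDS earlier,
    so the camera is already recording when the user reacts."""
    return [SetNode(p.seq_no, max(0.0, p.timestamp_s - LEAD_SECONDS))
            for p in scenario_points]
```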
S102, when the interactive application program runs to any set node, the image capturing device is called to capture facial images of the user, and a corresponding facial image set is generated.
Taking any one of the set nodes (hereinafter referred to as node X) as an example, the procedure for acquiring the user's facial image set at node X is as follows:
Firstly, it is determined that the interactive application has run to node X, and the image capturing device on the terminal is invoked to shoot a plurality of facial images of the user within a preset duration.
As shown in fig. 2, node X refers to a time point shortly before a preset key scenario point, for example, 5 seconds before a terrorist appears in the room, 6 seconds before a letter is read, or 5 seconds before the lamp in the room is turned on.
For example, assuming that node X is 5 seconds before the terrorist appears in the room, when the interactive thriller game runs to that point, the intelligent terminal invokes the front-facing camera to take a plurality of pictures from 5 seconds before the terrorist appears until the moment the terrorist appears, each picture recording the user's facial expression at that moment.
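As a hedged sketch, the capture window can be driven by an OpenCV-style camera loop; cv2.VideoCapture and cam.read are real OpenCV calls, while the five-second duration and the frame rate below are just the example values from this scenario:

```python
import time
import cv2  # OpenCV

def capture_window(duration_s=5.0, fps=10):
    """Shoot facial images for the preset duration once node X fires."""
    cam = cv2.VideoCapture(0)  # index 0: front-facing camera (assumed)
    frames = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        ok, frame = cam.read()
        if ok:
            frames.append(frame)   # one facial image per read
        time.sleep(1.0 / fps)
    cam.release()
    return frames
```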
Then, the facial images showing no change in expression are deleted from the plurality of facial images.
Since node X is placed before the moment the terrorist appears in the room, the image capturing device captures not only the terrified expression when the user sees the terrorist appear, but also the user's unchanged expression beforehand; these facial images showing no change in expression therefore need to be deleted before the three-dimensional facial expression is generated.
Finally, the remaining facial images are determined as the facial image set.
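One way to implement this filter, as a sketch: compare each frame's facial landmarks against the first (neutral) frame and keep only frames that moved. The detector hook (detect_landmarks) and the pixel threshold are assumptions for illustration, not part of the patent:

```python
import numpy as np

MOTION_THRESHOLD = 2.0  # mean per-landmark displacement in pixels (assumed)

def filter_expression_frames(frames, detect_landmarks):
    """Drop frames whose landmarks barely move relative to the first,
    neutral frame, keeping only frames with an expression change."""
    baseline = detect_landmarks(frames[0])  # (N, 2) array of key points
    kept = []
    for frame in frames[1:]:
        landmarks = detect_landmarks(frame)
        displacement = np.linalg.norm(landmarks - baseline, axis=1).mean()
        if displacement > MOTION_THRESHOLD:
            kept.append(frame)
    return kept  # the facial image set for node X
```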
And S103, generating feedback information of the user to the interactive application program based on the face image set.
Specifically, when step S103 is performed, the following operations may be included:
firstly, inputting the obtained facial image set into a preset three-dimensional facial model, and generating a corresponding three-dimensional facial expression.
Specifically, an existing three-dimensional face model, such as Animoji or 3DMax, may be adopted. After the facial images in the facial image set are input into the three-dimensional face model in turn, the following operations are performed for each to generate the corresponding three-dimensional facial expression; one facial image is taken as an example:
First, the coordinate positions of the face key points of the facial image are determined, and a corresponding face key point set is generated. The face key points are the points most affected by expression, such as the left and right lip corners, the midpoints of the upper and lower lips, and the heads, tails, and midpoints of the eyebrows; the generated face key point set can accurately describe the user's expression on seeing a key scenario point;
then, the coordinate position of each face key point in the face key point set is mapped to the corresponding coordinate position in the three-dimensional face model to generate the corresponding three-dimensional facial expression.
The material library of the three-dimensional face model contains not only an average face model but also various cartoon animal face models. The average face model is obtained by averaging face templates of various types (such as round, square, long, and pointed faces), and the user may freely choose any face model in the library, so the same facial image set can generate three-dimensional facial expressions for several different face types. In the embodiment of the disclosure, the expression is mapped onto a face model from the material library rather than onto a model of the user's own face, which protects the user's privacy while still presenting the user's expression at the key scenario points, and adds interest.
And secondly, synthesizing each obtained three-dimensional facial expression into a target three-dimensional facial expression.
Specifically, the process of synthesizing the target three-dimensional facial expression in the interactive application program is as follows:
a1, acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions.
Each key scenario point in the interactive application is configured with a unique serial number, so each set node can be configured with the serial number of its corresponding key scenario point; the nodes can also be identified in other ways.
For example, 10 key scenario points are set in the interactive game, key scenario points 1-10 are marked with serial numbers 1-10 in turn, and nodes 1-10 can likewise be marked with serial numbers 1-10.
For another example, 10 key scenario points are set in the interactive game, key scenario points 1-10 are marked with capital letters A-J in turn, and nodes 1-10 are marked with lowercase letters a-j.
A2, arranging the three-dimensional facial expressions according to the preset sequence numbers.
And A3, splicing the arranged three-dimensional facial expressions to generate a target three-dimensional facial expression.
In the embodiment of the disclosure, the arranged three-dimensional facial expressions can be spliced into the target three-dimensional facial expression locally, or the target three-dimensional facial expression can be synthesized on a server: the arranged three-dimensional facial expressions are sent to the server, the server is triggered to synthesize them, and the target three-dimensional facial expression returned by the server is then received.
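Steps A1-A3 amount to a sort-then-concatenate, sketched below for the local path (the server path would run the same synthesis remotely); ExpressionClip and frames are illustrative names, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class ExpressionClip:
    node_seq_no: int  # preset serial number of the set node (A1)
    frames: list      # posed 3D meshes for this node's expression

def synthesize_target_expression(clips):
    """Arrange clips by node serial number (A2) and splice them (A3)
    into one target three-dimensional facial expression sequence."""
    ordered = sorted(clips, key=lambda clip: clip.node_seq_no)
    target = []
    for clip in ordered:
        target.extend(clip.frames)
    return target  # plays back as one multi-expression animation
```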
And finally, outputting the target three-dimensional facial expression as feedback information of the user to the interactive application program.
Assume there are 10 nodes in total in the interactive game and the user runs the interactive game twice on the same day. The first run does not reach node 4 before the user exits the game, and a first target three-dimensional facial expression is synthesized based on the first facial image set acquired while the game ran through nodes 1-3;
when the user enters the interactive game again, the game continues from the last exit position until the user exits again before node 8. The second facial image set covering nodes 4-7 is acquired, the first and second facial image sets are combined to synthesize a second target three-dimensional facial expression, and the second target three-dimensional facial expression then replaces the first and is output as the user's feedback information on the interactive application.
Alternatively, assume again that there are 10 nodes in total in the interactive game and the user runs the interactive game twice on the same day. The first run does not reach node 4 before the user exits the game, and a first target three-dimensional facial expression is synthesized based on the first facial image set acquired while the game ran through nodes 1-3;
when the user enters the interactive game again, the game restarts from the initial level until the user exits again before node 8. The facial image sets newly acquired at nodes 1-3 replace the first facial image set, a second target three-dimensional facial expression is synthesized based on the facial image sets covering nodes 1-7, and the second target three-dimensional facial expression then replaces the first and is output as the user's feedback information on the interactive application.
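Both behaviours reduce to a per-node replace-or-union, sketched here under the assumption that each session stores its facial image sets keyed by node number:

```python
def merge_sessions(first_sets, second_sets):
    """first_sets / second_sets map node number -> facial image set.
    On resume the two sessions cover disjoint nodes, so this is a plain
    union; on restart the second session's sets replace the overlapping
    nodes, matching the replacement behaviour described above."""
    merged = dict(first_sets)
    merged.update(second_sets)  # second session wins on any overlap
    return merged
```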
In the embodiment of the disclosure, the feedback information presented in the form of a target three-dimensional facial expression is sent to the evaluation area of the application market and displayed as the user's evaluation of the interactive application. Compared with a text evaluation, an expression evaluation shows more vividly and intuitively how the user felt while playing the interactive application. In addition, to protect the user's privacy, the target three-dimensional facial expression synthesized on the same day can be uploaded automatically and anonymously to the evaluation area for display, which avoids the drawback that a user who comments long after last playing the interactive application has forgotten the initial experience and cannot evaluate it accurately.
In addition, the generated target three-dimensional facial expression can be stored on the terminal and sent to other applications on the terminal for display; for example, user A shares the target three-dimensional facial expression with user B through a chat application. This makes the product more interesting to use, improves the user experience, reaches potential users, and helps the interactive application spread.
Based on the above embodiments, a specific application scenario is described in further detail below.
B1, when the interactive survival game runs to 5 seconds before the food in the lost backpack is found, the intelligent terminal invokes the front-facing camera to take a plurality of pictures in the period from 5 seconds before the food is found until it is found;
B2, pictures showing no change in expression are deleted, and the remaining pictures are taken as the user's facial image set;
B3, the obtained facial image set is input into Animoji to generate the three-dimensional facial expression representing the user's happiness shown in fig. 3;
B4, when the interactive survival game runs to 5 seconds before the terrorist appears in the room, the intelligent terminal invokes the front-facing camera to take a plurality of pictures in the period from 5 seconds before the terrorist appears until the terrorist appears;
B5, pictures showing no change in expression are deleted, and the remaining pictures are taken as the user's facial image set;
B6, the obtained facial image set is input into Animoji to generate the three-dimensional facial expression representing the user's confusion shown in fig. 4;
B7, node 1 is 5 seconds before the food in the lost backpack is found and node 2 is 5 seconds before the terrorist appears in the room; the serial number of node 1 is a and the serial number of node 2 is b, so the three-dimensional facial expression representing the user's happiness is arranged before the three-dimensional facial expression representing the user's confusion;
B8, the three-dimensional facial expression representing the user's happiness is spliced with the three-dimensional facial expression representing the user's confusion to generate a target three-dimensional facial expression that changes from happiness to confusion;
and B9, the target three-dimensional facial expression is sent to the evaluation area of the application market and displayed as the user's evaluation of the interactive survival game.
Based on the above embodiments, referring to fig. 5, in an embodiment of the present disclosure, a generating device for feedback information of an interactive application is provided, which includes at least a setting unit 501, an acquisition unit 502 and a generating unit 503, where,
a setting unit 501 configured to run an interactive application program based on an input instruction of a user, wherein a plurality of set nodes are configured in the interactive application program;
an acquisition unit 502 configured to invoke an image capturing device to capture facial images of the user when the interactive application runs to any one of the set nodes, and generate a corresponding facial image set;
and a generating unit 503 configured to generate feedback information of the user to the interactive application program based on the face image set.
Optionally, when the interactive application runs to any one of the set nodes, the image capturing device is invoked to capture facial images of the user, and a corresponding facial image set is generated, and the acquisition unit 502 is configured to:
when it is determined that the interactive application has run to any set node, calling the image capturing device on the terminal to shoot a plurality of facial images of the user within a preset duration;
and deleting, from the plurality of facial images, the facial images showing no change in expression, and determining the remaining facial images as the user's facial image set.
Optionally, based on the face image set, feedback information of the user to the interactive application is generated, and the generating unit 503 is configured to:
inputting the facial image set into a preset three-dimensional face model to generate a corresponding three-dimensional face expression;
synthesizing each obtained three-dimensional facial expression into a target three-dimensional facial expression;
and outputting the target three-dimensional facial expression as feedback information of the user to the interactive application program.
Optionally, the face image set is input into a preset three-dimensional face model, a corresponding three-dimensional face expression is generated, and the generating unit 503 is configured to:
the following operations are performed for each face image in the face image set, respectively:
determining the coordinate positions of face key points of a face image, and generating a corresponding face key point set;
and mapping the coordinate position of each face key point in the face key point set to the corresponding coordinate position in the three-dimensional face model respectively to generate a corresponding three-dimensional face expression.
Optionally, the obtained three-dimensional facial expressions are synthesized into a target three-dimensional facial expression, and the generating unit 503 is configured to:
acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions respectively;
arranging the three-dimensional facial expressions according to the preset sequence numbers;
and splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
Optionally, the obtained three-dimensional facial expressions are synthesized into a target three-dimensional facial expression, and the generating unit 503 is configured to:
sending each three-dimensional facial expression to a server, and triggering the server to synthesize each three-dimensional facial expression;
and receiving the target three-dimensional facial expression returned by the server.
Optionally, after generating the feedback information of the user to the interactive application based on the face image set, the generating unit 503 is further configured to:
and sending the target three-dimensional facial expression to an evaluation area of an application market, for display as the user's evaluation of the interactive application.
Optionally, after generating the feedback information of the user to the interactive application based on the face image set, the generating unit 503 is further configured to:
and storing the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
Based on the above embodiments, referring to fig. 6, in an embodiment of the present disclosure, a computing device is provided, including at least a memory 601 and a processor 602, wherein,
a memory 601 for storing executable instructions;
a processor 602 for reading and executing executable instructions stored in the memory to implement any of the methods described above.
Based on the above embodiments, there is provided a storage medium: when the instructions in the storage medium are executed by a processor, the steps of any one of the methods described above can be performed.
In summary, in the embodiment of the present disclosure, the interactive application is run based on an input instruction of the user; when the interactive application is determined to have run to any set node, the image capturing device is invoked to capture facial images of the user and generate a corresponding facial image set, and feedback information describing the user's feedback on the interactive application is then generated based on the facial image set.
Compared with a text evaluation, an expression evaluation shows more vividly and intuitively how the user felt while playing the interactive application, so a user who has downloaded or purchased the interactive application can conveniently and rapidly feed back the experience of using it through the target three-dimensional facial expression; moreover, an engaging target three-dimensional facial expression can attract more potential users to download or purchase the interactive application, which benefits its spread and exposure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A method for generating feedback information of an interactive application program, characterized by comprising:
when it is determined that the interval between two successive input instructions sent by the user falls within a set period, continuing to run the interactive application from the last exit position based on the user's input instruction, wherein a plurality of set nodes are configured in the interactive application;
before the interactive application finishes running, capturing facial images of the user when the interactive application runs from the last exit position to each set node, and obtaining a second facial image set captured this time;
and generating a target three-dimensional facial expression containing multiple expressions based on the second facial image set and a first facial image set, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period, wherein the target three-dimensional facial expression represents the user's feedback information on the interactive application.
2. The method of claim 1, wherein capturing facial images of the user when the interactive application runs from the last exit position to each set node before the interactive application finishes running, and obtaining the second facial image set captured this time, comprises:
before the interactive application finishes running, when it is determined that the interactive application has run from the last exit position to each set node, invoking an image capturing device on the terminal to shoot a plurality of facial images of the user within a preset duration;
and deleting, from the plurality of facial images, the facial images showing no change in expression, and determining the remaining facial images as the second facial image set captured this time.
3. The method of claim 1, wherein generating the target three-dimensional facial expression containing multiple expressions based on the second facial image set and the first facial image set acquired when the interactive application was run in response to the input instruction last sent by the user within the set period comprises:
inputting each facial image in the second facial image set and in the first facial image set into a preset three-dimensional face model, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period, to generate the respective three-dimensional facial expressions;
and synthesizing each obtained three-dimensional facial expression into the target three-dimensional facial expression containing multiple expressions.
4. The method according to claim 3, wherein inputting each facial image in the second facial image set and in the first facial image set acquired when the interactive application was run in response to the input instruction last sent by the user within the set period into a preset three-dimensional face model to generate the respective three-dimensional facial expressions comprises:
performing the following operations for each facial image in the second facial image set and in the first facial image set, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period:
determining the coordinate positions of face key points of a face image, and generating a corresponding face key point set;
and mapping the coordinate position of each face key point in the face key point set to the corresponding coordinate position in the three-dimensional face model respectively to generate a corresponding three-dimensional face expression.
5. A method according to claim 3, wherein synthesizing the obtained three-dimensional facial expressions into a target three-dimensional facial expression containing multiple expressions comprises:
acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions respectively;
arranging the three-dimensional facial expressions according to the preset sequence numbers;
and splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression containing multiple expressions.
6. A method according to claim 3, wherein synthesizing the obtained three-dimensional facial expressions into a target three-dimensional facial expression comprising a plurality of expressions comprises:
sending each three-dimensional facial expression to a server, and triggering the server to synthesize each three-dimensional facial expression;
and receiving the target three-dimensional facial expression containing multiple expressions returned by the server.
7. The method of any one of claims 1-6, further comprising, after generating the target three-dimensional facial expression comprising a plurality of expressions:
and sending the target three-dimensional facial expression to an evaluation area of an application market, for display as the user's evaluation of the interactive application.
8. The method of any one of claims 1-6, further comprising, after generating the target three-dimensional facial expression comprising a plurality of expressions:
and storing the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
9. An apparatus for generating feedback information of an interactive application program, comprising:
the setting unit is configured to, when it is determined that the interval between two successive input instructions sent by the user falls within a set period, continue running the interactive application from the last exit position based on the user's input instruction, wherein a plurality of set nodes are configured in the interactive application;
the acquisition unit is configured to, before the interactive application finishes running, capture facial images of the user when the interactive application runs from the last exit position to each set node, and obtain a second facial image set captured this time;
and the generating unit is configured to generate a target three-dimensional facial expression containing multiple expressions based on the second facial image set and a first facial image set, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period, wherein the target three-dimensional facial expression represents the user's feedback information on the interactive application.
10. The apparatus of claim 9, wherein the acquisition unit is configured to:
before the interactive application finishes running, when it is determined that the interactive application has run from the last exit position to each set node, invoking an image capturing device on the terminal to shoot a plurality of facial images of the user within a preset duration;
and deleting, from the plurality of facial images, the facial images showing no change in expression, and determining the remaining facial images as the second facial image set captured this time.
11. The apparatus of claim 9, wherein the generating unit is configured to:
inputting each facial image in the second facial image set and in the first facial image set into a preset three-dimensional face model, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period, to generate the respective three-dimensional facial expressions;
and synthesizing each obtained three-dimensional facial expression into the target three-dimensional facial expression containing multiple expressions.
12. The apparatus of claim 11, wherein the generating unit is configured to:
performing the following operations for each facial image in the second facial image set and in the first facial image set, the first facial image set being acquired when the interactive application was run in response to the input instruction last sent by the user within the set period:
determining the coordinate positions of face key points of a face image, and generating a corresponding face key point set;
and mapping the coordinate position of each face key point in the face key point set to the corresponding coordinate position in the three-dimensional face model respectively to generate a corresponding three-dimensional face expression.
13. The apparatus of claim 11, wherein the generating unit is configured to:
acquiring preset sequence numbers of set nodes corresponding to the three-dimensional facial expressions respectively;
arranging the three-dimensional facial expressions according to the preset sequence numbers;
and splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression containing multiple expressions.
14. The apparatus of claim 11, wherein the generating unit is configured to:
sending each three-dimensional facial expression to a server, and triggering the server to synthesize each three-dimensional facial expression;
and receiving the target three-dimensional facial expression containing multiple expressions returned by the server.
15. The apparatus according to any one of claims 9-14, wherein after generating a target three-dimensional facial expression comprising a plurality of expressions, the generating unit is further configured to:
and sending the target three-dimensional facial expression to an evaluation area of an application market, for display as the user's evaluation of the interactive application.
16. The apparatus according to any one of claims 9-14, wherein after generating a target three-dimensional facial expression comprising a plurality of expressions, the generating unit is further configured to:
and storing the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
17. An electronic device, comprising:
a memory for storing executable instructions;
a processor configured to read and execute executable instructions stored in the memory to implement the method for generating interactive application feedback information according to any one of claims 1 to 8.
18. A storage medium, wherein instructions in the storage medium, when executed by a processor, enable to perform the method of generating interactive application feedback information according to any one of claims 1 to 8.
Application CN201910726715.8A, filed 2019-08-07 (priority date 2019-08-07), granted as CN110502110B (en): Method and device for generating feedback information of interactive application program. Status: Active.

Priority Applications (1)

Application Number: CN201910726715.8A (granted as CN110502110B) | Priority Date: 2019-08-07 | Filing Date: 2019-08-07 | Title: Method and device for generating feedback information of interactive application program

Applications Claiming Priority (1)

Application Number: CN201910726715.8A (granted as CN110502110B) | Priority Date: 2019-08-07 | Filing Date: 2019-08-07 | Title: Method and device for generating feedback information of interactive application program

Publications (2)

Publication Number | Publication Date
CN110502110A (en) | 2019-11-26
CN110502110B (en) | 2023-08-11

Family

Family ID: 68587133

Family Applications (1)

Application Number: CN201910726715.8A | Title: Method and device for generating feedback information of interactive application program | Priority Date: 2019-08-07 | Filing Date: 2019-08-07 | Status: Active

Country Status (1)

Country: CN (1) | Link: CN110502110B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102917120A (en) * 2012-09-20 2013-02-06 北京百纳威尔科技有限公司 Mobile terminal and method for refreshing information displayed by mobile terminal
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
CN108241434A (en) * 2018-01-03 2018-07-03 广东欧珀移动通信有限公司 Man-machine interaction method, device, medium and mobile terminal based on depth of view information
CN109948426A (en) * 2019-01-23 2019-06-28 深圳壹账通智能科技有限公司 Application program method of adjustment, device, electronic equipment and storage medium


Also Published As

Publication Number: CN110502110A (en) | Publication Date: 2019-11-26


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant