CN110502110A - Method and device for generating feedback information for an interactive application program - Google Patents

Method and device for generating feedback information for an interactive application program

Info

Publication number
CN110502110A
CN110502110A (application number CN201910726715.8A)
Authority
CN
China
Prior art keywords
face
application program
user
interactive application
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910726715.8A
Other languages
Chinese (zh)
Other versions
CN110502110B (en)
Inventor
Ma Kun (马坤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910726715.8A
Publication of CN110502110A
Application granted
Publication of CN110502110B
Legal status: Active
Anticipated expiration


Classifications

    • A63F 13/213 — Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/55 — Controlling game characters or game objects based on the game progress
    • A63F 13/822 — Special adaptations for executing a specific game genre or game mode; strategy games; role-playing games
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/174 — Human faces: facial expression recognition
    • A63F 2250/30 — Miscellaneous game characteristics with a three-dimensional image
    • A63F 2300/1087 — Input arrangements for converting player-generated signals into game device control signals, comprising photodetecting means, e.g. a camera
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a method and device for generating feedback information for an interactive application program, enabling users who have downloaded or purchased the interactive application program to feed back their experience of it accurately. The method comprises: running the interactive application program based on an input instruction from a user; each time the program is determined to have run to any one of a number of set nodes, calling an image capture device to capture face images of the user and generating a corresponding face-image set; and generating, based on the face-image set, feedback information describing the user's response to the interactive application program. In this way, a user who has downloaded or purchased the interactive application program can vividly convey how it felt to play it.

Description

Method and device for generating feedback information for an interactive application program
Technical field
The present disclosure relates to the field of computer technology, and in particular to a method and device for generating feedback information for an interactive application program.
Background
With the development of science and technology, interactive application programs with multi-branch plots have become increasingly popular, for example, interactive games that narrate their storyline through text, through comics, or through a film script.
Because such interactive application programs are popular, users downloading or purchasing one in an application market prefer to consult the opinions of users who have already downloaded or purchased it. However, many users have limited powers of verbal expression and cannot describe their experience of an interactive application program accurately. Alternatively, by the time a user comments on the program, too long an interval has passed since they last played it, they have forgotten how it originally felt, and they likewise cannot express the experience accurately. As a result, users who have not downloaded or purchased the program find it difficult to grasp the real experience it provides.
In view of this, a new method and device for generating feedback information for an interactive application program need to be designed to overcome the above drawbacks.
Summary of the invention
The present disclosure provides a method and device for generating feedback information for an interactive application program, enabling users who have downloaded or purchased the interactive application program to feed back their experience of it accurately. The technical solutions of the present disclosure are as follows:
According to a first aspect of the embodiments of the present disclosure, a method for generating feedback information for an interactive application program is provided, comprising:
running an interactive application program based on an input instruction from a user, the interactive application program having a plurality of set nodes;
when the interactive application program runs to any one of the set nodes, calling an image capture device to capture face images of the user and generating a corresponding face-image set;
generating, based on the face-image set, the user's feedback information on the interactive application program.
Optionally, calling the image capture device to capture face images of the user and generating the corresponding face-image set when the interactive application program runs to any one of the set nodes comprises:
when the interactive application program is determined to have run to any one of the set nodes, calling the image capture device located in the terminal to shoot a plurality of face images of the user within a preset duration;
deleting, from the plurality of face images, the face images that show no change of expression, and determining the remaining face images as the user's face-image set.
Optionally, generating the user's feedback information on the interactive application program based on the face-image set comprises:
inputting the face-image set into a preset three-dimensional face model to generate corresponding three-dimensional facial expressions;
synthesizing the obtained three-dimensional facial expressions into one target three-dimensional facial expression;
outputting the target three-dimensional facial expression as the user's feedback information on the interactive application program.
Optionally, inputting the face-image set into the preset three-dimensional face model to generate the corresponding three-dimensional facial expressions comprises:
performing the following operations for each face image in the face-image set:
determining the coordinate positions of the face key points of the face image to generate a corresponding face key-point set;
mapping the coordinate position of each face key point in the face key-point set to the corresponding coordinate position in the three-dimensional face model, generating a corresponding three-dimensional facial expression.
Optionally, synthesizing the obtained three-dimensional facial expressions into one target three-dimensional facial expression comprises:
obtaining the preset serial number of the set node corresponding to each three-dimensional facial expression;
arranging the three-dimensional facial expressions according to the preset serial numbers;
splicing the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
Optionally, synthesizing the obtained three-dimensional facial expressions into one target three-dimensional facial expression comprises:
sending each three-dimensional facial expression to a server, triggering the server to perform a synthesis operation on the three-dimensional facial expressions;
receiving the target three-dimensional facial expression returned by the server.
Optionally, after generating the user's feedback information on the interactive application program based on the face-image set, the method further comprises:
sending the target three-dimensional facial expression to the comment area of an application market, where it is displayed as the user's evaluation of the interactive application program.
Optionally, after generating the user's feedback information on the interactive application program based on the face-image set, the method further comprises:
saving the target three-dimensional facial expression in the terminal, and sending the target three-dimensional facial expression to another application program on the terminal for display.
According to a second aspect of the embodiments of the present disclosure, a device for generating feedback information for an interactive application program is provided, comprising:
a setting unit configured to run an interactive application program based on an input instruction from a user, the interactive application program having a plurality of set nodes;
an acquiring unit configured to, when the interactive application program runs to any one of the set nodes, call an image capture device to capture face images of the user and generate a corresponding face-image set;
a generation unit configured to generate, based on the face-image set, the user's feedback information on the interactive application program.
Optionally, to call the image capture device to capture face images of the user and generate the corresponding face-image set when the interactive application program runs to any one of the set nodes, the acquiring unit is configured to:
when the interactive application program is determined to have run to any one of the set nodes, call the image capture device located in the terminal to shoot a plurality of face images of the user within a preset duration;
delete, from the plurality of face images, the face images that show no change of expression, and determine the remaining face images as the user's face-image set.
Optionally, to generate the user's feedback information on the interactive application program based on the face-image set, the generation unit is configured to:
input the face-image set into a preset three-dimensional face model to generate corresponding three-dimensional facial expressions;
synthesize the obtained three-dimensional facial expressions into one target three-dimensional facial expression;
output the target three-dimensional facial expression as the user's feedback information on the interactive application program.
Optionally, to input the face-image set into the preset three-dimensional face model and generate the corresponding three-dimensional facial expressions, the generation unit is configured to:
perform the following operations for each face image in the face-image set:
determine the coordinate positions of the face key points of the face image to generate a corresponding face key-point set;
map the coordinate position of each face key point in the face key-point set to the corresponding coordinate position in the three-dimensional face model, generating a corresponding three-dimensional facial expression.
Optionally, to synthesize the obtained three-dimensional facial expressions into one target three-dimensional facial expression, the generation unit is configured to:
obtain the preset serial number of the set node corresponding to each three-dimensional facial expression;
arrange the three-dimensional facial expressions according to the preset serial numbers;
splice the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
Optionally, to synthesize the obtained three-dimensional facial expressions into one target three-dimensional facial expression, the generation unit is configured to:
send each three-dimensional facial expression to a server, triggering the server to perform a synthesis operation on the three-dimensional facial expressions;
receive the target three-dimensional facial expression returned by the server.
Optionally, after the user's feedback information on the interactive application program is generated based on the face-image set, the generation unit is further configured to:
send the target three-dimensional facial expression to the comment area of an application market, where it is displayed as the user's evaluation of the interactive application program.
Optionally, after the user's feedback information on the interactive application program is generated based on the face-image set, the generation unit is further configured to:
save the target three-dimensional facial expression in the terminal, and send the target three-dimensional facial expression to another application program on the terminal for display.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, comprising:
a memory for storing executable instructions; and
a processor for reading and executing the executable instructions stored in the memory, so as to implement any of the methods above.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided; when instructions in the storage medium are executed by a processor, the steps of any of the methods above can be performed.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
In the embodiments of the present disclosure, an interactive application program is run based on an input instruction from a user. Each time the interactive application program is determined to have run to any one of the set nodes, an image capture device is called to capture face images of the user and a corresponding face-image set is generated; feedback information describing the user's response to the interactive application program is then generated based on the face-image set. In this way, a user who has downloaded or purchased the interactive application program can accurately record their own face images at each set node, and the resulting face images together convey vividly how the user felt while playing the interactive application program.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The drawings herein are incorporated into and form part of this specification, show embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure; they do not constitute an improper limitation on the present disclosure.
Fig. 1 is a flow diagram of generating feedback information for an interactive application program according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a set node X according to an exemplary embodiment.
Fig. 3 shows a three-dimensional facial expression characterizing a happy user according to an exemplary embodiment.
Fig. 4 shows a three-dimensional facial expression characterizing a puzzled user according to an exemplary embodiment.
Fig. 5 is a block diagram of a device for generating feedback information for an interactive application program according to an exemplary embodiment.
Fig. 6 is a structural schematic diagram of an electronic device according to an exemplary embodiment.
Detailed description of embodiments
In order to enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification and claims of the present disclosure and in the above drawings are used to distinguish similar objects, not to describe a particular order or precedence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the disclosure described herein can be implemented in orders other than those illustrated or described here. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; on the contrary, they are merely examples of devices and methods consistent with some aspects of the present disclosure, as detailed in the appended claims.
To enable users who have downloaded or purchased an interactive application program to feed back their experience of it accurately, the embodiments of the present disclosure provide the following solution: an interactive application program is run based on an input instruction from a user; each time the program is determined to have run to any one of the set nodes, an image capture device is called to capture face images of the user and a corresponding face-image set is generated; feedback information describing the user's response to the interactive application program is then generated based on the face-image set.
Preferred embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the detailed process of generating feedback information for an interactive application program in the embodiments of the present disclosure is as follows:
S101: Run an interactive application program based on an input instruction from a user, the interactive application program having a plurality of set nodes.
The user downloads or purchases the interactive application program in the application market of an intelligent terminal and stores it on the terminal. A plurality of key plot points are provided in the interactive application program, and when the user sees a key plot point, the user makes a corresponding facial expression. For example, when seeing a frightening scene in a horror game, the user shows a frightened, fearful expression; when seeing a touching scene in a sentimental game, the user makes a crying expression. If the key plot points themselves were used as the set nodes, the user's complete facial expression data at a key plot point might not be obtained in time. Therefore, in the embodiments of the present disclosure, a time point located shortly before each key plot point is taken as the set node, so that the user's complete facial expression data at the key plot point can be obtained.
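As one way to picture this node placement, the sketch below derives one capture node a fixed lead time before each key plot point; the 5-second lead and the list-of-timestamps representation are illustrative assumptions, not details fixed by the disclosure.

```python
# Hypothetical sketch: place one set node a fixed lead before each key
# plot point, so the camera is already recording when the plot point hits.
LEAD_SECONDS = 5.0  # assumed lead; the disclosure's examples use 5-6 s

def place_nodes(key_plot_times):
    """Return one set-node time per key plot point, LEAD_SECONDS earlier,
    clamped at 0 so a node never falls before the program starts."""
    return [max(0.0, t - LEAD_SECONDS) for t in key_plot_times]

nodes = place_nodes([12.0, 3.0, 40.0])
```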
S102: When the interactive application program runs to any one of the set nodes, call an image capture device to capture face images of the user and generate a corresponding face-image set.
Taking any one set node (hereinafter referred to as node X) as an example, the process of obtaining the user's face-image set at node X is as follows:
First, when the interactive application program is determined to have run to node X, the image capture device located in the terminal is called to shoot a plurality of face images of the user within a preset duration.
As shown in Fig. 2, node X refers to a time point a set duration before a preset key plot point, for example, 5 seconds before something frightening appears in a room, 6 seconds before a letter is read, or 5 seconds before the light in a room is turned on.
For example, suppose node X is 5 seconds before something frightening appears in a room. When the interactive horror game runs to 5 seconds before the frightening thing appears, the intelligent terminal calls the front camera to shoot a number of pictures of the user over the period from 5 seconds before the frightening thing appears until it appears, each picture recording the user's facial expression at the current moment.
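A minimal sketch of this capture step might look as follows; the camera callable, the 0.5-second frame interval, and the in-memory frame list are assumptions made for illustration (timing is simulated rather than real-time).

```python
def capture_window(camera, duration_s=5.0, interval_s=0.5):
    """Poll an image-capture callable over a preset duration, collecting
    one frame every interval_s seconds (simulated clock, no sleeping)."""
    frames = []
    elapsed = 0.0
    while elapsed < duration_s:
        frames.append(camera())  # camera() stands in for one front-camera shot
        elapsed += interval_s
    return frames

# A 5 s window sampled every 0.5 s yields ten frames.
frames = capture_window(lambda: "face-image", duration_s=5.0, interval_s=0.5)
```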
Then, the face images that show no change of expression are deleted from the plurality of face images.
Because node X is set before the moment at which the frightening thing appears in the room, the image capture device captures not only the user's frightened expression when the frightening thing appears, but also the user's unchanged expression before it appears. Therefore, before the three-dimensional facial expressions are generated, the face images that show no change of expression need to be deleted.
Finally, the remaining face images are determined as the face-image set.
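The deletion of expressionless frames could be sketched as below. Representing a frame by its face key-point coordinates and using a simple deviation-from-neutral threshold are assumptions: the disclosure does not specify how "no change of expression" is detected.

```python
def filter_expression_frames(frames, neutral, threshold=2.0):
    """Keep only frames whose face key points deviate from the neutral
    baseline by more than `threshold` (sum of absolute x/y offsets)."""
    def deviation(frame):
        return sum(abs(frame[k][0] - neutral[k][0]) + abs(frame[k][1] - neutral[k][1])
                   for k in neutral)
    return [f for f in frames if deviation(f) > threshold]

neutral = {"mouth_left": (10.0, 20.0), "mouth_right": (30.0, 20.0)}
frames = [
    {"mouth_left": (10.0, 20.0), "mouth_right": (30.0, 20.0)},  # unchanged: dropped
    {"mouth_left": (8.0, 23.0), "mouth_right": (32.0, 23.0)},   # reacting: kept
]
face_image_set = filter_expression_frames(frames, neutral)
```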
S103: Based on the face-image set, generate the user's feedback information on the interactive application program.
Specifically, step S103 may include the following operations:
First, the obtained face-image set is input into a preset three-dimensional face model to generate corresponding three-dimensional facial expressions.
Specifically, an existing three-dimensional face model such as Animoji or 3DMax can be used. After each face image in the face-image set is input into the three-dimensional face model in turn, the following operations are performed to generate the corresponding three-dimensional facial expression, taking one face image as an example:
First, the coordinate positions of the face key points of the face image are determined to generate a corresponding face key-point set. Here, face key points are points strongly influenced by expression, such as the left and right corners of the mouth, the middle points of the upper and lower lips, and the inner ends, outer ends, and middle points of the eyebrows; the generated face key-point set can accurately describe the user's expression when seeing the key plot point.
Then the coordinate position of each face key point in the face key-point set is mapped to the corresponding coordinate position in the three-dimensional face model, generating the corresponding three-dimensional facial expression.
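The key-point mapping step might be sketched as follows. Carrying the 2-D coordinates over unchanged and taking depth from a per-point template of the chosen facial model is an assumption about how the mapping works, made only for illustration.

```python
def map_to_model(keypoints_2d, model_depth):
    """Map each 2-D face key point onto the 3-D facial model: reuse its
    (x, y) position and take z from the model's depth template."""
    return {name: (x, y, model_depth[name])
            for name, (x, y) in keypoints_2d.items()}

keypoints = {"mouth_left": (8.0, 23.0), "brow_mid": (20.0, 5.0)}
depth = {"mouth_left": 1.5, "brow_mid": 2.0}  # hypothetical model template
expression_3d = map_to_model(keypoints, depth)
```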
The material library of the three-dimensional face model contains not only an average face model but also facial models of all kinds of cartoon animals, where the average face model is obtained by averaging face templates of various types (e.g., round, square, long, and pointed faces). The user can freely choose any facial model provided in the material library, so that multiple three-dimensional facial expressions with different face shapes can be generated from the same face-image set. In the embodiments of the present disclosure, the expression is mapped onto a facial model from the material library, rather than onto a model built from the user's own face, in order to protect the user's privacy; this still presents the user's expression when seeing the key plot point, and also adds interest.
Second, the obtained three-dimensional facial expressions are synthesized into one target three-dimensional facial expression.
Specifically, the process of synthesizing the target three-dimensional facial expression in the interactive application program is as follows:
A1. Obtain the preset serial number of the set node corresponding to each three-dimensional facial expression.
Each key plot point in the interactive application program is configured with a unique serial number, so each set node can be configured with the serial number of its corresponding key plot point, or can be identified in another way.
For example, if 10 key plot points are provided in an interactive game, key plot points 1-10 are identified by serial numbers 1-10 in turn, and nodes 1-10 can likewise be identified by serial numbers 1-10.
As another example, if 10 key plot points are provided in an interactive game, key plot points 1-10 are identified by the capital letters A-J in turn, and nodes 1-10 are identified by the lowercase letters a-j.
A2. Arrange the three-dimensional facial expressions according to the preset serial numbers.
A3. Splice the arranged three-dimensional facial expressions to generate the target three-dimensional facial expression.
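Steps A1-A3 could be sketched as follows; representing each expression as a (node serial number, clip) pair and splicing by simple concatenation are illustrative assumptions about data shapes the disclosure leaves open.

```python
def synthesize_target(expressions):
    """expressions: (node_serial, expression_clip) pairs, possibly out of
    order. Sort by the node's preset serial number (A1-A2), then splice
    the clips into one target sequence (A3)."""
    ordered = sorted(expressions, key=lambda pair: pair[0])
    return [clip for _, clip in ordered]

target = synthesize_target([(3, "clip-C"), (1, "clip-A"), (2, "clip-B")])
```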
In the embodiments of the present disclosure, the arranged three-dimensional facial expressions can be spliced into the target three-dimensional facial expression locally, or the target three-dimensional facial expression can be synthesized in a server. Specifically, the arranged three-dimensional facial expressions are sent to the server, the server is triggered to perform a synthesis operation on them, and the target three-dimensional facial expression returned by the server is then received.
Finally, the target three-dimensional facial expression is output as the user's feedback information on the interactive application program.
Suppose an interactive game has 10 nodes in total and a user runs the game twice on the same day. The first time, the user exits the game before it reaches node 4, and a first face-image set obtained while running through nodes 1-3 is used to synthesize a first target three-dimensional facial expression.
When the user enters the interactive game again, the game continues from the position where it was last exited until the user exits again at node 8. A second face-image set is obtained while running through nodes 4-7, and a second target three-dimensional facial expression is synthesized by combining the first and second face-image sets. The second target three-dimensional facial expression then replaces the first, and is output as the user's feedback information on the interactive application program.
Assume again that the interactive game contains 10 nodes and the user runs it twice on the same day. In the first run, the user exits the game before it has reached node 4, and a first target three-dimensional facial expression has been synthesized from the first face-image set collected while running through nodes 1-3.
When the user enters the interactive game again, it restarts from the initial level and runs until the user exits again before node 8. A second face-image set obtained while running through nodes 1-3 replaces the first face-image set, and a second target three-dimensional facial expression is synthesized from the second face-image set collected while running through nodes 1-7. The second target three-dimensional facial expression then replaces the first one and is output as the user's feedback information on the interactive application program.
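The two run scenarios above amount to different ways of combining per-node face-image sets across sessions: on resume the second run contributes sets for new nodes, while on restart its re-captured sets replace the earlier ones. A minimal sketch, with node ids and image names invented for illustration:

```python
def combine_sessions(first_run: dict, second_run: dict) -> dict:
    """Each dict maps node id -> face-image set captured for that node.
    Sets from the second run are added, and for any node captured in both
    runs the second run's set replaces the first run's."""
    combined = dict(first_run)
    combined.update(second_run)
    return combined

run1 = {1: ["r1_n1"], 2: ["r1_n2"], 3: ["r1_n3"]}            # exited before node 4

# Scenario 1: resume from node 4, exit at node 8 -> sets for nodes 4-7 are added.
resumed = combine_sessions(run1, {n: [f"r2_n{n}"] for n in range(4, 8)})

# Scenario 2: restart from the first level, exit before node 8 -> re-captured
# sets for nodes 1-3 replace the first run's, and sets for nodes 4-7 are added.
restarted = combine_sessions(run1, {n: [f"r2_n{n}"] for n in range(1, 8)})
```

In both cases the second target expression is then synthesized from the combined sets, which is why it supersedes the first one as the output feedback.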
In the embodiments of the present disclosure, the feedback information, presented in the form of a target three-dimensional facial expression, is sent to the comment area of an application-program marketplace and displayed as the user's evaluation of the interactive application program. Compared with a textual evaluation, an expression-based evaluation conveys the user's experience while playing the interactive application program more vividly. Moreover, to protect the user's privacy, the evaluation may be posted anonymously: the target three-dimensional facial expression synthesized that day is automatically uploaded to the comment area and displayed. This overcomes the defect that, when too much time has passed between last playing the interactive application program and commenting on it, the user forgets the original experience and cannot evaluate the program's usage experience accurately.
In addition, the generated target three-dimensional facial expression can be saved on the terminal and sent to other application programs on the terminal for display. For example, user A shares his or her target three-dimensional facial expression with user B through a chat-type application program. This not only makes the product more interesting to use and improves the user experience, but also reaches potential users, which is conducive to the spread of the interactive application program.
Based on the above embodiments, a specific application scenario is described further below.
B1. When the interactive survival game runs to the point 5 seconds before the food in the lost backpack is discovered, the smart terminal calls the front camera and shoots multiple pictures of the user during the period from 5 seconds before the discovery until the food in the lost backpack is discovered;
B2. Delete the pictures with no expression change from the multiple pictures, and use the remaining pictures as the user's face-image set;
B3. Input the collected face-image set into Animoji to generate a three-dimensional facial expression, as shown in Fig. 3, characterizing the user as happy;
B4. When the interactive survival game runs to the point 5 seconds before something terrifying appears in the room, the smart terminal calls the front camera and shoots multiple pictures of the user during the period from 5 seconds before the terrifying thing appears until it appears in the room;
B5. Delete the pictures with no expression change from the multiple pictures, and use the remaining pictures as the user's face-image set;
B6. Input the collected face-image set into Animoji to generate a three-dimensional facial expression, as shown in Fig. 4, characterizing the user as puzzled;
B7. Node 1 is the point 5 seconds before the food in the lost backpack is discovered, and node 2 is the point 5 seconds before the terrifying thing appears in the room. The serial number of node 1 is a and the serial number of node 2 is b; therefore, the three-dimensional facial expression characterizing the user as happy comes before the one characterizing the user as puzzled;
B8. Splice the three-dimensional facial expression characterizing the user as happy together with the one characterizing the user as puzzled, generating a target three-dimensional facial expression that changes from happy to puzzled;
B9. Send the target three-dimensional facial expression to the comment area of the application-program marketplace, where it is displayed as the user's evaluation of the interactive survival game.
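The B1-B9 scenario can be summarized in one pipeline sketch. The capture, filtering and Animoji-style model calls below are stand-in placeholder functions invented for illustration, not real APIs:

```python
def capture_burst(node_label: str):
    """Placeholder for B1/B4: the front camera shoots pictures around the node."""
    return [f"{node_label}_pic{i}" for i in range(4)]

def filter_static(pictures):
    """Placeholder for B2/B5: drop pictures with no expression change."""
    return pictures[1:]            # pretend the first picture was expressionless

def to_3d_expression(pictures, label):
    """Placeholder for B3/B6: an Animoji-style model turns pictures into a 3D expression."""
    return {"label": label, "frames": pictures}

nodes = [("a", "backpack_food", "happy"),      # node 1, serial a (B1-B3)
         ("b", "terror_in_room", "puzzled")]   # node 2, serial b (B4-B6)

expressions = []
for serial, node_label, emotion in nodes:
    pics = filter_static(capture_burst(node_label))
    expressions.append((serial, to_3d_expression(pics, emotion)))

expressions.sort(key=lambda item: item[0])                     # B7: order by serial
target = [f for _, e in expressions for f in e["frames"]]      # B8: splice
print(target)                                                  # B9: post to the comment area
```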
Based on the above embodiments, as shown in Fig. 5, an embodiment of the present disclosure provides an apparatus for generating feedback information of an interactive application program, comprising at least a setting unit 501, an acquiring unit 502 and a generation unit 503, wherein:
the setting unit 501 is configured to run an interactive application program based on the user's input instruction, the interactive application program being provided with multiple setting nodes;
the acquiring unit 502 is configured to, when the interactive application program runs to any one setting node, call an image-capture device to capture face images of the user and generate a corresponding face-image set;
the generation unit 503 is configured to generate the user's feedback information on the interactive application program based on the face-image set.
Optionally, when calling an image-capture device to capture face images of the user and generating a corresponding face-image set when the interactive application program runs to any one setting node, the acquiring unit 502 is configured to:
when determining that the interactive application program has run to any one of the setting nodes, call the image-capture device located on the terminal and shoot multiple face images of the user within a preset duration;
after deleting the face images with no expression change from the multiple face images, determine the remaining face images as the user's face-image set.
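One plausible way to implement the "delete face images with no expression change" step is to compare each frame's face key points with those of the previous kept frame and drop frames that barely move. The threshold and key-point format below are assumptions, not taken from the patent:

```python
def filter_static_frames(frames, threshold=2.0):
    """frames: list of key-point lists [(x, y), ...]; keep only frames whose
    mean key-point displacement from the previous kept frame exceeds the threshold."""
    kept = []
    prev = None
    for kps in frames:
        if prev is None:
            kept.append(kps)
            prev = kps
            continue
        # mean Euclidean displacement of corresponding key points
        disp = sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(kps, prev)) / len(kps)
        if disp >= threshold:       # noticeable expression change: keep the frame
            kept.append(kps)
            prev = kps
    return kept

neutral = [(0.0, 0.0), (10.0, 0.0)]
smile   = [(0.0, 5.0), (10.0, 5.0)]   # mouth corners moved up by 5 px
print(len(filter_static_frames([neutral, neutral, smile])))  # -> 2
```

Comparing against the previous *kept* frame rather than the immediately preceding frame prevents a slow, gradual expression change from being discarded frame by frame.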
Optionally, when generating the user's feedback information on the interactive application program based on the face-image set, the generation unit 503 is configured to:
input the face-image set into a preset three-dimensional face model to generate corresponding three-dimensional facial expressions;
synthesize one target three-dimensional facial expression from the obtained three-dimensional facial expressions;
output the target three-dimensional facial expression as the user's feedback information on the interactive application program.
Optionally, when inputting the face-image set into the preset three-dimensional face model and generating corresponding three-dimensional facial expressions, the generation unit 503 is configured to:
perform the following operations for each face image in the face-image set:
determine the coordinate positions of the face key points of one face image and generate a corresponding face key-point set;
map the coordinate position of each face key point in the face key-point set to the corresponding coordinate position in the three-dimensional face model, and generate the corresponding three-dimensional facial expression.
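A minimal sketch of the key-point mapping step: each 2D key point is looked up by name and projected onto the model's corresponding 3D coordinate. Reducing the face model to a per-key-point depth table is an illustrative simplification; the patent does not specify the model's internals:

```python
def map_keypoints_to_model(keypoints_2d, model_depths):
    """keypoints_2d: {name: (x, y)} detected in one face image.
    model_depths: {name: z} supplied by the 3D face model.
    Returns {name: (x, y, z)}, the positions driving the model's vertices."""
    return {name: (x, y, model_depths.get(name, 0.0))
            for name, (x, y) in keypoints_2d.items()}

kps = {"left_eye": (30.0, 40.0), "mouth_left": (25.0, 80.0)}
depths = {"left_eye": 12.0, "mouth_left": 5.0}
print(map_keypoints_to_model(kps, depths))
```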
Optionally, when synthesizing one target three-dimensional facial expression from the obtained three-dimensional facial expressions, the generation unit 503 is configured to:
obtain the preset serial number of the setting node corresponding to each three-dimensional facial expression;
arrange the three-dimensional facial expressions according to the preset serial numbers;
splice the arranged three-dimensional facial expressions together to generate the target three-dimensional facial expression.
Optionally, when synthesizing one target three-dimensional facial expression from the obtained three-dimensional facial expressions, the generation unit 503 is configured to:
send each three-dimensional facial expression to a server and trigger the server to perform a synthesis operation on the three-dimensional facial expressions;
receive the target three-dimensional facial expression returned by the server.
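The server-side synthesis variant can be sketched as a request/response pair. The JSON payload shape is invented for illustration, and the network hop is stubbed with a direct function call:

```python
import json

def server_synthesize(payload: str) -> str:
    """Server side: splice the received expressions in serial-number order."""
    expressions = json.loads(payload)
    expressions.sort(key=lambda e: e["serial"])
    frames = [f for e in expressions for f in e["frames"]]
    return json.dumps({"target": frames})

def request_target_expression(expressions) -> list:
    """Terminal side: send the 3D expressions, receive the target expression back."""
    reply = server_synthesize(json.dumps(expressions))  # network call stubbed out
    return json.loads(reply)["target"]

exprs = [{"serial": "b", "frames": ["doubt_1"]},
         {"serial": "a", "frames": ["happy_1", "happy_2"]}]
print(request_target_expression(exprs))  # -> ['happy_1', 'happy_2', 'doubt_1']
```

Offloading the splice to the server keeps the terminal's work limited to capture and upload, at the cost of a round trip per synthesis.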
Optionally, after generating the user's feedback information on the interactive application program based on the face-image set, the generation unit 503 is further configured to:
send the target three-dimensional facial expression to the comment area of an application-program marketplace, where it is displayed as the user's evaluation of the interactive application program.
Optionally, after generating the user's feedback information on the interactive application program based on the face-image set, the generation unit 503 is further configured to:
save the target three-dimensional facial expression on the terminal, and send the target three-dimensional facial expression to other application programs on the terminal for display.
Based on the above embodiments, as shown in Fig. 6, an embodiment of the present disclosure provides a computing device comprising at least a memory 601 and a processor 602, wherein:
the memory 601 is configured to store executable instructions;
the processor 602 is configured to read and execute the executable instructions stored in the memory, so as to implement any of the above methods.
Based on the above embodiments, a storage medium is provided, comprising at least instructions which, when executed by a processor, make it possible to perform the steps of any of the above methods.
In conclusion in the embodiments of the present disclosure, the input instruction based on user runs interactive application program, every determination When interactive application program runs to any one setting node, then image capture device is called to capture the face-image of user, Corresponding face-image set is generated, then is based on the face-image set, feedback information is generated, for describing user to interaction The feedback of formula application program.
Compared to word evaluation, expression evaluation illustrates sense when user plays interactive application program more visual in imagely By, in this way, the user of interactive application program has been downloaded or has bought, it can be convenient by target three-dimensional face expression, fast The usage experience of interactive application program is fed back promptly, and has much the target three-dimensional face expression of interest, can be attracted more Potential user downloads or buys the interactive application program, is conducive to the propagation and exposure of interactive application program.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the present disclosure. This application is intended to cover any variations, uses, or adaptive changes of the present disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered illustrative only; the true scope and spirit of the present disclosure are pointed out by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for generating feedback information of an interactive application program, characterized by comprising:
running an interactive application program based on an input instruction of a user, the interactive application program being provided with multiple setting nodes;
when the interactive application program runs to any one setting node, calling an image-capture device to capture face images of the user and generating a corresponding face-image set;
generating the user's feedback information on the interactive application program based on the face-image set.
2. The method according to claim 1, characterized in that, when the interactive application program runs to any one setting node, calling an image-capture device to capture face images of the user and generating a corresponding face-image set comprises:
when determining that the interactive application program has run to any one of the setting nodes, calling the image-capture device located on a terminal and shooting multiple face images of the user within a preset duration;
after deleting the face images with no expression change from the multiple face images, determining the remaining face images as the user's face-image set.
3. The method according to claim 1, characterized in that generating the user's feedback information on the interactive application program based on the face-image set comprises:
inputting the face-image set into a preset three-dimensional face model to generate corresponding three-dimensional facial expressions;
synthesizing one target three-dimensional facial expression from the obtained three-dimensional facial expressions;
outputting the target three-dimensional facial expression as the user's feedback information on the interactive application program.
4. The method according to claim 3, characterized in that inputting the face-image set into the preset three-dimensional face model to generate corresponding three-dimensional facial expressions comprises:
performing the following operations for each face image in the face-image set:
determining the coordinate positions of the face key points of one face image and generating a corresponding face key-point set;
mapping the coordinate position of each face key point in the face key-point set to the corresponding coordinate position in the three-dimensional face model, and generating the corresponding three-dimensional facial expression.
5. The method according to claim 3, characterized in that synthesizing one target three-dimensional facial expression from the obtained three-dimensional facial expressions comprises:
obtaining the preset serial number of the setting node corresponding to each three-dimensional facial expression;
arranging the three-dimensional facial expressions according to the preset serial numbers;
splicing the arranged three-dimensional facial expressions together to generate the target three-dimensional facial expression.
6. The method according to claim 3, characterized in that synthesizing one target three-dimensional facial expression from the obtained three-dimensional facial expressions comprises:
sending each three-dimensional facial expression to a server and triggering the server to perform a synthesis operation on the three-dimensional facial expressions;
receiving the target three-dimensional facial expression returned by the server.
7. The method according to any one of claims 1-6, characterized in that, after generating the user's feedback information on the interactive application program based on the face-image set, the method further comprises:
sending the target three-dimensional facial expression to the comment area of an application-program marketplace, where it is displayed as the user's evaluation of the interactive application program.
8. The method according to any one of claims 1-6, characterized in that, after generating the user's feedback information on the interactive application program based on the face-image set, the method further comprises:
saving the target three-dimensional facial expression on a terminal, and sending the target three-dimensional facial expression to other application programs on the terminal for display.
9. An electronic device, characterized by comprising:
a memory for storing executable instructions;
a processor for reading and executing the executable instructions stored in the memory, so as to implement the method for generating feedback information of an interactive application program according to any one of claims 1 to 8.
10. A storage medium, characterized in that, when instructions in the storage medium are executed by a processor, the method for generating feedback information of an interactive application program according to any one of claims 1 to 8 can be performed.
CN201910726715.8A 2019-08-07 2019-08-07 Method and device for generating feedback information of interactive application program Active CN110502110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910726715.8A CN110502110B (en) 2019-08-07 2019-08-07 Method and device for generating feedback information of interactive application program


Publications (2)

Publication Number Publication Date
CN110502110A true CN110502110A (en) 2019-11-26
CN110502110B CN110502110B (en) 2023-08-11

Family

ID=68587133


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102917120A (en) * 2012-09-20 2013-02-06 北京百纳威尔科技有限公司 Mobile terminal and method for refreshing information displayed by mobile terminal
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
CN108241434A (en) * 2018-01-03 2018-07-03 广东欧珀移动通信有限公司 Man-machine interaction method, device, medium and mobile terminal based on depth of view information
CN109948426A (en) * 2019-01-23 2019-06-28 深圳壹账通智能科技有限公司 Application program method of adjustment, device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant