CN109692476B - Game interaction method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN109692476B
CN109692476B (application CN201811592510.7A)
Authority
CN
China
Prior art keywords
player
face
dimensional
current
physical model
Prior art date
Legal status
Active
Application number
CN201811592510.7A
Other languages
Chinese (zh)
Other versions
CN109692476A (en
Inventor
张庭亮
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201811592510.7A
Publication of CN109692476A
Application granted
Publication of CN109692476B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a game interaction method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a player image and performing facial feature recognition on it to obtain facial feature points; based on the current facial feature points, establishing a three-dimensional face model of the player in a game scene and determining the player's current face pose parameters; establishing a three-dimensional physical face model based on the current three-dimensional face model; determining, according to the current face pose parameters, whether to generate a player control object in the game scene; when generation is determined, generating the player control object on the surface of the three-dimensional face model and generating a corresponding three-dimensional physical object model on the surface of the three-dimensional physical face model; after the player control object is generated, updating the current three-dimensional face model and three-dimensional physical face model based on the current player image and obtaining the current motion parameters of the three-dimensional physical object model; and controlling the motion state of the player control object according to the current motion parameters.

Description

Game interaction method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a game interaction method and apparatus, an electronic device, and a storage medium.
Background
Conventional game interaction usually relies on an external input device, such as a mouse, keyboard, or gamepad, through which the player controls a player-controllable object in the game to perform corresponding operations. However, adding such gaming peripherals increases the cost of play and limits the ways in which players can interact with the game; for example, players can interact only by manipulating input devices with their hands.
Disclosure of Invention
On this basis, the invention provides a game interaction method and apparatus, an electronic device, and a storage medium.
According to a first aspect of embodiments of the present invention, there is provided a game interaction method, the method comprising:
when a game starting instruction is received, displaying a game scene;
acquiring a player image, and carrying out face feature recognition on the player image to obtain face feature points; the player image comprises a player front image and/or a player side image;
establishing a three-dimensional face model of the player in the game scene and determining the player's current face pose parameters based on the current facial feature points; the current face pose parameters comprise a tilt angle of the current face relative to the horizontal direction and a tilt angle of the current face relative to the vertical direction;
establishing a three-dimensional physical face model based on the current three-dimensional face model;
determining whether to generate a player control object in the game scene according to the current face pose parameters;
when it is determined that the player control object is to be generated, generating the player control object, itself a three-dimensional model, on the surface of the three-dimensional face model, and generating a three-dimensional physical model of the object on the surface of the three-dimensional physical face model;
after the player control object is generated, updating a current face three-dimensional model and a current face three-dimensional physical model based on a current player image, and acquiring current motion parameters of the object three-dimensional physical model through acting force between the current face three-dimensional physical model and the object three-dimensional physical model;
the motion state of the player control object in the game scene is controlled according to the current motion parameters.
According to a second aspect of embodiments of the present invention, there is provided a game interaction apparatus, the apparatus comprising:
the display module is used for displaying a game scene when receiving a game starting instruction;
the image acquisition module is used for acquiring a player image and carrying out face feature recognition on the player image to obtain face feature points; the player image comprises a player front image and/or a player side image;
The first establishing module is used for establishing a three-dimensional face model of the player in the game scene based on the current face characteristic point;
the determining module is used for determining the current face posture parameters of the player based on the current face characteristic points; the current face posture parameters comprise an inclination angle of the current face relative to the horizontal direction and an inclination angle of the current face relative to the vertical direction;
the second establishing module is used for establishing a human face three-dimensional physical model based on the current human face three-dimensional model;
the generating module is used for determining, according to the current face pose parameters, whether to generate a player control object in the game scene; and, when generation is determined, generating the player control object, itself a three-dimensional model, on the surface of the three-dimensional face model and generating a three-dimensional physical model of the object on the surface of the three-dimensional physical face model;
the processing module is used for updating a current human face three-dimensional model and a current human face three-dimensional physical model based on a current player image after the player control object is generated, and acquiring current motion parameters of the object three-dimensional physical model through acting force between the current human face three-dimensional physical model and the object three-dimensional physical model;
And the control module is used for controlling the motion state of the player control object in the game scene according to the current motion parameters.
According to a third aspect of the embodiments of the present invention, there is provided an electronic apparatus including:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor implements the game interaction method when executing the program.
According to a fourth aspect of embodiments of the present invention, there is provided a machine-readable storage medium having a program stored thereon; the program realizes the game interaction method when being executed by a processor.
Compared with the related art, the embodiment of the invention at least has the following beneficial technical effects:
the player can manipulate the player control object on the surface of his or her face through head movements alone, thereby participating in the game. This strengthens the interaction between the player and the game and increases its entertainment value, and because no additional gaming peripherals are required, it also reduces the player's cost of play. Moreover, the scheme provided by the invention can convert real-world games that require food or other physical props into electronic games. For example, in the "eyebrow-wiggling" cookie-eating party game, a cookie is placed on each participant's forehead, and the participant must move it from forehead to mouth using only facial muscle movements. Whether or not the cookie falls to the ground, it is usually discarded afterwards, which wastes food and raises the cost of the game. With the scheme provided by the invention, the cookie can instead serve as the player control object, and the game can be realized electronically by simulating the physical interaction between the player's face and the player control object, avoiding the waste of food or other items and saving environmental resources.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of game interaction in accordance with an exemplary embodiment of the present invention;
FIG. 2 is a schematic illustration of a game interface with a cookie eating game enabled in accordance with an exemplary embodiment of the present invention;
FIG. 3 is a schematic illustration of a game interface as a player control object is generated in accordance with an exemplary embodiment of the present invention;
FIG. 4 is a schematic illustration of a game interface showing the instant a player control object drops into the mouth of a three-dimensional model of a face, in accordance with an exemplary embodiment of the present invention;
FIG. 5 is a schematic illustration of a game interface showing the process in which a player control object falls from the surface of the three-dimensional face model onto the ground, according to an exemplary embodiment of the invention;
FIG. 6 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present invention.
It should be noted that the faces shown in the examples of Figs. 3 to 5 have been blurred to protect the portrait rights of the persons depicted. In actual gameplay, the players' faces displayed in the game interfaces of Figs. 3 to 5 are clear and complete.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms, which serve only to distinguish one category of information from another. For example, first information could also be referred to as second information and, similarly, second information could be referred to as first information without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the field of game interaction technology, interaction is generally achieved through an external input device, such as a mouse, keyboard, or gamepad, through which the player controls a player-controllable object in the game to perform corresponding operations. However, adding such peripherals increases the cost of play and limits the ways in which players can interact with the game; for example, players can interact only by manipulating input devices with their hands.
To overcome the technical problems in the related art of high game cost and a single mode of player-game interaction caused by gaming peripherals, the invention provides a game interaction method. It can be applied to mini-game programs within short-video, social, or beauty applications on a mobile terminal, or to the game software and mini-game programs of a live-streaming platform, allowing the player to participate in the game through head movements. This enhances the interactivity between player and game, increases entertainment value, requires no additional peripherals, and reduces the player's cost of play; it also reduces the algorithmic complexity of game interaction to some extent and improves game response efficiency.
Next, the game interaction method of the present invention will be explained. Fig. 1 is a flowchart illustrating a game interaction method according to an exemplary embodiment of the invention. As shown in Fig. 1, the method may be applied to a terminal and includes:
S011, displaying a game scene when a game start instruction is received;
S012, acquiring a player image and performing facial feature recognition on it to obtain facial feature points, the player image comprising a frontal image and/or a side image of the player;
S013, establishing a three-dimensional face model of the player in the game scene and determining the player's current face pose parameters based on the current facial feature points, the current face pose parameters comprising a tilt angle of the current face relative to the horizontal direction and a tilt angle relative to the vertical direction;
S014, establishing a three-dimensional physical face model based on the current three-dimensional face model;
S015, determining whether to generate a player control object in the game scene according to the current face pose parameters;
S016, when it is determined that the player control object is to be generated, generating the player control object, itself a three-dimensional model, on the surface of the three-dimensional face model, and generating a three-dimensional physical model of the object on the surface of the three-dimensional physical face model;
S017, after the player control object is generated, updating the current three-dimensional face model and three-dimensional physical face model based on the current player image, and obtaining the current motion parameters of the three-dimensional physical object model through the forces between it and the current three-dimensional physical face model;
S018, controlling the motion state of the player control object in the game scene according to the current motion parameters.
The game interaction method provided by the embodiment of the invention can be applied to AR games, in which case the game scene is an AR game scene. The AR game may be one in which the player control object is moved from one position on the face surface to another by head movements in order to obtain a game score or achieve a game objective. The technical scheme provided by the embodiment can also convert a real-world game into an electronic one; for example, the "eyebrow-wiggling" cookie-eating party game can be converted into an AR cookie-eating game, in which case the three-dimensional model corresponding to the player control object may be a three-dimensional model of a cookie.
Launching an AR game that applies the game interaction method of the embodiment can be regarded as receiving a game start instruction. At that point the game scene is displayed on the terminal's screen; it may show the three-dimensional face model of the current player but not yet the player control object.
After the game starts, player images can be captured by the terminal's camera module (such as the front or rear camera of a mobile terminal). One or more players may participate; the embodiment of the invention does not limit their number. When the game has just started, the acquired player images include a frontal image and a side image of the player, so that the three-dimensional face model shown in the game scene can be built from the facial feature points of both; during play, the acquired player image may be either a frontal image or a side image.
In an embodiment, in step S013, the step of building a three-dimensional model of the face of the player in the game scene based on the current face feature points may include:
S0131, establishing a two-dimensional face model based on the facial feature points obtained by performing facial feature recognition on the frontal image of the player;
S0132, determining the depth coordinates of designated facial feature points in the two-dimensional face model based on the facial feature points obtained by performing facial feature recognition on the side image of the player;
S0133, establishing the three-dimensional face model of the player in the game scene based on the two-dimensional face model and the depth coordinates of the designated facial feature points.
In another embodiment, to simplify the construction of the three-dimensional face model, improve its generation efficiency, and reduce computational complexity, the three-dimensional face model shown in the game scene may be built from the facial feature points of the frontal image alone. The construction process is adjusted accordingly; for example, in step S013, building the three-dimensional face model of the player in the game scene based on the current facial feature points may include:
S0131', establishing a two-dimensional face model based on the facial feature points obtained by performing facial feature recognition on the frontal image of the player;
S0132', presetting depth coordinates for designated facial feature points in the two-dimensional face model;
S0133', establishing the three-dimensional face model of the player in the game scene based on the two-dimensional face model and the preset depth coordinates of the designated facial feature points.
By presetting depth coordinates for the designated facial feature points, the step of processing the player's side image to obtain depth coordinates can be omitted, which improves the efficiency of building the three-dimensional face model to a certain extent.
Facial feature recognition on a player image typically yields 106 facial feature points. If a three-dimensional face model, and especially a three-dimensional physical face model, were built to contain all 106 points from the corresponding two-dimensional model, the construction process would be overly complex and inefficient, and the physical simulation effect unsatisfactory. Therefore, to solve this problem while preserving the control that head movements exert over the player control object, in one embodiment only the nose and mouth are modelled in three dimensions. That is, in the three-dimensional face model only the nose and mouth are three-dimensional while the other facial parts remain two-dimensional, and likewise in the three-dimensional physical face model. On this basis, the designated facial feature points may be the nose feature points and the mouth feature points.
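The partial lifting described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the point ids, coordinates, and depth values are hypothetical, and designated points (e.g. nose and mouth) receive a real depth while every other landmark stays on the face plane.

```python
# Illustrative sketch: lift 2-D frontal landmarks into a partial 3-D face
# model in which only designated points carry a real depth coordinate.
# All names, point ids, and values below are hypothetical.

def build_face_3d_model(frontal_landmarks, designated_depths):
    """frontal_landmarks: {point_id: (x, y)} from the frontal image.
    designated_depths: {point_id: z} for nose/mouth points only; other
    points fall back to the face plane (z = 0), i.e. stay two-dimensional."""
    model = {}
    for pid, (x, y) in frontal_landmarks.items():
        z = designated_depths.get(pid, 0.0)
        model[pid] = (x, y, z)
    return model

# e.g. point 46 as a nose point, point 84 as a mouth point (ids illustrative)
front = {0: (0.10, 0.20), 46: (0.50, 0.45), 84: (0.42, 0.70)}
depths = {46: 0.12, 84: 0.05}
face_model = build_face_3d_model(front, depths)
```

The same function covers both embodiments above: `designated_depths` may hold values measured from a side image (S0132) or preset constants (S0132').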
After the three-dimensional face model is obtained, a corresponding three-dimensional physical face model can be built from it, so that the physical model can be used during the game to simulate the physical interaction between the face and the player control object. In an embodiment, step S014 may include: S0141, converting the three-dimensional face model into a three-dimensional physical face model in a physical world space.
The three-dimensional face model can be converted into a corresponding three-dimensional physical face model through a third-party physics engine, such as the Bullet physics engine, and placed into a virtual physical world space. The physical interaction between the face and the player control object is then simulated there, and the motion state of the player control object in the game scene is controlled according to the motion parameters produced by that simulation.
While the three-dimensional face model is being built, the player's current face pose parameters can be determined from the facial feature points obtained by performing facial feature recognition on the current player image. The current facial feature points may be processed by a pose estimation algorithm to obtain the current face pose parameters, namely a first tilt angle of the current face relative to the horizontal direction and a second tilt angle relative to the vertical direction. The face pose parameters thus represent the change in face position caused by the player turning his or her head.
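The patent does not fix a particular pose estimation algorithm, so the following is a deliberately crude 2-D stand-in that derives two tilt angles from four hypothetical landmarks; a real system would use a proper head-pose solver (e.g. a PnP-style method).

```python
import math

# Crude illustrative pose estimate from 2-D landmarks (image y grows downward).
# Not the patent's algorithm; landmark names and the formulas are assumptions.

def estimate_pose(left_eye, right_eye, nose_tip, chin):
    # Second tilt angle: rotation of the eye line away from horizontal,
    # standing in for the face's tilt relative to the vertical direction.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    second_tilt = math.degrees(math.atan2(dy, dx))
    # First tilt angle: raising the head pushes the nose tip up relative to
    # the eye-chin midpoint, giving a rough proxy for pitch.
    eye_mid_y = (left_eye[1] + right_eye[1]) / 2.0
    face_height = chin[1] - eye_mid_y
    mid_y = eye_mid_y + face_height / 2.0
    first_tilt = math.degrees(math.atan2(mid_y - nose_tip[1], face_height))
    return first_tilt, second_tilt

# A level, frontal face yields (0.0, 0.0):
pose = estimate_pose((0.3, 0.4), (0.7, 0.4), (0.5, 0.6), (0.5, 0.8))
```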
After the current face pose parameters are obtained, whether to generate a player control object in the game scene can be decided from them. Specifically, in the cookie-eating game the player must raise the head so that the angle between the face and the horizontal direction grows large enough, and must not swing the head far to the left or right, in order to receive the cookie serving as the player control object. On this basis, step S015 may include: S0151, if the first tilt angle in the current face pose parameters is greater than or equal to a first preset value and the second tilt angle is less than a second preset value, generating the player control object in the game scene; S0152, if the current first tilt angle is less than the first preset value or the current second tilt angle is greater than or equal to the second preset value, not generating the player control object. The first and second preset values may be obtained empirically or experimentally and are not limited here.
When the generation condition is satisfied, the player control object and the three-dimensional physical model of the object may be generated in the game scene and the physical world respectively: the player control object on the surface of the three-dimensional face model in the game scene, and the three-dimensional physical object model on the surface of the three-dimensional physical face model in the physical world space. In one example, to keep the position of the player control object relative to the face model consistent with the position of the physical object model relative to the physical face model, and thereby improve the realism of the simulated physical effect, the two are kept consistent both when generated and throughout their movement.
In one example, an initial position at which the player control object is generated, or a destination position it must ultimately reach, may be specified. The game succeeds if the player manages to move the player control object from the initial position to the destination position within a preset game time, in which case a game reward may be granted. The game fails either if the player cannot move the object from the initial position to the destination position within the preset game time, or if, within that time, the object falls off the three-dimensional face model before reaching the destination position; in either case a game penalty may be applied.
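The success and failure rules above can be condensed into a small decision function. The state flags and time accounting here are illustrative assumptions, not part of the patent text:

```python
# Minimal sketch of the win/lose rules: success on reaching the destination
# in time, failure on falling off or timing out. Names are hypothetical.

def game_outcome(reached_destination, fell_off, elapsed, time_limit):
    if fell_off:
        return "failure"      # second failure case: object dropped early
    if reached_destination and elapsed <= time_limit:
        return "success"      # a game reward may be granted
    if elapsed > time_limit:
        return "failure"      # first failure case: preset game time exceeded
    return "in_progress"
```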
In one example, if the game interaction method of the embodiment of the invention is applied to the AR cookie-eating game, the initial position may be the forehead or eye region of the face, and the destination position may be the mouth or the inside of the mouth.
It should be noted that the player control object and the three-dimensional face model are shown in the game scene so that the player or other users can watch the game. The three-dimensional physical object model and the three-dimensional physical face model may be placed in the virtual physical world space without being displayed in the game scene; there they simulate the physical motion between the face and the player control object according to the current face pose, and head movements such as raising, lowering, and turning left or right change the face pose and hence the object's motion. By running the physical simulation on these two models, the motion parameters corresponding to the motion state of the three-dimensional physical object model can be transmitted to the player control object in the game scene, so that its motion state changes accordingly.
Both the player control object and its three-dimensional physical model are built in advance during game development; "generating the player control object" and "generating the three-dimensional physical model of the object" above mean calling these pre-built models and placing them at the currently appropriate position, such as the initial position.
After the player control object is generated, player images are acquired in real time, and the current three-dimensional face model and three-dimensional physical face model are updated from the facial feature points recognized in the current player image. Specifically, the face poses of the updated models are kept consistent with the pose shown in the current player image. In other words, during the game, player images are continuously acquired and the face models updated from the current image, which changes the force exerted on the three-dimensional physical object model and thereby lets the player control the motion state of the player control object.
To obtain, during the game, the motion parameters used to control the motion of the player control object in the game scene, in an embodiment, the step of acquiring the current motion parameters of the object three-dimensional physical model through the acting force between the current human face three-dimensional physical model and the object three-dimensional physical model may include:
S0171, calculating the friction force on the object three-dimensional physical model as it moves along the surface of the current human face three-dimensional physical model, based on the mass of the object three-dimensional physical model and the sliding friction coefficient of the surface of the current human face three-dimensional physical model;
S0172, calculating the resultant force applied to the object three-dimensional physical model based on the friction force, the gravity of the object three-dimensional physical model, and the supporting force exerted by the current human face three-dimensional physical model on the object three-dimensional physical model;
S0173, acquiring the current motion parameters of the object three-dimensional physical model according to the resultant force, which may include:
S01731, calculating the acceleration of the object three-dimensional physical model based on the resultant force and the mass of the object three-dimensional physical model;
S01732, calculating the position parameters of the object three-dimensional physical model during motion according to the acceleration and the duration for which the resultant force acts;
S01733, taking the position parameters and the movement moments corresponding to the position parameters as the current motion parameters.
In the above, the gravity and the sliding friction coefficient of the object three-dimensional physical model may be preset based on experience or experiment, or set by a user as required. In addition, how the motion parameters of the object three-dimensional physical model are specifically obtained from the resultant force it receives can be derived by combining the contents of steps S0171 to S0173 with the mechanical principles of physical motion, and is therefore not described in detail here.
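As a hedged numerical sketch of steps S0171 to S01733, the object can be treated as a point mass on a face surface locally inclined at some angle; the gravitational constant, the inclined-plane force decomposition and the start-from-rest kinematics below are simplifying assumptions, not values or formulas prescribed by the patent:

```python
import math

G = 9.8  # gravitational acceleration in game-world units (assumed)

def slope_motion(mass, mu, theta_deg, dt):
    """Friction, resultant force, acceleration and displacement for an
    object of the given mass resting on a surface inclined at theta_deg
    degrees, with sliding friction coefficient mu, over a force duration dt.
    """
    theta = math.radians(theta_deg)
    normal = mass * G * math.cos(theta)    # supporting force of the face surface
    friction = mu * normal                 # S0171: sliding friction
    driving = mass * G * math.sin(theta)   # downhill component of gravity
    net = max(driving - friction, 0.0)     # S0172: resultant force along the slope
    accel = net / mass                     # S01731: a = F / m
    displacement = 0.5 * accel * dt * dt   # S01732: s = a * t^2 / 2, from rest
    return friction, net, accel, displacement
```

With mu = 0 and theta_deg = 90 the object accelerates at G along the surface; with theta_deg = 0 friction dominates and the object stays put, corresponding to a blocked state.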
In the above, the acquisition of the current motion parameters can be considered real-time; it can be understood that each time a motion parameter is obtained, the motion of the player control object in the game scene is controlled based on that motion parameter.
After the current motion parameters are obtained, the motion state of the player control object in the game scene may be controlled according to them; specifically, the player control object is controlled to move to the position indicated by the position parameter corresponding to the current movement moment. In this process, based on the velocity of the object three-dimensional physical model from the previous movement moment to the current movement moment, the player control object may be moved from its position at the previous movement moment to its position at the current movement moment in the direction and at the magnitude indicated by that velocity. The velocity may be calculated according to the mechanical principles of physical motion, which is not described here.
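The velocity-driven movement from the previous movement moment to the current one can be sketched as a semi-implicit Euler integration step (an assumed scheme; the patent only requires that position and velocity follow the mechanical principles of physical motion):

```python
def advance(pos, vel, accel, dt):
    """One integration step: update the velocity from the acceleration due
    to the resultant force, then move the object along the new velocity for
    dt seconds, in the direction and at the magnitude that velocity indicates.
    """
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel
```

Called once per frame, this produces the sequence of positions (one per movement moment) that the player control object is driven through in the game scene.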
Thus, the player can control the motion state of the player control object by changing the face pose. During the player's control of the player control object, the effect presented by its motion may include at least one of the following: first, the player control object separates from the surface of the human face three-dimensional model and drops off it; second, the player control object stays in a certain area of the surface of the human face three-dimensional model for a certain period of time and appears blocked; third, the player control object slides or rolls, quickly or slowly, from its initial position on the surface of the human face three-dimensional model to a target position; fourth, the player control object separates from the surface of the human face three-dimensional model at a certain moment but is caught by the human face three-dimensional model at the next moment and comes into contact with its surface again.
In one embodiment, in order to let the player know whether the game has succeeded and thus increase the interest of the game, the game interaction method may further include:
S021, when it is detected that the object three-dimensional physical model has moved to the specified destination position on the human face three-dimensional physical model, updating the game interface, the updated game interface indicating that the game has succeeded;
S022, when it is detected that the object three-dimensional physical model has separated from the human face three-dimensional physical model, updating the game interface, the updated game interface indicating that the game has failed.
To avoid a single round of the game taking too long and affecting the player's experience, a time limit may be added to step S021: for example, the game interface is updated to indicate success only when it is detected that the object three-dimensional physical model has moved to the specified destination position on the human face three-dimensional physical model within a preset game time. On this basis, once the preset game time is exceeded, the round is regarded as a failure even if the player control object has not dropped off the human face three-dimensional model.
Further, since the player control object may separate from the surface of the human face three-dimensional model at one moment but be caught by it at the next and come into contact with the surface again, in one example a further condition for determining game failure may be added to improve accuracy. Step S022 may then be adapted to: when separation of the object three-dimensional physical model from the human face three-dimensional physical model is detected and the separation time is greater than or equal to a preset separation threshold, updating the game interface, the updated game interface indicating that the game has failed. The separation threshold may be preset based on experience or experiment, or set by the player as required, which is not described here.
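Combining steps S021 and S022 with the preset game time and the separation threshold discussed above, the win/lose decision could be sketched as below; the numeric defaults and the function shape are illustrative assumptions only:

```python
def game_outcome(at_destination, separated_for, elapsed,
                 time_limit=30.0, separation_threshold=0.5):
    """Return "success", "failure", or None while the round is still running.

    at_destination - object model has reached the specified destination position
    separated_for  - seconds the object model has been off the face model
    elapsed        - seconds since the round started
    """
    if at_destination and elapsed <= time_limit:
        return "success"   # S021 with the preset-game-time restriction
    if separated_for >= separation_threshold:
        return "failure"   # S022 with the separation-threshold condition
    if elapsed > time_limit:
        return "failure"   # round timed out without reaching the destination
    return None
```

A brief loss of contact (separated_for below the threshold) does not end the round, matching the case where the object is caught by the face model again.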
In the following, the game process of the game interaction method according to an embodiment of the present invention is described by taking a cookie eating game as an example, in which the player control object is a three-dimensional cookie model:
Fig. 2 is a schematic view of a game interface of the cookie eating game according to an exemplary embodiment of the present invention. When the game is started, the game interface displays a game background, the game time and the game score, and the camera module of the terminal captures images of the player, so as to generate a human face three-dimensional model of the player in the game scene based on the player images and a human face three-dimensional physical model of the player in the physical world space. After the human face three-dimensional model is successfully generated, it may be displayed on the game interface, where the human face three-dimensional model and the player's face may be in mirror symmetry. After the human face three-dimensional model is generated, if the current face pose parameters calculated from the current face feature points satisfy the generation conditions of the player control object, a player control object B and an object three-dimensional physical model may be generated at the corresponding positions (such as the initial positions) of the human face three-dimensional model A and the human face three-dimensional physical model, respectively, as shown in fig. 3, which is a schematic view of the game interface when the player control object is generated according to an exemplary embodiment of the present invention.
Then, after the game starts, the camera module of the terminal continues to capture images of the player. The game system continuously performs face feature recognition on each frame of the player image to update the human face three-dimensional model and the human face three-dimensional physical model in real time, and simulates the physical motion between the human face three-dimensional model and the player control object through the human face three-dimensional physical model and the object three-dimensional physical model, so that the motion process of the player control object is displayed on the game interface. When the player control object B, under the player's control, moves to a target position on the surface of the human face three-dimensional model A, such as into the mouth A1 as shown in fig. 4 (fig. 4 is a schematic view of the game interface at the moment the player control object drops into the mouth of the human face three-dimensional model A according to an exemplary embodiment of the present invention), the game may be regarded as a success; at this time, a game success interface may be shown and/or a game reward may be issued, where the reward may include but is not limited to: bonus game scores or bonus game credits. When the player control object B, under the player's control, falls off the surface of the human face three-dimensional model A, as shown in fig. 5 (fig. 5 is a schematic view of the game interface during the process of the player control object falling from the surface of the human face three-dimensional model A to the ground according to an exemplary embodiment of the present invention), the game may be regarded as a failure; at this time, a game failure interface may be shown and/or a game penalty may be issued, where the penalty may include but is not limited to: not counting the game score or deducting game credits.
In another example, after the player control object has fallen off the surface of the human face three-dimensional model under the player's control, if the game time has not run out, the player may readjust the head so that the current face pose parameters again satisfy the player control object generation conditions, so as to generate a new player control object and continue the game.
In addition, in order to realize sharing or live broadcasting of a game process and increase interaction between a player and a friend or between fans, in an embodiment, the game interaction method may further include:
S061, sending game video pictures of the game process to other clients.
Corresponding to the game interaction method, the invention also provides a game interaction device which can be applied to terminals such as mobile equipment, computers, live broadcast platforms or wearable equipment. The game interaction device comprises:
the display module is used for displaying a game scene when receiving a game starting instruction;
the image acquisition module is used for acquiring a player image and carrying out face feature recognition on the player image to obtain face feature points; the player image comprises a player front image and/or a player side image;
the first establishing module is used for establishing a human face three-dimensional model of the player in the game scene based on the current face feature points;
the determining module is used for determining the current face pose parameters of the player based on the current face feature points; the current face pose parameters comprise an inclination angle of the current face relative to the horizontal direction and an inclination angle of the current face relative to the vertical direction;
the second establishing module is used for establishing a human face three-dimensional physical model based on the current human face three-dimensional model;
the generating module is used for determining whether to generate a player control object in the game scene according to the current face posture parameter; and generating the player control object on the surface of the human face three-dimensional model when the player control object is determined to be generated, wherein the generated player control object is a three-dimensional model; generating an object three-dimensional physical model of the player control object on the surface of the human face three-dimensional physical model;
the processing module is used for updating a current human face three-dimensional model and a current human face three-dimensional physical model based on a current player image after the player control object is generated, and acquiring current motion parameters of the object three-dimensional physical model through acting force between the current human face three-dimensional physical model and the object three-dimensional physical model;
And the control module is used for controlling the motion state of the player control object in the game scene according to the current motion parameters.
In one embodiment, the first establishing module comprises:
a two-dimensional model establishing unit for establishing a two-dimensional model of a human face based on human face feature points obtained by performing human face feature recognition on a front image of a player;
the depth coordinate acquisition unit is used for determining the depth coordinate of the specified face characteristic point in the face two-dimensional model based on the face characteristic point obtained by carrying out face characteristic recognition on the side image of the player;
and the three-dimensional model establishing unit is used for establishing a human face three-dimensional model of the player in the game scene based on the human face two-dimensional model and the depth coordinates of the human face characteristic points specified in the human face two-dimensional model.
In one embodiment, the second establishing module comprises:
and the physical model establishing unit is used for converting the human face three-dimensional model into a human face three-dimensional physical model in a physical world space.
In one embodiment, the processing module comprises:
an updating unit configured to update a current face three-dimensional model and a current face three-dimensional physical model based on a current player image after the player control object is generated;
And the acquisition unit is used for acquiring the current motion parameters of the three-dimensional physical model of the object through the acting force between the current three-dimensional physical model of the face and the three-dimensional physical model of the object.
The acquisition unit includes:
the first calculating subunit is used for calculating the friction force on the object three-dimensional physical model as it moves along the surface of the current human face three-dimensional physical model, based on the mass of the object three-dimensional physical model and the sliding friction coefficient of the surface of the current human face three-dimensional physical model;
the second calculation subunit is used for calculating and obtaining a resultant force applied to the object three-dimensional physical model based on the friction force, the gravity of the object three-dimensional physical model and the supporting force of the current human face three-dimensional physical model to the object three-dimensional physical model;
and the third calculation subunit is used for acquiring the current motion parameter of the three-dimensional physical model of the object according to the resultant force.
In one embodiment, the third calculation subunit comprises:
the acceleration calculating subunit is used for calculating the acceleration of the object three-dimensional physical model based on the resultant force and the mass of the object three-dimensional physical model;
the position parameter calculating subunit is used for calculating to obtain a position parameter of the object three-dimensional physical model in the motion process according to the acceleration and the acting duration of the resultant force;
And the motion parameter acquisition subunit is used for taking the position parameter and the motion moment corresponding to the position parameter as the current motion parameter.
In one embodiment, the control module comprises:
and the control unit is used for controlling the player control object to move to the position indicated by the position parameter corresponding to the current movement moment.
The implementation process of the functions and actions of each module and unit in the above device is detailed in the implementation process of the corresponding steps in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units.
Corresponding to the game interaction method, the invention also provides an electronic device for game interaction, wherein the electronic device can comprise:
a processor;
a memory for storing a computer program executable by the processor;
wherein, the processor implements the game interaction method in any one of the above method embodiments when executing the program.
The embodiment of the game interaction device provided by the embodiment of the invention can be applied to the electronic equipment. Taking software implementation as an example, the device in a logical sense is formed by the processor of the electronic device where it is located reading the corresponding computer program instructions from the nonvolatile memory into memory and running them. At the hardware level, as shown in fig. 6 (a hardware structure diagram of an electronic device according to an exemplary embodiment of the present invention), in addition to the processor, memory, network interface and nonvolatile memory shown in fig. 6, the electronic device may further include other hardware for implementing the foregoing game interaction method, such as a camera module, as well as other hardware according to the actual functions of the electronic device, which is not described in detail here.
Corresponding to the foregoing method embodiments, an embodiment of the present invention further provides a machine-readable storage medium, on which a program is stored, where the program is executed by a processor to implement the game interaction method in any one of the foregoing method embodiments.
Embodiments of the invention may take the form of a computer program product embodied on one or more storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing program code. The machine-readable storage medium may include permanent or non-permanent, removable or non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data.
Additionally, the machine-readable storage medium includes, but is not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology memory, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other non-transmission media that can be used to store information that can be accessed by a computing device.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (11)

1. A game interaction method, comprising:
when a game starting instruction is received, displaying a game scene;
acquiring a player image, and carrying out face feature recognition on the player image to obtain face feature points; the player image comprises a player front image and/or a player side image;
establishing a three-dimensional model of the face of the player in the game scene and determining the current face posture parameters of the player based on the current face characteristic points; the current face posture parameters comprise an inclination angle of the current face relative to the horizontal direction and an inclination angle of the current face relative to the vertical direction;
establishing a human face three-dimensional physical model based on the current human face three-dimensional model;
Determining whether to generate a player control object in the game scene according to the current face pose parameters;
when the player control object is determined to be generated, generating the player control object on the surface of the human face three-dimensional model, wherein the generated player control object is a three-dimensional model; generating an object three-dimensional physical model of the player control object on the surface of the human face three-dimensional physical model;
after the player control object is generated, updating a current human face three-dimensional model and a current human face three-dimensional physical model based on a current player image, and acquiring current motion parameters of the object three-dimensional physical model through acting force between the current human face three-dimensional physical model and the object three-dimensional physical model, wherein the acting force comprises friction force applied by the object three-dimensional physical model when the surface of the current human face three-dimensional physical model moves;
controlling the motion state of the player control object in the game scene according to the current motion parameters.
2. The method of claim 1, wherein establishing a three-dimensional model of a player's face in the game scene based on current face feature points comprises:
establishing a human face two-dimensional model based on human face feature points obtained by carrying out human face feature recognition on the front image of the player;
Determining depth coordinates of specified human face characteristic points in the human face two-dimensional model based on human face characteristic points obtained by performing human face characteristic recognition on a side image of a player;
establishing a three-dimensional face model of a player in the game scene based on the two-dimensional face model and depth coordinates of face feature points specified in the two-dimensional face model;
establishing a human face three-dimensional physical model based on the current human face three-dimensional model, comprising the following steps:
and converting the human face three-dimensional model into a human face three-dimensional physical model in a physical world space.
3. The method of claim 1, wherein obtaining the current motion parameters of the three-dimensional physical model of the object through the acting force between the three-dimensional physical model of the current face and the three-dimensional physical model of the object comprises:
calculating the friction force on the object three-dimensional physical model as it moves along the surface of the current human face three-dimensional physical model, based on the mass of the object three-dimensional physical model and the sliding friction coefficient of the surface of the current human face three-dimensional physical model;
calculating to obtain a resultant force applied to the three-dimensional physical model of the object based on the friction force, the gravity of the three-dimensional physical model of the object and the supporting force of the current human face three-dimensional physical model to the three-dimensional physical model of the object;
And acquiring the current motion parameters of the three-dimensional physical model of the object according to the resultant force.
4. The method of claim 3, wherein obtaining current motion parameters of the three-dimensional physical model of the object from the resultant force comprises:
calculating the acceleration of the three-dimensional physical model of the object based on the resultant force and the mass of the three-dimensional physical model of the object;
calculating to obtain position parameters of the object three-dimensional physical model in the motion process according to the acceleration and the acting duration of the resultant force;
and taking the position parameter and the movement moment corresponding to the position parameter as the current movement parameter.
5. The method of claim 4, wherein controlling the state of motion of the player control object in the game scene based on the current motion parameters comprises:
the player control object is controlled to move to a position indicated by the position parameter corresponding to the current movement time.
6. The method of claim 1, further comprising:
and when the object three-dimensional physical model is detected to move to the specified target position in the human face three-dimensional physical model, updating a game interface, wherein the updated game interface is used for indicating that the game is successful.
7. The method of claim 1, further comprising:
and when the separation of the object three-dimensional physical model and the face three-dimensional physical model is detected, updating a game interface, wherein the updated game interface is used for indicating the game failure.
8. The method of claim 1, further comprising:
and sending the game video pictures in the game process to other clients.
9. A game interaction apparatus, comprising:
the display module is used for displaying a game scene when receiving a game starting instruction;
the image acquisition module is used for acquiring a player image and carrying out face feature recognition on the player image to obtain face feature points; the player image comprises a player front image and/or a player side image;
the first establishing module is used for establishing a three-dimensional face model of the player in the game scene based on the current face characteristic point;
the determining module is used for determining the current face posture parameters of the player based on the current face characteristic points; the current face posture parameters comprise an inclination angle of the current face relative to the horizontal direction and an inclination angle of the current face relative to the vertical direction;
The second establishing module is used for establishing a human face three-dimensional physical model based on the current human face three-dimensional model;
a generating module for determining whether to generate a player control object in the game scene according to the current face pose parameters; and means for generating the player-controlled object on a surface of the three-dimensional model of the face when it is determined that the player-controlled object is generated, the generated player-controlled object being a three-dimensional model; generating an object three-dimensional physical model of the player control object on the surface of the human face three-dimensional physical model;
the processing module is used for updating a current human face three-dimensional model and a current human face three-dimensional physical model based on a current player image after the player control object is generated, and acquiring current motion parameters of the object three-dimensional physical model through acting force between the current human face three-dimensional physical model and the object three-dimensional physical model, wherein the acting force comprises friction force applied to the object three-dimensional physical model when the surface of the current human face three-dimensional physical model moves;
and the control module is used for controlling the motion state of the player control object in the game scene according to the current motion parameters.
10. An electronic device, comprising:
A processor;
a memory for storing a computer program executable by the processor;
wherein the processor implements the method of any one of claims 1 to 8 when executing the program.
11. A machine-readable storage medium having a program stored thereon; a program which, when executed by a processor, implements the method of any one of claims 1 to 8.
CN201811592510.7A 2018-12-25 2018-12-25 Game interaction method and device, electronic equipment and storage medium Active CN109692476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811592510.7A CN109692476B (en) 2018-12-25 2018-12-25 Game interaction method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109692476A CN109692476A (en) 2019-04-30
CN109692476B true CN109692476B (en) 2022-07-01

Family

ID=66232062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811592510.7A Active CN109692476B (en) 2018-12-25 2018-12-25 Game interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109692476B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862348B (en) * 2020-07-30 2024-04-30 深圳市腾讯计算机***有限公司 Video display method, video generation method, device, equipment and storage medium
CN115499674A (en) * 2022-09-15 2022-12-20 广州方硅信息技术有限公司 Live broadcast room interactive picture presentation method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499132A (en) * 2009-03-12 2009-08-05 广东药学院 Three-dimensional transformation search method for extracting characteristic points in human face image
JP2013017587A (en) * 2011-07-08 2013-01-31 Namco Bandai Games Inc Game system, program, and information storage medium
CN106909213A (en) * 2015-12-23 2017-06-30 掌赢信息科技(上海)有限公司 Control instruction generation method based on face recognition, and electronic device
CN107613310A (en) * 2017-09-08 2018-01-19 广州华多网络科技有限公司 Live streaming method and apparatus, and electronic device
CN109045688A (en) * 2018-07-23 2018-12-21 广州华多网络科技有限公司 Game interaction method, apparatus, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109692476A (en) 2019-04-30

Similar Documents

Publication Publication Date Title
US10740951B2 (en) Foveal adaptation of particles and simulation models in a foveated rendering system
US8956227B2 (en) Storage medium recording image processing program, image processing device, image processing system and image processing method
EP3003517B1 (en) Image rendering responsive to user actions in head mounted display
TWI469813B (en) Tracking groups of users in motion capture system
US20090202114A1 (en) Live-Action Image Capture
JP7050883B2 (en) Foveal rendering optimization, delayed lighting optimization, particle foveal adaptation, and simulation model
US20230050933A1 (en) Two-dimensional figure display method and apparatus for virtual object, device, and storage medium
CN110832442A (en) Optimized shading and adaptive mesh skin in point-of-gaze rendering systems
US11107183B2 (en) Adaptive mesh skinning in a foveated rendering system
US20230285857A1 (en) Video frame rendering method and apparatus
CN109529317B (en) Game interaction method and device and mobile terminal
JP7503122B2 Method and system for directing user attention to a location-based gameplay companion application
JP2023126292A Information display method, apparatus, device, and program
CN109692476B (en) Game interaction method and device, electronic equipment and storage medium
CN113209618B (en) Virtual character control method, device, equipment and medium
CN114053688A Online motion-sensing dance battle method and device, computer equipment and storage medium
CN113577774A (en) Virtual object generation method and device, electronic equipment and storage medium
JP2023041670A (en) Moving image distribution system, program and information processing method
CN112156454B (en) Virtual object generation method and device, terminal and readable storage medium
JP2023174714A (en) Program, image generation apparatus, and image generation method
US10668379B2 (en) Computer-readable recording medium, computer apparatus, image display method
CN113599829B (en) Virtual object selection method, device, terminal and storage medium
WO2024131391A1 (en) Virtual environment display method and apparatus, device, medium and program product
CN112999657B (en) Method, device, equipment and medium for displaying phantom of virtual character
US20240198227A1 (en) Program, information processing method, and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210119

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant