CN112190921A - Game interaction method and device - Google Patents

Game interaction method and device

Info

Publication number
CN112190921A
CN112190921A (application CN202011120102.9A)
Authority
CN
China
Prior art keywords: information, face, game, user, facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011120102.9A
Other languages
Chinese (zh)
Inventor
刘峰
蒋楠
刘梦
章靖宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Online Game Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Online Game Technology Co Ltd filed Critical Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN202011120102.9A
Publication of CN112190921A
Pending legal-status Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087 Features of games using an electronically generated display having two or more dimensions characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a game interaction method and device. The game interaction method includes: acquiring a game character and collecting facial information of a user; determining a game character expression and a user emotion according to the user facial information and a preset facial gesture set; adding the game character expression to the face of the game character for display; and executing a corresponding game interaction instruction according to the user emotion. With this method, the facial information of the game user is acquired in real time and the facial information of the game character is adjusted accordingly, so that the facial expression of the game user directly affects the game, which strengthens the user's sense of immersion and improves the game experience. In addition, the user's real-time emotion is analyzed and real-time interaction with the virtual character in the game is performed according to that emotion, which improves the playability and interactivity of the game and further improves the user's game experience.

Description

Game interaction method and device
Technical Field
The present application relates to the field of internet technologies, and in particular, to a game interaction method and apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of internet technology, Augmented Reality (AR) technology has also developed rapidly and has been applied to the field of games. Using AR technology, virtual game characters can be displayed in real environments through media such as mobile phones and game consoles. AR games combine games with AR technology in three respects: location services, image recognition, and data processing, and the breakthroughs of AR games in gameplay and form bring a brand-new game experience to players.
In existing AR games on the market, the behavior of a game character comes entirely from the game's own settings, so interactivity with the game user is poor: the user's facial expression cannot directly affect scenes in the game, interaction with the game character cannot be achieved, and the high expectations that game users have for games cannot be met.
Therefore, how to solve the above problems has become an urgent issue for those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide a game interaction method and apparatus, a computing device, and a computer-readable storage medium, so as to solve technical defects in the prior art.
According to a first aspect of embodiments of the present application, there is provided a game interaction method, including:
acquiring game roles and collecting face information of a user;
determining game role expressions and user emotions according to the user facial information and a preset facial gesture set;
adding the game character expression to the face of the game character for display;
and executing a corresponding game interaction instruction according to the user emotion.
Optionally, the collecting user face information includes:
and calling an image acquisition device to acquire the face information of the user.
Optionally, determining the game character expression according to the user facial information and a preset facial gesture set includes:
extracting at least one piece of local feature information from the user facial information;
analyzing each piece of local feature information against the facial gestures in the preset facial gesture set, and determining weight information of each facial gesture;
and determining the game character expression according to the weight information of each facial gesture.
Optionally, the local feature information includes local feature information of at least one of the eyes, mouth, jaw, eyebrows, cheeks, nose, and tongue;
analyzing each piece of local feature information against the facial gestures in the preset facial gesture set and determining the weight information of each facial gesture includes:
comparing the facial gestures in the preset facial gesture set with each piece of local feature information respectively, and determining the weight information that each facial gesture occupies in the user facial information.
Optionally, determining the user emotion according to the user facial information and a preset facial gesture set includes:
extracting eye information and mouth information from the user facial information;
mapping the eye information and the mouth information to a base face model to generate virtual face information;
and performing emotion analysis on the virtual face information to determine the emotion of the user.
Optionally, adding the game character expression to the face of the game character for display includes:
adding the game character expression to the face of the game character;
and comparing and checking how each facial pose matches the game character's face, and adjusting any facial pose that does not match, so that each facial pose matches the face of the game character.
According to a second aspect of embodiments of the present application, there is provided a game interaction apparatus, comprising:
the acquisition and collection module is configured to acquire game roles and collect user face information;
the determining module is configured to determine game character expressions and user emotions according to the user facial information and a preset facial gesture set;
the adding and displaying module is configured to add the expression of the game character to the face of the game character for displaying;
and the execution module is configured to execute the corresponding game interaction instruction according to the emotion of the user.
Optionally, the acquisition and acquisition module is further configured to invoke an image acquisition device to acquire the facial information of the user.
Optionally, the determining module is further configured to extract at least one local feature information of the user face information; analyzing each local feature information according to a face gesture in a preset face gesture set, and determining weight information of each face gesture; and determining the expression of the game character according to the weight information of each facial gesture.
Optionally, the local characteristic information includes at least one of local characteristic information of eyes, mouth, jaw, eyebrows, cheek, nose, and tongue;
the determining module is further configured to compare the facial gestures in a preset facial gesture set with each local feature information respectively, and determine weight information of each facial gesture in the user facial information.
Optionally, the determining module is further configured to extract eye information and mouth information of the face information of the user; mapping the eye information and the mouth information to a base face model to generate virtual face information; and performing emotion analysis on the virtual face information to determine the emotion of the user.
Optionally, the adding and presenting module is further configured to add the game character expression to the face of the game character; and to compare and check how each facial pose matches the game character's face, and adjust any facial pose that does not match, so that each facial pose matches the face of the game character.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the game interaction method when executing the instructions.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the game interaction method.
In the embodiments of the present application, a game character is acquired and the facial information of the user is collected; a game character expression and a user emotion are determined according to the user facial information and a preset facial gesture set, the game character expression is added to the face of the game character for display, and a corresponding game interaction instruction is executed according to the user emotion. The facial information of the game user is acquired in real time and the facial information of the game character is adjusted accordingly, so that the facial expression of the game user directly affects the game, which strengthens the user's sense of immersion and improves the game experience.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flow chart of a game interaction method provided by an embodiment of the application;
FIG. 3 is a schematic diagram of a game interaction method provided by another embodiment of the present application;
fig. 4 is a schematic structural diagram of a game interaction device provided in an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination", depending on the context.
In the present application, a game interaction method and apparatus, a computing device and a computer-readable storage medium are provided, which are described in detail in the following embodiments one by one.
FIG. 1 shows a block diagram of a computing device 100 according to an embodiment of the present application. The components of the computing device 100 include, but are not limited to, memory 110 and processor 120. The processor 120 is coupled to the memory 110 via a bus 130 and a database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables the computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-mentioned components of the computing device 100 and other components not shown in fig. 1 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the game interaction method shown in fig. 2. FIG. 2 shows a flow chart of a game interaction method according to an embodiment of the present application, including steps 202 to 208.
Step 202: and acquiring the game role and collecting the face information of the user.
A user opens the game interface through a terminal to enter the game. The game character created by the user in the game, which is controlled by the user, is acquired, and at the same time the facial information of the user is collected by invoking the image collection device of the terminal.
The terminal can be a smart terminal with an image acquisition function, such as a mobile phone, tablet computer or notebook computer, that can also run the corresponding game program; the terminal is not specifically limited in this application.
The facial information of the user is obtained in real time through an image acquisition device such as a front-facing camera built into the terminal. If there are multiple faces in the camera image, the largest or most clearly identifiable face is selected.
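As a minimal sketch of this capture step (assuming an OpenCV-style camera pipeline, which the patent does not prescribe; the Haar-cascade detector here merely stands in for whatever face tracker the terminal's AR interface provides), a Python routine that grabs a frame from the front-facing camera and keeps the largest detected face might look like this:

import cv2

# Haar-cascade face detector shipped with OpenCV; an assumption, not the patent's own tracker.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def capture_user_face(camera_index=0):
    """Grab one frame from the front camera and return the largest detected face crop."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # When several faces are visible, keep the largest one, as described above.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return frame[y:y + h, x:x + w]

Calling capture_user_face() once per frame inside the game loop yields the per-frame face image that the later steps analyze.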
In an embodiment provided by the application, a user logs in a game through a mobile phone, game roles in the game are controlled to perform corresponding game operations, and facial information of the user is collected in real time through a front-facing camera of the mobile phone.
Step 204: and determining the expression of the game role and the emotion of the user according to the facial information of the user and a preset facial gesture set.
Augmented Reality (AR) technology computes the position and angle of a camera image in real time and overlays corresponding images, videos, and 3D models, with the aim of bringing the virtual world onto a screen in the real world for interaction. Each platform vendor publishes its own AR interface to help developers implement AR functionality more easily.
The AR interface provides functions for processing the content of the user's face and matching the user's facial expression. Through the terminal's front-facing camera, developers can recognize and track the movement and expression of the user's face and detect its pose, topology, and expression.
The preset facial gesture set comprises basic poses that describe the facial appearance and identify specific facial features, including 52 basic poses corresponding to the eyes, mouth, jaw, eyebrows, cheeks, nose, and tongue.
Optionally, determining the game character expression according to the user facial information and the preset facial gesture set includes: extracting at least one piece of local feature information from the user facial information; analyzing each piece of local feature information against the facial gestures in the preset facial gesture set, and determining weight information of each facial gesture; and determining the game character expression according to the weight information of each facial gesture.
In practical application, after the facial information of the user is obtained, the local feature information corresponding to the basic poses in the preset facial gesture set is extracted from the user facial information, for example local feature information for at least one of the eyes, mouth, jaw, eyebrows, cheeks, nose and tongue, where the eye feature information is divided into left eye and right eye. Each piece of local feature information is then compared with the corresponding basic poses in the preset facial gesture set: the local feature information of the left eye is compared with the basic poses of the left eye (for example, a left-eye closing coefficient, a coefficient for left-eye skin movement consistent with downward gaze, and the like), the local feature information of the right eye is compared with the basic poses of the right eye, the local feature information of the mouth and jaw is compared with the basic poses of the mouth and jaw (for example, a forward-movement coefficient of the mouth, an opening coefficient of the jaw, and the like), and so on, until every piece of local feature information of the user facial information has been compared.
After the comparison of each piece of local feature information of the user facial information is completed, the weight information that each facial gesture occupies in the user facial information is determined, and the game character expression is determined according to the weight information occupied by each specific facial feature. A sketch of this weighting and expression-selection step is given below.
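The following is a small illustrative Python sketch of how per-pose weights could be turned into a character expression. The pose names (jawOpen, browInnerUp, and so on) and the scoring rules are assumptions for illustration; the patent only states that each local feature is compared against the 52 basic poses and assigned a weight:

def score_expression(pose_weights):
    """pose_weights maps each basic facial pose to its weight in the captured
    face, e.g. {"jawOpen": 0.8, "browInnerUp": 0.7, ...} with values in [0, 1]."""
    surprise = (pose_weights.get("jawOpen", 0.0)
                + pose_weights.get("browInnerUp", 0.0)
                + pose_weights.get("eyeWideLeft", 0.0)
                + pose_weights.get("eyeWideRight", 0.0)) / 4
    smile = (pose_weights.get("mouthSmileLeft", 0.0)
             + pose_weights.get("mouthSmileRight", 0.0)) / 2
    # Pick the candidate expression with the highest aggregate weight.
    candidates = {"surprised": surprise, "happy": smile, "neutral": 0.2}
    return max(candidates, key=candidates.get)

With weights like those in the surprise example that follows (open eyes, open mouth, raised eyebrows), score_expression would return "surprised".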
In an embodiment provided by the present application, following the above example, the collected facial information of the user shows that the eyes are open, the mouth is open, the jaw moves downward, and both the inner and outer parts of the two eyebrows move upward. By comparing against the 52 basic poses in the preset facial gesture set, the weight values of the basic poses related to the left eye are determined, and in the same way the weight values of the basic poses related to the right eye, the mouth, the jaw, the eyebrows, the cheeks, the nose and the tongue are determined. It is thereby determined that the user's expression is one of surprise.
Optionally, determining the emotion of the user according to the face information of the user and a preset face gesture set, including: extracting eye information and mouth information of the user face information; mapping the eye information and the mouth information to a base face model to generate virtual face information; and performing emotion analysis on the virtual face information to determine the emotion of the user.
When the user emotion is determined according to the user facial information and the preset facial gesture set, the eye and mouth information of the user facial information is extracted, the eye information and the mouth information are mapped onto a virtual base face model to generate virtual face information, and the user emotion is determined by performing emotion analysis on the virtual face information.
Through an AR interface published by a platform vendor, the method can accurately acquire the facial information of the user and process local feature information within it, for example extracting the eye and mouth information of the user's face, and then analyze the user's expression to determine the user's emotion. The user's expression does not need to be detected precisely; the emotion can be determined quickly and in good time through basic matching alone, as sketched below.
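A minimal Python sketch of this emotion step, under the assumption that the base face model can be represented as a neutral set of pose weights onto which the extracted eye and mouth coefficients are written; the pose names and thresholds are illustrative assumptions, not values given in the patent:

NEUTRAL_BASE = {"eyeBlinkLeft": 0.0, "eyeBlinkRight": 0.0, "eyeWideLeft": 0.0,
                "jawOpen": 0.0, "mouthSmileLeft": 0.0, "mouthSmileRight": 0.0}

def estimate_emotion(eye_info, mouth_info):
    """Map eye and mouth coefficients onto the base face model, then classify."""
    virtual_face = dict(NEUTRAL_BASE)      # start from the base face model
    virtual_face.update(eye_info)          # map eye information onto it
    virtual_face.update(mouth_info)        # map mouth information onto it
    smile = (virtual_face["mouthSmileLeft"] + virtual_face["mouthSmileRight"]) / 2
    if smile > 0.5:
        return "happy"
    if virtual_face["jawOpen"] > 0.6 and virtual_face["eyeWideLeft"] > 0.5:
        return "frightened"
    return "neutral"

The coarse thresholds reflect the point made above: the emotion only needs basic matching, not a precise reconstruction of the expression.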
Step 206: and adding the expression of the game character to the face of the game character for display.
The obtained game character expression is added to the face of the game character for display. Specifically, the game character expression is added to the face of the game character; how each facial pose matches the game character's face is compared and checked, and any facial pose that does not match is adjusted so that each facial pose matches the face of the game character.
In practical application, the basic shape of the facial information of the game character controlled by the user is generated in advance by face pinching. Face pinching is the operation by which the user customizes a face for the controlled game character in the game, and each user can create a different face for the game character according to personal preference. The basic facial information of the game character adds a control layer on top of the skinned skeleton, and the face-pinching material is expressed as static material (such as makeup) and dynamic material (such as dimples, eyebrows and other expression decorations).
Adding the game character expression to the face of the game character means superimposing the expression onto the original face-pinching data. The topology of the user's facial expression is standardized through Warp 3D to ensure that the vertex-order topology is consistent across different face shapes, and Morph information corresponding to each facial pose is generated. Corresponding facial skeleton pose information is then generated in 3ds Max according to the weight value and the Morph information of each facial pose. By comparing and checking the facial poses one by one, whether the Morph corresponding to each facial pose matches the corresponding facial skeleton pose information is verified. Where the facial skeleton pose information and the Morph match, the corresponding Morph information of the game character's face is overwritten with the Morph information of the facial pose. Where they do not match, the facial skeleton pose information is brought into agreement with the Morph information by adjusting the positions and weights of the skin points.
In the embodiment provided by the present application, following the above example and taking the local feature information of the mouth as an example, the facial pose corresponding to the mouth is an open mouth. Morph information Morph1 corresponding to the open mouth is generated through Warp 3D, skin points and initial weight information are placed according to Morph1, and facial skeleton pose information Pose1 corresponding to the mouth is generated. By comparing the facial pose corresponding to the mouth, whether Morph1 and Pose1 match is checked. If they match, Morph1 is superimposed onto the Morph of the mouth region of the game character's face; if they do not match, the skin point positions and weights are adjusted until Morph1 and Pose1 match, and Morph1 is then superimposed onto the mouth region of the game character's face. A conceptual sketch of this matching-and-adjustment loop is given below.
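The following conceptual Python sketch reduces Morph targets and bone-driven poses to per-vertex displacement arrays. In practice this work is done in the Warp 3D / 3ds Max pipeline rather than in Python, and the tolerance, adjustment rule and data layout are assumptions introduced only to illustrate the idea of nudging skin weights until pose and Morph agree:

import numpy as np

def match_pose_to_morph(morph, bone_pose, skin_weights,
                        tol=1e-3, step=0.1, max_iter=100):
    """morph and bone_pose are (N, 3) vertex displacement arrays; skin_weights is (N,).
    Nudge the skin weights until the skinned bone pose reproduces the Morph target."""
    for _ in range(max_iter):
        skinned = skin_weights[:, None] * bone_pose   # displacement produced by the bones
        error = morph - skinned
        if np.abs(error).max() < tol:
            return skin_weights, True                 # matched: the Morph can overwrite the face
        # Adjust skin point weights toward the Morph, as described above.
        skin_weights = skin_weights + step * (error * bone_pose).sum(axis=1)
    return skin_weights, False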
Step 208: and executing a corresponding game interaction instruction according to the user emotion.
The corresponding game interaction instruction is executed according to the user emotion; the specific interaction follows the settings of the game. For example, if the obtained user emotion is happiness, a game interaction instruction corresponding to happiness, such as applauding, can be obtained and executed in the game; if the obtained user emotion is fright, a game interaction instruction corresponding to fright, such as taking two steps back, can be obtained and executed in the game. A minimal dispatch sketch is shown below.
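In the sketch, the emotion names and commands are the examples from the paragraph above, and execute_game_command is a hypothetical hook into the game engine rather than an interface defined by the patent:

EMOTION_TO_COMMAND = {
    "happy": "applaud",
    "frightened": "step_back_twice",
}

def run_interaction(emotion, execute_game_command):
    """Translate the detected user emotion into a game interaction instruction."""
    command = EMOTION_TO_COMMAND.get(emotion)
    if command is not None:
        execute_game_command(command)   # the in-game character reacts to the user's emotion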
According to the game interaction method provided by the embodiments of the present application, a game character is acquired and the facial information of the user is collected; a game character expression and a user emotion are determined according to the user facial information and a preset facial gesture set, the game character expression is added to the face of the game character for display, and a corresponding game interaction instruction is executed according to the user emotion. The facial information of the game user is acquired in real time and the facial information of the game character is adjusted accordingly, so that the facial expression of the game user directly affects the game, which strengthens the user's sense of immersion and improves the game experience.
Fig. 3 shows a game interaction method according to an embodiment of the present application, which is described by taking the example of obtaining user face information for game interaction, and includes steps 302 to 320.
Step 302: and acquiring the game role, and calling image acquisition equipment to acquire the facial information of the user.
In the embodiment provided by the application, a user starts a game through a mobile phone, obtains a game role A of the user in the game, and calls a front-facing camera of the mobile phone to collect data of facial information of the user.
Step 304: at least one local feature information of the user face information is extracted.
In the embodiment provided by the application, the local characteristic information of the left eye, the right eye, the mouth, the lower jaw, the eyebrow, the cheek, the nose and the tongue of the face information of the user is extracted.
Step 306: and analyzing each local feature information according to the face gestures in a preset face gesture set, and determining the weight information of each face gesture.
In the embodiment provided by the present application, the preset facial pose set includes 52 basic poses for processing facial appearance, wherein 7 basic poses are corresponding to the left eye, 7 basic poses are corresponding to the right eye, 27 basic poses are corresponding to the mouth and the lower jaw, 10 basic poses are corresponding to the eyebrow, the cheek and the nose, and 1 basic pose is corresponding to the tongue.
After local feature information of a left eye in the face information is compared with 7 basic postures corresponding to a left eye in a face posture set one by one, weight information of each basic posture in the 7 basic postures corresponding to the left eye is determined, and by analogy, weight information of each basic posture in the 7 basic postures corresponding to the right eye is determined in turn, weight information of each basic posture in the 27 basic postures corresponding to the mouth and the jaw is determined, weight information of each basic posture in 10 basic postures corresponding to the eyebrows, the cheeks and the nose is determined, and weight information of 1 basic posture corresponding to the tongue is determined.
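As an illustration of the breakdown above, a small sketch groups the 52 basic poses by facial region so that each region's local features are compared only against that region's poses; the region names and the compare callback are assumptions introduced for illustration:

POSES_PER_REGION = {
    "left_eye": 7,
    "right_eye": 7,
    "mouth_and_jaw": 27,
    "brow_cheek_nose": 10,
    "tongue": 1,
}  # 7 + 7 + 27 + 10 + 1 = 52 basic poses in total

def weights_by_region(local_features, compare):
    """Return {region: [weight, ...]} by comparing each region's local features
    against each of that region's basic poses (compare is a scoring callback)."""
    return {region: [compare(local_features[region], pose_index)
                     for pose_index in range(count)]
            for region, count in POSES_PER_REGION.items()}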
Step 308: and determining the expression of the game character according to the weight information of each facial gesture.
In the embodiment provided by the present application, the topology of the user's facial expression is standardized through Warp 3D to ensure that the vertex-order topology is consistent across different face shapes, and Morph information corresponding to each basic pose is generated; corresponding facial skeleton pose information is then generated in 3ds Max according to the weight value and the Morph information of each facial pose, so that the game character expression is determined.
Step 310: adding the game character expression to the face of the game character, respectively comparing and checking the matching condition of each face posture and the face of the game character, and adjusting the face posture which does not match the face of the game character to enable each face posture to be matched with the face of the game character.
In the embodiment provided by the present application, the matching between the Morph information corresponding to each basic pose and the corresponding facial skeleton pose information is checked respectively. Where the facial skeleton pose information and the Morph information do not match, the skin point positions and weights are adjusted until the match succeeds. The Morph information corresponding to each successfully matched basic pose then overwrites the corresponding Morph information of the game character's face, completing the operation of adding the game character expression to the face of the game character.
Step 312: the eye information and the mouth information are mapped to the base face model to generate virtual face information.
In the embodiment provided by the application, the eye information and the local feature information of the mouth information of the face information of the user are obtained, and the eye information and the mouth information are correspondingly mapped onto a virtual basic face model to generate virtual face information.
Step 314: and performing emotion analysis on the virtual face information to determine the emotion of the user.
In the embodiment provided by the application, the emotion of the user is determined by performing emotion analysis on the user through the virtual face information.
Step 316: and executing a corresponding game interaction instruction according to the user emotion.
In the embodiment provided by the application, the corresponding game interaction instruction is obtained and executed through the emotion of the user, and the function of interaction with the virtual character in the game through the emotion of the user is realized.
According to the game interaction method provided by this embodiment of the present application, a game character is acquired and the facial information of the user is collected; a game character expression and a user emotion are determined according to the user facial information and a preset facial gesture set, the game character expression is added to the face of the game character for display, and a corresponding game interaction instruction is executed according to the user emotion. The facial information of the game user is acquired in real time and the facial information of the game character is adjusted accordingly, so that the facial expression of the game user directly affects the game, the user's sense of immersion is strengthened, and the game experience is improved.
Corresponding to the above method embodiment, the present application further provides a game interaction apparatus embodiment, and fig. 4 shows a schematic structural diagram of the game interaction apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an acquisition module 402 configured to acquire a game character and acquire user facial information;
a determining module 404 configured to determine game character expressions and user emotions according to the user facial information and a preset facial gesture set;
an adding and displaying module 406, configured to add the game character expression to the face of the game character for displaying;
an execution module 408 configured to execute the corresponding game interaction instruction according to the user emotion.
Optionally, the acquiring and capturing module 402 is further configured to invoke an image capturing device to capture facial information of the user.
Optionally, the determining module 404 is further configured to extract at least one local feature information of the user face information; analyzing each local feature information according to a face gesture in a preset face gesture set, and determining weight information of each face gesture; and determining the expression of the game character according to the weight information of each facial gesture.
Optionally, the local characteristic information includes at least one of local characteristic information of eyes, mouth, jaw, eyebrows, cheek, nose, and tongue;
the determining module 404 is further configured to compare the facial poses in a preset facial pose set with each local feature information, and determine weight information of each facial pose in the user facial information.
Optionally, the determining module 404 is further configured to extract eye information and mouth information of the face information of the user; mapping the eye information and the mouth information to a base face model to generate virtual face information; and performing emotion analysis on the virtual face information to determine the emotion of the user.
Optionally, the adding and presenting module 406 is further configured to add the game character expression to the face of the game character; and to compare and check how each facial pose matches the game character's face, and adjust any facial pose that does not match, so that each facial pose matches the face of the game character.
The game interaction device provided by the embodiment of the present application acquires a game character and collects the facial information of the user; it determines a game character expression and a user emotion according to the user facial information and a preset facial gesture set, adds the game character expression to the face of the game character for display, and executes a corresponding game interaction instruction according to the user emotion. The facial information of the game user is acquired in real time and the facial information of the game character is adjusted accordingly, so that the facial expression of the game user directly affects the game, the user's sense of immersion is strengthened, and the game experience is improved.
There is also provided in an embodiment of the present application a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the game interaction method when executing the instructions.
An embodiment of the present application also provides a computer readable storage medium storing computer instructions, which when executed by a processor, implement the steps of the game interaction method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the game interaction method belong to the same concept, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the game interaction method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code which may be in the form of source code, object code, an executable file or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (14)

1. A game interaction method, comprising:
acquiring game roles and collecting face information of a user;
determining game role expressions and user emotions according to the user facial information and a preset facial gesture set;
adding the game character expression to the face of the game character for display;
and executing a corresponding game interaction instruction according to the user emotion.
2. A game interaction method as in claim 1, wherein capturing user facial information comprises:
and calling an image acquisition device to acquire the face information of the user.
3. The game interaction method of claim 1, wherein determining game character expressions from the user facial information and a set of preset facial gestures comprises:
extracting at least one local feature information of the user face information;
analyzing each local feature information according to a face gesture in a preset face gesture set, and determining weight information of each face gesture;
and determining the expression of the game character according to the weight information of each facial gesture.
4. A game interaction method as in claim 3, wherein the local feature information includes at least one of local feature information of an eye, a mouth, a chin, an eyebrow, a cheek, a nose, and a tongue;
analyzing each local feature information according to a face gesture in a preset face gesture set, and determining weight information of each face gesture, wherein the weight information comprises:
and comparing the facial gestures in a preset facial gesture set with each local feature information respectively, and determining the weight information of each facial gesture in the user facial information.
5. A game interaction method as in claim 1, wherein determining a user emotion from the user facial information and a set of preset facial gestures comprises:
extracting eye information and mouth information of the user face information;
mapping the eye information and the mouth information to a base face model to generate virtual face information;
and performing emotion analysis on the virtual face information to determine the emotion of the user.
6. The game interaction method of claim 1, wherein adding the game character expression to the game character's face for presentation comprises:
adding the game character expression to the face of the game character;
and respectively comparing and checking how each facial pose matches the face of the game character, and adjusting any facial pose that does not match the face of the game character, so that each facial pose matches the face of the game character.
7. A game interaction apparatus, comprising:
the acquisition and collection module is configured to acquire game roles and collect user face information;
the determining module is configured to determine game character expressions and user emotions according to the user facial information and a preset facial gesture set;
the adding and displaying module is configured to add the expression of the game character to the face of the game character for displaying;
and the execution module is configured to execute the corresponding game interaction instruction according to the emotion of the user.
8. The game interaction apparatus of claim 7,
the acquisition module is further configured to invoke an image acquisition device to acquire user facial information.
9. The game interaction apparatus of claim 7,
the determination module further configured to extract at least one local feature information of the user face information; analyzing each local feature information according to a face gesture in a preset face gesture set, and determining weight information of each face gesture; and determining the expression of the game character according to the weight information of each facial gesture.
10. The game interaction apparatus of claim 9, wherein the local feature information comprises at least one of local feature information of an eye, a mouth, a chin, an eyebrow, a cheek, a nose, and a tongue;
the determining module is further configured to compare the facial gestures in a preset facial gesture set with each local feature information respectively, and determine weight information of each facial gesture in the user facial information.
11. The game interaction apparatus of claim 7,
the determination module is further configured to extract eye information and mouth information of the user face information; mapping the eye information and the mouth information to a base face model to generate virtual face information; and performing emotion analysis on the virtual face information to determine the emotion of the user.
12. The game interaction apparatus of claim 7,
the adding and showing module is further configured to add the game character expression to the face of the game character; and to respectively compare and check how each facial pose matches the face of the game character, and adjust any facial pose that does not match the face of the game character, so that each facial pose matches the face of the game character.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-6 when executing the instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 6.
CN202011120102.9A 2020-10-19 2020-10-19 Game interaction method and device Pending CN112190921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011120102.9A CN112190921A (en) 2020-10-19 2020-10-19 Game interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011120102.9A CN112190921A (en) 2020-10-19 2020-10-19 Game interaction method and device

Publications (1)

Publication Number Publication Date
CN112190921A true CN112190921A (en) 2021-01-08

Family

ID=74009444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011120102.9A Pending CN112190921A (en) 2020-10-19 2020-10-19 Game interaction method and device

Country Status (1)

Country Link
CN (1) CN112190921A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7386799B1 (en) * 2002-11-21 2008-06-10 Forterra Systems, Inc. Cinematic techniques in avatar-centric communication during a multi-user online simulation
CN108874114A (en) * 2017-05-08 2018-11-23 腾讯科技(深圳)有限公司 Realize method, apparatus, computer equipment and the storage medium of virtual objects emotion expression service
CN108564641A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Expression method for catching and device based on UE engines
CN110599573A (en) * 2019-09-03 2019-12-20 电子科技大学 Method for realizing real-time human face interactive animation based on monocular camera
CN110750161A (en) * 2019-10-25 2020-02-04 郑子龙 Interactive system, method, mobile device and computer readable medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022217826A1 (en) * 2021-04-14 2022-10-20 南方科技大学 Game interaction method and system based on close contact, server, and storage medium
CN113050859A (en) * 2021-04-19 2021-06-29 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN113050859B (en) * 2021-04-19 2023-10-24 北京市商汤科技开发有限公司 Driving method, device and equipment of interaction object and storage medium
CN113499584A (en) * 2021-08-02 2021-10-15 网易(杭州)网络有限公司 Game animation control method and device
CN113908553A (en) * 2021-11-22 2022-01-11 广州简悦信息科技有限公司 Game character expression generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110390704B (en) Image processing method, image processing device, terminal equipment and storage medium
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN110163054B (en) Method and device for generating human face three-dimensional image
CN112190921A (en) Game interaction method and device
CN100468463C (en) Method,apparatua and computer program for processing image
CN108525305B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111680562A (en) Human body posture identification method and device based on skeleton key points, storage medium and terminal
CN108564641B (en) Expression capturing method and device based on UE engine
CN108874114A (en) Realize method, apparatus, computer equipment and the storage medium of virtual objects emotion expression service
CN111862116A (en) Animation portrait generation method and device, storage medium and computer equipment
CN108595012A (en) Visual interactive method and system based on visual human
CN113766168A (en) Interactive processing method, device, terminal and medium
CN105975072A (en) Method, device and system for identifying gesture movement
EP4071760A1 (en) Method and apparatus for generating video
CN106502401B (en) Image control method and device
CN108681398A (en) Visual interactive method and system based on visual human
CN114373044A (en) Method, device, computing equipment and storage medium for generating three-dimensional face model
Wang et al. Expression dynamic capture and 3D animation generation method based on deep learning
CN112149599A (en) Expression tracking method and device, storage medium and electronic equipment
WO2023035725A1 (en) Virtual prop display method and apparatus
CN108628454A (en) Visual interactive method and system based on visual human
CN108255308A (en) A kind of gesture interaction method and system based on visual human
CN114005156A (en) Face replacement method, face replacement system, terminal equipment and computer storage medium
CN104767980B (en) A kind of real-time emotion demenstration method, system, device and intelligent terminal
CN112132107A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210108