CN114344914A - Interaction method and device based on virtual scene, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114344914A
Authority
CN
China
Prior art keywords
target
virtual
virtual object
password
main control
Prior art date
Legal status
Pending
Application number
CN202210021971.9A
Other languages
Chinese (zh)
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210021971.9A
Publication of CN114344914A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an interaction method and device based on a virtual scene, an electronic device, and a storage medium, and belongs to the field of computer technologies. The method includes: displaying a target object in a virtual scene of a target game; displaying a password input box in response to a trigger event of a master virtual object on the target object; acquiring an input password based on the password input box when the master virtual object owns a target prop; and acquiring a virtual resource associated with the target object when the input password hits a target password. While participating in the confrontation between virtual objects, a game player also needs to complete the password decoding as far as possible to compete for the virtual resources, which enriches the interaction modes within a game match and improves the human-computer interaction efficiency.

Description

Interaction method and device based on virtual scene, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interaction method and apparatus based on a virtual scene, an electronic device, and a storage medium.
Background
With the development of computer technology and the diversification of terminal functions, more and more games can be played on terminals. Shooting games are among the more popular games; a shooting game usually provides a virtual scene, and a player can control a virtual object in the virtual scene to use virtual props to play a match.
In current shooting games, a player needs to control a virtual object to defeat other virtual objects to obtain points, and the outcome of the match is finally determined by the number of points. However, the interaction modes between players are single and fixed, and the human-computer interaction efficiency is low.
Disclosure of Invention
The embodiments of the application provide an interaction method and device based on a virtual scene, an electronic device, and a storage medium, which can enrich the interaction modes between players and improve the human-computer interaction efficiency. The technical solution is as follows:
in one aspect, an interaction method based on a virtual scene is provided, the method including:
displaying a target object in a virtual scene of a target game;
displaying a password input box in response to a trigger event of a master virtual object on the target object;
acquiring an input password based on the password input box when the master virtual object owns a target prop, where the target prop is used to provide the number of times characters can be input into the password input box;
and acquiring a virtual resource associated with the target object when the input password hits a target password.
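The gating logic of the method steps above can be summarized in a short sketch. The class and function names below are illustrative only (they do not come from the patent), and a single boolean stands in for prop ownership:

```python
from dataclasses import dataclass

@dataclass
class TargetObject:
    """An entity object in the virtual scene that guards a virtual resource."""
    target_password: str
    virtual_resource: str

def try_acquire_resource(target, owns_target_prop, entered_password):
    """Grant the associated virtual resource only when the master virtual
    object owns the target prop AND the entered password hits the target."""
    if not owns_target_prop:
        return None  # without the prop, the password input box is unusable
    if entered_password == target.target_password:
        return target.virtual_resource  # password hit: resource unlocked
    return None  # password miss
```

A usage example: a "password station" guarding a supply crate releases the crate only to a prop-holding player who enters the right password.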
In one aspect, an interactive device based on a virtual scene is provided, the device including:
the first display module is used for displaying a target object in a virtual scene of the target game;
the second display module is used for responding to a trigger event of the main control virtual object to the target object and displaying a password input box;
a first obtaining module, configured to obtain an input password based on the password input box when the main control virtual object already has a target prop, where the target prop is used to provide times for inputting characters into the password input box;
a second obtaining module, configured to obtain, when the input password hits a target password, a virtual resource associated with the target object.
In one possible implementation, the second display module includes:
a display unit, configured to display the decoded plaintext characters and display input frames for the ciphertext characters to be decoded.
In one possible embodiment, the display unit is further configured to:
displaying an input frame corresponding to each ciphertext character to be decoded; or,
displaying an input frame corresponding to a first ciphertext character together with the second ciphertext characters, where the first ciphertext character is the first ciphertext character to be decoded, and the second ciphertext characters are the ciphertext characters other than the first ciphertext character.
In one possible embodiment, the target object comprises a target building, the triggering event comprises the master virtual object being located within a first target range of the target building, and the apparatus further comprises:
a first determining module, configured to determine that the master virtual object is located within a first target range of the target building when there is an intersection between the collision detection range of the master virtual object and the collision detection range of the target building.
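The intersection test performed by the first determining module can be sketched with spherical collision ranges. This is a simplification for illustration; the patent does not fix the shape of the collision detection ranges, and the names below are assumptions:

```python
import math

def ranges_intersect(center_a, radius_a, center_b, radius_b):
    """Two spherical collision detection ranges intersect when the distance
    between their centers is at most the sum of their radii."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

def within_first_target_range(player_center, player_radius,
                              building_center, building_radius):
    # The master virtual object is within the first target range of the
    # target building exactly when the two collision ranges intersect.
    return ranges_intersect(player_center, player_radius,
                            building_center, building_radius)
```

The same test can serve for the second target range of a prop (see the third determining module below in the original text, which uses an identical intersection condition).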
In one possible embodiment, the apparatus further comprises:
the setting module is configured to set the password input box to an enabled state when the master virtual object owns the target prop, the enabled state supporting input operations based on the password input box;
the setting module is further configured to set the password input box to a disabled state when the master virtual object does not own the target prop, the disabled state not supporting input operations based on the password input box.
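The enabled/disabled switching performed by the setting module amounts to gating input on prop ownership. A minimal sketch, with illustrative names not taken from the patent:

```python
class PasswordInputBox:
    """Enabled only while the master virtual object owns a target prop;
    in the disabled state, input operations are not supported."""

    def __init__(self):
        self.enabled = False
        self.entered = []

    def update_state(self, owned_prop_count):
        # Enabled state when the player owns at least one target prop,
        # disabled state otherwise.
        self.enabled = owned_prop_count > 0

    def input_char(self, ch):
        if not self.enabled:
            return False  # disabled: the input operation is rejected
        self.entered.append(ch)
        return True
```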
In a possible embodiment, the target prop includes a first target prop and a second target prop, the first target prop is used to provide, to virtual objects belonging to a first camp, the number of times characters can be input, the second target prop is used to provide, to virtual objects belonging to a second camp, the number of times characters can be input, and the first camp and the second camp are in a confrontational relationship.
In one possible embodiment, the apparatus further comprises:
a second determining module, configured to determine, when the master virtual object belongs to the first camp, the number of character input times of the master virtual object based on the number of first target props owned by the master virtual object, where the number of character input times represents the maximum number of times characters are allowed to be input into the password input box.
In a possible embodiment, the first target prop appears in the virtual scene when a virtual object belonging to the second camp is defeated, and the second target prop appears in the virtual scene when a virtual object belonging to the first camp is defeated.
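The camp-specific drop rule is symmetric: defeating an object of one camp produces the prop usable by the other camp. Sketched below with hypothetical camp labels:

```python
def prop_dropped_on_defeat(defeated_camp):
    """Return which target prop appears in the virtual scene when a virtual
    object of the given camp is defeated. The two camps are adversaries,
    so each drop benefits the opposing camp."""
    if defeated_camp == "first_camp":
        return "second_target_prop"   # usable by second-camp objects
    if defeated_camp == "second_camp":
        return "first_target_prop"    # usable by first-camp objects
    raise ValueError("unknown camp")
```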
In one possible embodiment, the apparatus further comprises:
a first deduction module, configured to deduct, in response to the master virtual object being defeated, each target prop owned by the master virtual object;
the first display module is further configured to display each deducted target prop in the virtual scene.
In one possible embodiment, the apparatus further comprises a control module configured to:
controlling the master virtual object to obtain any target prop in response to the master virtual object being located within a second target range of the target prop; or,
displaying a pickup control of the target prop in response to the master virtual object being located within the second target range of the target prop, and controlling the master virtual object to obtain the target prop in response to a trigger operation on the pickup control.
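The two pickup alternatives handled by the control module (automatic pickup on entering the second target range, or an explicit pickup control plus a trigger operation) can be sketched as follows; the function names and return values are illustrative:

```python
def on_enter_prop_range(player_props, prop, auto_pickup):
    """Handle the master virtual object entering the second target range
    of a target prop."""
    if auto_pickup:
        player_props.append(prop)     # first alternative: obtained at once
        return "picked_up"
    return "pickup_control_shown"     # second alternative: wait for trigger

def on_pickup_control_triggered(player_props, prop):
    # Second alternative, step two: the prop is obtained only after the
    # trigger operation on the displayed pickup control.
    player_props.append(prop)
```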
In one possible embodiment, the apparatus further comprises:
a third determining module, configured to determine that the master virtual object is located within the second target range of the target prop when there is an intersection between the collision detection range of the master virtual object and the collision detection range of the target prop.
In one possible embodiment, the apparatus further comprises:
a second deduction module, configured to deduct, in response to any input operation based on the password input box, a target prop owned by the master virtual object.
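The per-input deduction rule (each input operation on the password input box consumes one owned target prop) combined with the ownership check might look like the sketch below; prop objects are treated as opaque values:

```python
def input_password_char(owned_props, entered, ch):
    """Perform one input operation on the password input box: deduct one
    target prop owned by the master virtual object, then record the
    character. Returns False when no prop is left (input not allowed)."""
    if not owned_props:
        return False          # no target prop left: box is disabled
    owned_props.pop()         # deduct one target prop for this input
    entered.append(ch)
    return True
```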
In a possible embodiment, the winning condition of the target game is: when a virtual object of any camp moves the virtual resource to a target position, the camp to which the virtual object belongs is taken as the winner of the target game.
In one aspect, an electronic device is provided, which includes one or more processors and one or more memories, where at least one computer program is stored in the one or more memories, and loaded and executed by the one or more processors to implement the virtual scene based interaction method as described above.
In one aspect, a storage medium is provided, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the virtual scene based interaction method as described above.
In one aspect, a computer program product or computer program is provided that includes one or more program codes stored in a computer readable storage medium. The one or more program codes can be read from a computer-readable storage medium by one or more processors of the electronic device, and the one or more processors execute the one or more program codes, so that the electronic device can execute the above-mentioned virtual scene-based interaction method.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By approaching the target object while owning the target prop, and then decoding the target password, certain virtual resources can be acquired. These virtual resources can play a key role in the game match, so that while participating in the confrontation between virtual objects, the user also needs to complete the password decoding as far as possible to compete for these virtual resources, which enriches the interaction modes of the game match and improves the human-computer interaction efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an interaction method based on a virtual scene according to an embodiment of the present application;
Fig. 2 is a flowchart of an interaction method based on a virtual scene according to an embodiment of the present application;
Fig. 3 is a flowchart of an interaction method based on a virtual scene according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a location of a target building according to an embodiment of the present application;
Fig. 5 is a schematic interface diagram of a password input box according to an embodiment of the present application;
Fig. 6 is a schematic view of a first target prop according to an embodiment of the present application;
Fig. 7 is a schematic view of a second target prop according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a collision detection range of a master virtual object according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a collision detection range of a target prop according to an embodiment of the present application;
Fig. 10 is a schematic diagram of an interaction method based on a virtual scene according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a virtual scene according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an interaction device based on a virtual scene according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
The term "at least one" in this application means one or more, and the meaning of "a plurality" means two or more, for example, a plurality of first locations means two or more first locations.
Virtual scene: a virtual environment that is displayed (or provided) when an application runs on the terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, ocean, etc.; the land may include environmental elements such as deserts and cities; and a user may control a virtual object to move in the virtual scene. Optionally, the virtual scene may also be used for a virtual-scene confrontation between at least two virtual objects, in which virtual resources are available for use by the at least two virtual objects.
Virtual object: a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object may be a virtual avatar used to represent the user in the virtual scene. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a part of the space in the virtual scene. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model; the three-dimensional model may be a three-dimensional character constructed based on a three-dimensional human-skeleton technology, and the same virtual object may present different external appearances by wearing different skins. In some embodiments, the virtual object may also be implemented by using a 2.5-dimensional or two-dimensional model, which is not limited in this application.
Alternatively, the virtual object may be a Player Character controlled by an operation on the client, or may be a Non-Player Character (NPC) provided in the virtual scene interaction. Alternatively, the virtual object may be a virtual character playing a game in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Shooter Game (STG): a game in which virtual objects use hot-weapon virtual props to carry out ranged attacks. A shooting game is a type of action game and has obvious action-game characteristics. Optionally, shooting games include, but are not limited to, first-person shooting games, third-person shooting games, top-down shooting games, eye-level shooting games, platform shooting games, scrolling shooting games, keyboard-and-mouse shooting games, and shooting-range games; the embodiments of the present application do not specifically limit the type of the shooting game.
First Person Shooter (FPS) game: a shooting game that the user plays from a first-person perspective; the picture of the virtual scene in the game is a picture obtained by observing the virtual scene from the perspective of the virtual object controlled by the terminal. In the game, at least two virtual objects carry out a single-round confrontation in the virtual scene. A virtual object survives in the virtual scene by avoiding attacks initiated by other virtual objects and dangers existing in the virtual scene (such as swamps); when the health value of a virtual object in the virtual scene drops to zero, the life of the virtual object in the virtual scene ends, and the virtual objects that finally survive in the virtual scene are the winners. Optionally, the confrontation takes the moment when the first terminal joins the match as the start moment and the moment when the last terminal exits the match as the end moment, and each terminal may control one or more virtual objects in the virtual scene. Optionally, the competitive mode of the confrontation may include a single-player mode, a two-player team mode, or a multi-player team mode, and the competitive mode is not specifically limited in the embodiments of the present application.
Taking a shooting game as an example, the user can control a virtual object to free-fall, glide, or open a parachute to descend in the sky of the virtual scene; to run, jump, or crawl on land; or to swim, float, or dive in the ocean. The user can also control the virtual object to ride a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht, to move in the virtual scene. The above scenes are merely examples, and the embodiments of the present application are not limited thereto. The user can also control the virtual object to interact with other virtual objects through virtual props, for example: throwing props that take effect after being thrown, shooting props that take effect after being fired from a launcher, and melee props used for close-range confrontation.
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic view of an implementation environment of an interaction method based on a virtual scene according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 has installed and runs an application program supporting a virtual scene. Optionally, the application program includes: an FPS game, a third-person shooting game, an MOBA (Multiplayer Online Battle Arena) game, a virtual reality application program, a three-dimensional map program, or a multiplayer survival game. In some embodiments, the first terminal 120 is a terminal used by a first user. When the first terminal 120 runs the application, a user interface of the application is displayed on the screen of the first terminal 120, and a virtual scene is loaded and displayed in the application based on a match-starting operation of the first user in the user interface. The first user uses the first terminal 120 to operate a first virtual object located in the virtual scene to perform activities, which include but are not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and confronting. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animated character.
The first terminal 120 and the second terminal 160 are in direct or indirect communication connection with the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 140 is used to provide background services for applications that support virtual scenes. Alternatively, the server 140 undertakes the primary computing work and the first terminal 120 and the second terminal 160 undertake the secondary computing work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertake the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
Optionally, the server 140 is an independent physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, a cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The second terminal 160 has installed and runs an application program supporting a virtual scene. Optionally, the application program includes any one of an FPS game, a third-person shooting game, an MOBA game, a virtual reality application program, a three-dimensional map program, or a multiplayer survival game. In some embodiments, the second terminal 160 is a terminal used by a second user. When the second terminal 160 runs the application, a user interface of the application is displayed on the screen of the second terminal 160, and the virtual scene is loaded and displayed in the application based on a match-starting operation of the second user in the user interface. The second user uses the second terminal 160 to operate a second virtual object located in the virtual scene to perform activities, which include but are not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and confronting. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animated character.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 160 are in the same virtual scene, and the first virtual object can interact with the second virtual object in the virtual scene.
The first virtual object and the second virtual object may be in a hostile relationship; for example, the first virtual object and the second virtual object belong to different camps, and virtual objects in a hostile relationship can interact with each other in a confrontational manner on land, for example by throwing throwing props at each other. In other embodiments, the first virtual object and the second virtual object are in a teammate relationship; for example, the first virtual character and the second virtual character belong to the same camp and the same team, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms. The first terminal 120 and the second terminal 160 may each generally refer to one of a plurality of terminals; the embodiments of the present application are illustrated only with the first terminal 120 and the second terminal 160.
The device types of the first terminal 120 and the second terminal 160 are the same or different, and include but are not limited to at least one of: a smartphone, a tablet computer, a smart speaker, a smart watch, a handheld game console, a portable game device, a vehicle-mounted terminal, a laptop computer, and a desktop computer. For example, the first terminal 120 and the second terminal 160 are each a smartphone or another handheld portable game device. In the following embodiments, a terminal including a smartphone is taken as an example for illustration.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals is only one, or the number of the terminals is dozens or hundreds, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Fig. 2 is a flowchart of an interaction method based on a virtual scene according to an embodiment of the present application. Referring to Fig. 2, the embodiment is executed by an electronic device; the electronic device being a terminal is taken as an example for description. The embodiment includes the following steps:
201. The terminal displays the target object in the virtual scene of the target game.
The terminal refers to an electronic device used by a user; for example, the terminal is a smartphone, a portable game device, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like, but is not limited thereto. The terminal has installed and runs an application program supporting a virtual scene, which is schematically referred to as a game application or a game client. In the embodiments of the present application, a game client of a shooting game is taken as an example for description, but the game type corresponding to the game client is not limited.
The target object refers to any entity object in the virtual scene. For example, the target object is a target building such as a password station, or the target object is a preset interactive prop such as a password box.
Optionally, the target object has a camp attribute, that is, when the target game involves multiple camps, a respective target object of each camp needs to be displayed in the virtual scene. Optionally, the target object does not have a camp attribute, that is, the target object displayed in the virtual scene can provide a virtual resource to any camp participating in the target game, which is not specifically limited in the embodiments of the present application.
Schematically, the target building is a virtual building object in the virtual scene; the target building is a building, a stronghold, a base, a camp, a spawn point, a fountain, or the like. In one example, the target building has a camp attribute, that is, when the target game involves multiple camps, a respective target building of each camp is displayed in the virtual scene. Illustratively, taking the case where the target game involves a first camp and a second camp, when the target building is a stronghold, the stronghold of the first camp and the stronghold of the second camp are displayed in the virtual scene, where the strongholds include a password station, a transfer station, a supply station, and the like, which is not specifically limited in the embodiments of the present application.
It should be noted that the target object differs from other virtual objects in the virtual scene in that the target object has an associated virtual resource; for example, the target object stores the virtual resource, or the target object is bound to the virtual resource, and the virtual resource is unlocked only under specified conditions. For example, the virtual resource is not issued to the game client while the corresponding camp has not decoded the password, and is issued to the game client only after the corresponding camp completes the password decoding.
In some embodiments, a user starts the game client on the terminal and logs in to the user's game account in the game client; a user interface is then displayed in the game client, the user interface including account information of the game account, a selection control of the game mode, a selection control of the scene map, and a match-starting option. The user can select the desired game mode through the selection control of the game mode, and can select the scene map to enter through the selection control of the scene map; after finishing the selection, the user triggers the terminal to enter a new round of game play by performing a trigger operation on the match-starting option.
It should be noted that the above selection operation on the scene map is not a mandatory step. For example, some games allow the user to select the scene map, while other games do not (instead, the server randomly allocates the scene map of the match when the match starts); or, some match modes allow the user to select the scene map while other match modes do not. The embodiments of the present application do not specifically limit whether the user must select a scene map before starting a match, nor whether the user has the option of selecting the scene map.
Taking the current round of game play as the target game: after the user performs the trigger operation on the match-starting option, the game client enters the target game and loads the virtual scene corresponding to the target game. Optionally, the game client downloads the multimedia resources of the virtual scene from the server and renders them with a rendering engine, thereby displaying the virtual scene in the game client. The target game is any match in a game mode that supports the target object. For example, taking a virtual building serving as a password station as the target object, the game mode supporting the password station is called a password exploration mode, and the round of game play in the password exploration mode started by the user this time is the target game.
In some embodiments, the terminal further displays a master virtual object and other virtual objects in the virtual scene. The master virtual object refers to the virtual object currently controlled by the terminal (also referred to as a player-controlled virtual object, a controlled virtual object, or the like), and the other virtual objects refer to virtual objects in the virtual scene other than the master virtual object. For example, the other virtual objects include virtual objects controlled by other users participating in the round, virtual objects controlled by an AI (Artificial Intelligence), and the like, which are not specifically limited in this embodiment of the present application. Optionally, the terminal pulls the multimedia resources of the master virtual object and the other virtual objects from the server and renders them with the rendering engine, so as to display the master virtual object and the other virtual objects in the virtual scene.
In some embodiments, for some FPS games, since the virtual scene is observed from a first-person perspective (i.e., the perspective of the master virtual object), a picture of the virtual scene observed from the perspective of the master virtual object is displayed, but the master virtual object itself does not necessarily need to be displayed; for example, only the back view of the master virtual object is displayed, only part of its body (e.g., the upper body) is displayed, or the master virtual object is not displayed at all.
In some embodiments, if the user wants to approach the target object to attempt to unlock the virtual resource, the user controls the master virtual object to move toward the target object of the user's own camp in the virtual scene under the guidance of the map control (for example, the position of the target object is marked in the map control). When the target object is located within the visual field of the master virtual object, the game client displays the target object in the virtual scene. Illustratively, the game client obtains the multimedia resource of the target object and renders it with the rendering engine so as to display the target object in the virtual scene, where the multimedia resource of the target object includes an object model, a texture map, and the like of the target object, which are not specifically limited in this embodiment of the present application. Optionally, when loading the multimedia resource of the virtual scene at the start of the round, the game client also loads the multimedia resource of the target object into a local cache to save communication overhead with the server; alternatively, the game client loads the multimedia resource of the target object from the server only when the target object enters the visual field of the master virtual object, so as to save storage overhead in the local memory, which is not specifically limited in the embodiment of the present application.
202. And the terminal responds to the trigger event of the master virtual object on the target object and displays the password input box.
The master virtual object refers to the virtual object controlled by the user through the terminal, and optionally, the camp to which the master virtual object belongs is the first camp.
Optionally, the trigger event of the master virtual object on the target object includes: a trigger operation performed by the master virtual object on the target object, that is, the user controls the master virtual object to perform a trigger operation on the target object, achieving manual triggering; alternatively, the trigger event includes: the master virtual object being located within a preset range of the target object, that is, the user controls the master virtual object to approach the target object until it is located within the preset range, achieving automatic triggering. The preset range is a range associated with the target object; for example, the preset range is the collision detection range of the target object, or a spherical range determined with the target object as the center and a radius preset by a technician.
Taking the target object as a target building as an example, it is assumed that the automatic triggering manner is adopted, that is, the trigger event includes the master virtual object being located within a first target range of the target building. With the target building of the first camp displayed in the virtual scene, the user controls the master virtual object to move toward the target building so that the master virtual object approaches it; when the master virtual object is located within the first target range of the target building, it is determined that a trigger event on the target object is detected, and the password input box is displayed in the virtual scene. The first target range refers to a range associated with the target building; when any virtual object belonging to the first camp is located within the first target range, the pop-up of the password input box can be triggered.
Optionally, the first target range is a range no more than a target distance from the target building, where the target distance is any value greater than or equal to 0, such as 5 meters, 3 meters, or 1 meter.
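The distance-based variant of the first target range can be sketched as follows. This is a minimal illustration, not part of the original disclosure: the function names, the 5-meter default, and the frame-callback structure are all assumptions.

```python
import math

# Illustrative sketch of the distance-based automatic trigger: the
# password input box pops up once the master virtual object is no more
# than the target distance from the target building. All names and the
# 5-meter default are assumptions for this example.
TARGET_DISTANCE = 5.0  # meters; the disclosure allows any value >= 0

def within_first_target_range(object_pos, building_pos,
                              target_distance=TARGET_DISTANCE):
    """True when the master virtual object lies within the first target
    range of the target building (distance-based variant)."""
    dx = object_pos[0] - building_pos[0]
    dy = object_pos[1] - building_pos[1]
    dz = object_pos[2] - building_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= target_distance

def on_frame(object_pos, building_pos, show_password_box):
    # Automatic triggering: display the password input box on the frame
    # in which the master virtual object enters the first target range.
    if within_first_target_range(object_pos, building_pos):
        show_password_box()
```

In practice, a game engine would evaluate such a check per frame or via an engine-provided proximity event rather than an explicit loop.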
Optionally, the first target range is the collision detection range of the target building: a collision detection box is mounted on the building model of the target building, and the range occupied by the collision detection box in the virtual scene is the first target range.
Taking the target object as a preset interactive prop as an example, it is assumed that the manual triggering manner is adopted, that is, the trigger event includes a trigger operation performed by the master virtual object on the preset interactive prop. With the preset interactive prop of the first camp displayed in the virtual scene, the user controls the master virtual object to move toward the preset interactive prop so that the master virtual object approaches it; when the distance between the master virtual object and the preset interactive prop is smaller than a distance threshold, a trigger control of the preset interactive prop is displayed in the virtual scene. In response to the user's trigger operation on the trigger control, it is determined that a trigger operation of the master virtual object on the preset interactive prop is detected, that is, a trigger event on the target object is detected, and the password input box is displayed in the virtual scene. The distance threshold is any value greater than 0, such as 5 meters, 3 meters, or 1 meter.
In some embodiments, a password input popup is displayed in the virtual scene and the password input box is displayed in the popup, which gives the interface display more hierarchy; alternatively, the password input box is displayed as a floating layer in the virtual scene, embedding the password input box in the virtual scene to provide an immersive experience; alternatively, the password input box is displayed on the target object, achieving the intuitive effect that the user unlocks the virtual resource by inputting the correct password to the target object. The display manner of the password input box is not specifically limited in the embodiments of the present application.
In some embodiments, the game client loads the multimedia resource of the password input box from the server and renders it based on the rendering engine, so that the password input box can be displayed in the virtual scene. Optionally, when loading the multimedia resource of the virtual scene at the start of the round, the game client also loads the multimedia resource of the password input box into a local cache to save communication overhead with the server; alternatively, the game client loads the multimedia resource of the password input box from the server only upon detecting the trigger event of the master virtual object on the target object, so as to save storage overhead in the local memory, which is not specifically limited in this embodiment of the present application.
203. And the terminal acquires the input password based on the password input box under the condition that the master virtual object owns the target prop, where the target prop is used to provide the number of times characters can be input into the password input box.
In some embodiments, since the target prop provides the number of times characters can be input into the password input box, when the master virtual object already owns the target prop, the number of character inputs provided by the owned target props is greater than 0, that is, at least 1 character input is available, and the terminal obtains the input password based on the password input box. Otherwise, when the master virtual object does not own the target prop, the number of available character inputs is 0; in this case the user is prompted that the target props are insufficient and needs to first obtain the target prop in order to participate in the password deciphering process.
In some embodiments, after the password input box is displayed in the virtual scene, the terminal determines whether the master virtual object owns the target prop, and, if so, determines the number of character inputs of the master virtual object based on the number of target props owned, where the number of character inputs represents the maximum number of times characters can be input into the password input box. Optionally, the terminal determines the number of character inputs each target prop can provide, and multiplies the number of target props owned by the master virtual object by that per-prop number to obtain the number of character inputs of the master virtual object.
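The computation above can be sketched in a few lines; the function and parameter names are illustrative assumptions, not identifiers from the original disclosure.

```python
# Minimal sketch of the character-input-count rule described above: the
# count equals the number of owned target props multiplied by the number
# of inputs each prop provides, and the password input box is enabled
# only when at least one character input is available.
def character_input_count(num_props_owned: int, inputs_per_prop: int) -> int:
    return num_props_owned * inputs_per_prop

def password_box_enabled(num_props_owned: int, inputs_per_prop: int) -> bool:
    # Enabled state (see below): at least one character input remains.
    return character_input_count(num_props_owned, inputs_per_prop) > 0
```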
In some embodiments, the terminal sends a query request for the number of character inputs to the server, and the server returns the number of character inputs of the master virtual object to the terminal, which saves the computing resources of the terminal.
In some embodiments, when the master virtual object does not own the target prop, the master virtual object has no opportunity to input a password; in this case, the password input box is set to a disabled state in which input operations based on the password input box are not supported. When the master virtual object owns the target prop, the password input box is set to an enabled state in which input operations based on the password input box are supported.
When the master virtual object owns the target prop, the user can input a password into the password input box, that is, the user controls the master virtual object to input the password. When the password is wrong, that is, the input misses the target password, the user is prompted that the password is wrong or no processing is performed, the virtual resource remains in the blocked state, and the user may input a new password again; when the password is correct, that is, the input hits the target password, the user is prompted that the password is correct, and the following step 204 is performed.
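The retry behavior above can be sketched as follows; the function name, the state representation, and the return strings are assumptions made for this illustration only.

```python
# Hedged sketch of the password-attempt handling described above: a miss
# leaves the virtual resource in the blocked state and the user may retry;
# a hit on the target password switches the resource from the blocked
# state to the unoccupied state (see step 204).
def try_password(input_password: str, target_password: str,
                 state: dict) -> str:
    if input_password == target_password:   # the input hits the target password
        state["resource"] = "unoccupied"    # blocked -> unoccupied
        return "password correct"
    return "password wrong"                 # resource stays blocked
```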
204. And the terminal acquires the virtual resource associated with the target object under the condition that the input password hits the target password.
The target password is the unlocking password corresponding to the virtual resource; the virtual resource can be acquired only when the user inputs the target password. Otherwise, if the user does not input the target password, the user is prompted that the password is wrong or no processing is performed.
The virtual resource refers to a virtual resource associated with the target object, for example, a virtual resource stored in the target object or bound to the target object. The virtual resource has a blocked state, an unoccupied state, and an occupied state: when the target round starts, the virtual resource is in the blocked state; only after a user deciphers the target password is the virtual resource converted from the blocked state to the unoccupied state, upon which the user can control the master virtual object to acquire it; after the master virtual object acquires the virtual resource, it is converted from the unoccupied state to the occupied state. Optionally, because its unlocking condition is demanding and it is difficult to acquire, the virtual resource plays a key role in the target round; for example, it has a powerful auxiliary effect or a high attack attribute, or can determine the outcome of the round, which is not specifically limited in the embodiment of the present application.
When the password input by the user in the password input box hits the target password, the user is prompted that the password is correct; the game client then displays the virtual resource in the virtual scene, and the virtual resource is switched from the blocked state to the unoccupied state. The user may control the master virtual object to acquire the virtual resource; for example, the virtual resource is dropped into the virtual scene and the user manually picks it up, or the master virtual object automatically acquires the virtual resource upon correct input, for example, the server directly issues the virtual resource to the backpack or supply bar of the master virtual object, binds the master virtual object to the virtual resource, or binds the first camp to which the master virtual object belongs to the virtual resource, which is not specifically limited in this embodiment of the present application.
In some embodiments, the terminal determines, in real time, the deciphering progress of each camp on the target password, where the deciphering progress refers to the ratio of the number of deciphered plaintext characters to the total number of characters contained in the target password. Optionally, a technician presets a first progress threshold, for example 50%, or any value greater than or equal to 0 and less than or equal to 100%. Further, when the deciphering progress at the current moment is lower than the first progress threshold, the virtual resource is set to an invisible state, that is, the user cannot view the virtual resource blocked by the target object; when the deciphering progress exceeds the first progress threshold, the virtual resource is set to a visible state, in which the user can view the virtual resource blocked by the target object. This helps the user judge whether the visible virtual resource is worth continuing to decipher and make a reasonable strategic decision.
Optionally, the technician also presets a second progress threshold greater than the first progress threshold, for example 80%, or any value greater than or equal to 0 and less than or equal to 100%. Further, when the deciphering progress at the current moment exceeds the first progress threshold but is lower than the second progress threshold, the virtual resource is set to be visible but non-occupiable, that is, the user can view the virtual resource blocked by the target object but cannot yet occupy it; as deciphering continues and the progress exceeds the second progress threshold, the virtual resource is set to an occupiable state, and the user can occupy the virtual resource after completing the deciphering, so that the user can know the current deciphering progress in time.
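The two-threshold rule above maps the deciphering progress to a display state of the virtual resource. The sketch below assumes the 50% and 80% example thresholds and illustrative names; the disclosure allows any thresholds in [0, 100%] with the second greater than the first.

```python
# Sketch of the two-threshold progress rule described above. Thresholds
# and names are illustrative assumptions, not values from the disclosure.
FIRST_PROGRESS_THRESHOLD = 0.5
SECOND_PROGRESS_THRESHOLD = 0.8

def resource_display_state(deciphered_chars: int, total_chars: int) -> str:
    # Deciphering progress: deciphered plaintext characters over the
    # total number of characters in the target password.
    progress = deciphered_chars / total_chars
    if progress < FIRST_PROGRESS_THRESHOLD:
        return "invisible"
    if progress < SECOND_PROGRESS_THRESHOLD:
        return "visible-but-not-occupiable"
    return "occupiable"
```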
In some embodiments, in each target round the number of virtual resources is set to 1, but every camp can participate in the password deciphering process, and the camp that first deciphers the target password acquires the only virtual resource; this highlights the key role of the virtual resource and guides users to participate in the password deciphering process.
Optionally, the game client obtains the multimedia resource of the virtual resource, renders it based on the rendering engine, and displays the virtual resource in the virtual scene, where the multimedia resource of the virtual resource includes the name, object model, texture map, identification pattern, and the like of the virtual resource. When the virtual resource needs to be manually picked up, the object model and a pick-up control of the virtual resource are displayed in the virtual scene; after the user performs a trigger operation on the pick-up control, the master virtual object picks up the virtual resource in response to that trigger operation. When the virtual resource is issued automatically, an acquisition-success prompt of the virtual resource is displayed directly in the virtual scene, prompting that the master virtual object has acquired the virtual resource. Optionally, the game client loads the multimedia resource of the virtual resource at the start of the round, or only when the target password is hit; the loading time of the multimedia resource of the virtual resource is not specifically limited in the embodiment of the present application.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiment of the application, the virtual resource can be obtained only by approaching the target object with the target prop and deciphering the target password, and the virtual resource can play a critical role in the round. Therefore, while participating in the confrontation among virtual objects, the user also needs to complete password deciphering as much as possible to compete for the virtual resource, which enriches the interaction modes of game rounds and improves human-computer interaction efficiency.
Fig. 3 is a flowchart of an interaction method based on a virtual scene according to an embodiment of the present application, please refer to fig. 3, which is executed by an electronic device, and is described with the electronic device as a terminal as an example, where the embodiment includes the following steps:
301. And the terminal displays the virtual scene of the target round.
The terminal installs and runs an application program supporting the virtual scene, schematically referred to as a game application or a game client. In the embodiment of the present application, a game client of a shooting game is described as an example, but the game type corresponding to the game client is not limited.
In some embodiments, a user starts the game client on a terminal and logs in to a game account in the game client, after which the game client displays a user interface. The user interface includes account information of the game account, a selection control for the game mode, a selection control for the scene map, and an option for starting a round. Through the selection control for the game mode, the user selects the game mode to be started, and through the selection control for the scene map, the user selects the scene map to be entered; after finishing the selection, the user performs a trigger operation on the round-starting option to trigger the terminal to enter a new game round.
It should be noted that the above selection of the scene map is not a mandatory step. For example, some games allow the user to select the scene map, while other games do not (instead, the server randomly allocates the scene map of the current round when the round starts); alternatively, some game modes allow the user to select the scene map while other game modes do not. The embodiment of the present application does not specifically limit whether the user must, or is able to, select the scene map before the round starts.
Taking the current round as the target round, after the user performs the trigger operation on the round-starting option, the game client enters the target round and loads the virtual scene corresponding to the target round. Optionally, the game client downloads the multimedia resource of the virtual scene from the server and renders it with a rendering engine, thereby displaying the virtual scene in the game client. The target round is any round in a game mode that supports the target object; for example, taking a virtual building serving as a password station as the target object, the game mode supporting the password station may be called a password exploration mode, and the round of the password exploration mode started by the user this time is the target round.
In some embodiments, the terminal further displays a master virtual object and other virtual objects in the virtual scene. The master virtual object refers to the virtual object currently controlled by the terminal (also referred to as a player-controlled virtual object, a controlled virtual object, or the like), and the other virtual objects refer to virtual objects in the virtual scene other than the master virtual object. For example, the other virtual objects include virtual objects controlled by other users participating in the round, virtual objects controlled by an AI (Artificial Intelligence), and the like, which are not specifically limited in this embodiment of the present application. Optionally, the terminal pulls the multimedia resources of the master virtual object and the other virtual objects from the server and renders them with the rendering engine, so as to display the master virtual object and the other virtual objects in the virtual scene.
In some embodiments, for some FPS games, since the virtual scene is observed from a first-person perspective (i.e., the perspective of the master virtual object), a picture of the virtual scene observed from the perspective of the master virtual object is displayed, but the master virtual object itself does not necessarily need to be displayed; for example, only the back view of the master virtual object is displayed, only part of its body (e.g., the upper body) is displayed, or the master virtual object is not displayed at all.
302. And the terminal displays the target object of the first camp in the virtual scene.
In this embodiment of the application, the target object is described as having a camp attribute; that is, assuming the target round involves a first camp and a second camp, the first camp is the camp to which the master virtual object controlled by the terminal belongs, and the second camp is a camp other than the first camp. Accordingly, a target object of the first camp and a target object of the second camp are respectively constructed in the virtual scene, and when the target object of the first camp is located within the visual field of the master virtual object, the terminal displays the target object of the first camp in the virtual scene. In other embodiments, the target object does not have the camp attribute; in that case a single target object may be constructed in the virtual scene, and the camp that first deciphers the target password acquires the virtual resource associated with it.
In some embodiments, if the user wants to approach the target object to attempt to unlock the virtual resource, the user controls the master virtual object to move toward the target object of the user's own camp in the virtual scene under the guidance of the map control (for example, the position of the target object is marked in the map control). When the target object is located within the visual field of the master virtual object, the game client displays the target object in the virtual scene. Illustratively, the game client obtains the multimedia resource of the target object and renders it with the rendering engine so as to display the target object in the virtual scene, where the multimedia resource of the target object includes an object model, a texture map, and the like of the target object, which are not specifically limited in this embodiment of the present application. Optionally, when loading the multimedia resource of the virtual scene at the start of the round, the game client also loads the multimedia resource of the target object into a local cache to save communication overhead with the server; alternatively, the game client loads the multimedia resource of the target object from the server only when the target object enters the visual field of the master virtual object, so as to save storage overhead in the local memory, which is not specifically limited in the embodiment of the present application.
Taking the target object as a target building as an example, fig. 4 is a schematic position diagram of a target building provided in this embodiment of the present application. As shown in fig. 4, a scene map 400 of the target round includes a target building 401 of the first camp (i.e., point A), a target building 402 of the second camp (i.e., point B), and a target position 403 (i.e., point C). Schematically, in the game mode to which the target round belongs, each virtual object of the first camp needs to decipher the password at point A as quickly as possible and acquire the virtual resource, each virtual object of the second camp needs to decipher the password at point B as quickly as possible and acquire the virtual resource, and the virtual resource must first be delivered to point C for that camp to win the target round.
303. And the terminal responds to a trigger event of the master virtual object on the target object and displays a password input box, where the master virtual object belongs to the first camp.
Optionally, the trigger event of the master virtual object on the target object includes: a trigger operation performed by the master virtual object on the target object, that is, the user controls the master virtual object to perform a trigger operation on the target object, achieving manual triggering; alternatively, the trigger event includes: the master virtual object being located within a preset range of the target object, that is, the user controls the master virtual object to approach the target object until it is located within the preset range, achieving automatic triggering. The preset range is a range associated with the target object; for example, the preset range is the collision detection range of the target object, or a spherical range determined with the target object as the center and a radius preset by a technician.
Taking the target object as a target building as an example, it is assumed that the automatic triggering manner is adopted, that is, the trigger event includes the master virtual object being located within a first target range of the target building. With the target building of the first camp displayed in the virtual scene, the user controls the master virtual object to move toward the target building so that the master virtual object approaches it; when the master virtual object is located within the first target range of the target building, it is determined that a trigger event on the target object is detected, and the password input box is displayed in the virtual scene. The first target range refers to a range associated with the target building; when any virtual object belonging to the first camp is located within the first target range, the pop-up of the password input box can be triggered.
Optionally, the first target range is a range no more than a target distance from the target building, where the target distance is any value greater than or equal to 0, such as 5 meters, 3 meters, or 1 meter.
In this case, the game client acquires the distance between the master virtual object and the target building, and determines that the master virtual object is located within the first target range of the target building when detecting that the distance is smaller than the target distance.
Optionally, the first target range is the collision detection range of the target building: a collision detection box is mounted on the building model of the target building, and the range occupied by the collision detection box in the virtual scene is the first target range. Similarly, a collision detection box is also mounted on the character model of the master virtual object and represents the collision detection range of the master virtual object.
In this case, it is determined that the master virtual object is located within the first target range of the target building when the collision detection range of the master virtual object intersects with the collision detection range of the target building, in other words, the collision detection box of the master virtual object collides with the collision detection box of the target building.
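The collision-box intersection test above can be sketched as follows, under the assumption (made for this illustration only) that each collision detection box is an axis-aligned bounding box represented as a pair of corner points.

```python
# Illustrative AABB overlap test for the collision-box trigger variant
# described above: the trigger fires when the box mounted on the character
# model of the master virtual object overlaps the box mounted on the
# building model. The ((min_x, min_y, min_z), (max_x, max_y, max_z))
# representation is an assumption, not part of the original disclosure.
def boxes_intersect(box_a, box_b) -> bool:
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    # Boxes overlap iff their extents overlap on every axis.
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))
```

Real engines typically expose this check through physics callbacks (e.g. an overlap or trigger event) rather than manual box arithmetic; the function only shows the geometric condition.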
Taking the target object as a preset interactive prop as an example, it is assumed that the manual triggering manner is adopted, that is, the trigger event includes a trigger operation performed by the master virtual object on the preset interactive prop. With the preset interactive prop of the first camp displayed in the virtual scene, the user controls the master virtual object to move toward the preset interactive prop so that the master virtual object approaches it; when the distance between the master virtual object and the preset interactive prop is smaller than a distance threshold, a trigger control of the preset interactive prop is displayed in the virtual scene. In response to the user's trigger operation on the trigger control, it is determined that a trigger operation of the master virtual object on the preset interactive prop is detected, that is, a trigger event on the target object is detected, and the password input box is displayed in the virtual scene. The distance threshold is any value greater than 0, such as 5 meters, 3 meters, or 1 meter.
In some embodiments, when a trigger event of the master virtual object on the target object is detected, a password input popup is displayed in the virtual scene and the password input box is displayed inside the popup, giving the interface display more hierarchy; or, the password input box is displayed as a floating layer in the virtual scene, so that the box is embedded in the scene and provides an immersive experience; or, the password input box is displayed on the target object itself, giving the user the intuitive effect of unlocking the virtual resource by inputting the correct password directly into the target object.
In some embodiments, the game client loads the multimedia resource of the password input box from the server and renders it with the rendering engine, so that the password input box can be displayed in the virtual scene. Optionally, when the game client loads the multimedia resources of the virtual scene at match start, it also loads the multimedia resource of the password input box into the local cache, saving communication overhead with the server; or, the game client loads the multimedia resource of the password input box from the server only when it detects a trigger event of the master virtual object on the target object, saving local memory overhead. This is not specifically limited in this embodiment of the present application.
In some embodiments, the password cracking of the virtual resource associated with the target object can only be performed independently by a single virtual object; that is, only a single virtual object is allowed to crack the final correct target password through multiple attempts, and the password input box displays only the cracking progress achieved by the current master virtual object on its last attempt.
In some embodiments, the password cracking of the virtual resource associated with the target object is completed cooperatively by the user-controlled virtual objects in the same camp; that is, the cracking process is carried out with the camp as the unit. When any virtual object in the camp advances the cracking progress, the server synchronizes the camp's latest cracking progress to the terminals corresponding to all virtual objects belonging to that camp in the target game match, so that the latest cracking progress of the first camp is displayed in the password input box. Illustratively, the server assigns each virtual object in the camp its own ciphertext characters of the target password to crack; that is, each virtual object does not need to pay attention to how the characters at other positions are being cracked, and only needs to crack the ciphertext characters it is responsible for. The embodiment of the present application does not specifically limit the password-cracking rule.
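A server-side sketch of this camp-level synchronization, under assumed names (none of the identifiers come from the application): one progress record is kept per camp, any member's correct guess advances it, and the updated mask would then be pushed to every terminal in that camp.

```python
from collections import defaultdict

class CrackingProgress:
    """Shared cracking-progress record for one camp."""
    def __init__(self, target_password: str):
        self.target = target_password
        self.revealed = ["*"] * len(target_password)  # "*" marks an uncracked character

    def try_character(self, index: int, guess: str) -> bool:
        if self.target[index] == guess:
            self.revealed[index] = guess
            return True
        return False

# Each camp lazily gets its own record (the sample password is illustrative).
camp_progress = defaultdict(lambda: CrackingProgress("3A7b9!"))

def on_guess(camp_id: int, index: int, guess: str) -> str:
    """A correct guess by any virtual object advances the whole camp's progress;
    the returned mask is what would be synchronized to every camp terminal."""
    camp_progress[camp_id].try_character(index, guess)
    return "".join(camp_progress[camp_id].revealed)

on_guess(camp_id=1, index=0, guess="3")   # -> "3*****"
```

Keeping the record keyed by camp rather than by player is what makes the progress open to all virtual objects of the same camp.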
In some embodiments, the password input box pops up when a trigger event of the master virtual object of the first camp on a target object of the first camp is detected. It should be noted that, when a trigger event of any virtual object on the target object is detected, it may first be determined whether the camp of the virtual object executing the trigger event is the same as the camp of the target object: if the two camps are the same, the password input box pops up; otherwise, it does not. Taking the target object as a target building as an example, when the master virtual object of the first camp enters the first target range of a target building of the second camp, the password input box does not pop up.
In some embodiments, the logic for determining whether the camp of the virtual object is the same as the camp of the target object need not be executed; that is, the password input box pops up when a trigger event on the target object by a virtual object of any camp is detected. In this case the target object's camp attribute may be removed, or the target object may carry a dual camp attribute, which is not specifically limited in this embodiment of the present application.
In some embodiments, taking camp-based cracking as an example, assume the unique correct target password includes N characters (N ≥ 1); the virtual objects of the first camp crack the N characters one by one and finally piece the target password together cooperatively, where the target password includes at least one of the following character types: digits, uppercase letters, lowercase letters, Chinese characters, or special symbols. Optionally, the N characters may only be cracked in order from front to back, that is, the cracking of the (i + 1)-th character is unlocked only after the i-th character (1 ≤ i ≤ N) has been cracked; or, out-of-order cracking is allowed, that is, the (i + 1)-th character may be attempted regardless of whether the i-th character has been cracked. The embodiment of the present application does not specifically limit the character-cracking order. Optionally, the value of N is determined dynamically based on the number of virtual objects participating in the target game match in each camp, so that each character of the target password needs to be cracked by a designated player in each camp, enhancing the team-cooperation effect of password cracking.
In some embodiments, for the current moment, in the case that the first camp has not cracked any character of the target password, that is, the virtual objects of the first camp have cracked none of the N characters, the cracking progress of the first camp is 0, where the cracking progress is the ratio of cracked characters among the N characters. Since no plaintext character has been cracked yet, only the input boxes of the ciphertext characters to be cracked are displayed in the virtual scene.
In some embodiments, for the current moment, in the case that the first camp has cracked at least one character of the target password, that is, the virtual objects of the first camp have cracked part of the N characters, the cracking progress of the first camp is greater than 0. Since the cracking progress of the first camp is open to all virtual objects in the first camp, the cracked plaintext characters are displayed in the virtual scene together with the input boxes of the ciphertext characters still to be cracked. In other words, if the master virtual object or other virtual objects in the first camp have already cracked part of the characters in the target password at an earlier moment, the plaintext characters can be displayed directly for the cracked part, while input boxes of ciphertext characters are displayed for the part yet to be cracked. The first camp's cracking progress on the target password is thus clear at a glance, the user is spared from repeatedly recording the cracked plaintext characters, the information-acquisition efficiency of the plaintext characters is improved, and so is the cracking efficiency of the target password.
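As a hedged sketch, the progress ratio and the mixed plaintext/input-box display described above can be computed as follows (the `'*'` placeholder and the function name are assumptions, not from the application):

```python
def render_password_box(target: str, revealed: list) -> tuple:
    """Return (cracking progress, display string): cracked characters are shown
    as plaintext, uncracked ones as an input-slot placeholder '_'."""
    cracked = sum(1 for i, ch in enumerate(revealed) if ch == target[i])
    progress = cracked / len(target)
    display = "".join(ch if ch == target[i] else "_"
                      for i, ch in enumerate(revealed))
    return progress, display

# No character cracked yet: progress 0, only input slots shown.
render_password_box("3A7b9!", ["*"] * 6)                       # (0.0, "______")
# First character cracked: its plaintext is shown to the whole camp.
render_password_box("3A7b9!", ["3", "*", "*", "*", "*", "*"])  # (1/6, "3_____")
```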
In some embodiments, when the game client requests the multimedia resource of the password input box from the server, if the server detects that the virtual objects of the first camp have already cracked part of the N characters, that is, the cracking progress of the first camp is greater than 0, the server returns the cracked plaintext characters to the game client along with the multimedia resource of the password input box; or, because the cracked plaintext characters are themselves displayed in the password input box, they may be regarded as part of the multimedia resource of the password input box. This is not specifically limited in this embodiment of the present application. If the multimedia resource of the password input box received by the game client contains no plaintext character, this indicates that the cracking progress of the first camp is 0 and no plaintext character needs to be displayed in the virtual scene.
Fig. 5 is an interface schematic diagram of a password input box according to an embodiment of the present application. As shown in Fig. 5, taking a target password of 6 characters as an example, the password input box 500 contains 1 plaintext character 501 and 5 ciphertext characters 502 to 506; that is, the 1st character of the target password, already cracked by the virtual objects of the first camp, is currently "3", so the plaintext character 501 is displayed directly at the position of the 1st character in the password input box 500, while the remaining 5 characters have not yet been cracked, so the ciphertext characters 502 to 506 are displayed at the positions of the 2nd to 6th characters. In an exemplary scenario where the target password may only be cracked from front to back and only 1 character may be input at a time, the 1st character has already become the plaintext character 501 at the current moment; the user can then only crack the 2nd ciphertext character 502 before continuing to the 3rd ciphertext character 503, and the virtual resource associated with the target object can be obtained only once all 6 characters are cracked.
In some embodiments, if the decoding order of the characters included in the target password is not limited, for the ciphertext characters to be decoded, the input box corresponding to each ciphertext character to be decoded is displayed, that is, the user can select the input box corresponding to any character desired to be decoded to perform the input operation.
In some embodiments, if the ciphertext characters may only be cracked in order from front to back, the ciphertext characters to be cracked may be divided into a first ciphertext character and second ciphertext characters, where the first ciphertext character is the frontmost ciphertext character to be cracked and the second ciphertext characters are the ciphertext characters other than the first ciphertext character. Because cracking proceeds only from front to back, every character before the first ciphertext character is a plaintext character, or the first ciphertext character is the first character of the target password (i.e., no characters precede it). In this case, the cracking of the second ciphertext characters cannot begin while the first ciphertext character remains uncracked, so only the input box corresponding to the first ciphertext character is displayed alongside the second ciphertext characters; that is, only after the first ciphertext character is cracked and converted into a plaintext character is the input box of the frontmost second ciphertext character unlocked, at which point that character becomes the new first ciphertext character.
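The front-to-back restriction can be sketched as a per-slot UI state that distinguishes the single editable first ciphertext character from the locked second ciphertext characters (the state names are illustrative, not from the application):

```python
def classify_slots(revealed: list) -> list:
    """Per-position state: 'plain' (already cracked), 'input' (the first
    ciphertext character, editable), 'locked' (second ciphertext characters)."""
    states, input_given = [], False
    for ch in revealed:
        if ch != "*":                # "*" marks an uncracked character
            states.append("plain")
        elif not input_given:
            states.append("input")   # the frontmost uncracked slot is editable
            input_given = True
        else:
            states.append("locked")  # all later uncracked slots stay locked
    return states

classify_slots(["3", "*", "*"])   # ['plain', 'input', 'locked']
```

After a correct guess converts the `'input'` slot to `'plain'`, rerunning the classification automatically promotes the next locked slot to the new first ciphertext character.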
304. The terminal determines the number of first target props owned by the master virtual object, where the first target prop is used to provide virtual objects belonging to the first camp with chances to input characters into the password input box.
The first target prop is one kind of target prop, and a target prop is used to provide a virtual object with chances to input characters into the password input box. Optionally, the prop type of the target prop includes: a virtual token, a virtual card, a virtual magnetic card, a virtual chip, and the like; the prop type of the target prop is not specifically limited in this embodiment of the present application.
In some embodiments, the target props are divided into first target props and second target props according to which camp's virtual objects they provide character-input chances for: the first target prop provides character-input chances for virtual objects belonging to the first camp, the second target prop provides character-input chances for virtual objects belonging to the second camp, and the first camp and the second camp are in an antagonistic relationship.
It should be noted that this assumes only two mutually antagonistic camps are involved in the target game match; if the match involves three or more mutually antagonistic camps, a respective target prop is configured for each camp, that is, the number of target prop types matches the number of camps.
Optionally, the first target prop and the second target prop have different display modes, for example, the first target prop and the second target prop have different colors, different shapes, different sizes, or the like, for example, the first target prop is red, and the second target prop is blue, which is not specifically limited in this embodiment of the application.
One possible generation logic for the first target prop and the second target prop is described below: the first target prop appears in the virtual scene when a virtual object belonging to the second camp is defeated, and the second target prop appears in the virtual scene when a virtual object belonging to the first camp is defeated.
In some embodiments, a first target prop or second target prop dropped in the virtual scene can only be picked up within a specified time, and otherwise disappears. The specified time is any value greater than 0, for example 5 seconds, 3 seconds, or 10 seconds, timed from the moment the prop drops into the virtual scene: within the specified time any virtual object may pick it up, but once the specified time is exceeded, the first target prop or second target prop disappears from the virtual scene, that is, it is recovered by the system.
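A minimal sketch of the pickup time window, assuming the system timestamps each drop (the class and parameter names are hypothetical):

```python
class DroppedProp:
    """A target prop lying in the virtual scene with a limited pickup window."""
    def __init__(self, drop_time: float, lifetime: float = 5.0):
        self.drop_time = drop_time  # moment the prop fell into the scene
        self.lifetime = lifetime    # specified time, any value > 0 (e.g. 5 s)

    def can_pick_up(self, now: float) -> bool:
        """Pickup is allowed only within the specified time of the drop;
        afterwards the prop is considered recovered by the system."""
        return now - self.drop_time <= self.lifetime

prop = DroppedProp(drop_time=100.0, lifetime=5.0)
prop.can_pick_up(103.0)   # True: within the specified time
prop.can_pick_up(106.0)   # False: the prop has disappeared
```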
Fig. 6 is a schematic diagram of a first target prop provided in this embodiment of the present application. As shown in Fig. 6, when the user controls the master virtual object of the first camp to defeat an enemy virtual object 601 of the second camp, first target props 602 appear in the virtual scene 600, where the first target props 602 include a first target prop awarded for the defeat and the target props originally held by the enemy virtual object 601. At this point, the user may control the master virtual object to pick up the first target props 602 dropped in the virtual scene 600.
Fig. 7 is a schematic diagram of a second target prop provided in this embodiment of the present application. As shown in Fig. 7, when a friendly virtual object 701 of the first camp is defeated, second target props 702 appear in the virtual scene 700, where the second target props 702 include a second target prop awarded for the defeat and the target props originally held by the friendly virtual object 701. The user may control the master virtual object to pick up the second target props 702 dropped in the virtual scene 700; although a second target prop 702 cannot provide character-input chances to the master virtual object after pickup, picking it up prevents it from being picked up by an enemy virtual object of the second camp, thereby reducing the character-input chances available to the second camp.
Illustratively, when the master virtual object is defeated, because it belongs to the first camp, the system generates a second target prop in the virtual scene as the defeat-reward prop, and at the same time all target props held by the master virtual object (whether first or second target props) drop into the virtual scene. Optionally, the defeat-reward prop automatically enters the victor's backpack while the remaining dropped target props must be picked up manually by the victor; or both the defeat-reward prop and the remaining dropped target props automatically enter the victor's backpack; or both must be picked up manually by the victor. This is not specifically limited in this embodiment of the present application.
Since the first target prop and the second target prop differ only in which camp's virtual objects they provide character-input chances for, that is, both belong to the target props and share certain basic attributes (common attributes) of the target props, those basic attributes are described below.
Optionally, each target prop is configured to provide 1 character-input chance, or each target prop is configured to provide 2 character-input chances. The number of character-input chances each target prop provides is configured by a technician on the server side, or the server randomly configures it in real time to increase the uncertainty and interest of picking up target props; this is not specifically limited in this embodiment of the present application.
Optionally, the target prop is bound to the virtual object holding it; that is, if the virtual object is defeated, eliminated, killed, seriously injured, or removed from the match, or its virtual life value drops to 0 or below a preset life value, every target prop bound to the virtual object disappears, that is, is recovered by the system. The preset life value is any value greater than or equal to 0.
Optionally, the target prop is not bound to the virtual object holding it; that is, the holding virtual object only temporarily holds the right to use the target prop. If the virtual object is defeated, eliminated, killed, seriously injured, or removed from the match, or its virtual life value drops to 0 or below the preset life value, the target props it holds do not disappear but drop into the virtual scene, where they can be picked up by other virtual objects; that is, the system does not recover them.
In some embodiments, taking the unbound case as an example, for the master virtual object controlled by the terminal: in response to the master virtual object being defeated, eliminated, killed, seriously injured, or removed from the match, or its virtual life value dropping to 0 or below the preset life value, every target prop owned by the master virtual object is deducted, that is, the master virtual object loses the right to use each target prop it originally owned; each deducted target prop is then displayed in the virtual scene so that it can be picked up again by other virtual objects.
In some embodiments, taking the master virtual object as an example, the logic for acquiring a target prop includes automatic pickup: when a target prop is displayed in the virtual scene, the user controls the master virtual object to move toward it, and in response to the master virtual object being located within the second target range of any target prop, the master virtual object is controlled to acquire that target prop automatically. The second target range is the range associated with the target prop; when any virtual object is located within the second target range, it can acquire the target prop automatically.
Optionally, the second target range is the range within a preset distance of the target prop, where the preset distance is any value greater than or equal to 0, for example 1 meter, 0.5 meter, or 0.1 meter.
In this case, the game client obtains the distance between the master virtual object and the target prop, and determines that the master virtual object is located within the second target range of the target prop when it detects that this distance is smaller than the preset distance.
Optionally, the second target range is the collision detection range of the target prop; that is, a collision detection box is mounted on the prop model of the target prop, and the range occupied by that collision detection box in the virtual scene is the second target range. Similarly, a collision detection box is also mounted on the character model of the master virtual object and represents the collision detection range of the master virtual object.
In this case, the determination may be made via the collision detection ranges: when the collision detection range of the master virtual object intersects the collision detection range of the target prop, in other words, when the collision detection box of the master virtual object collides with the collision detection box of the target prop, it is determined that the master virtual object is located within the second target range of the target prop.
Fig. 8 is a schematic diagram of a collision detection range of a master control virtual object according to an embodiment of the present application, and as shown in fig. 8, a collision detection range 801 corresponding to a master control virtual object 800 is shown, and fig. 9 is a schematic diagram of a collision detection range of a target prop according to an embodiment of the present application, and as shown in fig. 9, a collision detection range 901 corresponding to a target prop 900 is shown. When the user controls the main control virtual object 800 to move to the target prop 900 in the virtual scene, and when there is an intersection between the collision detection range 801 of the main control virtual object 800 and the collision detection range 901 of the target prop 900, it represents that the main control virtual object 800 is located in the second target range of the target prop 900, that is, the logic for acquiring the target prop 900 is triggered.
In other embodiments, taking the master virtual object as an example, the logic for acquiring a target prop includes manual pickup via a pickup control floated near the target prop: when a target prop is displayed in the virtual scene, the user controls the master virtual object to move toward it; in response to the master virtual object being located within the second target range of any target prop, the pickup control of that target prop is displayed; the user performs a trigger operation on the pickup control, and in response to that trigger operation the master virtual object is controlled to acquire the target prop. Optionally, the trigger operation on the pickup control includes: a click operation, a double-click operation, a long-press operation, a voice instruction, a gesture instruction, and the like; the type of trigger operation is not specifically limited in this embodiment of the present application. Optionally, the user may also perform a trigger operation on the target prop itself to control the master virtual object to acquire it; illustratively, clicking either the target prop or its pickup control controls the master virtual object to acquire the target prop.
In some embodiments, a target prop obtained by the master virtual object can be shared with every virtual object in the first camp; the user may choose whether to share, or the system completes the sharing automatically. With automatic sharing, the virtual objects of the same camp in effect jointly maintain a prop pool of target props: any target prop in the pool may be used by any of them, and a target prop newly obtained by any virtual object is automatically added to the pool, improving the efficiency of cooperative password cracking.
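Under the automatic-sharing variant, the camp's shared prop pool might look like the following sketch (the class and its methods are hypothetical, not from the application):

```python
class CampPropPool:
    """Prop pool jointly maintained by all virtual objects of one camp."""
    def __init__(self):
        self.count = 0              # shared count of first target props

    def add(self, n: int = 1):
        """A teammate's newly picked-up target props are auto-shared."""
        self.count += n

    def consume(self) -> bool:
        """Spend one prop for one character-input attempt, if any remain."""
        if self.count > 0:
            self.count -= 1
            return True
        return False

pool = CampPropPool()
pool.add(2)       # two teammates each pick up one first target prop
pool.consume()    # True: any camp member may spend a pooled prop
pool.count        # 1 remaining
```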
In step 304, the master virtual object may carry multiple target props at the current moment, possibly including both first target props and second target props. Because a second target prop cannot provide character-input chances for the master virtual object of the first camp, the number of first target props owned by the master virtual object needs to be determined. Optionally, the game client sends a prop-quantity query request carrying the object identifier of the master virtual object and the prop identifier of the first target prop to the server; the server, using the prop identifier as an index, queries the game data corresponding to the object identifier to obtain the number of first target props and returns that number to the terminal.
305. The terminal determines, based on the number of first target props owned by the master virtual object, the number of character-input chances corresponding to the master virtual object, where this number represents the maximum number of times characters may be input into the password input box.
In some embodiments, the game client determines the number of character-input chances each first target prop can provide, and multiplies the number of first target props owned by the master virtual object by that per-prop number to obtain the number of character-input chances corresponding to the master virtual object.
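The multiplication in step 305 amounts to the following one-liner (the per-prop chance count is an assumed, server-configured value):

```python
def character_input_times(num_first_props: int, times_per_prop: int = 1) -> int:
    """Input chances = props owned x chances each prop provides."""
    return num_first_props * times_per_prop

character_input_times(3)                     # 3 chances at 1 per prop
character_input_times(3, times_per_prop=2)   # 6 chances at 2 per prop
```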
In some embodiments, the game client does not execute steps 304-305 itself but sends a query request for the number of character-input chances to the server; the server executes steps 304-305 and returns the number of character-input chances corresponding to the master virtual object to the game client, saving the computing resources the game client occupies on the terminal.
In some embodiments, if the target props are not divided into first and second target props, steps 304-305 may be replaced with: acquiring the number of target props owned by the master virtual object, and determining the number of character-input chances of the master virtual object based on that number.
In some embodiments, since the target prop is what provides character-input chances, in the case that the master virtual object owns no first target prop it has no chance to input a password, and the password input box is set to a disabled state in which input operations based on it are not supported; in the case that the master virtual object owns a first target prop, meaning it has at least one chance to input a password, step 306 below is entered.
306. The terminal sets the password input box to an enabled state, in which input operations based on the password input box are supported.
In some embodiments, the terminal sets the password input box to the enabled state when the master virtual object owns a first target prop, and in the enabled state the user may perform an input operation on the password input box at any time to enter a candidate password. That is, for a ciphertext character to be cracked, if the password input this time hits that ciphertext character of the target password, the ciphertext character is converted into a plaintext character to indicate that it has been cracked; while the number of character-input chances allows, the user may control the master virtual object to make multiple attempts to crack some or all of the ciphertext characters.
307. The terminal acquires the input password in response to an input operation based on the password input box.
In some embodiments, after the password input box is displayed in the virtual scene, the user inputs a password into it, or the user controls the master virtual object to input a password into it; then, in response to the input operation based on the password input box, the terminal acquires the input password. The user may input a single character at a time or multiple characters at a time, provided the number of currently owned character-input chances is not exceeded; this is not specifically limited in this embodiment of the present application.
For example, the input operation is an input operation based on a text input method, or the input operation is an input operation of converting speech into text, or an input button for each character is displayed in a virtual scene, and the user can input the corresponding character as the password to be input this time by clicking the input button.
In some embodiments, in response to each input operation based on the password input box, first target props owned by the master virtual object are deducted: for example, if each first target prop provides one character-input chance, one first target prop is deducted each time an input operation is detected; or, if one character-input chance requires two first target props, two first target props are deducted each time an input operation is detected. This is not specifically limited in this embodiment of the present application.
In some embodiments, when the input password is wrong, that is, it does not hit the target password, a password-error prompt is shown or no processing is performed, and the virtual resource associated with the target object remains blocked; the user may then input a new password. When the input password is correct, that is, it hits the target password, a password-correct prompt is shown and the following step 308 is executed.
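Steps 305-307 around a single input attempt can be sketched together: the attempt first costs a first target prop, then the entered character is checked against the target password (all identifiers are illustrative; the real division of work between client and server is not specified here):

```python
def handle_input(prop_count: int, target: str, revealed: list,
                 index: int, guess: str):
    """One input attempt: returns (remaining props, hit?, updated mask)."""
    if prop_count <= 0:
        # No chances left: the password input box stays disabled.
        return prop_count, False, revealed
    prop_count -= 1                        # deduct one prop per attempt
    if target[index] == guess:
        # Correct: the ciphertext character converts to plaintext.
        revealed = revealed[:index] + [guess] + revealed[index + 1:]
        return prop_count, True, revealed
    # Wrong: prompt an error; the virtual resource stays blocked.
    return prop_count, False, revealed

handle_input(2, "3A7", ["*", "*", "*"], 0, "3")  # (1, True, ['3', '*', '*'])
handle_input(1, "3A7", ["*", "*", "*"], 0, "9")  # (0, False, ['*', '*', '*'])
```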
308. The terminal acquires the virtual resource associated with the target object in the case that the input password hits the target password.
The target password is the unlocking password corresponding to the virtual resource: the virtual resource can be acquired only when the user inputs the target password; otherwise, if the input is not the target password, the user is prompted that the password is wrong or no processing is performed.
The virtual resource is a resource associated with the target object, for example a resource stored in the target object or bound to it. The virtual resource has a blocked state, an unoccupied state, and an occupied state: when the target game match starts, the virtual resource is in the blocked state; only by cracking the target password can the user switch it from the blocked state to the unoccupied state, after which the user can control the master virtual object to acquire it, switching it from the unoccupied state to the occupied state. Optionally, because its unlocking condition is demanding and it is difficult to acquire, the virtual resource plays a key role in the target game match, for example by providing a powerful assisting effect or a high attack attribute, or by deciding the outcome of the match; this is not specifically limited in this embodiment of the present application.
In some embodiments, if the input password hits the target password within the number of character inputs allowed, that is, if the target password is obtained based on the password input box, this indicates that all characters of the target password have been decoded, and the virtual resource associated with the target object is unlocked; for example, the virtual resource is switched from the blocked state to the unoccupied state and is then displayed in the virtual scene. The user can control the main control virtual object to obtain the virtual resource and switch it from the unoccupied state to the occupied state; for example, the virtual resource in the unoccupied state is delivered into the virtual scene, and the user controls the main control virtual object to pick it up manually, whereupon the virtual resource becomes occupied. The process of manually picking up the virtual resource is similar to that of manually picking up the target prop, and is not described here again.
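The state transitions just described — blocked until the target password is deciphered, unoccupied once unlocked, occupied after pickup — can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical and not taken from the application:

```python
from enum import Enum

class ResourceState(Enum):
    BLOCKED = "blocked"        # locked behind the target password
    UNOCCUPIED = "unoccupied"  # unlocked but not yet picked up
    OCCUPIED = "occupied"      # held by a virtual object

class VirtualResource:
    def __init__(self, target_password: str):
        self._password = target_password
        self.state = ResourceState.BLOCKED
        self.owner = None

    def try_unlock(self, input_password: str) -> bool:
        """Blocked -> unoccupied, only when the input hits the target password."""
        if self.state is ResourceState.BLOCKED and input_password == self._password:
            self.state = ResourceState.UNOCCUPIED
            return True
        return False

    def pick_up(self, virtual_object: str) -> bool:
        """Unoccupied -> occupied, when a virtual object obtains the resource."""
        if self.state is ResourceState.UNOCCUPIED:
            self.state = ResourceState.OCCUPIED
            self.owner = virtual_object
            return True
        return False
```

In the automatic-issue variant, the server would effectively perform both transitions in a single step.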
In some embodiments, when the target password is obtained based on the password input box, the main control virtual object automatically obtains the virtual resource, that is, the virtual resource is switched directly from the blocked state to the occupied state. For example, the server directly issues the virtual resource to the backpack or supply bar of the main control virtual object, or the server binds the main control virtual object to the virtual resource, or the server binds the first camp to which the main control virtual object belongs to the virtual resource, which is not specifically limited in the embodiments of the present application.
Optionally, the game client obtains a multimedia resource of the virtual resource, renders the multimedia resource of the virtual resource based on the rendering engine, and displays the virtual resource in the virtual scene, where the multimedia resource of the virtual resource includes: name of virtual resource, object model, texture map, identification pattern, etc.
In the case that the virtual resource needs to be picked up manually, the object model and a pickup control of the virtual resource are displayed in the virtual scene; after the user performs a trigger operation on the pickup control, the main control virtual object is controlled, in response to that trigger operation, to pick up the virtual resource.
In the case that the virtual resource is issued automatically, an acquisition success prompt for the virtual resource is displayed directly in the virtual scene, where the acquisition success prompt is used to prompt that the main control virtual object has acquired the virtual resource.
Optionally, the game client loads the multimedia resource of the virtual resource at the moment the game match starts, or loads it only when the target password is hit; the loading time of the multimedia resource of the virtual resource is not specifically limited in the embodiments of the present application.
In some embodiments, because the virtual resource plays a critical role in the target game match, after a virtual object of any camp deciphers the target password and acquires the virtual resource, that camp can be determined as the winner of the target game match; as another example, when a virtual object of any camp moves the virtual resource to a target position, the camp to which the virtual object belongs is taken as the winner of the target game match. This intensifies the competition for the virtual resource, the target prop, and the like, and adds rich and varied interaction modes.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
In the method provided by the embodiments of the present application, certain virtual resources can be obtained only by approaching the target object while owning the target prop and then deciphering the target password. Because these virtual resources can play critical roles in the game match, when the user participates in confrontation among the virtual objects, the user also needs to complete the password deciphering to compete for these virtual resources, which enriches the interaction modes of the game match and improves human-computer interaction efficiency.
Fig. 10 is a schematic diagram of an interaction method based on a virtual scene according to an embodiment of the present application. As shown in fig. 10, a password exploration mode among the game modes of the target game is taken as an example; the embodiment includes the following steps:
Step one: in the game client, select the game match mode as the password exploration mode, and start the target game match.
Step two: judge whether the main control virtual object has defeated an enemy. If the enemy is defeated, proceed to step three; otherwise, return to step one.
That is, the user controls the main control virtual object of the first camp to confront an enemy virtual object of the second camp. The main control virtual object inflicts a damage value on the enemy virtual object, which is deducted from the enemy virtual object's virtual life value; when the virtual life value of the enemy virtual object drops to 0, the enemy virtual object is determined to be defeated, and otherwise it is determined not to be defeated.
Optionally, the user controls the main control virtual object to shoot the enemy virtual object with a shooting type prop. If the projectile hits the enemy virtual object, the virtual life value of the enemy virtual object is deducted, and when that virtual life value drops to 0, the enemy virtual object is determined to be defeated.
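The life-value bookkeeping behind this defeat check can be sketched with a small helper (a simplified model; the function name is hypothetical):

```python
def apply_damage(life_value: int, damage: int):
    """Deduct a damage value from a virtual life value; the virtual object
    is determined to be defeated once the life value drops to 0."""
    life_value = max(0, life_value - damage)  # the life value never goes negative
    return life_value, life_value == 0        # (remaining life, defeated?)
```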
Fig. 11 is a schematic diagram of a virtual scene provided in an embodiment of the present application. As shown in fig. 11, a virtual scene 1100 includes a main control virtual object 1101 and an enemy virtual object 1102. After the main control virtual object 1101 aims at the enemy virtual object 1102 with the shooting type prop, a firing key can be pressed to trigger the launch of a projectile, at which point the following detection logic is executed: determine a ray from the launching point of the shooting type prop along the aiming direction, where the start point of the ray is the launching point, the direction of the ray is the aiming direction, and the length of the ray is the range (the farthest shooting distance) of the shooting type prop. If the ray passes through the collision detection range of the enemy virtual object 1102, that is, the ray intersects that collision detection range, it is determined that the projectile hits the enemy virtual object 1102; the damage value of the projectile to the enemy virtual object 1102 is then calculated, and if the virtual life value of the enemy virtual object 1102 minus the damage value is less than or equal to 0, the enemy virtual object 1102 is determined to be defeated.
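The detection logic above can be sketched as a ray-versus-sphere test, under the assumption that the collision detection range is a sphere centered on the enemy virtual object (the application does not specify the shape; all names are hypothetical):

```python
import math

def ray_hits_target(origin, direction, max_range, target_center, target_radius):
    """Return True if a ray of limited length (the prop's farthest shooting
    range) intersects a spherical collision detection range.
    origin / direction / target_center are (x, y, z) tuples; direction
    need not be normalized."""
    length = math.sqrt(sum(d * d for d in direction))
    unit = tuple(d / length for d in direction)
    to_center = tuple(t - o for o, t in zip(origin, target_center))
    # Closest point to the sphere center on the ray segment [0, max_range]
    t_closest = max(0.0, min(max_range, sum(a * b for a, b in zip(to_center, unit))))
    closest = tuple(o + t_closest * u for o, u in zip(origin, unit))
    dist_sq = sum((c - t) ** 2 for c, t in zip(closest, target_center))
    return dist_sq <= target_radius ** 2
```

Clamping to `[0, max_range]` is what makes a target beyond the prop's range count as a miss even when it lies exactly on the aiming line.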
Step three: the enemy drops a virtual token (namely, the target prop) into the virtual scene.
That is, after the main control virtual object defeats the enemy virtual object, each target prop owned by the enemy virtual object drops into the virtual scene, and the main control virtual object can then pick up each dropped target prop.
Step four: judge whether the main control virtual object is close to a virtual token. If so, proceed to step five; otherwise, return to step three.
That is, the user controls the main control virtual object to approach the target prop; if the main control virtual object is located within the second target range (such as the collision detection range) of any target prop, proceed to step five; otherwise, return to step three.
Step five: the main control virtual object acquires the virtual token.
That is, the main control virtual object automatically picks up the target prop after entering its second target range; or, after the main control virtual object enters the second target range of the target prop, a pickup control for the target prop appears, and the main control virtual object is controlled to pick up the target prop manually based on the user's trigger operation on the pickup control.
Step six: judge whether the main control virtual object has been defeated. If so, proceed to step seven; otherwise, proceed to step eight.
That is, during the main control virtual object's participation in the target game match, detect whether its virtual life value has dropped to 0. If so, determine that the main control virtual object is defeated and proceed to step seven; otherwise, proceed to step eight.
Step seven: all virtual tokens held by the main control virtual object drop into the virtual scene; return to step two.
That is, when the main control virtual object is defeated, all target props owned by the main control virtual object drop into the virtual scene, and the flow returns to step two.
Step eight: the main control virtual object searches for the password station (namely, the target building).
In the embodiments of the present application, the target object is a target building, as an example. When the main control virtual object has not been defeated, the user controls the main control virtual object to search for the target building and approach it.
Step nine: judge whether the main control virtual object has entered the password station. If so, proceed to step ten; otherwise, return to step eight.
That is, judge whether the main control virtual object is located within the first target range of the target building. If so, it is determined that the main control virtual object has entered the password station, and the flow proceeds to step ten; otherwise, return to step eight.
Step ten: pop up the password input box, and input a password.
That is, entering the password station means that a trigger event of the main control virtual object on the target building (i.e., the target object) is detected; the password input box then pops up, and the password input by the user in the password input box is acquired.
Step eleven: judge whether the current ciphertext character is decoded successfully. If decoding succeeds, the character at the corresponding position in the target password is hit; proceed to step thirteen. If decoding fails, the character at the corresponding position in the target password is missed; proceed to step twelve.
That is, judge whether the input character hits the character at the corresponding position in the target password. If so, proceed to step thirteen; otherwise, proceed to step twelve.
Step twelve: decoding of the current ciphertext character fails; if the virtual tokens are not yet used up, return to step ten to input the password again.
That is, if the character input this time does not hit the character at the corresponding position in the target password, decoding of the current ciphertext character fails. It is then necessary to judge whether the remaining number of target props can support another character input, that is, whether any target props remain; if so, the flow returns to step ten and the password is input again.
Step thirteen: the current ciphertext character is decoded successfully and is switched to a plaintext character, that is, the current character in the target password is unlocked.
That is, if the input character hits the character at the corresponding position in the target password, the current ciphertext character is decoded successfully and switched to a plaintext character, and the plaintext character is synchronized to the clients of the other users in the same camp.
Step fourteen: judge whether all characters in the target password have been decoded. If all characters have been decoded, proceed to step fifteen.
That is, during deciphering, judge whether all characters in the target password have been deciphered; if every character of the target password has been switched to a plaintext character, the target password as a whole is completely deciphered, and the flow proceeds to step fifteen.
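Steps ten through fourteen can be condensed into one password-input attempt: each input consumes a virtual token, and the guessed character either decodes the current ciphertext character into plaintext or misses. The sketch below is a simplified model with hypothetical names; it assumes one token per input and that each guess targets the first position still in ciphertext:

```python
def attempt_decode(target_password, revealed, guess, tokens):
    """One input attempt. `revealed` holds decoded plaintext characters,
    with None at positions still in ciphertext. Returns the updated state
    as (revealed, remaining tokens, fully decoded?)."""
    if tokens <= 0:  # no target props left: no further input is possible
        return revealed, tokens, all(c is not None for c in revealed)
    tokens -= 1  # every input operation deducts one target prop
    pos = next((i for i, c in enumerate(revealed) if c is None), None)
    if pos is not None and guess == target_password[pos]:
        revealed[pos] = guess  # ciphertext character switched to plaintext
    return revealed, tokens, all(c is not None for c in revealed)
```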
Step fifteen: victory is achieved, and the flow ends.
In the embodiments of the present application, by providing the game match mode based on the password exploration mode, the user not only needs to control the virtual object to perform confrontation actions to obtain virtual tokens, but also needs to decipher the password of the password station to unlock the virtual resources (such as specified supplies). This greatly enriches the interaction and confrontation modes of the game match, introduces the fresh interaction mode of password deciphering, and improves human-computer interaction efficiency.
Fig. 12 is a schematic structural diagram of an interactive apparatus based on a virtual scene according to an embodiment of the present application, and as shown in fig. 12, the apparatus includes:
a first display module 1201, configured to display a target object in a virtual scene of a target match;
a second display module 1202, configured to display a password input box in response to a trigger event of the master virtual object to the target object;
a first obtaining module 1203, configured to obtain, based on the password input box, an input password when the main control virtual object already has a target prop, where the target prop is used to provide times for inputting characters into the password input box;
a second obtaining module 1204, configured to acquire the virtual resource associated with the target object in a case that the input password hits the target password.
With the apparatus provided in the embodiments of the present application, certain virtual resources can be obtained only by approaching the target object while owning the target prop and then deciphering the target password. Because these virtual resources can play critical roles in the game match, when the user participates in confrontation actions among the virtual objects, the user also needs to complete the password deciphering to compete for these virtual resources, which enriches the interaction modes of the game match and improves human-computer interaction efficiency.
In one possible implementation, based on the apparatus composition of fig. 12, the second display module 1202 includes:
and the display unit is used for displaying the decoded plaintext character and displaying the input frame of the ciphertext character to be decoded.
In one possible embodiment, the display unit is further configured to:
displaying an input frame corresponding to each ciphertext character to be decoded; or
displaying an input frame corresponding to a first ciphertext character as well as a second ciphertext character, where the first ciphertext character is the first ciphertext character to be decoded, and the second ciphertext character is a ciphertext character other than the first ciphertext character.
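A minimal sketch of how such a display unit might render the password row — decoded plaintext characters shown as-is, ciphertext characters still to be decoded shown as empty input frames (the underscore is a stand-in; the function name is hypothetical):

```python
def render_password_boxes(revealed):
    """`revealed` holds decoded plaintext characters, with None at
    positions still in ciphertext; each None is drawn as an empty frame."""
    return " ".join(c if c is not None else "_" for c in revealed)
```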
In one possible embodiment, the target object includes a target building, the triggering event includes that the master virtual object is located within a first target range of the target building, and the apparatus further includes, based on the apparatus composition of fig. 12:
the first determining module is configured to determine that the master virtual object is located within a first target range of the target building when the collision detection range of the master virtual object intersects with the collision detection range of the target building.
In a possible embodiment, based on the apparatus composition of fig. 12, the apparatus further comprises:
the setting module is used for setting the password input box into an enabled state under the condition that the main control virtual object has the target prop, and the enabled state supports the input operation based on the password input box;
the setting module is further configured to set the password input box to a disabled state under the condition that the main control virtual object does not own the target prop, and the disabled state does not support an input operation based on the password input box.
In a possible embodiment, the target prop includes a first target prop and a second target prop, where the first target prop is used to provide times of inputting characters to the virtual objects belonging to a first camp, the second target prop is used to provide times of inputting characters to the virtual objects belonging to a second camp, and the first camp and the second camp are in a confrontational relationship.
In a possible embodiment, based on the apparatus composition of fig. 12, the apparatus further comprises:
and the second determining module is used for determining the character input times of the main control virtual object based on the number of the first target props owned by the main control virtual object under the condition that the main control virtual object belongs to the first marketing, wherein the character input times are used for representing the times of inputting characters into the password input box at most.
In a possible embodiment, the first target prop appears in the virtual scene when a virtual object belonging to the second camp is defeated, and the second target prop appears in the virtual scene when a virtual object belonging to the first camp is defeated.
In a possible embodiment, based on the apparatus composition of fig. 12, the apparatus further comprises:
a first deduction module, configured to deduct each target prop owned by the main control virtual object in response to the main control virtual object being defeated;
the first display module 1201 is further configured to display each deducted target prop in the virtual scene.
In a possible implementation, based on the apparatus composition of fig. 12, the apparatus further comprises a control module for:
responding to the main control virtual object being located within the second target range of any target prop, controlling the main control virtual object to obtain the target prop; or
responding to the main control virtual object being located within the second target range of any target prop, displaying a pickup control of the target prop; and responding to a trigger operation on the pickup control, controlling the main control virtual object to obtain the target prop.
In a possible embodiment, based on the apparatus composition of fig. 12, the apparatus further comprises:
and the third determining module is used for determining that the main control virtual object is located in the second target range of the target prop under the condition that the collision detection range of the main control virtual object and the collision detection range of the target prop have intersection.
In a possible embodiment, based on the apparatus composition of fig. 12, the apparatus further comprises:
and the second deduction module is used for responding to any input operation based on the password input box and deducting the target prop owned by the main control virtual object.
In one possible embodiment, the winning condition of the target game match is: when a virtual object of any camp moves the virtual resource to the target position, the camp to which the virtual object belongs is taken as the winner of the target game match.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the interaction device based on the virtual scene, when the interaction is implemented based on the virtual scene, only the division of the functional modules is used for illustration, and in practical applications, the function distribution can be completed by different functional modules according to needs, that is, the internal structure of the electronic device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the interaction apparatus based on a virtual scene provided in the above embodiments and the interaction method based on a virtual scene are of the same concept, and specific implementation processes thereof are described in detail in the interaction method based on a virtual scene, and are not described herein again.
Fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application. Optionally, the device types of the terminal 1300 include: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Optionally, processor 1301 includes one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. Optionally, the processor 1301 is implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). In some embodiments, processor 1301 includes a main processor and a coprocessor, the main processor being a processor for Processing data in the wake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 1301 further includes an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
In some embodiments, memory 1302 includes one or more computer-readable storage media, which are optionally non-transitory. Optionally, memory 1302 also includes high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1302 is used to store at least one program code for execution by the processor 1301 to implement the virtual scene based interaction method provided by various embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral can be connected to the peripheral interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, and power supply 1308.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Optionally, the radio frequency circuit 1304 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 also includes NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). Optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal can be input to the processor 1301 as a control signal for processing. Optionally, the display 1305 is also used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1305 is one, providing the front panel of terminal 1300; in other embodiments, display 1305 is at least two, either on different surfaces of terminal 1300 or in a folded design; in still other embodiments, display 1305 is a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Even more optionally, the display 1305 is arranged in a non-rectangular irregular figure, i.e. a shaped screen. Alternatively, the Display 1305 is made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 also includes a flash. Optionally, the flash is a monochrome temperature flash, or a bi-color temperature flash. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and is used for light compensation under different color temperatures.
In some embodiments, audio circuitry 1307 includes a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo sound acquisition or noise reduction purposes, a plurality of microphones are respectively disposed at different positions of terminal 1300. Optionally, the microphone is an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. Alternatively, the speaker is a conventional membrane speaker, or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to human, but also the electric signal can be converted into a sound wave inaudible to human for use in distance measurement or the like. In some embodiments, audio circuit 1307 also includes a headphone jack.
Power supply 1308 is used to provide power to various components within terminal 1300. Optionally, power source 1308 is alternating current, direct current, disposable battery, or rechargeable battery. When power source 1308 comprises a rechargeable battery, the rechargeable battery supports wired charging or wireless charging. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, optical sensor 1314, and proximity sensor 1315.
In some embodiments, acceleration sensor 1311 detects acceleration in three coordinate axes of the coordinate system established with terminal 1300. For example, the acceleration sensor 1311 is used to detect components of gravitational acceleration in three coordinate axes. Optionally, the processor 1301 controls the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 is also used for acquisition of motion data of a game or a user.
In some embodiments, the gyro sensor 1312 detects the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 and the acceleration sensor 1311 cooperate to acquire the 3D motion of the user on the terminal 1300. Processor 1301 performs the following functions based on the data collected by gyroscope sensor 1312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Optionally, pressure sensor 1313 is disposed on a side bezel of terminal 1300 and/or underlying display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 can be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1314 is used to collect ambient light intensity. In one embodiment, the processor 1301 controls the display brightness of the display screen 1305 based on the intensity of ambient light collected by the optical sensor 1314. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is reduced. In another embodiment, the processor 1301 also dynamically adjusts the shooting parameters of the camera assembly 1306 based on the ambient light intensity collected by the optical sensor 1314.
A proximity sensor 1315, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1300. The proximity sensor 1315 collects the distance between the user and the front surface of the terminal 1300. In one embodiment, when the proximity sensor 1315 detects that the distance between the user and the front surface of the terminal 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1315 detects that the distance between the user and the front surface of the terminal 1300 gradually increases, the processor 1301 controls the display screen 1305 to switch from the dark-screen state back to the bright-screen state.
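The proximity-driven switching between bright-screen and dark-screen states can be sketched as follows. This is an assumed illustration: the description only specifies the direction of the distance change, so the state logic below is a minimal interpretation rather than the patented implementation:

```python
def next_screen_state(prev_distance_cm: float,
                      curr_distance_cm: float,
                      state: str) -> str:
    """Return the next screen state ("bright" or "dark") from two
    consecutive proximity-sensor readings. A decreasing distance darkens
    the screen; an increasing distance restores the bright state."""
    if curr_distance_cm < prev_distance_cm:
        return "dark"    # user approaching the front surface
    if curr_distance_cm > prev_distance_cm:
        return "bright"  # user moving away
    return state         # no change in distance: keep the current state
```

A production implementation would debounce the readings and use fixed near/far thresholds instead of raw deltas, so sensor noise cannot toggle the screen.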
Those skilled in the art will appreciate that the configuration shown in fig. 13 does not limit the terminal 1300, which can include more or fewer components than shown, combine some components, or adopt a different arrangement of components.
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1400 may vary considerably in configuration or performance, and includes one or more processors (CPUs) 1401 and one or more memories 1402, where the memory 1402 stores at least one computer program that is loaded and executed by the one or more processors 1401 to implement the virtual scene based interaction method of the foregoing embodiments. Optionally, the electronic device 1400 further has components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, as well as other components for implementing device functions, which are not described again here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including at least one computer program that is executable by a processor in a terminal to perform the virtual scene based interaction method of the above embodiments. For example, the computer-readable storage medium includes a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, comprising one or more program codes stored in a computer-readable storage medium. One or more processors of the electronic device read the one or more program codes from the computer-readable storage medium and execute them, so that the electronic device performs the virtual scene based interaction method of the above embodiments.
Those skilled in the art will appreciate that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing relevant hardware; optionally, the program is stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description covers only exemplary embodiments of the present application and is not intended to be limiting; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (17)

1. An interaction method based on a virtual scene is characterized by comprising the following steps:
displaying a target object in a virtual scene of the target game;
responding to a trigger event of the main control virtual object to the target object, and displaying a password input box;
under the condition that the main control virtual object has a target prop, acquiring an input password based on the password input box, wherein the target prop is used for providing a number of times that characters are allowed to be input into the password input box;
and acquiring the virtual resource associated with the target object when the input password hits a target password.
2. The method of claim 1, wherein displaying the password entry box comprises:
displaying an input box for the decoded plaintext characters and the ciphertext characters to be decoded.
3. The method of claim 2, wherein displaying the input box for the ciphertext characters to be decoded comprises:
displaying an input box corresponding to each ciphertext character to be decoded; or,
displaying an input box corresponding to a first ciphertext character together with second ciphertext characters, wherein the first ciphertext character is the first-ranked ciphertext character to be decoded, and the second ciphertext characters are the ciphertext characters other than the first ciphertext character.
4. The method of claim 1, wherein the target object comprises a target building, wherein the triggering event comprises the master virtual object being located within a first target range of the target building, and wherein the method further comprises:
determining that the master virtual object is located within a first target range of the target building when there is an intersection between the collision detection range of the master virtual object and the collision detection range of the target building.
5. The method of claim 1, further comprising:
setting the password input box to be in an enabled state under the condition that the main control virtual object has the target prop, wherein the enabled state supports input operation based on the password input box;
and under the condition that the main control virtual object does not possess the target prop, setting the password input box to a forbidden state, wherein the forbidden state does not support the input operation based on the password input box.
6. The method of claim 1, wherein the target props comprise a first target prop and a second target prop, the first target prop providing character-input chances to virtual objects belonging to a first lineup, and the second target prop providing character-input chances to virtual objects belonging to a second lineup, the first lineup and the second lineup being in an antagonistic relationship.
7. The method of claim 6, further comprising:
and under the condition that the main control virtual object belongs to the first lineup, determining a number of character inputs for the main control virtual object based on the number of first target props owned by the main control virtual object, wherein the number of character inputs represents the maximum number of times characters are allowed to be input into the password input box.
8. The method of claim 6, wherein the first target prop appears in the virtual scene when a virtual object belonging to the second lineup is defeated, and the second target prop appears in the virtual scene when a virtual object belonging to the first lineup is defeated.
9. The method of claim 1, further comprising:
deducting each target prop owned by the master virtual object in response to the master virtual object being defeated;
and displaying each deducted target prop in the virtual scene.
10. The method of claim 1, further comprising:
responding to the main control virtual object being located within a second target range of any target prop, controlling the main control virtual object to obtain the target prop; or,
responding to the main control virtual object being located within a second target range of any target prop, displaying a pickup control of the target prop; and responding to a trigger operation on the pickup control, controlling the main control virtual object to obtain the target prop.
11. The method of claim 10, further comprising:
and under the condition that the collision detection range of the main control virtual object and the collision detection range of the target prop have intersection, determining that the main control virtual object is located in a second target range of the target prop.
12. The method of claim 1, further comprising:
and deducting the target prop owned by the main control virtual object in response to any input operation based on the password input box.
13. The method of claim 1, wherein the winning condition for the target game is: when the virtual object of any camp moves the virtual resource to the target position, the camp to which that virtual object belongs is taken as the winner of the target game.
14. An interactive device based on a virtual scene, the device comprising:
the first display module is used for displaying a target object in a virtual scene of the target game;
the second display module is used for responding to a trigger event of the main control virtual object to the target object and displaying a password input box;
a first obtaining module, configured to obtain an input password based on the password input box when the main control virtual object has a target prop, wherein the target prop is used for providing a number of times that characters are allowed to be input into the password input box;
a second obtaining module, configured to obtain, when the input password hits a target password, a virtual resource associated with the target object.
15. An electronic device, comprising one or more processors and one or more memories, wherein at least one computer program is stored in the one or more memories, and loaded and executed by the one or more processors to implement the virtual scene based interaction method according to any one of claims 1 to 13.
16. A storage medium having at least one computer program stored therein, the at least one computer program being loaded and executed by a processor to implement the virtual scene-based interaction method of any one of claims 1 to 13.
17. A computer program product, characterized in that the computer program product comprises at least one computer program which is loaded and executed by a processor to implement the virtual scene-based interaction method of any one of claims 1 to 13.
CN202210021971.9A 2022-01-10 2022-01-10 Interaction method and device based on virtual scene, electronic equipment and storage medium Pending CN114344914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210021971.9A CN114344914A (en) 2022-01-10 2022-01-10 Interaction method and device based on virtual scene, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210021971.9A CN114344914A (en) 2022-01-10 2022-01-10 Interaction method and device based on virtual scene, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114344914A true CN114344914A (en) 2022-04-15

Family

ID=81109710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210021971.9A Pending CN114344914A (en) 2022-01-10 2022-01-10 Interaction method and device based on virtual scene, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114344914A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051368A1 (en) * 2022-09-09 2024-03-14 腾讯科技(深圳)有限公司 Information display method and apparatus, storage medium and electronic device
CN116271830A (en) * 2023-02-09 2023-06-23 广州延利网络科技有限公司 Behavior control method, device, equipment and storage medium for virtual game object
CN116271830B (en) * 2023-02-09 2023-12-05 广州延利网络科技有限公司 Behavior control method, device, equipment and storage medium for virtual game object

Similar Documents

Publication Publication Date Title
CN110721468B (en) Interactive property control method, device, terminal and storage medium
CN111408133B (en) Interactive property display method, device, terminal and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN109529356B (en) Battle result determining method, device and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN114339368B (en) Display method, device and equipment for live event and storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN112973117B (en) Interaction method of virtual objects, reward issuing method, device, equipment and medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN110585706B (en) Interactive property control method, device, terminal and storage medium
CN113117331B (en) Message sending method, device, terminal and medium in multi-person online battle program
CN113181645A (en) Special effect display method and device, electronic equipment and storage medium
CN114344914A (en) Interaction method and device based on virtual scene, electronic equipment and storage medium
CN112675544B (en) Method, device, equipment and medium for acquiring virtual prop
CN111672108A (en) Virtual object display method, device, terminal and storage medium
CN111672131A (en) Virtual article acquisition method, device, terminal and storage medium
CN112076476A (en) Virtual object control method and device, electronic equipment and storage medium
CN111589144B (en) Virtual character control method, device, equipment and medium
CN112774196A (en) Virtual object control method, device, terminal and storage medium
CN113813606A (en) Virtual scene display method, device, terminal and storage medium
CN111589102B (en) Auxiliary tool detection method, device, equipment and storage medium
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN111659122A (en) Virtual resource display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40069744

Country of ref document: HK