CN110585706A - Interactive property control method, device, terminal and storage medium - Google Patents

Interactive property control method, device, terminal and storage medium

Info

Publication number
CN110585706A
CN110585706A (application CN201910944049.5A)
Authority
CN
China
Prior art keywords
fragments
interactive prop
interactive
terminal
ejection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910944049.5A
Other languages
Chinese (zh)
Other versions
CN110585706B (en)
Inventor
Liu Zhihong (刘智洪)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910944049.5A priority Critical patent/CN110585706B/en
Publication of CN110585706A publication Critical patent/CN110585706A/en
Application granted granted Critical
Publication of CN110585706B publication Critical patent/CN110585706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an interactive prop control method, apparatus, terminal, and storage medium, belonging to the field of multimedia. When the interactive prop is detected at a target deformation position, it is converted from a complete form into a fragment form in the virtual scene, yielding a plurality of fragments. The fragments are controlled to eject toward a plurality of drop point positions, and when the fragments are detected to reach those positions, they are controlled to deform there. This provides a fresh interactive mode in which the fragments of an interactive prop are ejected outward and then deform, improving the interest and interactive effect of the interaction process the terminal provides to the user and optimizing the user experience.

Description

Interactive property control method, device, terminal and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for controlling an interactive property.
Background
With the development of multimedia technology and the diversification of terminal functions, more and more games can be played on the terminal. Shooting games are especially popular: the terminal displays a virtual scene in its interface and displays a virtual object in that scene, and the virtual object can wield interactive props to fight other virtual objects.
At present, taking an interactive prop with explosive power as an example, the prop is typically controlled as follows: after the user taps an interaction button, the prop is thrown to a certain position, and an explosion effect is displayed when the prop reaches that position.
In this process, the landing point of the interactive prop is easy to predict, so opposing users evade props with explosive power at a high rate. As a result, the shooting game provided by the terminal is low in interest, offers only a single interactive mode, and has a poor interactive effect.
Disclosure of Invention
The embodiments of the present application provide an interactive prop control method, apparatus, terminal, and storage medium, which can address the problems of low interest, a single interactive mode, and poor interactive effect in shooting games provided by the terminal. The technical solution is as follows:
in one aspect, an interactive prop control method is provided, and the method includes:
when the interactive prop is detected at a target deformation position, converting the interactive prop from a complete form into a fragment form in a virtual scene to obtain a plurality of fragments of the interactive prop;
controlling the plurality of fragments to be ejected toward a plurality of drop point positions, with the target deformation position as the starting point;
when the plurality of fragments are detected to have respectively reached the plurality of drop point positions, controlling the fragments to deform at those positions in the virtual scene.
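The three steps above can be sketched in code. This is a minimal illustration, not the patent's implementation; all names (`Fragment`, `on_prop_reaches_deformation_position`, `update`), the random drop-point selection, and the per-frame movement model are assumptions made for the example.

```python
import random
from dataclasses import dataclass


@dataclass
class Fragment:
    position: tuple        # current (x, y, z) in the virtual scene
    drop_point: tuple      # target drop point position
    deformed: bool = False


def on_prop_reaches_deformation_position(deform_pos, num_fragments=5, radius=4.0):
    """Convert the prop from complete form into fragment form: create several
    fragments at the deformation position, each assigned a drop point chosen
    at random within `radius` of that position."""
    fragments = []
    for _ in range(num_fragments):
        dx, dy = random.uniform(-radius, radius), random.uniform(-radius, radius)
        drop = (deform_pos[0] + dx, deform_pos[1] + dy, 0.0)
        fragments.append(Fragment(position=deform_pos, drop_point=drop))
    return fragments


def update(fragments, step=1.0):
    """Per-frame update: move each fragment toward its drop point and mark it
    as deformed (e.g. trigger an explosion effect) on arrival."""
    for f in fragments:
        if f.deformed:
            continue
        fx, fy, fz = f.position
        tx, ty, tz = f.drop_point
        dist = ((tx - fx) ** 2 + (ty - fy) ** 2 + (tz - fz) ** 2) ** 0.5
        if dist <= step:               # reached the drop point
            f.position, f.deformed = f.drop_point, True
        else:                          # advance `step` units along the flight line
            f.position = (fx + (tx - fx) / dist * step,
                          fy + (ty - fy) / dist * step,
                          fz + (tz - fz) / dist * step)
```

Calling `update` once per frame until every fragment reports `deformed` mirrors the detect-arrival-then-deform loop described in the method.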
In one aspect, an interactive prop control apparatus is provided, the apparatus comprising:
the conversion module is used for converting the interactive prop from a complete form into a fragment form in a virtual scene when the interactive prop is detected at a target deformation position, obtaining a plurality of fragments of the interactive prop;
the control ejection module is used for controlling the plurality of fragments to be ejected toward a plurality of drop point positions, with the target deformation position as the starting point;
and the control deformation module is used for controlling the fragments to deform at the plurality of drop point positions in the virtual scene when it is detected that the fragments have respectively reached those positions.
In one possible embodiment, the apparatus is further configured to:
when one or more virtual objects in the virtual scene are detected to be located in the interaction range of any fragment, deducting the interaction attribute values of the one or more virtual objects.
In one possible embodiment, the apparatus is further configured to:
and when it is detected that one or more virtual objects in the virtual scene are located within the interaction range of the interactive prop in its complete form, deducting the interaction attribute values of the one or more virtual objects.
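The range check in the two embodiments above can be sketched as follows. This is an illustrative assumption: the patent does not specify the shape of the interaction range, so a sphere around the prop or fragment is used here, and the function and field names are invented for the example.

```python
def apply_interaction(center, interaction_radius, virtual_objects, deduction):
    """Deduct the interaction attribute value (e.g. hit points) of every
    virtual object located inside the interaction range, modeled here as a
    sphere of `interaction_radius` around the prop or fragment at `center`."""
    hit = []
    for obj in virtual_objects:
        dx = obj["position"][0] - center[0]
        dy = obj["position"][1] - center[1]
        dz = obj["position"][2] - center[2]
        if (dx * dx + dy * dy + dz * dz) ** 0.5 <= interaction_radius:
            obj["attribute_value"] = max(0, obj["attribute_value"] - deduction)
            hit.append(obj)
    return hit
```

The same routine applies whether `center` is the complete-form prop or any single fragment, matching the two embodiments above.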
In one possible embodiment, the apparatus is further configured to:
when the triggering operation of the interactive prop is detected, displaying a predicted throwing track of the interactive prop in the virtual scene, wherein the predicted throwing track takes the current position of the interactive prop as a starting point and the target deformation position as an end point;
and when the throwing operation of the interactive prop is detected, controlling the interactive prop to fly along the predicted throwing track.
In one possible embodiment, the apparatus is further configured to:
when the adjustment operation of at least one of the current position of the interactive prop or the target deformation position is detected, acquiring at least one adjusted target position;
generating an adjusted projected throwing trajectory based on the at least one target location;
updating an original predicted throwing track to the adjusted predicted throwing track in the virtual scene.
In one possible embodiment, the apparatus is a node device in a blockchain system, and when the node device executes an interactive prop control method, interactive data generated in a control process of the interactive prop is uploaded to the blockchain system.
In one aspect, a terminal is provided and includes one or more processors and one or more memories, where at least one program code is stored in the one or more memories, and the at least one program code is loaded by the one or more processors and executed to implement the operations performed by the interactive prop control method according to any of the above possible implementations.
In one aspect, a storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed by the interactive prop control method according to any one of the above possible implementations.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
When the interactive prop is detected at the target deformation position, it is converted from a complete form into a fragment form in the virtual scene to obtain a plurality of fragments. The fragments can then be controlled to eject toward a plurality of drop point positions, with the target deformation position as the starting point, and when the fragments are detected to reach those positions, they are controlled to deform there in the virtual scene. This increases the randomness of where the interactive prop and its fragments deform and improves their hit rate, and it provides a fresh interactive mode in which the fragments of an interactive prop are ejected outward and then deform. The interest and interactive effect of the interaction process the terminal provides to the user are therefore improved, and the user experience is optimized.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below cover only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an interactive prop control method according to an embodiment of the present application;
fig. 2 is a flowchart of an interactive prop control method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a predicted throwing track provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a predicted throwing track provided by an embodiment of the present application;
FIG. 5 is a schematic illustration of a plurality of fragments provided by an embodiment of the present application;
fig. 6 is a schematic diagram of an ejection track provided in an embodiment of the present application;
fig. 7 is a schematic diagram of an ejection process provided by an embodiment of the present application;
FIG. 8 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a method for determining interaction range according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart of a method for controlling an interactive prop according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an interactive prop control apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Hereinafter, terms related to the present application are explained.
Virtual scene: the scene displayed (or provided) by an application program when it runs on the terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. It may be any one of a two-dimensional, a 2.5-dimensional, or a three-dimensional virtual scene; the dimensionality of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and ocean, the land may include environmental elements such as deserts and cities, and a user may control a virtual object to move in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object can be a virtual character, a virtual animal, an animated character, and the like, such as the characters, animals, plants, oil drums, walls, and stones displayed in the virtual scene. The virtual object may be an avatar representing the user in the virtual scene. The virtual scene may include a plurality of virtual objects, each having its own shape and volume and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a Player Character controlled by an operation on the client, an Artificial Intelligence (AI) set in the virtual scene fight by training, or a Non-Player Character (NPC) set in the virtual scene interaction. Alternatively, the virtual object may be a virtual character playing a game in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide, open a parachute, run, jump, climb, bend over, or otherwise move on land, or to swim, float, or dive in the sea. The user may also control a virtual object to move through the virtual scene in a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht; the above scenes are merely examples, and the present application is not limited thereto. The user may also control the virtual object to interact with other virtual objects, for example by fighting, through a virtual weapon. The virtual weapon may be a throwing-type virtual weapon such as a grenade, a cluster mine, or a sticky grenade, or a shooting-type virtual weapon such as a machine gun, a pistol, or a rifle; the type of the virtual weapon is not specifically limited in the present application.
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic diagram of an implementation environment of an interactive prop control method provided in an embodiment of the present application, and referring to fig. 1, the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 has installed, and runs, an application program supporting a virtual scene. The application program may be any one of a first-person shooter (FPS) game, a third-person shooter game, a multiplayer online battle arena (MOBA) game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The first terminal 120 may be a terminal used by a first user, who uses the first terminal 120 to operate a first virtual object located in the virtual scene, with activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated person or an animated character.
The first terminal 120 and the second terminal 160 are connected to the server 140 through a wireless network or a wired network.
The server 140 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 140 is used to provide background services for applications that support virtual scenes. Alternatively, the server 140 may undertake the primary computing work while the first terminal 120 and the second terminal 160 undertake the secondary computing work; or the server 140 undertakes the secondary computing work while the first terminal 120 and the second terminal 160 undertake the primary computing work; or the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing using a distributed computing architecture.
The second terminal 160 has installed, and runs, an application program supporting a virtual scene. The application program can be any one of an FPS, a third-person shooter game, a MOBA, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The second terminal 160 may be a terminal used by a second user, who uses the second terminal 160 to operate a second virtual object located in the virtual scene, with activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated person or an animated character.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 160 are in the same virtual scene, and the first virtual object may interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship, for example, the first virtual object and the second virtual object may belong to different teams and organizations, and the hostile virtual objects may interact with each other in a mutual shooting manner on land.
In other embodiments, the first virtual object and the second virtual object may be in a teammate relationship, for example, the first virtual character and the second virtual character may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 120 and the second terminal 160 may each refer generally to one of a plurality of terminals; this embodiment is illustrated with only these two. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. For example, the first terminal 120 and the second terminal 160 may be smartphones or other handheld portable gaming devices. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
In an exemplary scenario, the first terminal 120 controls the first virtual object to throw the interactive prop toward the target deformation position. When the interactive prop flies to the target deformation position, the first terminal 120 controls it to convert from a complete form into a fragment form, typically accompanied by the display of an explosion effect in the virtual scene, which strikes one or more second virtual objects within a certain interaction range and deducts their interaction attribute values. The interactive prop is any deformable virtual prop; for example, it may be a throwing-type virtual weapon such as a grenade, a cluster mine, or a sticky grenade.
In the related art, the landing point (i.e., the target deformation position) of the interactive prop in flight is easily predicted by the user controlling the second virtual object, so the second virtual object can evade the interactive prop at a high rate. In other words, the hit rate of the interactive prop itself is low, which makes the shooting game provided by the terminal low in interest, single in interactive mode, and poor in interactive effect.
In view of this, the embodiment of the present application provides an interactive prop control method, which can enhance the interest of the shooting game provided by the terminal, enrich the interactive manner of the shooting game, and optimize the interactive effect of the shooting game, and will be described in detail in the next embodiment.
In an exemplary scenario, the implementation environment may be built on a blockchain system. A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each block contains a batch of network transaction information used to verify the validity (tamper resistance) of the information and to generate the next block.
In some embodiments, the first terminal 120 and the second terminal 160 may be node devices in a blockchain system, so that whenever any node device performs a control operation on an interactive prop through the application program and generates interactive data, that data can be uploaded to the blockchain system for persistent storage. The interactive data may include the throwing time and throwing position of each interactive prop, the modification time of each virtual object's interaction attribute value, the values before and after modification, and so on. The interactive data reflect the fighting record of each virtual object during the interaction, and because the blockchain system is tamper-proof, storing the data there offers higher security.
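The hash-linked, tamper-evident storage described above can be illustrated with a toy chain. This is a hedged sketch only: a real blockchain system adds consensus, peer-to-peer transmission, and signatures, and the record fields below are invented examples of the "interactive data" the patent mentions.

```python
import hashlib
import json
import time


def make_block(prev_hash, interactive_data):
    """Package one batch of interactive data (throw time/position, attribute
    value changes, ...) into a block linked to its predecessor by hash."""
    block = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "data": interactive_data,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block


def verify_chain(chain):
    """Check that every block's stored hash matches its contents and links to
    the previous block, i.e. the fight record has not been tampered with."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Any later edit to a stored record changes its hash and breaks `verify_chain`, which is the property that makes the stored interactive data "higher security" in the sense described above.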
Fig. 2 is a flowchart of an interactive prop control method according to an embodiment of the present application. Referring to fig. 2, the embodiment is exemplified by applying the method to a terminal, which may be the first terminal 120 or the second terminal 160 shown in fig. 1, and includes the following steps:
201. When the terminal receives a creation instruction, the terminal creates the interactive prop.
The interactive prop is any deformable virtual prop: when it reaches the target deformation position, it converts from a complete form into a fragment form, possibly accompanied by a deformation effect (usually an explosion effect). For example, the interactive prop may be a throwing-type virtual weapon such as a grenade, a cluster mine, or a sticky grenade; the embodiment of the present application does not specifically limit its type.
In the above process, the creation instruction instructs the terminal to create the interactive prop. In some embodiments, the creation instruction may be triggered by a user operation. For example, the terminal may display a call control in the virtual scene; when the user wants to call the interactive prop, the user performs a trigger operation on the call control, and the terminal receives the trigger signal, generates a creation instruction, and creates the interactive prop. The call control is used to call the interactive prop into the virtual scene and may take the form of a button displayed floating in the virtual scene.
202. The terminal displays the interactive prop in the virtual scene.
After the terminal creates the interactive prop, it can display the prop in the virtual scene. Specifically, the terminal may display the interactive prop on a target part of the virtual object, showing that the virtual object controls the prop; the target part may be, for example, a hand, a shoulder, or the back.
In one possible implementation, the terminal may further display a throwing control on the graphical user interface, where the throwing control detects the user's throwing operation so that the virtual object can be controlled to throw the interactive prop.
Steps 201 and 202 describe the terminal displaying the interactive prop in response to a call operation. This is only one exemplary trigger condition; the display of the interactive prop can also be triggered in other ways.
For example, in another trigger condition, the interactive prop may be displayed on the ground or on a virtual article in the virtual scene. When the distance between the virtual object corresponding to the terminal and the interactive prop is smaller than a first target threshold, a pickup option for the interactive prop is displayed in the virtual scene. When a trigger operation on the pickup option is detected, the terminal controls the virtual object to pick up the interactive prop; after pickup, the interactive prop is displayed on the target part of the virtual object, indicating that the virtual object is equipped with it.
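The distance check gating the pickup option can be sketched in a few lines. The function name and the specific distance metric (Euclidean) are assumptions; the patent only states that the option appears when the distance falls below the first target threshold.

```python
def should_show_pickup_option(object_pos, prop_pos, first_target_threshold):
    """Show the pickup option only when the distance between the virtual
    object and the interactive prop is below the first target threshold."""
    dist = sum((a - b) ** 2 for a, b in zip(object_pos, prop_pos)) ** 0.5
    return dist < first_target_threshold
```

This check would typically run each frame against the position of every prop lying in the scene.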
203. When the triggering operation of the interactive prop is detected, the terminal displays a predicted throwing track of the interactive prop in the virtual scene, wherein the predicted throwing track takes the current position of the interactive prop as a starting point and takes the target deformation position as an end point.
In the process, the terminal can display the throwing option of the interactive prop in the virtual scene, and when the triggering operation of the user on the throwing option is detected, the current position and the target deformation position of the interactive prop are obtained, so that the predicted throwing track of the interactive prop is established by taking the current position of the interactive prop as a starting point and the target deformation position as an end point.
Optionally, the triggering operation may be that a user touches a terminal screen, or that a user inputs a voice instruction, and the embodiment of the present application does not specifically limit the type of the triggering operation.
Optionally, the predicted throwing track may be a straight line, a parabola, a spiral, a slow-motion curve, and so on; different interactive props may have different types of predicted throwing tracks, and the embodiment of the present application does not specifically limit the type. For example, the terminal may model the predicted throwing track to obtain a dynamic model of the interactive prop; inputting the start point coordinates and the end point coordinates into the dynamic model then outputs the predicted throwing track. If the dynamic model considers only the influence of gravitational acceleration, the output track is a parabola.
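The gravity-only dynamic model mentioned above can be worked through concretely: given a start point, an end point, and an assumed flight time, solve for the initial velocity and sample the resulting parabola for display. This is an illustrative sketch in two dimensions; the function name, the fixed flight time, and the sampling scheme are all assumptions.

```python
G = 9.8  # gravitational acceleration, the only force in this simple model


def predicted_throw_trajectory(start, end, flight_time=1.0, samples=20):
    """Solve for the initial velocity that carries the prop from `start` to
    `end` in `flight_time` under gravity alone, then sample the parabola."""
    (x0, y0), (x1, y1) = start, end          # y is the vertical axis
    vx = (x1 - x0) / flight_time
    # y1 = y0 + vy*t - 0.5*G*t^2  =>  vy = (y1 - y0 + 0.5*G*t^2) / t
    vy = (y1 - y0 + 0.5 * G * flight_time ** 2) / flight_time
    points = []
    for i in range(samples + 1):
        t = flight_time * i / samples
        points.append((x0 + vx * t, y0 + vy * t - 0.5 * G * t * t))
    return points
```

The sampled points would be drawn in the virtual scene as the predicted throwing track, from the prop's current position to the target deformation position.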
In a possible implementation, the target deformation position can be the crosshair position of the interactive prop. After the current position of the interactive prop is obtained, the crosshair can be aimed automatically and the predicted throwing track constructed with the crosshair position as the target deformation position, which simplifies the user's operations.
In a possible implementation, besides the crosshair position, the target deformation position may be any position the user designates in the virtual scene. For example, when a trigger operation on the interactive prop is detected, the terminal displays a prompt message in the virtual scene instructing the user to tap the desired target deformation position of the prop to be thrown. The user taps that position on the terminal screen, and the terminal takes the screen coordinates of the touch point as the screen coordinates of the target deformation position. The target deformation position of the interactive prop can thus be designated manually, giving the user precise control over it.
Fig. 3 is a schematic diagram of a predicted throwing track provided in an embodiment of the present application. Referring to fig. 3, taking a cluster mine as the interactive prop, when the terminal detects the user's trigger operation on the cluster mine, it displays the predicted throwing track 301 of the cluster mine in the virtual scene 300, where the track 301 starts from the current position 302 of the cluster mine (the hand of the virtual object controlled by the user) and points to the target deformation position 303.
In some embodiments, the user may also adjust the current position or the target deformation position of the interactive prop, thereby implementing the adjustment of the predicted throwing track. When the terminal detects that the user adjusts at least one of the current position or the target deformation position of the interactive prop, the adjusted at least one target position is obtained, and the terminal generates an adjusted predicted throwing track based on the at least one target position, so that the original predicted throwing track is updated to the adjusted predicted throwing track in the virtual scene.
Illustratively, because the interactive prop is usually bound with the position of the virtual object before being thrown, the user can control the virtual object to displace through the terminal, adjust the current position of the interactive prop, and determine the adjusted current position as a target position.
Illustratively, supposing that the target deformation position is the sight (crosshair) position of the interactive prop, the user can adjust the sight of the interactive prop through pressing, dragging, and other operations; after the sight adjustment is finished, the terminal can automatically determine the adjusted sight position as a target position.
Further, when the adjusted predicted throwing track is generated according to at least one target position, if the target position only includes the adjusted current position, the terminal may generate the adjusted predicted throwing track by using the adjusted current position as a starting point and using the original target deformation position as an end point; if the target position only comprises the adjusted target deformation position, the terminal can generate an adjusted predicted throwing track by taking the original current position as a starting point and the adjusted target deformation position as an end point; if the target position includes the adjusted current position and the adjusted target deformation position, the terminal may generate the adjusted predicted throwing track with the adjusted current position as a starting point and the adjusted target deformation position as an ending point.
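The endpoint-selection logic above can be sketched as follows — a minimal Python illustration, where the function name and the (x, y, z) position tuples are hypothetical rather than taken from the patent's implementation:

```python
def adjusted_trajectory(orig_start, orig_end, new_start=None, new_end=None):
    """Pick the endpoints of the adjusted predicted throwing track.

    An adjusted position, when present, overrides the original one;
    otherwise the original starting point or end point is kept.
    Positions are illustrative (x, y, z) tuples.
    """
    start = new_start if new_start is not None else orig_start
    end = new_end if new_end is not None else orig_end
    return start, end
```

If only the target deformation position was adjusted, for example, the original current position is kept as the starting point, matching the second case described above.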
Fig. 4 is a schematic diagram of a predicted throwing track provided in an embodiment of the present application. Referring to Fig. 4 and continuing the above example, suppose the user adjusts at least one of the current position 302 or the target deformation position 303 of the bundled lightning, for example, moving the current position from position 302 on the side away from the large tree to position 401 close to the large tree, and moving the target deformation position from the floor at 303 to the wall at 402. The terminal then generates an adjusted predicted throwing track 403 with 401 as the starting point and 402 as the end point, deletes the original predicted throwing track 301 in the virtual scene 300, and simultaneously draws the adjusted predicted throwing track 403.
204. When the throwing operation of the interactive prop is detected, the terminal controls the interactive prop to fly along the expected throwing track.
In the above process, the terminal may determine the displacement of the interactive prop between adjacent image frames according to the predicted throwing track, so as to draw the position of the interactive prop displayed in the virtual scene from the first image frame by frame according to the displacement between adjacent image frames until the interactive prop is located in the target deformation position, that is, display of the flight process of the interactive prop may be presented, and when the interactive prop flies to the target deformation position, the following step 205 is executed.
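The frame-by-frame drawing described above amounts to computing the displacement between adjacent image frames from positions sampled along the predicted throwing track; a minimal sketch, assuming the track has already been sampled into one position per image frame (the sampling itself is done elsewhere):

```python
def per_frame_displacements(track):
    """Given positions sampled along the predicted throwing track
    (one per image frame), return the displacement of the interactive
    prop between each pair of adjacent frames."""
    return [tuple(b[i] - a[i] for i in range(3))
            for a, b in zip(track, track[1:])]
```

Adding each displacement in turn to the previously drawn position reproduces the flight of the prop from the first frame until it reaches the target deformation position.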
205. When the interactive prop is detected to be located at the target deformation position, the terminal decomposes the interactive prop into a plurality of original fragments in the virtual scene, and the fragments are randomly determined from the original fragments.
Optionally, the terminal may determine all the original fragments as the plurality of fragments, or may randomly extract a part of the original fragments and determine that part as the plurality of fragments; the embodiment of the present application does not specifically limit the selection rule. For example, the terminal decomposes the interactive prop into 30 original fragments and randomly extracts 5 of them as the plurality of fragments.
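The random extraction in the example (5 fragments out of 30 originals) can be sketched with the standard library; the function name and parameters are illustrative:

```python
import random

def select_fragments(original_fragments, k=None):
    """Determine the plurality of fragments: either all original
    fragments (when k is None) or a random sample of k of them."""
    if k is None or k >= len(original_fragments):
        return list(original_fragments)
    return random.sample(original_fragments, k)
```

With `k=None` the first selection rule (all originals) applies; with `k=5` the second rule from the example applies.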
In the process, when the interactive prop is detected to be located at the target deformation position, the terminal equivalently converts the interactive prop from a complete form into a fragment form in a virtual scene, and a plurality of fragments of the interactive prop are obtained.
In some embodiments, along with converting the interactive prop into the fragment form, the terminal may further play a deformation animation of the interactive prop. Specifically, the terminal may use the prop identifier of the interactive prop as an index and query the database for index content corresponding to that index; when the index hits any index content, the terminal obtains the deformation animation stored in the index content and invokes the playback control to play it. It should be noted that different interactive props may have the same deformation animation or different deformation animations.
Fig. 5 is a schematic diagram of a plurality of fragments according to an embodiment of the present application. Referring to Fig. 5, when the interactive prop reaches the target deformation position, it is decomposed into a plurality of original fragments 501; the terminal randomly selects a plurality of fragments 502 from the original fragments 501 and can control the fragments 502 to fly in all directions through step 207, whose specific control process will be described in detail below.
206. And when detecting that one or more virtual objects in the virtual scene are positioned in the interaction range of the interactive props in the complete form, deducting the interaction attribute values of the one or more virtual objects by the terminal.
In the above process, when the interactive prop deforms, fire attacks are usually caused on virtual objects around the interactive prop, so if it is detected that one or more virtual objects are located in an interaction range of the interactive prop in a complete form, interaction attribute values of the one or more virtual objects can be deducted, where the interaction attribute values are used to represent a fighting state of the virtual objects when the virtual objects participate in interaction, for example, the interaction attribute values may be a virtual blood volume, a virtual life value, a virtual essence value, and the like.
Optionally, the interaction range of the interactive prop may be at least one of a spherical space, a square space, a cylindrical space, or a conical space, different interactive props may have interaction ranges of different shapes, and the shape of the interaction range of the interactive prop is not specifically limited in this embodiment of the application.
In a possible implementation manner, the terminal may set a series of levels from low to high for the interactive prop: the higher the level of the interactive prop, the larger the interaction range and the greater the power of the deformation effect; the lower the level, the smaller the interaction range and the smaller the power of the deformation effect.
Illustratively, taking the interaction range as a spherical space as an example, when the interactive prop flies to the target deformation position, the terminal obtains the distance between each virtual object in the virtual scene and the target deformation position; for any virtual object (there may be one or more) whose distance is smaller than the radius of the spherical space, the terminal determines that the virtual object is located in the interaction range of the interactive prop in the complete form, and deducts the interaction attribute value of that virtual object.
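A minimal sketch of the spherical-range check described above, under the assumption that positions are (x, y, z) tuples and virtual objects are plain dictionaries (both illustrative, not the patent's data structures):

```python
import math

def objects_in_range(virtual_objects, center, radius):
    """Return the virtual objects whose distance to the target
    deformation position is smaller than the radius of the
    spherical interaction range."""
    def distance(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [obj for obj in virtual_objects
            if distance(obj["pos"], center) < radius]
```

Each object returned would then have its interaction attribute value deducted.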
207. And the terminal controls the fragments to respectively eject to a plurality of drop point positions by taking the target deformation position as a starting point.
In the foregoing process, the terminal may determine respective ejection tracks for the respective fragments, so as to control the respective fragments to eject according to the respective ejection tracks, which will be described below by taking any one of the fragments as an example, and specifically may include the following sub-steps:
2071. and for any fragment in the plurality of fragments, the terminal acquires the ejection direction and the ejection speed of the fragment.
In the above process, the terminal may randomly distribute the ejection directions for the respective fragments, for example, for any one of the fragments, a ray is randomly led out to a certain position from the target deformation position, and the direction pointed by the ray is determined as the ejection direction of the fragment.
In the above process, for fragments belonging to the same interactive prop, the terminal may configure the same ejection rate for each fragment of the interactive prop, and specifically, the terminal may query a list item corresponding to the index in a cached ejection rate list by using the prop identifier of the interactive prop as the index, so that when the index can hit any list item, a numerical value stored in the list item is determined as the ejection rate of each fragment of the interactive prop. It should be noted that if the fragments are subjected to variable speed motion during ejection, the fragments may have the same ejection speed only during initial ejection, and the ejection speed of each fragment may be changed differently according to different ejection tracks.
Of course, in some embodiments, the terminal may also configure different ejection rates for each fragment of the interactive prop, for example, the terminal may randomly allocate different ejection rates for each fragment within a pre-stored rate interval.
In the above process, after determining the ejection direction and the ejection rate, the ejection direction is used as the direction of the initial velocity, the ejection rate is used as the rate of the initial velocity, and an initial velocity vector is synthesized, so that the following step 2072 is performed.
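Synthesizing the initial velocity vector from a random ejection direction and a configured ejection rate can be sketched as follows; the uniform-on-the-sphere sampling is one possible way to "randomly lead out a ray", assumed here rather than specified by the patent:

```python
import math
import random

def initial_velocity(ejection_rate):
    """Pick a random ejection direction (a unit vector, uniformly
    distributed on the sphere) and scale it by the ejection rate to
    obtain the fragment's initial velocity vector."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    z = random.uniform(-1.0, 1.0)
    r = math.sqrt(1.0 - z * z)
    direction = (r * math.cos(theta), r * math.sin(theta), z)
    return tuple(ejection_rate * c for c in direction)
```

The magnitude of the resulting vector equals the ejection rate, and its direction is the fragment's ejection direction, as described above.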
2072. And the terminal determines the ejection track of the fragment according to the target deformation position, the ejection direction and the ejection speed, wherein the ejection track takes the target deformation position as a starting point.
In this process, the terminal may first use the target deformation position as the starting point of the ejection track, then synthesize the initial velocity vector of the fragment from the ejection direction and the ejection rate (the direction of the initial velocity vector coincides with the ejection direction, and its magnitude equals the ejection rate), and further substitute the starting point coordinate and the initial velocity vector into the dynamic model of the fragment, so that the dynamic model outputs the ejection track of the fragment.
The movement curve of the fragment may be any one of a straight line, a parabola, a spiral, or an easing curve, and the type of the movement curve of the fragment is not specifically limited in the embodiment of the present application.
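For the parabolic case, the "dynamic model" reduces to ballistic motion; a sketch under the assumed convention that gravity acts along the negative z-axis (function name and convention are illustrative):

```python
def ballistic_position(start, v0, t, g=9.8):
    """Point on a parabolic ejection track at time t, given the
    starting point (the target deformation position) and the
    fragment's initial velocity vector v0."""
    return (start[0] + v0[0] * t,
            start[1] + v0[1] * t,
            start[2] + v0[2] * t - 0.5 * g * t * t)
```

Evaluating this at successive frame times yields the per-frame positions used to draw the fragment's flight.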
Fig. 6 is a schematic diagram of an ejection track provided in an embodiment of the present application. Referring to Fig. 6, the terminal allocates a different ejection direction to each fragment with the target deformation position as the starting point, while assigning the same ejection rate to each fragment, so that a plurality of outward-radiating ejection tracks can be obtained, and each fragment can fly along its respective ejection track in the subsequent process.
2073. The terminal determines any position on the ejection track as the position of the drop point of the fragment, and controls the fragment to eject to the drop point along the ejection track.
In some embodiments, when determining the position of the drop point, the terminal may determine a virtual article intersecting the ejection track in the virtual scene, and determine the intersection point of the ejection track and the virtual article as the position of the drop point. In this way, an interaction effect in which the fragments collide with virtual articles after scattering can be presented, the collision produces the secondary deformation, and the fidelity of the fragment deformation in the virtual scene is improved. The virtual article may be the ground, a building wall, a tree, or the like, and the embodiment of the present application does not specifically limit which virtual article is used.
In some embodiments, the terminal can also determine any position on the ejection track before the intersection point as the position of the drop point of the fragment, so that an interactive effect that the fragment generates deformation during ejection can be presented, the randomness of the fragment deformation in a virtual scene is improved, and the secondary deformation of the fragment is more difficult to avoid.
In the above process, the process of controlling ejection of the fragment by the terminal is similar to the process of controlling flight of the interactive prop in the above step 204, that is, the terminal determines displacement of the fragment between adjacent image frames according to the ejection track, so as to draw the position of the fragment displayed in the virtual scene from the first image frame by frame according to the displacement between the adjacent image frames until the last image frame corresponding to the fragment located at the drop point position.
It should be noted that, for each of the fragments, the terminal may repeatedly perform the above steps 2071 to 2073, so as to obtain the ejection tracks of all the fragments and control each fragment to eject to its respective drop point position along its respective ejection track.
Fig. 7 is a schematic diagram of an ejection process provided in an embodiment of the present application, referring to fig. 7, after the terminal generates respective ejection tracks for the multiple fragments, the terminal controls the fragments to eject along the respective ejection tracks, so that an effect that the multiple fragments fly along a parabola from the same starting point (target deformation position) to all directions simultaneously can be presented, and a transition from deformation of the interactive prop to deformation of the fragments is more natural.
208. When the fragments are detected to respectively reach the drop point positions, the terminal controls the fragments to respectively deform at the drop point positions in the virtual scene.
In the above process, for any one of the plurality of fragments, when it is detected that the fragment reaches the position of the drop point corresponding to the fragment, the terminal may control the fragment to switch from the first form to a second form, which is different from the first form, for example, the first form may be a fragment form, and the second form may be a dust form.
The process of the fragment deformation in step 208 is similar to the process of the interactive prop deformation in step 205, that is, the terminal may decompose each fragment into a plurality of dusts, and simultaneously display the explosion animation of each fragment in the virtual scene, where different fragments may have the same explosion animation or different explosion animations, and details are not repeated here.
In this process, after the terminal controls the virtual object to throw the interactive prop, the interactive prop first deforms once upon reaching the target deformation position, and each fragment generated by the interactive prop then deforms a second time upon reaching its respective drop point position. This improves the probability that the interactive prop hits a virtual object during the throwing process, enriches the interaction modes the terminal provides to the user, increases the interest the terminal brings to the user, and optimizes the user experience.
Fig. 8 is an interface schematic diagram provided in an embodiment of the present application. Referring to Fig. 8, because the ejection rate of each fragment of the interactive prop is usually fast, the deformation animation of the interactive prop and the explosion animations of the fragments may be displayed in the virtual scene almost simultaneously. It can be seen that after the interactive prop is thrown, a plurality of explosion points exist in the virtual scene, including not only the target deformation position of the interactive prop but also the drop point position of each fragment, so that a richer visual effect can be presented.
209. And when detecting that one or more virtual objects in the virtual scene are positioned in the interaction range of any fragment, the terminal deducts the interaction attribute values of the one or more virtual objects.
In the above process, since each fragment usually causes fire impact to the virtual objects around each fragment when the fragment is deformed, if it is detected that one or more virtual objects are located in the interaction range of each fragment, the interaction attribute values of the one or more virtual objects may be subtracted, where the interaction attribute values are used to represent the fighting state of the virtual objects when the virtual objects participate in interaction, and for example, the interaction attribute values may be virtual blood volume, virtual life value, virtual spirit value, and the like.
Optionally, the interaction range of each fragment may be at least one of a spherical space, a square space, a cylindrical space, or a conical space, and different fragments may have interaction ranges of different shapes, for example, interaction ranges of different shapes are randomly determined for different fragments, and the shape of the interaction range of each fragment is not specifically limited in the embodiments of the present application.
In a possible implementation manner, the terminal may determine the interaction range of each fragment according to the volume of each fragment, and when the volume of the fragment is larger, the interaction range may be larger, that is, the power of the deformation effect is larger, and when the volume of the fragment is smaller, the interaction range may be smaller, that is, the power of the deformation effect is smaller.
Illustratively, taking the interaction range as a spherical space as an example, for any fragment, when the fragment is ejected to its corresponding drop point position, the terminal obtains the distance between each virtual object in the virtual scene and the drop point position; for any virtual object (there may be one or more) whose distance is smaller than the radius of the spherical space, the terminal determines that the virtual object is located in the interaction range of the fragment, and deducts the interaction attribute value of that virtual object.
In some embodiments, for each virtual object falling within the interaction range of any fragment, a value to be subtracted of the virtual object may be determined according to a distance between a position of the virtual object and a position of a drop point, where the distance between the position of the virtual object and the position of the drop point may be in a negative correlation with the value to be subtracted, for example, when the virtual object is closer to the position of the drop point, a value of the value to be subtracted is larger, and when the virtual object is farther from the position of the drop point, the value of the value to be subtracted is smaller, where the negative correlation may be a linear relationship or a nonlinear relationship.
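The negative correlation described above, in its linear form, can be sketched as follows (names are illustrative; the nonlinear case would simply use a different falloff function):

```python
def value_to_deduct(distance, radius, max_value):
    """Linear falloff: the full value is deducted at the drop point
    itself, decreasing to zero at the edge of the spherical
    interaction range."""
    if distance >= radius:
        return 0.0
    return max_value * (1.0 - distance / radius)
```

The closer the virtual object is to the drop point, the larger the value to be deducted, exactly as described above.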
Fig. 9 is a schematic diagram of determining an interaction range according to an embodiment of the present application. Referring to Fig. 9, assume that the interaction range of the interactive prop and the interaction range of each fragment are spherical spaces, point A is the target deformation position of the interactive prop, and points B to E are the four drop point positions corresponding to the four fragments generated by the interactive prop. Five spherical spaces centered on points A to E can thus be determined: the spherical space centered on point A is the first explosion range, and the spherical spaces centered on points B to E are the second explosion ranges. For the virtual objects represented by the symbol "x", it can be seen that two virtual objects are located in the spherical space centered on point E, so the interaction attribute values of those two objects are deducted; the spherical space centered on point C does not contain any virtual object, so the fragment corresponding to point C does not attack any virtual object.
The virtual objects in step 209 may be entirely the same as, partially the same as, or completely different from those in step 206, and each may be in an enemy or teammate relationship with the virtual object controlled by the terminal.
In a possible implementation manner, the interactive prop control method may be applied to a node device of a blockchain system, that is, the terminal may be any node device in the blockchain system. After the node device executes the interactive prop control method, the interactive data generated in the control process of the interactive prop can be uploaded to the blockchain system, so that persistent storage of the interactive data is realized on the blockchain system.
The interactive data may include at least one of a throwing time and a target deformation position of each interactive prop, an ejection track and deformation time of each fragment, or a change time of an interactive attribute value of each virtual object and a value before and after the change. The interactive data can reflect the fighting records of each virtual object in the interactive process, and the interactive data is stored in the block chain system due to the non-tamper property of the block chain system, so that the interactive data is stored with higher safety.
Optionally, the process of uploading the interactive data may include the following steps: the node device (i.e., the terminal) generates a block according to the interactive data and broadcasts the block in the blockchain system; after receiving the block sent by the node device, the other node devices (i.e., any device except the terminal) on the blockchain system verify the block, and add the block to the blockchain after the block passes the consensus of the blockchain system, which is not described in detail herein.
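Packing the interactive data into a block can be sketched as follows; this is a minimal hash-chained record for illustration only, omitting the broadcast and consensus steps that a real node device performs:

```python
import hashlib
import json

def make_block(interactive_data, prev_hash):
    """Pack interactive data (throwing times, ejection tracks, etc.)
    into a block whose hash chains it to the previous block."""
    body = json.dumps(interactive_data, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()
    return {"prev_hash": prev_hash, "data": interactive_data, "hash": block_hash}
```

Because each block's hash covers the previous block's hash, altering stored interactive data would invalidate every later block, which is the tamper-resistance property relied on above.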
The method provided by the embodiment of the application converts the interactive prop from the complete form into the fragment form in the virtual scene when detecting that the interactive prop is located at the target deformation position, obtaining a plurality of fragments of the interactive prop. The fragments can then be controlled to eject to a plurality of drop point positions with the target deformation position as the starting point, and when the fragments are detected to reach their respective drop point positions, they are controlled to deform at those positions in the virtual scene. This improves the randomness of the deformation of the interactive prop and its fragments, improves their hit rate, provides a fresh interaction mode in which the fragments of the interactive prop eject outward and deform, enhances the interest and interaction effect of the interactive process the terminal provides to the user, and optimizes the user experience.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
By executing the method provided by the embodiment, the interactive prop can be thrown once while producing two deformation effects: the deformation of the interactive prop itself and the deformation of its fragments. Referring to Fig. 10, Fig. 10 is a schematic flowchart of an interactive prop control method provided in the embodiment of the present application. Taking the interactive prop as a cluster mine as an example, the flowchart includes the following steps:
Step one, the user switches to the cluster mine in the game application.
For example, the user logs in to a game application on the terminal, and the terminal displays a virtual scene in the game application. The user clicks the summon control of the cluster mine in the virtual scene; when the terminal detects the click operation on the summon control, the terminal generates a creation instruction, creates the cluster mine, and displays it in the hand of the virtual object controlled by the user in the virtual scene.
And step two, the terminal judges whether the user presses the firing key, if the user presses the firing key, the step three is executed, and if the user does not press the firing key, the step one is returned.
Wherein, the firing key is also the throwing option of the cluster mine.
And step three, displaying a throwing line by the terminal.
Wherein, the throwing line is the predicted throwing track of the cluster mine.
For example, when a user presses a firing key of the cluster mine for a long time, the terminal displays a throwing line in the virtual scene, which indicates that the cluster mine is to be thrown according to the throwing line, and the end point of the throwing line is the landing point (i.e. the target deformation position) of the cluster mine.
And step four, the terminal judges whether the user slides the screen or rotates the picture, if the user slides the screen or rotates the picture, the step five is executed, and if the user does not slide the screen or rotate the picture, the step three is returned.
For example, during the process that a user presses a firing key for a long time, the landing point of the cluster mine can be changed through the operation of sliding a screen or rotating a picture, so that the adjustment of a throwing line is realized.
And step five, the terminal changes the throwing line direction and the landing point.
For example, the terminal generates an adjusted throwing line based on the original current position and the adjusted landing point, and updates the original throwing line to the adjusted throwing line in the virtual scene.
And step six, the terminal judges whether the user has released the firing key; if so, step seven is executed, and if not, the process returns to step five.
That is, while the user keeps pressing the firing key, the throwing line of the cluster mine can be adjusted; once the user releases the key, the throwing operation on the cluster mine is considered triggered. For example, when no pressing operation on the firing key is detected for a period exceeding the target duration, the terminal determines that the user has released the key; otherwise, the terminal determines that the user has not released it. The target duration is any value greater than or equal to 0.
And seventhly, the terminal controls the cluster mine to fly along a throwing line.
That is, the terminal controls the bundled mine to be thrown from the hand of the virtual object and to fly along a throw line, which may be a parabola, for example.
And step eight, the terminal judges whether the cluster mine falls on the ground or not, if the cluster mine falls on the ground, the step nine is executed, and if the cluster mine does not fall on the ground, the step seven is returned.
In this embodiment, the description will be given taking an example in which the target deformation position of the cluster mine is located on the ground.
And step nine, the terminal controls the cluster mine to explode to obtain a plurality of fragments of the cluster mine.
That is, the terminal controls the cluster mine to convert from the complete form into the fragment form at the landing point, obtaining a plurality of fragments of the cluster mine.
And step ten, the terminal judges whether an enemy (namely an enemy virtual object) is positioned in the explosion range of the cluster mine, if the enemy virtual object is positioned in the explosion range of the cluster mine, the step eleven is executed, and if the enemy virtual object is not positioned in the explosion range of the cluster mine, the step twelve is executed.
Wherein, the explosion range of the cluster mine is also the interaction range of the interactive prop.
And step eleven, the terminal calculates the damage of the cluster mine to the enemy virtual objects, and step twelve is executed.
For example, the terminal determines a value to be deducted for the interaction attribute value of each hostile virtual object.
And step twelve, the terminal controls the plurality of fragments of the cluster mine to fly to all places.
For example, the terminal controls each fragment of the cluster mine to eject to a respective landing point position along a respective ejection track.
Step thirteen, the terminal judges whether the plurality of fragments of the cluster mine have fallen to the ground; if all the fragments have landed, step fourteen is executed, and if any fragment has not landed, the process returns to step twelve.
In this embodiment, a virtual article intersected by the ejection tracks of the respective fragments is taken as a ground, so that the landing positions of the respective fragments can be located on the ground of the virtual scene.
And step fourteen, the terminal controls a plurality of fragments of the cluster mine to generate secondary explosion, and the damage of the plurality of fragments to nearby enemy virtual objects is calculated.
That is to say, the terminal controls each fragment of the cluster mine to deform at its respective landing point position, and calculates the value to be deducted from the interaction attribute value of each virtual object within each fragment's interaction range. After summarizing the damage caused to each enemy virtual object, the terminal can generate a damage value list and send it to the server.
Optionally, the terminal may send the damage value list of the cluster mine body to the server once after step eleven is executed, and send the damage value list of the cluster mine fragments to the server again after step fourteen is executed, which reduces the delay in deducting the interaction attribute values of the enemy virtual objects.
Of course, after step eleven is executed, the terminal may instead temporarily withhold the damage value list of the cluster mine body; after step fourteen is executed, the terminal summarizes the damage value list of the cluster mine body and the damage value list of the cluster mine fragments, sums the damage values for each enemy virtual object that suffered multiple fire attacks, generates a target damage value list, and sends the target damage value list to the server, thereby helping the server share the calculation pressure.
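Summing the body-explosion and fragment-explosion damage into a target damage value list can be sketched as follows (the dictionary structure of the lists is an assumption for illustration):

```python
from collections import defaultdict

def merge_damage_lists(*damage_lists):
    """Sum, for each enemy virtual object, the damage values from
    the body explosion and the fragment explosions into one
    target damage value list."""
    total = defaultdict(int)
    for damage_list in damage_lists:
        for enemy_id, damage in damage_list.items():
            total[enemy_id] += damage
    return dict(total)
```

An enemy hit by both the body explosion and a fragment explosion ends up with the summed damage, which is exactly the aggregation sent to the server in the second variant above.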
This embodiment provides an interaction mode of throwing a cluster mine and triggering a secondary explosion. With a single throw, the user controls the cluster mine to produce a primary explosion when it lands and to convert into a plurality of fragments; the fragments fly outward along their respective ejection tracks, centered on the explosion point of the primary explosion, presenting a visual effect similar to the cluster mine being decomposed. When each fragment lands, it produces a secondary explosion, so that the secondary explosion has a plurality of explosion points. This random-area-explosion interaction mode increases the uncertainty of interactive props such as the cluster mine during interaction, increases the difficulty for enemy virtual objects to escape the cluster mine's attack, and brings more interactive fun to the user who uses the cluster mine.
Fig. 11 is a schematic structural diagram of an interactive prop control device according to an embodiment of the present application. Referring to fig. 11, the apparatus includes:
a conversion module 1101, configured to, when it is detected that the interactive prop is located at the target deformation position, convert the interactive prop from a complete form to a fragment form in a virtual scene, so as to obtain multiple fragments of the interactive prop;
the control ejection module 1102 is configured to control the multiple fragments to respectively eject to multiple drop point positions with the target deformation position as a starting point;
a deformation control module 1103, configured to control, when it is detected that the plurality of fragments respectively reach the plurality of drop point positions, the plurality of fragments to respectively deform at the plurality of drop point positions in the virtual scene.
With the device provided by this embodiment of the application, when the interactive prop is detected at the target deformation position, it is converted from a complete form into a fragment form in the virtual scene to obtain a plurality of fragments of the interactive prop. The fragments can then be controlled to eject to a plurality of drop point positions, with the target deformation position as the starting point, and when the fragments are detected to reach their respective drop point positions, they are controlled to deform there in the virtual scene. This improves the randomness of the deformation of the interactive prop and its fragments, improves their hit rate, provides a fresh interactive mode in which the fragments of the interactive prop eject outward and deform, enhances the interest and interactive effect of the interaction that the terminal provides for the user, and optimizes the user experience.
In a possible embodiment, based on the device composition of fig. 11, the control ejection module 1102 includes:
an acquisition unit, configured to acquire the ejection direction and the ejection speed of any fragment of the plurality of fragments;
a determining unit, configured to determine the ejection track of the fragment according to the target deformation position, the ejection direction, and the ejection speed, the ejection track taking the target deformation position as its starting point;
and a control unit, configured to determine any position on the ejection track as the drop point position of the fragment and control the fragment to eject to the drop point position along the ejection track.
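Under the assumption of simple ballistic motion in the virtual scene, the determining unit's trajectory computation might look like the following sketch. The gravity constant, the sampling scheme, and all names are illustrative choices; the patent does not prescribe a particular motion model.

```python
# Sketch: sample a ballistic ejection track that starts at the target
# deformation position, given a per-fragment ejection direction and speed.
import math

GRAVITY = 9.8  # virtual-scene gravity, an assumed constant

def ejection_trajectory(start, direction, speed, steps=20, dt=0.05):
    """Return sampled (x, y, z) points of a fragment's ejection track.

    start: (x, y, z) target deformation position (trajectory start point).
    direction: (dx, dy, dz) ejection direction; z is treated as "up".
    """
    # Normalize the direction so that speed alone controls the magnitude.
    norm = math.sqrt(sum(c * c for c in direction)) or 1.0
    vx, vy, vz = (speed * c / norm for c in direction)
    points = []
    for i in range(steps):
        t = i * dt
        points.append((start[0] + vx * t,
                       start[1] + vy * t,
                       start[2] + vz * t - 0.5 * GRAVITY * t * t))
    return points

track = ejection_trajectory((0.0, 0.0, 0.0), (1.0, 0.0, 1.0), speed=10.0)
```

The first sampled point is the target deformation position itself, matching the requirement that the ejection track take that position as its starting point.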
In one possible embodiment, the control unit is configured to:
determining, from the virtual scene, a virtual article that intersects the ejection track, and determining the intersection point of the ejection track and the virtual article as the drop point position.
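A simplified sketch of this drop-point rule is shown below: walk along a sampled ejection track and take the first point at or below a virtual article's surface. A real scene would ray-cast against the article geometry; the flat-ground surface at height zero and all names here are assumptions for illustration.

```python
def find_drop_point(track, ground_height=0.0):
    """Return the first sampled track point at or below the ground plane,
    approximating the intersection of the ejection track with the article."""
    for point in track:
        if point[2] <= ground_height:
            return point
    return track[-1]  # never intersects within the samples: use the last point

# A short descending track; the fourth sample is the first one below ground.
track = [(0, 0, 2.0), (1, 0, 1.2), (2, 0, 0.1), (3, 0, -1.3)]
drop = find_drop_point(track)
```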
In one possible implementation, the conversion module 1101 is configured to:
decomposing the interactive prop into a plurality of original fragments in the virtual scene, and randomly determining the plurality of fragments from the plurality of original fragments.
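The random determination step amounts to drawing a subset of the original fragments without replacement, which can be sketched as follows; the fragment count and naming are illustrative assumptions.

```python
# Sketch: decompose the prop into original fragments, then randomly pick the
# plurality of fragments that will actually be ejected.
import random

def pick_fragments(original_fragments, count):
    """Randomly select `count` fragments (without replacement)."""
    count = min(count, len(original_fragments))
    return random.sample(original_fragments, count)

original = [f"fragment_{i}" for i in range(10)]  # decomposition result
chosen = pick_fragments(original, 6)
```

Because the selection is random on every throw, the set of ejected fragments, and hence the secondary explosion points, differs between uses, which is the uncertainty the embodiment aims for.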
In one possible embodiment, the apparatus is further configured to:
when detecting that one or more virtual objects in the virtual scene are located within the interaction range of any fragment, deducting the interaction attribute values of the one or more virtual objects.
In one possible embodiment, the apparatus is further configured to:
when detecting that one or more virtual objects in the virtual scene are located within the interaction range of the interactive prop in its complete form, deducting the interaction attribute values of the one or more virtual objects.
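Both deduction cases above follow the same pattern: every virtual object within the interaction (blast) radius of an explosion point has its interaction attribute value reduced. The radius, the flat damage model, and the field names in this sketch are assumptions, not specified by the patent.

```python
# Sketch: deduct an interaction attribute value (here "hp") from every
# virtual object whose position lies within the interaction range.
import math

def deduct_in_range(objects, center, radius, damage):
    """Deduct `damage` from every object within `radius` of `center`.

    objects: dict name -> {"pos": (x, y, z), "hp": int}; mutated in place.
    """
    for obj in objects.values():
        if math.dist(obj["pos"], center) <= radius:
            obj["hp"] = max(0, obj["hp"] - damage)

enemies = {"a": {"pos": (1.0, 0.0, 0.0), "hp": 100},
           "b": {"pos": (9.0, 0.0, 0.0), "hp": 100}}
deduct_in_range(enemies, center=(0.0, 0.0, 0.0), radius=5.0, damage=30)
# "a" is inside the range and loses 30 hp; "b" is outside and is unaffected.
```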
In one possible embodiment, the apparatus is further configured to:
when the triggering operation of the interactive prop is detected, displaying a predicted throwing track of the interactive prop in the virtual scene, wherein the predicted throwing track takes the current position of the interactive prop as a starting point and the target deformation position as an end point;
when the throwing operation of the interactive prop is detected, the interactive prop is controlled to fly along the predicted throwing track.
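The predicted throwing track above runs from the prop's current position to the target deformation position; one minimal way to generate such a preview is to interpolate between the two endpoints along a simple arc. The arc height, sampling density, and names are illustrative assumptions.

```python
# Sketch: sample a display-only arc from the prop's current position (start)
# to the target deformation position (end) for the throwing-track preview.
def predicted_throw_track(start, end, arc_height=2.0, samples=16):
    points = []
    for i in range(samples + 1):
        t = i / samples
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        # Linear blend of the endpoint heights plus a parabolic bulge that is
        # zero at both endpoints and peaks at arc_height when t = 0.5.
        z = start[2] + (end[2] - start[2]) * t + 4 * arc_height * t * (1 - t)
        points.append((x, y, z))
    return points

preview = predicted_throw_track((0.0, 0.0, 1.0), (8.0, 0.0, 0.0))
# preview[0] is the current position; preview[-1] is the target deformation
# position, matching the start point and end point of the predicted track.
```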
In one possible embodiment, the apparatus is further configured to:
when detecting the adjustment operation of at least one of the current position of the interactive prop or the target deformation position, acquiring at least one adjusted target position;
generating an adjusted projected throwing trajectory based on the at least one target location;
the original predicted throwing track is updated in the virtual scene to the adjusted predicted throwing track.
In one possible embodiment, the apparatus is a node device in a blockchain system, and when the node device executes an interactive property control method, interactive data generated in the control process of the interactive property is uploaded to the blockchain system.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that when the interactive prop control device provided in the above embodiment controls the interactive prop, the division into the above functional modules is used only as an example. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above. In addition, the interactive prop control device and the interactive prop control method provided by the above embodiments belong to the same concept; their specific implementation process is detailed in the method embodiment and is not described again here.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1200 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1200 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement the interactive prop control methods provided by various embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera assembly 1206, audio circuitry 1207, positioning assembly 1208, and power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the terminal 1200; in other embodiments, the display 1205 can be at least two, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is configured to locate the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
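The landscape/portrait decision described above can be sketched by comparing the gravity components the acceleration sensor reports along the device's x (short edge) and y (long edge) axes; the thresholding and axis conventions here are assumptions for illustration.

```python
# Sketch: decide display orientation from the gravity components measured
# on the device's x and y axes (values in m/s^2; z is ignored here).
def orientation_from_gravity(gx, gy):
    """Return 'portrait' when gravity mostly lies along the long edge."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

orientation_from_gravity(0.5, 9.7)   # device held upright
orientation_from_gravity(9.7, 0.3)   # device turned on its side
```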
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of terminal 1200 and/or an underlying layer of touch display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal of the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical button or vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215: when the ambient light intensity is high, the display brightness of the touch display 1205 is turned up; when the ambient light intensity is low, it is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity collected by the optical sensor 1215.
A proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that this distance gradually decreases, the processor 1201 controls the touch display 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance gradually increases, the processor 1201 controls the touch display 1205 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including at least one program code, which is executable by a processor in a terminal to perform the interactive prop control method in the above embodiments is also provided. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. An interactive prop control method, comprising:
when the interactive prop is detected to be located at a target deformation position, converting the interactive prop from a complete form into a fragment form in a virtual scene to obtain a plurality of fragments of the interactive prop;
controlling the fragments to respectively eject to a plurality of drop point positions by taking the target deformation position as a starting point;
when it is detected that the plurality of fragments respectively reach the plurality of drop point positions, controlling the plurality of fragments to respectively deform at the plurality of drop point positions in the virtual scene.
2. The method of claim 1, wherein the controlling the plurality of fragments to be ejected to a plurality of landing positions respectively starting from the target deformation position comprises:
acquiring the ejection direction and the ejection speed of any fragment in the plurality of fragments;
determining the ejection track of the fragments according to the target deformation position, the ejection direction and the ejection speed, wherein the ejection track takes the target deformation position as a starting point;
determining any position on the ejection track as a drop point position of the fragment, and controlling the fragment to eject to the drop point position along the ejection track.
3. The method of claim 2, wherein the determining any position on the ejection trajectory as the position of the drop point of the debris comprises:
and determining a virtual article intersected with the ejection track from the virtual scene, and determining the intersection point of the ejection track and the virtual article as the position of the drop point.
4. The method of claim 1, wherein the converting the interactive prop from a complete form into a fragment form in the virtual scene to obtain a plurality of fragments of the interactive prop comprises:
and decomposing the interactive prop into a plurality of original fragments in the virtual scene, and randomly determining the fragments from the original fragments.
5. The method of claim 1, wherein after controlling the plurality of fragments to deform at the plurality of landing positions in the virtual scene when the plurality of fragments are detected to reach the plurality of landing positions, respectively, the method further comprises:
when one or more virtual objects in the virtual scene are detected to be located in the interaction range of any fragment, deducting the interaction attribute values of the one or more virtual objects.
6. The method of claim 1, wherein after converting the interactive prop from a full form to a fragmented form in the virtual scene, the method further comprises:
and when detecting that one or more virtual objects in the virtual scene are positioned in the interaction range of the interactive props in the complete shape, deducting the interaction attribute values of the one or more virtual objects.
7. The method of claim 1, wherein before the step of converting the interactive prop from a complete form to a fragmented form in a virtual scene when the interactive prop is detected at the target deformation position, and obtaining a plurality of fragments of the interactive prop, the method further comprises:
when the triggering operation of the interactive prop is detected, displaying a predicted throwing track of the interactive prop in the virtual scene, wherein the predicted throwing track takes the current position of the interactive prop as a starting point and the target deformation position as an end point;
and when the throwing operation of the interactive prop is detected, controlling the interactive prop to fly along the predicted throwing track.
8. The method of claim 7, wherein upon detecting a triggering operation on the interactive prop, after displaying a projected throwing trajectory of the interactive prop in the virtual scene, the method further comprises:
when the adjustment operation of at least one of the current position of the interactive prop or the target deformation position is detected, acquiring at least one adjusted target position;
generating an adjusted projected throwing trajectory based on the at least one target location;
updating an original predicted throwing track to the adjusted predicted throwing track in the virtual scene.
9. The method according to claim 1, wherein the method is applied to a node device in a blockchain system, and when the node device executes an interactive prop control method, interactive data generated in the control process of the interactive prop is uploaded to the blockchain system.
10. An interactive prop control apparatus, the apparatus comprising:
the conversion module is used for converting the interactive prop from a complete form into a fragment form in a virtual scene when the interactive prop is detected to be located at a target deformation position, and obtaining a plurality of fragments of the interactive prop;
the control ejection module is used for controlling the fragments to respectively eject to a plurality of drop point positions by taking the target deformation positions as starting points;
and the control deformation module is used for controlling the fragments to deform at the plurality of the falling point positions respectively in the virtual scene when detecting that the fragments respectively reach the plurality of the falling point positions.
11. The apparatus of claim 10, wherein the control ejection module comprises:
the acquisition unit is used for acquiring the ejection direction and the ejection speed of any fragment in the fragments;
the determining unit is used for determining the ejection track of the fragments according to the target deformation position, the ejection direction and the ejection speed, wherein the ejection track takes the target deformation position as a starting point;
and the control unit is used for determining any position on the ejection track as the position of the drop point of the fragment and controlling the fragment to eject to the position of the drop point along the ejection track.
12. The apparatus of claim 11, wherein the control unit is configured to:
and determining a virtual article intersected with the ejection track from the virtual scene, and determining the intersection point of the ejection track and the virtual article as the position of the drop point.
13. The apparatus of claim 10, wherein the conversion module is configured to:
and decomposing the interactive prop into a plurality of original fragments in the virtual scene, and randomly determining the fragments from the original fragments.
14. A terminal, characterized in that the terminal comprises one or more processors and one or more memories, in which at least one program code is stored, which is loaded and executed by the one or more processors to implement the operations executed by the interactive prop control method according to any one of claims 1 to 9.
15. A storage medium having stored therein at least one program code, the at least one program code being loaded into and executed by a processor to perform operations performed by an interactive prop control method according to any one of claims 1 to 9.
CN201910944049.5A 2019-09-30 2019-09-30 Interactive property control method, device, terminal and storage medium Active CN110585706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910944049.5A CN110585706B (en) 2019-09-30 2019-09-30 Interactive property control method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN110585706A true CN110585706A (en) 2019-12-20
CN110585706B CN110585706B (en) 2021-10-29

Family

ID=68865526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944049.5A Active CN110585706B (en) 2019-09-30 2019-09-30 Interactive property control method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110585706B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1247351A (en) * 1998-09-09 2000-03-15 英业达股份有限公司 Processing metho for implementing visual effect of explosion on display and computer system using it
JP2013208269A (en) * 2012-03-30 2013-10-10 Bndena Inc Program, information storage medium, electronic device, and server system
CN108837507A (en) * 2018-05-29 2018-11-20 网易(杭州)网络有限公司 Virtual item control method and device, electronic equipment, storage medium
CN109200582A (en) * 2018-08-02 2019-01-15 腾讯科技(深圳)有限公司 The method, apparatus and storage medium that control virtual objects are interacted with ammunition


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
4399 editor: "生死狙击 fragmentation grenade: passers-by get hurt too", HTTP://NEWS.4399.COM/GONGLUE/SSJJ/XINWEN/853874.HTML *
LCXJ791030: "Borderlands: a complete guide to grenade types", HTTPS://M.3DMGAME.COM/MIP/GL/14194.HTML *
小杨学设计: "The cheekiest designer? He actually designed a cluster grenade! One grenade equals six! Unfair", HTTPS://V.QQ.COM/X/PAGE/K0689ZF1907.HTML *
Anonymous: "Super-weapon mini-lecture: the cluster grenade", HTTP://NEWS.ZYGAMES.COM/NEWS/GAME/4549.HTML *
Li Yi et al.: "Premiere Pro & After Effects: Non-linear Editing and Compositing for Film and Television Animation", November 2012, Beijing United Publishing Co. *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111589145A (en) * 2020-04-22 2020-08-28 Tencent Technology (Shenzhen) Co., Ltd. Virtual article display method, device, terminal and storage medium
CN111744186A (en) * 2020-08-06 2020-10-09 Tencent Technology (Shenzhen) Co., Ltd. Virtual object control method, device, equipment and storage medium
CN111744186B (en) * 2020-08-06 2023-08-11 Tencent Technology (Shenzhen) Co., Ltd. Virtual object control method, device, equipment and storage medium
US11904241B2 (en) 2020-09-04 2024-02-20 Tencent Technology (Shenzhen) Company Limited Virtual item control method and apparatus, terminal, and storage medium
WO2022222376A1 (en) * 2021-04-20 2022-10-27 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for displaying game scene, and electronic device and storage medium

Also Published As

Publication number Publication date
CN110585706B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN110721468B (en) Interactive property control method, device, terminal and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN111408133B (en) Interactive property display method, device, terminal and storage medium
CN110507994B (en) Method, device, equipment and storage medium for controlling flight of virtual aircraft
CN110585706B (en) Interactive property control method, device, terminal and storage medium
CN109529356B (en) Battle result determining method, device and storage medium
CN111475573B (en) Data synchronization method and device, electronic equipment and storage medium
CN110538459A (en) Method, apparatus, device and medium for throwing virtual explosives in virtual environment
CN111589150B (en) Control method and device of virtual prop, electronic equipment and storage medium
CN110507990B (en) Interaction method, device, terminal and storage medium based on virtual aircraft
CN110465098B (en) Method, device, equipment and medium for controlling virtual object to use virtual prop
CN110755841A (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111228809A (en) Operation method, device, equipment and readable medium of virtual prop in virtual environment
CN110755844B (en) Skill activation method and device, electronic equipment and storage medium
CN112221141A (en) Method and device for controlling virtual object to use virtual prop
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111330274A (en) Virtual object control method, device, equipment and storage medium
CN113144597A (en) Virtual vehicle display method, device, equipment and storage medium
CN111249726B (en) Operation method, device, equipment and readable medium of virtual prop in virtual environment
CN113680060B (en) Virtual picture display method, apparatus, device, medium and computer program product
CN111659122B (en) Virtual resource display method and device, electronic equipment and storage medium
CN110960849B (en) Interactive property control method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant