WO2023240925A1 - Method for picking up virtual props, apparatus, computer device, and storage medium - Google Patents

Method for picking up virtual props, apparatus, computer device, and storage medium

Info

Publication number
WO2023240925A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
prop
game
virtual prop
picking
Prior art date
Application number
PCT/CN2022/132380
Other languages
English (en)
French (fr)
Inventor
张瑞珈
朱元元
Original Assignee
网易(杭州)网络有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司
Publication of WO2023240925A1 publication Critical patent/WO2023240925A1/zh

Links

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63 - Generating or modifying game content by the player, e.g. authoring using a level editor
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/69 - Generating or modifying game content by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions

Definitions

  • the present disclosure relates to the field of game technology, and specifically to methods, devices, computer equipment and storage media for picking up virtual props.
  • a variety of virtual props can be included in the game scene.
  • the user can control the game character to pick up the virtual props in the game scene to play the game. For example, the user can pick up the virtual props in the game scene by clicking on them.
  • Embodiments of the present disclosure provide methods, devices, computer equipment, and storage media for picking up virtual props, which can solve the problem in the prior art that users are prone to accidental touches when controlling game characters to pick up virtual props, resulting in low picking accuracy.
  • embodiments of the present disclosure provide a method for picking up virtual props, in which a graphical user interface is provided through a terminal, and the content displayed by the graphical user interface at least partially includes a game scene, a game character located in the game scene, and at least one virtual prop. The method includes: determining at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, determining a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and, in response to a picking operation for the target virtual prop, controlling the game character to pick up the target virtual prop.
  • embodiments of the present disclosure also provide a device for picking up virtual props, which provides a graphical user interface through a terminal, where the content displayed by the graphical user interface at least partially includes a game scene, a game character located in the game scene, and at least one virtual prop. The device includes: a generating unit for determining at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; a determining unit for, in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, determining a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and a picking unit for, in response to a picking operation for the target virtual prop, controlling the game character to pick up the target virtual prop.
  • embodiments of the present disclosure also provide a computer device, including a processor and a memory, where the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps in any of the methods for picking up virtual props provided by the embodiments of the present disclosure.
  • embodiments of the present disclosure also provide a computer-readable storage medium that stores a plurality of instructions, the instructions being suitable for loading by a processor to execute the steps in any of the methods for picking up virtual props provided by the embodiments of the present disclosure.
  • Embodiments of the present disclosure can determine at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, determine a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and, in response to a picking operation for the target virtual prop, control the game character to pick up the target virtual prop.
  • the virtual interactive objects can be used to determine the interaction areas of the candidate virtual props in the game scene, so that the target virtual prop is determined from the candidate virtual props through the interaction result between the field of view direction and the virtual interactive objects, thereby improving the accuracy of selecting any candidate virtual prop and improving the accuracy of picking up virtual props.
  • the virtual interactive object can assist the user to quickly decide on the virtual props to be picked up, simplifying the virtual prop picking process, enabling quick picking up of virtual props, and improving the efficiency of picking up virtual props.
  • Figure 1a is a schematic scene diagram of a virtual prop picking system provided by an embodiment of the present disclosure
  • Figure 1b is a schematic flowchart of a method for picking up virtual props provided by an embodiment of the present disclosure
  • Figure 1c is a schematic diagram of a virtual identifier and a first bounding box provided by an embodiment of the present disclosure
  • Figure 1d is a schematic diagram of a first bounding box provided by an embodiment of the present disclosure
  • Figure 2a is a schematic flowchart of a method for picking up virtual props provided by another embodiment of the present disclosure
  • Figure 2b is a schematic diagram of the pickup area of virtual props provided by an embodiment of the present disclosure.
  • Figure 2c is a schematic diagram of a game character located in a pickup area provided by an embodiment of the present disclosure
  • Figure 2d is a schematic diagram of the detection ray directed to the selected area of virtual prop A and virtual prop B provided by the embodiment of the present disclosure
  • Figure 2e is a schematic diagram of a description interface for target virtual props provided by an embodiment of the present disclosure
  • Figure 2f is a schematic flowchart of determining target virtual props provided by an embodiment of the present disclosure
  • Figure 3 is a schematic structural diagram of a pickup device for virtual props provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • the game scene is the scene displayed (or provided) when the application runs on the terminal.
  • the game scene can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual environment, or a purely fictitious virtual environment.
  • the game scene may be any one of a two-dimensional game scene, a 2.5-dimensional game scene, or a three-dimensional game scene.
  • the embodiment of the present disclosure does not limit the dimensions of the game scene.
  • the game scene can include the sky, land, ocean, etc.
  • the land can include environmental elements such as deserts and cities, and the user can control the game character to move in the game scene.
  • Game character refers to the character used to simulate people or animals in the game scene.
  • the game character can be a virtual character, a virtual animal, an animation character, etc., such as a character or animal displayed in a game scene.
  • the game character may be a virtual avatar representing the user in the game scene.
  • the game scene may include multiple game characters. Each game character has its own shape and volume in the game scene and occupies a part of the space in the game scene.
  • Game character activities can include: adjusting body posture, crawling, walking, running, riding, flying, jumping, using virtual sights to aim, shooting, driving, picking up, attacking, throwing and releasing skills, etc.
  • the content displayed in the graphical user interface at least partially includes a game scene, wherein the game scene includes at least one game character.
  • game characters in the game scene include virtual characters controlled by users (player characters) and virtual characters controlled by preset system logic rather than by users (non-player characters, NPCs).
  • the game character is a virtual character controlled by the user.
  • the game character may be a virtual character controlled by the user through operations on the client.
  • the game character may be a virtual character competing in a game scene.
  • the number of game characters participating in the interaction in the game scene may be preset, or may be dynamically determined based on the number of clients participating in the interaction.
  • Virtual props refer to the virtual props that can be used by game objects in the game scene, including virtual weapons that can cause damage to other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords, axes, and ropes; supplies such as bullets; defensive props such as shields, armor, and armored vehicles; virtual props displayed through the hands when virtual objects release skills, such as virtual beams and virtual shock waves; and healing props such as medicine packs and drinks.
  • Game interface refers to the interface corresponding to the application program provided or displayed through the graphical user interface.
  • the interface includes the graphical user interface for users to interact and the game screen.
  • the game screen is the screen of the game scene.
  • the game interface may include game controls (such as skill controls, movement controls, character control controls, backpack controls, chat controls, system setting controls, and other functional controls), indicator signs (such as direction indicators, character indicators, etc.), and information display areas (such as the number of kills, game time, etc.).
  • Embodiments of the present disclosure provide methods, devices, computer equipment, and storage media for picking up virtual props.
  • the pickup device for the virtual props can be integrated into an electronic device, and the electronic device can be a terminal, a server, or other equipment.
  • the terminal can be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, or a personal computer (PC).
  • the server can be a single server or a server cluster composed of multiple servers.
  • the device for picking up virtual props can also be integrated in multiple electronic devices.
  • the device for picking up virtual props can be integrated into multiple servers, and the multiple servers jointly implement the method for picking up virtual props of the present disclosure.
  • the virtual prop picking method can be run on a terminal device or a server.
  • the terminal device may be a local terminal device.
  • the method can be implemented and executed based on a cloud interaction system, where the cloud interaction system includes a server and a client device.
  • various cloud applications, such as cloud games, can be run under the cloud interaction system.
  • cloud gaming refers to a gaming method based on cloud computing.
  • the client device is used to receive and send data and to present the game screen.
  • the client device can be a display device with a data transmission function close to the user side, such as a terminal, a TV, a computer, or a handheld computer; however, the terminal device that performs information processing is the cloud gaming server in the cloud.
  • the user operates the client device to send operation instructions, such as touch operation instructions, to the cloud game server.
  • the cloud game server runs the game according to the operation instructions, encodes and compresses the game screen and other data, and returns them to the client device through the network; finally, the game screen is decoded and output through the client device.
  • the server can also be implemented in the form of a terminal.
  • the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used to present the game screen. The local terminal device interacts with the user through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run through the electronic device.
  • the local terminal device may provide the graphical user interface to the user in a variety of ways. For example, it may be rendered and displayed on the display screen of the terminal, or provided to the user through holographic projection.
  • the local terminal device may include a display screen and a processor.
  • the display screen is used to present a graphical user interface.
  • the graphical user interface includes a game screen.
  • the processor is used to run the game, generate the graphical user interface, and control the graphical user interface to be displayed on the display screen. Users can operate on the interface through input devices such as touch screens, mice, keyboards, or gamepads.
  • a schematic scene diagram of a system for picking up virtual props is provided, which system can implement a method for picking up virtual props.
  • the virtual prop picking system may include a terminal 1000 , a server 2000 and a network 3000 .
  • the terminal is used to determine at least one candidate virtual prop from at least one virtual prop in the game scene, and the candidate virtual prop is configured with a virtual interactive object; in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interaction object, from at least one A target virtual prop corresponding to the virtual interactive object is determined among the candidate virtual props; in response to a picking operation for the target virtual prop, the game character is controlled to pick up the target virtual prop.
  • the server is used to obtain data about users playing games on the terminal.
  • the network is used for data transmission between the server and the terminal.
  • the network can be a wireless network or a wired network.
  • the wireless network can be a wireless local area network (WLAN), a local area network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc.
  • a method for picking up virtual props is provided.
  • a graphical user interface is provided through a terminal.
  • the content displayed by the graphical user interface at least partially includes a game scene, a game character located in the game scene, and at least one virtual prop. As shown in Figure 1b, the specific process of the virtual prop picking method can be as follows:
  • At least one virtual prop may refer to a virtual prop that can be picked up when the user controls the game character to perform a picking operation in the game scene. Before the picking operation is performed, the game object may also be moved into the pickup range of the virtual prop through a movement operation.
  • the candidate virtual props may refer to one or more virtual props determined from at least one virtual prop according to user operations or preset selection rules, etc.
  • the user operation may be a selection operation or marking operation on at least one virtual prop in the game scene, etc.
  • the preset selection rules may be rules preset based on the location information, type information, etc. of the virtual props. For example, the virtual props located within the pickup range of the game object in the game scene may be determined as candidate virtual props, or the virtual props whose type information matches the game object may be determined as candidate virtual props, and so on.
  • virtual props such as head protection gear and treatment medicine packs in the game scene can be determined as candidate virtual props.
  • candidate virtual props can be considered to refer to virtual props in a pickable state.
  • A virtual prop in a pickable state can interact, for picking, with the field of view direction of the game view picture; in response to the field of view direction pointing to the virtual interactive object of the prop, its state is adjusted from the pickable state to the to-be-picked-up state.
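  • As an illustrative sketch of the preset selection rules above, candidate props could be filtered by the game character's pickup range or by prop-type matching. All function and field names, and the threshold values, are assumptions for illustration only, not from the disclosure:

```python
import math

def select_candidates(props, character_pos, pickup_range, wanted_types=None):
    """Return props within pickup range of the character, or whose type
    matches one of wanted_types (a hypothetical matching rule)."""
    candidates = []
    for prop in props:
        dx = prop["pos"][0] - character_pos[0]
        dz = prop["pos"][1] - character_pos[1]
        in_range = math.hypot(dx, dz) <= pickup_range
        type_ok = wanted_types is not None and prop["type"] in wanted_types
        if in_range or type_ok:
            candidates.append(prop)
    return candidates
```

The same filter could be extended with any other preset rule (for example, only props usable by the character's class) without changing its shape.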
  • the virtual interactive object may refer to a virtual object through which the candidate virtual prop performs pickup interaction with the game character controlled by the user.
  • virtual interactive objects may include, but are not limited to, virtual identifiers or bounding boxes, and so on.
  • Configuring virtual interactive objects for candidate virtual props may refer to generating and displaying virtual interactive objects corresponding to each candidate virtual prop in the game scene.
  • virtual interactive objects of the candidate virtual props can be generated to determine the target virtual props from the candidate virtual props through the interaction results between the game character's field of view direction and the virtual interactive object.
  • the accuracy of selecting any candidate virtual prop can be improved by setting the display effect or shape size of the virtual interactive object, such as highlighting or displaying the virtual interactive object in a special shape, so as to improve the accuracy of picking up virtual props.
  • determining at least one candidate virtual prop from at least one virtual prop in the game scene may include: generating a pickup area for each virtual prop based on its location information and preset pickup area conditions; and, in response to the game character being located in any pickup area, determining the at least one virtual prop corresponding to that pickup area as a candidate virtual prop.
  • the preset pickup area conditions may refer to preset conditions for setting the pickup area of virtual props, and the pickup area may refer to the area or range in which virtual props can be picked up in the game scene.
  • the preset pickup area conditions may include a pickup radius, and a spherical or circular pickup area with a radius of the pickup radius may be set in the game scene with the location information of the virtual prop as the center of the circle.
  • the pickup area may be in the form of a bounding box, such as a spherical bounding box with a radius equal to the pickup radius.
  • different pickup radii can be set for different virtual props according to the type of the virtual prop or the game scene.
  • the smaller the pickup radius, the closer the game character needs to be to the virtual prop to perform the pickup operation, which takes more time and makes the game more difficult; this can provide diversified ways to pick up virtual props, improve user retention, and reduce idle server consumption.
  • the size of the pickup area can be adjusted according to the number of game characters in the pickup area, making it easier for multiple game characters to spread out within the pickup area, avoiding occlusion and improving the visual effect.
  • the preset pickup area conditions may include a pickup radius, and generating a pickup area for a virtual prop based on the location information and the preset pickup area conditions may include: generating an initial pickup area based on the location information and the pickup radius; and adjusting the initial pickup area according to the number of game characters in the area to obtain the pickup area of the virtual prop.
  • for example, if the pickup radius is r, the initial pickup area may be a spherical area of diameter 2×r, and the center points of both the initial pickup area and the adjusted pickup area are the position of the virtual prop in the game scene.
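  • The pickup-area logic above can be sketched as follows. This is a hedged illustration: the spherical area centered on the prop comes from the description, but the per-character growth rule and all names are assumptions, since the disclosure only states that the initial area is adjusted according to the number of game characters:

```python
from dataclasses import dataclass

@dataclass
class PickupArea:
    center: tuple   # position of the virtual prop in the scene
    radius: float

def generate_pickup_area(prop_pos, pickup_radius, num_characters_in_area=0,
                         growth_per_character=0.5):
    """Spherical pickup area centered on the prop; the growth factor per
    character is an illustrative assumption."""
    area = PickupArea(center=prop_pos, radius=pickup_radius)
    # Enlarge the area so multiple characters can spread out without occlusion.
    area.radius += num_characters_in_area * growth_per_character
    return area

def is_inside(area, pos):
    """True if a position lies within the spherical pickup area."""
    dx, dy, dz = (pos[i] - area.center[i] for i in range(3))
    return dx * dx + dy * dy + dz * dz <= area.radius ** 2
```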
  • before the target virtual prop corresponding to the virtual interactive object is determined from the at least one candidate virtual prop in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: adjusting the visual field direction in response to a visual field adjustment operation.
  • the visual field adjustment operation may refer to the operation used to adjust the visual field direction of the game visual field screen.
  • the visual field adjustment operation may include but is not limited to operations such as touch, drag, swipe, long press, short press, double click, click, end drag, etc.
  • users can perform line of sight adjustment operations through input devices such as touch screens, mice, keyboards, or handles.
  • the specific operation method depends on the game operation method or the specific game settings.
  • a gaze adjustment control can be provided in the game interface of the terminal's graphical user interface, through which the game character is controlled to move its head, such as raising or lowering it, to adjust the field of view direction of the game view picture. By adjusting the field of view direction of the game view picture, the user can be assisted in quickly selecting candidate virtual props, which can improve the efficiency of picking up virtual props.
  • first visual field adjustment operation and the second visual field adjustment operation in the embodiment of the disclosure are operations corresponding to different visual field direction adjustment processes, and the specific operation methods may be the same or different.
  • the field of view direction may refer to the direction of the game character's field of view in the game scene. The field of view direction may be a perspective in the game scene, and the perspective can differ depending on the specific settings of the game; for example, it can be a first-person perspective or a third-person perspective.
  • when the field of view direction of the game view picture points to the virtual interactive object, the field of view direction may fully or partially cover the area where the virtual interactive object is located.
  • the field of view direction of the game field of view can be the range of sight of the game character in the game scene.
  • the game character's eyes can be used as the origin of the game character's vision direction; the field of view can be a cone-shaped area with the midpoint of the game character's eyes as the vertex, or a detection ray emitted from the origin of the vision direction, for example, a detection ray emitted from the center point of the game character's eyes.
  • the camera model can be used as the origin of the line of sight.
  • the field of view direction can be a cone-shaped area emitted by the camera model.
  • the detection ray can be a detection ray emitted by the camera model and pointed to the direction of the field of view of the game field of view.
  • for example, the camera model may be located on the head or neck of the game character, or behind the game character; therefore, the displayed game screen content also differs depending on the perspective.
  • the position and/or sight direction of the game character in the three-dimensional game scene changes, the content of the game screen will also change accordingly.
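  • One plausible way to test whether the detection ray emitted from the camera model points at a virtual interactive object is a standard ray-sphere check, assuming for simplicity that the object's interaction volume is spherical. The spherical shape and all names are illustrative assumptions:

```python
def ray_hits_sphere(origin, direction, center, radius):
    """True if a detection ray from `origin` along unit vector `direction`
    intersects a sphere at `center` with the given radius."""
    # Vector from the ray origin to the sphere center.
    oc = [center[i] - origin[i] for i in range(3)]
    # Projection of oc onto the ray direction (distance along the ray
    # to the closest approach point).
    t = sum(oc[i] * direction[i] for i in range(3))
    if t < 0:  # the sphere lies behind the camera
        return False
    # Squared distance from the sphere center to the ray.
    closest_sq = sum(oc[i] * oc[i] for i in range(3)) - t * t
    return closest_sq <= radius * radius
```

A game engine's built-in raycast would normally replace this, but the geometry is the same.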
  • the target virtual prop may refer to a virtual prop that is in a state to be picked up.
  • the state to be picked up may refer to a state in which the virtual prop can be picked up in response to the game character's picking operation.
  • the virtual prop is in a state to be picked up, it can mean that the virtual prop is associated with the picking control in the game interface. If the user performs a touch operation on the picking control, the game character can be controlled to automatically pick up the virtual prop.
  • candidate virtual props can also be highlighted so that the user can quickly find the target virtual prop from at least one virtual prop.
  • the selected candidate virtual prop may be determined as the target virtual prop. For example, if the field of view direction completely covers the area where at least one candidate virtual prop is located, or if the detection ray points to the center point of at least one candidate virtual prop, then that candidate virtual prop can be determined as the target virtual prop.
  • the target virtual prop is determined through the field of view direction of the game view and the virtual interactive object, which can improve the accuracy of picking up virtual props, help users quickly decide on the virtual props to pick up, simplify the virtual prop picking process, enable rapid picking, and improve the efficiency of picking up virtual props.
  • the virtual interactive object may include at least one of a first object and a second object.
  • in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction pointing to the first object or the second object, determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  • the first object and the second object are different virtual interactive objects, that is, each candidate virtual prop can be configured with one or more virtual interactive objects.
  • the interaction area of the candidate virtual prop is determined by the first object and the second object, which can provide diversified interaction methods, improve user retention rate, and reduce server consumption.
  • the first object may include a virtual identifier, and before determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: determining the display height of the virtual identifier of the candidate virtual prop based on the sight height, and displaying the virtual identifier at the display height.
  • the virtual identifier may refer to an identifier used to represent the virtual prop in the game scene.
  • the virtual identification may have multiple presentation forms, and the specific presentation form may depend on the specific settings of the game.
  • the virtual identifier can be a graphic with the same outline as the candidate virtual prop, and the graphic is highlighted in the game scene.
  • the sight height may be the sight height of the game field of view in the game scene.
  • the camera model can be used as the origin of the line of sight, and the line of sight height can be the height of the camera model from the ground or floor in the game scene.
  • the camera model is located on the head or neck of the game character.
  • the camera model is positioned behind the game character.
  • the display height can refer to the height of the virtual logo in the game scene.
  • virtual props in game scenes are usually scattered on the ground or other planes and are difficult to observe; by displaying virtual identifiers in the game scene, users can quickly find candidate virtual props and their locations in the game scene, and the virtual identifiers improve the efficiency of picking up virtual props.
  • the display height of the virtual identifier is determined based on the sight height of the game character, so the virtual identifier can be displayed at a height convenient for observation, improving the efficiency of picking up virtual props.
  • determining the display height of the virtual identifier of the candidate virtual prop based on the sight height may include: weighting the sight height by a preset weight parameter to obtain the display height.
  • the preset weight parameter may depend on the specific settings of the game.
  • the preset weight parameter may be a preset decimal number less than 1.
  • the display height of the virtual identifier is obtained by weighting the sight height, which is computationally fast; the display height can also be quantitatively adjusted by adjusting the preset weight parameter, which has high processing efficiency and can save computing power.
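  • The weighting described above reduces to a single multiplication. A minimal sketch, assuming an illustrative default weight of 0.8 (the source only requires a preset decimal less than 1):

```python
def virtual_identifier_display_height(sight_height, weight=0.8):
    """Display height of the virtual identifier obtained by weighting the
    sight height; the 0.8 default is an illustrative assumption."""
    return sight_height * weight
```

Tuning the weight parameter then shifts every identifier's height proportionally, without per-prop recalculation.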
  • the display height may be a preset height.
  • the display height can be preset for each game character based on the eye level, so that the preset display height can be obtained based on the identity of the game character, and the virtual identifier of the candidate virtual prop can be displayed at that display height.
  • the virtual identifier may be a virtual light column located above the candidate virtual prop; for example, it can be a virtual light column extending upward from the midpoint of the candidate virtual prop.
  • the first object may include a first bounding box
  • the second object may include a second bounding box
  • the first bounding box is located within the second bounding box.
  • a bounding box can refer to a closed space that completely contains a combination of objects. Encapsulating complex objects in a simple bounding box can improve the efficiency of geometric operations.
  • the bounding box can have many shapes, and the specific shape can depend on the specific settings of the game. For example, an axis-aligned bounding box (AABB), a bounding sphere, an oriented bounding box (OBB), or a fixed-direction convex hull (FDH, also known as k-DOP) can be used to generate bounding boxes around the candidate virtual props.
  • the first bounding box and the second bounding box are both bounding boxes generated around the candidate virtual prop.
  • the first bounding box may surround the candidate virtual prop and share the same outline as the candidate virtual prop.
  • the second bounding box may be a cylindrical bounding box on the periphery of the first bounding box.
  • the bounding boxes are used to determine the interaction area of the candidate virtual prop: a bounding box with a simple structure approximately replaces the candidate virtual prop with a complex structure, which can improve the efficiency of determining the interaction area.
  • using two bounding boxes to determine the interaction area of candidate virtual props can also provide diversified interaction methods, improve user retention, and reduce server consumption.
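As a hedged sketch of the simplest of these shapes, an AABB can be built from a prop's vertices and used for cheap containment tests. The function names and data layout here are hypothetical, not from the disclosure.

```python
def aabb(vertices):
    """Return (min_corner, max_corner) of the axis-aligned box enclosing the vertices."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def contains(box, point):
    """True if the point lies inside or on the boundary of the AABB."""
    lo, hi = box
    return all(l <= p <= h for l, p, h in zip(lo, point, hi))

box = aabb([(0, 0, 0), (2, 1, 3), (1, 4, 1)])   # box == ((0, 0, 0), (2, 4, 3))
```

The containment test is three interval comparisons, which illustrates why a simple box is so much cheaper to query than the prop's full geometry.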
  • after determining at least one candidate virtual prop from the at least one virtual prop in the game scene, the method may further include: determining an interaction area of the candidate virtual prop according to the virtual interactive object.
  • the virtual interactive object may be located in the interaction area, and the field of view direction of the game field of view picture points to the virtual interactive object.
  • the field of view direction of the game field of view picture points to the interaction area of the candidate virtual prop.
  • the interaction area may refer to an area where the candidate virtual props can be picked up and interacted with the game character.
  • the area where the virtual interactive object is located in the game scene can be determined as the interaction area, or the interaction area can be determined from the position information of the virtual interactive object and the candidate virtual prop, and so on. Specifically, this can be done as follows: determine the two farthest points between the virtual interactive object and the candidate virtual prop in the game scene, take the line connecting the two points as a diameter, and determine the spherical region in the game scene defined by that diameter as the interaction area.
  • within the interaction area, the candidate virtual prop can be picked up by and interacted with by the game character; if the interaction area is the area where the virtual interactive object is located in the game scene, and the virtual interactive object is a three-dimensional model in the game scene, the interaction area can be the three-dimensional space occupied by the virtual interactive object in the game scene.
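The farthest-pair construction described above can be sketched as follows. This is an illustrative example with hypothetical names, not code from the disclosure.

```python
import math
from itertools import product

def interaction_sphere(object_points, prop_points):
    """Take the two farthest points across the two sets, use the segment
    joining them as a diameter, and return (center, radius) of the sphere."""
    a, b = max(product(object_points, prop_points),
               key=lambda pair: math.dist(*pair))
    center = tuple((pa + pb) / 2 for pa, pb in zip(a, b))
    return center, math.dist(a, b) / 2

center, radius = interaction_sphere([(0, 0, 0), (1, 0, 0)], [(4, 0, 0)])
# center == (2.0, 0.0, 0.0), radius == 2.0
```

Because the diameter spans the farthest pair, the resulting sphere is guaranteed to cover both the interactive object and the prop.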
  • determining the target virtual prop corresponding to the virtual interactive object from at least one candidate virtual prop may include:
  • the target virtual prop may be some or all of the candidate virtual props pointed to by the field-of-view direction; for example, it may be all candidate virtual props pointed to by the field-of-view direction, or only the first candidate virtual prop pointed to by the field-of-view direction.
  • the gaze point may refer to the point at which the line of sight falls in the game scene along the field-of-view direction of the game field-of-view picture.
  • the intersection point of the field-of-view direction of the game field-of-view picture with any object in the game scene, such as a game character, a virtual prop, or another object, can be determined as the gaze point of the field-of-view direction in the game scene.
  • the gaze point of the field-of-view direction of the game field-of-view picture on the virtual interactive object may be the intersection point of the field-of-view direction and the virtual interactive object; it may also be the intersection point of the field-of-view direction and the interaction area of the virtual interactive object.
  • the gaze point can also refer to the game character's gaze point in the game scene.
  • it can be determined whether the field-of-view direction of the game field-of-view picture intersects the virtual interactive object (i.e., whether there is an intersection point); if they intersect, the field-of-view direction is considered to point to the virtual interactive object. That is, by adjusting the field-of-view direction of the game field-of-view picture, the candidate virtual prop corresponding to any virtual interactive object can be selected.
  • the intersection point of the field-of-view direction with the virtual interactive object is obtained, and the target virtual prop is determined from the candidate virtual props based on the intersection point. In this way, whenever the field-of-view direction is adjusted, intersection points can be computed and the target virtual prop determined in real time, improving processing efficiency.
  • the position information of the line-of-sight origin in the game scene and the position information of the candidate virtual prop can be obtained, the line connecting the line-of-sight origin and the candidate virtual prop can be determined from the obtained position information, the angle A between that line and the horizontal direction of the game scene can be determined, and the angle B between the field-of-view direction of the game field-of-view picture and the horizontal direction of the game scene can be obtained. If the difference between angle A and angle B is less than a preset value, the field-of-view direction can be considered to point to the virtual interactive object.
  • alternatively, it can be determined whether the field-of-view direction intersects the virtual interactive object (whether there is an intersection point); if they intersect, the field-of-view direction is considered to point to the virtual interactive object. Since the game character is within the pickup range of the candidate virtual prop, when angle A and angle B are close or even equal, the game character's field-of-view direction is likely to point to the candidate virtual prop. By comparing angle A and angle B, intersection points need to be calculated only for part of the game character's field of view, which can improve processing efficiency and save computing power.
  • obtaining the position information of the line-of-sight origin in the game scene and the position information of the candidate virtual prop may include: obtaining the facial orientation of the game character in the game scene, the position information of the game character, and the position information of the candidate virtual prop, and determining a target direction from the position information of the game character and of the candidate virtual prop.
  • the target direction can refer to the direction in which the position of the game character points, in the horizontal plane, toward the position of the candidate virtual prop. If the facial orientation of the game character matches the target direction, the position information of the line-of-sight origin in the game scene is obtained.
  • the angle B may be the angle between the centerline of the field of view, or the detection ray, and the horizontal direction of the game scene.
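A minimal sketch of the angle comparison above, assuming a y-up coordinate system and a hypothetical 5° preset threshold (neither assumption comes from the disclosure):

```python
import math

def points_at_prop(sight_origin, prop_pos, angle_b_deg, threshold_deg=5.0):
    """Compare angle A (origin-to-prop line vs. horizontal) with angle B
    (field-of-view direction vs. horizontal)."""
    dx, dy, dz = (p - o for p, o in zip(prop_pos, sight_origin))
    horizontal = math.hypot(dx, dz)               # distance in the ground plane
    angle_a = math.degrees(math.atan2(dy, horizontal))
    return abs(angle_a - angle_b_deg) < threshold_deg

# Looking down at about -25° toward a prop 3 m away and 1.5 m below eye level
# (angle A ≈ -26.6°) counts as pointing at it; looking level (0°) does not.
```

Only when this cheap angle test passes would the more expensive ray-intersection test need to run, which is the computing-power saving described above.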
  • determining the target virtual prop corresponding to the first object from at least one candidate virtual prop may include:
  • the first first object may refer to the first object reached first along the field-of-view direction among the first objects pointed to by the field-of-view direction; for example, it may be the first object closest to the line-of-sight origin.
  • the field-of-view direction of the game field-of-view picture may point to the virtual interactive objects of one or more candidate virtual props. If it points to the virtual interactive objects of multiple candidate virtual props, there are multiple first objects pointed to by the field-of-view direction, so the first first object pointed to by the field-of-view direction can be determined as the target virtual prop. In this way, the user can be assisted in quickly selecting a candidate virtual prop, improving the efficiency of picking up virtual props.
  • the field-of-view direction may include a detection ray emitted from a preset position of the game field-of-view picture and pointing in the field-of-view direction.
  • in response to the field-of-view direction pointing to at least one first object, determining the first first object pointed to by the field-of-view direction may include: determining, from the first objects, the first first object pointed to by the field-of-view direction.
  • the preset position may be the origin of the line of sight set according to the game settings, for example, it may be the eye position of the game character, or it may be the position of the camera model used to determine the field of view direction of the game field of view, etc.
  • the first first object can be determined from the intersection points between the detection ray and the first objects; for example, the intersection point closest to the line-of-sight origin or to the game character can be determined as the target intersection point, and the first object corresponding to the target intersection point determined as the first first object.
  • in this way, the detection ray can be pointed at the target interactive object more accurately, avoiding accidental touches, increasing the error tolerance, and improving the accuracy of selecting and of picking up virtual props.
  • if the detection ray intersects the virtual identifier, the detection ray can be considered to point to the virtual interactive object, so the target virtual prop can be determined through the intersection of the detection ray and the virtual identifier.
  • through the virtual identifier, the user can quickly find candidate virtual props and their virtual interactive objects; by adjusting the detection ray to point accurately at the virtual identifier, the user can select virtual interactive objects more accurately, improving both the efficiency and the accuracy of picking up virtual props.
  • for example, suppose the detection ray intersects both the first bounding box A of candidate virtual prop A and the first bounding box B of candidate virtual prop B. In this case, the position information of the intersection points of the detection ray with first bounding box A and with first bounding box B can be obtained, and the first bounding box corresponding to the intersection point closest to the line-of-sight origin determined as the first first object. It should be noted that if the detection ray points to multiple first bounding boxes and at least one second bounding box, only the position information of the intersection points with the first bounding boxes is obtained at this time.
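One way to realize this closest-hit rule is the standard slab method for ray/AABB intersection. The sketch below is illustrative (hypothetical names, boxes given as (min, max) corners), not the disclosure's implementation.

```python
def ray_aabb_t(origin, direction, lo, hi):
    """Return the entry distance t along the ray into the box, or None on a miss."""
    t_near, t_far = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if not l <= o <= h:
                return None              # parallel to this slab and outside it
            continue
        t1, t2 = (l - o) / d, (h - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return None
    return t_near

def first_first_object(origin, direction, boxes):
    """Among the boxes hit by the detection ray, pick the one entered first."""
    hits = [(t, i) for i, box in enumerate(boxes)
            if (t := ray_aabb_t(origin, direction, *box)) is not None]
    return min(hits)[1] if hits else None
```

`first_first_object` returns the index of the box whose intersection point is closest to the line-of-sight origin, i.e. the "first first object" in the sense above.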
  • the field-of-view direction may include a detection ray emitted from a preset position of the game field-of-view picture and pointing in the field-of-view direction.
  • in response to the field-of-view direction pointing to at least one first object, determining the first first object pointed to by the field-of-view direction may include: determining the first first object that responds to the detection ray as the first first object.
  • when a detection ray is emitted from the line-of-sight origin and intersects any first object, that first object can respond to the detection ray. Once some first object has responded to the detection ray, extension of the detection ray can be stopped and the first object that responded first determined as the first first object, which can improve processing efficiency and save computing power.
  • determining the target virtual prop corresponding to the second object from at least one candidate virtual prop may include:
  • the first second object may refer to the second object reached first along the field-of-view direction among the second objects pointed to by the field-of-view direction; for example, it may be the second object closest to the line-of-sight origin.
  • the virtual interactive object may include a first bounding box and a second bounding box. Since the first bounding box occupies less space in the game scene than the second bounding box, when the user wants to select a candidate virtual prop by adjusting the detection ray, it is harder to make the detection ray intersect the first bounding box than the second bounding box. A method for determining the intersection point can therefore be provided based on how easily the detection ray can be adjusted to intersect each of the two bounding boxes. Specifically, if the adjusted detection ray intersects the first bounding box, the candidate virtual prop corresponding to the first bounding box is considered to be selected through the detection ray.
  • the candidate virtual prop corresponding to the first bounding box may be selected by mistake.
  • for example, if the detection ray intersects both the first bounding box and the second bounding box of candidate virtual prop A, the intersection point of the detection ray with the first bounding box of candidate virtual prop A is determined as the target intersection point. If the detection ray intersects the second bounding box of candidate virtual prop A but does not intersect the first bounding box of any candidate virtual prop, the intersection point of the detection ray with the second bounding box of candidate virtual prop A can be determined as the target intersection point.
  • the candidate virtual prop is determined to be the target virtual prop. In this way, when there is only one candidate virtual prop in the game scene, the second object can assist the user to quickly select the candidate virtual prop, which can improve the efficiency of picking up the virtual prop.
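The priority rule above, first bounding boxes beat second bounding boxes with distance as a tiebreaker, can be sketched as follows (illustrative only, with a hypothetical hit-tuple layout):

```python
def target_prop(hits):
    """hits: (tier, distance, prop) tuples, where tier 1 means the detection
    ray hit a first bounding box and tier 2 a second bounding box. Tuple
    ordering makes any tier-1 hit beat every tier-2 hit, with nearer hits
    winning ties within a tier."""
    return min(hits)[2] if hits else None

# A far first-box hit on prop "A" still beats a near second-box hit on prop "B".
picked = target_prop([(2, 1.0, "B"), (1, 4.0, "A")])
# picked == "A"
```

Encoding the priority as the leading tuple element keeps the whole decision to a single `min`, which mirrors the "first box wins, second box is fallback" rule without branching.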
  • the field-of-view direction may include a detection ray emitted from a preset position of the game field-of-view picture and pointing in the field-of-view direction. In response to the field-of-view direction not pointing to any first object while pointing to at least one second object, determining the first second object pointed to by the field-of-view direction may include: determining, from the second objects, the first second object pointed to by the field-of-view direction.
  • for example, suppose the detection ray intersects both the second bounding box A of candidate virtual prop A and the second bounding box B of candidate virtual prop B. In this case, the position information of the intersection points of the detection ray with second bounding box A and with second bounding box B can be obtained, and the second bounding box corresponding to the intersection point closest to the line-of-sight origin determined as the first second object.
  • the field-of-view direction may include a detection ray emitted from a preset position of the game field-of-view picture and pointing in the field-of-view direction. In response to the field-of-view direction not pointing to any first object while pointing to at least one second object, determining the first second object pointed to by the field-of-view direction may include: determining the first second object that responds to the detection ray as the first second object.
  • the first second object that responds to the detection ray can be determined according to the time order in which the second objects respond, which can improve processing efficiency and save computing power.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field-of-view direction of the game field-of-view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the target virtual prop in a first display mode.
  • the first display mode may be a preset display mode depending on the specific settings of the game. For example, it may include any of the following display modes: reducing transparency, increasing color saturation, switching colors, highlighting, outlining, adding a prominent mark, and so on. Adding a prominent mark may mean adding a mark such as an asterisk, dot, or triangle on or near the surface of the target virtual prop.
  • the first display mode may be highlighting.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field-of-view direction of the game field-of-view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the first bounding box of the target virtual prop in a second display mode.
  • the second display mode may be a preset display mode depending on the specific settings of the game. For example, it may include any of the following display modes: reducing transparency, increasing color saturation, switching colors, highlighting, outlining, adding a prominent mark, and so on.
  • the second display mode may be the same as or different from the first display mode. Displaying the target virtual prop, or its first bounding box, in the first or second display mode can assist the user in quickly finding the selected target virtual prop, and can improve the efficiency of picking up virtual props.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field-of-view direction of the game field-of-view picture corresponding to the game character pointing to the virtual interactive object, the method further includes:
  • while the field-of-view direction continues to point to the virtual interactive object of the target virtual prop, the target virtual prop is maintained as the target virtual prop.
  • in response to a picking operation for the target virtual prop, the game character is controlled to pick up the target virtual prop.
  • the picking operation may refer to the operation used to control the game character to perform picking actions for virtual props.
  • picking operations may include, but are not limited to, touch, drag, swipe, long press, short press, double-click, click, releasing a drag, and so on. Users can perform picking operations through input devices such as a touch screen, mouse, keyboard, or gamepad.
  • the specific operation method depends on the game operation method or the specific game settings.
  • the game character can be controlled to pick up the target virtual prop through the user's touch operation on the target virtual prop or a virtual interactive object of the target virtual prop.
  • the candidate virtual props selected by the field of view direction of the game field of view in the game scene can be accurately picked up, thereby improving the accuracy of picking up virtual props.
  • before controlling the game character to pick up the target virtual prop in response to the picking operation, the method may further include: displaying a picking control associated with the target virtual prop.
  • controlling the game character to pick up the target virtual prop includes: in response to a touch operation acting on the picking control, controlling the game character to pick up the target virtual prop.
  • the picking control associated with the target virtual prop can be displayed in the game interface of the graphical user interface. If the user touches the picking control, the game character can be controlled to pick up the target virtual prop automatically, which simplifies the virtual prop picking process, enables virtual props to be picked up quickly, and improves the efficiency of picking up virtual props.
  • the virtual prop picking solution provided by the embodiments of the present disclosure can be applied in various game scenarios. Taking a multiplayer competitive game as an example: at least one candidate virtual prop is determined from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the field-of-view direction of the game field-of-view picture corresponding to the game character pointing to the virtual interactive object, a target virtual prop corresponding to the virtual interactive object is determined from the at least one candidate virtual prop; and in response to a picking operation for the target virtual prop, the game character is controlled to pick up the target virtual prop.
  • the virtual interactive object configured for a candidate virtual prop can be used to determine the interaction area of the candidate virtual prop in the game scene, so that the target virtual prop is determined from the candidate virtual props through the result of the interaction between the field-of-view direction and the virtual interactive objects, improving the accuracy of selecting any candidate virtual prop and the accuracy of picking up the virtual prop.
  • this can also assist the user in quickly deciding which virtual prop to pick up, simplifying the virtual prop picking process, enabling virtual props to be picked up quickly, and improving the efficiency of picking up virtual props.
  • the first object may include a virtual identifier. By displaying the virtual identifier in the game scene, on the one hand, the user can quickly find candidate virtual props and their virtual interactive objects, improving the efficiency of picking up virtual props; on the other hand, the virtual identifier can be displayed at a height that is easy to observe based on the line-of-sight height, further improving the efficiency of picking up virtual props.
  • the first object may include a first bounding box, and the second object may include a second bounding box. The bounding boxes are used to determine the interaction area of the candidate virtual prop: a bounding box with a simple structure approximately replaces the candidate virtual prop with a complex structure, which improves the efficiency of determining the interaction area and simplifies the interaction process, thereby improving the efficiency of picking up props.
  • using the first object and the second object to determine the interaction area of candidate virtual props can provide diversified interaction methods, improve user retention, and reduce server consumption.
  • a spherical pickup area can be generated, which may be a spherical bounding box with radius n.
  • the distance f between the game character and virtual prop A can be obtained; only when 0 < f ≤ n, i.e., within the effective pickup distance, is the virtual prop in a pickable state. In this way, a virtual prop can be picked up and interacted with through the field-of-view direction of the game field-of-view picture only within the effective pickup distance; beyond the effective pickup distance, it cannot be picked up or interacted with.
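The effective-pickup-distance check is a one-liner; this sketch is illustrative, with hypothetical names:

```python
import math

def is_pickable(character_pos, prop_pos, n):
    """A prop is in a pickable state only when 0 < f <= n, where f is the
    distance between the game character and the prop."""
    f = math.dist(character_pos, prop_pos)
    return 0 < f <= n

# With pickup radius n = 5, a character 3 units away can pick the prop up;
# one 7 units away cannot.
```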
  • the user can control the movement of the game character so that the game character enters the pickup area of at least one virtual prop.
  • virtual prop A is a candidate virtual prop.
  • a virtual identifier, a first bounding box, and a second bounding box of virtual prop A can be generated. As shown in Figures 1c and 1d, the virtual identifier is a virtual light column extending upward from the midpoint of the candidate virtual prop, whose height is half the height from the ground to the line of sight.
  • the first bounding box is a bounding box around the candidate virtual prop and has the same outline as the candidate virtual prop, as shown in Figure 1d.
  • the second bounding box is a cylindrical bounding box on the periphery of the first bounding box, with a cylinder height d and a diameter e, where d and e can be set according to the application scenario.
  • the virtual identifier is visible in the game scene, and the user can quickly find virtual prop A through it; the first bounding box and the second bounding box are not visible in the game scene.
  • the height of the virtual light column can be half of the line-of-sight height, so that no picking interaction occurs while the game character looks straight ahead; when the game character's detection ray is adjusted slightly downward, it can intersect the virtual light column and trigger the picking interaction.
  • the detection ray can simulate the human eye as the trigger for picking interactions, and its field-of-view direction can default to level (head-on).
  • if the user has quickly found virtual prop A through the virtual identifier in the game scene, but the field-of-view direction of the game field-of-view picture does not point to the virtual light column, the first bounding box, or the second bounding box of virtual prop A, the game character cannot interact with virtual prop A through the field-of-view direction. The user can therefore touch the sight-adjustment control in the game interface to make the game character lower its head until the angle between the detection ray and the horizontal direction of the game scene (the ground) is greater than 0° and less than 45°; as shown in Figure 1c, the angle b must be less than 45°.
  • the detection ray of the game character can intersect with the virtual light column.
  • it is determined whether the detection ray intersects the virtual light column, the first bounding box, or the second bounding box of any candidate virtual prop in the game scene; if it does, the detection ray is considered to point to the virtual interactive object.
  • since the virtual light column cannot completely cover virtual prop A, its main function is to indicate the location of virtual prop A and to interact with detection rays above the prop. Because there is often some distance between the game character and virtual prop A, it is difficult for the detection ray to select the virtual light column precisely; therefore, the virtual light column, the first bounding box, and the second bounding box can be combined to determine the gaze point of the detection ray.
  • when virtual prop A and virtual prop B are both candidate virtual props, different virtual interactive objects can be placed in the game scene, the area they occupy can be divided into a selected area and a fault-tolerant area, and the picking interaction method can be determined based on the selected area and the fault-tolerant area.
  • for example, the interaction area occupied by the second bounding box can be used as the fault-tolerant area and the interaction area occupied by the first bounding box as the selected area; alternatively, the interaction area occupied by the virtual light column and the second bounding box can be used as the fault-tolerant area and the interaction area occupied by the first bounding box as the selected area.
  • the target virtual prop can be highlighted and outlined, or a description interface for the target virtual prop can be displayed on the game interface, as shown in Figure 2e.
  • the pickup area of each virtual prop in the game scene is determined; the pickup area can be a spherical bounding box. If the game character is within the pickup ranges of both virtual prop A and virtual prop B, the virtual identifier, first bounding box, and second bounding box of virtual prop A and of virtual prop B are generated respectively. If the game character is within the pickup range of neither virtual prop A nor virtual prop B, neither prop responds to the pickup interaction.
  • if the detection ray first intersects the selected area of virtual prop A, virtual prop A is selected. After the user adjusts the detection ray: if the detection ray first intersects the selected area of virtual prop B, virtual prop B is selected; if the detection ray does not intersect the selected area of virtual prop B but intersects the fault-tolerant area of virtual prop A, virtual prop A remains selected; if the detection ray intersects neither the selected area of virtual prop B nor the fault-tolerant area of virtual prop A, but intersects the fault-tolerant area of virtual prop B, virtual prop B is selected; and if the detection ray intersects neither the selected area nor the fault-tolerant area of any candidate virtual prop, no candidate virtual prop is selected.
  • the selected virtual prop A or virtual prop B can be picked up in response to the picking operation, while an unselected virtual prop does not respond to the picking operation.
  • if the detection ray leaves the selected area of the target virtual prop but remains within its fault-tolerant area, the selected state is maintained, which can avoid deselection of the virtual prop caused by an accidental slight movement.
  • the fault tolerance zone can help the user quickly select the only candidate virtual prop.
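The selection rules just described amount to a small state machine. The following sketch is illustrative (hypothetical names, hit lists ordered nearest-first), not the disclosure's implementation:

```python
def update_selection(current, selected_hits, tolerant_hits):
    """current: currently selected prop (or None); selected_hits / tolerant_hits:
    props whose selected / fault-tolerant areas the detection ray intersects,
    ordered nearest-first."""
    if selected_hits:
        return selected_hits[0]          # a selected-area hit always wins
    if current is not None and current in tolerant_hits:
        return current                   # keep the selection via fault tolerance
    if tolerant_hits:
        return tolerant_hits[0]          # otherwise accept another tolerant hit
    return None                          # ray misses every area: nothing selected

state = update_selection(None, ["A"], [])        # "A" is selected
state = update_selection(state, [], ["A", "B"])  # "A" remains selected
state = update_selection(state, [], ["B"])       # selection switches to "B"
state = update_selection(state, [], [])          # nothing selected
```

Keeping `current` as an input is what gives the fault-tolerant area its "remain selected" behavior: the previous selection survives as long as the ray stays inside that prop's tolerant area.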
  • the spherical bounding box (pickup area), the first bounding box (selected area), and the second bounding box (fault-tolerant area) form three layers of bounding boxes that can respond to different interaction processes; the cooperation of the three layers of bounding boxes can improve the accuracy and efficiency of picking up virtual props.
  • the game character is controlled to pick up the target virtual prop.
  • by setting a selected area and a fault-tolerant area, the embodiments of the present disclosure provide a two-layer picking judgment range for each virtual prop: the selected area is used for precise selection and switching, and the fault-tolerant area provides tolerance when selecting.
  • the embodiments of the present disclosure also provide a device for picking up virtual props.
  • the device for picking up virtual props can be integrated in an electronic device, and the electronic device can be a terminal, a server, or other equipment.
  • the terminal can be a mobile phone, tablet computer, smart Bluetooth device, laptop, personal computer and other devices;
  • the server can be a single server or a server cluster composed of multiple servers.
  • a graphical user interface is provided through the terminal, and the content displayed by the graphical user interface at least partially includes a game scene, a game character located in the game scene, and at least one virtual prop.
  • the picking device of the virtual prop may include a generating unit 310, a determining unit 320 and a picking unit 330, as follows:
  • the generating unit 310 is configured to determine at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object.
  • the generating unit 310 can be specifically configured to: obtain the prop position information of the virtual prop in the game scene; generate the pickup area of the virtual prop according to the prop position information and a preset pickup area condition; and, if the game character is located within any pickup area, determine the at least one virtual prop corresponding to that pickup area as a candidate virtual prop.
  • the preset pickup area conditions include a pickup radius, and generating the pickup area of a virtual prop based on the prop position information and the preset pickup area conditions includes: generating an initial pickup area based on the prop position information and the pickup radius; and adjusting the initial pickup area according to the number of game characters in the initial pickup area, to obtain the pickup area of the virtual prop.
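As an illustrative sketch only (all names, and the rule of shrinking the radius as more characters enter the area, are hypothetical and not taken from the disclosure), the pickup-area generation and character-count adjustment described above might look like:

```python
import math
from dataclasses import dataclass

@dataclass
class Prop:
    name: str
    x: float
    y: float

def pickup_area(prop: Prop, pickup_radius: float, characters_inside: int,
                shrink_per_character: float = 0.1) -> float:
    """Return the effective pickup radius for a prop.

    The initial area is a circle of `pickup_radius` around the prop; when
    several game characters crowd into it, the radius is shrunk so each
    character only collects nearby props (a hypothetical adjustment rule).
    """
    factor = max(0.5, 1.0 - shrink_per_character * max(0, characters_inside - 1))
    return pickup_radius * factor

def candidate_props(props, char_x, char_y, pickup_radius, counts):
    """Props whose (possibly adjusted) pickup area contains the character."""
    out = []
    for p in props:
        r = pickup_area(p, pickup_radius, counts.get(p.name, 0))
        if math.hypot(char_x - p.x, char_y - p.y) <= r:
            out.append(p)
    return out
```

A character standing 1 unit from a prop with a 3-unit radius would collect it as a candidate, while a prop 9 units away would not.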
  • the first object may include a virtual logo
  • the generation unit 310 may also be configured to: obtain the sight height of the game field of view; determine the display height of the virtual logo of the candidate virtual prop based on the sight height; and display the virtual logo of the candidate virtual prop at the display height.
  • determining the display height of the virtual logo of the candidate virtual prop based on the sight height includes: obtaining a preset weight parameter; and performing a weight calculation on the sight height according to the preset weight parameter, to obtain the display height of the virtual logo of the candidate virtual prop.
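A minimal sketch of the weight calculation above, assuming a single hypothetical preset weight parameter (the default value 0.8 and the function name are illustrative, not from the disclosure):

```python
def logo_display_height(sight_height: float, weight: float = 0.8) -> float:
    """Weighted display height for a candidate prop's virtual logo.

    `weight` stands in for the preset weight parameter: a value below 1.0
    places the logo slightly under the sight line so it does not block
    the crosshair (an assumed design choice for illustration).
    """
    return sight_height * weight
```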
  • the first object may include a first bounding box
  • the second object may include a second bounding box
  • the first bounding box is located within the second bounding box.
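The nested arrangement of the first (precise) and second (fault-tolerant) bounding boxes can be sketched as follows; the axis-aligned box representation and the fixed inflation margin are assumptions for illustration, not the disclosure's actual construction:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_x: float; min_y: float; min_z: float
    max_x: float; max_y: float; max_z: float

    def contains(self, other: "AABB") -> bool:
        """True if `other` lies entirely within this box."""
        return (self.min_x <= other.min_x and other.max_x <= self.max_x and
                self.min_y <= other.min_y and other.max_y <= self.max_y and
                self.min_z <= other.min_z and other.max_z <= self.max_z)

def make_bounding_boxes(prop_box: AABB, margin: float) -> tuple:
    """First (precise) box is the prop's own box; the second
    (fault-tolerant) box is the same box inflated by `margin`, so the
    first bounding box is always located within the second."""
    outer = AABB(prop_box.min_x - margin, prop_box.min_y - margin,
                 prop_box.min_z - margin, prop_box.max_x + margin,
                 prop_box.max_y + margin, prop_box.max_z + margin)
    return prop_box, outer
```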
  • the determining unit 320 is configured to, in response to the visual field direction of the game visual field picture corresponding to the game character pointing to the virtual interactive object, determine a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  • the virtual interactive object may include at least one of a first object and a second object
  • the determining unit 320 may be configured to: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object or the second object, determine the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  • determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop; or, in response to the field of view direction not pointing to the first object and the field of view direction pointing to the second object, determining a target virtual prop corresponding to the second object from the at least one candidate virtual prop.
  • determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop includes: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the first first object to which the field of view direction points; and determining, from the at least one candidate virtual prop, a target virtual prop corresponding to the first first object.
  • the field of view direction includes a detection ray that is emitted from a preset position of the game field of view picture and points to the field of view direction.
  • determining the first first object to which the field of view direction points includes: in response to the detection ray pointing to at least one first object, determining a first intersection point, where the first intersection point is the intersection point of the detection ray and the first object; and determining, from the first objects according to the position information of the first intersection points, the first first object to which the field of view direction points.
  • determining a target virtual prop corresponding to the second object from the at least one candidate virtual prop includes: in response to the visual field direction not pointing to the first object and the visual field direction pointing to the second object, determining the first second object to which the visual field direction points; and determining the target virtual prop corresponding to the first second object from the at least one candidate virtual prop.
  • the field of view direction includes a detection ray that is emitted from a preset position of the game field of view screen and points to the field of view direction.
  • determining the first second object to which the field of view direction points includes: in response to the detection ray not pointing to the first object and the detection ray pointing to the second object, determining a second intersection point, where the second intersection point is the intersection point of the detection ray and the second object; and determining, from the second objects according to the position information of the second intersection point, the first second object to which the field of view direction points.
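The detection-ray logic above — preferring first objects, falling back to second objects, and resolving ties by the nearest intersection point — can be sketched with a standard slab-method ray/box test. The box representation, function names, and return convention are assumptions for illustration, not the disclosure's actual implementation:

```python
def ray_aabb_t(origin, direction, box):
    """Distance along the ray to its nearest intersection with an
    axis-aligned box ((min_x, min_y, min_z), (max_x, max_y, max_z)),
    or None if the ray misses (standard slab method)."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box[0], box[1]):
        if abs(d) < 1e-12:
            if not (lo <= o <= hi):
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
    if t_near > t_far or t_far < 0:
        return None
    return max(t_near, 0.0)

def select_target(origin, direction, first_boxes, second_boxes):
    """Pick a target prop index: first-layer (precise) boxes take
    priority; among the boxes hit, the nearest intersection point wins;
    second-layer (fault-tolerant) boxes are consulted only when no
    first-layer box is hit. Returns (layer, index) or None."""
    for layer, boxes in enumerate((first_boxes, second_boxes)):
        hits = [(t, i) for i, b in enumerate(boxes)
                if (t := ray_aabb_t(origin, direction, b)) is not None]
        if hits:
            return layer, min(hits)[1]
    return None
```

With the ray cast along the x-axis, a first-layer box at x=2..3 is chosen over one at x=5..6, and second-layer boxes are only considered when no first-layer box is hit.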
  • the determining unit 320 may also be configured to: adjust the field of view direction in response to a first field of view adjustment operation; and, if the adjusted field of view direction does not point to any first object and the adjusted field of view direction points to the second object of the target virtual prop, maintain the target virtual prop as the target virtual prop.
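The target-maintenance behaviour above (keeping the current target while the adjusted view still falls in its second object) can be sketched as a small state update; how a different prop's second object is handled is an assumption here, and all names are hypothetical:

```python
def update_target(current_target, aimed_first, aimed_second):
    """Sticky target selection: a new first-layer hit always wins; if no
    first object is aimed at but the view still falls in the current
    target's second (fault-tolerant) box, the current target is kept.
    `aimed_first` / `aimed_second` are the ids of the props whose first /
    second bounding boxes the view direction points at, or None."""
    if aimed_first is not None:
        return aimed_first
    if aimed_second is not None and aimed_second == current_target:
        return current_target
    # Assumed fallback: switch to (or clear) whatever second object is aimed at.
    return aimed_second
```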
  • the determining unit 320 may also be configured to: display the target virtual prop through the first display mode.
  • the determining unit 320 may also be configured to adjust the direction of the field of view in response to the second field of view adjustment operation.
  • the determining unit 320 may also be configured to: display the first bounding box of the target virtual prop through the second display mode.
  • the picking unit 330 may be specifically configured to: provide a picking control in the graphical user interface, where controlling the game character to pick up the target virtual prop includes: in response to a touch operation acting on the picking control, controlling the game character to pick up the target virtual prop.
  • each of the above units can be implemented as an independent entity, or can be combined in any way to be implemented as the same or several entities.
  • for the specific implementation of each of the above units, please refer to the previous method embodiments, which will not be described again here.
  • embodiments of the present disclosure can use the virtual interactive objects configured for the candidate virtual props to determine the interaction areas of the candidate virtual props in the game scene, so that the target virtual prop is determined from the candidate virtual props through the interaction result between the field of view direction and the virtual interactive objects, thereby improving the accuracy of selecting any candidate virtual prop and improving the accuracy of picking up the virtual prop.
  • it can assist the user to quickly decide on the virtual props to be picked up, simplifying the virtual prop picking process, enabling quick picking up of virtual props, and improving the efficiency of picking up virtual props.
  • the computer device may be a terminal or a server.
  • the terminal may be a smartphone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer, a personal digital assistant (PDA), or another terminal device.
  • Figure 4 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • the computer device 400 includes a processor 410 with one or more processing cores, a memory 420 with one or more computer-readable storage media, and a computer program stored on the memory 420 and executable on the processor, where the processor 410 is electrically connected to the memory 420.
  • the structure of the computer equipment shown in the figures does not constitute a limitation on the computer equipment, and may include more or fewer components than shown in the figures, or combine certain components, or arrange different components.
  • the processor 410 is the control center of the computer device 400; it uses various interfaces and lines to connect the various parts of the entire computer device 400 and, by running or loading the software programs and/or modules stored in the memory 420 and calling the data stored in the memory 420, performs the various functions of the computer device 400 and processes data, thereby monitoring the computer device 400 as a whole.
  • the processor 410 in the computer device 400 loads instructions corresponding to the processes of one or more application programs into the memory 420 according to the following steps, and the processor 410 runs the application programs stored in the memory 420 to implement the methods of each of the aforementioned method embodiments:
  • At least one candidate virtual prop is determined from at least one virtual prop in the game scene, and the candidate virtual prop is configured with a virtual interactive object; in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, at least one candidate virtual prop is determined from the at least one candidate virtual prop A target virtual prop corresponding to the virtual interactive object; in response to a picking operation for the target virtual prop, the game character is controlled to pick up the target virtual prop.
  • the virtual interactive object may include at least one of a first object and a second object.
  • determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object or the second object, determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  • the first object may include a virtual identifier, and before determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: obtaining the sight height of the game field of view; determining the display height of the virtual logo of the candidate virtual prop based on the sight height; and displaying the virtual logo of the candidate virtual prop at the display height.
  • determining the display height of the virtual logo of the candidate virtual prop based on the sight height may include: obtaining a preset weight parameter; and performing a weight calculation on the sight height according to the preset weight parameter, to obtain the display height of the virtual logo of the candidate virtual prop.
  • the first object may include a first bounding box
  • the second object may include a second bounding box
  • the first bounding box is located within the second bounding box.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the first bounding box of the target virtual prop through a second display mode.
  • determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop; or, in response to the field of view direction not pointing to the first object and the field of view direction pointing to the second object, determining a target virtual prop corresponding to the second object from the at least one candidate virtual prop.
  • determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the first first object to which the field of view direction points; and determining, from the at least one candidate virtual prop, a target virtual prop corresponding to the first first object.
  • the field of view direction may include a detection ray that is emitted from a preset position of the game field of view picture and points to the field of view direction.
  • determining the first first object to which the field of view direction points may include: in response to the detection ray pointing to at least one first object, determining a first intersection point, where the first intersection point is the intersection point of the detection ray and the first object; and determining, from the first objects according to the position information of the first intersection points, the first first object to which the field of view direction points.
  • determining the target virtual prop corresponding to the second object from the at least one candidate virtual prop may include: in response to the visual field direction not pointing to the first object and the visual field direction pointing to the second object, determining the first second object to which the visual field direction points; and determining the target virtual prop corresponding to the first second object from the at least one candidate virtual prop.
  • the field of view direction may include detection rays emitted from a preset position of the game field of view screen and pointing to the field of view direction.
  • determining the first second object to which the field of view direction points may include: in response to the detection ray not pointing to the first object and the detection ray pointing to the second object, determining a second intersection point, where the second intersection point is the intersection point of the detection ray and the second object; and determining, from the second objects according to the position information of the second intersection point, the first second object to which the field of view direction points.
  • the method may further include: adjusting the visual field direction in response to a first visual field adjustment operation; and, if the adjusted visual field direction does not point to any first object and the adjusted visual field direction points to the second object of the target virtual prop, maintaining the target virtual prop as the target virtual prop.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the target virtual prop through a first display mode.
  • determining at least one candidate virtual prop from at least one virtual prop in the game scene may include: obtaining the prop position information of the virtual props in the game scene; generating a pickup area for each virtual prop based on the prop position information and preset pickup area conditions; and, if the game character is located in any pickup area, determining the at least one virtual prop corresponding to that pickup area as a candidate virtual prop.
  • the preset pickup area conditions include a pickup radius, and generating the pickup area of a virtual prop based on the prop position information and the preset pickup area conditions may include: generating an initial pickup area based on the prop position information and the pickup radius; and adjusting the initial pickup area according to the number of game characters in the initial pickup area, to obtain the pickup area of the virtual prop.
  • controlling the game character to pick up the target virtual prop includes: in response to a touch operation acting on the picking control, controlling the game character to pick up the target virtual prop.
  • the virtual prop picking method run by the embodiments of the present disclosure can configure virtual interactive objects for the candidate virtual props, and can use the virtual interactive objects to determine the interaction areas of the candidate virtual props in the game scene, so that the target virtual prop is determined from the candidate virtual props through the interaction result between the field of view direction and the virtual interactive objects, thereby improving the accuracy of selecting any candidate virtual prop and improving the accuracy of picking up the virtual prop.
  • it can assist the user to quickly decide on the virtual props to be picked up, simplifying the virtual prop picking process, enabling quick picking up of virtual props, and improving the efficiency of picking up virtual props.
  • the computer device 400 also includes: a touch display screen 430 , a radio frequency circuit 440 , an audio circuit 450 , an input unit 460 and a power supply 470 .
  • the processor 410 is electrically connected to the touch display screen 430, the radio frequency circuit 440, the audio circuit 450, the input unit 460 and the power supply 470 respectively.
  • the structure of the computer equipment shown in FIG. 4 does not constitute a limitation on the computer equipment, and may include more or fewer components than shown in the figure, or combine certain components, or arrange different components.
  • the touch display screen 430 can be used to display a graphical user interface and receive operation instructions generated by the user acting on the graphical user interface.
  • the touch display screen 430 may include a display panel and a touch panel.
  • the display panel can be used to display information input by the user or information provided to the user as well as various graphical user interfaces of the computer device. These graphical user interfaces can be composed of graphics, text, icons, videos, and any combination thereof.
  • the display panel can be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), organic light-emitting diode (OLED, Organic Light-Emitting Diode), etc.
  • the touch panel can be used to collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel using a finger, a stylus, or any suitable object or accessory), generate corresponding operation instructions, and execute the corresponding programs according to the operation instructions.
  • the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact point coordinates, and sends them to the processor 410, and can also receive commands sent by the processor 410 and execute them.
  • the touch panel can cover the display panel.
  • when the touch panel detects a touch operation on or near it, the operation is sent to the processor 410 to determine the type of the touch event, and the processor 410 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • the touch panel and the display panel can be integrated into the touch display 430 to implement input and output functions.
  • the touch panel and the display panel can also be used as two independent components to implement the input and output functions, that is, the touch display screen 430 can also serve as a part of the input unit 460 to implement the input function.
  • the processor 410 executes a game application to generate a graphical user interface on the touch display 430.
  • the content displayed on the graphical user interface at least partially includes a game scene, a game character located in the game scene, and at least one virtual prop.
  • the touch display screen 430 is used to present a graphical user interface and receive operation instructions generated by the user acting on the graphical user interface.
  • the radio frequency circuit 440 can be used to send and receive radio frequency signals to establish wireless communication with network equipment or other computer equipment through wireless communication, and to send and receive signals with the network equipment or other computer equipment.
  • the audio circuit 450 may be used to provide an audio interface between the user and the computer device through speakers and microphones.
  • the audio circuit 450 can transmit the electrical signal converted from the received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 450 and converted into audio data; the audio data is processed by the audio data output processor 410 and then sent, for example, to another computer device via the radio frequency circuit 440, or output to the memory 420 for further processing.
  • Audio circuitry 450 may also include an earphone jack to provide communication of peripheral headphones to the computer device.
  • the input unit 460 can be used to receive input numbers, character information, or user characteristic information (such as fingerprint, iris, or facial information), and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • Power supply 470 is used to power various components of computer device 400.
  • the power supply 470 can be logically connected to the processor 410 through a power management system, so that functions such as charging, discharging, and power consumption management can be implemented through the power management system.
  • Power supply 470 may also include any other components such as one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
  • the computer device 400 may also include a camera, a sensor, a wireless fidelity module, a Bluetooth module, etc., which will not be described again here.
  • the computer device can configure virtual interactive objects for the candidate virtual props, and can use the virtual interactive objects to determine the interaction areas of the candidate virtual props in the game scene, so that the target virtual prop is determined from the candidate virtual props through the interaction result between the field of view direction and the virtual interactive objects, thereby improving the accuracy of selecting any candidate virtual prop and improving the accuracy of picking up the virtual prop.
  • the virtual interactive object can assist the user to quickly decide on the virtual props to be picked up, simplifying the virtual prop picking process, enabling quick picking up of virtual props, and improving the efficiency of picking up virtual props.
  • an embodiment of the present disclosure provides a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to perform the steps in any method for picking up virtual props provided by the embodiments of the present disclosure.
  • the computer program can perform the steps of the methods of each of the foregoing method embodiments:
  • At least one candidate virtual prop is determined from at least one virtual prop in the game scene, and the candidate virtual prop is configured with a virtual interactive object; in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, at least one candidate virtual prop is determined from the at least one candidate virtual prop A target virtual prop corresponding to the virtual interactive object; in response to a picking operation for the target virtual prop, the game character is controlled to pick up the target virtual prop.
  • the virtual interactive object may include at least one of a first object and a second object.
  • determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object or the second object, determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  • the first object may include a virtual identifier, and before determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: obtaining the sight height of the game field of view; determining the display height of the virtual logo of the candidate virtual prop based on the sight height; and displaying the virtual logo of the candidate virtual prop at the display height.
  • determining the display height of the virtual logo of the candidate virtual prop based on the sight height may include: obtaining a preset weight parameter; and performing a weight calculation on the sight height according to the preset weight parameter, to obtain the display height of the virtual logo of the candidate virtual prop.
  • the first object may include a first bounding box
  • the second object may include a second bounding box
  • the first bounding box is located within the second bounding box.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the first bounding box of the target virtual prop through a second display mode.
  • determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop; or, in response to the field of view direction not pointing to the first object and the field of view direction pointing to the second object, determining a target virtual prop corresponding to the second object from the at least one candidate virtual prop.
  • determining the target virtual prop corresponding to the first object from the at least one candidate virtual prop may include: in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the first object, determining the first first object to which the field of view direction points; and determining, from the at least one candidate virtual prop, a target virtual prop corresponding to the first first object.
  • the field of view direction may include a detection ray that is emitted from a preset position of the game field of view picture and points to the field of view direction.
  • determining the first first object to which the field of view direction points may include: in response to the detection ray pointing to at least one first object, determining a first intersection point, where the first intersection point is the intersection point of the detection ray and the first object; and determining, from the first objects according to the position information of the first intersection points, the first first object to which the field of view direction points.
  • determining the target virtual prop corresponding to the second object from the at least one candidate virtual prop may include: in response to the visual field direction not pointing to the first object and the visual field direction pointing to the second object, determining the first second object to which the visual field direction points; and determining the target virtual prop corresponding to the first second object from the at least one candidate virtual prop.
  • the field of view direction may include detection rays emitted from a preset position of the game field of view screen and pointing to the field of view direction.
  • determining the first second object to which the field of view direction points may include: in response to the detection ray not pointing to the first object and the detection ray pointing to the second object, determining a second intersection point, where the second intersection point is the intersection point of the detection ray and the second object; and determining, from the second objects according to the position information of the second intersection point, the first second object to which the field of view direction points.
  • the method may further include: adjusting the visual field direction in response to a first visual field adjustment operation; and, if the adjusted visual field direction does not point to any first object and the adjusted visual field direction points to the second object of the target virtual prop, maintaining the target virtual prop as the target virtual prop.
  • after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the field of view direction of the game field of view picture corresponding to the game character pointing to the virtual interactive object, the method may also include: displaying the target virtual prop through a first display mode.
  • determining at least one candidate virtual prop from at least one virtual prop in the game scene may include: obtaining the prop position information of the virtual props in the game scene; generating a pickup area for each virtual prop based on the prop position information and preset pickup area conditions; and, if the game character is located in any pickup area, determining the at least one virtual prop corresponding to that pickup area as a candidate virtual prop.
  • the preset pickup area conditions include a pickup radius. Generating a pickup area for virtual props based on the prop location information and the preset pickup area conditions may include: generating an initial pickup area based on the prop location information and the pickup radius. ;According to the number of game characters in the initial pickup area, adjust the initial pickup area to obtain the pickup area for virtual props.
  • controlling the game character to pick up the target virtual prop includes: in response to a touch operation acting on the picking control, controlling the game character to pick up the target virtual prop.
  • the virtual prop picking method run by the embodiment of the present disclosure can configure virtual interactive objects through the candidate virtual props, and can use the virtual interactive objects to determine the interactive areas of the candidate virtual props in the game scene to interact with the virtual interactive objects through the field of view direction.
  • the target virtual prop is determined from the candidate virtual props, thereby improving the accuracy of selecting any candidate virtual prop to improve the accuracy of picking up the virtual prop.
  • it can assist the user to quickly decide on the virtual props to be picked up, simplifying the virtual prop picking process, enabling quick picking up of virtual props, and improving the efficiency of picking up virtual props.
  • the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
  • the computer program stored in the storage medium can execute the steps in any method for picking up virtual props provided by the embodiments of the present disclosure, it is possible to realize the picking up of any kind of virtual props provided by the embodiments of the present disclosure.
  • the beneficial effects that the method can achieve are detailed in the previous embodiments and will not be described again here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a virtual prop pickup method: at least one candidate virtual prop is determined from at least one virtual prop in a game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the view direction of the game view picture corresponding to a game character pointing at a virtual interactive object, a target virtual prop corresponding to that virtual interactive object is determined from the at least one candidate virtual prop; and, in response to a pickup operation directed at the target virtual prop, the game character is controlled to pick up the target virtual prop. In the embodiments of the present application, the virtual interactive objects configured for the candidate virtual props improve the accuracy of selecting any candidate virtual prop, and thus the accuracy of picking up virtual props. In addition, the view direction helps the user quickly decide which virtual prop to pick up, simplifying the pickup flow, enabling fast pickup, and improving pickup efficiency.

Description

Virtual prop pickup method and apparatus, computer device, and storage medium
The present disclosure claims priority to Chinese patent application No. 202210693562.3, filed with the China National Intellectual Property Administration on June 17, 2022 and entitled "Virtual prop pickup method and apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of games, and in particular to a virtual prop pickup method and apparatus, a computer device, and a storage medium.
Background
In some games, the game scene may contain a rich variety of virtual props, and the user can play by controlling a game character to pick up virtual props in the game scene, for example by tapping a virtual prop in the scene to pick it up.
However, in a complex game scene, mis-touches easily occur when the user controls the game character to pick up a virtual prop.
Technical Problem
Embodiments of the present disclosure provide a virtual prop pickup method and apparatus, a computer device, and a storage medium, which can solve the prior-art problem that mis-touches easily occur when a user controls a game character to pick up a virtual prop, resulting in low pickup accuracy.
Technical Solution
In a first aspect, an embodiment of the present disclosure provides a virtual prop pickup method. A graphical user interface is provided through a terminal, and the content displayed on the graphical user interface at least partially contains a game scene, a game character located in the game scene, and at least one virtual prop. The method includes: determining at least one candidate virtual prop from the at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to a view direction of a game view picture corresponding to the game character pointing at the virtual interactive object, determining, from the at least one candidate virtual prop, a target virtual prop corresponding to the virtual interactive object; and, in response to a pickup operation directed at the target virtual prop, controlling the game character to pick up the target virtual prop.
In a second aspect, an embodiment of the present disclosure further provides a virtual prop pickup apparatus. A graphical user interface is provided through a terminal, the content displayed on the graphical user interface at least partially containing a game scene, a game character located in the game scene, and at least one virtual prop. The apparatus includes: a generating unit configured to determine at least one candidate virtual prop from the at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; a determining unit configured to determine, in response to a view direction of a game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and a pickup unit configured to control, in response to a pickup operation directed at the target virtual prop, the game character to pick up the target virtual prop.
In a third aspect, an embodiment of the present disclosure further provides a computer device including a processor and a memory, the memory storing a plurality of instructions, and the processor loading the instructions from the memory to perform the steps of any virtual prop pickup method provided by the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of any virtual prop pickup method provided by the embodiments of the present disclosure.
Beneficial Effects
Embodiments of the present disclosure can determine at least one candidate virtual prop from the at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, determine, from the at least one candidate virtual prop, a target virtual prop corresponding to the virtual interactive object; and, in response to a pickup operation directed at the target virtual prop, control the game character to pick up the target virtual prop. In the present disclosure, the virtual interactive objects configured for the candidate virtual props can be used to determine the interaction areas of the candidate virtual props in the game scene, so that the target virtual prop is determined from the candidates based on the result of the interaction between the view direction and the virtual interactive objects. This improves the accuracy of selecting any candidate virtual prop, and thus the accuracy of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive objects helps the user quickly decide which virtual prop to pick up, simplifies the pickup flow, enables fast pickup, and improves pickup efficiency.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic scene diagram of a virtual prop pickup system provided by an embodiment of the present disclosure;
Fig. 1b is a schematic flowchart of a virtual prop pickup method provided by an embodiment of the present disclosure;
Fig. 1c is a schematic diagram of a virtual marker and a first bounding box provided by an embodiment of the present disclosure;
Fig. 1d is a schematic diagram of a first bounding box provided by an embodiment of the present disclosure;
Fig. 2a is a schematic flowchart of a virtual prop pickup method provided by another embodiment of the present disclosure;
Fig. 2b is a schematic diagram of pickup areas of virtual props provided by an embodiment of the present disclosure;
Fig. 2c is a schematic diagram of a game character located within a pickup area provided by an embodiment of the present disclosure;
Fig. 2d is a schematic diagram of a detection ray pointing at the selection zones of virtual prop A and virtual prop B provided by an embodiment of the present disclosure;
Fig. 2e is a schematic diagram of a description interface for a target virtual prop provided by an embodiment of the present disclosure;
Fig. 2f is a schematic flowchart of determining a target virtual prop provided by an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of a virtual prop pickup apparatus provided by an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Before the embodiments of the present disclosure are explained in detail, some terms involved in them are explained first.
Game scene: the game scene displayed (or provided) when an application runs on a terminal. The game scene may be a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. It may be any of a two-dimensional, 2.5-dimensional, or three-dimensional game scene; the embodiments of the present disclosure do not limit its dimensionality. For example, the game scene may include sky, land, ocean, and so on, the land may include environmental elements such as deserts and cities, and the user can control the game character to move within the game scene.
Game character: a character used to simulate a person or animal in the game scene. The game character may be a virtual person, a virtual animal, an anime character, or the like, such as a person or animal displayed in the game scene, and may be a virtual avatar representing the user in the game scene. The game scene may contain multiple game characters, each with its own shape and volume, occupying part of the space in the scene. A game character's activities may include adjusting body posture, crawling, walking, running, riding, flying, jumping, aiming with a virtual sight, shooting, driving, picking up, attacking, throwing, and releasing skills.
In some embodiments, the content displayed on the graphical user interface at least partially contains a game scene, and the game scene contains at least one game character.
In some embodiments, the game characters in the game scene include player characters controlled by users and non-player characters (NPC) preset and controlled by the system rather than by users. In this embodiment, the game character is a virtual character controlled by the user.
In this embodiment, the game character may be a virtual character controlled by the user through operations on a client. Optionally, the game character may be a virtual person competing in the game scene. Optionally, the number of game characters participating in the interaction in the game scene may be preset, or dynamically determined according to the number of clients joining the interaction.
Virtual prop: a virtual prop that a game object can use in the game scene, including virtual weapons capable of inflicting damage on other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords, axes, and ropes; supply props such as bullets; defensive props such as shields, armor, and armored vehicles; virtual props displayed via the hands when a virtual object releases a skill, such as virtual beams and virtual shockwaves; and healing props such as medical kits and drinks.
Game interface: the interface corresponding to the application that is provided or displayed through the graphical user interface, including the graphical user interface for user interaction and the game picture, the game picture being a picture of the game scene.
In some embodiments, the game interface may include game controls (e.g., skill controls, movement controls, character control controls, and functional controls such as backpack, chat, and system-settings controls), indication markers (e.g., direction indicators, character indicators), and information display areas (e.g., kill count, match time).
Embodiments of the present disclosure provide a virtual prop pickup method and apparatus, a computer device, and a storage medium.
The virtual prop pickup apparatus may be integrated in an electronic device, which may be a terminal, a server, or the like. The terminal may be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, a personal computer (PC), or the like; the server may be a single server or a server cluster composed of multiple servers.
In some embodiments, the virtual prop pickup apparatus may also be integrated in multiple electronic devices; for example, it may be integrated in multiple servers, and the virtual prop pickup method of the present disclosure implemented by those servers.
In some embodiments, the virtual prop pickup method may run on a terminal device or on a server. The terminal device may be a local terminal device. When the method runs on a server, it may be implemented and executed based on a cloud interaction system, which includes a server and a client device.
In an optional embodiment, various cloud applications, such as cloud games, can run under the cloud interaction system. Taking cloud gaming as an example, cloud gaming refers to a gaming mode based on cloud computing. In the running mode of cloud gaming, the running body of the game program and the presentation body of the game picture are separated: the storage and running of the virtual prop pickup method are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with data transmission functions close to the user side, such as a terminal, a television, a computer, or a handheld computer, while the terminal device performing character control is the cloud game server in the cloud. When playing, the user operates the client device to send operation instructions, such as touch-operation instructions, to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as the game picture, and returns it to the client device over the network; finally, the client device decodes and outputs the game picture.
In some embodiments, the server may also be implemented in the form of a terminal.
In an optional embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game picture. The local terminal device interacts with the user through a graphical user interface, i.e., the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the user in various ways; for example, it may be rendered on the display screen of the terminal, or provided to the user through holographic projection. For example, the local terminal device may include a display screen and a processor, the display screen presenting the graphical user interface, which includes the game picture, and the processor running the game, generating the graphical user interface, and controlling its display on the screen. The user can operate on the interface through input devices such as a touchscreen, mouse, keyboard, or gamepad.
For example, referring to Fig. 1a, some embodiments provide a schematic scene diagram of a virtual prop pickup system that can implement the virtual prop pickup method. The system may include a terminal 1000, a server 2000, and a network 3000.
The terminal is used to determine at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object; in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, determine, from the at least one candidate virtual prop, a target virtual prop corresponding to the virtual interactive object; and, in response to a pickup operation directed at the target virtual prop, control the game character to pick up the target virtual prop. The server is used to obtain data of the user playing the game on the terminal. The network is used for data transmission between the server and the terminal and may be a wireless or wired network, e.g., a wireless local area network (WLAN), a local area network (LAN), a cellular network, or a 2G, 3G, 4G, or 5G network.
Detailed descriptions are given below. It should be understood that, in specific implementations of the present disclosure, where data related to user operations, game data, and the like is involved, user permission or consent needs to be obtained when the embodiments of the present disclosure are applied to specific products or technologies, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In this embodiment, a virtual prop pickup method is provided. A graphical user interface is provided through a terminal, and the content displayed on the graphical user interface at least partially contains a game scene, a game character located in the game scene, and at least one virtual prop. As shown in Fig. 1b, the specific flow of the method may be as follows:
110. Determine at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object.
The at least one virtual prop may refer to the virtual props that can be picked up after the user controls the game character to perform a pickup operation in the game scene; before performing the pickup operation, a movement operation may also move the game object into the pickable range of a virtual prop.
A candidate virtual prop may refer to one or more virtual props determined from the at least one virtual prop according to a user operation or a preset selection rule. For example, the user operation may be a selection or marking operation on at least one virtual prop in the game scene; the preset selection rule may be a rule preset according to information such as the candidates' positions or types. For instance, the rule may determine as candidates the virtual props within the pickup range of the game object, or those whose type information matches the game object, and so on. As a concrete example, if the virtual props equipped by the game object lack head protection and healing kits, the head-protection and healing-kit props in the game scene may be determined as candidate virtual props. It should be noted that a candidate virtual prop may be regarded as a virtual prop in a pickable state; a prop in the pickable state can perform pickup interaction with the view direction of the game view picture, so that, in response to the view direction pointing at its virtual interactive object, the pickable state is changed to a to-be-picked state.
The virtual interactive object may refer to a virtual object through which the candidate virtual prop performs pickup interaction with the game character controlled by the user. For example, virtual interactive objects may include, but are not limited to, virtual markers, bounding boxes, and so on. Configuring a virtual interactive object for a candidate virtual prop may mean generating and displaying, in the game scene, the virtual interactive object corresponding to each candidate virtual prop.
When the game scene contains a game character and candidate virtual props, the virtual interactive objects of the candidate virtual props can be generated, so that the target virtual prop is determined from the candidates through the result of the interaction between the character's view direction and the virtual interactive objects. In practice, the display effect, shape, or size of the virtual interactive objects can be set, e.g., highlighting them or displaying them in special shapes, to improve the accuracy of selecting any candidate virtual prop and thus the accuracy of picking up virtual props.
In some embodiments, determining at least one candidate virtual prop from the at least one virtual prop in the game scene may include:
obtaining prop position information of the virtual props in the game scene;
generating pickup areas for the virtual props according to the prop position information and a preset pickup area condition;
if the game character is located within any pickup area, determining the at least one virtual prop corresponding to that pickup area as a candidate virtual prop.
The preset pickup area condition may refer to a preset condition for setting the pickup area of a virtual prop, and the pickup area may refer to the area or range in the game scene within which the virtual prop can be picked up. For example, the preset pickup area condition may include a pickup radius, and a spherical or circular pickup area with that radius may be set in the game scene, centered on the prop's position.
A game scene usually contains multiple virtual props. To filter out the pickable candidate virtual props from them, improve the effectiveness and efficiency of prop pickup, and reduce idle server load, pickup areas can be generated in the game scene according to the preset pickup area condition and the props' position information. When the user controls the game character to enter the pickup area of any virtual prop, that prop's state can be set to pickable; the character can then interact with the prop, e.g., view it or pick it up.
In some embodiments, the pickup area may take the form of a bounding box, e.g., a spherical bounding box with the pickup radius. Using structurally simple bounding boxes to set pickup areas makes their generation faster, improves the efficiency of determining pickup interaction areas, and simplifies the pickup interaction process, thereby improving pickup efficiency.
In some embodiments, different pickup radii can be set for different virtual props according to prop type or game scene. The smaller the pickup radius, the closer the game character must get to the prop to perform the pickup operation, costing more time and raising the difficulty; this provides diversified pickup modes, improves user retention, and reduces idle server load. For example, pickup radii from small to large can be set, in order, for virtual weapons, defensive props, supply props, and healing props. As another example, different radii can be set for different terrains: smaller radii for complex terrains such as mountains and rainforests, and larger radii for simple terrains such as deserts and plains, increasing the difficulty of picking up props in complex terrain and improving the realism of the game.
In some embodiments, multiple game characters may be inside the pickup area of the same virtual prop. To increase the dispersion of multiple characters within the same area, the size of the pickup area can be adjusted according to the number of game characters inside it, making it easier for them to spread out, avoiding occlusion, and improving the visual effect. Specifically, the preset pickup area condition may include a pickup radius, and generating the pickup area according to the position information and the preset condition may include: generating an initial pickup area according to the position information and the pickup radius; and adjusting the initial pickup area according to the number of game characters inside it to obtain the pickup area of the virtual prop. For example, with pickup radius r, initial pickup area size = 2πr; with a game characters inside the initial area, pickup area size = 2πr(1 + a/b), where b may be a preset parameter, e.g., b = 10, and both the initial pickup area and the pickup area are centered at the prop's position in the game scene.
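The sizing rule above can be sketched as follows. This is a minimal illustration; the function name `pickup_area_size` and the default `b = 10` are taken from the example as assumptions, not from any engine API:

```python
import math

def pickup_area_size(pickup_radius: float, characters_inside: int, b: float = 10.0) -> float:
    # Initial pickup area size from the pickup radius: 2 * pi * r.
    initial = 2 * math.pi * pickup_radius
    # Enlarge the area with the number a of game characters already inside,
    # so multiple characters can spread out: 2 * pi * r * (1 + a / b).
    return initial * (1 + characters_inside / b)
```

The resulting area stays centered at the prop's position; only its size grows as more characters enter.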
In some embodiments, before determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method may further include:
adjusting the view direction in response to a second view adjustment operation.
A view adjustment operation may refer to an operation for adjusting the view direction of the game view picture and may include, but is not limited to, touching, dragging, swiping, long-pressing, short-pressing, double-tapping, tapping, and ending a drag. The user can perform the view adjustment operation through input devices such as a touchscreen, mouse, keyboard, or gamepad, the specific mode depending on the game's operation method or settings. For example, in a third-person game, a view adjustment control may be provided in the game interface of the terminal's graphical user interface, and in response to a touch operation on it, the game character is controlled to move its head, e.g., raising or lowering it, to adjust the view direction of the game view picture. Adjusting the view direction helps the user quickly select a candidate virtual prop, improving pickup efficiency.
It should be noted that the first and second view adjustment operations in the embodiments of the present disclosure correspond to different view-direction adjustment processes; their specific operation modes may be the same or different.
120. In response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, determine, from the at least one candidate virtual prop, the target virtual prop corresponding to the virtual interactive object.
The view direction may refer to where the game character's field of view points in the game scene. The view direction relates to the viewing angle in the game scene, which may differ according to the game's settings; for example, it may be a first-person or a third-person perspective. The view direction pointing at the virtual interactive object may mean that the view direction wholly or partially covers the area where the virtual interactive object is located.
When the game character is in a first-person perspective, the view direction of the game view picture may be the character's line-of-sight range in the game scene. For example, the character's eyes may serve as the origin of the view direction, e.g., a cone-shaped region with the midpoint of the character's eyes as its apex; it may also be a detection ray emitted by the character along its sight direction, e.g., emitted from the midpoint of the character's eyes. Alternatively, a camera model may serve as the sight origin: the view direction may be a cone emitted from the camera model, and the detection ray a ray emitted from the camera model along the view direction of the game view picture. When the character is in a first-person perspective, the camera model is located at the character's head or neck; in a third-person perspective, it is located behind the character. Different perspectives thus display different game picture content, and when the character's position and/or sight direction in the three-dimensional scene changes, the picture content changes accordingly.
The target virtual prop may refer to a virtual prop in a to-be-picked state, i.e., a state in which the prop can be picked up by the game character in response to the character's pickup operation. For example, a prop in this state may be associated with the pickup control in the game interface, so that a touch operation on the control makes the character automatically pick up the prop. The candidate virtual prop may also be highlighted so the user can quickly find the target among the at least one virtual prop.
After the virtual interactive objects of the candidate virtual props are generated, if the view direction is adjusted so that it points at the virtual interactive object of any candidate, it is considered that at least one candidate has been selected by adjusting the view direction, and the selected candidate can be determined as the target virtual prop. For example, if the view direction fully covers the area where at least one candidate is located, or the detection ray points at the center point of at least one candidate, that candidate can be determined as the target. Determining the target through the view direction and the virtual interactive objects improves pickup accuracy, helps the user quickly decide which prop to pick up, simplifies the pickup flow, enables fast pickup, and improves efficiency.
In some embodiments, the virtual interactive object may include at least one of a first object and a second object, and determining the target virtual prop in response to the view direction pointing at the virtual interactive object may include: in response to the view direction of the game view picture corresponding to the game character pointing at the first object or the second object, determining, from the at least one candidate virtual prop, the target virtual prop corresponding to the virtual interactive object.
The first object and the second object are different virtual interactive objects, i.e., each candidate virtual prop may be configured with one or more virtual interactive objects. Determining the candidate's interaction area from the first and second objects provides diversified interaction modes, improves user retention, and reduces idle server load.
In some embodiments, the first object may include a virtual marker, and before determining, in response to the view direction pointing at the virtual interactive object, the target virtual prop from the at least one candidate, the method may further include:
obtaining the sight height of the game view picture;
determining, according to the sight height, the display height of the candidate virtual prop's virtual marker;
displaying the candidate virtual prop's virtual marker at the display height.
The virtual marker may refer to a marker in the game scene used to represent the target virtual object; it can take many forms, depending on the game's settings. For example, it may be a figure with the same outline as the candidate virtual prop, highlighted in the game scene.
The sight height may be the sight height of the game view picture in the game scene. For example, a camera model may serve as the sight origin, and the sight height may be the height of the camera model above the ground or floor in the scene; when the character is in a first-person perspective, the camera model is at the character's head or neck, and in a third-person perspective it is behind the character.
The display height may refer to the height of the virtual marker in the game scene. For example, the sight height may be used directly as the display height, or the display height may be computed from it, e.g., display height = c × (1/2), where c is the sight height, and so on.
Since virtual props in a game scene are usually scattered on the ground or other surfaces and are hard to observe, displaying virtual markers in the scene, on the one hand, lets the user quickly find candidate virtual props and their markers, improving pickup efficiency; on the other hand, determining the marker's display height from the character's sight height places the marker at an easily observed height, also improving pickup efficiency.
In some embodiments, determining the display height of the candidate virtual prop's virtual marker according to the sight height may include:
obtaining a preset weight parameter;
performing a weighted calculation on the sight height with the preset weight parameter to obtain the display height of the candidate virtual prop's virtual marker.
The preset weight parameter may depend on the game's settings; for example, it may be a preset decimal smaller than 1. Obtaining the display height through a weighted calculation on the sight height is fast, and the display height can be quantitatively adjusted by tuning the preset weight parameter, yielding high processing efficiency and saving computing power.
In some embodiments, the display height may be a preset height. For example, a display height may be preset for each kind of game character according to sight height, so that the preset display height is obtained according to the character's identifier and the candidate's virtual marker displayed at that height.
In some embodiments, to help the user find the virtual marker and improve pickup efficiency, the virtual marker may be a virtual light pillar located above the candidate virtual prop. For example, as shown in Fig. 1c, the virtual marker may be a virtual light pillar extending upward from the midpoint of the candidate virtual prop, with display height a = c × (1/2), where c is the sight height, i.e., the pillar's height is c × (1/2).
Thus, when the game character looks straight ahead, its view direction does not point at the virtual marker; when the character lowers its head until the angle to the ground is less than 45°, its view direction can point at all or part of the marker. By setting the pillar height below the sight height, the character interacts with the marker only at specific viewing angles, avoiding mis-touches, increasing fault tolerance, and improving the accuracy of selecting, and thus of picking up, virtual props.
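The weighted display-height calculation above can be sketched as follows (the function name and signature are illustrative assumptions; the default weight 0.5 corresponds to the light-pillar example a = c × (1/2)):

```python
def marker_display_height(sight_height: float, weight: float = 0.5) -> float:
    # Display height = sight height c * preset weight parameter (< 1),
    # so the marker sits below eye level and is hit only when looking down.
    return sight_height * weight
```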
In some embodiments, the first object may include a first bounding box and the second object a second bounding box, the first bounding box being located inside the second bounding box.
A bounding box may refer to a closed space that completely contains a combination of objects; encapsulating complex objects in simple bounding boxes improves the efficiency of geometric operations. Bounding boxes can take many shapes, depending on the game's settings. For example, bounding boxes can be generated around the candidate virtual prop using methods such as axis-aligned bounding boxes (AABB), bounding spheres, oriented bounding boxes (OBB), and fixed-direction hulls (FDH, or k-DOP). Both the first and the second bounding box are generated around the candidate virtual prop. For example, as shown in Figs. 1c and 1d, the first bounding box may be a box around the candidate virtual prop with the same outline as the prop, and, as shown in Fig. 1d, the second bounding box may be a cylindrical box around the first bounding box.
Generating two bounding boxes for a candidate virtual prop, on the one hand, uses the boxes to determine the candidate's interaction area: approximating the structurally complex candidate with structurally simple boxes improves the efficiency of determining the interaction area and simplifies the interaction process, improving pickup efficiency. On the other hand, determining the candidate's interaction area from two boxes provides diversified interaction modes, improves user retention, and reduces idle server load.
In some embodiments, after determining the at least one candidate virtual prop from the at least one virtual prop in the game scene, the method may further include: determining the candidate's interaction area according to the virtual interactive object. The virtual interactive object may be located within the interaction area, and the view direction of the game view picture pointing at the virtual interactive object may mean the view direction pointing at the candidate's interaction area.
The interaction area may refer to the area within which the candidate virtual prop can perform pickup interaction with the game character. For example, the area occupied by the virtual interactive object in the game scene may be determined as the interaction area, or the interaction area may be determined according to the position information of the virtual interactive object and the candidate virtual prop in the scene, and so on. As a concrete example, the two points of the virtual interactive object and the candidate virtual prop that are farthest apart in the scene may be determined, the line between them taken as a diameter, and a spherical region determined in the scene with that diameter as the interaction area. It should be noted that, if the area occupied by the virtual interactive object in the scene is determined as the interaction area, the virtual interactive object can be regarded as the object through which the candidate performs pickup interaction with the character; and if the virtual interactive object is a three-dimensional model in the scene, the interaction area may be the three-dimensional space it occupies in the scene.
In some embodiments, determining, in response to the view direction of the game view picture corresponding to the game character pointing at the first or the second object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate may include:
in response to the view direction of the game view picture corresponding to the game character pointing at the first object, determining, from the at least one candidate virtual prop, the target virtual prop corresponding to the first object;
or, in response to the view direction not pointing at the first object and pointing at the second object, determining, from the at least one candidate virtual prop, the target virtual prop corresponding to the second object.
The target virtual prop may be some or all of the candidate virtual props the view direction points at, e.g., all the candidates it points at, or the first candidate it points at.
For example, whether the character's view direction points at a virtual interactive object can be judged via the sight landing point, i.e., the gaze point of the view direction of the game view picture in the game scene. The intersection of the view direction with any object in the scene, such as a game character, a virtual prop, or another object, can be determined as the sight landing point of that view direction in the scene; e.g., the landing point of the view direction on a virtual interactive object may be the intersection of the view direction with the virtual interactive object, or with its interaction area. When the character is in a first-person perspective, the sight landing point may refer to the character's gaze point in the scene.
For example, it can be judged whether the view direction of the game view picture intersects the virtual interactive object (whether an intersection point exists). If it does, the view direction can be considered to point at the virtual interactive object, i.e., the candidate corresponding to any such object has been selected by adjusting the view direction. The intersection of the view direction with the virtual interactive object is then obtained, and the target virtual prop determined from the candidates according to the intersection; thus, when the view direction is adjusted, the intersection can be computed and the target determined in real time, improving processing efficiency.
As another example, whether the view direction points at the virtual interactive object can be pre-judged from the view direction itself. The position information of the sight origin and of the candidate virtual prop in the scene can be obtained; the connecting line between them determined from that information; the angle A between that line and the scene's horizontal direction determined; and the angle B between the view direction of the game view picture and the horizontal direction obtained. If the difference between angle A and angle B is smaller than a preset value, the view direction can be considered to point at the virtual interactive object. It can also be further judged whether the view direction intersects the virtual interactive object (whether an intersection point exists); if so, the view direction is considered to point at that object. Since both the candidate and the character are within the candidate's pickup range, when angle A and angle B are close or even equal, the character's view direction is very likely pointing at the candidate; by comparing angles A and B, intersections need to be computed for only part of the character's view directions, improving processing efficiency and saving computing power.
Obtaining the position information of the sight origin and of the candidate virtual prop in the scene may include: obtaining the character's facing direction and the position information of the character and of the candidate in the scene; determining a target direction from the two positions, the target direction being the horizontal direction from the character's position toward the candidate's position; and obtaining the sight origin's position information if the character's facing direction matches the target direction. By judging the character's facing direction, intersections need to be computed only when the target direction is satisfied, further improving processing efficiency and saving computing power. Specifically, angle B may be the angle between the centerline of the view direction, or the detection ray, and the scene's horizontal direction.
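The angle pre-filter described above, which compares the pitch A of the origin-to-prop line against the view pitch B before running any intersection test, can be sketched as follows. The coordinate layout (y-up), the tuple-based positions, and the 10° tolerance are assumptions for illustration:

```python
import math

def may_point_at(origin, prop, view_pitch_deg: float, tolerance_deg: float = 10.0) -> bool:
    """Cheap pre-filter: compute the pitch (angle A) of the line from the
    sight origin to the prop and compare it with the pitch of the view
    direction (angle B); only when |A - B| is below the preset tolerance
    is the exact ray-intersection test worth running."""
    dx, dy, dz = (prop[i] - origin[i] for i in range(3))
    horizontal = math.hypot(dx, dz)                      # distance in the ground plane
    angle_a = math.degrees(math.atan2(dy, horizontal))   # pitch of the connecting line
    return abs(angle_a - view_pitch_deg) < tolerance_deg
```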
In some embodiments, determining, in response to the view direction of the game view picture corresponding to the game character pointing at the first object, the target virtual prop corresponding to the first object from the at least one candidate may include:
in response to the view direction of the game view picture corresponding to the game character pointing at the first object, determining the first first-object the view direction points at;
determining, from the at least one candidate virtual prop, the target virtual prop corresponding to that first first-object.
The first first-object may refer to the first of the first objects along, and pointed at by, the view direction, e.g., the first object closest to the sight origin.
When the game scene contains multiple candidate virtual props, the view direction of the game view picture may point at the virtual interactive objects of one or more of them. If it points at the virtual interactive objects of multiple candidates, there are multiple first objects it points at, so the first of them can be determined as corresponding to the target virtual prop; this helps the user quickly select that candidate and improves pickup efficiency.
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first first-object in response to the view direction pointing at the first object may include:
in response to the detection ray pointing at at least one first object, determining first intersection points, a first intersection point being an intersection of the detection ray with a first object;
determining, from the first objects and according to the position information of the first intersection points, the first first-object the view direction points at.
The preset position may be the sight origin set according to the game's settings, e.g., the position of the character's eyes, or the position of the camera model used to determine the view direction of the game view picture, and so on.
For example, it can be judged whether the detection ray has intersection points with first objects; if so, the first first-object is determined from those intersections, e.g., the first intersection point closest to the sight origin or to the character is determined as the target intersection, and the first object corresponding to it as the first first-object. By setting a detection ray within the view direction of the game view picture, the ray can point at the target interactive object more precisely, avoiding mis-touches, increasing fault tolerance, and improving the accuracy of selecting, and thus of picking up, virtual props.
For example, if the detection ray has an intersection with a virtual marker, it can be considered to point at the virtual interactive object, and the target virtual prop can then be determined from the intersection of the ray and the marker. On the one hand, the user can quickly find the candidate and its virtual interactive object through the marker; on the other hand, the user can adjust the ray to point precisely at the marker and select the virtual interactive object more accurately, improving both pickup efficiency and accuracy.
As another example, if the detection ray simultaneously intersects the first bounding box A of candidate prop A and the first bounding box B of candidate prop B, the position information of the ray's intersections with both first bounding boxes can be obtained, and the first bounding box corresponding to the intersection closest to the sight origin determined as the first first-object. It should be noted that if the detection ray points at multiple first bounding boxes and, at the same time, at least one second bounding box, only the position information of the first-box intersections is obtained.
It should also be noted that if the detection ray simultaneously intersects the first bounding box A of candidate prop A and the second bounding box B of candidate prop B, only the position information of the ray's intersections with first bounding box A is obtained, and the target intersection is determined from the intersections of the ray with first bounding box A.
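A minimal sketch of the nearest-intersection rule above, using the standard slab method for a detection ray against axis-aligned first bounding boxes (the function names, the AABB shape, and the tuple data layout are illustrative assumptions; the disclosure also allows other box types such as OBB):

```python
def ray_aabb_entry(origin, direction, box_min, box_max):
    """Slab-method ray vs axis-aligned bounding box test; returns the
    distance t >= 0 to the entry point, or None when the ray misses."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:        # ray parallel to this slab and outside it
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return None
    return t_near

def first_hit(origin, direction, boxes):
    """Return the id of the first box the detection ray enters, i.e. the
    intersection closest to the sight origin; boxes = [(id, min, max), ...]."""
    hits = [(t, box_id)
            for box_id, lo, hi in boxes
            if (t := ray_aabb_entry(origin, direction, lo, hi)) is not None]
    return min(hits)[1] if hits else None
```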
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first first-object in response to the view direction pointing at the first object may include:
determining the first first-object to respond to the detection ray as the first first-object.
For example, when the detection ray is emitted from the sight origin, any first object it intersects can be regarded as responding to the ray; once one first object has responded, extension of the ray can stop, and that first responding first object is determined as the first first-object, which improves processing efficiency and saves computing power.
In some embodiments, determining, in response to the view direction not pointing at the first object and pointing at the second object, the target virtual prop corresponding to the second object from the at least one candidate may include:
in response to the view direction not pointing at the first object and pointing at the second object, determining the first second-object the view direction points at;
determining, from the at least one candidate virtual prop, the target virtual prop corresponding to that first second-object.
The first second-object may refer to the first of the second objects along, and pointed at by, the view direction, e.g., the second object closest to the sight origin.
For example, the virtual interactive object may include a first bounding box and a second bounding box. Since the first bounding box occupies less space in the scene than the second, when the user wants to select a candidate by adjusting the detection ray, making the ray intersect the first box is harder than making it intersect the second. Accordingly, the intersection-point determination can follow this difference in difficulty. Specifically, if the ray is adjusted so that it intersects a first bounding box, the candidate corresponding to that box is considered selected; if the ray intersects a second bounding box, the second box may have been selected by mistake, so it must be further determined whether the ray intersects any first bounding box — only otherwise is the candidate corresponding to the second box considered selected. This improves the accuracy of selecting, and thus of picking up, virtual props, especially when the scene contains multiple candidates. For example, if the detection ray intersects the first and second bounding boxes of candidate prop A as well as the second bounding box of candidate prop B, the intersection of the ray with candidate A's first bounding box can be determined as the target intersection; and if the ray intersects candidate A's second bounding box but no candidate's first bounding box, the intersection of the ray with candidate A's second bounding box can be determined as the target intersection.
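The priority rule above — a first-bounding-box intersection always beats a second-bounding-box intersection, with ties broken by distance to the sight origin — can be sketched as follows (the `(distance, prop_id)` data layout and function name are assumptions for illustration):

```python
def pick_target(first_hits, second_hits):
    """first_hits / second_hits: lists of (distance, prop_id) intersections
    of the detection ray with first (inner) and second (outer) bounding boxes.
    Any first-box hit wins, nearest first; second boxes decide only when no
    first box is hit at all."""
    if first_hits:
        return min(first_hits)[1]
    if second_hits:
        return min(second_hits)[1]
    return None
```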
In some embodiments, if there is only one candidate virtual prop and the detection ray points at that candidate's second object, that candidate is determined as the target virtual prop. Thus, when the scene contains only one candidate, the second object helps the user quickly select it, improving pickup efficiency.
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first second-object in response to the view direction not pointing at the first object and pointing at the second object may include:
in response to the detection ray not pointing at the first object and pointing at the second object, determining second intersection points, a second intersection point being an intersection of the detection ray with a second object;
determining, from the second objects and according to the position information of the second intersection points, the first second-object the view direction points at.
For example, if the detection ray simultaneously intersects the second bounding box A of candidate prop A and the second bounding box B of candidate prop B, the position information of the ray's intersections with both second bounding boxes can be obtained, and the second bounding box corresponding to the intersection closest to the sight origin determined as the first second-object.
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first second-object in response to the view direction not pointing at the first object and pointing at the second object may include:
in response to the first objects not responding to the detection ray and a second object responding to it, determining the first second-object to respond to the detection ray as the first second-object.
For example, when the detection ray emitted from the sight origin does not intersect any first object, the first second-object to respond to the ray, in order of response time, can be determined as the first second-object, which improves processing efficiency and saves computing power.
In some embodiments, after determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate, the method may further include: displaying the target virtual prop in a first display mode.
The first display mode may be a preset display mode depending on the game's settings and may include, for example, any of the following: reduced transparency, increased color saturation, color switching, highlighting, outlining, or adding a prominent marker, where adding a prominent marker may mean adding a marker such as an asterisk, dot, or triangle on or near the surface of the target virtual prop.
In some embodiments, the first display mode may be highlighting.
In some embodiments, after determining the target virtual prop, the method may further include: displaying the target virtual prop's first bounding box in a second display mode.
The second display mode may likewise be a preset display mode depending on the game's settings and may include, for example, any of: reduced transparency, increased color saturation, color switching, highlighting, outlining, or adding a prominent marker. It may be the same as or different from the first display mode. Displaying the target virtual prop, or its first bounding box, in the first or second display mode helps the user quickly find the selected target, improving pickup efficiency.
In some embodiments, after determining the target virtual prop, the method further includes:
adjusting the view direction in response to a first view adjustment operation;
if the adjusted view direction does not point at any first object but points at the target virtual prop's second object, keeping the target virtual prop as the target virtual prop.
After a candidate has been selected by adjusting the view direction of the game view picture, a further adjustment of the view direction might be a mis-operation by the user. To guard against this, a new target is re-determined from the candidates only when the detection ray moves from intersecting the target's first or second bounding box to outside the target's second bounding box, or when the adjusted ray intersects another candidate's first bounding box. When, during the adjustment, the ray moves only within the range of the target's second bounding box, the adjustment is regarded as invalid and no new target is determined, improving pickup accuracy.
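The maintain-or-switch rule above can be sketched as a small state update. The names and the alphabetical fallback when only tolerance zones are hit are illustrative assumptions (a distance-based fallback would also fit the disclosure):

```python
def update_selection(current, selection_hit, tolerance_hits):
    """current: prop id currently selected, or None.
    selection_hit: prop id whose first bounding box (selection zone) the
    adjusted detection ray enters first, or None.
    tolerance_hits: set of prop ids whose second bounding box (tolerance
    zone) the ray intersects."""
    if selection_hit is not None:
        return selection_hit          # a selection-zone hit always switches
    if current is not None and current in tolerance_hits:
        return current                # treated as an accidental move: keep target
    return min(tolerance_hits) if tolerance_hits else None
```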
130. In response to a pickup operation directed at the target virtual prop, control the game character to pick up the target virtual prop.
The pickup operation may refer to an operation for controlling the game character to perform a pickup action on a virtual prop and may include, but is not limited to, touching, dragging, swiping, long-pressing, short-pressing, double-tapping, tapping, and ending a drag. The user can perform the pickup operation through input devices such as a touchscreen, mouse, keyboard, or gamepad, the specific mode depending on the game's operation method or settings. For example, the game character can be controlled to pick up the target virtual prop through the user's touch operation on the target or on its virtual interactive object. Through the user's pickup operation on the target, the candidate selected by the view direction of the game view picture can be picked up accurately, improving pickup accuracy.
In some embodiments, before controlling the game character to pick up the target virtual prop in response to the pickup operation, the method may further include:
providing a pickup control on the graphical user interface;
and controlling the game character to pick up the target virtual prop in response to the pickup operation directed at it includes:
controlling the game character to pick up the target virtual prop in response to a touch operation acting on the pickup control.
A pickup control associated with the target virtual prop can be displayed in the game interface of the graphical user interface; if the user performs a touch operation on it, the character can be controlled to automatically pick up the target, simplifying the pickup flow, enabling fast pickup, and improving pickup efficiency.
The virtual prop pickup solution provided by the embodiments of the present disclosure can be applied in various game scenarios. Taking a multiplayer competitive game as an example: at least one candidate virtual prop is determined from at least one virtual prop in the game scene, the candidate being configured with a virtual interactive object; in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object is determined from the at least one candidate; and, in response to a pickup operation directed at the target, the character is controlled to pick it up.
With the solution provided by the embodiments of the present disclosure, the virtual interactive objects configured for the candidate virtual props can be used to determine the candidates' interaction areas in the game scene, so that the target is determined from the candidates through the result of the interaction between the view direction and the virtual interactive objects, improving the accuracy of selecting any candidate and thus of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive objects helps the user quickly decide which prop to pick up, simplifies the pickup flow, enables fast pickup, and improves pickup efficiency.
For example, by configuring the candidate's first and second objects, where the first object may include a virtual marker displayed in the game scene: on the one hand, the user can quickly find the candidate and its virtual interactive object, improving pickup efficiency; on the other hand, the marker can be displayed at an easily observed height according to the sight height, also improving pickup efficiency.
As another example, the first object may include a first bounding box and the second object a second bounding box. On the one hand, the bounding boxes determine the candidate's interaction area, approximating the structurally complex candidate with structurally simple boxes, which improves the efficiency of determining the interaction area and simplifies the interaction process to improve pickup efficiency; on the other hand, determining the interaction area from the first and second objects provides diversified interaction modes, improves user retention, and reduces idle server load.
The method described in the above embodiment is further detailed below.
In this embodiment, the method of the embodiments of the present disclosure is described in detail taking a game character in a first-person perspective as an example.
As shown in Fig. 2a, a specific flow of a virtual prop pickup method is as follows:
210. Obtain the position information of the virtual props in the game scene.
For example, the game scene contains three virtual props, virtual prop A, virtual prop B, and virtual prop C, and the coordinate positions of these three props in the scene can be obtained.
220. Generate the props' pickup areas according to the position information and a preset pickup area condition.
For example, set the props' pickup radius to n meters, n = 2. As shown in Fig. 2b, spherical pickup areas with radius n meters can be determined in the scene at the coordinate positions of props A, B, and C; each area may be a spherical bounding box of radius n. When the game character touches the spherical pickup area, the character is regarded as being inside that pickup area and the prop is in a pickable state.
As another example, as shown in Fig. 2c, the distance f between the game character and virtual prop A can be obtained, and the prop is pickable only within the effective pickup distance 0 ≤ f ≤ n. Thus, only within the effective pickup distance is the prop in a pickable state and able to perform pickup interaction with the view direction of the game view picture; beyond the effective pickup distance, the prop cannot interact with the view direction.
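The effective-pickup-distance check 0 ≤ f ≤ n can be sketched as follows (the function name and tuple-based positions are illustrative assumptions):

```python
import math

def is_pickable(character_pos, prop_pos, pickup_radius):
    # The prop stays in the pickable state only while 0 <= f <= n, where f is
    # the character-to-prop distance and n the prop's pickup radius.
    return math.dist(character_pos, prop_pos) <= pickup_radius
```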
230. If the game character is located inside any pickup area, determine the virtual prop corresponding to that pickup area as a candidate virtual prop.
For example, the user can control the character to move so that it enters the pickup area of at least one virtual prop; as shown in Fig. 2c, the character enters virtual prop A's pickup area, so virtual prop A is a candidate virtual prop.
240. Generate the candidate virtual prop's virtual interactive objects.
For example, as shown in Figs. 1c and 1d, virtual prop A's virtual marker, first bounding box, and second bounding box can be generated. As shown in Figs. 1c and 1d, the virtual marker is a virtual light pillar extending upward from the candidate's midpoint, its height being half the distance from the ground to the sight height; the first bounding box is a box around the candidate with the same outline as the candidate; and, as shown in Fig. 1d, the second bounding box is a cylindrical box around the first, with height d and diameter e, where d and e can be set according to the application scenario. The virtual marker is visible in the game scene, so the user can quickly find virtual prop A through it; the first and second bounding boxes are invisible in the scene.
As another example, the light pillar's height may be half the sight height, so that no pickup interaction occurs when the character looks straight ahead, and slightly adjusting the character's detection ray downward makes it intersect the pillar, enabling pickup interaction.
250. Adjust the game character's detection ray in response to a sight adjustment operation.
For example, the detection ray can simulate the human eye and act as the trigger of pickup interaction. Before the ray is adjusted, the view direction may be level; at that point, although the user can quickly find virtual prop A through the marker in the scene, the view direction of the game view picture does not point at prop A's light pillar, first bounding box, or second bounding box, so the character cannot interact with prop A through the view direction. The user can therefore lower the character's head through touch operations on the sight adjustment control in the game interface until the angle between the detection ray and the scene's horizontal direction (the ground) is greater than 0° and less than 45°, i.e., as shown in Fig. 1c, angle b is less than 45°; in this view direction, the character's detection ray can intersect the light pillar.
260. Judge whether the detection ray points at a virtual interactive object.
For example, judge whether the detection ray intersects the light pillar, first bounding box, or second bounding box of any candidate virtual prop in the game scene. If it does, the detection ray is considered to point at a virtual interactive object.
270. If the detection ray points at a virtual interactive object, determine the sight landing point of the view direction on the virtual interactive object.
For example, since the light pillar does not fully cover virtual prop A — its main roles are to indicate prop A's position and to interact with the detection ray above prop A — and since there is usually some distance between the character and prop A, making it hard for the ray to select the pillar precisely, the ray's sight landing point can be judged by combining the light pillar, the first bounding box, and the second bounding box.
For example, if the pickup areas of virtual props A and B overlap in the scene and the character enters the overlap, both A and B are candidates. The areas occupied in the scene by the different virtual interactive objects (the interaction areas) can be divided into selection zones and tolerance zones, and the pickup interaction method determined based on them.
For example, if the detection ray intersects prop A's light pillar or first bounding box, prop A is selected as the target. Specifically, the interaction area occupied by the second bounding box can serve as the tolerance zone and that occupied by the first bounding box as the selection zone. When the scene contains multiple candidates, several light pillars and second bounding boxes may all intersect the detection ray, so mis-touches easily occur; in that case, the interaction areas occupied by the light pillars and the second bounding boxes can serve as tolerance zones and those occupied by the first bounding boxes as selection zones. As shown in Fig. 2d, with multiple candidates such as props A and B, if the detection ray points at both props' selection zones, intersection point A of prop A's selection zone with the ray and intersection point B of prop B's selection zone with the ray can be obtained as sight landing points.
280. Determine the target virtual prop from the candidates according to the sight landing points.
For example, the coordinate positions of intersection points A and B in the scene and the character's coordinate position can be obtained, the intersection point A closest to the character's sight origin determined as the target intersection first touched by the detection ray, and virtual prop A, corresponding to intersection point A, determined as the target virtual prop (i.e., the first candidate the detection ray touches). After the prop to be picked up is determined, the target can be highlighted and outlined, or, as shown in Fig. 2e, a description interface for the target displayed on the game interface.
In the flow shown in Fig. 2f, the pickup area of each virtual prop in the scene is determined; the pickup area may be a spherical bounding box. If the character is within the pickup ranges of props A and B at the same time, the virtual markers, first bounding boxes, and second bounding boxes of both A and B are generated; if the character is within neither prop's pickup range, props A and B do not respond to pickup interaction. If the detection ray first intersects prop A's selection zone, prop A is selected. After the user adjusts the ray: if it first intersects prop B's selection zone, prop B is selected; if it does not intersect prop B's selection zone but intersects prop A's tolerance zone, prop A remains selected; if it intersects neither prop B's selection zone nor prop A's tolerance zone but intersects prop B's tolerance zone, prop B is selected; and if it intersects no candidate's selection or tolerance zone, no candidate is selected. The selected prop A or B can be picked up in response to a pickup operation, while an unselected prop does not respond to pickup operations. Thus, while a prop is selected, the selection is maintained as long as the detection ray intersects that prop's tolerance zone and no other prop's selection zone, which prevents accidental movement from deselecting the prop; and when there is only one candidate, the tolerance zone helps the user quickly select it.
Thus, by setting the spherical bounding box, the first bounding box (selection zone), and the second bounding box (tolerance zone), each virtual prop is surrounded by three layers of bounding boxes that respond to different interaction processes; working together, the three layers improve the accuracy and efficiency of picking up virtual props.
290. In response to a pickup operation directed at the target virtual prop, control the game character to pick up the target virtual prop.
For example, control the game character to pick up the target virtual prop in response to the user's touch operation acting on the pickup control.
As can be seen from the above, by setting selection zones and tolerance zones, the embodiments of the present disclosure provide each virtual prop with two layers of pickup judgment ranges: the selection zone for precise switching and the tolerance zone for selection fault tolerance. When several props are very close together or overlapping, the selected prop can be switched precisely through the selection zones, improving pickup accuracy and efficiency; the tolerance zone enlarges the selected prop's interaction area, increases fault tolerance, and provides a smooth pickup experience while the user adjusts the detection ray.
To better implement the above method, an embodiment of the present disclosure further provides a virtual prop pickup apparatus, which may be integrated in an electronic device such as a terminal or a server. The terminal may be a mobile phone, tablet computer, smart Bluetooth device, notebook computer, personal computer, or the like; the server may be a single server or a server cluster composed of multiple servers.
In this embodiment, the method of the embodiments of the present disclosure is described in detail taking the virtual prop pickup apparatus integrated in a terminal as an example. A graphical user interface is provided through the terminal, the displayed content at least partially containing a game scene, a game character located in the scene, and at least one virtual prop.
For example, as shown in Fig. 3, the virtual prop pickup apparatus may include a generating unit 310, a determining unit 320, and a pickup unit 330, as follows:
(1) Generating unit 310
Configured to determine at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object.
In some embodiments, the generating unit 310 may specifically be configured to: obtain prop position information of the virtual props in the game scene; generate the props' pickup areas according to the prop position information and a preset pickup area condition; and, if the game character is inside any pickup area, determine the at least one virtual prop corresponding to that area as a candidate virtual prop.
In some embodiments, the preset pickup area condition includes a pickup radius, and generating the props' pickup areas according to the prop position information and the preset condition includes: generating an initial pickup area according to the prop position information and the pickup radius; and adjusting the initial pickup area according to the number of game characters inside it to obtain the prop's pickup area.
In some embodiments, the first object may include a virtual marker, and the generating unit 310 may further be configured to: obtain the sight height of the game view picture; determine the display height of the candidate's virtual marker according to the sight height; and display the candidate's virtual marker at the display height.
In some embodiments, determining the display height of the candidate's virtual marker according to the sight height includes: obtaining a preset weight parameter; and performing a weighted calculation on the sight height with the preset weight parameter to obtain the display height of the candidate's virtual marker.
In some embodiments, the first object may include a first bounding box and the second object a second bounding box, the first bounding box located inside the second.
(2) Determining unit 320
Configured to determine, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
In some embodiments, the virtual interactive object may include at least one of a first object and a second object, and the determining unit 320 may specifically be configured to: determine, in response to the view direction pointing at the first or the second object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate.
In some embodiments, this may include: determining, in response to the view direction pointing at the first object, the target corresponding to the first object from the at least one candidate; or, in response to the view direction not pointing at the first object and pointing at the second object, determining the target corresponding to the second object from the at least one candidate.
In some embodiments, determining the target corresponding to the first object includes: determining, in response to the view direction pointing at the first object, the first first-object the view direction points at; and determining, from the at least one candidate, the target corresponding to that first first-object.
In some embodiments, the view direction includes a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first first-object includes: determining, in response to the ray pointing at at least one first object, first intersection points, each being an intersection of the ray with a first object; and determining the first first-object from the first objects according to the position information of the first intersection points.
In some embodiments, determining the target corresponding to the second object includes: determining, in response to the view direction not pointing at the first object and pointing at the second object, the first second-object the view direction points at; and determining, from the at least one candidate, the target corresponding to that first second-object.
In some embodiments, the view direction includes a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first second-object includes: determining, in response to the ray not pointing at the first object and pointing at the second object, second intersection points, each being an intersection of the ray with a second object; and determining the first second-object from the second objects according to the position information of the second intersection points.
In some embodiments, the determining unit 320 may further be configured to: adjust the view direction in response to a first view adjustment operation; and, if the adjusted view direction does not point at any first object but points at the target's second object, keep the target virtual prop as the target virtual prop.
In some embodiments, the determining unit 320 may further be configured to display the target virtual prop in a first display mode.
In some embodiments, the determining unit 320 may further be configured to adjust the view direction in response to a second view adjustment operation.
In some embodiments, the determining unit 320 may further be configured to display the target virtual prop's first bounding box in a second display mode.
(3) Pickup unit 330
Configured to control, in response to a pickup operation directed at the target virtual prop, the game character to pick up the target virtual prop.
In some embodiments, the pickup unit 330 may specifically be configured to: provide a pickup control on the graphical user interface;
and controlling the game character to pick up the target in response to the pickup operation includes: controlling the game character to pick up the target in response to a touch operation acting on the pickup control.
In specific implementation, each of the above units may be implemented as an independent entity, or combined arbitrarily and implemented as one or several entities; for the specific implementation of each unit, refer to the foregoing method embodiments, which are not repeated here.
Thus, through the virtual interactive objects configured for the candidate virtual props, the embodiments of the present disclosure can determine the candidates' interaction areas in the game scene and determine the target from the candidates through the result of the interaction between the view direction and the virtual interactive objects, improving the accuracy of selecting any candidate and thus of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive objects helps the user quickly decide which prop to pick up, simplifies the pickup flow, enables fast pickup, and improves pickup efficiency.
Correspondingly, an embodiment of the present disclosure further provides a computer device, which may be a terminal or a server; the terminal may be a terminal device such as a smartphone, tablet computer, notebook computer, touchscreen device, game console, personal computer, or personal digital assistant (PDA).
As shown in Fig. 4, a schematic structural diagram of the computer device provided by an embodiment of the present disclosure, the computer device 400 includes a processor 410 with one or more processing cores, a memory 420 with one or more computer-readable storage media, and a computer program stored in the memory 420 and runnable on the processor. The processor 410 is electrically connected to the memory 420. A person skilled in the art will understand that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The processor 410 is the control center of the computer device 400; it connects all parts of the device through various interfaces and lines and performs the various functions of the computer device 400 and processes data by running or loading software programs and/or modules stored in the memory 420 and calling data stored in the memory 420, thereby monitoring the computer device 400 as a whole.
In the embodiments of the present disclosure, the processor 410 in the computer device 400 loads instructions corresponding to the processes of one or more application programs into the memory 420 according to the following steps, and runs the application programs stored in the memory 420, thereby implementing the methods of the foregoing method embodiments:
determining at least one candidate virtual prop from at least one virtual prop in the game scene, the candidate being configured with a virtual interactive object; in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, determining, from the at least one candidate, a target virtual prop corresponding to the virtual interactive object; and, in response to a pickup operation directed at the target, controlling the game character to pick it up.
In some embodiments, the virtual interactive object may include at least one of a first object and a second object, and determining the target in response to the view direction pointing at the virtual interactive object may include: determining the target from the at least one candidate in response to the view direction pointing at the first or the second object.
In some embodiments, the first object may include a virtual marker, and before determining the target, the method may further include: obtaining the sight height of the game view picture; determining the display height of the candidate's virtual marker according to the sight height; and displaying the marker at the display height.
In some embodiments, determining the display height according to the sight height may include: obtaining a preset weight parameter; and performing a weighted calculation on the sight height with it to obtain the display height.
In some embodiments, the first object may include a first bounding box and the second object a second bounding box, the first located inside the second.
In some embodiments, after determining the target, the method may further include: displaying the target's first bounding box in a second display mode.
In some embodiments, determining the target in response to the view direction pointing at the first or the second object may include: determining the target corresponding to the first object in response to the view direction pointing at the first object; or determining the target corresponding to the second object in response to the view direction not pointing at the first object and pointing at the second object.
In some embodiments, determining the target corresponding to the first object may include: determining the first first-object the view direction points at in response to the view direction pointing at the first object; and determining the target corresponding to that first first-object from the at least one candidate.
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first first-object may include: determining first intersection points of the ray with the first objects in response to the ray pointing at at least one first object; and determining the first first-object from the first objects according to the intersection points' position information.
In some embodiments, determining the target corresponding to the second object may include: determining the first second-object the view direction points at in response to the view direction not pointing at the first object and pointing at the second object; and determining the target corresponding to that first second-object from the at least one candidate.
In some embodiments, the view direction may include a detection ray emitted from a preset position of the game view picture along the view direction, and determining the first second-object may include: determining second intersection points of the ray with the second objects in response to the ray not pointing at the first object and pointing at the second object; and determining the first second-object from the second objects according to the intersection points' position information.
In some embodiments, after determining the target, the method may further include: adjusting the view direction in response to a first view adjustment operation; and, if the adjusted view direction does not point at any first object but points at the target's second object, keeping the target as the target.
In some embodiments, after determining the target, the method may further include: displaying the target in a first display mode.
In some embodiments, determining the at least one candidate may include: obtaining prop position information of the virtual props in the game scene; generating the props' pickup areas according to the prop position information and a preset pickup area condition; and, if the game character is inside any pickup area, determining the at least one prop corresponding to that area as a candidate.
In some embodiments, the preset pickup area condition includes a pickup radius, and generating the pickup areas may include: generating an initial pickup area according to the prop position information and the pickup radius; and adjusting the initial pickup area according to the number of game characters inside it to obtain the prop's pickup area.
In some embodiments, before determining the target, the method may further include: adjusting the view direction in response to a second view adjustment operation.
In some embodiments, before controlling the character to pick up the target in response to the pickup operation, the method may further include: providing a pickup control on the graphical user interface;
and controlling the character to pick up the target in response to the pickup operation includes: controlling the character to pick up the target in response to a touch operation acting on the pickup control.
The specific embodiment content of the virtual prop pickup method run in this embodiment also applies to the foregoing embodiments of the virtual prop pickup method and is therefore not repeated here.
Through the virtual interactive objects configured for the candidate virtual props, the virtual prop pickup method run by the embodiments of the present disclosure can determine the candidates' interaction areas in the game scene and determine the target from the candidates through the result of the interaction between the view direction and the virtual interactive objects, improving the accuracy of selecting any candidate and thus of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive objects helps the user quickly decide which prop to pick up, simplifies the pickup flow, enables fast pickup, and improves pickup efficiency.
Optionally, as shown in Fig. 4, the computer device 400 further includes a touch display screen 430, a radio frequency circuit 440, an audio circuit 450, an input unit 460, and a power supply 470, each electrically connected to the processor 410. A person skilled in the art will understand that the computer device structure shown in Fig. 4 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The touch display screen 430 can display the graphical user interface and receive operation instructions generated by the user acting on the graphical user interface. It may include a display panel and a touch panel. The display panel can display information entered by or provided to the user, as well as the computer device's various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel can collect the user's touch operations on or near it (such as operations by the user with a finger, stylus, or any other suitable object or accessory on or near the panel), generate corresponding operation instructions, and have the instructions execute the corresponding program. Optionally, the touch panel may include a touch detection device and a touch controller: the touch detection device detects the user's touch orientation and the signals brought by the touch operation and passes the signals to the touch controller; the touch controller receives the touch information from the detection device, converts it into contact coordinates, sends them to the processor 410, and can receive and execute commands from the processor 410. The touch panel may cover the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 410 to determine the type of touch event, and the processor 410 then provides the corresponding visual output on the display panel according to the type of touch event. In the embodiments of the present disclosure, the touch panel and the display panel may be integrated into the touch display screen 430 to implement input and output functions, but in some embodiments they may act as two independent components to implement input and output; i.e., the touch display screen 430 may also implement an input function as part of the input unit 460.
In the embodiments of the present disclosure, the processor 410 executes the game application to generate the graphical user interface on the touch display screen 430, the displayed content at least partially containing a game scene, a game character located in the scene, and at least one virtual prop. The touch display screen 430 presents the graphical user interface and receives operation instructions generated by the user acting on it.
The radio frequency circuit 440 can transmit and receive radio frequency signals so as to establish wireless communication with network devices or other computer devices through wireless communication and exchange signals with them.
The audio circuit 450 can provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 450 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 450 receives and converts into audio data; the audio data is output to the processor 410 for processing and then sent, via the radio frequency circuit 440, to, for example, another computer device, or output to the memory 420 for further processing. The audio circuit 450 may also include an earphone jack to provide communication between a peripheral headset and the computer device.
The input unit 460 can receive entered numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 470 supplies power to the various components of the computer device 400. Optionally, the power supply 470 may be logically connected to the processor 410 through a power management system, thereby implementing functions such as charge management, discharge management, and power-consumption management through the power management system. The power supply 470 may also include one or more DC or AC power sources, recharging systems, power-failure detection circuits, power converters or inverters, power status indicators, or any other components.
Although not shown in Fig. 4, the computer device 400 may further include a camera, sensors, a wireless fidelity module, a Bluetooth module, and so on, which are not described here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
As can be seen from the above, through the virtual interactive objects configured for the candidate virtual props, the computer device provided by this embodiment can determine the candidates' interaction areas in the game scene and determine the target from the candidates through the result of the interaction between the view direction and the virtual interactive objects, improving the accuracy of selecting any candidate and thus of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive objects helps the user quickly decide which prop to pick up, simplifies the pickup flow, enables fast pickup, and improves pickup efficiency.
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
为此,本公开实施例提供一种计算机可读存储介质,其中存储有多条计算机程序,该计算机程序能够被处理器进行加载,以执行本公开实施例所提供的任一种虚拟道具的拾取方法中的步骤。例如,该计算机程序可以执行前述各个方法实施例的方法的步骤:
从游戏场景中的至少一个虚拟道具中确定至少一个候选虚拟道具,候选虚拟道具配置虚拟交互对象;响应于游戏角色对应的游戏视野画面的视野方向指向虚拟交互对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具;响应于针对目标虚拟道具的拾取操作,控制游戏角色拾取目标虚拟道具。
在一些实施方式中,在一些实施方式中,虚拟交互对象可以包括第一对象以及第二对象中的至少一种,响应于游戏角色对应的游戏视野画面的视野方向指向虚拟交互对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具,可以包括:响应于游戏角色对应的游戏视野画面的视野方向指向第一对象或第二对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具。
在一些实施方式中,第一对象可以包括虚拟标识,响应于游戏角色对应的游戏视野画面的视野方向指向虚拟交互对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具之前,还可以包括:获取游戏视野画面的视线高度;根据视线高度确定候选虚拟道具的虚拟标识的显示高度;在显示高度显示候选虚拟道具的虚拟标识。
在一些实施方式中,根据视线高度确定候选虚拟道具的虚拟标识的显示高度,可以包括:获取预设的权重参数;根据预设的权重参数对视线高度进行权重计算,得到候选虚拟道具的虚拟标识的显示高度。
在一些实施方式中,第一对象可以包括第一包围盒,第二对象可以包括第二包围盒,第一包围盒位于第二包围盒内。
在一些实施方式中,响应于游戏角色对应的游戏视野画面的视野方向指向虚拟交互对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具之后,还可以包括:通过第二显示方式显示目标虚拟道具的第一包围盒。
在一些实施方式中,响应于游戏角色对应的游戏视野画面的视野方向指向第一对象或第二对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具,可以包括:响应于游戏角色对应的游戏视野画面的视野方向指向第一对象,从至少一个候选虚拟道具中确定与第一对象对应的目标虚拟道具;或,响应于视野方向未指向第一对象,且视野方向指向第二对象,从至少一个候选虚拟道具中确定与第二对象对应的目标虚拟道具。
在一些实施方式中,响应于游戏角色对应的游戏视野画面的视野方向指向第一对象,从至少一个候选虚拟道具中确定与第一对象对应的目标虚拟道具,可以包括:响应于游戏角色对应的游戏视野画面的视野方向指向第一对象,确定视野方向指向的首个第一对象;从至少一个候选虚拟道具中,确定与首个第一对象对应的目标虚拟道具。
在一些实施方式中,视野方向可以包括由游戏视野画面的预设位置发出、指向视野方向的检测射线,响应于游戏角色对应的游戏视野画面的视野方向指向第一对象,确定视野方向指向的首个第一对象,可以包括:响应于检测射线指向至少一个第一对象,确定第一交点,第一交点为检测射线与第一对象的交点;根据第一交点的位置信息,从第一对象中确定视野方向指向的首个第一对象。
在一些实施方式中,响应于视野方向未指向第一对象,且视野方向指向第二对象,从至少一个候选虚拟道具中确定与第二对象对应的目标虚拟道具,可以包括:响应于视野方向未指向第一对象,且视野方向指向第二对象,确定视野方向指向的首个第二对象;从至少一个候选虚拟道具中,确定与首个第二对象对应的目标虚拟道具。
在一些实施方式中,视野方向可以包括由游戏视野画面的预设位置发出、指向视野方向的检测射线,响应于视野方向未指向第一对象,且视野方向指向第二对象,确定视野方向指向的首个第二对象,可以包括:响应于检测射线未指向第一对象,且检测射线指向第二对象,确定第二交点,第二交点为检测射线与第二对象的交点;根据第二交点的位置信息,从第二对象中确定视野方向指向的首个第二对象。
在一些实施方式中,响应于游戏角色对应的游戏视野画面的视野方向指向虚拟交互对象,从至少一个候选虚拟道具中确定与虚拟交互对象对应的目标虚拟道具之后,还可以包括:响应于第一视野调整操作,调整视野方向;若调整后的视野方向未指向任意第一对象,且调整后的视野方向指向目标虚拟道具的第二对象,维持目标虚拟道具为目标虚拟道具。
In some implementations, after determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, the method may further include: displaying the target virtual prop in a first display manner.
In some implementations, determining at least one candidate virtual prop from at least one virtual prop in the game scene may include: obtaining prop position information of the virtual prop in the game scene; generating a pickup region of the virtual prop based on the prop position information and a preset pickup-region condition; and, if the game character is located within any pickup region, determining the at least one virtual prop corresponding to that pickup region as a candidate virtual prop.
In some implementations, the preset pickup-region condition includes a pickup radius, and generating the pickup region of the virtual prop based on the prop position information and the preset pickup-region condition may include: generating an initial pickup region based on the prop position information and the pickup radius; and adjusting the initial pickup region based on the number of game characters within the initial pickup region to obtain the pickup region of the virtual prop.
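One way such a crowd-based adjustment could work is to shrink the radius as more game characters enter the initial region. The shrink direction, the geometric rule, and every parameter value below are assumptions for the sketch, not the disclosed adjustment.

```python
def pickup_region(prop_pos, base_radius, characters_inside,
                  shrink=0.2, min_radius=0.5):
    """Generate the initial pickup region as a circle of base_radius
    centred on the prop position, then shrink it geometrically for
    every extra game character inside the initial region, so crowded
    props require a closer approach before they become candidates.
    A floor keeps the region from collapsing entirely."""
    extra = max(characters_inside - 1, 0)
    radius = max(base_radius * (1 - shrink) ** extra, min_radius)
    return prop_pos, radius
```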
In some implementations, before determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, the method may further include: adjusting the view direction in response to a second view adjustment operation.
In some implementations, before controlling the game character to pick up the target virtual prop in response to a pickup operation for the target virtual prop, the method may further include: providing a pickup control on the graphical user interface;
and controlling the game character to pick up the target virtual prop in response to a pickup operation for the target virtual prop includes: controlling the game character to pick up the target virtual prop in response to a touch operation acting on the pickup control.
The specific embodiment content of the virtual prop pickup method run in this embodiment likewise applies to the foregoing embodiments of the virtual prop pickup method, and is therefore not repeated here.
With the virtual prop pickup method run by the embodiments of the present disclosure, the virtual interactive object configured for a candidate virtual prop can be used to determine the interaction region of the candidate virtual prop in the game scene, and the target virtual prop can be determined from the candidate virtual props based on the result of the interaction between the view direction and the virtual interactive object, thereby improving the accuracy of selecting any candidate virtual prop and, in turn, the accuracy of picking up virtual props. In addition, the interaction between the view direction and the virtual interactive object helps the user quickly decide which virtual prop to pick up, simplifying the pickup flow, enabling fast pickup of virtual props, and improving pickup efficiency.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the computer program stored in the storage medium can perform the steps of any virtual prop pickup method provided by the embodiments of the present disclosure, it can achieve the beneficial effects achievable by any such method; for details, see the foregoing embodiments, which are not repeated here.
The virtual prop pickup method, apparatus, computer device, and storage medium provided by the embodiments of the present disclosure have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, those skilled in the art may, based on the idea of the present disclosure, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (20)

  1. A virtual prop pickup method, wherein a graphical user interface is provided through a terminal, content displayed on the graphical user interface at least partially comprising a game scene, a game character located in the game scene, and at least one virtual prop, the method comprising:
    determining at least one candidate virtual prop from the at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object;
    in response to a view direction of a game view picture corresponding to the game character pointing at the virtual interactive object, determining a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and
    in response to a pickup operation for the target virtual prop, controlling the game character to pick up the target virtual prop.
  2. The virtual prop pickup method according to claim 1, wherein the virtual interactive object comprises at least one of a first object and a second object, and the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop comprises:
    in response to the view direction of the game view picture corresponding to the game character pointing at the first object or the second object, determining the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop.
  3. The virtual prop pickup method according to claim 2, wherein the first object comprises a virtual marker, and before the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method further comprises:
    obtaining a sight-line height of the game view picture;
    determining a display height of the virtual marker of the candidate virtual prop based on the sight-line height; and
    displaying the virtual marker of the candidate virtual prop at the display height.
  4. The virtual prop pickup method according to claim 3, wherein the determining a display height of the virtual marker of the candidate virtual prop based on the sight-line height comprises:
    obtaining a preset weight parameter; and
    performing a weighted calculation on the sight-line height with the preset weight parameter to obtain the display height of the virtual marker of the candidate virtual prop.
  5. The virtual prop pickup method according to any one of claims 2 to 4, wherein the first object comprises a first bounding box, the second object comprises a second bounding box, and the first bounding box is located inside the second bounding box.
  6. The virtual prop pickup method according to claim 5, wherein after the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method further comprises:
    displaying the first bounding box of the target virtual prop in a second display manner.
  7. The virtual prop pickup method according to any one of claims 2 to 6, wherein the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the first object or the second object, the target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop comprises:
    in response to the view direction of the game view picture corresponding to the game character pointing at the first object, determining a target virtual prop corresponding to the first object from the at least one candidate virtual prop;
    or, in response to the view direction not pointing at the first object and the view direction pointing at the second object, determining a target virtual prop corresponding to the second object from the at least one candidate virtual prop.
  8. The virtual prop pickup method according to claim 7, wherein the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the first object, a target virtual prop corresponding to the first object from the at least one candidate virtual prop comprises:
    in response to the view direction of the game view picture corresponding to the game character pointing at the first object, determining a foremost first object pointed at by the view direction; and
    determining, from the at least one candidate virtual prop, the target virtual prop corresponding to the foremost first object.
  9. The virtual prop pickup method according to claim 8, wherein the view direction comprises a detection ray emitted from a preset position of the game view picture along the view direction, and the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the first object, a foremost first object pointed at by the view direction comprises:
    in response to the detection ray pointing at at least one first object, determining a first intersection point, the first intersection point being an intersection of the detection ray with the first object; and
    determining, based on position information of the first intersection point, the foremost first object pointed at by the view direction from the first objects.
  10. The virtual prop pickup method according to any one of claims 7 to 9, wherein the determining, in response to the view direction not pointing at the first object and the view direction pointing at the second object, a target virtual prop corresponding to the second object from the at least one candidate virtual prop comprises:
    in response to the view direction not pointing at the first object and the view direction pointing at the second object, determining a foremost second object pointed at by the view direction; and
    determining, from the at least one candidate virtual prop, the target virtual prop corresponding to the foremost second object.
  11. The virtual prop pickup method according to claim 10, wherein the view direction comprises a detection ray emitted from a preset position of the game view picture along the view direction, and the determining, in response to the view direction not pointing at the first object and the view direction pointing at the second object, a foremost second object pointed at by the view direction comprises:
    in response to the detection ray not pointing at the first object and the detection ray pointing at the second object, determining a second intersection point, the second intersection point being an intersection of the detection ray with the second object; and
    determining, based on position information of the second intersection point, the foremost second object pointed at by the view direction from the second objects.
  12. The virtual prop pickup method according to any one of claims 2 to 11, wherein after the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method further comprises:
    adjusting the view direction in response to a first view adjustment operation; and
    if the adjusted view direction does not point at any first object and the adjusted view direction points at the second object of the target virtual prop, maintaining the target virtual prop as the target virtual prop.
  13. The virtual prop pickup method according to any one of claims 1 to 12, wherein after the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method further comprises:
    displaying the target virtual prop in a first display manner.
  14. The virtual prop pickup method according to any one of claims 1 to 13, wherein the determining at least one candidate virtual prop from the at least one virtual prop in the game scene comprises:
    obtaining prop position information of the virtual prop in the game scene;
    generating a pickup region of the virtual prop based on the prop position information and a preset pickup-region condition; and
    if the game character is located within any pickup region, determining the at least one virtual prop corresponding to the pickup region as a candidate virtual prop.
  15. The virtual prop pickup method according to claim 14, wherein the preset pickup-region condition comprises a pickup radius, and the generating a pickup region of the virtual prop based on the prop position information and the preset pickup-region condition comprises:
    generating an initial pickup region based on the prop position information and the pickup radius; and
    adjusting the initial pickup region based on a number of game characters within the initial pickup region to obtain the pickup region of the virtual prop.
  16. The virtual prop pickup method according to any one of claims 1 to 15, wherein before the determining, in response to the view direction of the game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop, the method further comprises:
    adjusting the view direction in response to a second view adjustment operation.
  17. The virtual prop pickup method according to any one of claims 1 to 16, wherein before the controlling, in response to a pickup operation for the target virtual prop, the game character to pick up the target virtual prop, the method further comprises:
    providing a pickup control on the graphical user interface; and
    the controlling, in response to a pickup operation for the target virtual prop, the game character to pick up the target virtual prop comprises:
    controlling, in response to a touch operation acting on the pickup control, the game character to pick up the target virtual prop.
  18. A virtual prop pickup apparatus, wherein a graphical user interface is provided through a terminal, content displayed on the graphical user interface at least partially comprising a game scene, a game character located in the game scene, and at least one virtual prop, the apparatus comprising:
    a generation unit, configured to determine at least one candidate virtual prop from the at least one virtual prop in the game scene, the candidate virtual prop being configured with a virtual interactive object;
    a determination unit, configured to determine, in response to a view direction of a game view picture corresponding to the game character pointing at the virtual interactive object, a target virtual prop corresponding to the virtual interactive object from the at least one candidate virtual prop; and
    a pickup unit, configured to control, in response to a pickup operation for the target virtual prop, the game character to pick up the target virtual prop.
  19. A computer device, comprising a processor and a memory, the memory storing a plurality of instructions, wherein the processor loads the instructions from the memory to perform the steps of the virtual prop pickup method according to any one of claims 1 to 17.
  20. A computer-readable storage medium, wherein the computer-readable storage medium stores a plurality of instructions adapted to be loaded by a processor to perform the steps of the virtual prop pickup method according to any one of claims 1 to 17.
PCT/CN2022/132380 2022-06-17 2022-11-16 Virtual prop pickup method and apparatus, computer device, and storage medium WO2023240925A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210693562.3 2022-06-17
CN202210693562.3A CN115040870A (zh) 2022-06-17 2022-06-17 Virtual prop pickup method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023240925A1 true WO2023240925A1 (zh) 2023-12-21

Family

ID=83163408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132380 WO2023240925A1 (zh) 2022-06-17 2022-11-16 Virtual prop pickup method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN115040870A (zh)
WO (1) WO2023240925A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115040870A (zh) 2022-06-17 2022-09-13 网易(杭州)网络有限公司 Virtual prop pickup method and apparatus, computer device, and storage medium
CN118022334A (zh) * 2022-11-04 2024-05-14 网易(杭州)网络有限公司 Interaction control method and apparatus in game, and electronic device
CN115690374B (zh) * 2023-01-03 2023-04-07 江西格如灵科技有限公司 Interaction method, apparatus, and device based on model-edge ray detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113413597A (zh) * 2021-06-21 2021-09-21 网易(杭州)网络有限公司 Virtual prop assembly method and apparatus, computer device, and storage medium
CN113577762A (zh) * 2021-07-28 2021-11-02 网易(杭州)网络有限公司 Method and apparatus for picking up props in game, electronic device, and readable medium
US20220105429A1 (en) * 2019-10-31 2022-04-07 Tencent Technology (Shenzhen) Company Limited Virtual prop control method and apparatus, computer-readable storage medium, and electronic device
CN114470770A (zh) * 2021-12-03 2022-05-13 腾讯科技(深圳)有限公司 Virtual prop pickup method, apparatus, device, storage medium, and program product
CN115040870A (zh) * 2022-06-17 2022-09-13 网易(杭州)网络有限公司 Virtual prop pickup method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN115040870A (zh) 2022-09-13

Similar Documents

Publication Publication Date Title
WO2023240925A1 (zh) Virtual prop pickup method and apparatus, computer device, and storage medium
US20240070974A1 (en) Method and apparatus for displaying virtual environment picture, device, and storage medium
CN113426124B (zh) In-game display control method and apparatus, storage medium, and computer device
CN113559518A (zh) Interaction detection method and apparatus for virtual model, electronic device, and storage medium
WO2024011894A1 (zh) Virtual object control method and apparatus, storage medium, and computer device
CN114522423A (zh) Virtual object control method and apparatus, storage medium, and computer device
WO2023005234A1 (zh) Virtual resource placement control method and apparatus, computer device, and storage medium
WO2022257690A1 (zh) Method and apparatus for marking item in virtual environment, device, and storage medium
CN113398566A (zh) Game display control method and apparatus, storage medium, and computer device
CN112843716B (zh) Virtual object prompting and viewing method and apparatus, computer device, and storage medium
US20230271087A1 (en) Method and apparatus for controlling virtual character, device, and storage medium
WO2023071808A1 (zh) Virtual-scene-based graphic display method and apparatus, device, and medium
CN115970284A (zh) Virtual weapon attack method and apparatus, storage medium, and computer device
CN112245914B (zh) View angle adjustment method and apparatus, storage medium, and computer device
CN115999153A (zh) Virtual character control method and apparatus, storage medium, and terminal device
CN116650963A (zh) Game information display method and apparatus, computer device, and storage medium
CN114522429A (zh) Virtual object control method and apparatus, storage medium, and computer device
CN116139483A (zh) Game function control method and apparatus, storage medium, and computer device
US20240226745A9 (en) Method and apparatus for controlling put of virtual resource, computer device, and storage medium
CN113398564B (zh) Virtual character control method and apparatus, storage medium, and computer device
WO2023231544A9 (zh) Virtual object control method and apparatus, device, and storage medium
WO2024139055A1 (zh) Virtual weapon attack method and apparatus, storage medium, and computer device
CN116850594A (zh) Game interaction method and apparatus, computer device, and computer-readable storage medium
CN115569380A (zh) Game character control method and apparatus, computer device, and storage medium
CN117482523A (zh) Game interaction method and apparatus, computer device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946591

Country of ref document: EP

Kind code of ref document: A1