WO2023160015A1 - Position marking method, apparatus, device, storage medium and program product in a virtual scene - Google Patents

Position marking method, apparatus, device, storage medium and program product in a virtual scene (虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品)

Info

Publication number
WO2023160015A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual resource
icon
resource icon
target
virtual
Prior art date
Application number
PCT/CN2022/130823
Other languages
English (en)
French (fr)
Inventor
肖婕
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to US 18/348,859 (published as US20230350554A1)
Publication of WO2023160015A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/306Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display for displaying a marker associated to an object or location in the game field
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/307Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display for displaying an additional window with a view from the top of the game field, e.g. radar screen
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308Details of the user interface

Definitions

  • The present application relates to the technical field of virtualization and human-computer interaction, and in particular to a position marking method, apparatus, device, storage medium, and program product in a virtual scene.
  • Embodiments of the present application provide a method, apparatus, electronic device, computer-readable storage medium, and computer program product for position marking in a virtual scene, which can realize fast marking of virtual resource icons, reduce the number of human-computer interactions, and improve the control efficiency of the virtual scene.
  • An embodiment of the present application provides a position marking method in a virtual scene, and a corresponding position marking apparatus that includes:
  • a display module configured to display a map of the virtual scene in the interface of the virtual scene
  • the display module is further configured to display at least one virtual resource icon in response to a position marking instruction for virtual resources in the virtual scene;
  • a control module configured to, in response to a drag operation on a target virtual resource icon in the at least one virtual resource icon, control the target virtual resource icon to move in the map accompanying the execution of the drag operation;
  • the marking module is configured to mark the target virtual resource icon at the current location of the target virtual resource icon in the map in response to the release instruction for the dragging operation.
  • An embodiment of the present application provides an electronic device, including a memory configured to store executable instructions and a processor, wherein the processor is configured to, when executing the executable instructions stored in the memory, implement the position marking method in the virtual scene provided by the embodiment of the present application.
  • An embodiment of the present application provides a computer program product, including a computer program or instructions; when the computer program or instructions are executed by a processor, the position marking method in a virtual scene provided by the embodiments of the present application is implemented.
  • In the embodiments of the present application, by dragging the target virtual resource icon displayed in the virtual scene, the target virtual resource icon is controlled to move in the map, and after a release instruction for the drag operation is received, the marking of the target virtual resource icon is completed. In this way, without changing the scale or position of the map, fast and accurate marking of virtual resource icons can be achieved by dragging. Compared with the related art, in which a marking pop-up window is triggered by clicking a position on the map, this reduces the number of human-computer interactions, improves the marking efficiency of virtual resource icons in the map, avoids false touches, and improves the control efficiency of the virtual scene.
  • FIG. 1 is a schematic diagram of the architecture of a position marking system in a virtual scene provided by an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an electronic device implementing a position marking method in a virtual scene provided by an embodiment of the present application;
  • FIG. 3 is a schematic flow chart of a position marking method in a virtual scene provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a virtual scene map interface provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of graphic drawing provided by an embodiment of the present application;
  • FIG. 6A-6B are schematic diagrams of displaying virtual resource icons provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a target display style provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of the wheel display style provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an information prompt provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of resource name modification provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a partially enlarged interface provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a fixed partially enlarged interface provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of a multi-point marking mode provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of a multi-point marking area provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of map markers provided by the related art;
  • FIG. 16 is a flow chart of a position marking operation for virtual resources provided by an embodiment of the present application;
  • FIG. 17 is a flow chart of an editing operation for a tag name provided by an embodiment of the present application;
  • FIG. 18 is a flow chart of a position marking method provided by an embodiment of the present application.
  • Regarding the terms "first/second" appearing in the application documents, the following explanation is added: "first", "second", and "third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It can be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described here can be carried out in sequences other than those shown or described.
  • Client: an application running on a terminal to provide various services, such as an instant messaging client or a video playback client.
  • In response to: used to indicate the condition or state on which an executed operation depends; when the dependent condition or state is satisfied, the one or more executed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
  • The activities include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • The virtual scene can be displayed from a first-person perspective (for example, the player plays the virtual object in the game from the player's own perspective); it can also be displayed from a third-person perspective (for example, the player chases the virtual object in the game to play the game); it can also be displayed from a bird's-eye view; the above-mentioned perspectives can be switched arbitrarily.
  • When the virtual scene is displayed in the human-computer interaction interface, the field-of-view area of the virtual object may be determined according to the viewing position and field angle of the virtual object in the complete virtual scene, and the part of the virtual scene located in the field-of-view area is presented; that is, the displayed virtual scene may be a part of the panoramic virtual scene. Because the first-person perspective is the viewing perspective that has the greatest impact on the user, immersive perception of the user during operation can be realized in this way.
  • the interface of the virtual scene presented in the human-computer interaction interface may include: responding to the zoom operation for the panoramic virtual scene, presenting a part of the virtual scene corresponding to the zoom operation in the human-computer interaction interface, That is, the displayed virtual scene may be a part of the virtual scene relative to the panoramic virtual scene. In this way, the operability of the user during the operation can be improved, thereby improving the efficiency of human-computer interaction.
  • Scene data: represents various characteristics of objects in the virtual scene during the interaction process; for example, it may include the positions of the objects in the virtual scene. Different types of characteristics may be included according to the type of the virtual scene; for example, in the virtual scene of a game, the scene data may include the waiting time for various functions configured in the virtual scene (depending on the number of times the same function can be used within a specific period of time), and may also represent the attribute values of various states of the game character, for example, a life value (also called the amount of red), a mana value (also called the amount of blue), a status value, blood volume, and the like.
  • Open-world game: also known as a free-roam game, a type of game level design in which players can roam freely in a virtual world and can freely choose when and how to complete game tasks.
  • FIG. 1 is a schematic diagram of the architecture of a position marking system in a virtual scene provided by an embodiment of the present application.
  • In FIG. 1, the terminal (terminal 400-1 and terminal 400-2 are shown as examples) is connected to the server 200 through the network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless or wired links to realize data transmission.
  • the terminal (such as terminal 400-1 and terminal 400-2) is configured to receive a trigger operation of entering the virtual scene based on the view interface, and send a request for obtaining scene data of the virtual scene to the server 200;
  • the server 200 is configured to receive a scene data acquisition request, and return the scene data of the virtual scene to the terminal in response to the acquisition request;
  • the terminal (such as terminal 400-1 and terminal 400-2) is configured to receive the scene data of the virtual scene, render the picture of the virtual scene based on the obtained scene data, and present the picture of the virtual scene on the graphical interface (graphical interface 410-1 and graphical interface 410-2 are shown as examples); wherein corresponding map information is presented in the picture of the virtual scene, and the content presented in the picture of the virtual scene is rendered based on the returned scene data of the virtual scene;
  • the terminal (such as terminal 400-1 and terminal 400-2) is also configured to: display a map of the virtual scene; in response to a position marking instruction for virtual resources in the virtual scene, display at least one virtual resource icon; in response to a drag operation on a target virtual resource icon in the at least one virtual resource icon, control the target virtual resource icon to move in the map along with the execution of the drag operation; and, in response to a release instruction for the drag operation, mark the target virtual resource icon at its current location in the map. In this way, any position on the entire open-world map can be marked simply by dragging the target virtual resource icon.
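The application describes this request–render–mark flow only at the level of FIG. 1. Purely as an illustration, a terminal-side client might wire it up roughly as follows; every name here (fetchSceneData, the /scenes endpoint, renderMap, renderResourcePoints) is hypothetical and not part of the application.

```typescript
// Hypothetical terminal-side flow: request scene data, render the map,
// and overlay resource location points (cf. FIG. 1 and FIG. 4).
interface SceneData {
  mapImageUrl: string;
  resourcePoints: { id: string; x: number; y: number; name: string }[];
}

// Assumed endpoint shape; the application does not define any API.
async function fetchSceneData(serverUrl: string, sceneId: string): Promise<SceneData> {
  const res = await fetch(`${serverUrl}/scenes/${sceneId}`);
  if (!res.ok) throw new Error(`scene data request failed: ${res.status}`);
  return (await res.json()) as SceneData;
}

async function enterVirtualScene(serverUrl: string, sceneId: string): Promise<void> {
  const scene = await fetchSceneData(serverUrl, sceneId);
  renderMap(scene);            // draw the scene map in the graphical interface
  renderResourcePoints(scene); // draw virtual resource location points on the map
}

// Rendering is engine-specific and out of scope for this sketch.
declare function renderMap(scene: SceneData): void;
declare function renderResourcePoints(scene: SceneData): void;
```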
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. It can form a resource pool to be used on demand, which is flexible and convenient. Cloud computing technology will become an important support, because the background services of a technical network system require a large amount of computing and storage resources; for example, when the virtual scene is a game scene, the corresponding game is a cloud game, and the pictures of the virtual scene displayed on the terminal are all rendered by the server.
  • the server 200 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, a content delivery network (CDN, Content Delivery Network), and big data and artificial intelligence platforms.
  • Terminals (such as terminal 400-1 and terminal 400-2) may be smart phones, tablet computers, laptops, desktop computers, smart speakers, smart TVs, smart watches, and the like, but are not limited thereto.
  • Terminals (such as terminal 400-1 and terminal 400-2) and server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
  • Terminals (including terminal 400-1 and terminal 400-2) have applications supporting virtual scenes installed and running.
  • the application can be any one of a first-person shooter game (FPS, First-Person Shooting game), a third-person shooter game, a driving game in which steering operation is the dominant behavior, a multiplayer online battle arena game (MOBA, Multiplayer Online Battle Arena game), a two-dimensional (Two Dimension, 2D) game application, a three-dimensional (Three Dimension, 3D) game application, a virtual reality application, a three-dimensional map program, or a multiplayer survival game.
  • the application program may also be a stand-alone version of the application program, such as a stand-alone version of a 3D game program.
  • the user can operate on the terminal in advance; after the terminal detects the user's operation, it can download the game configuration file of an electronic game, and the game configuration file can include the application program, interface display data, or virtual scene data of the electronic game, so that when the user logs in to the electronic game on the terminal, the game configuration file can be invoked to render and display the electronic game interface.
  • the user can perform a touch operation on the terminal; after the terminal detects the touch operation, it can determine the game data corresponding to the touch operation and render and display the game data. The game data can include virtual scene data, behavior data of virtual objects in the virtual scene, and the like.
  • FIG. 2 is a schematic structural diagram of an electronic device implementing a method for marking a location in a virtual scene provided by an embodiment of the present application.
  • the electronic device 500 may be the server or the terminal shown in FIG. 1.
  • the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510 , a memory 550 , at least one network interface 520 and a user interface 530 .
  • Various components in the electronic device 500 are coupled together through the bus system 540 .
  • the bus system 540 is configured to enable connection and communication between these components. The bus system 540 also includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are labeled as the bus system 540 in FIG. 2.
  • The processor 510 can be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor can be a microprocessor or any conventional processor.
  • User interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
  • the user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
  • Memory 550 may be removable, non-removable or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 550 optionally includes one or more storage devices located physically remote from processor 510 .
  • memory 550 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • Operating system 551, including system programs configured to process various basic system services and perform hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks;
  • Network communication module 552, configured to reach other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Compatibility Authentication (WiFi), Universal Serial Bus (USB, Universal Serial Bus), and the like;
  • Presentation module 553, configured to enable the presentation of information via one or more output devices 531 (e.g., a display screen, speakers) associated with the user interface 530 (e.g., a user interface for operating peripherals and displaying content and information);
  • the input processing module 554 is configured to detect one or more user inputs or interactions from one or more of the input devices 532 and to translate the detected inputs or interactions.
  • the position marking device in the virtual scene provided by the embodiment of the present application can be realized by software.
  • FIG. 2 shows a position marking device 555 in the virtual scene stored in the memory 550, which can be software in the form of a program or a plug-in, and includes the following software modules: a display module 5551, a control module 5552, and a marking module 5553. These modules are logical, and therefore can be combined or split arbitrarily according to the functions to be realized. The function of each module is described below.
  • the position marking device in the virtual scene provided by the embodiment of the present application may be implemented by combining software and hardware.
  • In other embodiments, the position marking device in the virtual scene provided by the embodiment of the present application may be implemented in hardware, for example, as a processor in the form of a hardware decoding processor that is programmed to perform the position marking method in the virtual scene provided by the embodiment of the present application. For example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
  • the following describes the position marking method in the virtual scene provided by the embodiment of the present application.
  • the location marking method in the virtual scene provided by the embodiment of the present application may be implemented solely by the server or the terminal, or jointly implemented by the server and the terminal.
  • a terminal or a server can implement the method for marking a position in a virtual scene provided by the embodiment of the present application by running a computer program.
  • the computer program can be a native program or a software module in the operating system; it can be a native application program (APP, Application), that is, a program that needs to be installed in the operating system to run, such as a client supporting virtual scenes; it can also be a mini program, that is, a program that only needs to be downloaded into a browser environment to run; it can also be a mini program that can be embedded in any APP.
  • the above-mentioned computer program can be any form of application program, module or plug-in.
  • FIG. 3 is a schematic flow diagram of a method for marking positions in a virtual scene provided by an embodiment of the present application.
  • the method for marking positions in a virtual scene provided by an embodiment of the present application includes:
  • an application client supporting virtual scenes, such as a game client, may be installed on the terminal, or a client integrated with a virtual scene function (such as an instant messaging client, a live streaming client, or an education client). When the user opens the application client on the terminal and the terminal runs the application client, the user can interact with virtual objects based on the virtual scene picture displayed by the client; for example, when the client is a game client, the user can control a game character (virtual object) to interact in the game scene (such as in a virtual battle) based on the game screen displayed by the game client.
  • In step 101, a map of the virtual scene is displayed in the interface of the virtual scene. In actual implementation, the terminal presents an interface of a virtual scene (such as an open-world adventure game), and a corresponding map (scene map) is presented in the interface of the virtual scene.
  • The virtual scene contains various virtual resources for players to collect, and players can also mark virtual resources at the corresponding locations on the map; common virtual resources may include treasure chests, energy bars, and the like.
  • FIG. 4 is a schematic diagram of a virtual scene map interface provided by an embodiment of the present application.
  • virtual resource location points can be displayed on a map of a virtual scene.
  • In step 102, at least one virtual resource icon is displayed in response to a position marking instruction for the virtual resource in the virtual scene.
  • the terminal may display at least one virtual resource icon after receiving the position marking instruction for the virtual resource in the virtual scene.
  • the terminal may receive a position marking instruction for the virtual resource in the virtual scene in the following manner: the terminal displays a position marking function item in the interface of the virtual scene; in response to a first trigger operation on the position marking function item, a position marking instruction for virtual resources in the virtual scene is received.
  • the position marking function item is displayed in the interface of the virtual scene; the position marking function item is a control and can have various presentation forms, such as a graphic button, a progress bar, a menu, or a list, which is not limited in the embodiment of the present application.
  • In this way, when the user triggers the position marking function item, the terminal can receive a position marking instruction for the virtual resource.
  • the terminal may receive a position marking instruction for the virtual resource in the following manner: the terminal receives a graphic drawing operation triggered based on the interface of the virtual scene; when the graphic drawn by the graphic drawing operation matches a preset graphic, a position marking instruction for the virtual resource in the virtual scene is received.
  • the user can perform graphics drawing operations for the virtual scene interface at any position on the terminal screen.
  • the terminal acquires the position information of each point of the graphic drawing operation, generates the graphic drawn by the graphic drawing operation, and compares the drawn graphic with the preset graphics pre-stored in a graphics library for triggering the position marking instruction; when the drawn graphic matches a preset graphic, a position marking instruction for the virtual resource can be triggered.
  • the terminal acquires the drawing trajectory when the user performs the graphics drawing operation, matches the pattern formed by the drawing trajectory with the pre-stored graphics, and triggers a position marking instruction for the virtual resource when the matching is successful.
  • the terminal can also classify the drawn graphic through an artificial intelligence-based multi-classification model deployed on the terminal; the input of the multi-classification model is the position information of the drawn graphic, and the output is the category in the preset graphics library to which the drawn graphic belongs.
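The application does not prescribe how the drawn graphic is compared with the preset graphics. The sketch below shows one simple possibility, a template comparison that resamples and normalizes the trajectory and measures the average point-to-point distance; the point count, threshold, and function names are assumptions, and a production gesture recognizer (or the multi-classification model mentioned above) could be substituted.

```typescript
type Point = { x: number; y: number };

// Resample a drawn trajectory to n points spaced evenly along its length.
function resample(points: Point[], n: number): Point[] {
  const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);
  const pts = points.slice();
  const total = pts.slice(1).reduce((s, p, i) => s + dist(pts[i], p), 0);
  if (pts.length < 2 || total === 0) return Array.from({ length: n }, () => ({ ...pts[0] }));
  const step = total / (n - 1);
  const out: Point[] = [{ ...pts[0] }];
  let acc = 0;
  for (let i = 1; i < pts.length; i++) {
    const d = dist(pts[i - 1], pts[i]);
    if (acc + d >= step) {
      const t = (step - acc) / d;
      const q = {
        x: pts[i - 1].x + t * (pts[i].x - pts[i - 1].x),
        y: pts[i - 1].y + t * (pts[i].y - pts[i - 1].y),
      };
      out.push(q);
      pts.splice(i, 0, q); // measure the next segment from the inserted point
      acc = 0;
    } else {
      acc += d;
    }
  }
  while (out.length < n) out.push({ ...pts[pts.length - 1] });
  return out.slice(0, n);
}

// Translate to the centroid and scale by the bounding-box size.
function normalize(points: Point[]): Point[] {
  const cx = points.reduce((s, p) => s + p.x, 0) / points.length;
  const cy = points.reduce((s, p) => s + p.y, 0) / points.length;
  const xs = points.map(p => p.x);
  const ys = points.map(p => p.y);
  const size = Math.max(Math.max(...xs) - Math.min(...xs), Math.max(...ys) - Math.min(...ys)) || 1;
  return points.map(p => ({ x: (p.x - cx) / size, y: (p.y - cy) / size }));
}

// A drawn graphic "matches" a preset template when the average point-to-point
// distance after resampling and normalization falls below a small threshold;
// a match triggers the position marking instruction.
function matchesPresetGraphic(drawn: Point[], template: Point[], threshold = 0.12): boolean {
  const a = normalize(resample(drawn, 64));
  const b = normalize(resample(template, 64));
  const avg = a.reduce((s, p, i) => s + Math.hypot(p.x - b[i].x, p.y - b[i].y), 0) / a.length;
  return avg < threshold;
}
```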
  • FIG. 5 is a schematic drawing diagram of a graph provided by an embodiment of the present application.
  • In FIG. 5, the user executes the graphic drawing operation on the virtual scene interface and obtains the graphic shown by number 1 in the figure (the graphic can be in various styles, such as a circle or a triangle).
  • the graphic obtained by the graphic drawing operation may not be displayed in the virtual scene interface; that is, the graphic shown by number 1 in the figure may not be displayed in the actual virtual scene interface.
  • the above-mentioned way of triggering the position marking instruction through graphic drawing can effectively reduce the screen proportion of controls in the virtual scene interface and reduce the screen space they occupy.
  • the terminal can display the resource name of the virtual resource indicated by the virtual resource icon in the following manner: the terminal acquires the icon display length of the virtual resource icon and the name display length of the resource name of the virtual resource indicated by the virtual resource icon; when the sum of the icon display length and the name display length does not reach the length threshold, the resource name of the virtual resource indicated by the virtual resource icon is displayed during the process of displaying the virtual resource icon.
  • the terminal receives the position marking instruction for virtual resources triggered in the aforementioned manner and, according to the actual situation of the virtual scene displayed on the terminal screen, displays at least one virtual resource icon and the resource name of the virtual resource indicated by the virtual resource icon on the virtual scene interface. It should be noted that the virtual resource icon and the resource name of the virtual resource indicated by the virtual resource icon may be displayed at the same time, or only the virtual resource icon may be displayed.
  • the terminal may determine whether to simultaneously display the virtual resource icon and the corresponding resource name according to the icon display length of the virtual resource icon and the name display length of the resource name.
  • FIGS. 6A-6B are schematic diagrams of displaying virtual resource icons provided by the embodiment of the present application.
  • virtual resource icons and corresponding resource names are simultaneously displayed in FIG. 6A .
  • the above method of simultaneously displaying virtual resource icons and corresponding resource names can visually display virtual resource names and improve human-computer interaction experience.
  • the terminal can display the resource name of the virtual resource indicated by the virtual resource icon in the following manner: when the sum of the icon display length and the name display length reaches the width threshold, the terminal hides the resource name of the corresponding virtual resource during the process of displaying the virtual resource icon; and, when the virtual resource icon is selected, displays the resource name of the virtual resource indicated by the virtual resource icon in the form of a floating layer. In this way, when the sum of the icon display length and the name display length reaches the width threshold, hiding the resource name reduces the display resources occupied and releases display space; displaying the resource name of the selected virtual resource icon in a floating layer enriches the information display mode, makes the display of resource names more flexible, and improves the utilization rate of display resources.
  • when the terminal judges that the sum of the icon display length and the name display length reaches the width threshold, only the virtual resource icon may be displayed; that is, when the virtual resource icon is displayed, the name of the corresponding virtual resource is hidden.
  • the terminal may also provide a setting interface for the display mode of the virtual resource icon, and use the set target display mode to display the virtual resource icon on the virtual interface.
  • the above method of displaying only virtual resource icons can reduce the screen space occupied in the virtual interface and improve the human-computer interaction experience.
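As a rough illustration of the length-threshold behaviour described above, the following sketch decides whether to show the resource name inline or only in a floating layer when the icon is selected; the threshold value and helper names are assumptions, not values from the application.

```typescript
// Illustrative length-threshold logic; the threshold and widths would come
// from actual text/icon measurement in a real client.
const LENGTH_THRESHOLD = 160; // hypothetical value, in pixels

interface ResourceIconView {
  name: string;
  iconWidth: number; // icon display length
  nameWidth: number; // resource name display length
}

// Show the resource name next to the icon only when both fit together.
function shouldShowNameInline(view: ResourceIconView): boolean {
  return view.iconWidth + view.nameWidth < LENGTH_THRESHOLD;
}

// When the name is hidden, reveal it in a floating layer while selected.
function onIconSelected(view: ResourceIconView): void {
  if (!shouldShowNameInline(view)) {
    showFloatingLabel(view.name);
  }
}

declare function showFloatingLabel(text: string): void;
```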
  • the terminal can display the virtual resource icon in the following manner: in response to the position marking instruction for the virtual resource in the virtual scene, the terminal displays an icon floating layer in the interface of the virtual scene and, in the icon floating layer, displays at least one virtual resource icon in a target display style; the target display style includes at least one of a list display style and a wheel display style.
  • the terminal provides a setting interface for setting the display style of the at least one virtual resource icon.
  • in the setting interface, at least one display style option is presented, including but not limited to a list display style option and a wheel display style option.
  • in response to a selection operation on a target display style option among the at least one display style option, the terminal determines the target display style corresponding to the target display style option as the display style of the at least one virtual resource icon. After determining the target display style, the terminal may display at least one virtual resource icon in the virtual scene interface, in the form of a floating layer, in the target display style.
  • FIG. 7 is a schematic diagram of a target display style provided by an embodiment of the present application.
  • the figure shows two target display style options and an example diagram corresponding to each target display style option.
  • when the target display style is the list display style, the display style of the at least one virtual resource icon is shown in FIG. 6A-6B.
  • referring to FIG. 8, FIG. 8 is a schematic diagram of the wheel display style provided by the embodiment of the present application. In FIG. 8, the target display style of the at least one virtual resource icon is the wheel display style: the terminal responds to a long-press operation on the position marking function item (shown by number 1 in the figure) and calls out the wheel (number 2 in the figure); when the cursor slides from the position marking function item to any virtual resource icon in the wheel, the resource name of the virtual resource indicated by that virtual resource icon can be displayed in the form of a floating layer.
  • it should be noted that the at least one virtual resource icon in the wheel display style can be located anywhere in the virtual interface.
  • the above multiple display styles for the at least one virtual resource icon can meet the personalized needs of users and improve the human-computer interaction experience.
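A minimal sketch of how a client might store the chosen target display style and apply it when the position marking instruction arrives is given below; the type names and layout helpers are illustrative only.

```typescript
// Illustrative handling of the target display style setting.
type DisplayStyle = 'list' | 'wheel';

interface IconPanelState {
  style: DisplayStyle;
  icons: string[]; // virtual resource icon ids
  visible: boolean;
}

// Chosen on the setting interface and kept for later marking instructions.
function selectDisplayStyle(state: IconPanelState, style: DisplayStyle): IconPanelState {
  return { ...state, style };
}

// On a position marking instruction, show the icon floating layer using the
// configured target display style.
function onPositionMarkingInstruction(state: IconPanelState): IconPanelState {
  if (state.style === 'wheel') {
    layoutAsWheel(state.icons); // radial layout, called out by a long press (FIG. 8)
  } else {
    layoutAsList(state.icons);  // vertical list, as in FIG. 6A-6B
  }
  return { ...state, visible: true };
}

declare function layoutAsWheel(icons: string[]): void;
declare function layoutAsList(icons: string[]): void;
```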
  • the virtual resource icon can be in a disabled state, which is used to remind the user that the number of markers of the virtual resource indicated by the current virtual resource icon in the map has reached the threshold.
  • FIG. 9 is a schematic diagram of information prompts provided by the embodiment of the present application.
  • in FIG. 9, virtual resource icons and resource names are simultaneously displayed in the list display style, where "resource name 1" and "resource name 2" are disabled and the other virtual resource icons are available.
  • the terminal presents the floating layer shown by number 2 in the figure in the interface of the virtual scene, and the prompt information "Markers of resource name 1 are full; the current icon is unavailable" is displayed in the floating layer.
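The disabled-state check could be modelled as below; the per-resource marker limit and the prompt helper are assumptions rather than details from the application.

```typescript
// Illustrative disabled-state check; the per-resource marker limit is assumed.
interface ResourceType {
  id: string;
  name: string;
  markLimit: number; // maximum markers of this resource allowed on the map
}

function isIconDisabled(resource: ResourceType, markedCount: Map<string, number>): boolean {
  return (markedCount.get(resource.id) ?? 0) >= resource.markLimit;
}

// Tapping a disabled icon shows a floating prompt (cf. number 2 in FIG. 9).
function onDisabledIconTapped(resource: ResourceType): void {
  showPrompt(`Markers of ${resource.name} are full; the current icon is unavailable`);
}

declare function showPrompt(text: string): void;
```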
  • the terminal can edit the resource name of the virtual resource in the following manner: in response to a trigger operation on the virtual resource icon, the terminal controls the resource name of the virtual resource indicated by the virtual resource icon to be in an editable state; in response to an editing operation on the resource name in the editable state, the edited resource name is displayed.
  • the terminal can provide an editing operation for the resource name indicated by the virtual resource icon: in response to a trigger operation on the resource name (such as a double-click operation on the virtual resource name), the terminal controls the resource name to be in an editable state, and, upon receiving an editing operation (re-input) on the selected resource name, determines the modified resource name.
  • FIG. 10 is a schematic diagram of resource name modification provided by the embodiment of the present application.
  • the terminal receives a double-click operation on "resource name 4" and controls "resource name 4" to be in an editable state.
  • the cursor flickers in the input box where "resource name 4" is located, prompting the user to input a new resource name, and the new resource name is displayed after the input is completed.
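As an illustration only, the rename flow (double-click to edit, re-input, confirm) might be modelled as a small state transition; the event names are hypothetical.

```typescript
// Illustrative state transitions for renaming a virtual resource.
interface NameEditState {
  editing: boolean;
  draft: string;
  committed: string;
}

// Double-clicking the resource name puts it into the editable state.
function onNameDoubleClicked(state: NameEditState): NameEditState {
  return { ...state, editing: true, draft: state.committed };
}

// Each keystroke updates the draft while editing.
function onNameInput(state: NameEditState, text: string): NameEditState {
  return state.editing ? { ...state, draft: text } : state;
}

// Confirming the input displays the new resource name.
function onNameConfirmed(state: NameEditState): NameEditState {
  return { ...state, editing: false, committed: state.draft };
}
```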
  • In step 103, in response to a drag operation on the target virtual resource icon in the at least one virtual resource icon, the target virtual resource icon is controlled to move in the map along with the execution of the drag operation.
  • the terminal can control the target virtual resource icon to move in the map in the following manner: in response to a drag operation on a target virtual resource icon that is in a floating state among the at least one virtual resource icon, the terminal controls the target virtual resource icon in the floating state to move in the map along with the execution of the drag operation.
  • the user may perform a drag operation on the target virtual resource icon in the floating state, and there are multiple ways to control the virtual resource icon to be in the floating state.
  • the terminal may provide settings for whether the virtual resource icon is in a floating state.
  • the terminal can control the virtual resource icon to be in the floating state in the following manner: in response to a press operation on a target virtual resource icon in the at least one virtual resource icon, the terminal obtains operation parameters of the press operation, where the operation parameters include at least one of the operation duration and the pressure; when the operation duration reaches the duration threshold or the pressure reaches the pressure threshold, the target virtual resource icon is controlled to be in a floating state.
  • the terminal can obtain parameters such as the operation duration and pressure of the press operation, and compare the operation duration with a preset duration threshold, or compare the pressure with a preset pressure threshold; when the operation duration is greater than the preset duration threshold, or the pressure is greater than the pressure threshold, the target virtual resource icon can be controlled to be in a floating state.
  • the terminal responds to a long-press operation on the position marking button shown by number 1 and calls out the wheel (shown by number 2 in the figure) displaying the virtual resource icons; the long-press operation then switches directly to a sliding operation that starts from the position marking button and slides to the icon of "resource name 2" (the target virtual resource icon).
  • the above method of controlling the movement of the target virtual resource icon on the map can quickly and conveniently realize the operation of moving the target virtual resource icon on the map through uninterrupted continuous actions, thereby improving the human-computer interaction experience.
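The duration/pressure check that puts the icon into the floating state could look roughly like the sketch below; the threshold values are placeholders, since the application does not specify them.

```typescript
// Illustrative press-parameter check; the thresholds are placeholders.
const DURATION_THRESHOLD_MS = 500;
const PRESSURE_THRESHOLD = 0.4; // normalized touch force, if reported by the device

interface PressOperation {
  durationMs: number;
  pressure?: number;
}

// The icon enters the floating (draggable) state when either the press
// duration or the press force reaches its threshold.
function shouldFloat(press: PressOperation): boolean {
  return press.durationMs >= DURATION_THRESHOLD_MS || (press.pressure ?? 0) >= PRESSURE_THRESHOLD;
}

function onPress(iconId: string, press: PressOperation): void {
  if (shouldFloat(press)) {
    setFloating(iconId, true); // the icon now follows the subsequent drag
  }
}

declare function setFloating(iconId: string, floating: boolean): void;
```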
  • the terminal can also control the target virtual resource icon to move in the map in the following manner: in response to the position marking instruction for the virtual resource in the virtual scene, the terminal displays at least one virtual resource icon in a movable state; correspondingly, in response to a drag operation on a target virtual resource icon in the movable state among the at least one virtual resource icon, the terminal generates an icon copy in the movable state corresponding to the target virtual resource icon, and controls the icon copy in the movable state to move in the map along with the execution of the drag operation.
  • an icon copy of the virtual resource icon can be created so that, in response to the drag operation on the icon copy, the icon copy is controlled to move in the map. In this way, the original virtual resource icon is not missing from its place, which ensures the aesthetics of the virtual resource icons when displayed.
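A minimal sketch of the icon-copy variant, in which the original icon stays in place and a movable copy follows the drag, is shown below with illustrative names.

```typescript
// Illustrative icon-copy approach: the original stays in place, a copy moves.
interface MapIcon {
  id: string;
  x: number;
  y: number;
  movable: boolean;
}

// Create a movable copy at the original's position; the original is untouched.
function beginDragWithCopy(original: MapIcon): MapIcon {
  return { ...original, id: `${original.id}-copy`, movable: true };
}

// The copy follows the drag as it is performed.
function onCopyDragMove(copy: MapIcon, mapX: number, mapY: number): MapIcon {
  return copy.movable ? { ...copy, x: mapX, y: mapY } : copy;
}
```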
  • the terminal can display the real-time position of the target virtual resource icon in the following manner: during the process of controlling the target virtual resource icon to move in the map along with the execution of the drag operation, the terminal obtains the real-time position of the target virtual resource icon and synchronously displays a partially enlarged interface including the real-time position.
  • the terminal may display the partially enlarged interface in the following manner: the terminal displays the target area in the interface of the virtual scene; in the target area, the partially enlarged interface including the real-time position is synchronously displayed.
  • the terminal may adopt a fixed method to synchronously display a partially enlarged interface including a real-time location.
  • the terminal may provide a display form setting interface for the partially enlarged interface and provide at least two display form options, such as follow and fixed; in response to a selection operation on the fixed option, a fixed partially enlarged interface is displayed in the interface of the virtual scene. That is, a partially enlarged interface including the real-time position of the target virtual icon is displayed in a suitable area of the virtual scene interface.
  • the shape of the target area can be a quadrangle, a circle, etc.
  • the target area can be moved, that is, the position of the target area can be moved according to the actual needs of the user.
  • FIG. 12 is a schematic diagram of a fixed partial zoom-in interface provided by the embodiment of the present application.
  • in FIG. 12, the terminal responds to the drag operation on the target virtual resource icon shown by number 1 and, in the target area (the circular area) shown by number 2, synchronously displays a partially enlarged interface showing the real-time position of the target virtual icon.
  • the terminal may also display the partially enlarged interface in the following manner: the terminal synchronously displays an accompanying floating layer associated with the target virtual resource icon, and displays the partially enlarged interface including the real-time position in the accompanying floating layer.
  • the terminal may adopt a follow method to synchronously display a partially enlarged interface including the real-time location.
  • the terminal may provide a display form setting interface for the partially enlarged interface and provide at least two display form options, such as follow and fixed; in response to a selection operation on the follow option, a following partially enlarged interface is displayed in the interface of the virtual scene.
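As an illustration of the partially enlarged interface, the sketch below computes the map region to magnify around the icon's real-time position and renders it either in a fixed target area or in a following floating layer; the zoom factor, viewport size, and drawing helpers are assumptions.

```typescript
// Illustrative magnifier around the icon's real-time position; zoom factor and
// output size are placeholders, and the map is assumed larger than the region.
interface Region { x: number; y: number; width: number; height: number }

function zoomRegion(iconX: number, iconY: number,
                    mapW: number, mapH: number,
                    zoom = 3, outSize = 120): Region {
  const srcSize = outSize / zoom;
  const x = Math.min(Math.max(iconX - srcSize / 2, 0), mapW - srcSize);
  const y = Math.min(Math.max(iconY - srcSize / 2, 0), mapH - srcSize);
  return { x, y, width: srcSize, height: srcSize };
}

// 'fixed' draws the magnified region in a target area of the interface (FIG. 12);
// 'follow' draws it in a floating layer accompanying the icon.
function renderZoom(mode: 'fixed' | 'follow',
                    icon: { x: number; y: number },
                    map: { width: number; height: number }): void {
  const region = zoomRegion(icon.x, icon.y, map.width, map.height);
  if (mode === 'fixed') drawInTargetArea(region);
  else drawInFollowingLayer(region, icon);
}

declare function drawInTargetArea(region: Region): void;
declare function drawInFollowingLayer(region: Region, anchor: { x: number; y: number }): void;
```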
  • In step 104, in response to a release instruction for the drag operation, the target virtual resource icon is marked at the current location of the target virtual resource icon in the map.
  • this marking mode can be understood as a single-point marking mode; that is, each marking operation for the target virtual resource icon marks only one position.
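A minimal sketch of the single-point flow of steps 103-104 (the icon follows the drag; releasing marks it at its current map position) might look like this, with illustrative names.

```typescript
// Illustrative single-point flow for steps 103-104.
interface Marker { iconId: string; x: number; y: number }
interface DragState { iconId: string; x: number; y: number }

// Step 103: the target virtual resource icon moves along with the drag.
function onDragMove(state: DragState, mapX: number, mapY: number): DragState {
  return { ...state, x: mapX, y: mapY };
}

// Step 104: on release, mark the icon at its current map location.
function onDragReleased(state: DragState, markers: Marker[]): Marker[] {
  return [...markers, { iconId: state.iconId, x: state.x, y: state.y }];
}
```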
  • the terminal can be triggered to enter the multi-point marking mode for the target virtual resource icon in various ways. For example, the terminal displays a multi-point marking mode switch in the interface of the virtual scene and, in response to a trigger operation on the multi-point marking mode switch, enters the multi-point marking mode for the target virtual resource icon; that is, when the multi-point marking mode switch is turned on, the terminal enters the multi-point marking mode, and when the multi-point marking mode switch is turned off, the terminal is in the single-point marking mode. For another example, in response to a first trigger operation on the target virtual resource icon in the at least one virtual resource icon, the terminal enters the multi-point marking mode for the target virtual resource icon.
  • the terminal can also enable a multi-point marking mode for the target virtual resource icon; in the multi-point marking mode, the marking operation for the target resource icon can be performed multiple times in succession.
  • in the above manners, the terminal can be controlled to enter the multi-point marking mode for the target virtual resource icon.
  • in the multi-point marking mode, the terminal can mark the target virtual resource icon in the map in the following manner: the terminal controls the target virtual resource icon to be in a cursor-following state; in response to a click operation on a first position in the map, the target virtual resource icon is marked at the first position; and, after the target virtual resource icon is marked at the first position, when a click operation on a second position in the map is received, the target virtual resource icon is marked at the second position.
  • the terminal controls the target virtual resource icon to move with the cursor and, in response to a click operation on position A in the map, marks the target virtual resource icon at position A; the target virtual resource icon then continues to move with the cursor and, in response to a click operation on position B in the map, the target virtual resource icon is marked at position B.
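In the multi-point marking mode, each click can simply append another marker of the followed icon's type; a rough sketch, with hypothetical names, follows.

```typescript
// Illustrative multi-point marking: the icon follows the cursor and every
// click drops another marker of the same type (position A, then B, ...).
interface MultiMarkState {
  active: boolean;
  iconId: string;
  markers: { x: number; y: number }[];
}

function onMapClicked(state: MultiMarkState, x: number, y: number): MultiMarkState {
  if (!state.active) return state;
  return { ...state, markers: [...state.markers, { x, y }] };
}
```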
  • FIG. 13 is a schematic diagram of the multi-point marking mode provided by the embodiment of the present application.
  • in FIG. 13, in the multi-point marking mode, the terminal marks the resource icon at the position shown by number 1, then continues the drag operation on the resource icon and marks the resource icon again at the position shown by number 2, and finally marks the resource icon a third time at the position shown by number 3; that is, in the multi-point marking mode, the resource icon of "resource name 2" is marked three times in succession.
  • multi-point marking of the target virtual resource icon is performed by turning on the multi-point marking mode as described above; that is, the user can mark the target virtual resource icon multiple times in succession through a single trigger on the target virtual resource icon, which effectively improves the marking efficiency of resource icons, reduces the number of operations, and improves the human-computer interaction experience.
  • The terminal can control the exit from the multi-point marking mode in the following manner: on entering the mode, the terminal switches the display style of the target virtual resource icon from a first display style to a second display style; correspondingly, after marking the target virtual resource icon at the second position, the terminal exits the multi-point marking mode in response to a second trigger operation on the target virtual resource icon and switches the display style of the icon from the second display style back to the first display style.
  • That is, the terminal can exit the multi-point marking mode for the target resource icon by responding to a further trigger operation on the target virtual resource icon, and change the display style of the target resource icon accordingly.
  • Here, the first display style is the style of the target resource icon after the multi-point marking mode is entered, and the second display style is the style of the target resource icon after the multi-point marking mode is exited (the normal marking mode or single-point marking mode).
  • This way of exiting the multi-point marking mode allows the mode to be opened and closed flexibly and improves marking efficiency (one possible toggle is sketched below).
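  • One possible shape of such a toggle, sketched under the assumption that a repeated trigger on the same icon flips the mode and its display style (all names, including the style labels, are hypothetical):

```typescript
// Hypothetical sketch of toggling the multi-point marking mode with repeated
// triggers on the same icon, switching its display style at the same time.
type DisplayStyle = 'multiPointStyle' | 'singlePointStyle';

class IconModeController {
  private inMultiPointMode = new Set<string>();   // icon ids currently in multi-point mode

  toggle(iconId: string, setStyle: (id: string, style: DisplayStyle) => void): boolean {
    if (this.inMultiPointMode.has(iconId)) {
      this.inMultiPointMode.delete(iconId);       // second trigger: exit the mode
      setStyle(iconId, 'singlePointStyle');
      return false;
    }
    this.inMultiPointMode.add(iconId);            // first trigger: enter the mode
    setStyle(iconId, 'multiPointStyle');
    return true;
  }
}
```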
  • The terminal can also implement multi-point marking of the target resource icon in the following manner: the terminal displays, in the virtual scene, the drag track corresponding to the drag operation; when the graphic that the drag track forms on the map is a closed graphic, the area corresponding to that closed graphic is taken as the multi-point marking area; correspondingly, in response to the release instruction for the drag operation, the terminal marks the target virtual resource icon at the markable resource positions within the multi-point marking area.
  • In practice the map contains multiple markable resource positions, and the terminal can indicate each of them so that the user knows that the position is a markable resource position.
  • In the multi-point marking mode for the target resource icon, the terminal can therefore select a multi-point marking area according to the drag operation on the target resource icon and, after receiving the release instruction for the drag operation, mark at least one target virtual resource icon within that area at the same time.
  • FIG. 14 is a schematic diagram of a multi-point marking area provided by an embodiment of the present application.
  • In response to the drag operation on the target resource icon shown by number 1, the terminal displays the corresponding drag track (shown by number 2 in the figure); the drag track forms a closed graphic, the resource positions on the map inside that graphic that are in an unmarked state are obtained, and when the terminal receives the release instruction for the drag operation it marks the four target virtual icons shown in the figure.
  • Determining the multi-point marking area in the multi-point marking mode in this way greatly reduces the operation steps during multi-point marking and improves marking efficiency (a point-in-polygon sketch of this area marking follows).
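  • A sketch of this area marking, assuming the closed drag track is treated as a polygon and each candidate position is tested with a standard ray-casting point-in-polygon check (the data shapes are illustrative, not from this application):

```typescript
// Hypothetical sketch of area marking: when the drag track closes into a polygon,
// every markable resource position inside it that is still unmarked receives a marker.
type Pt = { x: number; y: number };

// Standard ray-casting point-in-polygon test.
function pointInPolygon(p: Pt, poly: Pt[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    const crosses =
      (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}

function markInsideClosedTrack(
  track: Pt[],                                    // the recorded drag trajectory (closed)
  positions: { pos: Pt; marked: boolean }[],      // markable resource positions on the map
  mark: (pos: Pt) => void,
): void {
  for (const slot of positions) {
    if (!slot.marked && pointInPolygon(slot.pos, track)) {
      mark(slot.pos);
      slot.marked = true;
    }
  }
}
```

  • In practice the recorded track would first be checked for closure (for example, whether its start and end points fall within a small distance of each other) before being treated as a polygon; that check is omitted here for brevity.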
  • In summary, the target virtual resource icon is controlled to move on the map and, during the move, a partially enlarged interface containing the icon's real-time position is displayed synchronously; in response to the release instruction for the drag operation, the target virtual resource icon is marked on the map. In this way, without changing the scale or position of the map, the combination of quick dragging and partial enlargement allows fast and accurate marking on the map while avoiding accidental touches (a sketch of such a follow-along magnifier is given below).
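  • A minimal sketch of such a follow-along magnifier, assuming it is simply a small high-zoom viewport re-centred on the icon's real-time position on every drag-move event (the names and zoom factor are illustrative):

```typescript
// Hypothetical sketch of the follow-along magnifier: a small viewport rendered at a
// higher zoom factor, re-centred on the dragged icon's real-time map position, while
// the main map keeps its scale and position.
type P = { x: number; y: number };

interface MagnifierView {
  center: P;         // map position the magnifier is centred on
  zoom: number;      // zoom factor relative to the main map
  visible: boolean;
}

// Called on every drag-move event so the magnifier tracks the icon in real time.
function updateMagnifier(dragPos: P, zoom: number = 3): MagnifierView {
  return { center: dragPos, zoom, visible: true };
}

// Called when the drag is released and the marker has been placed.
function hideMagnifier(view: MagnifierView): MagnifierView {
  return { ...view, visible: false };
}
```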
  • FIG. 15 is a schematic diagram of map marking in the related art, in which the player clicks a position on the map to bring up a marking pop-up window. This way of marking positions has the following problems:
  • Too many click operations: marking a certain location on the map requires multiple clicks, and repeating a mark requires repeating even more click steps; meanwhile, because mark points may be scattered across different positions on the map, the player has to keep adjusting the map position for each fixed-point mark. In addition, precise marking is difficult when the map is zoomed out, clicking near teleport or mission points easily causes accidental touches, and marking at different positions on an enlarged map forces the player to repeatedly drag the map.
  • To address this, an embodiment of the present application provides a position marking method in a virtual scene.
  • The method first reduces the player's click operations by relying on dragging alone, which also avoids accidental clicks and lets the player complete the marking process without interruption; at the same time, the method adds a magnifying-glass effect to the large map, so that even when the map scale is small the player can mark accurately within a larger map area without moving the map.
  • FIG. 16 is a flow chart of the position marking operation for virtual resources provided by the embodiment of this application.
  • The player long-presses or taps the mark button on the map (shown by number 1 in the figure) to trigger a position marking instruction, and in response the terminal displays the mark classification list (also referred to as the resource icon list) shown by number 2 in the figure; the player then either drags the mark button into the called-out mark classification list and selects a mark, or taps to call out the list and drags a mark from it (the target mark shown by number 3 in the figure); the player moves the target mark to a specific location on the map while a magnifying glass (shown by number 4 in the figure) follows; finally, the player drags the target mark to the target position in the map and lifts the finger, releasing the drag operation, so that the target mark is displayed at the target position and one marking pass for the resource is complete.
  • The marking flow shown in this embodiment of the present application is fast and convenient: any position on the entire world map can be marked by dragging alone. It also avoids accidentally marking mission points on the map: because most current games have many teleport points and mission points, clicking close to such places to set a mark easily causes accidental touches and hurts the player's operating experience. In addition, accurate marking is possible without moving the map, that is, even when the big world map is at its smallest scale: the player can mark precisely on the complete large map without changing the map scale or manually dragging the map. In other words, through the above marking steps, the combination of quick dragging and the magnifying glass allows fast and accurate marking on the map without changing its scale or position, while avoiding accidental touches.
  • FIG. 17 shows the editing flow for a mark name. The player long-presses the mark button on the large map to generate an editing instruction for the mark name; after receiving the editing instruction, the terminal presents an editing interface for the mark name (shown by number 1 in the figure) on the interface where the map is located; the player renames the selected mark in the editing interface and, after the OK button is clicked, the modified mark name is displayed in the mark classification list.
  • FIG. 18 is a flow chart of the position marking method provided by the embodiment of the present application.
  • The terminal performs step 201: receive the player's trigger operation on the mark button; then step 202: judge whether the trigger operation is a long-press operation. When it is a long press, the terminal performs step 203a: display the mark classification selection list, whose items cannot be clicked or renamed at this point; when it is a click, the terminal performs step 203b: display the mark classification list, whose items can be clicked and renamed. When the terminal receives the player's trigger operation on a target mark in the mark classification selection list, it performs step 204a: judge whether that trigger operation is a long press; if so, the player performs step 205a: drag the target mark to any point on the map. During the drag, the terminal performs step 206a: judge whether the user's finger has left the screen, that is, whether a release instruction for the drag operation has been received. After receiving the release instruction, the terminal performs step 207a: complete the marking, that is, mark the target mark at the target position in the map; while no release instruction has been received, the player continues with step 205a.
  • Alternatively, the player can perform step 204b: click or long-press the mark list to select a target mark, and the terminal judges whether the player performed a long-press operation or a click operation. If it was a long press, step 205a is performed; if it was a click, the player can continue with step 205b: rename the target mark, that is, perform the renaming operation on the target mark according to the operation flow shown in FIG. 16 (the long-press/click branch is sketched below).
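  • The long-press/click branch of this flow could be sketched as follows (the threshold value and callback names are assumptions, not taken from this application):

```typescript
// Hypothetical sketch of the branch in FIG. 18: a long press on a selected mark starts
// the drag-to-mark path (steps 205a-207a), while a plain click opens renaming (step 205b).
const LONG_PRESS_MS = 500;        // assumed threshold, not from the application

function handleMarkItemPress(
  pressDurationMs: number,
  startDragToMark: () => void,    // drag the mark, wait for release, then place it
  startRename: () => void,        // open the rename flow for the selected mark
): void {
  if (pressDurationMs >= LONG_PRESS_MS) {
    startDragToMark();
  } else {
    startRename();
  }
}
```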
  • The position marking apparatus includes a display module 5551, configured to display a map of the virtual scene, and further configured to display at least one virtual resource icon in response to a position marking instruction for a virtual resource in the virtual scene;
  • the control module 5552 is configured to, in response to a drag operation on a target virtual resource icon in the at least one virtual resource icon, control the target virtual resource icon to move in the map as the drag operation is performed;
  • the marking module 5553 is configured to mark the target virtual resource icon at the current location of the target virtual resource icon in the map in response to the release instruction for the drag operation.
  • The display module is further configured to display a position marking function item in the interface of the virtual scene and, in response to a first trigger operation on the position marking function item, receive the position marking instruction for the virtual resource in the virtual scene.
  • The display module is further configured to receive a graphic drawing operation triggered on the interface of the virtual scene and, when the graphic drawn by the graphic drawing operation matches a preset graphic, receive the position marking instruction for the virtual resource in the virtual scene.
  • The display module is further configured to obtain the icon display length of the virtual resource icon and the name display length of the resource name of the virtual resource indicated by the icon and, when the sum of the icon display length and the name display length does not reach the length threshold, to display the resource name of the indicated virtual resource while displaying the virtual resource icon (a sketch of this check follows).
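  • A sketch of this length check, together with the floating-layer fallback used in the related embodiment when the name does not fit and the icon is selected (all identifier names are illustrative):

```typescript
// Hypothetical sketch of the name-display rule: the resource name is shown inline only
// while icon width plus name width stays below the threshold; otherwise it is hidden and
// surfaced in a floating layer when the icon is selected.
interface IconLabelState {
  showInline: boolean;      // name drawn next to the icon
  showFloating: boolean;    // name shown in a floating layer instead
}

function labelState(
  iconDisplayLength: number,
  nameDisplayLength: number,
  lengthThreshold: number,
  selected: boolean,
): IconLabelState {
  const fits = iconDisplayLength + nameDisplayLength < lengthThreshold;
  return { showInline: fits, showFloating: !fits && selected };
}
```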
  • The display module is further configured to control the resource name of the virtual resource indicated by the virtual resource icon to be in an editable state in response to a trigger operation on the virtual resource icon, and to display the edited resource name in response to an editing operation on the resource name in the editable state.
  • The control module is further configured to, in response to a drag operation on a target virtual resource icon in the floating state among the at least one virtual resource icon, control that floating target virtual resource icon to move in the map as the drag operation is performed; correspondingly, the control module is further configured to, in response to a press operation on a target virtual resource icon among the at least one virtual resource icon, acquire operation parameters of the press operation, the operation parameters including at least one of operation duration and pressure, and to control the target virtual resource icon to be in the floating state when the operation duration reaches a duration threshold or the pressure reaches a pressure threshold (see the sketch below).
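  • A sketch of this press-to-float rule, with assumed threshold values for illustration:

```typescript
// Hypothetical sketch of promoting an icon to the floating (draggable) state once the
// press lasts long enough or presses hard enough. Threshold values are assumptions.
interface PressParams {
  durationMs: number;   // how long the icon has been pressed
  pressure: number;     // normalised press pressure, e.g. 0..1
}

const DURATION_THRESHOLD_MS = 400;
const PRESSURE_THRESHOLD = 0.6;

function shouldFloat(p: PressParams): boolean {
  return p.durationMs >= DURATION_THRESHOLD_MS || p.pressure >= PRESSURE_THRESHOLD;
}
```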
  • the display module is further configured to display at least one virtual resource icon in a movable state in response to a position marking instruction for the virtual resource in the virtual scene;
  • The control module is further configured to, in response to a drag operation on the target virtual resource icon among the at least one movable virtual resource icon, generate a movable icon copy corresponding to the target virtual resource icon, and to control that movable icon copy to move in the map as the drag operation is performed.
  • The display module is further configured to, when a virtual resource icon in a disabled state exists among the at least one virtual resource icon, display prompt information in response to a trigger operation on that disabled virtual resource icon; the prompt information is used to indicate that the number of marks for the virtual resource corresponding to the disabled icon has reached the quantity threshold (see the sketch below).
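  • A sketch of this disabled-state handling, under the assumption that each resource carries a mark count and a quota (the field names and prompt text are illustrative):

```typescript
// Hypothetical sketch of the disabled state: once the number of markers for a resource
// reaches its quota, the icon is treated as disabled and a trigger on it only shows a prompt.
interface ResourceQuota {
  markCount: number;    // markers already placed for this resource
  maxMarks: number;     // quantity threshold for this resource
}

function onIconTriggered(
  resourceName: string,
  quota: ResourceQuota,
  showPrompt: (msg: string) => void,
): boolean {
  if (quota.markCount >= quota.maxMarks) {
    showPrompt(`The number of markers for ${resourceName} is full; this icon is unavailable.`);
    return false;       // disabled: nothing to drag
  }
  return true;          // still usable: the icon can be dragged and marked
}
```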
  • The display module is further configured to acquire the real-time position of the target virtual resource icon while controlling the icon to move in the map along with the drag operation, and to synchronously display a partially enlarged interface containing that real-time position.
  • the marking module is further configured to display a target area in the interface of the virtual scene; and in the target area, synchronously display a partially enlarged interface including the real-time position.
  • the marking module is further configured to synchronously display an accompanying floating layer associated with the target virtual resource icon, and display a partially enlarged interface including the real-time position in the accompanying floating layer.
  • The display module is further configured to display an icon floating layer in the interface in response to a position marking instruction for virtual resources in the virtual scene, and to display at least one virtual resource icon in the icon floating layer using a target display style, where the target display style includes at least one of a list display style and a wheel (roulette) display style.
  • The control module is further configured to, in response to a first trigger operation on a target virtual resource icon among the at least one virtual resource icon, enter the multi-point marking mode for the target virtual resource icon;
  • Correspondingly, the marking module is further configured to control the target virtual resource icon to be in the cursor-following state in the multi-point marking mode; to mark the target virtual resource icon at a first position in the map in response to a click operation on that first position; and, after the icon has been marked at the first position, to mark the target virtual resource icon at a second position in the map when a click operation on that second position is received.
  • The control module is further configured to switch the display style of the target virtual resource icon from the first display style to the second display style; correspondingly, the control module is further configured to exit the multi-point marking mode in response to a second trigger operation on the target virtual resource icon and to switch the display style of the target virtual resource icon from the second display style back to the first display style.
  • The control module is further configured to display, in the virtual scene, the drag track corresponding to the drag operation and, when the graphic that the drag track forms on the map is a closed graphic, to use the area corresponding to the closed graphic as the multi-point marking area; correspondingly, the marking module is further configured to, in response to the release instruction for the drag operation, mark the target virtual resource icon at each markable resource position in an unmarked state within the multi-point marking area.
  • By applying this embodiment of the present application, the target virtual resource icon displayed in the virtual scene is dragged, the icon is controlled to move in the map, and the marking of the target virtual resource icon is completed after the release instruction for the drag operation is received. In this way, without changing the scale or position of the map, virtual resource icons can be marked quickly and accurately simply by dragging; compared with the related-art approach of clicking a position on the map to trigger a marking pop-up window, this reduces the number of human-computer interactions, improves the efficiency of marking virtual resource icons in the map, avoids accidental touches, and improves the control efficiency of the virtual scene.
  • An embodiment of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the above-mentioned method for marking positions in a virtual scene in the embodiment of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing executable instructions. When the executable instructions are executed by a processor, the processor is caused to perform the position marking method in a virtual scene provided by the embodiments of the present application, for example, the position marking method in a virtual scene shown in FIG. 3.
  • The computer-readable storage medium can be a memory such as a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; it can also be any of various devices including one of, or any combination of, the above memories.
  • Executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a position marking method, apparatus, device, storage medium, and program product in a virtual scene. The method includes: displaying a map of the virtual scene; displaying at least one virtual resource icon in response to a position marking instruction for a virtual resource in the virtual scene; in response to a drag operation on a target virtual resource icon among the at least one virtual resource icon, controlling the target virtual resource icon to move in the map as the drag operation is performed; and, in response to a release instruction for the drag operation, marking the target virtual resource icon at the position in the map where it is currently located.

Description

虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品
相关申请的交叉引用
本申请基于申请号为202210179779.2、申请日为2022年02月25日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及虚拟化和人机交互技术领域,尤其涉及一种虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品。
背景技术
随着开放性世界主题游戏越来越多,玩家通常都在大世界中进行***推荐的各种各样的游戏任务。游戏中的任务种类多,且每个任务的交互场景和非玩家角色往往各不相同。通常玩家查看任务的场景,需要通过游戏内的地图功能打开地图,并找到地图上对应的任务图标,自动导航或者设置标记点后,跟随游戏内的指引前往任务点。
相关技术中,若想要标记地图的某个位置,需要点击操作多次,若想重复标记,则需要重复更多次的点击步骤,人机交互次数频繁,标记效率低。
发明内容
本申请实施例提供一种虚拟场景中的位置标记方法、装置、电子设备、计算机可读存储介质及计算机程序产品,能够实现针对虚拟资源图标的快速标记,减少人机交互次数,提升虚拟场景的操控效率。
本申请实施例的技术方案是这样实现的:
本申请实施例提供一种虚拟场景中的位置标记方法,包括:
显示所述虚拟场景的地图;
响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;
响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置,标记所述目标虚拟资源图标。
本申请实施例提供一种虚拟场景中的位置标记装置,包括:
显示模块,配置为在虚拟场景的界面中,显示所述虚拟场景的地图;
所述显示模块,还配置为响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;
控制模块,配置为响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
标记模块,配置为响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置,标记所述目标虚拟资源图标。
本申请实施例提供一种电子设备,包括:
存储器,配置为存储可执行指令;
处理器,配置为执行所述存储器中存储的可执行指令时,实现本申请实施例提供的虚拟场景中的位置标记方法。
本申请实施例提供一种计算机可读存储介质,存储有可执行指令,用于引起处理器执行时,实现本申请实施例提供的虚拟场景中的位置标记方法。
本申请实施例提供一种计算机程序产品,包括计算机程序或指令,计算机程序或指令被处理器执行时实现本申请实施例提供的虚拟场景中的位置标记方法。
本申请实施例具有以下有益效果:
应用本申请实施例,通过针对虚拟场景中显示的目标虚拟资源图标的拖动操作,控制目标虚拟资源图标在地图中进行移动,并在接收到针对拖动操作的释放指令后,完成针对目标虚拟资源图标的标记过程,如此,在不需要改变地图比例和位置的情况下,通过拖动的方式即可实现针对虚拟资源图标的快速及精准标记,相较于相关技术中通过点击地图上的位置触发标记弹窗进行标记的方式,减少了人机交互次数,提高地图中虚拟资源图标的标记效率,同时避免了误触,提升虚拟场景的操控效率。
附图说明
图1是本申请实施例提供的虚拟场景中的位置标记***的架构示意图;
图2是本申请实施例提供的实施虚拟场景中的位置标记方法的电子设备的结构示意图;
图3是本申请实施例提供的虚拟场景中的位置标记方法的流程示意图;
图4是本申请实施例提供的虚拟场景地图界面示意图;
图5是本申请实施例提供的图形绘制示意图;
图6A-6B是本申请实施例提供的虚拟资源图标的显示示意图;
图7是本申请实施例提供的目标展示样式示意图;
图8是本申请实施例提供的轮盘展示样式示意图;
图9是本申请实施例提供的信息提示示意图;
图10是本申请实施例提供的资源名称修改示意图;
图11是本申请实施例提供的局部放大界面示意图;
图12是本申请实施例提供的固定式局部放大界面示意图;
图13是本申请实施例提供的多点标记模式示意图;
图14是本申请实施例提供的多点标记区域示意图;
图15是相关技术提供的地图标记示意图;
图16是本申请实施例提供的针对虚拟资源的位置标记操作流程图;
图17是本申请实施例提供的针对标记名称的编辑操作流程图;
图18是本申请实施例提供的位置标记方法流程图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请进行详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但 是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
如果申请文件中出现“第一/第二”的类似描述则增加以下的说明,在以下的描述中,所涉及的术语“第一\第二\第三”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
对本申请实施例进行详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)客户端,终端中运行的用于提供各种服务的应用程序,例如即时通讯客户端、视频播放客户端。
2)响应于,用于表示所执行的操作所依赖的条件或者状态,当满足所依赖的条件或状态时,所执行的一个或多个操作可以是实时的,也可以具有设定的延迟;在没有特别说明的情况下,所执行的多个操作不存在执行先后顺序的限制。
3)虚拟场景,是应用程序在终端上运行时显示(或提供)的虚拟场景。该虚拟场景可以是对真实世界的仿真环境,也可以是半仿真半虚构的虚拟环境,还可以是纯虚构的虚拟环境。虚拟场景可以是二维虚拟场景、2.5维虚拟场景或者三维虚拟场景中的任意一种,本申请实施例对虚拟场景的维度不加以限定。例如,虚拟场景可以包括天空、陆地、海洋等,该陆地可以包括沙漠、城市等环境元素,用户可以控制虚拟对象在该虚拟场景中进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷中的至少一种。虚拟场景可以是以第一人称视角显示虚拟场景(例如以玩家自己的视角来扮演游戏中的虚拟对象);也可以是以第三人称视角显示虚拟场景(例如玩家追着游戏中的虚拟对象来进行游戏);还可以是以鸟瞰大视角显示虚拟场景;其中,上述的视角之间可以任意切换。
以第一人称视角显示虚拟场景为例,在人机交互界面中显示的虚拟场景可以包括:根据虚拟对象在完整虚拟场景中的观看位置和视场角,确定虚拟对象的视场区域,呈现完整虚拟场景中位于视场区域中的部分虚拟场景,即所显示的虚拟场景可以是相对于全景虚拟场景的部分虚拟场景。因为第一人称视角是最能够给用户冲击力的观看视角,如此,能够实现用户在操作过程中身临其境的沉浸式感知。以鸟瞰大视角显示虚拟场景为例,在人机交互界面中呈现的虚拟场景的界面可以包括:响应于针对全景虚拟场景的缩放操作,在人机交互界面中呈现对应缩放操作的部分虚拟场景,即所显示的虚拟场景可以是相对于全景虚拟场景的部分虚拟场景。如此,能够提高用户在操作过程中的可操作性,从而能够提高人机交互的效率。
4)场景数据,表示虚拟场景中的对象在交互过程中受所表现的各种特征,例如,可以包括对象在虚拟场景中的位置。当然,根据虚拟场景的类型可以包括不同类型的特征;例如,在游戏的虚拟场景中,场景数据可以包括虚拟场景中配置的各种功能时需要等待的时间(取决于在特定时间内能够使用同一功能的次数),还可以表示游戏角色的各种状态的属性值,例如包括生命值(也称为红量)、魔法值(也称为蓝量)、状态值、血量等。
5)开放世界游戏,也被称为漫游式游戏(free roam),游戏关卡设计的一种,在其中玩家可自由地在一个虚拟世界中漫游,并可自由选择完成游戏任务的时间点和方 式。
基于上述对本申请实施例中涉及的名词和术语的解释,下面说明本申请实施例提供的虚拟场景中的位置标记***。参见图1,图1是本申请实施例提供的虚拟场景中的位置标记***的架构示意图,为实现支撑一个示例性应用,终端(示例性示出了终端400-1和终端400-2)通过网络300连接服务器200,网络300可以是广域网或者局域网,又或者是二者的组合,使用无线或有线链路实现数据传输。
终端(如终端400-1和终端400-2),配置为基于视图界面接收到进入虚拟场景的触发操作,向服务器200发送虚拟场景的场景数据的获取请求;
服务器200,配置为接收到场景数据的获取请求,响应于该获取请求,返回虚拟场景的场景数据至终端;
终端(如终端400-1和终端400-2),配置为接收到虚拟场景的场景数据,基于得到的场景数据对虚拟场景的画面进行渲染,在图形界面(示例性示出了图形界面410-1和图形界面410-2)呈现虚拟场景的画面;其中,在虚拟场景的画面中呈现相应的地图信息,虚拟场景的画面呈现的内容均基于返回的虚拟场景的场景数据渲染得到;
终端(如终端400-1和终端400-2),还配置为显示虚拟场景的地图;响应于针对虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;响应于针对至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制目标虚拟资源图标伴随拖动操作的执行,在地图中进行移动;响应于针对拖动操作的释放指令,在地图中目标虚拟资源图标当前所处的位置,标记目标虚拟资源图标。如此,能够仅通过针对目标虚拟资源图标的拖动操作,即可完成整个开放世界地图的任意位置标记。
在一些实施例中,本申请实施例可以借助于云技术(Cloud Technology)实现,云技术是指在广域网或局域网内将硬件、软件、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。云技术是基于云计算商业模式应用的网络技术、信息技术、整合技术、管理平台技术、以及应用技术等的总称,可以组成资源池,按需所用,灵活便利。云计算技术将变成重要支撑。技术网络***的后台服务需要大量的计算、存储资源;如当虚拟场景为游戏场景时,相应的游戏为云游戏,终端显示的虚拟场景的画面皆由服务器渲染得到。
在实际应用中,服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式***,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(CDN,Content Delivery Network)、以及大数据和人工智能平台等基础云计算服务的云服务器。终端(如终端400-1和终端400-2)可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能电视、智能手表等,但并不局限于此。终端(如终端400-1和终端400-2)以及服务器200可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
在实际应用中,终端(包括终端400-1和终端400-2)安装和运行有支持虚拟场景的应用程序。该应用程序可以是第一人称射击游戏(FPS,First-Person Shooting game)、第三人称射击游戏、以转向操作为主导行为的驾驶类游戏、多人在线战术竞技游戏(MOBA,Multiplayer Online Battle Arena games)、二维(Two Dimension,简称2D)游戏应用、三维(Three Dimension,简称3D)游戏应用、虚拟现实应用程序、三维地图程序或者多人生存游戏中的任意一种。该应用程序还可以是单机版的应用程序,比如单机版的3D游戏程序。
以电子游戏场景为示例性场景,用户可以提前在该终端上进行操作,该终端检测到用户的操作后,可以下载电子游戏的游戏配置文件,该游戏配置文件可以包括该电 子游戏的应用程序、界面显示数据或虚拟场景数据等,以使得该用户在该终端上登录电子游戏时可以调用该游戏配置文件,对电子游戏界面进行渲染显示。用户可以在终端上进行触控操作,该终端检测到触控操作后,可以确定该触控操作所对应的游戏数据,并对该游戏数据进行渲染显示,该游戏数据可以包括虚拟场景数据、该虚拟场景中虚拟对象的行为数据等。
参见图2,图2是本申请实施例提供的实施虚拟场景中的位置标记方法的电子设备的结构示意图。在实际应用中,电子设备500可以为图1示出的服务器或终端,以电子设备500为图1示出的终端为例,对实施本申请实施例的虚拟场景中的位置标记方法的电子设备进行说明,本申请实施例提供的电子设备500包括:至少一个处理器510、存储器550、至少一个网络接口520和用户接口530。电子设备500中的各个组件通过总线***540耦合在一起。可理解,总线***540配置为实现这些组件之间的连接通信。总线***540除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图2中将各种总线都标为总线***540。
处理器510可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口530包括使得能够呈现媒体内容的一个或多个输出装置531,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口530还包括一个或多个输入装置532,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器550可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器550可选地包括在物理位置上远离处理器510的一个或多个存储设备。
存储器550包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器550旨在包括任意适合类型的存储器。
在一些实施例中,存储器550能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作***551,包括配置为处理各种基本***服务和执行硬件相关任务的***程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块552,配置为经由一个或多个(有线或无线)网络接口520到达其他计算设备,示例性的网络接口520包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块553,配置为经由一个或多个与用户接口530相关联的输出装置531(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作***设备和显示内容和信息的用户接口);
输入处理模块554,配置为对一个或多个来自一个或多个输入装置532之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的虚拟场景中的位置标记装置可以采用软件方式实现,图2示出了存储在存储器550中的虚拟场景中的位置标记装置555,其可以是程序和插件等形式的软件,包括以下软件模块:显示模块5551、控制模块5552 和标记模块5553,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或拆分,将在下文中说明各个模块的功能。
在另一些实施例中,本申请实施例提供的虚拟场景中的位置标记装置可以采用软硬件结合的方式实现,作为示例,本申请实施例提供的虚拟场景中的位置标记装置可以是采用硬件译码处理器形式的处理器,其被编程以执行本申请实施例提供的虚拟场景中的位置标记方法,例如,硬件译码处理器形式的处理器可以采用一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、现场可编程门阵列(FPGA,Field-Programmable Gate Array)或其他电子元件。
基于上述对本申请实施例提供的虚拟场景中的位置标记***及电子设备的说明,下面说明本申请实施例提供的虚拟场景中的位置标记方法。在一些实施例中,本申请实施例提供的虚拟场景中的位置标记方法可由服务器或终端单独实施,或由服务器及终端协同实施。在一些实施例中,终端或服务器可以通过运行计算机程序来实现本申请实施例提供的虚拟场景中的位置标记方法。举例来说,计算机程序可以是操作***中的原生程序或软件模块;可以是本地(Native)应用程序(APP,Application),即需要在操作***中安装才能运行的程序,如支持虚拟场景的客户端,如游戏APP;也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
下面以终端实施为例说明本申请实施例提供的虚拟场景中的位置标记方法。参见图3,图3是本申请实施例提供的虚拟场景中的位置标记方法的流程示意图,本申请实施例提供的虚拟场景中的位置标记方法包括:
在步骤101中,终端显示所述虚拟场景的地图。
在实际实施时,终端上可以安装有支持虚拟场景的应用客户端(比如游戏客户端),还可以是集成有虚拟场景功能的客户端(比如即时通信客户端、直播客户端、教育客户端等),当用户打开终端上的应用客户端,且终端运行该应用客户端时,用户可以基于该客户端所显示的虚拟场景的画面,进行虚拟对象间的交互;例如,当客户端为游戏客户端时,用户可基于该游戏客户端显示的游戏画面,执行游戏场景中游戏角色(虚拟对象)间的交互(如进行虚拟对战)。在一些实施例中,终端呈现虚拟场景(如开放世界冒险游戏)的界面,并在虚拟场景的界面中,呈现相应的地图(场景地图),在虚拟场景对应的地图中存在大量的虚拟资源供玩家采集,玩家也可以在地图的位置点标记各类虚拟资源。常见的虚拟资源可以包括宝箱、能量棒等。
示例性地,参见图4,图4是本申请实施例提供的虚拟场景地图界面示意图,图中针对虚拟场景的地图中可以展示虚拟资源位置点。
在步骤102中,响应于针对虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标。
在实际实施时,终端在接收到针对虚拟场景中虚拟资源的位置标记指令后,可以显示至少一个虚拟资源图标。
针对位置标记指令的触发方式进行说明。在一些实施例中,终端在显示至少一个虚拟资源图标之前,可以通过以下方式,接收到针对虚拟场景中虚拟资源的位置标记指令:终端在虚拟场景的界面中,显示位置标记功能项;响应于针对位置标记功能项的第一触发操作,接收到针对虚拟场景中虚拟资源的位置标记指令。
在实际实施时,虚拟场景的界面中展示位置标记功能项,其中,位置标记功能项也即控件,可以有多种呈现形式,如图形按钮、进度条、菜单、列表等,本申请实施例对此不做限制。在用户触发位置标记功能项时,终端可以接收到针对虚拟资源的位置标记指令。
示例性地,参见图4,图中编号1示出的位置标记按钮,用户点击该位置标记按钮时,终端即可接收到位置标记指令。
在一些实施例中,终端在显示至少一个虚拟资源图标之前,可以通过以下方式接收针对虚拟资源的位置标记指令:终端接收到基于虚拟场景的界面触发的图形绘制操作;当图形绘制操作所绘制的图形和预设图形相匹配时,接收到针对虚拟场景中虚拟资源的位置标记指令。
在实际实施时,用户可以在终端屏幕上的任意位置,执行针对虚拟场景界面的图形绘制操作。终端响应于用户进行的图形绘制操作,获取图形绘制操作的各点的位置信息,生成图形绘制操作所绘制的图形,并将绘制的图形与图形库中预先存储的用于触发位置标记指令的预设图形进行匹配,若存在至少一个预设图形与绘制的图形匹配成功(即相似度达到相似度阈值,该相似度阈值的大小具体可依据实际需要进行设定,如0.8),说明图形绘制操作结束后,能够触发针对虚拟资源的位置标记指令。或者终端获取用户执行图形绘制操作时的绘制轨迹,将该绘制轨迹所形成的图案与预先存储的图形进行匹配,当匹配成功时,触发针对虚拟资源的位置标记指令。另外,终端还可以通过部署在终端的针对图形进行分类的、基于人工智能的多分类模型对绘制的图形进行预测,多分类模型的输入信息是绘制图形的位置信息,输出信息是绘制图形属于预设图形库中的图形类别。
示例性地,参见图5,图5是本申请实施例提供的图形绘制示意图。用户针对虚拟场景界面执行图形绘制操作,得到图中编号1所示的图形(图形的样式可以是多种,如圆形、三角形等)。需要说明的是,为了不影响用户的观看体验,图形绘制操作得到的图形可以不在虚拟场景界面中显示,即图中编号1所示的图形在实际的虚拟直播间界面中可以不显示。
上述通过图形绘制触发位置标记指令的方式,能够有效减少虚拟场景界面中的控件占屏比,节省屏幕空间占用率。
在一些实施例中,终端可以通过以下方式显示虚拟资源图标所指示的虚拟资源的资源名称:终端获取虚拟资源图标的图标显示长度,以及虚拟资源图标所指示的虚拟资源的资源名称的名称显示长度;当图标显示长度与名称长度的和未达到长度阈值时,在显示虚拟资源图标的过程中,显示虚拟资源图标所指示的虚拟资源的资源名称。
在实际实施时,终端接收到通过前述触发方式触发的针对虚拟资源的位置标记指令,根据终端屏幕的显示的虚拟场景的实际情况,在虚拟场景界面中显示至少一个虚拟资源图标以及虚拟资源图标所指示的虚拟资源的资源名称。需要说明的是,虚拟资源图标以及虚拟资源图标所指示的虚拟资源的资源名称可以同时显示,也可以仅显示虚拟资源图标。终端可以根据虚拟资源图标的图标长度以及资源名称的名称长度,确定是否同时显示虚拟资源图标以及相应的资源名称。终端根据虚拟场景中各元素(控件、标题等)的分布情况,确定同步显示虚拟资源图标以及相应的资源名称的长度阈值,当图标长度和资源名称长度的和为达到阈值时,同时显示虚拟资源图标以及相应的资源名称。
示例性地,图6A-6B是本申请实施例提供的虚拟资源图标的显示示意图,参见图6A,图6A中同时显示了虚拟资源图标以及相应的资源名称。
上述同时显示虚拟资源图标以及相应的资源名称的方式,能够直观显示虚拟资源 名称,提高人机交互体验。
在一些实施例中,终端可以通过以下方式显示虚拟资源图标所指示的虚拟资源的资源名称:当图标显示长度与名称显示长度的和达到宽度阈值时,终端在显示虚拟资源图标的过程中隐藏相应虚拟资源的资源名称,并当虚拟资源图标处于选中状态时,采用悬浮层的形式显示虚拟资源图标所指示的虚拟资源的资源名称。如此,当图标显示长度与名称显示长度的和达到宽度阈值时,通过隐藏资源名称的方式减少显示资源占用,释放显示空间,而当虚拟资源图标处于选中状态时,采用浮层的方式显示该被选中的虚拟资源图标的资源名称,丰富了信息的显示方式,使得资源名称的显示更为灵活,提高了显示资源的利用率。
在实际实施时,当终端判断图标显示长度与名称显示长度的和达到宽度阈值时,可以仅显示虚拟资源图标,即在显示虚拟资源图标时,隐藏相应虚拟资源的名称,当用户将光标移动到任意一个虚拟资源图标时,可以以悬浮层的相形式展示当前虚拟资源图标所指示的虚拟资源的资源名称。需要说明的是,终端也可以提供针对虚拟资源图标显示模式的设置界面,并采用设置的目标显示模式在虚拟界面中显示虚拟资源图标。
示例性地,参见图6B,图6B中编号1示出的虚拟资源图标的展示方式,图中仅展示了虚拟资源图标,隐藏了虚拟资源的资源名称;当光标移动到任意虚拟资源图标时,以悬浮层的形式展示资源名称,如图中编号2示出的资源名称展示方式。
上述仅展示虚拟资源图标方式,能够减少虚拟界面的空间利用率,提高人机交互体验。
在一些实施例中,终端可以通过以下方式展示虚拟资源图标:终端响应于针对虚拟场景中虚拟资源的位置标记指令,在虚拟场景的界面中,显示图标悬浮层,并在图标悬浮层中,采用目标展示样式展示至少一个虚拟资源图标;其中,目标展示样式包括列表展示样式、轮盘展示样式中至少之一。
在实际实施时,终端提供设置展示至少一个虚拟资源图标的展示样式的设置界面,在设置界面中,呈现至少一个展示样式选项,包括但不限于列表展示样式选项、轮盘展示样式选项等,终端响应于针对至少一个展示样式选项中目标展示样式选项的选择操作,确定目标展示样式选项对应的目标展示样式作为至少一个虚拟资源图标的展示样式。终端确定目标展示样式后,可以以悬浮层的形式,在虚拟场景界面中,采用目标展示样式展示至少一个虚拟资源图标。
示例性地,参见图7,图7是本申请实施例提供的目标展示样式示意图,图中展示了两个目标展示样式选项,并示出了各目标展示样式选项对应的示例图。当目标展示样式为列表展示样式时,至少一个虚拟资源图标的展示样式如图6A-6B所示;参见图8,图8是本申请实施例提供的轮盘展示样式示意图,图8中,终端确定针对至少一个虚拟资源图标的目标展示样式为轮盘展示样式,终端响应于针对位置功能项(图中编号1示出)的长按操作,呼出用于展示至少一个虚拟资源图标的轮盘(图中编号2示出),将光标从位置功能项滑动至轮盘中的任一虚拟资源图标时,可以采用悬浮层的形式展示虚拟资源图标所指示的虚拟资源的资源名称。需要说明的是,轮盘展示样式的至少一个虚拟资源图标,可以处于虚拟界面中的任意位置。
上述多种针对至少一个虚拟资源图标的展示样式,能够满足用户个性化需求,提高人机交互体验。
在一些实施例中,终端可以通过以下方式显示虚拟资源图标:当至少一个虚拟资源图标中存在处于禁用状态的虚拟资源图标时;终端响应于针对处于禁用状态的虚拟资源图标的触发操作,显示提示信息;其中,提示信息,用于提示针对处于禁用状态 的虚拟资源图标所对应虚拟资源的标记数量已达数量阈值。
在实际实施时,针对任意虚拟资源图标所对应的虚拟资源的标记数量都是有数量要求的,当虚拟场景的地图中针对某一个虚拟资源的标记数量已达对应的数量阈值时,则该虚拟资源图标所指示的虚拟资源不能在继续被标记,相应的,该虚拟资源图标可以处于禁用状态,用于提示用户在地图中,针对当前虚拟资源图标所指示的虚拟资源已达数量阈值。
示例性地,参见图9,图9是本申请实施例提供的信息提示示意图,图中,采用列表展示样式同时展示虚拟资源图标以及资源名称(图中编号1示出),其中,“资源名称1”和“资源名称2”处于禁用状态,其他虚拟资源图标处于可用状态。终端响应于针对处于禁用状态的“资源名称1”的触发操作(点击操作),在虚拟场景的界面中呈现图中编号2示出的悬浮层,悬浮层中展示提示信息“资源名称1的标记数量已满,当前图标不可用”。
在一些实施例中,终端可以通过以下方式编辑虚拟资源的资源名称:终端响应于针对虚拟资源图标的触发操作,控制虚拟资源图标所指示的虚拟资源的资源名称处于可编辑状态;响应于针对处于可编辑状态的资源名称的编辑操作,显示编辑后的资源名称。
在实际实施时,终端可以提供针对虚拟资源图标所指示的资源名称的编辑操作,终端响应于针对资源名称的触发操作(如针对虚拟资源名称的双击操作),控制资源名称处于可编辑状态,接收到针对选中的资源名称的编辑操作(重新输入),确定修改后的资源名称。
示例性地,参见图10,图10是本申请实施例提供的资源名称修改示意图,图中,终端接收到针对“资源名称4”的双击操作,控制“资源名称4”处于可编辑状态,此时,光标在“资源名称4”所处的输入框内闪烁,提示用户可以输入新的资源名称,输入完成后,展示新的资源名称。
在步骤103中,响应于针对至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制目标虚拟资源图标伴随所述拖动操作的执行,在地图中进行移动。
在实际实施时,终端可以响应于针对目标虚拟资源图标的拖动操作,控制虚拟资源图标随着拖动操作的执行,在地图中,沿拖动操作的拖动轨迹进行移动。
在一些实施例中,终端可以通过以下方式控制目标虚拟资源图标,在地图中进行移动:终端响应于针对至少一个虚拟资源图标中处于悬浮状态的目标虚拟资源图标的拖动操作,控制处于悬浮状态的目标虚拟资源图标伴随拖动操作的执行,在地图中进行移动。
在实际实施时,终端可以对处于悬浮状态的目标虚拟资源图标进行拖动操作,其中,控制虚拟资源图标处于悬浮状态的方式有多种。在一些实施例中,终端可以提供针对虚拟资源图标是否悬浮状态的设置。
在一些实施例中,终端可以通过以下方式控制虚拟资源图标处于悬浮状态:终端响应于针对至少一个虚拟资源图标中目标虚拟资源图标的按压操作,获取按压操作的操作参数,其中,操作参数包括以下至少之一:操作时长、压力大小;当操作时长达到时长阈值或压力大小达到压力阈值时,控制目标虚拟资源图标处于悬浮状态。
在实际实施时,终端可以基于针对虚拟资源图标的按压操作,获取按压操作的操作时长、压力大小等参数,并将操作时长与预设时长阈值进行比较,或将压力大小与预设压力大小进行比较,当操作时长大于预设时长阈值时,或压力大小大于压力阈值时,可以控制目标虚拟资源图标处于悬浮状态。
针对上述通过执行针对虚拟资源图标的按压操作,触发目标虚拟资源图标处于悬 浮状态的方式,在实际应用中,当用户想要实现对目标虚拟资源图标在地图中的标记时,先对其执行按压操作以使其处于悬浮状态,进而拖动处于悬浮状态的目标虚拟资源图标在地图中移动,以移动至用户想要标记的位置;目标虚拟资源图标悬浮及被拖动的过程用户可一气呵成,操作简单,方便用户实现针对拟资源图标的快速选址及标记。
示例性地,参见图8,终端响应于针对编号1示出的位置标记按钮的长按操作,呼出展示虚拟资源图标的轮盘(图中编号2示出),此时,将长按操作直接切换为从位置标记按钮处开始的滑动操作,滑动到“资源名称2”的图标(目标虚拟资源图标)上,若“资源名称2”的图标已经处于悬浮状态,则将滑动操作切换为针对目标资源图标的拖动操作(图中编号3示出),控制“资源名称2”的图标伴随着拖动操作的执行,在地图上移动;若“资源名称2”的图标处于非悬浮状态(固定状态),此时,滑动到“资源名称2”的图标上,并对“资源名称2”的图标执行按压操作,直至“资源名称2”的图标处于悬浮状态为止,然后,将针对“资源名称2”的图标的长按操作切换为针对“资源名称2”的图标的拖动操作,从而控制控制“资源名称2”的图标伴随着拖动操作的执行,在地图上移动。
上述控制目标虚拟资源图标在地图中进行移动的方式,通过不间断的连续动作,能够快速便捷的实现目标虚拟资源图标在地图中的移动的操作,提升人机交互体验。
在一些实施例中,终端还可以通过以下方式控制目标虚拟资源图标,在地图中进行移动:终端响应于针对虚拟场景中虚拟资源的位置标记指令,显示至少一个处于可移动状态的虚拟资源图标;相应的,响应于针对至少一个处于可移动状态的虚拟资源图标中目标虚拟资源图标的拖动操作,生成对应目标虚拟资源图标的处于可移动状态的图标副本;控制处于可移动状态的图标副本,伴随拖动操作的执行,在地图中进行移动。
在实际实施时,当虚拟资源图标处于可移动状态时,还可以通过创建虚拟资源图标的图标副本,响应于针对图标副本的拖动操作,控制图标副本在地图中进行移动,如此,能够保证原虚拟资源图标的不缺位,保证虚拟资源图标进行展示时的美观性。
在一些实施例中,终端可以通过以下方式显示目标资源图标所处的实时位置:终端在控制目标虚拟资源图标伴随所述拖动操作的执行,在地图中进行移动的过程中,获取目标虚拟资源图标的实时位置;同步显示包括实时位置的局部放大界面。
在实际实施时,当终端控制目标虚拟资源图标在地图中进行移动时,为了能够在不改变地图比例或者不需要手动拖拽移动地图的情况下,实时展示目标虚拟资源图标在地图中所处的实时位置,可以在地图中,同步显示包括目标虚拟资源的实时位置的局部放大界面。
示例性地,参见图11,图11是本申请实施例提供的局部放大界面示意图,图中,终端响应于针对目标资源图标(图中编号1示出)的拖动操作,控制目标资源图标在地图中进行移动,同时为了确定目标资源在地图中的实时位置,在虚拟场景的界面中,展示包括实时位置的局部放大界面(图中编号2示出),在编号2示出的局部放大界面中,可以清晰的展示当前目标资源图标所处的位置以及所处位置周边的情况。
上述通过局部放大界面展示目标资源图标所处实时位置的方式,能够在不移动地图且不改变地图比例尺的情况下,即在地图比例最小的情况下,精准显示虚拟资源图标所处的实时位置,不需要用户自行改变地图比例或者手动拖拽移动地图,提升人机交互体验。
在一些实施例中,终端可以通过以下方式显示局部放大界面:终端在虚拟场景的界面中,显示目标区域;在目标区域中,同步显示包括实时位置的局部放大界面。
在实际实施时,终端可以采用固定式的方式,同步显示包括实时位置的局部放大界面。终端可以提供针对局部放大界面的展示形式设置界面,并提供至少两种展示形式选项,如跟随式和固定式,响应于针对固定式选项的选择操作,在虚拟场景的界面中展示固定式的局部放大界面。即在虚拟场景界面中的合适区域,显示包括目标虚拟图标所处实时位置的局部放大界面。需要说明的是,目标区域的形状可以是四边形、圆形等,另外,目标区域是可以进行移动的,即可以根据用户的实际需求,移动目标区域的位置。
示例性的,参见图12,图12是本申请实施例提供的固定式局部放大界面示意图,图中,终端响应于针对编号1示出的目标虚拟资源图标的拖动操作,在编号2所示的目标区域(圆形区域)内,同步展示目标虚拟图标所处实时位置的局部放大界面。
在一些实施例中,终端还可以通过以下方式显示局部放大界面:终端同步显示与目标虚拟资源图标相关联的伴随浮层,并在伴随浮层内展示包括实时位置的局部放大界面。
在实际实施时,终端可以采用跟随式的方式,同步显示包括实时位置的局部放大界面。终端可以提供针对局部放大界面的展示形式设置界面,并提供至少两种展示形式选项,如跟随式和固定式,响应于针对跟随式选项的选择操作,在虚拟场景的界面中展示跟随式的局部放大界面。
示例性地,参见图11,图中编号2示出的局部放大界面即为跟随式,即终端在控制“资源名称2”对应的虚拟资源图标在地图中进行移动时,同步展示与“资源名称2”对应的虚拟资源图标相关联的伴随浮层,该伴随浮层可以随着“资源名称2”对应的虚拟资源图标的移动而移动,并在该伴随浮层内实时展示包括资源名称2”对应的虚拟资源图标所处的实时位置的局部放大界面。
在步骤104中,响应于针对拖动操作的释放指令,在地图中目标虚拟资源图标当前所处的位置,标记目标虚拟资源图标。
在实际实施时,终端控制目标虚拟资源图标在地图中进行移动时,响应于针对拖动操作的释放指令,即在目标虚拟资源图标当前所处的位置上,标记目标虚拟资源图标。此时,这种标记模式,可以理解为单点标记模式,即针对目标虚拟资源图标的标记操作每次只标记一个位置。
在一些实施例中,可以通过多种方式触发终端进入针对目标虚拟资源图标的多点标记模式,例如,终端在虚拟场景的界面中显示多点标记模式开关,响应于针对该多点标记模式开关的触发操作,终端进入针对目标虚拟资源图标的多点标记模式,也即,当多点标记模式开关被打开时,终端进入多点标记模式,当多点标记模式开关关闭时,终端处于单点标记模式;再如,终端响应于针对至少一个虚拟资源图标中目标虚拟资源图标的第一触发操作,控制进入针对目标虚拟资源图标的多点标记模式。
在实际实施时,终端还可以开启针对目标虚拟资源图标的多点标记模式,在多点标记模式下,针对目标资源图标的标记操作可以连续执行多次。终端响应于针对目标资源图标的触发操作(比如双击等),可以控制进入针对目标虚拟资源图标的多点标记模式。
相应的,在一些实施例中,终端可以通过以下方式在地图中标记目标虚拟资源图标:在多点标记模式下,终端控制目标虚拟资源图标处于光标跟随状态;响应于针对地图中第一位置的点击操作,在地图中第一位置处标记目标虚拟资源图标,并在所述第一位置处标记目标虚拟资源图标之后,接收到针对地图中第二位置的点击操作时,在第二位置处标记所述目标虚拟资源图标。
在实际实施时,在多点标记模式下,终端控制目标虚拟资源图标随光标进行移动, 并响应于针对地图中位置A的点击操作,在位置A处标记目标虚拟资源图标,随后目标虚拟资源图标随光标继续移动,并响应于针对地图中位置B的点击操作时,在位置B处标记目标虚拟资源图标。
示例性地,参见图13,图13是本申请实施例提供的多点标记模式示意图,终端响应于针对“资源名称2”的资源图标的双击操作,控制进行多点标记模式,在编号1示出的位置处标记该资源图标,随后,继续执行针对该资源图标的拖动操作,在编号2示出的位置处再次标记该资源图标,最后,在编号3示出的位置处,第三次标记该资源图标,即在多点标记模式下,针对“资源名称2”的资源图标连续标记了三次。
上述通过开启多点标记模式的方式,对目标虚拟资源图标进行多点标记,也即,用户可以通过针对目标虚拟资源图标的一次触发,实现针对目标虚拟资源图标的连续的多次标记,有效提高针对资源图标的标记效率,减少操作次数,提升人机交互体验。
在一些实施例中,终端可以通过以下方式控制退出多点标记模式:终端将目标虚拟资源图标的显示样式由第一显示样式切换为第二显示样式;相应的,终端在第二位置处标记目标虚拟资源图标之后,响应于针对目标虚拟资源图标的第二触发操作,控制退出多点标记模式,并将目标虚拟资源图标的显示样式由第二显示样式切换为第一显示样式。
在实际实施时,终端可以通过再次响应于针对目标虚拟资源图标的其他触发操作,控制退出针对目标资源图标的多点标记模式,并更改目标资源图标的显示样式,这里,第一显示样式为进入多点标记模式下,目标资源图标的样式;第二显示样式为退出多点标记模式下(正常标记模式或单点标记模式),目标资源图标的样式。
示例性地,参见图13,在针对图中目标资源图标(图中编号4示出)的拖动操作过程中,当图标轮盘中的目标资源图标,再次接收到点击操作时,退出针对该目标资源图标的多点标记模式,并将该目标资源图标的显示样式还原回第一显示样式。
上述退出多点标记模式的方式,能够灵活控制多点标记模式的开启和关闭,提升标记效率。
在一些实施例中,终端可以通过以下方式实现针对目标资源图标的多点标记:终端在虚拟场景中显示拖动操作对应的拖动轨迹;当拖动轨迹在地图上对应的图形是闭合图形时,将闭合图形对应的区域作为多点标记区域;相应的,终端响应于针对拖动操作的释放指令,在多点标记区域内,各处于未标记状态的可标记资源位置处,分别标记目标虚拟资源图标。
这里,在实际应用中,地图中存在多个可标记资源位置,终端可对各可标记资源位置进行可标记指示,使得用户获知该位置为可标记资源位置,当然,终端亦可对用户已标记的不可标记资源位置进行指示,以使用户了解地图中可标记资源位置及不可标记资源位置的整体情况,为了实现对多个可标记资源位置的快速标记,可通过绘制闭合图形的方式,用户通过拖动目标资源图标,在地图上绘制图形是闭合图形的拖动轨迹,以实现对多个可标记资源位置的选定,进而实现资源图标的快速标记,提高针对资源图标的标记效率。
在实际实施时,在针对目标资源图标的多点标记模式下,终端还可以根据针对目标资源图标的拖动操作,选定多点标记区域,并在接收到针对拖动操作的释放指令后,在多点标记区域,同时标记至少一个目标虚拟资源图标。
示例性地,参见图14,图14是本申请实施例提供的多点标记区域示意图,图中,终端响应于针对编号1示出的目标资源图标的拖动操作,显示该拖动操作对应的拖动轨迹(图中编号2示出的),其中拖动轨迹能够构成闭合图形,在该闭合图形内,获取地图上的各处于未标记状态的资源位置,终端接收针对拖动操作的释放指令,标记 图中示出的4处目标虚拟图标。
上述在多点标记模式下确定多点标记区域的方式,能够大大减少多点标记时的操作步骤,提高标记效率。
应用本申请实施例,通过针对目标虚拟资源图标处于不同显示状态下的不同触发操作,控制目标虚拟资源图标在地图中进行移动,并在移动过程中,同步展示包括目标虚拟资源图标所处实时位置的局部放大及界面,响应于针对拖动操作的释放指令,在地图中标记目标虚拟资源图标,如此,在不需要改变地图比例和位置的情况下,通过快捷拖拽和局部放大的组合方式在地图上进行快速精准的标记,同时避免误触的情况。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。
相关技术中,参见图15,图15是相关技术提供的地图标记示意图,图中,在开放世界对应的虚拟场景中,玩家点击地图上的位置触发标记弹窗进行标记,然而该种位置标记方式往往存在以下问题:
点击操作次数过多:如果想要标记地图某个位置,需要点击操作多次,如果想重复标记,则需要重复更多次的点击步骤,同时由于标记点可能会散落在地图的不同位置,玩家则需要不停地调整地图位置进行定点标记;
无法精准标记:玩家需要在大世界地图的不同位置进行标记时,如果缩小地图,对于需要精准标记的玩家,难以控制标记落在精准地点;
容易造成误触点到其他的地点:大多数的游戏由于传送点和任务点较多,当点击靠近这些地点的地方想进行标记时,玩家很容易造成误触,影响玩家的操作体验;当地图扩大后,需要频繁的改变地图位置进行标记:
当玩家需要在大世界地图的不同位置进行标记时,如果缩小地图,则难以标记精准地点,如果放大地图,则需要不停拖拽地图改变位置,增加了玩家的操作。
基于此,本申请实施例提供一种虚拟场景中的位置标记方法,该方法首先是尝试仅使用拖拽的方式来减少玩家的点击操作,同时也能够避免点击误触的情况,能够让玩家不受打断地完成标记过程;同时该方法中增加了大地图中的放大镜效果,保证玩家在地图比例较小的情况下,也可以在不移动地图的情况下在一个较大的地图区域进行精准标记。
接下来从产品侧说明本申请实施例提供的位置标记方法,在一些实施例中,参见图16,图16是本申请实施例提供的针对虚拟资源的位置标记操作流程图,玩家长按或点击地图上的标记按钮(图中编号1示出),触发位置标记指令,终端响应于该位置标记指令,展示图中编号2示出标记分类列表(也可称为资源图标列表);然后玩家将标记按钮拖拽到呼出标记分类列表中并选中一个标记或者点击呼出列表后拖拽列表中的标记(图中编号3示出的目标标记);玩家将目标标记移动到地图的具体地点,并且有放大镜(图中编号4示出)跟随;最后,玩家将目标标记拖拽到地图中的目标位置,松开手指,释放针对目标标记的拖拽操作,在目标位置处显示目标标记,针对资源的一次标记过程完成。
本申请实施例示出的标记操作流程,一方面操作快捷方便,仅需拖拽即可完成整个世界大地图的任意位置标记。另一方面,避免了在地图上标记误触任务点的情况,由于目前大多数的游戏由于传送点和任务点较多,当点击靠近这些地点的地方想进行标记时,玩家很容易造成误触,影响玩家的操作体验。另外,还能够在不移动地图,即在大世界地图比例最小的情况下精准标记:玩家在使用时,可以直接在完整的大地图上进行精准标记,而不需要自行改变地图比例或者手动拖拽移动地图。即通过上述标记步骤,在不需要改变地图比例和位置的情况下,通过快捷拖拽和放大镜的组合方 式在地图上进行快速精准的标记,同时避免误触的情况。
接下来说明本申请实施例提供的针对标记名称的编辑操作流程,参见图17,图17是本申请实施例提供的针对标记名称的编辑操作流程图,玩家长按大地图上的标记按钮,生成针对标记名称的编辑指令,终端接收到该编辑指令后,在地图所处界面上呈现针对标记名称的编辑界面(图中编号1示出),玩家在编辑界面中对选中的标记名称进行重命名,点击确定按钮后,修改后的标记名称显示在标记分类列表中。
接下来,从技术侧说明本申请实施例提供的位置标记方法,参见图18,图18是本申请实施例提供的位置标记方法流程图,过程如下:终端执行步骤201:接收到玩家针对标记按钮的触发操作;执行步骤202:判断触发操作是否为长按操作,当触发操作是长按操作时,执行步骤203a:显示标记分类选择列表,此时,该列表中的各个标记项不可点击、不可重命名;当触发操作是点击操作时,执行步骤203b:显示标记分类列表,此时,列表中的各标记项可点击、可重命名;当终端接收到玩家针对标记分类选择列表中的目标标记的触发操作时,执行步骤204a:判断触发操作是否为长按操作,当判断是长按操作时,玩家执行步骤205a:拖动目标标记到地图任一点,在拖动过程中,终端执行步骤206a:判断用户手指是否离开屏幕,即是否接收到针对拖动操作的释放指令,终端接收到释放指令后,执行步骤207a:完成标记,即在地图中的目标位置处标记目标标记;终端为接收到释放指令时,玩家继续执行步骤205a。另外,玩家还可以执行步骤204b:点击或长按标记列表选择目标标记,终端判断玩家执行的是长按操作还是点击操作,若是长按操作,则可以执行步骤205a,若是点击操作,玩家可继续执行步骤205b:重命名目标标记,即按照图16示出的操作流程,对目标标记进行重命名操作。
应用本申请实施例具有以下效果:
1.操作简单:快速拖拽即可进行标记,不必重复点击。
2.能够避免标记误触,提升玩家体验:能够完全避免因为点击位置离传送点过近而造成的误触情况,提高了玩家的标记体验;同时,也能够避免因为点击地图而弹出标记窗口需要重新关闭的多余操作。
3.能够最大地图与精准标记结合,减少玩家操作:能够在显示最大面积地图的情况下进行最精准的标记,减少玩家移动地图和收缩地图的操作。
4.理解成本较低:拖拽的方式能够让玩家快速上手并且学习。
可以理解的是,在本申请实施例中,涉及到用户信息等相关的数据,当本申请实施例运用到产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
下面继续说明本申请实施例提供的虚拟场景中的位置标记装置555的实施为软件模块的示例性结构,在一些实施例中,如图2所示,存储在存储器550的虚拟场景中的位置标记装置555中的软件模块可以包括:
显示模块5551,配置为显示所述虚拟场景的地图;
所述显示模块5551,还配置为响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;
控制模块5552,配置为响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
标记模块5553,配置为响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置,标记所述目标虚拟资源图标。
在一些实施例中,所述显示模块,还配置为在所述虚拟场景的界面中,显示位置 标记功能项;响应于针对所述位置标记功能项的第一触发操作,接收到针对所述虚拟场景中虚拟资源的位置标记指令。
在一些实施例中,所述显示模块,还配置为接收到基于所述虚拟场景的界面触发的图形绘制操作;当所述图形绘制操作所绘制的图形和预设图形相匹配时,接收到针对所述虚拟场景中虚拟资源的位置标记指令。
在一些实施例中,所述显示模块,还配置为获取所述虚拟资源图标的图标显示长度,以及所述虚拟资源图标所指示的虚拟资源的资源名称的名称显示长度;当所述图标显示长度与所述名称长度的和未达到长度阈值时,在显示所述虚拟资源图标的过程中,显示所述虚拟资源图标所指示的虚拟资源的资源名称。
在一些实施例中,所述显示模块,还配置为当所述图标显示长度与所述名称显示长度的和达到宽度阈值时,在显示所述虚拟资源图标的过程中隐藏相应虚拟资源的资源名称,并当所述虚拟资源图标处于选中状态时,采用悬浮层的形式显示所述虚拟资源图标所指示的虚拟资源的资源名称。
在一些实施例中,所述显示模块,还配置为响应于针对所述虚拟资源图标的触发操作,控制所述虚拟资源图标所指示的虚拟资源的资源名称处于可编辑状态;响应于针对处于所述可编辑状态的资源名称的编辑操作,显示编辑后的资源名称。
在一些实施例中,所述控制模块,还配置为响应于针对所述至少一个虚拟资源图标中处于所述悬浮状态的目标虚拟资源图标的拖动操作,控制处于所述悬浮状态的所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;相应的,所述控制模块,还配置为响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的按压操作,获取所述按压操作的操作参数,所述操作参数包括以下至少之一:操作时长、压力大小;当所述操作时长达到时长阈值或所述压力大小达到压力阈值时,控制所述目标虚拟资源图标处于悬浮状态。
在一些实施例中,所述显示模块,还配置为响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个处于可移动状态的虚拟资源图标;
相应的,在一些实施例中,所述控制模块,还配置为响应于针对所述至少一个处于可移动状态的虚拟资源图标中目标虚拟资源图标的拖动操作,生成对应所述目标虚拟资源图标的处于可移动状态的图标副本;控制处于可移动状态的所述图标副本,伴随所述拖动操作的执行,在所述地图中进行移动。
在一些实施例中,所述显示模块,还配置为当所述至少一个虚拟资源图标中存在处于禁用状态的虚拟资源图标时;响应于针对所述处于禁用状态的虚拟资源图标的触发操作,显示提示信息;其中,所述提示信息,用于提示针对所述处于禁用状态的虚拟资源图标所对应虚拟资源的标记数量已达数量阈值。
在一些实施例中,所述显示模块,还配置为在控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动的过程中,获取所述目标虚拟资源图标的实时位置;同步显示包括所述实时位置的局部放大界面。
在一些实施例中,所述标记模块,还配置为在所述虚拟场景的界面中,显示目标区域;在所述目标区域中,同步显示包括所述实时位置的局部放大界面。
在一些实施例中,所述标记模块,还配置为同步显示与所述目标虚拟资源图标相关联的伴随浮层,并在所述伴随浮层内展示包括所述实时位置的局部放大界面。
在一些实施例中,所述显示模块,还配置为响应于针对所述虚拟场景中虚拟资源的位置标记指令,在所述界面中,显示图标悬浮层,并在所述图标悬浮层中,采用目标展示样式展示至少一个虚拟资源图标;其中,所述目标展示样式包括列表展示样式、轮盘样式中至少之一。
在一些实施例中,所述控制模块,还配置为响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的第一触发操作,控制进入针对所述目标虚拟资源图标的多点标记模式;
相应的,在一些实施例中,所述标记模块,还配置为在所述多点标记模式下,控制所述目标虚拟资源图标处于光标跟随状态;响应于针对所述地图中第一位置的点击操作,在所述地图中第一位置处标记所述目标虚拟资源图标,并在所述第一位置处标记所述目标虚拟资源图标之后,接收到针对所述地图中第二位置的点击操作时,在所述第二位置处标记所述目标虚拟资源图标。
在一些实施例中,所述控制模块,还配置为将所述目标虚拟资源图标的显示样式由第一显示样式切换为第二显示样式;相应的,所述控制模块,还配置为响应于针对所述目标虚拟资源图标的第二触发操作,控制退出所述多点标记模式,并将所述目标虚拟资源图标的显示样式由第二显示样式切换为第一显示样式。
在一些实施例中,所述控制模块,还配置为在所述虚拟场景中显示所述拖动操作对应的拖动轨迹;当所述拖动轨迹在所述地图上对应的图形是闭合图形时,将所述闭合图形对应的区域作为多点标记区域;相应的,所述标记模块,还配置为响应于针对所述拖动操作的释放指令,在所述多点标记区域内,各处于未标记状态的可标记资源位置处,分别标记所述目标虚拟资源图标。
应用本申请实施例,通过针对虚拟场景中显示的目标虚拟资源图标的拖动操作,控制目标虚拟资源图标在地图中进行移动,并在接收到针对拖动操作的释放指令后,完成针对目标虚拟资源图标的标记过程,如此,在不需要改变地图比例和位置的情况下,通过拖动的方式即可实现针对虚拟资源图标的快速及精准标记,相较于相关技术中通过点击地图上的位置触发标记弹窗进行标记的方式,减少了人机交互次数,提高地图中虚拟资源图标的标记效率,同时避免了误触,提升虚拟场景的操控效率。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟场景中的位置标记方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟场景中的位置标记方法,例如,如图3示出的虚拟场景中的位置标记方法。
在一些实施例中,计算机可读存储介质可以是只读存储器(Read-Only Memory,
ROM)、随即存储器(Random Access Memory,RAM)、可擦写可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)、电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件***中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代 码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
综上所述,通过本申请实施例能够在不改变虚拟场景中地图比例和位置的情况下,通过快捷拖拽和局部放大的组合方式在地图上进行快速精准的标记,同时避免误触的情况。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (20)

  1. 一种虚拟场景中的位置标记方法,所述方法由电子设备执行,所述方法包括:
    显示所述虚拟场景的地图;
    响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;
    响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
    响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置,标记所述目标虚拟资源图标。
  2. 如权利要求1所述的方法,其中,在所述响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标之前,所述方法还包括:
    在所述虚拟场景的界面中,显示位置标记功能项;
    响应于针对所述位置标记功能项的第一触发操作,接收到针对所述虚拟场景中虚拟资源的位置标记指令。
  3. 如权利要求1所述的方法,其中,在所述响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标之前,包括:
    接收到基于所述虚拟场景的界面触发的图形绘制操作;
    当所述图形绘制操作所绘制的图形和预设图形相匹配时,接收到针对所述虚拟场景中虚拟资源的位置标记指令。
  4. 如权利要求1所述的方法,其中,所述方法还包括:
    获取所述虚拟资源图标的图标显示长度,以及所述虚拟资源图标所指示的虚拟资源的资源名称的名称显示长度;
    当所述图标显示长度与所述名称长度的和未达到长度阈值时,在显示所述虚拟资源图标的过程中,显示所述虚拟资源图标所指示的虚拟资源的资源名称。
  5. 如权利要求4所述的方法,其中,所述方法还包括:
    当所述图标显示长度与所述名称显示长度的和达到宽度阈值时,在显示所述虚拟资源图标的过程中隐藏相应虚拟资源的资源名称,并
    当所述虚拟资源图标处于选中状态时,采用悬浮层的形式显示所述虚拟资源图标所指示的虚拟资源的资源名称。
  6. 如权利要求4所述的方法,其中,在所述显示所述虚拟资源图标所指示的虚拟资源的资源名称之后,所述方法还包括:
    响应于针对所述虚拟资源图标的触发操作,控制所述虚拟资源图标所指示的虚拟资源的资源名称处于可编辑状态;
    响应于针对处于所述可编辑状态的资源名称的编辑操作,显示编辑后的资源名称。
  7. 如权利要求1所述的方法,其中,所述响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动,包括:
    响应于针对所述至少一个虚拟资源图标中处于所述悬浮状态的目标虚拟资源图标的拖动操作,控制处于所述悬浮状态的所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
    所述控制所述目标虚拟资源图标伴随所述拖动操作的执行之前,所述方法还包 括:
    响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的按压操作,获取所述按压操作的操作参数,所述操作参数包括以下至少之一:操作时长、压力大小;
    当所述操作时长达到时长阈值或所述压力大小达到压力阈值时,控制所述目标虚拟资源图标处于悬浮状态。
  8. 如权利要求1所述的方法,其中,所述响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标,包括:
    响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个处于可移动状态的虚拟资源图标;
    所述响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动,包括:
    响应于针对所述至少一个处于可移动状态的虚拟资源图标中目标虚拟资源图标的拖动操作,生成对应所述目标虚拟资源图标的处于可移动状态的图标副本;
    控制处于可移动状态的所述图标副本,伴随所述拖动操作的执行,在所述地图中进行移动。
  9. 如权利要求1所述的方法,其中,所述方法还包括:
    当所述至少一个虚拟资源图标中存在处于禁用状态的虚拟资源图标时;
    响应于针对所述处于禁用状态的虚拟资源图标的触发操作,显示提示信息;
    其中,所述提示信息,用于提示针对所述处于禁用状态的虚拟资源图标所对应虚拟资源的标记数量已达数量阈值。
  10. 如权利要求1所述的方法,其中,所述方法还包括:
    在控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动的过程中,获取所述目标虚拟资源图标的实时位置;
    同步显示包括所述实时位置的局部放大界面。
  11. 如权利要求10所述的方法,其中,所述同步显示包括所述实时位置的局部放大界面,包括:
    在所述虚拟场景的界面中,显示目标区域;
    在所述目标区域中,同步显示包括所述实时位置的局部放大界面。
  12. 如权利要求10所述的方法,其中,所述同步显示包括所述实时位置的局部放大界面,包括:
    同步显示与所述目标虚拟资源图标相关联的伴随浮层,并在所述伴随浮层内展示包括所述实时位置的局部放大界面。
  13. 如权利要求1所述的方法,其中,所述响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标,包括:
    响应于针对所述虚拟场景中虚拟资源的位置标记指令,在所述界面中,显示图标悬浮层,并在所述图标悬浮层中,采用目标展示样式展示至少一个虚拟资源图标;
    其中,所述目标展示样式包括列表展示样式、轮盘样式中至少之一。
  14. 如权利要求1所述的方法,其中,在所述显示至少一个虚拟资源图标之后,所述方法还包括:
    响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的第一触发操作,控制进入针对所述目标虚拟资源图标的多点标记模式;
    所述在所述地图中所述目标虚拟资源图标当前所处的位置处,标记所述目标虚拟资源图标之后,所述方法还包括:
    在所述多点标记模式下,控制所述目标虚拟资源图标处于光标跟随状态;
    响应于针对所述地图中第一位置的点击操作,在所述地图中第一位置处标记所述目标虚拟资源图标,并
    在所述第一位置处标记所述目标虚拟资源图标之后,接收到针对所述地图中第二位置的点击操作时,在所述第二位置处标记所述目标虚拟资源图标。
  15. 如权利要求14所述的方法,其中,所述控制进入针对所述目标虚拟资源图标的多点标记模式之后,所述方法还包括:
    将所述目标虚拟资源图标的显示样式由第一显示样式切换为第二显示样式;
    所述在所述第二位置处标记所述目标虚拟资源图标之后,所述方法还包括:
    响应于针对所述目标虚拟资源图标的第二触发操作,控制退出所述多点标记模式,并
    将所述目标虚拟资源图标的显示样式由第二显示样式切换为第一显示样式。
  16. 如权利要求1所述的方法,其中,在所述标记所述目标虚拟资源图标之前,所述方法还包括:
    在所述虚拟场景中显示所述拖动操作对应的拖动轨迹;
    当所述拖动轨迹在所述地图上对应的图形是闭合图形时,将所述闭合图形对应的区域作为多点标记区域;
    所述响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置处,标记所述目标虚拟资源图标,包括:
    响应于针对所述拖动操作的释放指令,在所述多点标记区域内,各处于未标记状态的可标记资源位置处,分别标记所述目标虚拟资源图标。
  17. 一种虚拟场景中的位置标记装置,所述装置包括:
    显示模块,配置为显示所述虚拟场景的地图;
    所述显示模块,还配置为响应于针对所述虚拟场景中虚拟资源的位置标记指令,显示至少一个虚拟资源图标;
    控制模块,配置为响应于针对所述至少一个虚拟资源图标中目标虚拟资源图标的拖动操作,控制所述目标虚拟资源图标伴随所述拖动操作的执行,在所述地图中进行移动;
    标记模块,配置为响应于针对所述拖动操作的释放指令,在所述地图中所述目标虚拟资源图标当前所处的位置,标记所述目标虚拟资源图标。
  18. 一种电子设备,所述电子设备包括:
    存储器,配置为存储可执行指令;
    处理器,配置为执行所述存储器中存储的可执行指令时,实现权利要求1至16任一项所述的虚拟场景中的位置标记方法。
  19. 一种计算机可读存储介质,存储有可执行指令,所述可执行指令被处理器执行时实现权利要求1至16任一项所述的虚拟场景中的位置标记方法。
  20. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时实现权利要求1至16任一项所述的虚拟场景中的位置标记方法。
PCT/CN2022/130823 2022-02-25 2022-11-09 虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品 WO2023160015A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/348,859 US20230350554A1 (en) 2022-02-25 2023-07-07 Position marking method, apparatus, and device in virtual scene, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210179779.2 2022-02-25
CN202210179779.2A CN116688502A (zh) 2022-02-25 2022-02-25 虚拟场景中的位置标记方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/348,859 Continuation US20230350554A1 (en) 2022-02-25 2023-07-07 Position marking method, apparatus, and device in virtual scene, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023160015A1 true WO2023160015A1 (zh) 2023-08-31

Family

ID=87764629

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130823 WO2023160015A1 (zh) 2022-02-25 2022-11-09 虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品

Country Status (3)

Country Link
US (1) US20230350554A1 (zh)
CN (1) CN116688502A (zh)
WO (1) WO2023160015A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111672115B (zh) * 2020-06-05 2022-09-23 腾讯科技(深圳)有限公司 虚拟对象控制方法、装置、计算机设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690402B1 (en) * 1999-09-20 2004-02-10 Ncr Corporation Method of interfacing with virtual objects on a map including items with machine-readable tags
CN110738738A (zh) * 2019-10-15 2020-01-31 腾讯科技(深圳)有限公司 三维虚拟场景中的虚拟对象标记方法、设备及存储介质
US10642698B1 (en) * 2018-12-21 2020-05-05 EMC IP Holding Company LLC System and method for consumption based tagging of resources
CN111506323A (zh) * 2020-04-20 2020-08-07 武汉灏存科技有限公司 基于虚拟场景的数据处理方法、装置、设备及存储介质
CN112711458A (zh) * 2021-01-15 2021-04-27 腾讯科技(深圳)有限公司 虚拟场景中道具资源的展示方法及装置
CN113171605A (zh) * 2021-05-26 2021-07-27 网易(杭州)网络有限公司 虚拟资源获取方法、计算机可读存储介质和电子设备
CN113546422A (zh) * 2021-07-30 2021-10-26 网易(杭州)网络有限公司 虚拟资源的投放控制方法、装置、计算机设备及存储介质
CN113573089A (zh) * 2021-07-27 2021-10-29 广州繁星互娱信息科技有限公司 虚拟资源交互方法和装置、存储介质及电子设备

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690402B1 (en) * 1999-09-20 2004-02-10 Ncr Corporation Method of interfacing with virtual objects on a map including items with machine-readable tags
US10642698B1 (en) * 2018-12-21 2020-05-05 EMC IP Holding Company LLC System and method for consumption based tagging of resources
CN111352696A (zh) * 2018-12-21 2020-06-30 Emc知识产权控股有限公司 基于消耗的资源标记***和方法
CN110738738A (zh) * 2019-10-15 2020-01-31 腾讯科技(深圳)有限公司 三维虚拟场景中的虚拟对象标记方法、设备及存储介质
CN111506323A (zh) * 2020-04-20 2020-08-07 武汉灏存科技有限公司 基于虚拟场景的数据处理方法、装置、设备及存储介质
CN112711458A (zh) * 2021-01-15 2021-04-27 腾讯科技(深圳)有限公司 虚拟场景中道具资源的展示方法及装置
CN113171605A (zh) * 2021-05-26 2021-07-27 网易(杭州)网络有限公司 虚拟资源获取方法、计算机可读存储介质和电子设备
CN113573089A (zh) * 2021-07-27 2021-10-29 广州繁星互娱信息科技有限公司 虚拟资源交互方法和装置、存储介质及电子设备
CN113546422A (zh) * 2021-07-30 2021-10-26 网易(杭州)网络有限公司 虚拟资源的投放控制方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
US20230350554A1 (en) 2023-11-02
CN116688502A (zh) 2023-09-05

Similar Documents

Publication Publication Date Title
US11740755B2 (en) Systems, methods, and graphical user interfaces for interacting with augmented and virtual reality environments
CN112860148B (zh) 勋章图标的编辑方法、装置、设备及计算机可读存储介质
CN110090444B (zh) 游戏中行为记录创建方法、装置、存储介质及电子设备
US20120107790A1 (en) Apparatus and method for authoring experiential learning content
WO2022105362A1 (zh) 虚拟对象的控制方法、装置、设备、存储介质及计算机程序产品
TWI796804B (zh) 虛擬按鍵的位置調整方法、裝置、設備、儲存介質及程式産品
WO2022142626A1 (zh) 虚拟场景的适配显示方法、装置、电子设备、存储介质及计算机程序产品
JP7391448B2 (ja) 仮想オブジェクトの制御方法、装置、機器、記憶媒体及びコンピュータプログラム製品
US20230285858A1 (en) Virtual skill control method and apparatus, device, storage medium, and program product
US20180088791A1 (en) Method and apparatus for producing virtual reality content for at least one sequence
WO2023160015A1 (zh) 虚拟场景中的位置标记方法、装置、设备、存储介质及程序产品
JP7232350B2 (ja) 仮想キーの位置調整方法及び装置、並びコンピュータ装置及びプログラム
WO2023138142A1 (zh) 虚拟场景中的运动处理方法、装置、设备、存储介质及程序产品
US20180089877A1 (en) Method and apparatus for producing virtual reality content
CN112221124A (zh) 虚拟对象生成方法、装置、电子设备和存储介质
WO2023221716A1 (zh) 虚拟场景中的标记处理方法、装置、设备、介质及产品
CN111522439B (zh) 一种虚拟样机的修订方法、装置、设备及计算机存储介质
WO2024021792A1 (zh) 虚拟场景的信息处理方法、装置、设备、存储介质及程序产品
Thorn Unity 5. x by Example
KR102688646B1 (ko) 증강 및 가상 현실 환경들과 상호작용하기 위한 시스템들, 방법들, 및 그래픽 사용자 인터페이스들
WO2023226569A9 (zh) 虚拟场景中的消息处理方法、装置、电子设备及计算机可读存储介质及计算机程序产品
WO2024027344A1 (zh) 社交互动的方法、装置、设备、可读存储介质及程序产品
Falkengren et al. Virtual Reality Operating System User Interface
CN117414584A (zh) 一种游戏中场景组件的编辑方法、装置、电子设备及介质
CN115779438A (zh) 数据处理方法、装置、存储介质和电子装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22928277

Country of ref document: EP

Kind code of ref document: A1