CN111603770A - Virtual environment picture display method, device, equipment and medium - Google Patents

Virtual environment picture display method, device, equipment and medium

Info

Publication number
CN111603770A
Authority
CN
China
Prior art keywords
virtual environment
virtual object
virtual
center
position relative
Prior art date
Legal status
Granted
Application number
CN202010437875.3A
Other languages
Chinese (zh)
Other versions
CN111603770B (en)
Inventor
魏嘉城
胡勋
粟山东
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010437875.3A
Publication of CN111603770A
Application granted
Publication of CN111603770B
Legal status: Active
Anticipated expiration


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 - Changing parameters of virtual cameras
    • A63F13/5258 - Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, a device, equipment and a medium for displaying a virtual environment picture, and relates to the field of virtual environments. The method comprises the following steps: displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment with a first position relative to a first virtual object as an observation center; in response to an adjustment instruction for the observation center, adjusting the observation center from the first position relative to the first virtual object to a second position relative to the first virtual object; and displaying a second virtual environment picture, wherein the second virtual environment picture is a picture obtained by observing the virtual environment with the second position relative to the first virtual object as the observation center. The method and the device can dynamically change the observation center of the camera model, so that the field-of-view requirement expected by the user can be met.

Description

Virtual environment picture display method, device, equipment and medium
Technical Field
The embodiment of the application relates to the field of virtual environments, in particular to a method, a device, equipment and a medium for displaying a virtual environment picture.
Background
A battle game is a game in which a plurality of user accounts compete in the same scene. Optionally, the battle game may be a Multiplayer Online Battle Arena (MOBA) game.
A typical MOBA game has a three-dimensional virtual environment in which a plurality of virtual objects belonging to two opposing camps move about, with occupying the enemy camp's base as the goal. Each user uses a client to control one master virtual object in the three-dimensional virtual environment. The game picture displayed by any client is captured in the three-dimensional virtual environment by the camera model corresponding to that client's master virtual object. In general, the camera model captures pictures of the three-dimensional virtual environment with the master virtual object as the observation center to obtain the game picture, so the master virtual object is located at the center of the game picture.
The camera model has a limited field of view, which is not necessarily the optimal field of view desired by the user, so the information displayed on the game picture is limited.
Disclosure of Invention
The embodiments of the application provide a method, a device, equipment and a medium for displaying a virtual environment picture, which enable a user to manually adjust the field of view observed by a camera model and thereby obtain the optimal field of view the user desires. The technical solution is as follows:
according to an aspect of the present application, there is provided a method for displaying a virtual environment screen, the method including:
displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
adjusting the center of observation from a first position relative to the first virtual object to a second position relative to the first virtual object in response to the adjustment instruction of the center of observation;
and displaying a second virtual environment screen, wherein the second virtual environment screen is a screen obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
According to another aspect of the present application, there is provided a display apparatus of a virtual environment screen, the apparatus including:
the display module is used for displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
an adjustment module to adjust the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to an adjustment instruction for the observation center;
the display module is configured to display a second virtual environment picture, where the second virtual environment picture is a picture obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
According to another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the display method of a virtual environment screen as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or a set of instructions, which is loaded and executed by a processor to implement the method for displaying a virtual environment picture as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
by responding to the adjusting instruction of the observation center, the observation center is modified from the first position relative to the first virtual object to the second position, so that the user can customize the observation center of the camera model, the optimal view field range meeting the user's own expectation is obtained, and more effective information can be displayed in the virtual environment picture as far as possible.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a state synchronization technique provided by another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a frame synchronization technique provided by another exemplary embodiment of the present application;
FIG. 4 is an interface diagram illustrating a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 5 is an interface diagram illustrating a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of a camera model provided in another exemplary embodiment of the present application with a change in center of view;
FIG. 8 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
fig. 9 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 10 is a schematic illustration of a field of view adjustment control provided by another exemplary embodiment of the present application adjusting an anchor position of a camera model;
FIG. 11 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
fig. 12 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 13 is a schematic illustration of an adjustable range of a center of view of a camera model provided by another exemplary embodiment of the present application;
FIG. 14 is a graphical illustration of the calculation of the offset value of the rocker in the wheel area provided by another exemplary embodiment of the present application;
FIG. 15 is a schematic illustration of a dead zone region provided by another exemplary embodiment of the present application;
fig. 16 is a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment of the present application;
FIG. 17 is a side view of a camera model in a three-dimensional virtual environment provided by another exemplary embodiment of the present application;
fig. 18 is a block diagram of a display apparatus of a virtual environment screen provided in another exemplary embodiment of the present application;
fig. 19 is a block diagram of a terminal provided in another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Virtual environment: the virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional world, or a purely fictional three-dimensional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. Optionally, the virtual environment is also used for a virtual environment battle between at least two virtual objects, in which virtual resources are available for use by the at least two virtual objects. Optionally, the virtual environment comprises a symmetrical lower-left corner region and upper-right corner region; the virtual objects belonging to two opposing camps each occupy one of the regions, and destroying the target building/stronghold/base/crystal deep in the opposing region is the victory objective.
Virtual object: refers to a movable object in the virtual environment. The movable object may be at least one of a virtual character, a virtual animal, and an animation character. Optionally, when the virtual environment is a three-dimensional virtual environment, the virtual objects may be three-dimensional virtual models, each virtual object having its own shape and volume in the three-dimensional virtual environment and occupying a part of the space in the three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional character constructed based on three-dimensional human skeleton technology, and the virtual object takes on different appearances by wearing different skins. In some implementations, the virtual object may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited in this application.
Multiplayer online tactical competition: in the virtual environment, different virtual teams belonging to at least two opposing camps occupy respective map areas and compete with a certain victory condition as the goal. Such victory conditions include, but are not limited to, at least one of: occupying strongholds or destroying the strongholds of the enemy camp, killing the virtual objects of the enemy camp, ensuring its own survival within a specified scene and time, seizing a certain resource, and outscoring the opponent in resources within a specified time. The tactical competition can be carried out in rounds, and the map of each round of the tactical competition can be the same or different. Each virtual team includes one or more virtual objects, for example 1, 2, 3, or 5.
MOBA game: a game in which several strongholds are provided in a virtual environment, and users in different camps control virtual objects to fight in the virtual environment, occupy strongholds, or destroy the strongholds of the enemy camp. For example, the MOBA game may divide users into two opposing camps and scatter the virtual objects controlled by the users in the virtual environment to compete with each other, with destroying or occupying all of the enemy's strongholds as the victory condition. The MOBA game is played in rounds, and the duration of one round runs from the moment the game starts to the moment the victory condition is met.
User interface (UI) control: any visual control or element that can be seen on the user interface of the application, such as a picture, an input box, a text box, a button, or a tab. Some UI controls respond to user operations; for example, the user triggers a skill control to control the first virtual object to release a skill.
FIG. 1 is a block diagram illustrating a computer system according to an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server 120, a second terminal 130.
The first terminal 110 has a client 111 supporting a virtual environment installed and running on it, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client may be any one of a military simulation program, a battle royale shooting game, a Virtual Reality (VR) application, an Augmented Reality (AR) program, a three-dimensional map program, a virtual reality game, an augmented reality game, a First-Person Shooting game (FPS), a Third-Person Shooting game (TPS), a Multiplayer Online Battle Arena (MOBA) game, and a strategy game (SLG). In this embodiment, the client is described using a MOBA game as an example. The first terminal 110 is a terminal used by the first user 112, and the first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment to perform activities; the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to: at least one of adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animated character.
The second terminal 130 has a client 131 supporting a virtual environment installed and running on it, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a military simulation program, a battle royale shooting game, a VR application, an AR program, a three-dimensional map program, a virtual reality game, an augmented reality game, an FPS, a TPS, a MOBA game, and an SLG; in this embodiment, the client is described using a MOBA game as an example. The second terminal 130 is a terminal used by the second user 113, and the second user 113 uses the second terminal 130 to control a second virtual object located in the virtual environment to perform activities; the second virtual object may be referred to as the master virtual object of the second user 113. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animated character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is only illustrated by the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals 140 that may access the server 120 in different embodiments. Optionally, one or more terminals 140 are terminals corresponding to the developer, a development and editing platform supporting a client in the virtual environment is installed on the terminal 140, the developer can edit and update the client on the terminal 140, and transmit the updated client installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used for providing background services for clients supporting a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a combat service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and to process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as an avatar of the user account, a nickname of the user account, a combat power index of the user account, and the service area where the user account is located; the combat service module 124 is configured to provide a plurality of combat rooms for users to fight in, such as 1V1 combat, 3V3 combat, and 5V5 combat; the user-facing I/O interface 125 is configured to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The server 120 may employ synchronization techniques to make the visual appearance consistent among multiple clients. Illustratively, the synchronization techniques employed by the server 120 include: a state synchronization technique or a frame synchronization technique.
State synchronization techniques
In an alternative embodiment based on fig. 1, the server 120 employs a state synchronization technique to synchronize with multiple clients. In the state synchronization technique, as shown in fig. 2, the combat logic runs in the server 120. When a state change occurs to a virtual object in the virtual environment, the server 120 sends the state synchronization result to all clients, such as clients 1 to 10.
In an illustrative example, client 1 sends a request to the server 120 requesting that virtual object 1 release a frost skill. The server 120 determines whether the frost skill is allowed to be released and, if it is, what damage value is dealt to another virtual object 2. The server 120 then sends the skill release result to all clients, and each client updates its local data and interface presentation according to the skill release result.
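A rough sketch of this state-synchronization flow is given below; the class and method names (StateSyncServer, handle_skill_request, and so on) are illustrative assumptions and not terms from this application:

```python
# A rough sketch of the state-synchronization flow described above.
# All class and method names here are illustrative, not from this application.

class FakeClient:
    def __init__(self, name):
        self.name = name

    def send(self, message):
        # A real client would update local data and interface presentation here.
        print(self.name, "received", message)

class StateSyncServer:
    def __init__(self, clients):
        self.clients = clients  # e.g. clients 1 to 10

    def handle_skill_request(self, caster_id, skill_id, target_id):
        # The combat logic runs on the server: it decides whether the skill
        # may be released and what damage it deals to the target.
        if not self.skill_allowed(caster_id, skill_id):
            return
        result = {"caster": caster_id, "skill": skill_id,
                  "target": target_id, "damage": self.damage(skill_id)}
        # Broadcast the authoritative state-synchronization result to all clients.
        for client in self.clients:
            client.send(result)

    def skill_allowed(self, caster_id, skill_id):
        return True   # placeholder check (cooldown, cost, range, ...)

    def damage(self, skill_id):
        return 100    # placeholder damage value

server = StateSyncServer([FakeClient(f"client {i}") for i in range(1, 11)])
server.handle_skill_request(caster_id=1, skill_id="frost", target_id=2)
```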
Frame synchronization technique
In an alternative embodiment based on fig. 1, the server 120 employs a frame synchronization technique to synchronize with multiple clients. In the frame synchronization technique, as shown in fig. 3, the combat logic runs in each client. Each client sends a frame synchronization request to the server, where the frame synchronization request carries the client's local data changes. After receiving a frame synchronization request, the server 120 forwards the frame synchronization request to all clients. After receiving the forwarded frame synchronization request, each client processes it according to its local combat logic and updates its local data and interface presentation.
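A minimal sketch of this frame-synchronization flow, assuming simplified client and server classes whose names are illustrative only, might look like this:

```python
# A minimal sketch of the frame-synchronization flow described above.
# Names are illustrative; the server only relays frame requests, while each
# client runs the same combat logic on the same ordered inputs.

class FrameSyncClient:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def on_frame(self, frame_request):
        # Each client processes the forwarded frame request with its local
        # combat logic and updates local data and interface presentation.
        self.state.update(frame_request)

class FrameSyncServer:
    def __init__(self, clients):
        self.clients = clients

    def handle_frame_request(self, frame_request):
        # The server performs no simulation; it forwards the request to all clients.
        for client in self.clients:
            client.on_frame(frame_request)

clients = [FrameSyncClient(f"client {i}") for i in range(1, 4)]
server = FrameSyncServer(clients)
server.handle_frame_request({"client": 1, "move": (1, 0)})
```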
With reference to the above description of the virtual environment and the description of the implementation environment, a method for displaying a virtual environment screen provided in the embodiment of the present application is described, and an execution subject of the method is exemplified by a client running on a terminal shown in fig. 1. The terminal runs with a client, which is an application that supports a virtual environment.
Referring collectively to fig. 4, during a virtual environment-based competition, a client is displayed with a user interface. An exemplary user interface includes: a virtual environment screen 22 and a HUD (Head Up Display) panel 24. The virtual environment screen 22 is a screen obtained by observing the virtual environment with the view angle corresponding to the virtual object 26. The HUD panel 24 includes a plurality of human-computer interaction controls, such as a movement control, three or four skill release controls, and an attack button.
Illustratively, each virtual object in the virtual environment has a one-to-one corresponding camera model. The virtual object 26 in fig. 4 corresponds to a camera model 28. The observation center (or focus) of the camera model 28 is the virtual object 26; the observation center is the intersection of the virtual environment with the center line emitted by the camera model 28 along its viewing direction, and is also referred to as the focus of the camera model 28. In the virtual environment picture captured by the camera model 28, the object located at the observation center is located at the center of the picture. As the virtual object 26 moves within the virtual environment, the three-dimensional coordinates (also called the anchor point position) of the camera model 28 in the virtual environment follow the movement of the virtual object 26. The camera model 28 has a lens height relative to the virtual object 26 and looks down at the virtual object 26 at an oblique angle (e.g., 45 degrees).
The image captured by the camera model 28 in the virtual environment is the virtual environment image 22 displayed on the client.
The embodiment of the present application provides a scheme for dynamically changing the anchor point of the camera model 28, so as to dynamically change the view of the virtual environment picture 22, which better meets the expectations of the user.
In the example shown in fig. 5, the terminal displays a user interface including a virtual environment picture and human-computer interaction controls superimposed on the virtual environment picture. The human-computer interaction controls include: the view adjustment control 32, the movement direction control 34, the skill release control 36, and the attack control 38, as shown in fig. 5 (a). By default, the virtual environment picture is a picture captured by the camera model 28 in the virtual environment with the first position where the virtual object 26 is located as the observation center. When the user wishes to change the field of view, the view adjustment control 32 is pressed. The view adjustment control 32 then changes from the smaller eye display form to the larger wheel display form, as shown in fig. 5 (b). In the wheel display form, the view adjustment control 32 includes a wheel area and a rocker located at the center of the wheel area. The rocker corresponds to the anchor point position of the camera model 28, and the wheel area corresponds to the adjustable range of the anchor point position of the camera model 28. Illustratively, the adjustable range is a circular range.
Illustratively, as the user drags the rocker to different positions in the wheel area, the anchor point position of the camera model 28 in the virtual environment also changes. After the anchor point position is changed, the observation center of the camera model 28 switches from the first position where the virtual object 26 is located to a second position that is offset relative to the virtual object 26. The positional relationship of the second position with respect to the first position corresponds to the positional relationship of the rocker with respect to the center position of the wheel area, as shown in fig. 5 (c).
When the user's finger is released, the camera model 28 captures pictures in the virtual environment with the second position, offset from the virtual object 26, as the observation center, as shown in fig. 5 (d). Since the position of the virtual object 26 may change all the time, the second position, which is offset relative to the virtual object 26, may also change all the time.
The operation of the whole process may include the following steps:
Clicking the rocker:
When the user clicks the rocker on the view adjustment control and the user's finger presses down, the anchor point position of the camera model returns to the default position. The default position is the anchor point position at which the first virtual object controlled by the user is at the center of the screen.
Dragging the rocker:
1. As the user drags the rocker on the view adjustment control, the anchor point position of the camera model is offset from the default position. After the user's finger is lifted, the anchor point position of the camera model stays at the last offset position.
2. What dragging the rocker changes is the position of the camera model's anchor point relative to the first virtual object; that is, the anchor point position of the camera model is bound to the first virtual object. When the first virtual object moves, the anchor point position of the camera model moves with it (a minimal event-handler sketch of this press/drag/release flow is given below).
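An illustrative press/drag/release handler for the flow listed above follows; names such as ViewAdjustRocker, camera_offset, and map_ui_offset, as well as the placeholder mapping scale, are assumptions made for this sketch rather than terms from this application:

```python
# An illustrative press / drag / release handler for the rocker flow above.
# Names such as camera_offset and map_ui_offset, and the placeholder scale,
# are assumptions made for this sketch.

class CameraModel:
    def __init__(self):
        self.camera_offset = (0.0, 0.0)  # anchor offset relative to the first virtual object

    def map_ui_offset(self, rocker_offset_2d):
        # Placeholder mapping from the 2D rocker offset to a camera offset;
        # the detailed proportional mapping is given in the later embodiments.
        scale = 0.05
        return (rocker_offset_2d[0] * scale, rocker_offset_2d[1] * scale)

class ViewAdjustRocker:
    def __init__(self, camera):
        self.camera = camera

    def on_press(self):
        # Pressing the rocker returns the anchor to its default position
        # (the first virtual object centered on the screen).
        self.camera.camera_offset = (0.0, 0.0)

    def on_drag(self, rocker_offset_2d):
        # Dragging changes the anchor position *relative to* the first virtual
        # object, so the camera keeps following the object while offset.
        self.camera.camera_offset = self.camera.map_ui_offset(rocker_offset_2d)

    def on_release(self):
        # Lifting the finger keeps the last offset; nothing is reset here.
        pass

rocker = ViewAdjustRocker(CameraModel())
rocker.on_press()
rocker.on_drag((30.0, -10.0))
rocker.on_release()
```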
Fig. 6 is a flowchart illustrating a method for displaying a virtual environment screen according to an exemplary embodiment of the present application. The embodiment is exemplified by applying the method to the client. The method comprises the following steps:
step 602, displaying a first virtual environment screen, wherein the first virtual environment screen is a screen obtained by observing a virtual environment with a first position relative to a first virtual object as an observation center;
the first virtual object is a virtual object controlled by a client, but the possibility is not excluded that the first virtual object is controlled by another client or an artificial intelligence module. The client controls the activity of the first virtual object in the virtual environment according to the received user operation (or the man-machine operation). Illustratively, the activity of the first virtual object in the virtual environment includes: at least one of walking, running, jumping, climbing, lying down, attacking, skill releasing, prop picking up, and message sending.
The first virtual environment screen is a screen obtained by observing the virtual environment with the first position relative to the first virtual object as the observation center by the camera model. Optionally, the camera model also has a certain lens height with respect to the first virtual object. The virtual environment picture is a two-dimensional picture which is displayed on the client after picture collection is carried out on the three-dimensional virtual environment. Illustratively, the shape of the virtual environment screen is determined according to the shape of a display screen of the terminal or according to the shape of a user interface of the client. Taking the display screen of the terminal as a rectangle as an example, the virtual environment picture is also displayed as a rectangle picture.
A camera model bound to the first virtual object is arranged in the virtual environment. The first virtual environment picture is a picture captured by the camera model with a certain observation position in the virtual environment as the observation center. When the first virtual environment picture is captured, the observation center is the first position relative to the first virtual object. For example, the first position is the position where the first virtual object is located. Taking the first virtual environment picture as an example, the object at the intersection of the diagonals of the rectangular picture is the object located at the observation center.
In general, the camera model bound to the first virtual object takes the position of the first virtual object as an observation center, and the position of the first virtual object in the virtual environment is the first position. When the virtual environment is a three-dimensional virtual environment, the observation center is the three-dimensional coordinates of the first virtual object in the virtual environment. For example, if the ground in the virtual environment is a horizontal plane, and the height coordinate of the observation center is 0, the observation center may be approximately represented as a two-dimensional coordinate on the horizontal plane.
Step 604, adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to the adjustment instruction of the observation center;
the adjustment instruction is an instruction for adjusting the observation center. The adjustment instruction is triggered by the user, but does not exclude the possibility that the adjustment instruction is automatically triggered by Artificial Intelligence (AI) in the terminal, or sent by the server.
When the terminal is an electronic device with a touch screen, the adjustment instruction may be an instruction triggered by a touch operation on the touch screen; when the terminal is an electronic device with a gamepad peripheral, the adjustment instruction may be an instruction triggered by a physical key on the gamepad; when the terminal is a VR or AR device, the adjustment instruction may be an instruction triggered by the user's gaze or voice. The embodiment of the present application does not limit the triggering manner of the adjustment instruction.
Illustratively, the first position is a position at which the first virtual object is located, or the first position is a position at which a first offset exists with respect to the position at which the first virtual object is located.
Illustratively, the second position is a position at which there is a second offset relative to the position at which the first virtual object is located. Typically, the first location and the second location are different.
Since the position of the first virtual object is dynamically changed, the second position relative to the first virtual object is also changed following the change in the position of the first virtual object.
Step 606, displaying a second virtual environment picture, wherein the second virtual environment picture is a picture obtained by observing the virtual environment by taking a second position relative to the first virtual object as an observation center;
since the first virtual environment screen and the second virtual environment screen have different observation centers, the first virtual environment screen and the second virtual environment screen have different visual fields.
As shown in fig. 7, in the first virtual environment picture, the picture center is the first position 72 relative to the first virtual object, and the first position 72 is the position where the first virtual object is located, so the first virtual object is located at the center of the picture. In the second virtual environment picture, the picture center is the second position 74 relative to the first virtual object, and the second position 74 is a position offset from the first virtual object toward the upper right; at this time, the first virtual object is located to the lower left of the picture center, and the user has a larger field of view above and to the right of the first virtual object and a smaller field of view below and to the left.
In summary, in the method provided in this embodiment, the observation center is modified from the first position to the second position relative to the first virtual object in response to the adjustment instruction of the observation center, so that the user can customize the observation center of the camera model, thereby obtaining the optimal view field range meeting the user's own desire, and displaying more effective information in the virtual environment picture as much as possible.
The adjustment instruction may be triggered by a drag operation of the user on a view adjustment control displayed on the touch display screen.
Fig. 8 is a flowchart illustrating a method for displaying a virtual environment screen according to another exemplary embodiment of the present application. The method may be performed by a client as shown in fig. 1. The method comprises the following steps:
step 802, displaying a first user interface, where the first user interface includes a first virtual environment picture and a visual field adjustment control, and the first virtual environment picture is a picture obtained by observing a virtual environment with a first position relative to a first virtual object as an observation center;
the client displays a first user interface, the first user interface comprising: a first virtual environment frame 90 captured by the camera model, and a field of view adjustment control 32 superimposed on the first virtual environment frame.
A camera model bound to the first virtual object is arranged in the virtual environment. The first virtual environment picture is captured by the camera model with the position of the first virtual object as the observation center, and the position of the first virtual object in the virtual environment is the first position.
Illustratively, when the virtual environment is a three-dimensional virtual environment, the observation center is the three-dimensional coordinates of the first virtual object in the virtual environment. For example, if the ground in the virtual environment is a horizontal plane, and the height coordinate of the observation center is 0, the observation center may be approximately represented as a two-dimensional coordinate on the horizontal plane.
Optionally, the field of view adjustment control 32 includes: a wheel area a and a rocker b. The wheel area a is a circular area, and the rocker b is a circular button. The area of the rocker b is smaller than the area of the wheel area a. When no drag operation by the user is received, the rocker b is displayed at the center of the wheel area a. When a drag operation by the user is received, the rocker b can change its position within the circular area of the wheel area a.
The rocker b corresponds to the observation center of the camera model (that is, to the anchor point position of the camera model), the center of the wheel area a corresponds to the default observation center of the camera model, for example the position of the first virtual object, and the circular area of the wheel area a corresponds to the adjustable range of the observation center of the camera model.
Step 804, responding to a dragging instruction triggered when the rocker is dragged in the wheel disc area, and adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object according to the position of the dragged rocker in the wheel disc area;
since the wheel area a corresponds to the default observation center of the camera model, the circular area range of the wheel area a corresponds to the adjustable range of the observation center of the camera model. Therefore, after receiving the dragging operation of the user, the client adjusts the observation center from the first position relative to the first virtual object to the second position relative to the first virtual object according to the position of the dragged rocker in the wheel area in response to the dragging instruction triggered when the rocker is dragged in the wheel area. Specifically, the method includes the following steps S1 to S5, as shown in fig. 9:
s1, responding to a dragging instruction triggered when the rocker is dragged in the wheel disc area, and calculating a wheel disc transverse offset value and a wheel disc longitudinal offset value of the position of the dragged rocker in the wheel disc area relative to the center position of the wheel disc area;
Referring to fig. 10, assuming that the position of the dragged rocker in the wheel area is P2 and the center position of the wheel area is P1, the wheel lateral offset value is the projection length of (P2-P1) in the horizontal direction, namely (P2-P1).x, and the wheel longitudinal offset value is the projection length of (P2-P1) in the vertical direction, namely (P2-P1).y.
The wheel lateral offset value and the wheel longitudinal offset value are two-dimensional offsets on the UI layer, and the camera model is in a three-dimensional virtual environment, and the two-dimensional offsets need to be mapped into three-dimensional offsets.
Referring to fig. 10, an x-axis, a y-axis, and a z-axis are provided in the three-dimensional virtual environment. Wherein the y-axis corresponds to the lens height of the camera model. In this embodiment, the horizontal offset value of the wheel disc is set to correspond to the offset value of the camera model on the x axis, and the vertical offset value of the wheel disc is set to correspond to the offset value of the camera model on the z axis, both of which do not affect the position of the camera model on the y axis. That is, the lens height of the camera model is a fixed value or is otherwise adjusted, but is not related to the adjustment logic in this embodiment.
S2, determining a first camera offset value according to the wheel disc transverse offset value and determining a second camera offset value according to the wheel disc longitudinal offset value;
the first camera offset value is an offset value of the camera model in the x-axis and the second camera offset value is an offset value of the camera model in the z-axis. The X-axis and the z-axis are two coordinate axes parallel to the ground plane in the virtual environment, and the X-axis is perpendicular to the z-axis.
Optionally, the wheel lateral offset distance and the first camera offset value are in a positive correlation; the wheel longitudinal offset distance and the second camera offset value are in a positive correlation. For example, the wheel lateral offset distance is directly proportional to the first camera offset value; the wheel longitudinal offset distance and the second camera offset value are in a directly proportional relationship.
S3, calculating a first anchor point position of the camera model corresponding to the first virtual object based on the position of the first virtual object in the virtual environment;
the first anchor point position moves following the movement of the position of the first virtual object.
Optionally, the first anchor point location is a three-dimensional coordinate of the camera model in the virtual environment when the observation center of the camera model is the first virtual object.
S4, offsetting the first anchor point position according to the first camera offset value and the second camera offset value, and calculating to obtain a second anchor point position of the camera model;
and offsetting the first anchor point position on the x axis according to the first camera offset value, offsetting the first anchor point position on the z axis according to the second camera offset value, and calculating to obtain a second anchor point position of the camera model.
And S5, offsetting the camera model according to the second anchor point position, wherein the observation center of the offset camera model is the second position relative to the first virtual object.
S6. In response to a change in the position of the first virtual object in the virtual environment, perform again the step of offsetting the first anchor point position according to the first camera offset value and the second camera offset value to obtain the second anchor point position of the camera model.
Because the first virtual object can move in the three-dimensional virtual environment, when the position of the first virtual object in the three-dimensional virtual environment changes, the client again takes the current position of the first virtual object in the virtual environment as the reference and performs the step of calculating the second position relative to the first virtual object according to the offset direction and the second offset distance.
The observation center of the camera model (the second position relative to the first virtual object) still changes dynamically, following the position of the first virtual object.
From the end of this adjustment until the next adjustment (whether triggered manually or automatically), the observation center of the camera model remains at the second position; that is, the camera model continues to capture pictures with the second position relative to the first virtual object as the observation center.
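A small sketch of how the adjusted observation center keeps following the first virtual object (step S6 above); the (x, y, z) position tuple and the (x, z) offset tuple are representations assumed only for illustration:

```python
# A small sketch of how the adjusted observation center keeps following the
# first virtual object (step S6 above). The (x, y, z) position tuple and the
# (x, z) offset tuple are representations assumed for illustration.

def second_position(actor_pos, camera_offset_xz):
    """The second position moves with the object; the offset itself stays
    fixed until the next adjustment."""
    x, y, z = actor_pos
    off_x, off_z = camera_offset_xz
    return (x + off_x, y, z + off_z)

# Wherever the object moves, the observation center stays offset by the same amount.
print(second_position((10.0, 0.0, 5.0), (2.0, 3.0)))   # (12.0, 0.0, 8.0)
print(second_position((20.0, 0.0, 9.0), (2.0, 3.0)))   # (22.0, 0.0, 12.0)
```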
Step 806, displaying a second user interface, where the second user interface includes a second virtual environment picture and the view adjustment control, and the second virtual environment picture is a picture obtained by observing the virtual environment with the second position relative to the first virtual object as the observation center;
After the observation center is adjusted to the second position, the camera model observes the virtual environment with the second position relative to the first virtual object as the observation center to obtain the second virtual environment picture. The client also displays the view adjustment control superimposed on the second virtual environment picture, thereby displaying the second user interface.
Optionally, if the view adjustment control is opaque, the occluded area of the second virtual environment picture does not need to be displayed; if the view adjustment control is transparent or semi-transparent, the occluded area of the second virtual environment picture and the view adjustment control are blended into a fused image for display.
Since the first virtual environment screen and the second virtual environment screen have different observation centers, the first virtual environment screen and the second virtual environment screen have different visual fields.
In summary, in the method provided in this embodiment, the wheel disc offset information of the rocker on the two-dimensional plane is used to calculate the offset information of the camera model in the three-dimensional environment, so that the rocker change of the view field adjustment control and the three-dimensional offset of the camera model are in a positive correlation, and the "what you see is what you get" operation effect is achieved.
In an alternative embodiment based on fig. 8, the drag instruction comprises at least two sub-instructions ordered in time. A typical drag instruction includes: a touch start sub-instruction, a plurality of touch move sub-instructions, and a touch end sub-instruction ordered in time, where each sub-instruction carries the real-time touch coordinates of the user on the touch screen. The touch start sub-instruction can be regarded as the first sub-instruction, and the touch end sub-instruction can be regarded as the last sub-instruction. As illustrated in fig. 11, the method further comprises:
step 803a, in response to receiving a first sub-instruction in the dragging instruction, switching the view field adjustment control from a default display mode to a wheel disc display mode, wherein the wheel disc display mode comprises a rocker and a wheel disc area;
and the display area of the default display form is smaller than that of the wheel disc display form.
Optionally, the default display modality is a smaller eye button. The smaller eye button does not obscure too much of the screen content on the second virtual environment screen.
Step 805, in response to receiving the last sub-instruction in the dragging instruction, the view adjustment control is switched from the wheel disc display form to the default display form.
In summary, the method provided in this embodiment provides two display forms for the visual field adjustment control, and the visual field adjustment control in the default display form has a smaller display area, so that the influence on the screen content in the virtual environment screen can be reduced.
In an alternative embodiment based on fig. 8, since the user may adjust the observation center of the camera model multiple times during one game, the following is used to ensure a consistent operating feel for each adjustment. As shown in fig. 12, the method further includes:
Step 803b, in response to receiving the first sub-instruction in the dragging instruction, resetting the observation center to the position where the first virtual object is located in the virtual environment.
When the client receives a drag operation on the view adjustment control, a drag instruction on the view adjustment control is triggered. The drag instruction includes: a touch start sub-instruction, a plurality of touch move sub-instructions, and a touch end sub-instruction ordered in time, where each sub-instruction carries the real-time touch coordinates of the user on the touch screen.
In response to receiving the touch start sub-instruction, the client resets the observation center to a position in the virtual environment where the first virtual object is located. That is, the client resets the coordinates of the camera model in the virtual environment to the first anchor point position.
In summary, according to the method provided in this embodiment, when the dragging instruction on the view field adjustment control starts to be received, the observation center is reset to the position where the first virtual object is located in the virtual environment, so that the user can obtain consistent operation feeling when adjusting the observation center each time, and thus the observation center is quickly and accurately adjusted to the desired position of the user.
In an alternative embodiment, there is an upper limit to the adjustment range of the center of view of the camera model.
As shown in fig. 13, the client sets the maximum offset values of the observation center of the camera model in four directions, i.e., up, down, left, and right: UP, DOWN, LEFT, RIGHT. UP is the maximum offset value in the UP direction, DOWN is the maximum offset value in the DOWN direction, LEFT is the maximum offset value in the LEFT direction, and RIGHT is the maximum offset value in the RIGHT direction.
When the view adjustment control is implemented as a wheel control, the wheel control includes: a wheel area and a rocker located on the wheel area. The position of the rocker on the wheel area (its offset position relative to the center of the wheel) corresponds to the observation center of the camera model (its offset position relative to the first virtual object). The circular area of the wheel area corresponds to the maximum adjustable range of the observation center; each position in the wheel area corresponds one-to-one to a position within the maximum adjustable range of the observation center in the virtual environment.
In an alternative embodiment, the anchor point position of the camera model is calculated as follows:
as shown in fig. 14, an included angle between the rocker and the horizontal direction is set to be α, a distance between the rocker and the circle in the wheel disc region is set to be a, and a radius of the circle in the wheel disc region is set to be B. If the rocker is deviated to the upper left corner, the anchor point position of the camera model is deviated from the default anchor point position to the upper left corner according to the same angle.
LEFT offset of camera model cos α LEFT (a/B);
the upward offset of the camera model is sin α UP (a/B).
In an alternative embodiment, the following three variables are first defined:
CameraOffset: the offset value of the observation center of the camera model relative to the first virtual object. The control module of the camera model reads this value and adds the offset to the logic by which the camera model follows the first virtual object, thereby offsetting the observation center of the camera model.
IsMoved: used to distinguish whether the user's operation on the rocker is a click operation or a drag operation. When the user presses the rocker button, it is set to false; when the user's drag distance on the rocker exceeds the dead zone, it is set to true.
DeadZone (dead zone): configured by the game designer; a threshold on the drag distance used to distinguish whether a touch operation on the rocker in the wheel area is a click operation or a drag operation. The edge of the dead zone is shown schematically by the dotted circle 40 in fig. 15.
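A sketch of how these three variables can be used to distinguish a click from a drag follows; the concrete DEAD_ZONE value and the method names are assumptions, not values taken from this application:

```python
import math

# A sketch of how the three variables above can distinguish a click from a
# drag. The concrete DEAD_ZONE value and the method names are assumptions.

DEAD_ZONE = 20.0  # drag-distance threshold configured by the game designer

class RockerInput:
    def __init__(self):
        self.is_moved = False            # IsMoved
        self.camera_offset = (0.0, 0.0)  # CameraOffset (x, z)

    def on_press(self):
        self.camera_offset = (0.0, 0.0)  # clear the previous offset
        self.is_moved = False            # treat the touch as a click for now

    def on_move(self, p1, p2):
        # p1: center of the wheel area; p2: current rocker position (UI coordinates).
        if math.dist(p2, p1) >= DEAD_ZONE:
            self.is_moved = True         # the touch has become a drag

rocker = RockerInput()
rocker.on_press()
rocker.on_move((0.0, 0.0), (25.0, 0.0))
print(rocker.is_moved)  # True: the drag distance exceeded the dead zone
```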
As shown in fig. 16, the method for displaying the virtual environment screen includes the following steps:
step 1601, pressing a rocker of the visual field adjusting control;
The user presses the rocker of the view adjustment control, which triggers a touch start event on the touch screen. The touch start event carries the two-dimensional coordinates of the pressed location on the touch screen.
Step 1602, resetting the offset value of the camera model to zero;
the client resets the offset value CameraOffset of the camera model to zero. This clears the offset value of the previous camera model.
Step 1603, IsMoved = false;
and the client sets the value of IsMoved to false.
Step 1604, dragging a rocker of the visual field adjusting control;
the user drags a rocker of the visual field adjusting control, and continuously triggers (according to the reporting frequency of touch) a touch moving event on the touch screen. Each touch movement event carries two-dimensional coordinates of the user's touch location on the touch screen.
Step 1605, assigning the position of the dragged rocker to be P2;
and the client assigns the two-dimensional coordinate of the dragged rocker on the touch screen to P2.
Step 1606, determining whether |P2-P1| is greater than or equal to the dead zone radius, or whether IsMoved == true;
p1 is the two-dimensional coordinate of the touch screen where the center point of the wheel area is located.
In the process of dragging the rocker, the client judges whether the displacement |P2-P1| of the dragged rocker's position relative to the center point of the wheel area reaches the dead zone threshold, that is, whether |P2-P1| ≥ DeadZone.
If the dragging distance reaches the dead zone threshold, executing step 1607; if the drag distance does not reach the dead zone threshold, the IsMoved value is maintained.
Step 1607, IsMoved = true;
and the client sets the value of IsMoved to true.
Step 1608, calculating the offset value of the camera model from the vector (P2-P1).
First, the client reads five preset values:
LEFT: maximum offset value in the left direction;
RIGHT: maximum offset value in right direction;
DOWN: maximum offset value in the down direction;
UP: maximum offset value in the up direction;
MAX _ RADIUS: maximum drag distance of the rocker in the area of the wheel.
Because the User Interface (UI) layer is a two-dimensional plane, (P2-P1) is a two-dimensional vector, while the anchor point position of the camera model lies in a three-dimensional scene, so the offset vector of the camera model is a three-dimensional vector. Assuming that the lens height of the camera model is kept constant, the y-coordinate offset component (the coordinate component corresponding to the lens height) in the anchor point position of the camera model is fixed at 0, and the lens offset is then calculated by the following steps:
1. Calculate the length of (P2-P1) and denote it as length;
2. The formula for calculating the offset value cameraOffset.x of the camera model on the x-axis (horizontal direction) is as follows:
If (p2-p1).x > 0, indicating a rightward shift:
cameraOffset.x=(p2-p1).x/MAX_RADIUS*RIGHT;
Otherwise, indicating a leftward shift:
cameraOffset.x=(p2-p1).x/MAX_RADIUS*LEFT.
3. The formula for calculating the offset value cameraOffset.z of the camera model on the z-axis (the vertical direction on the screen) is as follows:
If (p2-p1).y > 0, indicating an upward shift:
cameraOffset.z=(p2-p1).y/MAX_RADIUS*UP;
Otherwise, indicating a downward shift:
cameraOffset.z=(p2-p1).y/MAX_RADIUS*DOWN.
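Putting step 1608 together, a minimal sketch (the function signature, the screen-axis convention of x to the right and y upward, and the treatment of LEFT/RIGHT/UP/DOWN as configured values are assumptions; it simply mirrors the two formulas above):

def compute_camera_offset(p2, p1, max_radius, left_max, right_max, up_max, down_max):
    # Map the 2D drag vector (P2-P1) to a 3D camera offset; the y (lens height) component stays 0.
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    # x-axis (horizontal direction) component
    if dx > 0:
        offset_x = dx / max_radius * right_max   # rightward shift
    else:
        offset_x = dx / max_radius * left_max    # leftward shift
    # z-axis (vertical direction on the screen) component
    if dy > 0:
        offset_z = dy / max_radius * up_max      # upward shift
    else:
        offset_z = dy / max_radius * down_max    # downward shift
    return offset_x, 0.0, offset_z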
Referring to fig. 16 in combination, the client generally configures a height for the distance between the first virtual object (Actor) and the camera model, and configures an inclination angle for the downward viewing angle of the camera model. The client then calculates the default anchor point position of the camera model from the current position of the first virtual object. According to the game settings, the x-coordinate of the camera relative to the first virtual character does not change; only the y-coordinate and the z-coordinate need to change. The calculation formula for the camera model following the first virtual object in the virtual environment is as follows:
cameraPos.x=ActorPos.x;
cameraPos.y=ActorPos.y+height*cos(angle);
cameraPos.z=ActorPos.z-height*sin(angle);
After the default anchor point position cameraPos of the camera model is calculated, the final anchor point position of the camera model is obtained by adding the calculated offset value (cameraOffset.x, cameraOffset.z) to cameraPos, which achieves the lens offset effect on the camera model:
cameraPos=cameraPos+cameraOffset。
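A compact sketch of this follow-plus-offset calculation (illustrative only; height and angle are the configured values described above, angle is assumed to be in radians, and the y component of cameraOffset is 0 so the lens height is unchanged):

import math

def camera_anchor_position(actor_pos, camera_offset, height, angle):
    # Default anchor position: the camera model follows the first virtual object (Actor).
    default_x = actor_pos[0]
    default_y = actor_pos[1] + height * math.cos(angle)
    default_z = actor_pos[2] - height * math.sin(angle)
    # Final anchor position: default anchor position plus the rocker-driven offset.
    return (default_x + camera_offset[0],
            default_y + camera_offset[1],
            default_z + camera_offset[2])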
The arrangement of the x-axis, the y-axis and the z-axis in the three-dimensional virtual environment is shown in fig. 4.
Fig. 18 is a block diagram illustrating a display apparatus of a virtual environment screen according to an exemplary embodiment of the present application. The device includes:
a display module 1820, configured to display a first virtual environment picture, where the first virtual environment picture is a picture obtained by observing the virtual environment with a first position relative to a first virtual object as an observation center;
an adjustment module 1840 for adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to an adjustment instruction of the observation center;
the display module 1820 is configured to display a second virtual environment picture, where the second virtual environment picture is a picture obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
In an optional embodiment of the present application, a visual field adjustment control is displayed on the first virtual environment screen, where the visual field adjustment control includes a rocker and a wheel disc region, and the rocker is located in the wheel disc region;
the adjusting module 1840 is further configured to, in response to a dragging instruction triggered when the rocker is dragged in the wheel disk region, adjust the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object according to a position of the dragged rocker in the wheel disk region.
In an optional embodiment of the present application, the adjusting module 1840 is further configured to, in response to a dragging instruction triggered when the rocker is dragged in the wheel disc region, calculate a wheel disc lateral offset value and a wheel disc longitudinal offset value of a position of the dragged rocker in the wheel disc region relative to a center position of the wheel disc region;
determining a first camera offset value according to the wheel disc transverse offset value, and determining a second camera offset value according to the wheel disc longitudinal offset value;
calculating a first anchor point position of a camera model corresponding to the first virtual object by taking the position of the first virtual object in the virtual environment as a reference;
offsetting the first anchor point position according to the first camera offset value and the second camera offset value, and calculating to obtain a second anchor point position of the camera model;
offsetting the camera model according to the second anchor point position, the offset observation center of the camera model being a second position relative to the first virtual object.
In an alternative embodiment of the present application, the wheel lateral offset distance and the first camera offset value are in a positive correlation;
the wheel longitudinal offset distance and the second camera offset value are in a positive correlation.
In an optional embodiment of the present application, the adjusting module 1840 is further configured to, in response to a change in the position of the first virtual object in the virtual environment, perform the calculation again to obtain the second position relative to the first virtual object according to the offset direction and the second offset distance, with reference to the position of the first virtual object in the virtual environment.
In an optional embodiment of the present application, the drag instruction includes at least two sub-instructions ordered in time; the adjusting module 1840 is further configured to, in response to receiving a first sub-instruction of the drag instruction, reset the observation center to the position where the first virtual object is located in the virtual environment.
In an optional embodiment of the present application, the drag instruction includes at least two sub-instructions ordered in time; the display module 1820 is further configured to switch, in response to receiving the first sub-instruction of the dragging instruction, the view adjustment control from a default display form to a wheel disc display form, where the wheel disc display form includes the rocker and the wheel disc region;
wherein the display area of the default display form is smaller than the display area of the wheel display form.
It should be noted that: the display device of the virtual environment screen provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the display apparatus of the virtual environment picture provided by the above embodiment and the display method embodiment of the virtual environment picture belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
The application also provides a computer device (terminal or server), which includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the display method of the virtual environment picture provided by the above method embodiments. It should be noted that the computer device may be a computer device as provided in fig. 19 below.
Fig. 19 shows a block diagram of a computer device 1900 according to an exemplary embodiment of the present application. The computer device 1900 may be: a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Computer device 1900 may also be referred to by other names such as user device, portable computer device, laptop computer device, desktop computer device, and so on.
Generally, computer device 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1902 is configured to store at least one instruction for execution by the processor 1901 to implement a method of displaying a virtual environment screen provided by method embodiments herein.
In some embodiments, computer device 1900 may also optionally include: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a touch screen display 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1904 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above the surface of the display screen 1905. The touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display 1905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1905 may be one, providing the front panel of computer device 1900; in other embodiments, display 1905 may be at least two, each disposed on a different surface of computer device 1900 or in a folded design; in still other embodiments, display 1905 may be a flexible display disposed on a curved surface or on a folding surface of computer device 1900. Even more, the display 1905 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1905 may be made of LCD (Liquid Crystal Display), OLED (organic light-Emitting Diode), or the like.
The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of a computer apparatus, and a rear camera is disposed on a rear surface of the computer apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera head assembly 1906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1901 for processing, or inputting the electric signals into the radio frequency circuit 1904 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 1900 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuitry 1904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
The Location component 1908 is used to locate the current geographic Location of the computer device 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the Global Positioning System (GPS) of the United States, the Beidou system of China, or the Galileo system of the European Union.
Power supply 1909 is used to provide power to the various components in computer device 1900. The power source 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the computer apparatus 1900. For example, the acceleration sensor 1911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1901 may control the touch screen 1905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the computer device 1900, and the gyro sensor 1912 may cooperate with the acceleration sensor 1911 to acquire a 3D motion of the user with respect to the computer device 1900. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1913 may be disposed on a side bezel of computer device 1900 and/or on a lower layer of touch display 1905. When the pressure sensor 1913 is disposed on the side frame of the computer device 1900, a holding signal of the user on the computer device 1900 can be detected, and the processor 1901 can perform left/right-hand recognition or quick operations based on the holding signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display 1905, the processor 1901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1914 is configured to collect a fingerprint of the user, and the processor 1901 identifies the user according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Fingerprint sensor 1914 may be disposed on the front, back, or side of computer device 1900. When a physical button or vendor Logo is provided on computer device 1900, fingerprint sensor 1914 may be integrated with the physical button or vendor Logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch screen 1905 based on the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the intensity of the ambient light collected by the optical sensor 1915.
Proximity sensor 1916, also known as a distance sensor, is typically disposed on the front panel of computer device 1900. Proximity sensor 1916 is used to capture the distance between the user and the front of computer device 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually decreases, the touch display 1905 is controlled by the processor 1901 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1916 detects that the distance between the user and the front of the computer device 1900 gradually increases, the touch display 1905 is controlled by the processor 1901 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 19 is not intended to be limiting of computer device 1900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the one or more programs are used to perform the display method of the virtual environment picture provided in the embodiments of the present application.
The application provides a computer-readable storage medium, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the display method of the virtual environment picture provided by the above method embodiments.
The present application further provides a computer program product, which when running on a computer, causes the computer to execute the method for displaying a virtual environment picture provided by the above method embodiments.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for displaying a virtual environment picture, the method comprising:
displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
adjusting the center of observation from a first position relative to the first virtual object to a second position relative to the first virtual object in response to the adjustment instruction of the center of observation;
and displaying a second virtual environment screen, wherein the second virtual environment screen is a screen obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
2. The method of claim 1, wherein a visual field adjustment control is displayed on the first virtual environment screen, the visual field adjustment control comprising a rocker and a wheel area, the rocker being located in the wheel area;
the adjusting the center of observation from a first position relative to the first virtual object to a second position relative to the first virtual object in response to the adjusting instructions of the center of observation includes:
in response to a dragging instruction triggered when the rocker is dragged in the wheel area, the observation center is adjusted from a first position relative to the first virtual object to a second position relative to the first virtual object according to the position of the dragged rocker in the wheel area.
3. The method of claim 2, wherein the adjusting the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object according to the dragged position of the joystick in the wheel area in response to a drag instruction triggered when the joystick is dragged in the wheel area comprises:
responding to a dragging instruction triggered when the rocker is dragged in the wheel disc area, and calculating a wheel disc transverse offset value and a wheel disc longitudinal offset value of the position of the dragged rocker in the wheel disc area relative to the center position of the wheel disc area;
determining a first camera offset value according to the wheel disc transverse offset value, and determining a second camera offset value according to the wheel disc longitudinal offset value;
calculating a first anchor point position of a camera model corresponding to the first virtual object by taking the position of the first virtual object in the virtual environment as a reference;
offsetting the first anchor point position according to the first camera offset value and the second camera offset value, and calculating to obtain a second anchor point position of the camera model;
offsetting the camera model according to the second anchor point position, the offset observation center of the camera model being a second position relative to the first virtual object.
4. The method of claim 3,
the wheel disc lateral offset distance and the first camera offset value are in positive correlation;
the wheel longitudinal offset distance and the second camera offset value are in a positive correlation.
5. The method of claim 3, further comprising:
and in response to the position of the first virtual object in the virtual environment changing, calculating a second position relative to the first virtual object according to the offset direction and the second offset distance by taking the position of the first virtual object in the virtual environment as a reference again.
6. The method of any of claims 2 to 5, wherein the drag command comprises: at least two sub-instructions ordered in time, the method further comprising:
in response to receiving a first sub-instruction of the drag instruction, resetting the center of view to a location in the virtual environment where the first virtual object is located.
7. The method of any of claims 2 to 5, wherein the drag command comprises: at least two sub-instructions ordered in time, the method further comprising:
in response to receiving a first sub-instruction in the dragging instruction, switching the view adjusting control from a default display form to a wheel disc display form, wherein the wheel disc display form comprises the rocker and the wheel disc area;
wherein the display area of the default display form is smaller than the display area of the wheel display form.
8. An apparatus for displaying a virtual environment screen, the apparatus comprising:
the display module is used for displaying a first virtual environment picture, wherein the first virtual environment picture is a picture obtained by observing the virtual environment by taking a first position relative to a first virtual object as an observation center;
an adjustment module to adjust the observation center from a first position relative to the first virtual object to a second position relative to the first virtual object in response to an adjustment instruction for the observation center;
the display module is configured to display a second virtual environment picture, where the second virtual environment picture is a picture obtained by observing the virtual environment with a second position relative to the first virtual object as an observation center.
9. A computer device comprising a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and wherein the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for displaying a virtual environment screen according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the readable storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the display method of the virtual environment screen according to any one of claims 1 to 7.
CN202010437875.3A 2020-05-21 2020-05-21 Virtual environment picture display method, device, equipment and medium Active CN111603770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010437875.3A CN111603770B (en) 2020-05-21 2020-05-21 Virtual environment picture display method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111603770A true CN111603770A (en) 2020-09-01
CN111603770B CN111603770B (en) 2023-05-05

Family

ID=72194464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010437875.3A Active CN111603770B (en) 2020-05-21 2020-05-21 Virtual environment picture display method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111603770B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1125609A2 (en) * 2000-01-21 2001-08-22 Sony Computer Entertainment Inc. Entertainment apparatus, storage medium and object display method
JP2013158456A (en) * 2012-02-03 2013-08-19 Konami Digital Entertainment Co Ltd Game device, game system, control method of game device and program
JP6223533B1 (en) * 2016-11-30 2017-11-01 株式会社コロプラ Information processing method and program for causing computer to execute information processing method
CN107168611A (en) * 2017-06-16 2017-09-15 网易(杭州)网络有限公司 Information processing method, device, electronic equipment and storage medium
CN108786110A (en) * 2018-05-30 2018-11-13 腾讯科技(深圳)有限公司 Gun sight display methods, equipment and storage medium in virtual environment
US20190377473A1 (en) * 2018-06-06 2019-12-12 Sony Interactive Entertainment Inc. VR Comfort Zones Used to Inform an In-VR GUI Editor
CN108717733A (en) * 2018-06-07 2018-10-30 腾讯科技(深圳)有限公司 View angle switch method, equipment and the storage medium of virtual environment
JP2020062376A (en) * 2019-07-18 2020-04-23 株式会社セガゲームス Information processor and program
CN110665222A (en) * 2019-09-29 2020-01-10 网易(杭州)网络有限公司 Aiming direction control method and device in game, electronic equipment and storage medium
CN110575671A (en) * 2019-10-08 2019-12-17 网易(杭州)网络有限公司 Method and device for controlling view angle in game and electronic equipment
CN111035918A (en) * 2019-11-20 2020-04-21 腾讯科技(深圳)有限公司 Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN110917616A (en) * 2019-11-28 2020-03-27 腾讯科技(深圳)有限公司 Orientation prompting method, device, equipment and storage medium in virtual scene

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111467802B (en) * 2020-04-09 2022-02-22 腾讯科技(深圳)有限公司 Method, device, equipment and medium for displaying picture of virtual environment
US11878242B2 (en) 2020-04-09 2024-01-23 Tencent Technology (Shenzhen) Company Limited Method and apparatus for displaying virtual environment picture, device, and storage medium
CN111467802A (en) * 2020-04-09 2020-07-31 腾讯科技(深圳)有限公司 Method, device, equipment and medium for displaying picture of virtual environment
CN114928673B (en) * 2020-09-21 2023-04-07 腾讯科技(深圳)有限公司 Shot picture display method, terminal and storage medium
WO2022057627A1 (en) * 2020-09-21 2022-03-24 腾讯科技(深圳)有限公司 Captured image display method and apparatus, terminal, and storage medium
CN114928673A (en) * 2020-09-21 2022-08-19 腾讯科技(深圳)有限公司 Shot picture display method, terminal and storage medium
CN112118358A (en) * 2020-09-21 2020-12-22 腾讯科技(深圳)有限公司 Shot picture display method, terminal and storage medium
CN112619140A (en) * 2020-12-18 2021-04-09 网易(杭州)网络有限公司 Method and device for determining position in game and method and device for adjusting path
CN112619140B (en) * 2020-12-18 2024-04-26 网易(杭州)网络有限公司 Method and device for determining position in game and method and device for adjusting path
CN112667220A (en) * 2021-01-27 2021-04-16 北京字跳网络技术有限公司 Animation generation method and device and computer storage medium
CN113827969A (en) * 2021-09-27 2021-12-24 网易(杭州)网络有限公司 Interaction method and device for game objects
CN114307145A (en) * 2022-01-04 2022-04-12 腾讯科技(深圳)有限公司 Picture display method, device, terminal, storage medium and program product
CN114307145B (en) * 2022-01-04 2023-06-27 腾讯科技(深圳)有限公司 Picture display method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN111603770B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN111589128B (en) Operation control display method and device based on virtual scene
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN111603770B (en) Virtual environment picture display method, device, equipment and medium
CN112494955B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111589127B (en) Control method, device and equipment of virtual role and storage medium
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN111467802B (en) Method, device, equipment and medium for displaying picture of virtual environment
CN112402949B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111035918A (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111589146A (en) Prop operation method, device, equipment and storage medium based on virtual environment
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN112691370B (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN112704876B (en) Method, device and equipment for selecting virtual object interaction mode and storage medium
CN113577765B (en) User interface display method, device, equipment and storage medium
CN111921194A (en) Virtual environment picture display method, device, equipment and storage medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112843679A (en) Skill release method, device, equipment and medium for virtual object
CN112569600A (en) Path information transmission method in virtual scene, computer device and storage medium
CN112083848A (en) Method, device and equipment for adjusting position of control in application program and storage medium
CN113559495A (en) Method, device, equipment and storage medium for releasing skill of virtual object
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN112169330B (en) Method, device, equipment and medium for displaying picture of virtual environment
CN112156471B (en) Skill selection method, device, equipment and storage medium of virtual object
CN113599819A (en) Prompt message display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028090

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant