WO2019080870A1 - Display method and apparatus for interactive interface, storage medium, and electronic apparatus - Google Patents

Display method and apparatus for interactive interface, storage medium, and electronic apparatus

Info

Publication number
WO2019080870A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
display
interactive interface
mesh
data
Prior art date
Application number
PCT/CN2018/111650
Other languages
English (en)
French (fr)
Inventor
沈超 (Shen Chao)
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2019080870A1 publication Critical patent/WO2019080870A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements

Definitions

  • the present application relates to the field of the Internet, and in particular, to a display method and apparatus for an interactive interface, a storage medium, and an electronic device.
  • Virtual Reality (VR), also known as virtual technology or a virtual environment, is a virtual three-dimensional world generated by computer simulation; it provides the user with simulated visual and other sensory input so that the user feels immersed and can observe objects in the three-dimensional space in real time and without restriction.
  • VR can be implemented by means of "software + hardware devices".
  • Common VR software includes Steam and Oculus. Steam is a digital distribution, digital rights management, and social system used for the distribution, sale, and subsequent updating of digital software and games; it supports the Windows, OS X, and Linux operating systems and is currently the world's largest PC digital game platform.
  • Oculus VR is a virtual reality technology company.
  • Steam's hardware product is Steam VR, a full-featured, room-scale, 360-degree virtual reality experience.
  • The development kit includes a head-mounted display, two single-hand controllers, and a positioning system that simultaneously tracks the display and the controllers in space; together with the other equipment available on Steam, it enables a high-end virtual reality experience.
  • Oculus VR's hardware products include the Oculus Rift, a realistic virtual reality head-mounted display currently on the market, and the Oculus Touch.
  • The Oculus Touch is the motion-capture controller of the Oculus Rift. Used together with the spatial positioning system, the Oculus Touch has a bracelet-like design that lets the camera track the user's hand; its sensors also track finger movements and give the user a natural way to grasp.
  • With the above software and hardware, the user can experience a virtual reality scene; however, during that experience, when the user touches an object in the virtual reality scene, the scene gives no feedback on the user's touch operation, so the user cannot tell whether the object has been touched.
  • The embodiments of the present application provide a display method and apparatus for an interactive interface, a storage medium, and an electronic device, so as to at least solve the technical problem that the related art cannot give feedback on a user's operation.
  • According to one aspect, a display method of an interactive interface includes: a terminal displays a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application; the terminal acquires an operation instruction of a first account, where the first account is an account of the target application and the operation instruction is used to instruct a first operation to be performed on a target object on the three-dimensional interactive interface; and the terminal, in response to the operation instruction, performs the first operation on the target object and displays the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  • According to another aspect, a display apparatus of an interactive interface, applied to a terminal, includes: a first display unit configured to display a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application; an acquiring unit configured to acquire an operation instruction of a first account, where the first account is an account of the target application and the operation instruction is used to instruct a first operation to be performed on a target object on the three-dimensional interactive interface; and a second display unit configured to perform the first operation on the target object in response to the operation instruction and display the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  • In the embodiments of the present application, the terminal displays the three-dimensional interactive interface according to the first display manner in the virtual reality scene of the target application, where the three-dimensional interactive interface is used to interact with the target application; the terminal acquires the operation instruction of the first account, the operation instruction instructing a first operation to be performed on the target object on the three-dimensional interactive interface; and the terminal performs the first operation on the target object in response to the operation instruction and displays the three-dimensional interactive interface according to the second display manner, which identifies the first operation by differing from the first display manner. Because a display manner different from the one used when nothing is touched is adopted, the user's first operation is fed back to the user; this solves the technical problem that the related art cannot give feedback on a user's operation and achieves the technical effect of providing such feedback.
  • FIG. 1 is a schematic diagram of a hardware environment of a display method of an interactive interface according to an embodiment of the present application;
  • FIG. 2 is a flowchart of an optional display method of an interactive interface according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of optional panel parameters according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of optional material information according to an embodiment of the present application;
  • FIG. 8 is a flowchart of an optional display method of an interactive interface according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an optional display apparatus of an interactive interface according to an embodiment of the present application;
  • FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application.
  • HUD (Head-Up Display): a flight aid used on aircraft that can also be used in other fields, such as games.
  • a method embodiment of a display method of an interactive interface is provided.
  • the display method of the interaction interface may be applied to a hardware environment formed by the server 102, the terminal 104, and the VR glasses 106 as shown in FIG. 1.
  • the server 102 is connected to the terminal 104 through a network, including but not limited to a wide area network, a metropolitan area network, or a local area network.
  • The terminal 104 may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like.
  • the display method of the interactive interface in the embodiment of the present application may be executed by the server 102, may be executed by the terminal 104, or may be jointly executed by the server 102 and the terminal 104.
  • the method for displaying the interactive interface of the terminal 104 in the embodiment of the present application may also be performed by a client installed thereon, and the execution result of the method is displayed by the VR glasses 106.
  • the program code corresponding to the method of the present application may be directly executed on the terminal.
  • FIG. 2 is a flowchart of a method for displaying an optional interactive interface according to an embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
  • Step S202 The terminal displays a three-dimensional interactive interface according to the first display manner in the virtual reality scenario of the target application, where the three-dimensional interactive interface is used to interact with the target application.
  • The above virtual reality scene can be implemented by means of "software + hardware device", where the software realizes the virtual reality scene and the hardware device displays the three-dimensional interactive interface.
  • the first display manner described above may be a display manner of a default three-dimensional interactive interface in the target application.
  • the target applications described above include, but are not limited to, social applications and game applications.
  • Step S204 The terminal acquires an operation instruction of the first account, where the first account is an account of the target application, and the operation instruction is used to indicate that the first operation is performed on the target object on the three-dimensional interaction interface.
  • The first account is an account identifying a virtual object in the virtual reality scene, and the actions of the virtual object in the virtual reality scene are directed by the real-world user of the first account; for example, the virtual object performs an operation as instructed by the user, or the virtual object follows the user's movements in reality.
  • the three-dimensional interactive interface may include one or more operation controls (such as operation buttons, slide bars, etc.), and the area in which each operation control is located may be understood as a target object.
  • the operation instruction is an instruction generated when the virtual object touches the three-dimensional interactive interface.
  • The first operation performed on the target object may be a click, a double click, a drag, or another operation that the target application can recognize and execute.
  • The first operation can be used to set the target application, i.e., the result of the first operation can be a setting of the target application (e.g., setting application parameters).
  • the first operation may also be used to control the operation of the target application, that is, the result of the first operation may control the target application to enter the next interface, or control the virtual character in the game application to complete the target task, and the like. This embodiment is not specifically limited.
  • Step S206 the terminal performs a first operation on the target object in response to the operation instruction, and displays a three-dimensional interactive interface according to the second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  • When the virtual object in the virtual reality scene touches the three-dimensional interactive interface, the interface is displayed in a manner different from when it is untouched, namely the second display manner. Displaying in the second display manner amounts to feedback on the first operation, that is, feedback on the user's operation: when the user observes the change in display manner, the user knows that the first operation has touched the three-dimensional interactive interface.
  • In the above steps, the terminal displays the three-dimensional interactive interface in the first display manner in the virtual reality scene of the target application, the interface being used to interact with the target application; the terminal acquires the operation instruction of the first account, which instructs a first operation on the target object on the three-dimensional interactive interface; and the terminal performs the first operation in response to the instruction and displays the interface in the second display manner, which identifies the first operation by differing from the first display manner. Displaying in a manner different from the untouched state feeds the user's first operation back to the user, solving the technical problem that the related art cannot give feedback on a user's operation and achieving the technical effect of providing such feedback.
  • Console games, PC games, and the like are non-touch-screen games in a 2D display environment. Whether they are controlled with a gamepad, a keyboard, or a mouse, there is no real-world touch mechanism: the player is always holding the control device (touching is converted into operating the control), so in effect the player is touching all the time. For touches in the virtual world, such as the controlled character touching another object, the usual design practice for conveying a sense of collision is to rely on visual and auditory cues, often reinforced by gamepad vibration. In general, however, real touch feedback and in-game touch feedback remain split, because the player never actually completes the touch.
  • Next consider touch-screen games, represented by mobile games. The biggest change they bring is that the player really performs the touch action; because the 2D screen is physically real, the player receives real touch feedback. This is a genuine touch interaction mechanism and the biggest advantage of touch-screen games: playing click-based games on a touch screen feels very real to players, because the accuracy and naturalness of the operation are exactly what players want.
  • Even so, real touch feedback and in-game touch feedback are still split, because both the virtual world and the real world are 3D worlds, whereas in mobile games the 2D screen acts as a window that folds the user's perception like a mirror: the player's real finger touches the screen, and a mapping relationship is then needed to affect the virtual world, which impairs the player's immersion and empathy.
  • The applicant has realized that the greatest advantage of a VR environment is that the user can perceive the changes and feel of 3D space, because the user's real 3D spatial position corresponds to a position in the virtual world.
  • However, current VR devices have no ability to simulate the sense of touch.
  • For example, the hand in the virtual world should always follow the position of the real hand; but if a virtual table in the virtual world blocks the player's hand while no such table exists in the real world, the player's real hand can reach the position that the table occupies in the virtual world.
  • A common (second) processing method is that the hand in the virtual world stays at the edge of the table instead of being inserted into the table along with the real hand (the first method is to let the hand be inserted into the table and jitter there).
  • A third way is the most realistic but changes the game scene, and is not applicable to components that should not affect the game logic, such as a 3D panel used to display the UI; the second method can be regarded as a kind of visual reinforcement, but because the virtual hand the player sees is inconsistent with the real hand position the player feels, it provokes a sense of dissonance; the first method is simpler and cruder, and its visual feedback is not comfortable. All three kinds of feedback therefore degrade the user experience to a greater or lesser extent.
  • In view of this, the present application also provides a feedback method in a VR environment, namely a method for implementing a visual feedback effect based on a finger click operation.
  • A visual feedback effect is achieved: when the player's finger clicks on a plane, the plane shows a dynamic rippling effect, so that visual feedback is used to simulate and express tactile feedback.
  • The advantage of this method is that the player's experience in the virtual world feels more realistic.
  • This method is particularly suitable for a 3D panel that displays UI content in the virtual world; for interaction with such a panel, it can visually reinforce the accuracy of the user's click and let the user know exactly where the click landed.
  • the terminal displays the three-dimensional interactive interface in the first display manner in the virtual reality scene of the target application.
  • A first display manner of the three-dimensional interactive interface, that is, the default display manner, is shown.
  • The user or the virtual object can set the target application, for example set a certain function, and the function can be embodied in the form of an icon (that is, the target object).
  • the operation instruction of the terminal acquiring the first account includes, but is not limited to, the following implementation manners:
  • In one manner, the user's location data is collected in real time by a positioning device, and the user's actions are mapped onto the virtual object according to the collected location data; if an action of the virtual object touches the three-dimensional interactive interface, the above operation instruction is triggered.
  • In another manner, an input device, which may be part of the hardware device or a device connected to it, is used: the user controls the virtual object in the virtual reality scene through the input device, and when the virtual object is controlled by the input device to operate on the three-dimensional interactive panel, the above operation instruction is generated.
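  • As a rough illustration of the first manner above, the following sketch (C++; all names, types, and the z = 0 panel model are illustrative assumptions, not details from the application) generates an operation instruction when the tracked fingertip crosses the panel surface, with an initial strength derived from the finger's speed per unit time:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

// Hypothetical panel model: a rectangle lying in the plane z = 0,
// with the user's hand approaching from the +z side.
struct Panel { float width, height; };

struct OperationInstruction {
    float u, v;      // normalized touch position on the panel
    float strength;  // initial operation strength
};

// One tracked-position sample per frame: if the fingertip crossed the
// panel surface between the previous and current sample, trigger the
// operation instruction at the crossing point.
std::optional<OperationInstruction> DetectTouch(const Panel& p,
                                                Vec3 prev, Vec3 cur,
                                                float dt) {
    if (prev.z > 0.0f && cur.z <= 0.0f &&
        cur.x >= 0.0f && cur.x <= p.width &&
        cur.y >= 0.0f && cur.y <= p.height) {
        // Speed of the finger's movement per unit time sets the
        // initial intensity (cf. step S802 later in the text).
        float speed = std::fabs(cur.z - prev.z) / dt;
        return OperationInstruction{cur.x / p.width, cur.y / p.height, speed};
    }
    return std::nullopt;  // no touch generated this frame
}
```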
  • In step S206, the terminal performs the first operation on the target object in response to the operation instruction and displays the three-dimensional interactive interface in the second display manner.
  • the second display mode identifies the first operation by adopting a display manner different from the first display manner, including but not limited to being embodied by the following forms:
  • The second display manner may differ from the first display manner in the colors used, such as the background color, the font color, or the overall color of the three-dimensional interactive interface;
  • the second display manner may differ from the first display manner in the background image used;
  • the second display manner and the first display manner may display the content of the three-dimensional interactive interface differently.
  • Optionally, when the second display manner indicates a three-dimensional texture, the terminal forms the three-dimensional texture at least in a first region of the three-dimensional interactive interface and displays the three-dimensional interactive interface with the three-dimensional texture formed at least in that region.
  • The first region is the region where the target object is located on the three-dimensional interactive interface, that is, the position clicked by the virtual object.
  • According to the indication of the second display manner, the terminal may display the three-dimensional interactive interface with the preset three-dimensional texture formed at least in the first region as follows: the interface with the three-dimensional texture is displayed over a preset time period.
  • The distance between the three-dimensional texture displayed at a first moment in the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, the second moment being later than the first moment; in other words, the texture spreads outward over time.
  • The three-dimensional texture may be displayed in the vicinity of the target object; the display effect is better if the texture is displayed centered on the target object.
  • The above three-dimensional texture includes a three-dimensional corrugation (ripple), and displaying the three-dimensional interactive interface with the three-dimensional texture within the preset time period can be implemented by the following steps:
  • Step S2062: the terminal displays, at the first moment, the three-dimensional interactive interface formed with a first three-dimensional corrugation, the first three-dimensional corrugation being centered on the target object.
  • Step S2062 can be implemented by the following sub-steps (Step 1 and Step 2):
  • Step 1 The terminal acquires a first data set and a second data set.
  • The first data set includes a plurality of first data, where each first data indicates the position at which one vertex of the mesh of the mesh panel is located at the first moment.
  • The mesh panel is configured to display the three-dimensional interactive interface in a second area, the second area being the area in which the three-dimensional interactive interface is displayed according to the first display manner; the second data set includes a plurality of second data, each second data indicating the position at which the normal of one vertex of the mesh of the mesh panel is located at the first moment.
  • The initial offset of the position of the target vertex (denoted the first vertex) under the influence of the operation strength can be pre-configured, and the range of influence of the operation strength expands to other regions (i.e., the vertices reached by the diffusing ripple, denoted second vertices, whose ripple radius is larger than that of the first vertex), where the resulting offset is smaller than the initial offset described above.
  • As time passes, the influence of the operation strength decays; that is, the offset of the first vertex's position falls below the initial offset. The decay can be specifically configured.
  • An optional configuration is as follows:
  • The offset y = y0 - a·t, where y0 is the initial offset, t is the time elapsed since the click on the three-dimensional interactive interface, and a is a constant indicating the amount by which the offset decays per second.
  • the offset can also be nonlinearly related to time, such as a quadratic curve relationship, a logarithmic curve relationship, and the like.
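  • As a minimal sketch of these decay options (the linear rule above plus one assumed nonlinear alternative; the function names, the clamping at zero, and the time constant are our illustrative choices, not details from the application):

```cpp
#include <algorithm>
#include <cmath>

// Remaining offset of the first vertex t seconds after the click.
// y0 is the initial offset produced by the operation strength and
// a is the amount of offset lost per second.
float LinearOffset(float y0, float t, float a) {
    return std::max(0.0f, y0 - a * t);  // y = y0 - a*t, clamped at rest
}

// An assumed nonlinear alternative in the spirit of the curve
// relationships mentioned above: exponential falloff with time
// constant tau.
float ExponentialOffset(float y0, float t, float tau) {
    return y0 * std::exp(-t / tau);
}
```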
  • The position offset of each vertex can be obtained as described above.
  • The first position is determined according to the position offset and a second position, the second position being the position of the target vertex before the offset.
  • The first data indicates the position data of the position to which the target vertex has been shifted.
  • the following data optimization process can be performed.
  • The first data of each vertex is averaged with the first data of its adjacent vertices; for example, the first data of a given target vertex (denoted a third vertex) is averaged with that of adjacent vertices nearer the target object as well as with that of adjacent vertices farther from it (a concrete sketch of this neighbour averaging appears after the description of step S804 below).
  • The second data indicates the position of the normal of one vertex of the mesh at the first moment, and that position may be expressed as the vector of the normal.
  • The second data may be a vector having a defined relationship with the four vectors, such as the average of the four vectors, or a vector having some other defined relationship with those four vectors.
  • Optionally, data optimization processing may also be performed on the second data; the specific processing is similar to the optimization of the first data and is not repeated here.
  • Step 2: render the mesh of the mesh panel according to the first data set and the second data set to display the three-dimensional interactive interface formed with the first three-dimensional corrugation, where the material of the mesh of the mesh panel is set to a liquid and the first three-dimensional texture is the texture produced by perturbing the liquid.
  • Optionally, rendering the mesh of the mesh panel according to the first data set and the second data set includes: determining light and shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, the target mesh being the mesh currently to be rendered in the mesh panel; and rendering the material of the target mesh according to the light and shadow information.
  • The light and shadow information includes one or more of the incident direction of the light, the reflection angle, the refraction angle, and the like.
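  • To make this concrete, here is a hedged sketch of deriving a per-vertex normal (the second data) from the displaced height field, plus one ingredient of the light and shadow information, a diffuse term from the incident light direction. The central-difference scheme and the Lambert term are standard techniques chosen for illustration, not details taken from the application:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Approximate the normal at grid vertex (x, y) from the height
// (offset) field by central differences; cell is the grid spacing.
Vec3 VertexNormal(const float* height, int w, int h, int x, int y, float cell) {
    auto H = [&](int i, int j) {
        i = std::clamp(i, 0, w - 1);  // clamp at the panel border
        j = std::clamp(j, 0, h - 1);
        return height[j * w + i];
    };
    float dx = (H(x + 1, y) - H(x - 1, y)) / (2.0f * cell);
    float dy = (H(x, y + 1) - H(x, y - 1)) / (2.0f * cell);
    return Normalize({-dx, -dy, 1.0f});  // the flat panel's normal is +z
}

// One ingredient of the light and shadow information: a diffuse
// (Lambert) term from the incident light direction and the normal.
float Diffuse(Vec3 n, Vec3 toLight) {
    float d = n.x * toLight.x + n.y * toLight.y + n.z * toLight.z;
    return d > 0.0f ? d : 0.0f;
}
```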
  • Step S2064: the terminal displays, at the second moment, the three-dimensional interactive interface formed with a second three-dimensional corrugation, the second three-dimensional corrugation being the corrugation formed after the first three-dimensional corrugation has diffused.
  • The implementation of step S2064 is similar to that of step S2062; the difference is that the first data and the second data are taken at the second moment, and the attenuation of the position offset needs to be considered.
  • In an optional embodiment, the terminal includes a 3D Mesh panel carrying the above operation control (i.e., the target object). The user can click on the operation control (as shown in FIG. 4), and the panel produces a disturbance effect (i.e., a ripple) at the place where it is touched, similar to the vibration produced when a finger touches a water surface.
  • The present application also provides an optional embodiment; the implementation of the above product is detailed from the technical side below.
  • FIG. 6 shows the parameters of the panel.
  • Unlike an ordinary panel, the panel used here has a much larger number of vertices: it contains 1000*512 mesh vertices and their material, instead of a simple square with only four vertices. The reason is that the ripple effect is produced by true changes in vertex positions, so enough vertices are needed to achieve it.
  • The second item shown in FIG. 6 is the material, named "Water Material Widget"; the concrete implementation of this material is shown in FIG. 7.
  • the final effect of this 3D panel can be seen in the game.
  • The main implementation includes three aspects: the first is Slate UI, which pastes the target UI Widget panel as a texture onto the Mesh; the second is to put the pre-computed normal map (corresponding to the second data set) to use; and the third is to use the pre-computed panel position perturbation map (corresponding to the first data set).
  • Slate is a cross-platform UI framework that can be used to make the UI of the application (such as UE4 Editor), the UI of the tool, or the UI of the game.
  • The Slate UI above is the in-game UI, equivalent to the game's HUD.
  • The UI Widget above is a basic UI component: just a rectangle that can be positioned on the screen as needed. The widget itself has an area that is not visible at runtime, which makes it an ideal container for other components.
  • The Mesh above is a grid mesh of the kind that can produce striking effects such as terrain and water surfaces.
  • The Mesh is mainly described by three parameters: the vertices, the triangles, and the number of segments.
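  • For illustration, a hypothetical sketch of assembling such a grid panel from those three parameters (the structure and types are our assumptions; the application's panel is authored in the engine rather than built like this):

```cpp
#include <cstdint>
#include <vector>

struct MeshPanel {
    std::vector<float>    positions;  // x, y, z for each vertex
    std::vector<uint32_t> indices;    // three vertex indices per triangle
};

// Build a flat panel of segsX * segsY cells, i.e. (segsX + 1) * (segsY + 1)
// vertices and segsX * segsY * 2 triangles. A dense grid (on the order of
// the 1000*512 vertices mentioned above) is what makes true per-vertex
// ripples possible.
MeshPanel BuildGridPanel(int segsX, int segsY, float width, float height) {
    MeshPanel m;
    for (int y = 0; y <= segsY; ++y)
        for (int x = 0; x <= segsX; ++x) {
            m.positions.push_back(width * x / segsX);
            m.positions.push_back(height * y / segsY);
            m.positions.push_back(0.0f);  // flat at rest
        }
    for (int y = 0; y < segsY; ++y)
        for (int x = 0; x < segsX; ++x) {
            uint32_t i = y * (segsX + 1) + x;  // lower-left vertex of the cell
            uint32_t j = i + segsX + 1;        // vertex directly above it
            m.indices.insert(m.indices.end(), {i, i + 1, j});
            m.indices.insert(m.indices.end(), {i + 1, j + 1, j});
        }
    return m;
}
```

  • Under this scheme, BuildGridPanel(999, 511, ...) would yield the 1000*512 vertices mentioned above.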
  • Step S801: when the finger touches the panel or leaves the panel, information such as the coordinates of the clicked or touched position is acquired.
  • Step S802: the intensity of the click is set; the initial intensity can be determined according to the speed of the finger's movement per unit time.
  • Step S803: the clicked point is drawn (i.e., Draw Material to Render Target A).
  • Step S804: the animation of each frame is drawn; this can be implemented in the following manner and is updated every frame.
  • At the "first frame" node (that is, the first frame in the game), the first thing done is to get the current texture that describes the vertex displacement information (the first data) of the grid panel, and then modify the texture so as to smooth the original data, so that the ripple spreads slowly outward like water until the surface is completely smooth.
  • Specifically, each position in the texture is averaged with the data around it, and the result is then rendered onto the texture (i.e., the image is updated); this is what the "Draw Material to Render Target A" step does.
  • In the first frame of the game, the only thing to do is "Draw Material to Render Target AN"; this step is similar to the one above.
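  • A hedged CPU-side sketch of this per-frame pass (the real implementation draws into GPU render targets through the engine's material system; the double buffer, the damping factor, and all names below are illustrative stand-ins):

```cpp
#include <vector>

// Two buffers stand in for the render target before and after the
// per-frame draw. Each frame: inject any new click, then relax the
// field by averaging every texel with its neighbours and damping
// slightly, so the ripple spreads outward and dies away like water.
struct RippleField {
    int w, h;
    std::vector<float> cur, next;

    RippleField(int w_, int h_)
        : w(w_), h(h_), cur(w_ * h_, 0.0f), next(w_ * h_, 0.0f) {}

    // Steps S801-S803: record the clicked point with its intensity.
    void Click(int x, int y, float intensity) { cur[y * w + x] += intensity; }

    // Step S804: one frame of the smoothing pass described above.
    void Step(float damping /* e.g. 0.98f */) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float sum = cur[y * w + x];
                int   n   = 1;
                if (x > 0)     { sum += cur[y * w + x - 1]; ++n; }
                if (x < w - 1) { sum += cur[y * w + x + 1]; ++n; }
                if (y > 0)     { sum += cur[(y - 1) * w + x]; ++n; }
                if (y < h - 1) { sum += cur[(y + 1) * w + x]; ++n; }
                next[y * w + x] = (sum / n) * damping;
            }
        cur.swap(next);  // the updated image becomes the current texture
    }
};
```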
  • The above prepares the data required by the material "Water Material_Widget"; the following three steps use the data prepared above with the 3D mesh panel to obtain the final desired effect.
  • The "update the vertex positions of the grid" step obtains the texture of grid-vertex displacement data computed above, fetches the corresponding texel of that map for each vertex position of the 3D panel, and then shifts the vertex of the 3D panel in world space according to that data. This is why, after the panel is clicked, it is seen to change shape like a water surface. Because the panel is transparent, the normals of the panel need to be updated in real time for the effect to display correctly.
  • The "update the mesh material" step obtains the normal map prepared above and applies it according to the positional correspondence.
  • The "update the grid map" step obtains the texture storing the current appearance of the UI Widget and pastes it directly onto the grid panel. In the game one then sees a UI grid panel that changes in response to finger-click interaction.
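  • As a hedged sketch of the first of these three steps (sampling the perturbation map per vertex and shifting the vertex in world space; in the actual product this happens in the engine's material and vertex pipeline, and the sampling helper and structures below are illustrative assumptions):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Illustrative stand-in for a prepared single-channel map.
struct Texture2D {
    int w, h;
    std::vector<float> texels;
    float Sample(float u, float v) const {  // nearest-neighbour lookup
        int x = static_cast<int>(u * (w - 1));
        int y = static_cast<int>(v * (h - 1));
        return texels[y * w + x];
    }
};

struct PanelVertex {
    Vec3  rest;   // position on the flat panel
    float u, v;   // texture coordinates into the prepared maps
    Vec3  pos;    // world-space position after displacement
};

// "Update the vertex positions of the grid": sample the perturbation
// map at each vertex and offset the vertex along the panel's rest
// normal (+z here), so the panel deforms like a water surface.
void UpdateVertices(std::vector<PanelVertex>& verts,
                    const Texture2D& displacementMap, float amplitude) {
    for (auto& v : verts) {
        float d = displacementMap.Sample(v.u, v.v);
        v.pos = {v.rest.x, v.rest.y, v.rest.z + amplitude * d};
    }
}
```

  • The other two steps would, in the same spirit, rebind the prepared normal map into the material and paste the current UI Widget texture onto the panel as its base image.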
  • This application mainly describes a method for implementing a visual feedback effect based on a finger click operation in a VR environment.
  • The virtual reality environment makes possible experiences that cannot be had in the real world, and in particular it effectively improves visual and auditory immersion.
  • However, current VR devices also have a serious disadvantage: the modes of human-computer input and output are very limited; in terms of output especially, besides the visual and the auditory there is only vibration.
  • The present application implements a visual feedback effect, so that when the player's finger clicks on a plane, the plane shows a dynamic rippling effect; visual feedback is thus used to simulate and express tactile feedback. By reinforcing feedback visually in this way, users can be more fully integrated into the virtual world.
  • The method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
  • The part of the technical solution of the present application that is essential, or that contributes over the related art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM).
  • The software product includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
  • FIG. 9 is a schematic diagram of an optional interactive interface display device according to an embodiment of the present application.
  • the device may include: a first display unit 91, an acquisition unit 93, and a second display unit 95.
  • the first display unit 91 is configured to display a three-dimensional interactive interface according to the first display manner in the virtual reality scene of the target application, where the three-dimensional interactive interface is used to interact with the target application.
  • The above virtual reality scene can be implemented by means of "software + hardware device", where the software realizes the virtual reality scene and the hardware device displays the three-dimensional interactive interface.
  • the first display manner described above may be a display manner of a default three-dimensional interactive interface in the target application.
  • the target applications described above include, but are not limited to, social applications and game applications.
  • the obtaining unit 93 is configured to obtain an operation instruction of the first account, where the first account is an account of the target application, and the operation instruction is used to indicate that the first operation is performed on the target object on the three-dimensional interaction interface.
  • The first account is an account identifying a virtual object in the virtual reality scene, and the actions of the virtual object in the virtual reality scene are directed by the real-world user of the first account; for example, the virtual object performs an operation as instructed by the user, or the virtual object follows the user's movements in reality.
  • the three-dimensional interactive interface includes one or more operation controls (such as operation buttons, slide bars, etc.), and the area in which each operation control is located can be understood as a target object.
  • the operation instruction is an instruction generated when the virtual object touches the three-dimensional interactive interface.
  • The first operation performed on the target object may be a click, a double click, a drag, or another operation that the target application can recognize and execute.
  • the first operation can be used to set the target application, i.e., the result of the first operation can be to set the target application (e.g., set application parameters).
  • the first operation may also be used to control the operation of the target application, that is, the result of the first operation may control the target application to enter the next interface, or control the virtual character in the game application to complete the target task, and the like. This embodiment is not specifically limited.
  • The second display unit 95 is configured to perform the first operation on the target object in response to the operation instruction and display the three-dimensional interactive interface according to the second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  • When the virtual object in the virtual reality scene touches the three-dimensional interactive interface, the interface is displayed in a manner different from when it is untouched, namely the second display manner. Displaying in the second display manner amounts to feedback on the first operation, that is, feedback on the user's operation: when the user observes the change in display manner, the user knows that the first operation has touched the three-dimensional interactive interface.
  • first display unit 91 in this embodiment may be configured to perform step S202 in the embodiment of the present application.
  • the obtaining unit 93 in this embodiment may be configured to perform step S204 in the embodiment of the present application.
  • the second display unit 95 in the embodiment may be configured to perform step S206 in the embodiment of the present application.
  • It should be noted here that the foregoing modules correspond to the method steps above and share their examples and application scenarios, but are not limited to the content disclosed in the foregoing embodiments. As part of the apparatus, the foregoing modules may run in the hardware environment shown in FIG. 1 and may be implemented by software or by hardware.
  • Through the above modules, the three-dimensional interactive interface is displayed in the virtual reality scene of the target application according to the first display manner, the interface being used to interact with the target application; the operation instruction of the first account is acquired, the operation instruction instructing a first operation on the target object on the three-dimensional interactive interface; and, in response to the operation instruction, the first operation is performed on the target object and the three-dimensional interactive interface is displayed according to the second display manner, which identifies the first operation by adopting a display manner different from the first display manner. Because a display manner different from the untouched state is used, the user's first operation is fed back to the user, which solves the technical problem that the related art cannot give feedback on a user's operation and achieves the technical effect of providing such feedback.
  • the second display unit is further configured to display a three-dimensional interactive interface formed with a three-dimensional texture at least in the first region according to the indication of the second display manner, wherein the first region is an area where the target object is located on the three-dimensional interactive interface.
  • Optionally, the second display unit is further configured to display, within a preset time period, the three-dimensional interactive interface formed with the three-dimensional texture, where the distance between the three-dimensional texture displayed at a first moment in the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, the second moment in the preset time period being later than the first moment.
  • the second display unit may include: a first display module configured to display a three-dimensional interactive interface formed with the first three-dimensional corrugation at a first moment, wherein the first three-dimensional corrugation is centered on the target object; and the second display module is configured To display a three-dimensional interactive interface formed with a second three-dimensional corrugation at a second time, wherein the second three-dimensional corrugation is a corrugation formed after the first three-dimensional corrugation is diffused.
  • Optionally, the first display module includes: an acquiring submodule configured to acquire a first data set and a second data set, where the first data set includes a plurality of first data, each first data indicating the position at which one vertex of the mesh of the mesh panel is located at the first moment; the mesh panel is used to display the three-dimensional interactive interface in the second area, the second area being the area in which the three-dimensional interactive interface is displayed according to the first display manner; and the second data set includes a plurality of second data, each second data indicating the position at which the normal of one vertex of the mesh of the mesh panel is located at the first moment; and a display submodule configured to render the mesh of the mesh panel according to the first data set and the second data set to display the three-dimensional interactive interface formed with the first three-dimensional corrugation, where the material of the mesh of the mesh panel is set to a liquid and the first three-dimensional texture is the texture produced by perturbing the liquid.
  • Optionally, the acquiring submodule is further configured to: obtain the operation strength of the first operation indicated in the operation instruction; acquire a position offset corresponding to the operation strength, where the position offset indicates the offset produced in the position of the target vertex under the influence of the operation strength, the target vertex being any vertex in the mesh of the mesh panel; and obtain, according to the position offset, the first data indicating a first position, where the first position is determined according to the position offset and a second position, the second position being the position of the target vertex before the offset.
  • Optionally, the display submodule is further configured to determine light and shadow information of the target mesh according to the first data and the second data of the vertices of the target mesh, where the target mesh is the mesh currently to be rendered in the mesh panel, and to render the material of the target mesh according to the light and shadow information.
  • The greatest advantage of the VR environment is that the user can perceive the changes and feel of 3D space, because the user's real 3D spatial position corresponds one-to-one to a position in the virtual world, so the user's real-world actions and in-game actions are not split but completely integrated. For touch feedback, this provides the setting and the possibility for the user's real touch feedback and the in-game touch feedback not to be separated from each other, with visual feedback used to simulate touch feedback and enhance the user experience.
  • However, current VR devices have no ability to simulate the sense of touch.
  • In view of this, the present application provides a display apparatus for an interactive interface in a VR environment that implements a visual feedback effect: when the player's finger clicks on a plane, the plane shows a dynamic rippling effect, and visual feedback is used to simulate and express tactile feedback. The advantage of this approach is that the player's experience in the virtual world is more natural, without the discomfort of the methods described above, and it does not affect the logic of the original virtual world. It is therefore particularly suitable for a 3D panel that displays UI content in the virtual world: for interaction with such a panel, it can visually reinforce the accuracy of the user's click and let the user know exactly where the click landed.
  • It should be noted here that the foregoing modules correspond to the method steps above and share their examples and application scenarios, but are not limited to the content disclosed in the foregoing embodiments. As part of the apparatus, the foregoing modules may run in the hardware environment shown in FIG. 1, which includes a network environment, and may be implemented by software or by hardware.
  • a server or a terminal for implementing the display method of the above interaction interface is further provided.
  • FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application.
  • The terminal may include one or more processors 1001 (only one is shown in FIG. 10), a memory 1003, and a transmission device 1005 (such as the transmission device in the above embodiment); as shown in FIG. 10, the terminal may further include an input/output device 1007.
  • The memory 1003 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the display method and apparatus of the interactive interface in the embodiments of the present application. The processor 1001 runs the software programs and modules stored in the memory 1003, thereby performing various functional applications and data processing, that is, implementing the display method of the interactive interface described above.
  • the memory 1003 may include a high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 1003 can also include memory remotely located relative to processor 1001, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the above-mentioned transmission device 1005 is used to receive or transmit data via a network, and can also be used for data transmission between a processor and a memory. Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 1005 includes a Network Interface Controller (NIC), which can be connected to other network devices and routers through a network cable to communicate with the Internet or a local area network.
  • In one example, the transmission device 1005 is a Radio Frequency (RF) module configured to communicate with the Internet wirelessly.
  • the memory 1003 is configured to store an application.
  • the processor 1001 can call the application stored in the memory 1003 through the transmission device 1005 to perform the following steps:
  • the operation instruction of the first account is obtained, where the first account is an account of the target application, and the operation instruction is used to indicate that the first operation is performed on the target object on the three-dimensional interaction interface;
  • the processor 1001 is further configured to perform the following steps:
  • The first data set includes a plurality of first data, each first data indicating the position at which one vertex of the mesh of the mesh panel is located at the first moment; the mesh panel is configured to display the three-dimensional interactive interface in the second area, the second area being the area in which the three-dimensional interactive interface is displayed according to the first display manner; and the second data set includes a plurality of second data, each second data indicating the position at which the normal of one vertex of the mesh of the mesh panel is located at the first moment;
  • the mesh of the mesh panel is rendered according to the first data set and the second data set to display the three-dimensional interactive interface formed with the first three-dimensional corrugation, where the material of the mesh of the mesh panel is set to a liquid and the first three-dimensional texture is the texture produced by perturbing the liquid.
  • With the above solution, the three-dimensional interactive interface is displayed in the virtual reality scene of the target application according to the first display manner, the three-dimensional interactive interface being used to set the target application; the operation instruction of the first account is acquired, the operation instruction instructing a first operation on the target object on the three-dimensional interactive interface, the first operation being used to set the target application; and, in response to the operation instruction, the first operation is performed on the target object and the three-dimensional interactive interface is displayed according to the second display manner, which identifies the first operation by adopting a display manner different from the first display manner. Because a display manner different from the untouched state is used, the user's first operation is fed back to the user, which solves the technical problem that the related art cannot give feedback on a user's operation and achieves the technical effect of providing such feedback.
  • Optionally, the terminal can be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Device, MID), a PAD, or another terminal device.
  • FIG. 10 does not limit the structure of the above electronic device.
  • the terminal may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 10, or have a different configuration than that shown in FIG.
  • A person of ordinary skill in the art can understand that all or part of the steps of the foregoing embodiments may be completed by a program instructing hardware related to a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
  • Embodiments of the present application also provide a storage medium.
  • Optionally, in this embodiment, the foregoing storage medium may be configured to store program code for executing the display method of the interactive interface.
  • the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in the foregoing embodiment.
  • the storage medium is arranged to store program code for performing the following steps:
  • S21 Display a three-dimensional interactive interface according to the first display manner in the virtual reality scenario of the target application, where the three-dimensional interactive interface is used to interact with the target application;
  • S22 Obtain an operation instruction of the first account, where the first account is an account of the target application, and the operation instruction is used to indicate that the first operation is performed on the target object on the three-dimensional interaction interface;
  • the storage medium is further arranged to store program code for performing the following steps:
  • The first data set includes a plurality of first data, each first data indicating the position at which one vertex of the mesh of the mesh panel is located at the first moment; the mesh panel is configured to display the three-dimensional interactive interface in the second area, the second area being the area in which the three-dimensional interactive interface is displayed according to the first display manner; and the second data set includes a plurality of second data, each second data indicating the position at which the normal of one vertex of the mesh of the mesh panel is located at the first moment;
  • the mesh of the mesh panel is rendered according to the first data set and the second data set to display the three-dimensional interactive interface formed with the first three-dimensional corrugation, where the material of the mesh of the mesh panel is set to a liquid and the first three-dimensional texture is the texture produced by perturbing the liquid.
  • Optionally, the foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic memory, and the like.
  • The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present application, in whole or in part, may be embodied in the form of a software product stored in the storage medium, including a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a division of logical functions.
  • in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the terminal displays the three-dimensional interactive interface according to the first display manner, where the three-dimensional interactive interface is used to interact with the target application; the terminal obtains an operation instruction of the first account, where the first account is an account of the target application, and the operation instruction is used to indicate that the first operation is performed on the target object on the three-dimensional interaction interface; in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to the second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner, so as to achieve feedback for the user's operation.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a display method and apparatus for an interactive interface, a storage medium, and an electronic apparatus. The method includes: displaying a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application; obtaining an operation instruction of a first account, the first account being an account of the target application, and the operation instruction being used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface; and, in response to the operation instruction, performing the first operation on the target object and displaying the three-dimensional interactive interface according to a second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner. The present application solves the technical problem in the related art that no feedback can be given for a user's operation.

Description

Display method and apparatus for interactive interface, storage medium, and electronic apparatus
This application claims priority to Chinese Patent Application No. 2017110009720, entitled "Display method and apparatus for interactive interface, storage medium, and electronic apparatus", filed with the Chinese Patent Office on October 24, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of the Internet, and in particular to a display method and apparatus for an interactive interface, a storage medium, and an electronic apparatus.
Background
Virtual reality (VR), also called virtual technology or virtual environment, uses a computer to simulate a three-dimensional virtual world, providing the user with simulated visual and other sensory input so that the user feels fully present and can observe things in the three-dimensional space in real time and without restriction. VR can be implemented by means of "software + hardware devices".
Common VR software includes Steam and Oculus. Steam is a digital distribution, digital rights management and social platform used for the distribution, sale and subsequent updating of digital software and games; it supports operating systems such as Windows, OS X and Linux, and it is currently the world's largest digital game platform for PC. Oculus VR is a virtual reality technology company.
Steam's hardware product is Steam VR, a fully featured 360-degree room-scale virtual reality experience. The development kit includes a head-mounted display, two single-hand controllers, and a positioning system that can track both the display and the controllers within the space; together with the other devices available on Steam, a high-end virtual reality experience can be achieved.
Oculus VR's hardware products include the Oculus Rift and Oculus Touch. The Oculus Rift is a realistic virtual reality head-mounted display that is already on sale. Oculus Touch is the motion-capture controller for the Oculus Rift and is used together with a spatial positioning system; it adopts a bracelet-like design that allows cameras to track the user's hands, its sensors can also track finger movements, and it offers the user a convenient way of gripping.
By using the above hardware products, a user can experience virtual reality scenes. However, during the experience of a virtual reality scene, when the user touches an object in the scene, the virtual reality scene gives no feedback for the touch operation, so the user does not know whether the object has been touched.
No effective solution has yet been proposed for the technical problem in the related art that no feedback can be given for a user's operation.
Summary
Embodiments of the present application provide a display method and apparatus for an interactive interface, a storage medium, and an electronic apparatus, so as to at least solve the technical problem in the related art that no feedback can be given for a user's operation.
According to one aspect of the embodiments of the present application, a display method for an interactive interface is provided. The method includes: a terminal displays a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application; the terminal obtains an operation instruction of a first account, the first account being an account of the target application, and the operation instruction being used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface; in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to a second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner.
According to another aspect of the embodiments of the present application, a display apparatus for an interactive interface is further provided, applied in a terminal. The apparatus includes: a first display unit, configured to display a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application; an obtaining unit, configured to obtain an operation instruction of a first account, where the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface; and a second display unit, configured to, in response to the operation instruction, perform the first operation on the target object and display the three-dimensional interactive interface according to a second display manner, where the second display manner identifies the first operation by adopting a display manner different from the first display manner.
In the embodiments of the present application, the terminal displays a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application; the terminal obtains an operation instruction of a first account, the operation instruction instructing that a first operation be performed on a target object on the three-dimensional interactive interface; in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to a second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner. By displaying the interface in a manner different from that used when it has not been touched, feedback is given for the user's first operation, which can solve the technical problem in the related art that no feedback can be given for a user's operation, thereby achieving the technical effect of providing feedback for the user's operation.
Brief Description of the Drawings
The drawings described here are intended to provide a further understanding of the present application and constitute part of the present application. The illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation on the present application. In the drawings:
FIG. 1 is a schematic diagram of the hardware environment of a display method for an interactive interface according to an embodiment of the present application;
FIG. 2 is a flowchart of an optional display method for an interactive interface according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an optional interactive interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram of optional panel parameters according to an embodiment of the present application;
FIG. 7 is a schematic diagram of optional material information according to an embodiment of the present application;
FIG. 8 is a flowchart of an optional display method for an interactive interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an optional display apparatus for an interactive interface according to an embodiment of the present application; and
FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product, or device.
First, some of the nouns or terms appearing in the description of the embodiments of the present application are explained as follows:
HUD: head-up display (hereinafter HUD), a flight-assistance instrument used on aircraft; it can also be used in other fields, such as games.
According to the embodiments of the present application, a method embodiment of a display method for an interactive interface is provided.
Optionally, in this embodiment, the above display method for an interactive interface may be applied in the hardware environment shown in FIG. 1, which consists of a server 102, a terminal 104, and VR glasses 106. As shown in FIG. 1, the server 102 is connected to the terminal 104 through a network, which includes but is not limited to a wide area network, a metropolitan area network, or a local area network, and the terminal 104 is not limited to a PC, a mobile phone, a tablet computer, or the like. The display method for an interactive interface of the embodiments of the present application may be executed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. When the terminal 104 executes the method, it may also be executed by a client installed on it, with the VR glasses 106 displaying the execution result of the method.
When the display method for an interactive interface of the embodiments of the present application is executed by the terminal alone, the program code corresponding to the method of the present application may simply be executed directly on the terminal.
FIG. 2 is a flowchart of an optional display method for an interactive interface according to an embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
Step S202: The terminal displays a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application.
The above virtual reality scene can be implemented by means of "software + hardware devices": the target application is the software used to implement the virtual reality scene, and the hardware devices are used to display the three-dimensional interactive interface. The above first display manner may be the default display manner of the three-dimensional interactive interface in the target application.
Optionally, the above target application includes but is not limited to social applications and game applications.
Step S204: The terminal obtains an operation instruction of a first account, the first account being an account of the target application, and the operation instruction being used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface.
The above first account is an account used to identify a virtual object in the virtual reality scene. The actions of this virtual object in the virtual reality scene are directed by the real-world user of the first account; for example, the virtual object performs the operations indicated by the user's instructions, or the virtual object follows the movements of the real-world user.
The three-dimensional interactive interface may include one or more operation controls (such as operation buttons, sliders, etc.), and the area in which each operation control is located can be understood as a target object. The operation instruction is an instruction generated when the virtual object touches the three-dimensional interactive interface. The first operation performed on the target object may be a click, a double-click, a drag, or any other operation that the target application can recognize and execute. The first operation may be used to configure the target application; that is, the result of the first operation may be a setting of the target application (for example, setting an application parameter). The first operation may also be used to control the running of the target application; that is, the result of the first operation may cause the target application to enter the next interface, or control a virtual character in a game application to complete a target task, etc. This is not specifically limited in this embodiment.
Step S206: In response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to a second display manner, the second display manner being used to identify the first operation by adopting a display manner different from the first display manner.
When the virtual object in the virtual reality scene touches the three-dimensional interactive interface, the three-dimensional interactive interface is displayed in a manner different from that used when it has not been touched, i.e., in the second display manner. Displaying in the second display manner amounts to feedback for the first operation, that is, feedback for the user's operation; when the user observes that the display manner has changed, the user knows that the first operation has touched the three-dimensional interactive interface.
Through the above steps S202 to S206, the terminal displays the three-dimensional interactive interface according to the first display manner in the virtual reality scene of the target application, the three-dimensional interactive interface being used to interact with the target application; the terminal obtains an operation instruction of the first account, the operation instruction instructing that a first operation be performed on a target object on the three-dimensional interactive interface; in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to the second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner. By displaying the interface in a manner different from that used when it has not been touched, feedback is given for the user's first operation, which can solve the technical problem in the related art that no feedback can be given for a user's operation, thereby achieving the technical effect of providing feedback for the user's operation.
For applications in non-VR environments (such as social applications and games), the applicant has recognized that real-world touch feedback and in-application touch feedback are disconnected:
(1) Take console and PC games, for example. These are non-touchscreen games in a 2D display environment. Whether they are gamepad-based or keyboard/mouse-based, no real-world touch mechanism exists, because in reality the player is holding the control device the whole time (i.e., the touch mechanism has been converted into operations on the controls) and is therefore always touching it. For touches in the virtual world, such as the player's character being hit by a bullet, game designers generally use visual and auditory means to let the player experience the sense of collision, usually supplemented by gamepad vibration to reinforce the user's perception. On the whole, however, real-world touch feedback and in-game touch feedback remain disconnected, because the player never actually performs the touching action.
(2) Another example is touchscreen games, typified by those on smartphones. The biggest change brought by touchscreen games is that the player really does perform a touch action, and because the 2D screen physically exists, the player receives real tactile feedback. This is a genuine means of tactile interaction and the greatest advantage of touchscreen games, which is why tap-based games feel very real on a touchscreen: the accuracy and naturalness of the operation are what the player wants. From the perspective of the virtual world versus the real world, however, real-world touch feedback and in-game touch feedback are still disconnected: the virtual world is a 3D world and the real world is also a 3D world, but a smartphone has only a 2D screen as a window, which, like a mirror, folds the player's experience; the player's real finger touches the screen, and a mapping relation is then needed before the virtual world is affected, which greatly harms the player's sense of immersion and presence.
Therefore, for touch feedback in a VR environment, applications in non-VR environments provide no usable implementation mechanism.
Optionally, by analyzing VR displays, the applicant has recognized that the greatest advantage brought by a VR environment is that the user can perceive the changes and sensations of 3D space, because the user's sense of real 3D spatial position corresponds one-to-one to positions in the virtual world; for the user, real-world actions and in-game actions are therefore not disconnected but fully merged. For touch feedback, this provides the setting and the possibility for the user's real-world touch feedback and in-game touch feedback not to be disconnected from each other, and the user experience can be improved by simulating touch feedback through reinforced visual feedback. However, current VR devices do not have the capability of tactile simulation.
In the VR application of the present application, the following implementations of touch feedback are provided:
(1) Feedback through vibration: whenever a collision is triggered, a vibration is triggered to prompt the user, and the user's operation is not affected in any way;
(2) Feedback by triggering a logic change: for example, the hand in the virtual world should always follow the position of the real hand, but if a virtual table in the virtual world blocks the player's hand while no such table exists in the real world, the player's real hand can reach the position of the table in the virtual world; in this case the virtual hand stops at the edge of the table instead of following the real hand and penetrating into the table. This is also a common technique (with the first method, the hand would penetrate into the table and a vibration would be triggered);
(3) Changing the position of the touched object: for example, the hand in the virtual world should always follow the position of the real hand, but if a virtual table in the virtual world blocks the player's hand while no such table exists in the real world, the player's hand can reach the position of the table in the virtual world; if the virtual hand keeps pushing the table, the table is knocked out of its original position by the hand, appearing to move in the virtual world.
Among these three methods, the third feels the most real but affects the changes of the game scene, so it is not suitable for components that should not affect the game logic, such as a 3D panel used to display UI. The second method can also be regarded as a visual-reinforcement technique, but the virtual hand the player sees becomes inconsistent with the position of the real hand the player feels, which causes discomfort in the player's perception. The first method is rather crude, and its visual feedback is not very comfortable. All three kinds of feedback therefore affect the user experience to some extent.
To improve the user experience, the present application further provides a feedback method in a VR environment, which is an implementation of a visual feedback effect based on a finger click operation. A visual feedback effect is implemented so that when the player's finger clicks on a plane, the plane shows a rippling dynamic effect; tactile feedback is simulated and expressed through visual feedback. The advantage of this approach is that the player's experience in the virtual world is relatively real and natural, without the sense of incongruity of the methods described above, and it does not affect the original logic of the virtual world. This approach is therefore particularly suitable for 3D panels that display UI content in the virtual world; for interaction with such panels, it can reinforce the visual representation of the precision of the user's click and let the user know the specific position of the click.
The embodiments of the present application are detailed below in conjunction with the steps shown in FIG. 2:
In the technical solution provided in step S202, the terminal displays the three-dimensional interactive interface according to the first display manner in the virtual reality scene of the target application.
FIG. 3 shows a first display manner of a three-dimensional interactive interface, i.e., the default display manner. In the three-dimensional interactive interface, the user or the virtual object can configure the target application, for example set one of its functions, and a function can be represented in the form of an icon (i.e., a target object).
In the technical solution provided in step S204, the terminal obtaining the operation instruction of the first account includes, but is not limited to, the following implementations:
(1) An operation instruction generated from the user's operation behavior
When the user performs an operation, the user's position data is collected in real time by a positioning device, and the user's operation action is mapped onto the virtual object according to the collected position data; if the virtual object's operation touches the three-dimensional interactive interface, generation of the above operation instruction is triggered;
(2) An operation instruction triggered by an input operation of an input device
The input device may be part of the hardware device, or a device connected to the hardware device. The user can control the virtual object in the virtual reality scene through the input device; when the virtual object is controlled through the input device to perform settings on the three-dimensional interactive panel, the above operation instruction is generated.
In the technical solution provided in step S206, in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to the second display manner.
The second display manner identifies the first operation by adopting a display manner different from the first display manner, embodied in, but not limited to, the following forms:
(1) The second display manner uses colors different from those of the first display manner, such as the background color, the font color, or the overall color of the three-dimensional interactive interface;
(2) The second display manner uses a background picture different from that of the first display manner;
(3) The second display manner displays the content in the three-dimensional interactive interface in a manner different from that of the first display manner.
The first two are relatively easy to implement; the third is detailed below:
When displaying the three-dimensional interactive interface according to the second display manner, the terminal determines the texture indicated by the second display manner, forms a three-dimensional texture at least in a first area of the three-dimensional interactive interface, and displays the three-dimensional interactive interface with the three-dimensional texture formed at least in the first area.
The above first area is the area of the three-dimensional interactive interface in which the target object is located, i.e., the position clicked by the virtual object.
Optionally, the terminal displaying, as indicated by the second display manner, the three-dimensional interactive interface with the preset three-dimensional texture formed at least in the first area may be implemented as follows: the three-dimensional interactive interface formed with the three-dimensional texture is displayed within a preset time period, where the distance between the three-dimensional texture displayed at a first moment within the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, the second moment within the preset time period being later than the first moment.
It should be noted that when the three-dimensional interactive interface formed with the three-dimensional texture is displayed within the preset time period, it suffices for the three-dimensional texture to be displayed near the target object; the display effect is better if the three-dimensional texture is displayed centered on the target object.
The above three-dimensional texture includes a three-dimensional ripple. Displaying the three-dimensional interactive interface formed with the three-dimensional texture within the preset time period may be implemented through the following steps:
Step S2062: The terminal displays, at the first moment, the three-dimensional interactive interface formed with a first three-dimensional ripple, the first three-dimensional ripple being centered on the target object. Step S2062 may be implemented through the following sub-steps (step one and step two):
Step one: The terminal obtains a first data set and a second data set, where the first data set includes a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment; the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area being the area in which the three-dimensional interactive interface displayed according to the first display manner is located; the second data set includes a plurality of pieces of second data, each piece of second data indicating the position of the normal of a vertex of the mesh of the mesh panel at the first moment.
(1) Obtain the operation force of the first operation indicated in the operation instruction
For each operation force, the initial offset produced in the position of a target vertex (denoted the first vertex) under the influence of that operation force can be configured in advance. As time passes, the range influenced by the operation force expands to other areas (i.e., to the vertices of the diffused ripple, denoted second vertices; the radius of the ripple on which a second vertex lies is greater than the radius of the ripple on which the first vertex lies), but the offset produced there is smaller than the above initial offset. At the same time, at the position where the ripple was originally produced (i.e., at the first vertex), the influence of the operation force decreases, i.e., the offset produced in the position of the first vertex becomes smaller than the initial offset; this can be configured specifically. One optional configuration is as follows:
Offset y = y0 − a·t, where y0 is the initial offset, t is the time elapsed since the click on the three-dimensional interactive interface, and a is a constant representing the amount of offset attenuated per second.
Optionally, the offset may also have a non-linear relation to time, such as a quadratic-curve relation or a logarithmic-curve relation.
When obtaining the position offset corresponding to the operation force, the position offset of each vertex can be obtained in the above manner.
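As an illustrative sketch only (not part of the original disclosure), the linear decay described above can be expressed in C++ as follows, assuming a per-vertex initial offset y0 and a per-second decay constant a as pre-configured parameters:

```cpp
#include <algorithm>

// Linear decay of the ripple offset described above: y = y0 - a*t,
// clamped at zero once the ripple has fully died out.
// y0 (initial offset) and a (decay per second) are assumed,
// pre-configured parameters tied to the operation force.
float DecayedOffset(float y0, float a, float t) {
    return std::max(0.0f, y0 - a * t);
}

// A non-linear alternative (quadratic falloff), since the text notes the
// relation to time may also follow a quadratic or logarithmic curve.
float DecayedOffsetQuadratic(float y0, float a, float t) {
    return std::max(0.0f, y0 - a * t * t);
}
```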
(2) Obtain, according to the position offset, the first data indicating a first position
The first position is determined from the above position offset and a second position, the second position being the position of the target vertex before the offset occurred.
The above first data is the position data representing the position of the target vertex after the offset.
Optionally, to make the resulting curve more uniform, the following data optimization processing may be performed.
(3) Data optimization processing
Average the data of each vertex with the first data of its adjacent vertices; for example, average a target vertex (denoted the third vertex) with the adjacent vertex closer to the target object, and average the third vertex with the adjacent vertex farther from the target object.
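Purely as a sketch of the neighbor averaging just described (the data layout is an assumption; the original text does not prescribe one), each vertex value can be averaged with its adjacent values over a row-major W x H grid:

```cpp
#include <vector>

// Average each vertex's displacement with that of its direct neighbours,
// which evens out the ripple curve as described above. Boundary vertices
// simply average over the neighbours they have.
std::vector<float> SmoothDisplacement(const std::vector<float>& src,
                                      int W, int H) {
    std::vector<float> dst(src.size());
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float sum = src[y * W + x];
            int count = 1;
            if (x > 0)     { sum += src[y * W + (x - 1)]; ++count; }
            if (x < W - 1) { sum += src[y * W + (x + 1)]; ++count; }
            if (y > 0)     { sum += src[(y - 1) * W + x]; ++count; }
            if (y < H - 1) { sum += src[(y + 1) * W + x]; ++count; }
            dst[y * W + x] = sum / count;
        }
    }
    return dst;
}
```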
(4) Obtain the second data in the second data set
The above second data represents the position of the normal of a vertex of the mesh at the first moment; this position may be the vector of the normal.
A vertex generally lies at the junction of four grid cells, so the vertex has four normals, distributed as the normal corresponding to each grid cell (each cell being effectively a plane). A vertex thus corresponds to four vectors, and in the present application the second data may be a vector bound to these four vectors, such as their average; the second data may also be a vector with some other binding relation to these four vectors.
After the above second data is obtained, "data optimization processing" may be performed; the specific processing is similar to the optimization of the first data described above and is not repeated here.
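The following is a minimal sketch of the vertex-normal derivation mentioned above; taking the normalized average of the four face normals is only one of the binding relations the text allows:

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Combine the four face normals that meet at a vertex into a single
// per-vertex normal (the "second data"), here by normalized averaging.
Vec3 VertexNormal(const std::array<Vec3, 4>& faceNormals) {
    Vec3 n{0.0f, 0.0f, 0.0f};
    for (const Vec3& f : faceNormals) { n.x += f.x; n.y += f.y; n.z += f.z; }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```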
Step two: Render the mesh of the mesh panel according to the first data set and the second data set, so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, where the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
Optionally, rendering the mesh of the mesh panel according to the first data set and the second data set includes: determining the light-and-shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, the target mesh being the mesh currently to be rendered in the mesh panel; and rendering the material of the target mesh according to the light-and-shadow information.
The above light-and-shadow information includes one or more of the incidence direction, reflection angle, refraction angle, and other information of the light.
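As an illustrative aside (not taken from the original disclosure), one ingredient of such light-and-shadow information, the reflection direction, follows directly from the incidence direction and the vertex normal:

```cpp
struct Vec3 { float x, y, z; };

// Reflection direction for a unit incident direction I and unit normal N:
// R = I - 2 (N . I) N. A refraction direction would follow analogously
// from Snell's law.
Vec3 Reflect(const Vec3& I, const Vec3& N) {
    float d = N.x * I.x + N.y * I.y + N.z * I.z;
    return Vec3{I.x - 2.0f * d * N.x,
                I.y - 2.0f * d * N.y,
                I.z - 2.0f * d * N.z};
}
```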
Step S2064: The terminal displays, at the second moment, the three-dimensional interactive interface formed with a second three-dimensional ripple, the second three-dimensional ripple being the ripple formed after the first three-dimensional ripple has diffused.
The implementation of step S2064 is similar to that of step S2062; the difference is that when obtaining the first data and the second data, the second moment is taken as the reference, and the attenuation of the position offset must be taken into account.
As an optional embodiment, the embodiments of the present application are detailed below from the product side.
For the interactive interface shown in FIG. 3 (i.e., a 3D mesh on which a semi-transparent UI is displayed; the terminal includes this 3D mesh) and the operation controls on it (i.e., target objects), the user can click an operation control (as shown in FIG. 4), using a finger to click the UI interactive interface. When the user's finger touches the panel, the panel produces a perturbation effect (i.e., a ripple) at the touched place, similar to a finger touching a water surface: there is a vibrating ripple and a diffusion effect, as shown in FIG. 5. The parameters governing how pronounced this effect is are all adjustable, and the effect is produced whenever the mesh is clicked; at the same time, if the clicked position is an interactive position on the UI panel, such as a button, the button's visual response also occurs simultaneously.
The present application further provides an optional embodiment; the implementation of the above product is detailed below from the technical side.
(1) Structural logic of the product
The logical structure of the implemented component contains a mesh panel (a 3D mesh, included in the terminal) to which a UI Widget can be attached. FIG. 6 shows the parameters of this panel. Note that the panel used must have a sufficiently large number of vertices; the panel used here contains 1000*512 mesh vertices and their material, rather than a simple quad with only four vertices. The reason is that the later ripple effect is produced by real changes in vertex positions, so enough vertices are needed to achieve it. The second piece of information shown in FIG. 6 is that a material named "Water Material Widget" is used; the concrete implementation of this material is shown in FIG. 7, and the final effect of this 3D panel seen in the game is entirely achieved by this material. The implementation mainly involves three aspects: the first is SlateUI, i.e., applying the target UI Widget panel as a texture to this mesh; the second is making use of the pre-computed normal map (corresponding to the second data set); and the third is making use of the pre-computed position-perturbation map of the panel (corresponding to the first data set).
Slate is a cross-platform UI framework that can be used for application UIs (such as the UE4 Editor), tool UIs, and in-game UIs. The SlateUI above is the in-game UI, equivalent to the game's HUD.
The UI Widget above is a basic UI component; it is simply a rectangle that can be positioned anywhere on the screen as needed. This widget has a region that is invisible at runtime, and it is an ideal container for holding other components.
The Mesh above is a grid that can produce impressive effects such as terrain and water surfaces; creating a mesh mainly involves three parameters: vertices, triangles, and section counts.
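As a rough sketch of such a dense grid mesh (the function and data layout are assumptions for illustration; the panel in the text is on the order of 1000*512 vertices):

```cpp
#include <cstdint>
#include <vector>

struct MeshData {
    std::vector<float>    positions; // x, y, z per vertex
    std::vector<uint32_t> indices;   // 3 indices per triangle
};

// Build a flat cols x rows grid of vertices with two triangles per cell.
// The z coordinate starts at 0 and is later displaced by the ripple.
MeshData BuildGridMesh(int cols, int rows, float cellSize) {
    MeshData m;
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x) {
            m.positions.push_back(x * cellSize);
            m.positions.push_back(y * cellSize);
            m.positions.push_back(0.0f);
        }
    for (int y = 0; y + 1 < rows; ++y)
        for (int x = 0; x + 1 < cols; ++x) {
            uint32_t i = static_cast<uint32_t>(y * cols + x);
            uint32_t c = static_cast<uint32_t>(cols);
            m.indices.insert(m.indices.end(), {i, i + 1, i + c});         // triangle 1
            m.indices.insert(m.indices.end(), {i + 1, i + c + 1, i + c}); // triangle 2
        }
    return m;
}
```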
(2) Operational logic of the product:
As shown in FIG. 8:
Step S801: When a finger touches the panel or a finger leaves the panel, obtain information about the clicked or touched position, such as its coordinates.
Step S802: Set the strength of the click; the initial strength can be determined from the speed of the finger's movement per unit time.
Step S803: Draw the clicked point (i.e., Draw Material to Render Target A).
Step S804: Draw the animation of each frame; the animation drawing of each frame can be implemented as follows.
The display of the final 3D mesh panel mentioned above is changed every frame. Starting from the node "first frame": in the first frame of the game, the first thing to do is to obtain the current map describing the vertex-displacement information of the mesh panel (the first data) and then modify this map to smooth its original data, so as to achieve the effect of spreading slowly outward like a water wave until completely calm. The specific approach is that every position in the map takes the average of the data around it, and the result is then rendered to a map (i.e., the image is updated); this is what the "Draw Material to Render Target A" step does. In the first frame of the game, another thing to do is to "update the normals" (i.e., Draw Material to Render Target AN). This step is similar to the previous one: obtain the current map describing the vertex-normal information of the mesh panel, and likewise compute a new normal map using the same neighbor-averaging algorithm. One more thing to do is to obtain the current appearance of the UI Widget (i.e., update the UI panel) and likewise draw the result onto a map prepared in advance specifically for storing the UI Widget's appearance; this is what the "UI Widget" step does. At this point, all the data needed by the material used by the finally displayed 3D mesh has been prepared.
Note in the above steps that, for each kind of map, more than one copy must be prepared (i.e., the first data set and the second data set are each copied into at least two copies), at least two, in order to prevent read/write access conflicts and to avoid waiting during rendering. For example, in frame N the data of map A is used, and the result obtained after modification and rendering is stored in map B; frame N+1 then uses the data of map B, and the result obtained after modification is stored in map A. In the steps described below, the result of map A is used in frame N and the result of map B in frame N+1, thereby avoiding data access conflicts across multiple render passes.
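A minimal sketch of this double buffering, with plain arrays standing in for the render-target textures (the class and its names are illustrative, not engine API):

```cpp
#include <utility>
#include <vector>

// Frame N reads buffer A and writes buffer B; the buffers are then swapped,
// so a map is never read and written within the same pass.
class PingPongBuffer {
public:
    PingPongBuffer(int w, int h) : W(w), H(h), A(w * h, 0.0f), B(w * h, 0.0f) {}

    // SmoothFn(src, dst, W, H) stands in for the neighbour-averaging pass.
    template <typename SmoothFn>
    void Step(SmoothFn Smooth) {
        Smooth(A, B, W, H); // read A, write B
        std::swap(A, B);    // next frame reads what was just written
    }

    std::vector<float>& Current() { return A; }

private:
    int W, H;
    std::vector<float> A, B;
};
```

One Step call per frame then reproduces the A-in-frame-N, B-in-frame-N+1 alternation described above.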
The above completes the preparation of the data needed by the "Water Material_Widget" material, so the following three steps use the data prepared above on the final 3D mesh panel to obtain the desired final effect. The "update mesh vertex positions" step obtains the map containing the computed vertex-displacement data, then for each vertex position of the 3D panel obtains the corresponding data on the map, and moves the vertex of the 3D panel in world space according to the data; this is why, after the panel is clicked, the panel can be seen changing position like a water surface. Because the panel is transparent, the panel's normals must be updated in real time for a correct display effect; the "update mesh material" step obtains the normal map prepared above and adjusts the normal of each vertex of the 3D mesh panel according to the positional correspondence. The "update mesh texture" step obtains the map storing the current appearance of the UI Widget and applies it directly to the mesh panel for display. In this way, a UI mesh panel that changes in response to finger-click interaction can be seen in the game.
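A sketch of the "update mesh vertex positions" step, under the assumption that the panel's facing axis is its local z axis and that one displacement value is stored per vertex:

```cpp
#include <cstddef>
#include <vector>

// Move each panel vertex along z by the displacement sampled for it,
// producing the water-like position change described above.
void ApplyDisplacement(std::vector<float>& positions,  // x, y, z per vertex
                       const std::vector<float>& disp) // one value per vertex
{
    for (std::size_t i = 0; i < disp.size(); ++i)
        positions[i * 3 + 2] = disp[i];
}
```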
The above describes what is done every frame: making the whole panel smoother and smoother until it is completely calm, like a still water surface. When a finger touches the panel, or a finger leaves the panel, it is like throwing a stone into water and causing the surface to ripple; this is where a response is produced, implemented as shown in the flowchart of FIG. 8 above. When a "finger touches the panel" or "finger leaves the panel" event occurs, the first things to confirm are the position and strength of the finger click. These two parameters are mapped to the corresponding position of the map storing the vertex displacements, and then at that corresponding position on the map a large circle is drawn according to the strength, indicating that the vertices there will undergo a large offset, i.e., a rippling effect is produced.
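To round off the event side, here is a hedged sketch of drawing that "large circle" into the displacement map (the strength-to-radius mapping is an assumption; the text leaves it configurable):

```cpp
#include <vector>

// Stamp a filled circle of displacement into the map at the touch position.
// cx, cy: touch position in map coordinates; strength: click strength.
void StampImpulse(std::vector<float>& disp, int W, int H,
                  int cx, int cy, float strength) {
    int radius = static_cast<int>(strength * 10.0f); // assumed mapping
    for (int y = cy - radius; y <= cy + radius; ++y) {
        if (y < 0 || y >= H) continue;
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (x < 0 || x >= W) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                disp[y * W + x] = strength; // large offset -> visible ripple
        }
    }
}
```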
What the present application mainly describes is an implementation of a visual feedback effect based on a finger click operation in a VR environment. The virtual reality environment provides a setting in which experiences impossible in the real world can be realized, in particular the effective improvement in visual and auditory immersion that it brings. However, current VR devices also have a fatal shortcoming: the means of human-computer input and output are very limited; on the output side in particular, apart from vision and hearing there is only vibration. The present application implements a visual feedback effect: when the player's finger clicks on a plane, the plane shows a rippling dynamic effect, and tactile feedback is simulated and expressed through visual feedback. By reinforcing visual feedback in this way, the user can be drawn more fully into the virtual world.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as combinations of a series of actions, but those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
From the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
According to the embodiments of the present application, a display apparatus for an interactive interface for implementing the above display method for an interactive interface is further provided; the apparatus is applied in a terminal. FIG. 9 is a schematic diagram of an optional display apparatus for an interactive interface according to an embodiment of the present application; as shown in FIG. 9, the apparatus may include: a first display unit 91, an obtaining unit 93, and a second display unit 95.
The first display unit 91 is configured to display a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application.
The above virtual reality scene can be implemented by means of "software + hardware devices": the target application is the software used to implement the virtual reality scene, and the hardware devices are used to display the three-dimensional interactive interface. The above first display manner may be the default display manner of the three-dimensional interactive interface in the target application.
Optionally, the above target application includes but is not limited to social applications and game applications.
The obtaining unit 93 is configured to obtain an operation instruction of a first account, where the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface.
The above first account is an account used to identify a virtual object in the virtual reality scene. The actions of this virtual object in the virtual reality scene are directed by the real-world user of the first account; for example, the virtual object performs the operations indicated by the user's instructions, or the virtual object follows the movements of the real-world user.
The three-dimensional interactive interface includes one or more operation controls (such as operation buttons, sliders, etc.), and the area in which each operation control is located can be understood as a target object. The operation instruction is an instruction generated when the virtual object touches the three-dimensional interactive interface. The first operation performed on the target object may be a click, a double-click, a drag, or any other operation that the target application can recognize and execute. The first operation may be used to configure the target application; that is, the result of the first operation may be a setting of the target application (for example, setting an application parameter). The first operation may also be used to control the running of the target application; that is, the result of the first operation may cause the target application to enter the next interface, or control a virtual character in a game application to complete a target task, etc. This is not specifically limited in this embodiment.
The second display unit 95 is configured to, in response to the operation instruction, perform the first operation on the target object and display the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
When the virtual object in the virtual reality scene touches the three-dimensional interactive interface, the three-dimensional interactive interface is displayed in a manner different from that used when it has not been touched, i.e., in the second display manner. Displaying in the second display manner amounts to feedback for the first operation, that is, feedback for the user's operation; when the user observes that the display manner has changed, the user knows that the first operation has touched the three-dimensional interactive interface.
It should be noted that the first display unit 91 in this embodiment may be configured to perform step S202 in the embodiments of the present application, the obtaining unit 93 in this embodiment may be configured to perform step S204 in the embodiments of the present application, and the second display unit 95 in this embodiment may be configured to perform step S206 in the embodiments of the present application.
It should be noted here that the examples and application scenarios implemented by the above modules and the corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that the above modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1 and may be implemented in software or in hardware.
Through the above modules, a three-dimensional interactive interface is displayed according to a first display manner in the virtual reality scene of a target application, the three-dimensional interactive interface being used to interact with the target application; an operation instruction of a first account is obtained, the operation instruction instructing that a first operation be performed on a target object on the three-dimensional interactive interface; in response to the operation instruction, the first operation is performed on the target object, and the three-dimensional interactive interface is displayed according to a second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner. By displaying the interface in a manner different from that used when it has not been touched, feedback is given for the user's first operation, which can solve the technical problem in the related art that no feedback can be given for a user's operation, thereby achieving the technical effect of providing feedback for the user's operation.
The above second display unit is further configured to display, as indicated by the second display manner, the three-dimensional interactive interface with a three-dimensional texture formed at least in a first area, where the first area is the area of the three-dimensional interactive interface in which the target object is located.
Optionally, the second display unit is further configured to display the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, where the distance between the three-dimensional texture displayed at a first moment within the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, the second moment within the preset time period being later than the first moment.
The above second display unit may include: a first display module, configured to display, at the first moment, the three-dimensional interactive interface formed with a first three-dimensional ripple, where the first three-dimensional ripple is centered on the target object; and a second display module, configured to display, at the second moment, the three-dimensional interactive interface formed with a second three-dimensional ripple, where the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple has diffused.
Optionally, the above first display module includes: an obtaining sub-module, configured to obtain a first data set and a second data set, where the first data set includes a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display manner is located, the second data set includes a plurality of pieces of second data, and each piece of second data indicates the position of the normal of a vertex of the mesh of the mesh panel at the first moment; and a display sub-module, configured to render the mesh of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, where the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
The above obtaining sub-module is further configured to: obtain the operation force of the first operation indicated in the operation instruction; obtain a position offset corresponding to the operation force, where the position offset is used to indicate the offset produced in the position of a target vertex under the influence of the operation force, the target vertex being any vertex of the mesh of the mesh panel; and obtain, according to the position offset, the first data indicating a first position, where the first position is determined from the position offset and a second position, the second position being the position of the target vertex before the offset occurred.
The above display sub-module is further configured to: determine the light-and-shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, where the target mesh is the mesh currently to be rendered in the mesh panel; and render the material of the target mesh according to the light-and-shadow information.
The greatest advantage brought by a VR environment is that the user can perceive the changes and sensations of 3D space, because the user's sense of real 3D spatial position corresponds one-to-one to positions in the virtual world; for the user, real-world actions and in-game actions are therefore not disconnected but fully merged. For touch feedback, this provides the setting and the possibility for the user's real-world touch feedback and in-game touch feedback not to be disconnected from each other, and the user experience can be improved by simulating touch feedback through reinforced visual feedback. However, current VR devices do not have the capability of tactile simulation.
The present application provides a display apparatus for an interactive interface in a VR environment, implementing a visual feedback effect: when the player's finger clicks on a plane, the plane shows a rippling dynamic effect, and tactile feedback is simulated and expressed through visual feedback. The advantage of this approach is that the player's experience in the virtual world is relatively real and natural, without the sense of incongruity of the methods described earlier, and it does not affect the original logic of the virtual world. This approach is therefore particularly suitable for 3D panels that display UI content in the virtual world; for interaction with such panels, it can reinforce the visual representation of the precision of the user's click and let the user know the specific position of the click.
It should be noted here that the examples and application scenarios implemented by the above modules and the corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that the above modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1 (which includes a network environment) and may be implemented in software or in hardware.
According to the embodiments of the present application, a server or terminal for implementing the above display method for an interactive interface is further provided.
FIG. 10 is a structural block diagram of a terminal according to an embodiment of the present application. As shown in FIG. 10, the terminal may include: one or more processors 1001 (only one is shown in FIG. 10), a memory 1003, and a transmission apparatus 1005 (such as the sending apparatus in the above embodiments); as shown in FIG. 10, the terminal may further include an input/output device 1007.
The memory 1003 may be used to store software programs and modules, such as the program instructions/modules corresponding to the display method and apparatus for an interactive interface in the embodiments of the present application. By running the software programs and modules stored in the memory 1003, the processor 1001 executes various functional applications and data processing, i.e., implements the above display method for an interactive interface. The memory 1003 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage apparatuses, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1003 may further include memory disposed remotely from the processor 1001, and such remote memory may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above transmission apparatus 1005 is used to receive or send data via a network and may also be used for data transmission between the processor and the memory. Specific examples of the above network may include wired networks and wireless networks. In one example, the transmission apparatus 1005 includes a network interface controller (NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network. In one example, the transmission apparatus 1005 is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
Optionally, the memory 1003 is used to store an application program.
The processor 1001 may call, through the transmission apparatus 1005, the application program stored in the memory 1003 to perform the following steps:
S1: Display a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application;
S2: Obtain an operation instruction of a first account, where the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface;
S3: In response to the operation instruction, perform the first operation on the target object and display the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
The processor 1001 is further configured to perform the following steps:
S1: Obtain a first data set and a second data set, where the first data set includes a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display manner is located, the second data set includes a plurality of pieces of second data, and each piece of second data indicates the position of the normal of a vertex of the mesh of the mesh panel at the first moment;
S2: Render the mesh of the mesh panel according to the first data set and the second data set, so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, where the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
With the embodiments of the present application, a three-dimensional interactive interface is displayed according to a first display manner in the virtual reality scene of a target application, the three-dimensional interactive interface being used to configure the target application; an operation instruction of a first account is obtained, the operation instruction instructing that a first operation be performed on a target object on the three-dimensional interactive interface, the first operation being used to configure the target application; in response to the operation instruction, the first operation is performed on the target object, and the three-dimensional interactive interface is displayed according to a second display manner, the second display manner identifying the first operation by adopting a display manner different from the first display manner. By displaying the interface in a manner different from that used when it has not been touched, feedback is given for the user's first operation, which can solve the technical problem in the related art that no feedback can be given for a user's operation, thereby achieving the technical effect of providing feedback for the user's operation.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, which are not repeated here.
A person of ordinary skill in the art can understand that the structure shown in FIG. 10 is only illustrative; the terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device. FIG. 10 does not limit the structure of the above electronic apparatus. For example, the terminal may further include more or fewer components than shown in FIG. 10 (such as a network interface or a display apparatus), or have a configuration different from that shown in FIG. 10.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing hardware related to a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application further provide a storage medium. Optionally, in this embodiment, the above storage medium may be used to store program code for executing the display method for an interactive interface.
Optionally, in this embodiment, the above storage medium may be located on at least one of a plurality of network devices in the network shown in the above embodiments.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S21: Display a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application;
S22: Obtain an operation instruction of a first account, where the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface;
S23: In response to the operation instruction, perform the first operation on the target object and display the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
Optionally, the storage medium is further configured to store program code for performing the following steps:
S31: Obtain a first data set and a second data set, where the first data set includes a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display manner is located, the second data set includes a plurality of pieces of second data, and each piece of second data indicates the position of the normal of a vertex of the mesh of the mesh panel at the first moment;
S32: Render the mesh of the mesh panel according to the first data set and the second data set, so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, where the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, which are not repeated here.
Optionally, in this embodiment, the above storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or various other media capable of storing program code.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only optional implementations of the present application. It should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.
Industrial Applicability
In this embodiment, the terminal displays a three-dimensional interactive interface according to a first display manner in the virtual reality scene of a target application, where the three-dimensional interactive interface is used to interact with the target application; the terminal obtains an operation instruction of a first account, where the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface; in response to the operation instruction, the terminal performs the first operation on the target object and displays the three-dimensional interactive interface according to a second display manner, where the second display manner is used to identify the first operation by adopting a display manner different from the first display manner, so as to achieve the purpose of providing feedback for the user's operation.

Claims (14)

  1. A display method for an interactive interface, comprising:
    a terminal displaying a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, wherein the three-dimensional interactive interface is used to interact with the target application;
    the terminal obtaining an operation instruction of a first account, wherein the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface;
    the terminal, in response to the operation instruction, performing the first operation on the target object and displaying the three-dimensional interactive interface according to a second display manner, wherein the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  2. The method according to claim 1, wherein the terminal displaying the three-dimensional interactive interface according to the second display manner comprises:
    the terminal displaying, as indicated by the second display manner, the three-dimensional interactive interface with a three-dimensional texture formed at least in a first area, wherein the first area is the area of the three-dimensional interactive interface in which the target object is located, and the three-dimensional texture is used to identify the first operation.
  3. The method according to claim 2, wherein displaying, as indicated by the second display manner, the three-dimensional interactive interface with the preset three-dimensional texture formed at least in the first area comprises:
    the terminal displaying the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, wherein the distance between the three-dimensional texture displayed at a first moment within the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, and the second moment within the preset time period is later than the first moment.
  4. The method according to claim 3, wherein the three-dimensional texture comprises a three-dimensional ripple, and displaying the three-dimensional interactive interface formed with the three-dimensional texture within the preset time period comprises:
    the terminal displaying, at the first moment, the three-dimensional interactive interface formed with a first three-dimensional ripple, wherein the first three-dimensional ripple is centered on the target object;
    the terminal displaying, at the second moment, the three-dimensional interactive interface formed with a second three-dimensional ripple, wherein the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple has diffused.
  5. The method according to claim 4, wherein the terminal displaying, at the first moment, the three-dimensional interactive interface formed with the first three-dimensional ripple comprises:
    the terminal obtaining a first data set and a second data set, wherein the first data set comprises a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display manner is located, the second data set comprises a plurality of pieces of second data, and each piece of second data indicates the position of the normal of a vertex of the mesh of the mesh panel at the first moment;
    the terminal rendering the mesh of the mesh panel according to the first data set and the second data set, so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
  6. The method according to claim 5, wherein the terminal rendering the mesh of the mesh panel according to the first data set and the second data set comprises:
    the terminal determining light-and-shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, wherein the target mesh is the mesh currently to be rendered in the mesh panel;
    the terminal rendering the material of the target mesh according to the light-and-shadow information.
  7. The method according to claim 5, wherein the terminal obtaining the first data set comprises obtaining the first data of each vertex of the mesh of the mesh panel in the following manner:
    the terminal obtaining the operation force of the first operation indicated in the operation instruction;
    the terminal obtaining a position offset corresponding to the operation force, wherein the position offset is used to indicate the offset produced in the position of a target vertex under the influence of the operation force, and the target vertex is any vertex of the mesh of the mesh panel;
    the terminal obtaining, according to the position offset, the first data indicating a first position, wherein the first position is determined from the position offset and a second position, and the second position is the position of the target vertex before the offset occurred.
  8. A display apparatus for an interactive interface, applied in a terminal, comprising:
    a first display unit, configured to display a three-dimensional interactive interface according to a first display manner in a virtual reality scene of a target application, wherein the three-dimensional interactive interface is used to interact with the target application;
    an obtaining unit, configured to obtain an operation instruction of a first account, wherein the first account is an account of the target application, and the operation instruction is used to instruct that a first operation be performed on a target object on the three-dimensional interactive interface;
    a second display unit, configured to, in response to the operation instruction, perform the first operation on the target object and display the three-dimensional interactive interface according to a second display manner, wherein the second display manner is used to identify the first operation by adopting a display manner different from the first display manner.
  9. The apparatus according to claim 8, wherein the second display unit is further configured to display, as indicated by the second display manner, the three-dimensional interactive interface with a three-dimensional texture formed at least in a first area, wherein the first area is the area of the three-dimensional interactive interface in which the target object is located, and the three-dimensional texture is used to identify the first operation.
  10. The apparatus according to claim 9, wherein the second display unit is further configured to display the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, wherein the distance between the three-dimensional texture displayed at a first moment within the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, and the second moment within the preset time period is later than the first moment.
  11. The apparatus according to claim 10, wherein the second display unit comprises:
    a first display module, configured to display, at the first moment, the three-dimensional interactive interface formed with a first three-dimensional ripple, wherein the first three-dimensional ripple is centered on the target object;
    a second display module, configured to display, at the second moment, the three-dimensional interactive interface formed with a second three-dimensional ripple, wherein the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple has diffused.
  12. The apparatus according to claim 11, wherein the first display module comprises:
    an obtaining sub-module, configured to obtain a first data set and a second data set, wherein the first data set comprises a plurality of pieces of first data, each piece of first data indicating the position of a vertex of the mesh of a mesh panel at the first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display manner is located, the second data set comprises a plurality of pieces of second data, and each piece of second data indicates the position of the normal of a vertex of the mesh of the mesh panel at the first moment;
    a display sub-module, configured to render the mesh of the mesh panel according to the first data set and the second data set, so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh of the mesh panel is set to a liquid, and the first three-dimensional texture is a texture produced by a perturbation of the liquid.
  13. A storage medium, comprising a stored program, wherein when the program runs, the method according to any one of claims 1 to 7 is executed.
  14. An electronic apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes, by means of the computer program, the method according to any one of claims 1 to 7.
PCT/CN2018/111650 2017-10-24 2018-10-24 Display method and apparatus for interactive interface, storage medium, and electronic apparatus WO2019080870A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711000972.0 2017-10-24
CN201711000972.0A CN109697001B (zh) 2017-10-24 2017-10-24 Display method and apparatus for interactive interface, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2019080870A1 true WO2019080870A1 (zh) 2019-05-02

Family

ID=66227798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111650 WO2019080870A1 (zh) 2017-10-24 2018-10-24 Display method and apparatus for interactive interface, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN109697001B (zh)
WO (1) WO2019080870A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116672712A (zh) * 2020-12-29 2023-09-01 苏州幻塔网络科技有限公司 Prop control method and apparatus, electronic device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183276A (zh) * 2007-12-13 2008-05-21 上海交通大学 Interactive system based on camera-projector technology
CN103474007B (zh) * 2013-08-27 2015-08-19 湖南华凯文化创意股份有限公司 An interactive display method and system
CN104281260A (zh) * 2014-06-08 2015-01-14 朱金彪 Method and apparatus for operating a computer and a mobile phone in a virtual world, and glasses using the same
US10296086B2 (en) * 2015-03-20 2019-05-21 Sony Interactive Entertainment Inc. Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments
US9851799B2 (en) * 2015-09-25 2017-12-26 Oculus Vr, Llc Haptic surface with damping apparatus
CN106774824B (zh) * 2016-10-26 2020-02-04 网易(杭州)网络有限公司 Virtual reality interaction method and apparatus
CN106896915B (zh) * 2017-02-15 2020-05-29 阿里巴巴(中国)有限公司 Virtual reality-based input control method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289337A (zh) * 2010-06-18 2011-12-21 上海三旗通信科技有限公司 A brand-new display method for a mobile terminal interface
CN102430244A (zh) * 2011-12-30 2012-05-02 领航数位国际股份有限公司 A method for producing visual human-computer interaction through finger contact
US9378592B2 (en) * 2012-09-14 2016-06-28 Lg Electronics Inc. Apparatus and method of providing user interface on head mounted display and head mounted display thereof
CN104460988A (zh) * 2014-11-11 2015-03-25 陈琦 An input control method for a smartphone virtual reality device
CN105630160A (zh) * 2015-12-21 2016-06-01 黄鸣生 Virtual reality user interface system
CN106775258A (zh) * 2017-01-04 2017-05-31 虹软(杭州)多媒体信息技术有限公司 Method and apparatus for implementing virtual reality interaction using gesture control

Also Published As

Publication number Publication date
CN109697001A (zh) 2019-04-30
CN109697001B (zh) 2021-07-27

Similar Documents

Publication Publication Date Title
US10754531B2 (en) Displaying a three dimensional user interface
KR102165124B1 (ko) Dynamic graphic interface shadows
JP6659644B2 (ja) Low-latency visual response to input by pre-generating alternative graphical representations of application elements, and input handling on a graphics processing unit
CN107890672B (zh) Visual method and apparatus for compensating sound information, storage medium, and electronic device
CN111167120A (zh) Processing method and apparatus for virtual models in a game
CN108762482A (zh) A data interaction method and system between a large screen and augmented reality glasses
CN109725956B (zh) A scene rendering method and related apparatus
JP2018509686A (ja) Editing and manipulation of ink strokes
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
WO2019166005A1 (zh) Smart terminal, sensing control method therefor, and apparatus having storage function
CN108776544A (zh) Interaction method and apparatus in augmented reality, storage medium, and electronic device
US9940757B2 (en) Modifying a simulated character by direct manipulation
Setareh et al. Development of a virtual reality structural analysis system
WO2024124805A1 (zh) Processing method and apparatus for interactive animation, storage medium, and electronic apparatus
WO2019080870A1 (zh) Display method and apparatus for interactive interface, storage medium, and electronic apparatus
CN104503663A (zh) A 3D human-computer interaction desktop system
US11093117B2 (en) Method for controlling animation's process running on electronic devices
JP2016018363A (ja) Game program for controlling display of objects arranged on a virtual-space plane
CN115375797A (zh) Layer processing method and apparatus, storage medium, and electronic apparatus
JP2016016319A (ja) Game program for controlling display of objects arranged on a virtual-space plane
Jung et al. Interactive textures as spatial user interfaces in X3D
US11393171B2 (en) Mobile device based VR content control
WO2023216771A1 (zh) Virtual weather interaction method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN116774835B (zh) Interaction method, device, and storage medium in a virtual environment based on a VR controller
Ramsbottom A virtual reality interface for previsualization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18870841

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18870841

Country of ref document: EP

Kind code of ref document: A1