CN116225205A - Virtual reality equipment and content input method - Google Patents


Publication number
CN116225205A
Authority
CN
China
Prior art keywords
content
control
virtual reality
focus cursor
reality device
Prior art date
Legal status
Pending
Application number
CN202111464920.5A
Other languages
Chinese (zh)
Inventor
罗桂边 (Luo Guibian)
陆华色 (Lu Huase)
Current Assignee
Hisense Electronic Technology Shenzhen Co ltd
Original Assignee
Hisense Electronic Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Hisense Electronic Technology Shenzhen Co ltd filed Critical Hisense Electronic Technology Shenzhen Co ltd
Priority to CN202111464920.5A
Publication of CN116225205A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0489 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using dedicated keyboard keys or combinations thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

The application provides a virtual reality device and a content input method. The user can move the focus cursor of the virtual reality device by rotating the head or by similar actions, thereby positioning the cursor on a target location. When the user positions the focus cursor on a target key that carries character content, the virtual reality device enters that character content into the input box of the web page. Selecting a target key by positioning the focus cursor simulates the process of clicking the key with a remote control or a motion-sensing handle, spares the user frequent manual clicks, and greatly improves the interaction experience between the user and the virtual reality device.

Description

Virtual reality equipment and content input method
Technical Field
The application relates to the technical field of virtual reality, and in particular to a virtual reality device and a content input method.
Background
Virtual Reality (VR) technology is a display technology that uses a computer to simulate a virtual environment, giving the user a sense of immersion in that environment. A virtual reality device is a device that presents such virtual pictures to the user. When displaying certain content, the virtual user interface of a virtual reality device may be implemented as a web page in a VR browser.
When the VR browser opens a blank web page, the user can enter a URL in the input field at the top of the page, causing the virtual reality device to refresh the page and display the content corresponding to that URL. In this process, the user must first click the input field by manually operating a remote control or a motion-sensing handle, after which the virtual reality device displays an input-method soft keyboard over the current page. The user then enters the URL by manually clicking letters, numbers, and other characters on the soft keyboard, and the virtual reality device displays the corresponding web page.
This way of entering content into an input field on a web page requires the user to click the input-method soft keyboard frequently and manually. The excessive manual operation degrades the user's experience with the virtual reality device.
Disclosure of Invention
The application provides a virtual reality device and a content input method, which address the problem that a user currently has to click an input-method soft keyboard frequently and manually when entering content into an input field.
In a first aspect, the present application provides a virtual reality device, comprising: a display configured to display a virtual user interface; and a controller configured to: when a focus cursor of the virtual reality device is positioned on a target location, display a keyboard control in the rendered scene of the virtual reality device, the keyboard control being independent of the virtual user interface and located in front of it; when the focus cursor is positioned on a target key of the keyboard control, display a content confirmation control in the rendered scene, the content confirmation control being close to the focus cursor and located in front of the keyboard control; and when the focus cursor is positioned on the content confirmation control, display the character content corresponding to the target key in an input box.
When using the virtual reality device, the user can move its focus cursor by rotating the head or by similar movements, thereby positioning the cursor on a target location. The target location may be a key on the keyboard control, another control in the rendered scene, or a position on the web page displayed on the virtual user interface. When the user positions the focus cursor on a target key that carries character content, the virtual reality device enters that character content into the input box of the web page. Selecting a target key by positioning the focus cursor simulates the process of clicking the key with a remote control or a motion-sensing handle, spares the user frequent manual clicks, and greatly improves the interaction experience between the user and the virtual reality device.
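The interaction flow above (positioning the cursor on a key shows a confirmation control; positioning it on that control commits the character) can be sketched as a small state machine. This is an illustrative Python sketch, not the patent's implementation; the class and method names are assumptions.

```python
class InputSession:
    """Illustrative state machine for the claimed cursor-based selection flow."""

    def __init__(self):
        self.input_box = []          # character contents committed so far
        self.confirm_visible = False
        self.pending_key = None

    def cursor_on_key(self, key):
        # Cursor positioned on a target key: show the content confirmation
        # control near the cursor, in front of the keyboard control.
        self.pending_key = key
        self.confirm_visible = True

    def cursor_on_confirm(self):
        # Cursor positioned on the confirmation control: commit the pending
        # character content into the input box.
        if self.confirm_visible and self.pending_key is not None:
            self.input_box.append(self.pending_key)
            self.confirm_visible = False
            self.pending_key = None

session = InputSession()
session.cursor_on_key("w")
session.cursor_on_confirm()
session.cursor_on_key("w")
session.cursor_on_confirm()
print("".join(session.input_box))  # -> ww
```

Note that no click is needed anywhere: every transition is driven purely by where the focus cursor comes to rest.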
In some implementations, the controller is further configured to: when the virtual user interface displays a blank web page, determine the position of the focus cursor; when the focus cursor is positioned on the input box at the top of the blank web page, display a content display control in the rendered scene, the content display control being close to the focus cursor and located in front of the virtual user interface; and display the keyboard control in the rendered scene when the focus cursor is positioned on the content display control.
In some implementations, the controller is further configured to: when the focus cursor is positioned on the input box at the top of the blank web page, determine whether the first stay time of the focus cursor is greater than or equal to a first preset time; and if so, display the content display control in the rendered scene.
In some implementations, the controller is further configured to: after the content display control is displayed in the rendered scene, if the focus cursor is not positioned on the content display control, determine whether the second stay time of the focus cursor at its current position is greater than or equal to a second preset time; and if so, make the content display control disappear.
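The two dwell thresholds above (a first preset time on the input box that shows the content display control, and a second preset time spent elsewhere that makes it disappear) amount to a simple dwell timer. A hedged Python sketch; the region names and threshold values are illustrative assumptions, not values from the patent.

```python
FIRST_PRESET_S = 1.0   # assumed dwell on the input box before the control appears
SECOND_PRESET_S = 2.0  # assumed dwell elsewhere before the control disappears

class DwellController:
    """Tracks focus-cursor dwell time and toggles the content display control."""

    def __init__(self):
        self.control_visible = False
        self._dwell_start = 0.0
        self._last_region = None

    def update(self, region, now):
        # Restart the dwell timer whenever the cursor enters a new region.
        if region != self._last_region:
            self._last_region = region
            self._dwell_start = now
            return
        stay = now - self._dwell_start
        if region == "input_box" and not self.control_visible:
            # First stay time reached: show the content display control.
            if stay >= FIRST_PRESET_S:
                self.control_visible = True
        elif self.control_visible and region not in ("content_display", "input_box"):
            # Second stay time reached away from the control: hide it.
            if stay >= SECOND_PRESET_S:
                self.control_visible = False

ctl = DwellController()
ctl.update("input_box", 0.0)
ctl.update("input_box", 1.2)   # first stay time exceeded
print(ctl.control_visible)     # -> True
ctl.update("web_page", 1.3)
ctl.update("web_page", 3.5)    # second stay time exceeded
print(ctl.control_visible)     # -> False
```

The second threshold being longer than the first is a design choice in this sketch: accidentally glancing away should not dismiss the control as quickly as deliberately dwelling summons it.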
In some implementations, the controller is further configured to: when the focus cursor is positioned on the content confirmation control, determine the content represented by the target key; and when that content is character content, enter the character content corresponding to the target key into the input box.
In some implementations, the controller is further configured to: when the content represented by the target key is a delete operation, delete the last character content in the input box.
In some implementations, the controller is further configured to: when the content represented by the target key is a confirm operation, control the virtual user interface to display the target web page identified by the URL in the input box, the URL being composed of all the character content in the input box.
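The three key types handled above (character content, a delete operation, and a confirm operation that loads the URL built from the input box) can be summarised in one dispatch routine. An illustrative Python sketch; `load_page` is a stand-in for the browser refresh step and the key labels are assumptions.

```python
def handle_confirmed_key(key, input_box, load_page):
    """Apply a confirmed target key to the input box (names are illustrative)."""
    if key == "DELETE":
        if input_box:
            input_box.pop()               # delete the last character content
    elif key == "CONFIRM":
        load_page("".join(input_box))     # display the page for the composed URL
    else:
        input_box.append(key)             # ordinary character content

box = []
loaded = []
for k in ["a", "b", "c", "DELETE", "CONFIRM"]:
    handle_confirmed_key(k, box, loaded.append)
print(loaded)  # -> ['ab']
```

In this sketch the input box retains its contents after confirmation, matching a browser address bar that still shows the URL after navigation.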
In some implementations, the controller is further configured to: when the focus cursor is positioned outside the keyboard control, display a content hiding control in the rendered scene; and when the focus cursor is positioned on the content hiding control, make the keyboard control disappear.
In a second aspect, the present application further provides a content input method, comprising: when a focus cursor of a virtual reality device is positioned on a target location, displaying a keyboard control in the rendered scene of the virtual reality device, the keyboard control being independent of the virtual user interface and located in front of it; when the focus cursor is positioned on a target key of the keyboard control, displaying a content confirmation control in the rendered scene, the content confirmation control being close to the focus cursor and located in front of the keyboard control; and when the focus cursor is positioned on the content confirmation control, displaying the character content corresponding to the target key in an input box.
The content input method of the second aspect may be applied to the virtual reality device of the first aspect and is specifically executed by its controller; its beneficial effects are therefore the same as those of the virtual reality device of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 illustrates a display system architecture diagram including a virtual reality device, according to some embodiments;
FIG. 2 illustrates a VR scene global interface schematic in accordance with some embodiments;
FIG. 3 illustrates a recommended content region schematic diagram of a global interface, according to some embodiments;
FIG. 4 illustrates an application shortcut entry area schematic for a global interface in accordance with some embodiments;
FIG. 5 illustrates a suspension diagram of a global interface, according to some embodiments;
FIG. 6 illustrates a schematic diagram of displaying a web page on a virtual user interface in accordance with some embodiments;
FIG. 7 illustrates another diagram of displaying a web page on a virtual user interface in accordance with some embodiments;
FIG. 8 illustrates a flow diagram of a virtual reality device inputting content into an input box, in accordance with some embodiments;
FIG. 9 illustrates a schematic diagram of a keyboard control in a rendered scene, in accordance with some embodiments;
FIG. 10 illustrates a schematic diagram of a content validation control in a rendered scene in accordance with some embodiments;
FIG. 11 illustrates a flow diagram for a virtual reality device displaying keyboard controls, according to some embodiments;
FIG. 12 illustrates a schematic diagram of content display controls in a rendered scene, in accordance with some embodiments;
FIG. 13 illustrates a flow diagram of a virtual reality device displaying content display controls, according to some embodiments;
FIG. 14 illustrates another flow diagram for a virtual reality device displaying content display controls, according to some embodiments;
FIG. 15 illustrates another flow diagram for a virtual reality device entering content into an input box, according to some embodiments;
FIG. 16 illustrates a schematic diagram of symbol keyboard controls in a rendered scene, in accordance with some embodiments;
FIG. 17 illustrates a schematic diagram of numeric keyboard controls in a rendered scene, in accordance with some embodiments;
FIG. 18 illustrates a schematic diagram of a keyboard control with capital letter keys in a rendered scene, in accordance with some embodiments;
FIG. 19 illustrates a schematic diagram of content hiding controls in a rendered scene according to some embodiments.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the exemplary embodiments of the present application more apparent, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is apparent that the described exemplary embodiments are only some embodiments of the present application, but not all embodiments.
All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present application, are intended to be within the scope of the present application based on the exemplary embodiments shown in the present application. Furthermore, while the disclosure has been presented in terms of an exemplary embodiment or embodiments, it should be understood that various aspects of the disclosure can be practiced separately from the disclosure in a complete subject matter.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so labelled may be interchanged where appropriate, so that the embodiments of the application can, for example, be implemented in an order other than that illustrated or described here.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
Reference throughout this specification to "multiple embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic shown or described in connection with one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without limitation. Such modifications and variations are intended to be included within the scope of the present application.
In the embodiments, the virtual reality device 500 generally refers to a display device that can be worn on the user's face to provide an immersive experience, including but not limited to VR glasses, augmented reality (AR) devices, VR gaming devices, mobile computing devices, and other wearable computers. In some embodiments of the present application, VR glasses are taken as the example when describing the technical solution; it should be understood that the solution can equally be applied to other types of virtual reality devices. The virtual reality device 500 may operate independently or be connected to another smart display device as an external device, where the display device may be a smart TV, a computer, a tablet computer, a server, etc.
Once worn on the user's face, the virtual reality device 500 can display a media asset picture, providing close-range images to both of the user's eyes for an immersive experience. To present this picture, the virtual reality device 500 may include a number of components for displaying the picture and for wearing on the face. Taking VR glasses as an example, the virtual reality device 500 may include a housing, position fixings, an optical system, a display assembly, a posture detection circuit, an interface circuit, and so on. In practice, the optical system, display assembly, posture detection circuit, and interface circuit may be arranged inside the housing to present the specific display picture, while position fixings are connected to both sides of the housing so that the device can be worn on the user's face.
The posture detection circuit contains posture detection elements such as a gravity/acceleration sensor and a gyroscope. When the user's head moves or rotates, the circuit detects the user's posture and transmits the detected posture data to a processing element such as the controller, which adjusts the specific picture content in the display assembly according to that data.
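The patent only states that detected posture data is used to adjust the picture. One common way such data drives a focus cursor is to ray-cast the head's view direction onto a virtual screen plane; the projection model and the fixed screen distance below are assumptions for illustration, not the patent's method.

```python
import math

def pose_to_cursor(yaw_deg, pitch_deg, screen_distance=2.0):
    """Map head yaw/pitch (degrees) to x/y coordinates on a plane
    `screen_distance` metres in front of the viewer (illustrative model)."""
    x = screen_distance * math.tan(math.radians(yaw_deg))
    y = screen_distance * math.tan(math.radians(pitch_deg))
    return (x, y)

print(pose_to_cursor(0.0, 0.0))    # cursor at the screen centre
print(pose_to_cursor(45.0, 0.0))   # roughly two metres to the right
```

Gyroscope readings would in practice be integrated (and fused with the accelerometer) to obtain yaw and pitch before a mapping like this is applied.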
As shown in fig. 1, in some embodiments, the virtual reality device 500 may be connected to the display device 200, and a network-based display system is constructed between the virtual reality device 500, the display device 200, and the server 400, and data interaction may be performed in real time, for example, the display device 200 may obtain media data from the server 400 and play the media data, and transmit specific screen content to the virtual reality device 500 for display.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device, or the like. The specific display device type, size, resolution, etc. are not limited; those skilled in the art will appreciate that the performance and configuration of the display device 200 may be varied as needed. The display device 200 may provide a broadcast-receiving TV function and may additionally provide the smart network TV function of a computer, including but not limited to network TV, smart TV, Internet Protocol TV (IPTV), and so on.
The display device 200 and the virtual reality device 500 also communicate data with the server 400 via a variety of communication means. The display device 200 and the virtual reality device 500 may be allowed to communicate via a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. By way of example, display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The server 400 may be a cluster, or may be multiple clusters, and may include one or more types of servers. Other web service content such as video on demand and advertising services are provided through the server 400.
In the course of data interaction, the user may operate the display device 200 through the mobile terminal 300 and the remote controller 100. The mobile terminal 300 and the remote controller 100 may communicate with the display device 200 by a direct wireless connection or by a non-direct connection. That is, in some embodiments, the mobile terminal 300 and the remote controller 100 may communicate with the display device 200 through a direct connection manner of bluetooth, infrared, etc. When transmitting the control instruction, the mobile terminal 300 and the remote controller 100 may directly transmit the control instruction data to the display device 200 through bluetooth or infrared.
In other embodiments, the mobile terminal 300 and the remote controller 100 may also access the same wireless network with the display device 200 through a wireless router to establish indirect connection communication with the display device 200 through the wireless network. When transmitting the control command, the mobile terminal 300 and the remote controller 100 may transmit the control command data to the wireless router first, and then forward the control command data to the display device 200 through the wireless router.
In some embodiments, the user may also use the mobile terminal 300 and the remote controller 100 to directly interact with the virtual reality device 500, for example, the mobile terminal 300 and the remote controller 100 may be used as handles in a virtual reality scene to implement functions such as somatosensory interaction.
In some embodiments, the display assembly of the virtual reality device 500 includes a display screen and the drive circuitry associated with it. To present a specific picture with a stereoscopic effect, the display assembly may include two display screens, corresponding to the user's left and right eyes respectively. When a 3D effect is presented, the picture content shown on the left and right screens differs slightly, for example showing the pictures captured by the left and right cameras when the 3D film source was shot. Because the screen content observed by the user's left and right eyes differs, a picture with a strong stereoscopic impression is perceived when the device is worn.
The optical system in the virtual reality device 500 is an optical module composed of a plurality of lenses. Arranged between the user's eyes and the display screen, it lengthens the optical path through the refraction of light by the lenses and the polarising effect of the polarisers on them, so that the content presented by the display assembly appears clearly in the user's field of view. To adapt to users with different eyesight, the optical system also supports focusing: a focusing assembly adjusts the position of one or more lenses, changing the distance between them and thus the optical path, so as to adjust picture clarity.
The interface circuit of the virtual reality device 500 may be used to transfer interaction data, and besides transferring gesture data and displaying content data, in practical application, the virtual reality device 500 may also be connected to other display devices or peripheral devices through the interface circuit, so as to implement more complex functions by performing data interaction with the connection device. For example, the virtual reality device 500 may be connected to a display device through an interface circuit, so that a displayed screen is output to the display device in real time for display. For another example, the virtual reality device 500 may also be connected to a handle via interface circuitry, which may be operated by a user in a hand, to perform related operations in the VR user interface.
Wherein the VR user interface can be presented as a plurality of different types of UI layouts depending on user operation. For example, the user interface may include a global interface, such as the global UI shown in fig. 2 after the AR/VR terminal is started, which may be displayed on a display screen of the AR/VR terminal or may be displayed on a display of the display device. The global UI may include a recommended content area 1, a business class extension area 2, an application shortcut entry area 3, and a hover area 4.
The recommended content area 1 is used to configure TAB columns of different classifications; media assets, topics, and the like can be configured in these columns. The media assets may include services with media content such as 2D movies, educational courses, travel, 3D content, 360-degree panoramas, live broadcasts, 4K movies, program applications, and games. The columns can use different template styles and can support simultaneous recommendation and arrangement of media assets and topics, as shown in fig. 3.
In some embodiments, the recommended content area 1 may also include a main interface and secondary interfaces. As shown in fig. 3, the portion in the centre of the UI layout is the main interface, and the portions on either side of it are secondary interfaces. The main and secondary interfaces can display different recommended content. For example, according to the recommended type of film source, the main interface may display 3D film source services, while the left secondary interface displays 2D film source services and the right secondary interface displays full-scene film source services.
For the main and secondary interfaces, different service content can thus be displayed with different content layouts, and the user can switch between them through specific interactions. For example, the focus mark can be moved left and right: if the focus mark is at the rightmost edge of the main interface and moves further right, the right secondary interface is brought to the centre of the UI layout. The main interface then switches to displaying the full-scene film source services, the left secondary interface switches to the 3D film source services, and the right secondary interface switches to the 2D film source services.
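The left/centre/right switching described above behaves like a circular rotation of three panels. A minimal Python sketch, assuming that moving the focus mark right past the main interface's edge rotates the carousel; the panel labels mirror the example in the text.

```python
from collections import deque

# Layout from the example above: left = 2D, main (centre) = 3D, right = full-scene.
panels = deque(["2D", "3D", "full-scene"])

def move_focus_right(p):
    # The right secondary interface comes to the centre; every panel
    # shifts one slot to the left, wrapping around.
    p.rotate(-1)

move_focus_right(panels)
print(list(panels))  # -> ['3D', 'full-scene', '2D']
```

After the rotation the centre slot holds the full-scene services, the left slot the 3D services, and the right slot the 2D services, exactly as in the example.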
In addition, to make viewing easier, the main and secondary interfaces can be shown with different display effects. For example, the transparency of the secondary interfaces can be increased to give them a blurred look that highlights the main interface; alternatively, the secondary interfaces can be rendered in grayscale while the main interface keeps its colour, again highlighting the main interface.
In some embodiments, a status bar may also be provided at the top of the recommended content area 1, containing a number of display controls including time, network connection status, battery level, and other common options. The content of the status bar may be user-defined; for example, weather or a user avatar may be added. The items in the status bar may be selected by the user to perform the corresponding function. For example, when the user clicks the time option, the virtual reality device 500 may display a time setting window in the current interface or jump to a calendar interface; when the user clicks the network connection status option, the virtual reality device 500 may display a WiFi list in the current interface or jump to the network settings interface.
The content displayed in the status bar may be presented in different content forms according to the setting status of a specific item. For example, the time control may be displayed directly as specific time text information and display different text at different times; the power control may be displayed as different pattern styles according to the current power remaining situation of the virtual reality device 500.
The status bar is used to enable the user to perform a common control operation, so as to implement quick setting of the virtual reality device 500. Since the setup procedure for the virtual reality device 500 includes a number of items, all of the commonly used setup options cannot generally be displayed in the status bar. To this end, in some embodiments, an expansion option may also be provided in the status bar. After the expansion options are selected, an expansion window may be presented in the current interface, and a plurality of setting options may be further provided in the expansion window for implementing other functions of the virtual reality device 500.
For example, in some embodiments, after the expansion option is selected, a "shortcut center" option may be provided in the expansion window. After the user clicks the shortcut center option, the virtual reality device 500 may display a shortcut center window. The shortcut center window may include screen capture, screen recording, and screen casting options for waking up the corresponding functions respectively.
The service classification extension area 2 supports configuring extensions for different service classifications. If a new service classification is added, an independent TAB can be configured for it to display the corresponding page content. The service classifications in the service classification extension area 2 can also be reordered, and services can be taken offline. In some embodiments, the service classification extension area 2 may include the following content: movie, education, travel, application, and my. In some embodiments, the service classification extension area 2 is configured to show the major service classification TABs and supports configuring more classifications, whose icons support configuration, as shown in fig. 3.
The application shortcut entry area 3 may display specified pre-installed applications. Multiple applications may be specified and displayed in front for operational recommendation, and special icon styles may be configured to replace the default icons. In some embodiments, the application shortcut entry area 3 further includes a left movement control and a right movement control for moving the selection target, so that different icons can be selected, as shown in fig. 4.
The hover region 4 may be configured above the diagonally left side or above the diagonally right side of the fixed region, and may be configured as an alternative character or as a jump link. For example, after receiving a confirmation operation, the hover element jumps to an application or displays a designated function page, as shown in fig. 5. In some embodiments, the hover element may also be configured without a jump link, purely for visual presentation.
In some embodiments, the global UI further includes a status bar at the top for displaying time, network connection status, power status, and more shortcut entries. After an icon is selected with the handle of the AR/VR terminal, i.e., the handheld controller, the icon displays a text prompt with left-right expansion, and the selected icon stretches and expands to the left and right according to its position.
For example, after the search icon is selected, the search icon displays the text "search" together with the original icon, and after the icon or text is further clicked, the device jumps to the search page. For another example, clicking the favorites icon jumps to the favorites TAB, clicking the history icon locates and displays the history page by default, clicking the search icon jumps to the global search page, and clicking the message icon jumps to the message page.
In some embodiments, the interaction may be performed through a peripheral device; for example, the handle of the AR/VR terminal may operate the user interface of the AR/VR terminal. The handle includes a back button; a home key, which implements a reset function when long-pressed; volume up and down buttons; and a touch area, which implements clicking, sliding, and press-and-drag functions of the focus.
The VR user interfaces in the above embodiments may also be referred to as virtual user interfaces. The content displayed on a virtual user interface may be provided by an application or by a VR browser. When the virtual user interface is provided by the VR browser, the content displayed on the virtual user interface is the webpage content of the VR browser, since webpage content is presented inside the browser.
FIG. 6 illustrates a schematic diagram of displaying a web page on a virtual user interface, according to some embodiments. As shown in fig. 6, if the web page currently displayed on the virtual user interface has substantial content, the address of the current web page is typically displayed in the input field at the top of the web page, for example, "https: www/searchq=% e8%91& form=qn & sp=fe489F 2", etc. If the current web page has no substantial content, the input field at the top of the web page does not contain the address of the current web page. For example, as shown in fig. 7, a web page that displays only icons or marks without substantial content, such as the home page of a VR browser or the home page of a certain website, does not show a specific web address in the input field. Generally, a web page without any content is referred to as a blank web page; since the web page shown in fig. 7 has no substantial content, it may also be referred to as a blank web page in the embodiments of the present application.
When the VR browser in the virtual reality device 500 opens a blank web page, the user may input a web address in the input field at the top of the web page, thereby controlling the virtual reality device 500 to refresh the page and display the webpage content corresponding to the target address. In this process, the user needs to manually control the remote controller or the somatosensory handle to simulate a mouse click and click the input field, so that the virtual reality device 500 displays the input method soft keyboard on the current webpage. Then the user again manually controls the remote controller or the somatosensory handle to simulate mouse clicks and clicks letters and/or numbers and other combined content on the input method soft keyboard to input the web address. After the web address is input, the virtual reality device 500 displays the webpage content corresponding to the web address on the virtual user interface.
When inputting content into the input field on a web page in this manner, the user needs to frequently and manually click the input method soft keyboard, and the excessive manual operation affects the user's experience with the virtual reality device 500.
In order to solve the above-mentioned problems, a virtual reality device 500 is provided in an embodiment of the present application, which includes a display and a controller. Wherein the display is configured to display a virtual user interface. As shown in fig. 8, the controller is configured to perform the steps of:
in step S101, when the focus cursor 5 of the virtual reality device 500 is positioned on the target position, the keyboard control 6 is displayed in the rendered scene of the virtual reality device 500.
In general, the virtual reality device 500 has its own focus cursor 5, whose position relative to the user in the rendering scene provided by the virtual reality device 500 is fixed. Therefore, when the user's head moves, the focus cursor 5 moves along with it, replacing a remote controller or somatosensory handle for controlling the virtual reality device 500. When the user moves the focus cursor 5 onto a control or a piece of content, the user can be considered to have selected that control or content.
Also, in the embodiment of the present application, when the user controls the focus cursor 5 to be positioned on the target position, the user may be considered to need to control the display of the keyboard control 6. The target position may be an input field position at the top of the blank web page, or may be a position of other controls displayed in the rendering scene after the focus cursor 5 is positioned to the input field.
The rendering scene referred to herein is a virtual scene constructed by the rendering engine of the virtual reality device 500 through a rendering program. For example, a virtual reality device 500 based on the Unity 3D rendering engine may construct a Unity 3D scene when rendering a display. In a Unity 3D scene, various virtual objects and functional controls may be added to render a particular usage scene. For example, when playing multimedia resources, a display panel may be added in the Unity 3D scene to present the multimedia resource picture. Meanwhile, virtual object models such as seats, sound equipment, and people may be added in the Unity 3D scene to create a cinema effect.
FIG. 9 illustrates a schematic diagram of a keyboard control in a rendered scene, according to some embodiments. As shown in fig. 9, the keyboard control 6 in the rendered scene is independent of and at the front end of the virtual user interface. The key contents displayed on the keyboard control 6 may refer to the key contents on the input method keyboard in the mobile phone, and may include letter keys such as a, b, c, etc., a symbol switching key 10, a number switching key 11, a case switching key 12, a delete key 13, a go key 14, a space key 15, etc.
The user can control the focus cursor 5 to be positioned on any key of the keyboard control according to the own requirement, so as to select the target key.
In step S102, when the focus cursor 5 is positioned on the target key of the keyboard control 6, the content confirmation control 7 is displayed in the rendering scene.
For example, the user first positions the focus cursor 5 over the letter key "q" on the keyboard control 6, and then the virtual reality device 500 will display the content confirmation control 7 as shown in fig. 10 at a position near the focus cursor 5 at that time. The content confirmation control 7 shown in fig. 10 is near the focus cursor 5 and at the front end of the keyboard control 6. A prompt, for example, "select this content, focus on here" may be displayed on the content confirmation control 7. If the user confirms that the letter "q" input is desired, control of the focus cursor 5 to position over the content confirmation control 7 may continue.
Step S103, when the focus cursor 5 is positioned on the content confirmation control 7, the character content corresponding to the target key is displayed in the input box.
Taking the website "abc.com" as an example, when inputting the website in the input field, the user first positions the focus cursor 5 on the letter key "a"; then, after the content confirmation control 7 is displayed in the rendering scene, the user positions the focus cursor 5 on the content confirmation control 7, thereby inputting the letter "a" in the input field. The user positions the focus cursor 5 over the letter key "b", and then, after the content confirmation control 7 is displayed in the rendering scene, positions the focus cursor 5 over the content confirmation control 7, thereby inputting the letter "b" in the input field. The user positions the focus cursor 5 over the letter key "c", and then, after the content confirmation control 7 is displayed in the rendering scene, positions the focus cursor 5 over the content confirmation control 7, thereby inputting the letter "c" in the input field. The user positions the focus cursor 5 over the period key ".", and then, after the content confirmation control 7 is displayed in the rendering scene, positions the focus cursor 5 over the content confirmation control 7, thereby inputting the symbol "." in the input field. In the same manner, the remaining letters "c", "o", and "m" are input in turn.
In the embodiment of the present application, the content confirmation control 7 is displayed once each time the focus cursor 5 is positioned on a key. After the focus cursor 5 is positioned on a key, the display of the content confirmation control 7 is triggered, and the content confirmation control 7 displayed at that time relates only to that key positioning operation. When the virtual reality device 500 inputs the content corresponding to the key into the input field, it also controls the content confirmation control 7 corresponding to that key positioning operation to disappear.
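The flow of steps S101-S103 can be sketched as a small state machine: hovering a key shows a one-shot confirmation control, and hovering that control commits the character and dismisses it. This is an illustrative sketch, not the application's actual implementation; all names (GazeInput, on_cursor_enter_key, etc.) are hypothetical.

```python
class GazeInput:
    def __init__(self):
        self.input_box = ""          # text committed so far
        self.pending_key = None      # key awaiting confirmation
        self.confirm_visible = False # whether the confirmation control is shown

    def on_cursor_enter_key(self, char):
        # Step S102: hovering a target key displays the content
        # confirmation control near the cursor; nothing is input yet.
        self.pending_key = char
        self.confirm_visible = True

    def on_cursor_enter_confirm(self):
        # Step S103: hovering the confirmation control commits the
        # pending character and dismisses the control.
        if self.confirm_visible and self.pending_key is not None:
            self.input_box += self.pending_key
            self.pending_key = None
            self.confirm_visible = False

kb = GazeInput()
for ch in "abc.com":
    kb.on_cursor_enter_key(ch)
    kb.on_cursor_enter_confirm()
print(kb.input_box)  # -> abc.com
```

Note that hovering a key without visiting the confirmation control commits nothing, matching the two-step selection the passage describes.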
When using the virtual reality device 500 of the present application, the user can control the movement of the focus cursor 5 of the virtual reality device 500 by turning the head or the like, thereby positioning the focus cursor 5 on the target position. The target position may be a key position on the keyboard control 6, the position of another control in the rendering scene, or a position on the webpage displayed on the virtual user interface. When the user controls the focus cursor 5 to be positioned on the target key, if the target key is a key having character content, the virtual reality device 500 may input the character content corresponding to the target key into the input box of the web page. Selecting the target key by positioning the focus cursor 5 simulates the process of the user clicking the target key with a remote controller or somatosensory handle, avoids frequent manual clicking of target keys, and greatly improves the interaction experience between the user and the virtual reality device 500.
In the foregoing embodiment, when the user controls the focus cursor 5 to be positioned on the target position, the virtual reality device 500 may display the keyboard control 6 in the rendered scene. The target position may be an input field position at the top of the blank web page, or may be a position of other controls displayed in the rendering scene after the focus cursor 5 is positioned to the input field.
In some embodiments, if the target position is the input field at the top of a blank web page, the user can directly control the focus cursor 5 to be positioned on the input field. In this process, the controller of the virtual reality device 500 may be configured to display the keyboard control 6 in the rendered scene after determining that the focus cursor 5 is positioned on the input field.
In actual operation, the focus cursor 5 may pass over other key positions while moving to the target position. To avoid the controller of the virtual reality device 500 treating those other positions as the target position, in some embodiments a first preset time may be set to determine whether the dwell time of the focus cursor 5 at the target position meets the requirement. For example, if the time the focus cursor 5 remains on the input field is greater than or equal to the first preset time, it may be determined that the dwell time satisfies the condition, and the keyboard control 6 is then displayed. While moving to the input field, the focus cursor 5 may pass over other keys, such as the letter keys "c", "d", and "e"; because it merely passes over them, the time it stays on those keys is short and does not meet the requirement, i.e., the dwell time on those letter keys is less than the first preset time, in which case the virtual reality device 500 does not display the keyboard control 6.
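The first-preset-time check described above can be sketched as follows. The 0.8 s threshold and all names are illustrative assumptions, not values given in the application.

```python
FIRST_PRESET_TIME = 0.8  # seconds; hypothetical threshold

def should_show_keyboard(dwell_log, target="input_field"):
    """dwell_log: ordered list of (element, dwell_seconds) the cursor visited.
    Returns True only if the cursor rested on the target long enough."""
    for element, dwell in dwell_log:
        if element == target and dwell >= FIRST_PRESET_TIME:
            return True
    return False

# The cursor brushes past letter keys on its way to the input field;
# only the long dwell on the input field triggers the keyboard.
path = [("key_c", 0.1), ("key_d", 0.05), ("input_field", 1.2)]
print(should_show_keyboard(path))                     # True
print(should_show_keyboard([("input_field", 0.3)]))   # False: too brief
```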
In some embodiments, if the target position is the position of another control displayed in the rendering scene after the focus cursor 5 is positioned on the input field, the user may first position the focus cursor 5 on the input field; the virtual reality device 500 then displays the content display control 8 in the rendered scene, and when the user positions the focus cursor 5 over the content display control 8, the virtual reality device 500 displays the keyboard control 6. In this process, as shown in fig. 11, the controller of the virtual reality device 500 may be further configured to perform the following steps:
in step S201, when the virtual user interface displays a blank web page, the position where the focus cursor 5 is positioned is determined.
In step S202, when the focus cursor 5 is positioned to the input box at the top of the blank web page, the content display control 8 is displayed in the rendered scene.
FIG. 12 illustrates a schematic diagram of content display controls in a rendered scene, in accordance with some embodiments. As shown in fig. 12, the content display control 8 is near the focus cursor 5 and at the front end of the virtual user interface. A prompt such as "pop-up keyboard, focus here" may be displayed on the content display control 8. If the user does need to display an input method soft keyboard, the focus cursor 5 can be controlled to be positioned over the content display control 8.
The position of the content display control 8 is the target position.
In step S203, when the focus cursor 5 is positioned on the content display control 8, the keyboard control 6 is displayed in the rendered scene.
At the same time as the keyboard control 6 is displayed, the virtual reality device 500 also controls the content display control 8 to disappear. Thereafter, the user may select keys on the keyboard control 6 to input the corresponding content.
Additionally, in some embodiments, to avoid the controller of the virtual reality device 500 treating other positions where the focus cursor 5 briefly stays while moving as the position of the content display control 8, a first preset time may also be set to determine whether the dwell time of the focus cursor 5 on the content display control 8 meets the requirement. In this process, as shown in fig. 13, the controller of the virtual reality device 500 may be configured to perform the following steps:
in step S301, when the focus cursor 5 is positioned to the input box at the top of the blank web page, it is determined whether the first dwell time of the focus cursor 5 is greater than or equal to the first preset time.
In step S302, if the first dwell time is greater than or equal to the first preset time, the content display control 8 is displayed in the rendered scene. If the first dwell time is less than the first preset time, it is indicated that the user may not need to display the keyboard control 6, and thus the virtual reality device 500 will not display the content display control 8.
After the content display control 8 is displayed in the rendered scene, the user may not be able to position the focus cursor 5 over the content display control 8 in a timely manner for some reason, in which case the virtual reality device 500 may temporarily hold the content display control 8 for a period of time during which the keyboard control 6 may continue to be displayed if the user positions the focus cursor 5 over the content display control 8. During this time, the content display control 8 will disappear if the user has not yet positioned the focus cursor 5 over the content display control 8.
In the above procedure, as shown in fig. 14, the controller of the virtual reality device 500 may be further configured to perform the steps of:
in step S401, if the focus cursor 5 is not positioned on the content display control 8, it is determined whether the second stay time of the focus cursor 5 at the current position is greater than or equal to the second preset time.
In fact, if the user keeps the focus cursor 5 within the input box, the virtual reality device 500 always displays the content display control 8. In this way, when a user desires to display the keyboard control 6, the focus cursor 5 can be positioned over the content display control 8 at any time.
When the user moves the focus cursor 5 to a position outside the input box, the content display control 8 cannot be displayed indefinitely. Thus, in the embodiment of the present application, a second preset time may be set to determine whether to continue displaying the content display control 8. Specifically, after the focus cursor 5 moves out of the input box and is not positioned on the content display control 8, it is determined whether the second dwell time of the focus cursor 5 at its current position is greater than or equal to the second preset time.
In step S402, if the second dwell time is greater than or equal to the second preset time, the content display control 8 is controlled to disappear. A second dwell time greater than or equal to the second preset time indicates that, during the time the content display control 8 was displayed, the user kept the focus cursor 5 at a position outside both the input box and the content display control 8. The virtual reality device 500 may therefore consider that the user does not need the keyboard control 6 and, at this time, controls the content display control 8 to disappear.
A second dwell time less than the second preset time indicates that the user is still moving the focus cursor 5 while the content display control 8 is displayed, and the focus cursor 5 may yet be positioned on the content display control 8. The virtual reality device 500 therefore considers that the user may need the keyboard control 6 and continues to display the content display control 8, until the focus cursor 5 again rests at a position outside both the input box and the content display control 8 for a second dwell time greater than or equal to the second preset time, at which point the content display control 8 is controlled to disappear.
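The visibility rule for the content display control 8 in steps S401-S402 can be sketched as a single predicate. The 2.0 s value and all names are illustrative assumptions.

```python
SECOND_PRESET_TIME = 2.0  # seconds; hypothetical threshold

def display_control_visible(cursor_on, dwell_seconds):
    """cursor_on: the element the cursor currently rests on.
    Returns whether the content display control should remain on screen."""
    if cursor_on in ("input_field", "content_display_control"):
        # While the cursor is in the input box or on the control itself,
        # the control stays visible so the user can reach it at any time.
        return True
    # Outside both: keep the control only during the grace period before
    # the second preset time elapses (steps S401-S402).
    return dwell_seconds < SECOND_PRESET_TIME

print(display_control_visible("input_field", 5.0))  # True
print(display_control_visible("web_page", 1.0))     # True (grace period)
print(display_control_visible("web_page", 2.5))     # False (control disappears)
```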
The key configuration on the keyboard control 6 in the embodiments of the present application may be similar to that of a 26-key soft keyboard on a mobile phone. As shown in fig. 9, it includes the letter keys a, b, c, etc., a symbol switching key 10, a number switching key 11, a case switching key 12, a delete key 13, a go-to key 14, a space key 15, and so on. Of course, different keys have different functions or modes of operation. For example, when the user selects a letter key, symbol key, or number key, the virtual reality device 500 directly inputs the corresponding letter, symbol, or number into the input box. When the user selects a function key, the virtual reality device 500 first performs the operation required by that function key, after which the user can continue selecting keys.
Thus, in some embodiments, as shown in fig. 15, the controller of the virtual reality device 500 may be further configured to perform the steps of:
in step S501, when the focus cursor 5 is positioned on the content confirmation control 7, the content indicated by the target key is determined. The content represented by the target key includes letters, symbols, numbers, functions, etc. Also, the letter, symbol, and number content may be collectively referred to as character content.
In step S502, when the content indicated by the target key is character content, the character content corresponding to the target key is input into the input box.
For example, when the target key represents the letter a, the controller inputs the letter "a" into the input box; when the target key represents the symbol "\", the controller inputs the symbol "\" into the input box; when the target key represents the number 4, the controller inputs the number "4" into the input box.
When the content indicated by the target key is a function or an operation, the controller first implements the corresponding function or performs the corresponding operation.
When the target key represents a delete operation, the controller deletes the last character in the input box; when the target key represents a confirm operation, the controller navigates to the web page whose address consists of the character content currently in the input box. The delete key 13 corresponds to the delete operation, and the go-to key 14 corresponds to the confirm operation.
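The delete and go-to behavior just described can be sketched as a small dispatcher; the function name and key identifiers are hypothetical, and the second return value stands in for the page-jump side effect.

```python
def apply_function_key(key, input_box):
    """Returns (new_input_box, url_to_navigate_or_None)."""
    if key == "delete":
        # Delete key 13: drop the last character in the input box.
        return input_box[:-1], None
    if key == "go":
        # Go-to key 14: treat the current input box contents as the
        # target web address and trigger the page jump.
        return input_box, input_box
    return input_box, None  # unrecognized keys change nothing

box, nav = apply_function_key("delete", "abc.comm")
print(box)   # abc.com
box, nav = apply_function_key("go", box)
print(nav)   # abc.com
```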
When the target key represents a symbol switching operation, the controller switches the currently displayed keyboard control 6 to be displayed as the symbol keyboard control 16, and then the user can select a specific key on the symbol keyboard control 16.
FIG. 16 illustrates a schematic diagram of symbol keyboard controls in a rendered scene, in accordance with some embodiments. After the user positions the focus cursor 5 to the symbol switching button 10 on the keyboard control 6 shown in fig. 9, the content confirmation control 7 is displayed in the rendering scene, and after the user positions the focus cursor 5 to the content confirmation control 7, the virtual reality device 500 switches the currently displayed keyboard control 6 shown in fig. 9 to the symbol keyboard control 16 shown in fig. 16.
In some embodiments, the symbol switching key 10 may also be present on the numeric keyboard control 17, and the user may control the focus cursor 5 to select the symbol switching key 10 on the numeric keyboard control 17, so that the virtual reality device 500 switches the currently displayed numeric keyboard control 17 to the symbol keyboard control 16.
When the target key represents the number switching operation, the controller will switch the currently displayed keyboard control 6 to display the number keyboard control 17, and then the user can select a specific key on the number keyboard control 17.
FIG. 17 illustrates a schematic diagram of numeric keyboard controls in a rendered scene, in accordance with some embodiments. After the user positions the focus cursor 5 to the number switching button 11 on the keyboard control 6 shown in fig. 9, the content confirmation control 7 is displayed in the rendering scene, and after the user positions the focus cursor 5 to the content confirmation control 7, the virtual reality device 500 switches the currently displayed keyboard control 6 shown in fig. 9 to the number keyboard control 17 shown in fig. 17.
In some embodiments, the number switching key 11 may also be present on the symbol keyboard control 16, and the user may control the focus cursor 5 to select the number switching key 11 on the symbol keyboard control 16, so that the virtual reality device 500 switches the currently displayed symbol keyboard control 16 to the numeric keyboard control 17.
When the target key represents the return operation, the controller switches the currently displayed numeric keyboard control 17 or symbol keyboard control 16 back to the previous level of keyboard control 6, and then the user can select a specific key on the keyboard control 6, etc.
When the virtual reality device 500 displays the keyboard control 6, the control with lowercase letter keys shown in fig. 9 is typically displayed first. After the user selects the symbol switching key 10 or the number switching key 11, the symbol keyboard control 16 or the numeric keyboard control 17 corresponds to the next-level control of the keyboard control 6. Therefore, as shown in fig. 16 and fig. 17, a return button 9 is generally displayed on the numeric keyboard control 17 and the symbol keyboard control 16, and after the user selects the return button 9 on the symbol keyboard control 16 or the numeric keyboard control 17, the virtual reality device 500 returns to displaying the previous-level keyboard control 6.
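The layer switching described above (symbol/numeric keyboards as next-level controls, with the return button 9 restoring the previous level and the case switching key swapping the letter layer in place) can be sketched as a small stack. Layer names are illustrative.

```python
class KeyboardLayers:
    def __init__(self):
        # Keyboard control 6 with lowercase letter keys is shown first.
        self.stack = ["letters_lower"]

    @property
    def current(self):
        return self.stack[-1]

    def switch(self, layer):
        # Symbol switching key 10 / number switching key 11 push a
        # next-level keyboard control.
        self.stack.append(layer)

    def toggle_case(self):
        # Case switching key 12 swaps the letter layer in place.
        if self.current == "letters_lower":
            self.stack[-1] = "letters_upper"
        elif self.current == "letters_upper":
            self.stack[-1] = "letters_lower"

    def back(self):
        # Return button 9 restores the previous-level keyboard control.
        if len(self.stack) > 1:
            self.stack.pop()

kb = KeyboardLayers()
kb.switch("symbols")   # symbol keyboard control 16
kb.switch("numbers")   # numeric keyboard control 17
kb.back()              # back to symbols
kb.back()              # back to lowercase letters
print(kb.current)      # letters_lower
```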
When the target key represents a case switching operation, the controller switches the lowercase letter keys on the currently displayed keyboard control 6 to uppercase letter keys, and then the user can select a specific key on the switched keyboard control 6.
FIG. 18 illustrates a schematic diagram of a keyboard control with capital letter keys in a rendered scene, according to some embodiments. After the user positions the focus cursor 5 over the case shift key 12 on the keyboard control 6 shown in fig. 9, the content confirmation control 7 is displayed in the rendering scene, and after the user positions the focus cursor 5 over the content confirmation control 7, the virtual reality device 500 switches the currently displayed keyboard control 6 shown in fig. 9 to the keyboard control 6 with the uppercase letter keys as shown in fig. 18.
After the keyboard control 6 has been displayed in the current rendering scene, if the user wants to dismiss the keyboard control 6 or has finished inputting, the user may move the focus cursor 5 to a position outside the keyboard control 6. In this process, the controller of the virtual reality device 500 may be further configured to perform the following steps:
in step S601, when the focus cursor 5 is positioned at a position other than the keyboard control 6, the content hiding control 18 is displayed in the rendered scene.
FIG. 19 illustrates a schematic diagram of content hiding controls in a rendered scene according to some embodiments. As shown in fig. 19, when the user positions the focus cursor 5 anywhere outside the keyboard control 6, the virtual reality device 500 will display a content hiding control 18, on which a prompt such as "hide keyboard, focus here" may be displayed. If the user wants to dismiss the keyboard control 6, the user can continue to position the focus cursor 5 over the content hiding control 18.
In addition, to avoid the content hiding control 18 being displayed immediately when the user accidentally moves the focus cursor 5 out of the keyboard control 6 during input, in some embodiments a third preset time may also be set to determine whether to display the content hiding control 18.
For example, upon detecting that the focus cursor 5 has moved outside the keyboard control 6, the virtual reality device 500 may continue timing how long the focus cursor 5 moves or dwells outside the keyboard control 6. If that time exceeds the third preset time, the virtual reality device 500 displays the content hiding control 18.
In step S602, when the focus cursor 5 is positioned on the content hiding control 18, the keyboard control 6 is controlled to disappear.
In addition, the user may want to cancel the keyboard control when using the symbol keyboard control 16 or the numeric keyboard control 17 described above. In this case, the user may still move the focus cursor 5 to a position outside of the symbol keyboard control 16 or the numeric keyboard control 17, and the virtual reality device 500 will also display the content hiding control 18 in the rendered scene.
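The debounce in the hide flow (show the content hiding control 18 only after the cursor has stayed outside the keyboard long enough, per the third preset time) can be sketched as follows. The 0.5 s threshold is an illustrative assumption.

```python
THIRD_PRESET_TIME = 0.5  # seconds; hypothetical threshold

def hide_control_shown(outside_keyboard, dwell_seconds):
    # Step S601 with debounce: display the content hiding control only
    # after the cursor has remained outside the keyboard control for at
    # least the third preset time, so a brief slip does not trigger it.
    return outside_keyboard and dwell_seconds >= THIRD_PRESET_TIME

print(hide_control_shown(True, 0.2))   # False: likely a misoperation
print(hide_control_shown(True, 0.9))   # True: show the hiding control
```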
As can be seen from the above, when the user controls the focus cursor 5 to be positioned on the target key, if the target key is a key having character content, the virtual reality device 500 can input the character content corresponding to the target key into the input box of the web page. The mode of selecting the target key through the focus cursor 5 positioning simulates the process that the user clicks the target key through the remote controller or the somatosensory handle, avoids the user frequently clicking the target key manually, and greatly improves the interaction experience between the user and the virtual reality device 500.
To solve the above problem that the user needs to frequently and manually click the input method soft keyboard when inputting content into the input field, an embodiment of the present application further provides a content input method, which can be applied to the virtual reality device 500 of the foregoing embodiments and implemented by the controller of the virtual reality device 500. The method specifically includes the following steps:
in step S101, when the focus cursor 5 of the virtual reality device 500 is positioned on the target position, the keyboard control 6 is displayed in the rendered scene of the virtual reality device 500.
Wherein the keyboard control 6 is independent of the virtual user interface and at the front end of the virtual user interface.
Step S102, when the focus cursor 5 is positioned on the target key of the keyboard control 6, displaying a content confirmation control 7 in the rendered scene.
Wherein the content confirmation control 7 is close to the focus cursor 5 and at the front end of the keyboard control 6.
Step S103: when the focus cursor 5 is positioned on the content confirmation control 7, the character content corresponding to the target key is displayed in the input box.
Since the content input method in the embodiments of the present application can be applied to the foregoing virtual reality device 500, other details of the method can be found in the foregoing description of the virtual reality device 500 and are not repeated here. In addition, the content input method likewise spares the user from frequently clicking target keys manually and greatly improves the interaction experience between the user and the virtual reality device 500.
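The three-step flow S101–S103 can be sketched as a small state machine: dwelling on the target position shows the keyboard, dwelling on a key arms a pending character, and dwelling on the confirmation control commits that character to the input box. The following Python sketch is purely illustrative; the patent does not specify an implementation, and all names here (`InputFlow`, `on_cursor_at_key`, and so on) are invented for illustration.

```python
# Illustrative sketch of steps S101-S103 as a dwell-driven input flow.
# All class/method names are hypothetical; the patent specifies behavior only.

class InputFlow:
    def __init__(self):
        self.input_box = []          # characters entered so far
        self.keyboard_visible = False
        self.pending_key = None      # target key awaiting confirmation

    def on_cursor_at_target_position(self):
        # Step S101: cursor dwells on the target position -> show keyboard.
        self.keyboard_visible = True

    def on_cursor_at_key(self, key):
        # Step S102: cursor dwells on a key -> show the content confirmation
        # control next to the cursor; here that is modeled as arming the key.
        if self.keyboard_visible:
            self.pending_key = key

    def on_cursor_at_confirmation(self):
        # Step S103: cursor dwells on the confirmation control ->
        # commit the pending key's character to the input box.
        if self.pending_key is not None:
            self.input_box.append(self.pending_key)
            self.pending_key = None

    def text(self):
        return "".join(self.input_box)
```

For example, positioning the cursor on the "h" key and then on the confirmation control appends "h" to the input box; moving to another key before confirming simply replaces the pending character.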
The foregoing detailed description of the embodiments merely illustrates the general principles of the present application and should not be taken as limiting its scope in any way. Any other embodiments derived from the present application by those skilled in the art without inventive effort fall within the scope of protection of the present application.

Claims (10)

1. A virtual reality device, comprising:
a display configured to display a virtual user interface;
a controller configured to:
when a focus cursor of a virtual reality device is positioned on a target position, displaying a keyboard control in a rendering scene of the virtual reality device; the keyboard control is independent of the virtual user interface and is positioned at the front end of the virtual user interface;
when the focus cursor is positioned on a target key of the keyboard control, displaying a content confirmation control in the rendering scene; the content confirmation control is close to the focus cursor and is positioned at the front end of the keyboard control;
and when the focus cursor is positioned on the content confirmation control, displaying the character content corresponding to the target key in an input box.
2. The virtual reality device of claim 1, wherein the controller is further configured to:
when the virtual user interface displays a blank webpage, determining the position of the focus cursor;
when the focus cursor is positioned to an input box at the top of the blank webpage, displaying a content display control in the rendering scene; the content display control is close to the focus cursor and is positioned at the front end of the virtual user interface;
and displaying a keyboard control in the rendering scene when the focus cursor is positioned on the content display control.
3. The virtual reality device of claim 2, wherein the controller is further configured to:
when the focus cursor is positioned to an input box at the top of the blank webpage, determining whether the first stay time of the focus cursor is greater than or equal to a first preset time;
and if the first stay time is greater than or equal to a first preset time, displaying a content display control in the rendering scene.
4. The virtual reality device of claim 2, wherein the controller is further configured to:
after the content display control is displayed in the rendering scene, if the focus cursor is not positioned on the content display control, determining whether a second stay time of the focus cursor on the current position is greater than or equal to a second preset time;
and if the second stay time is greater than or equal to the second preset time, controlling the content display control to disappear.
5. The virtual reality device of claim 1, wherein the controller is further configured to:
when the focus cursor is positioned on the content confirmation control, determining the content represented by the target key;
and when the content represented by the target key is character content, inputting the character content corresponding to the target key into the input box.
6. The virtual reality device of claim 5, wherein the controller is further configured to:
and deleting the last character content in the input box when the content represented by the target key is a deleting operation.
7. The virtual reality device of claim 5, wherein the controller is further configured to:
when the content represented by the target key is a confirmation operation, controlling the virtual user interface to display the target webpage represented by the website address in the input box; the website address is composed of all the character content in the input box.
8. The virtual reality device of claim 1, wherein the controller is further configured to:
when the focus cursor is positioned at a position outside the keyboard control, displaying a content hiding control in the rendering scene;
and when the focus cursor is positioned on the content hiding control, controlling the keyboard control to disappear.
9. A method of content input, the method comprising:
when a focus cursor of a virtual reality device is positioned on a target position, displaying a keyboard control in a rendering scene of the virtual reality device; the keyboard control is independent of the virtual user interface and is positioned at the front end of the virtual user interface;
when the focus cursor is positioned on a target key of the keyboard control, displaying a content confirmation control in the rendering scene; the content confirmation control is close to the focus cursor and is positioned at the front end of the keyboard control;
and when the focus cursor is positioned on the content confirmation control, displaying the character content corresponding to the target key in an input box.
10. The method according to claim 9, wherein the method further comprises:
when the virtual user interface displays a blank webpage, determining the position of the focus cursor;
When the focus cursor is positioned to an input box at the top of the blank webpage, displaying a content display control in the rendering scene; the content display control is close to the focus cursor and is positioned at the front end of the virtual user interface;
and displaying a keyboard control in the rendering scene when the focus cursor is positioned on the content display control.
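The dwell-time rules in claims 3 and 4 (show the content display control once the cursor's first stay time reaches a first preset time; hide it again once a second stay time elsewhere reaches a second preset time) can be illustrated as a small decision function. This is a hypothetical sketch: the function name, parameters, and the two threshold values are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of the dwell-time rules in claims 3 and 4.
# Thresholds and names are assumptions; the patent leaves them unspecified.

FIRST_PRESET_SECONDS = 1.0   # assumed first preset time (show the control)
SECOND_PRESET_SECONDS = 2.0  # assumed second preset time (hide the control)


def update_display_control(visible, cursor_on_input_box, cursor_on_control,
                           stay_seconds):
    """Return whether the content display control should be visible.

    visible             -- current visibility of the control
    cursor_on_input_box -- focus cursor is positioned on the input box
    cursor_on_control   -- focus cursor is positioned on the control itself
    stay_seconds        -- how long the cursor has stayed at its position
    """
    if not visible:
        # Claim 3: show the control once the first stay time on the
        # input box reaches the first preset time.
        return cursor_on_input_box and stay_seconds >= FIRST_PRESET_SECONDS
    if not cursor_on_control:
        # Claim 4: if the cursor is not on the control, hide it once the
        # second stay time reaches the second preset time.
        return stay_seconds < SECOND_PRESET_SECONDS
    return True
```

In practice such a function would be evaluated every frame of the rendered scene, with `stay_seconds` reset whenever the focus cursor moves to a new position.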
CN202111464920.5A 2021-12-03 2021-12-03 Virtual reality equipment and content input method Pending CN116225205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111464920.5A CN116225205A (en) 2021-12-03 2021-12-03 Virtual reality equipment and content input method


Publications (1)

Publication Number Publication Date
CN116225205A true CN116225205A (en) 2023-06-06

Family

ID=86575514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111464920.5A Pending CN116225205A (en) 2021-12-03 2021-12-03 Virtual reality equipment and content input method

Country Status (1)

Country Link
CN (1) CN116225205A (en)

Similar Documents

Publication Publication Date Title
CN110692031B (en) System and method for window control in a virtual reality environment
CN110636354A (en) Display device
CN114286142B (en) Virtual reality equipment and VR scene screen capturing method
CN112073798B (en) Data transmission method and equipment
WO2021088888A1 (en) Focus switching method, and display device and system
CN112732089A (en) Virtual reality equipment and quick interaction method
CN105812945A (en) Information input method, device and smart terminal
CN114442872A (en) Layout and interaction method of virtual user interface and three-dimensional display equipment
US11900530B1 (en) Multi-user data presentation in AR/VR
CN114302221B (en) Virtual reality equipment and screen-throwing media asset playing method
CN116260999A (en) Display device and video communication data processing method
CN112788378B (en) Display device and content display method
CN111385631B (en) Display device, communication method and storage medium
CN114286077B (en) Virtual reality device and VR scene image display method
CN116225205A (en) Virtual reality equipment and content input method
WO2022083554A1 (en) User interface layout and interaction method, and three-dimensional display device
WO2020248682A1 (en) Display device and virtual scene generation method
WO2009031102A2 (en) Apparatus and method for quick navigation between recommendation sets in a tv content discovery system
CN115129280A (en) Virtual reality equipment and screen-casting media asset playing method
CN112788375B (en) Display device, display method and computing device
CN112905007A (en) Virtual reality equipment and voice-assisted interaction method
CN116540905A (en) Virtual reality equipment and focus operation method
CN116126175A (en) Virtual reality equipment and video content display method
CN116149517A (en) Virtual reality equipment and interaction method of virtual user interface
CN116132656A (en) Virtual reality equipment and video comment display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination