CN111161396A - Virtual content control method and device, terminal equipment and storage medium


Info

Publication number
CN111161396A
Authority
CN
China
Prior art keywords: content, dimensional, dimensional model, virtual, target
Prior art date
Legal status
Granted
Application number
CN201911137088.0A
Other languages
Chinese (zh)
Other versions
CN111161396B (en)
Inventor
Zhixiong Lu
Jingwen Dai
Jie He
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201911137088.0A
Publication of CN111161396A
Application granted
Publication of CN111161396B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual content control method and device, a terminal device, and a storage medium, relating to the field of display technology. The method comprises the following steps: acquiring relative spatial position information between the terminal device and an interactive device; rendering a virtual three-dimensional model according to the relative spatial position information, where the position at which the three-dimensional model is superimposed in real space lies in a region outside the interaction area; generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, where the display position of the two-dimensional plane content corresponds to the interaction area; receiving operation data sent by the interactive device according to a touch operation detected in the interaction area; acquiring, according to the operation data, the first target content of the two-dimensional plane content on which a control operation is performed; and acquiring the second target content corresponding to the first target content in the three-dimensional model, and performing the control operation on the second target content. The method enables a three-dimensional model to be controlled by operating the two-dimensional plane content corresponding to it.

Description

Virtual content control method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a method and an apparatus for controlling virtual content, a terminal device, and a storage medium.
Background
In recent years, with advances in science and technology, technologies such as augmented reality (AR) and virtual reality (VR) have become active research topics worldwide. Through the virtual images displayed by AR/VR, users can be immersed in a virtual world and interact with it. However, in AR/VR technology, interacting with three-dimensional virtual content in the virtual world is difficult, especially controlling a specific piece of virtual content. How to improve the user's interaction with virtual content is therefore an important research direction for AR and VR.
Disclosure of Invention
The embodiments of the application provide a virtual content control method and device, a terminal device, and a storage medium, which enable a three-dimensional model to be controlled by operating the two-dimensional plane content corresponding to the virtual three-dimensional model, improving the interactivity between the user and virtual content.
In a first aspect, an embodiment of the application provides a method for controlling virtual content, applied to a terminal device, where the terminal device is connected to an interactive device and the interactive device includes an interaction area. The method includes: acquiring relative spatial position information between the terminal device and the interactive device; rendering a virtual three-dimensional model according to the relative spatial position information, where the position at which the three-dimensional model is superimposed in real space lies in a region outside the interaction area; generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, where the display position of the two-dimensional plane content corresponds to the interaction area; receiving operation data sent by the interactive device according to a touch operation detected in the interaction area; acquiring, according to the operation data, the first target content of the two-dimensional plane content on which a control operation is performed; and acquiring the second target content corresponding to the first target content in the three-dimensional model, and performing the control operation on the second target content.
In a second aspect, an embodiment of the application provides an apparatus for controlling virtual content, applied to a terminal device connected to an interactive device that includes an interaction area. The apparatus comprises an information acquisition module, a model rendering module, a two-dimensional generation module, an operation receiving module, a content acquisition module, and a target control module. The information acquisition module acquires relative spatial position information between the terminal device and the interactive device. The model rendering module renders a virtual three-dimensional model according to the relative spatial position information, so that the position at which the three-dimensional model is superimposed in real space lies in a region outside the interaction area. The two-dimensional generation module generates virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, with the display position of the two-dimensional plane content corresponding to the interaction area. The operation receiving module receives operation data sent by the interactive device according to a touch operation detected in the interaction area. The content acquisition module acquires, according to the operation data, the first target content of the two-dimensional plane content on which a control operation is performed. The target control module acquires the second target content corresponding to the first target content in the three-dimensional model and performs the control operation on the second target content.
In a third aspect, an embodiment of the application provides a display system comprising a terminal device and an interactive device connected to it, where the interactive device includes an interaction area. The terminal device acquires relative spatial position information between itself and the interactive device, renders a virtual three-dimensional model according to that information so that the position at which the model is superimposed in real space lies in a region outside the interaction area, and generates virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, with the display position of the two-dimensional plane content corresponding to the interaction area. The interactive device controls the interaction area to display the two-dimensional plane content; it also detects touch operations through the interaction area and sends operation data to the terminal device when a touch operation is detected. The terminal device further receives the operation data, acquires from it the first target content of the two-dimensional plane content on which a control operation is performed, acquires the second target content corresponding to the first target content in the three-dimensional model, and performs the control operation on the second target content.
In a fourth aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method of controlling virtual content as provided by the first aspect above.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the control method for virtual content provided in the first aspect.
According to the solutions provided by the embodiments of the application, relative spatial position information between the terminal device and the interactive device is acquired, and a virtual three-dimensional model is rendered according to that information so that the position at which the model is superimposed in real space lies in a region outside the interaction area. Virtual two-dimensional plane content corresponding to the three-dimensional model is then generated based on the relative spatial position information, with its display position corresponding to the interaction area. Operation data sent by the interactive device according to a touch operation detected in the interaction area is then received; from it, the first target content of the two-dimensional plane content on which a control operation is performed is acquired, the second target content corresponding to the first target content in the three-dimensional model is acquired, and the control operation is performed on the second target content. In this way, the three-dimensional model can be controlled by operating, through the two-dimensional interaction area of the touch-enabled interactive device, the virtual two-dimensional plane content corresponding to it. At the same time, the controlled target content in the three-dimensional model can be precisely located through the two-dimensional plane content, achieving precise control of a three-dimensional model through a two-dimensional interaction method, improving the interaction effect, and enhancing the interactivity between the user and the virtual content.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows a flowchart of a method for controlling virtual content according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 4 shows another display effect diagram according to an embodiment of the application.
Fig. 5 shows a flowchart of a method of controlling virtual content according to another embodiment of the present application.
Fig. 6 shows a flowchart of step S230 in the control method of virtual content according to the embodiment of the present application.
Fig. 7 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 8 shows a flowchart of step S233 in the control method of virtual content according to the embodiment of the present application.
Fig. 9 shows another display effect diagram according to an embodiment of the application.
Fig. 10 is a schematic diagram illustrating still another display effect according to an embodiment of the application.
Fig. 11 is a schematic diagram illustrating a further display effect according to an embodiment of the application.
Fig. 12 is a flowchart illustrating a method of controlling virtual content according to still another embodiment of the present application.
Fig. 13 is a schematic diagram illustrating a display effect according to an embodiment of the application.
Fig. 14 shows a flowchart of step S360 in the control method of virtual content according to the embodiment of the present application.
Fig. 15 is a flowchart illustrating a method of controlling virtual content according to still another embodiment of the present application.
Fig. 16 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 17 is a flowchart illustrating a method of controlling virtual content according to still another embodiment of the present application.
Fig. 18 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 19 shows a block diagram of a control apparatus of virtual content according to an embodiment of the present application.
Fig. 20 is a block diagram of a display for executing a control method of virtual content according to an embodiment of the present application.
Fig. 21 is a block diagram of a terminal device for executing a control method of virtual content according to an embodiment of the present application.
Fig. 22 is a storage unit for storing or carrying program codes for implementing a control method of virtual content according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, a display system provided in an embodiment of the present application is shown, including a terminal device 100 and an interaction device 200, where the terminal device 100 is in communication connection with the interaction device 200.
In this embodiment, the terminal device 100 may be a head-mounted display device; the head-mounted display device may be an integrated unit, or one connected to an external electronic device. The terminal device 100 may also be an intelligent terminal such as a mobile phone connected to an external head-mounted display device; that is, the terminal device 100 can act as the processing and storage device of the head-mounted display device, be inserted into or connected to the external head-mounted display device, and display the virtual content 300 through it.
In some embodiments, the interactive device 200 may be an electronic device provided with markers 201; the number of markers 201 on the interactive device 200 is not limited and may be one or more. The specific form, structure, and size of the interactive device 200 are also not limited: it may be square, circular, or another shape, and may be, for example, a flat electronic device. In some embodiments, the marker 201 may be integrated into the interactive device 200 or adhesively attached to it.
The terminal device 100 and the interactive device 200 may be connected through wireless communication methods such as Bluetooth, Wi-Fi, or ZigBee, or through wired methods such as a data cable; the embodiments of the application do not limit the connection method between the two devices.
When the terminal device 100 and the interactive device 200 are in use, the marker 201 can be placed within the visual range of the image sensor on the terminal device 100, so that an image containing the marker 201 is captured. The captured image is identified and tracked to obtain spatial position information, such as the position and posture of the marker 201 relative to the terminal device 100, as well as identification results such as the marker's identity information. From this, the position and posture of the interactive device 200 relative to the terminal device 100 are obtained, achieving positioning and tracking of the interactive device 200. The terminal device 100 may then display corresponding virtual content according to this relative position and posture information.
In some embodiments, the interactive device 200 is provided with at least one interaction area 202 through which the user can perform control and interaction. The interaction area 202 may include a touch pad or a touch screen. The interactive device 200 may generate a control instruction corresponding to a control operation detected in the interaction area 202 and perform the related control, and may send that control instruction to the terminal device 100; alternatively, it may generate operation data according to the operation detected in the interaction area and send the operation data to the terminal device 100. When the terminal device 100 receives a control instruction sent by the interactive device 200, it may control the display of the virtual content accordingly (e.g., making the virtual content rotate or move).
In some embodiments, the interactive device 200 may also be a mobile terminal with a touch screen, such as a smartphone or tablet computer, whose touch screen can both display images and receive input. In one embodiment, the marker 201 may be arranged on the housing of the mobile terminal, displayed on its touch screen, or attached as an accessory, for example (but not limited to) plugged in through a USB or headset interface. For example, referring again to fig. 1, the terminal device 100 is a head-mounted display device; by wearing it, the user observes the virtual automobile model 300 displayed superimposed outside the interactive device 200, and the virtual image 400 corresponding to the virtual automobile model 300 displayed superimposed on the interactive device 200. Of course, when the interactive device 200 is a tablet computer, the image 400 corresponding to the virtual automobile model 300 may be displayed on the tablet computer's screen.
Based on the display system, the embodiment of the application provides a control method of virtual content, which is applied to terminal equipment and interactive equipment of the display system. A specific control method of the virtual content will be described below.
Referring to fig. 2, an embodiment of the present application provides a method for controlling virtual content, which is applicable to the terminal device, where the terminal device is connected to an interactive device, and the interactive device includes an interactive area, where the method may include:
step S110: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
In the embodiment of the application, the terminal device may obtain relative spatial position information between the terminal device and the interactive device, so that the terminal device renders corresponding virtual content according to the relative spatial position information. The relative spatial position information between the terminal device and the interactive device may include relative position information between the terminal device and the interactive device, posture information, and the like, and the posture information may be an orientation, a rotation angle, and the like of the interactive device relative to the terminal device.
As one implementation, the terminal device may capture an image of the marker on the interactive device through its image sensor, identify and track the marker in that image, and so acquire the relative spatial position information between the terminal device and the interactive device, from which information such as the rotation and orientation of the terminal device can be obtained. In some embodiments, the marker may be a pattern having a topology, where the topology refers to the connectivity between sub-markers, feature points, and the like within the marker.
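As a rough illustration of this marker-based approach, the sketch below estimates the interactive device's pose relative to the terminal's camera with OpenCV's solvePnP. It is a minimal sketch, not the patent's implementation: detect_marker_corners is a hypothetical corner detector, and the calibration inputs are assumed to be available.

```python
import cv2
import numpy as np

def estimate_relative_pose(image, marker_corners_3d, camera_matrix, dist_coeffs):
    """Estimate the interactive device's pose relative to the terminal's camera."""
    corners_2d = detect_marker_corners(image)  # hypothetical marker corner detector
    if corners_2d is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(
        marker_corners_3d,   # Nx3 corner positions in the marker's own frame
        corners_2d,          # Nx2 matching pixel coordinates in the camera image
        camera_matrix,
        dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return rotation, tvec              # relative orientation and translation
```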
In some embodiments, the interactive device may instead include light spots and an inertial measurement unit (IMU). The terminal device may capture an image of the light spots on the interactive device through the image sensor, acquire measurement data through the inertial measurement unit, and determine the relative spatial position information between the terminal device and the interactive device from the light-spot image and the measurement data, thereby positioning and tracking the interactive device. The light spots on the interactive device may be visible or infrared, and there may be a single light spot or a sequence of several light spots.
Of course, the specific manner of acquiring the relative spatial location information between the terminal device and the interactive device may not be limited in this embodiment of the application. For example, the terminal device may also determine the current position and posture of the terminal device relative to the interactive device only through the IMU of the interactive device and a motion prediction algorithm.
Step S120: and rendering a virtual three-dimensional model according to the relative space position information, wherein the position of the three-dimensional model superposed in the real space is positioned in a region outside the interaction region.
In some embodiments, after the terminal device obtains the constructed virtual three-dimensional model, it may derive the model's spatial position relative to the terminal device from the relative spatial position information between the terminal device and the interactive device together with the positional relationship between the model to be displayed and the interactive device, thereby obtaining the model's rendering position. The terminal device can render the virtual three-dimensional model in virtual space at that rendering position, so that the position at which the model is superimposed in real space lies in a region outside the interaction area of the interactive device. The virtual content is thus displayed in virtual space according to the spatial position of the interactive device relative to the terminal device, and the user observes, through the display lens of the head-mounted display device, the virtual content superimposed on the real scene. For example, referring again to fig. 1, by wearing the head-mounted display device the user can observe the virtual automobile model 300 superimposed in real space in the area in front of the interactive device 200.
In some embodiments, the rendering position may be the three-dimensional spatial coordinates of the virtual three-dimensional model in virtual space, expressed either with the virtual camera as the origin (which can be regarded as taking the human eye as the origin) or in world coordinates established from the world-coordinate origin of the virtual space.
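A minimal sketch of how such a rendering position might be derived from the tracked pose: compose the pose from the previous step with a fixed offset that anchors the model outside the interaction area. The offset value and names here are assumptions for illustration.

```python
import numpy as np

# Offset anchoring the model outside the interaction area, expressed in the
# interactive device's own frame; the value is an arbitrary assumption.
MODEL_OFFSET = np.array([0.0, 0.15, 0.30])

def model_render_position(device_rotation, device_translation):
    """Map the model anchor from the device frame into the terminal's camera frame.

    device_rotation: 3x3 rotation matrix; device_translation: 3-vector.
    """
    return device_rotation @ MODEL_OFFSET + device_translation
```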
Step S130: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interactive area.
In conventional AR/VR display technology, a user usually controls a virtual three-dimensional model through a controller such as a handle, or through gestures, for example to translate, rotate, or zoom the model. However, handle-based control is cumbersome, while gesture control depends on gesture recognition, whose algorithms are complex and whose accuracy is poor. A two-dimensional interactive device (such as a touch pad) can interact with the virtual three-dimensional model more efficiently and quickly than a handle or gestures, but it cannot effectively touch the three-dimensional model directly. In the embodiments of the application, the terminal device may therefore generate virtual two-dimensional plane content corresponding to the three-dimensional model, so that the user can control the model in virtual space by operating that plane content. Specifically, the terminal device may generate the virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information between the terminal device and the interactive device, with the display position of the two-dimensional plane content corresponding to the interaction area.
In some embodiments, the virtual two-dimensional plane content corresponding to the three-dimensional model may be content rendered from the model at different viewing angles. As one embodiment, the two-dimensional plane content may be what the human eye sees when viewing the three-dimensional model through the head-mounted display device. From the relative spatial position information between the terminal device and the interactive device, the terminal device can obtain its own current position and rotation angle, determine the relative position of the human eye and the three-dimensional model, and thereby obtain the content currently visible when the eye observes the model, which yields the two-dimensional plane content.
In another embodiment, the two-dimensional plane content may be what is seen when the three-dimensional model is observed from any chosen point in virtual space as the viewpoint. For example, a point directly facing the three-dimensional model may serve as the viewpoint, in which case the two-dimensional plane content obtained is the main (front) view of the model. The embodiments of the application do not limit the specific two-dimensional plane content, provided it can represent all or part of the structural shape of the three-dimensional model. For example, the two-dimensional plane content may also be a three-view drawing, a six-view drawing, a perspective view, or an axonometric view of the three-dimensional model.
In some embodiments, the display position of the two-dimensional plane content corresponding to the interaction area may mean that the terminal device renders the virtual two-dimensional plane content at the interaction area of the interactive device. As one approach, the terminal device may obtain the spatial position of the two-dimensional plane content relative to the terminal device based on the relative spatial position information between the terminal device and the interactive device and the positional relationship between the content to be displayed and the interactive device (for example, the content being fixedly superimposed on the interaction area in real space), thereby obtaining the rendering position of the two-dimensional plane content. The terminal device can then render the virtual two-dimensional plane content in virtual space at that rendering position, so that its superimposed position in real space corresponds to the interactive device.
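To illustrate the idea of flattening the model into plane content, a toy perspective projection is sketched below; it assumes camera-space vertices with positive depth and stands in for the full render pass an actual implementation would use.

```python
import numpy as np

def project_to_plane(vertices, focal_length=1.0):
    """Perspective-project Nx3 camera-space vertices (z > 0) to Nx2 plane coordinates."""
    v = np.asarray(vertices, dtype=float)
    z = np.clip(v[:, 2], 1e-6, None)          # guard against division by zero
    return focal_length * v[:, :2] / z[:, None]

# e.g. three model vertices expressed in the chosen viewpoint's camera frame
print(project_to_plane([[0.1, 0.2, 1.0], [-0.3, 0.1, 2.0], [0.0, -0.2, 1.5]]))
```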
In other embodiments, the interaction area may include a touch screen, and the display position of the two-dimensional plane content corresponding to the interaction area may mean that the content is displayed on the touch screen of the interactive device. As one approach, the terminal device may send content data corresponding to the virtual two-dimensional plane content to the interactive device; the interactive device receives the content data, generates screen content from it, and controls the touch screen to display that content, so that the two-dimensional plane content appears on the interactive device's touch screen.
In some embodiments, the two-dimensional plane content may be displayed in the same mode as the three-dimensional model, for example with the original model's texture. Other display modes are also possible, such as an outline (stroke) mode, an X-ray mode (showing the outline of an object occluded by a wall or another object), or a wireframe mode (showing the model as a mesh of lines).
Step S140: and receiving operation data sent by the interactive equipment according to the touch operation detected in the interactive area.
In some embodiments, the interaction area of the interactive device may include a two-dimensional touch area such as a touch pad or a touch screen, which can detect touch operations made by the user (e.g., single-finger click, single-finger slide, multi-finger click, multi-finger slide, long press, etc.); the touch may come from the user's finger or from a stylus, which is not limited here. When the interaction area detects the user's touch operation, the interactive device may generate operation data from it. The operation data may include the operation parameters of the touch operation detected in the interaction area.
In some embodiments, the operation parameter may include at least a touch coordinate of the touch operation in the interaction area. The touch coordinates may be two-dimensional coordinates in a plane coordinate system established by a touch pad or a touch screen of the interaction area, for example, an origin may be at an angular point of the touch pad or the touch screen (e.g., a point at a lower left corner). The touch coordinates corresponding to the touch operation may represent a position of the touch operation in the interaction area.
The operation parameters may also include other parameters, such as the type of touch operation (click, slide, long press, and the like), the number of fingers used, the pressing pressure, and the duration of the touch operation. The embodiments do not limit the specific operation data, which may include further parameters such as a sliding track or the frequency of a click operation.
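As an illustration only, the operation data described above might be packaged as follows before being sent to the terminal device; the field names are assumptions, not the patent's wire format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OperationData:
    touch_coords: Tuple[float, float]                 # position in the interaction area
    op_type: str = "click"                            # "click", "slide", "long_press", ...
    finger_count: int = 1                             # number of fingers involved
    pressure: float = 0.0                             # finger pressing pressure
    duration_ms: int = 0                              # how long the touch lasted
    slide_track: List[Tuple[float, float]] = field(default_factory=list)  # for slides
```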
After generating the operation data according to the touch operation detected in the interaction area, the interactive device can send the operation data to the terminal device. Correspondingly, the terminal device receives the operation data, determines from it the operated content within the two-dimensional plane content corresponding to the interaction area, and performs the related processing.
Step S150: according to the operation data, first target content of the two-dimensional plane content, on which the control operation is performed, is acquired.
In some embodiments, since the display position of the two-dimensional plane content corresponds to the interaction area, the terminal device may determine the first target content corresponding to the touch coordinates within the two-dimensional plane content from the positional relationship between the plane content and the interaction area together with the touch coordinates of the touch operation. The terminal device thereby acquires the first target content of the two-dimensional plane content on which the control operation is performed. The control operation may include, but is not limited to, rotating, moving, zooming, selecting, deselecting, hiding, and displaying.
As one embodiment, when the virtual two-dimensional plane content is rendered by the terminal device at the interaction area of the interactive device, the terminal device can read the touch coordinates of the touch operation from the operation data and derive from them the position of the touch relative to the interactive device. Using the relative spatial position information between the terminal device and the interactive device, it can then determine the position of the touch relative to the terminal device, which gives the coordinates of the touch position in virtual space. By matching those coordinates against the rendering coordinates of the two-dimensional plane content in virtual space, the first target content of the virtual two-dimensional plane content on which the control operation is performed can be determined.
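A simplified sketch of the hit test implied here: once the touch coordinates have been mapped into the plane content's coordinate system, the first target content can be found by testing element bounds. The element layout is hypothetical.

```python
def find_first_target(touch_xy, elements):
    """elements: iterable of (name, (x_min, y_min, x_max, y_max)) in pad coordinates."""
    x, y = touch_xy
    for name, (x0, y0, x1, y1) in elements:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name  # the first target content under the touch
    return None

# Example: selecting the front wheel in a car view (bounds are illustrative)
elements = [("front_wheel", (0.10, 0.05, 0.25, 0.20)),
            ("door",        (0.30, 0.10, 0.60, 0.45))]
print(find_first_target((0.18, 0.12), elements))  # -> "front_wheel"
```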
In some embodiments, when the first target content of the two-dimensional plane content on which the control operation is performed is determined, the first target content may be put into a hover state and highlighted, informing the user of the currently controlled object within the two-dimensional plane content.
It should be noted that, in the embodiment of the present application, even if the user wearing the head-mounted display device does not look at the two-dimensional plane content, the user can still control the two-dimensional plane content in the interaction area.
Step S160: and acquiring second target content corresponding to the first target content in the three-dimensional model, and controlling the second target content.
In the embodiments of the application, after determining the first target content on which the control operation is performed in the two-dimensional plane content, the terminal device may, according to the correspondence between the three-dimensional model and the two-dimensional plane content, take the second target content corresponding to the first target content in the three-dimensional model as the controlled content of the model and perform a control operation on it. The control operation performed on the second target content corresponds to the control operation on the first target content. In some embodiments, when the second target content is determined, it may be put into a hover state and highlighted to inform the user of the currently controlled object in the three-dimensional model.
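One plausible way to realize this correspondence is a shared part identifier linking each element of the plane content to a node of the three-dimensional model, as sketched below; the scene structure and identifiers are assumptions for illustration.

```python
# Shared part identifiers linking plane-content elements to model nodes (assumed).
CONTENT_TO_MODEL = {"front_wheel": "car/wheel_front", "door": "car/door_left"}

def apply_to_model(scene, first_target, operation):
    """Replay a control operation on the model node matching the touched element."""
    node_id = CONTENT_TO_MODEL.get(first_target)
    if node_id is not None:
        operation(scene[node_id])  # second target content: the corresponding 3D node

# e.g. apply_to_model(scene, "front_wheel", lambda node: node.set_highlight(True))
```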
In some embodiments, the control operation performed on the second target content may be identical to the control operation on the first target content. That is, when a first target content in the two-dimensional plane content undergoes operation A, the second target content corresponding to it in the three-dimensional model also undergoes operation A. For example, referring to fig. 3, the image 400 corresponding to the virtual automobile model 300 is displayed on the screen of the tablet computer; when the user selects the front wheel in the image 400, the virtual front wheel 301 of the virtual automobile model 300 superimposed in real space is highlighted to indicate the selected state. The user can thus clearly see the controlled part of the three-dimensional model through the head-mounted display device, ensuring accurate selection of the controlled object.
In other embodiments, the control operation performed on the second target content may instead be associated with the control operation on the first target content. For example, when a first target content in the two-dimensional plane content undergoes a selection operation, the corresponding second target content in the three-dimensional model may undergo both the selection and a zoom-in operation. Referring to fig. 4, the image 400 corresponding to the virtual automobile model 300 is displayed on the screen of the tablet computer; when the user selects the front wheel in the image 400, the virtual front wheel 301 of the virtual automobile model 300 superimposed in real space is enlarged and moved forward, so that the user clearly sees the controlled part of the three-dimensional model through the head-mounted display device, ensuring accurate selection of the controlled object.
As a specific implementation, when the user touches the interaction area of the interactive device with a finger and moves it, the whole two-dimensional plane content can be rotated and moved, and the whole three-dimensional model rotates and moves correspondingly. The user can therefore watch only the three-dimensional model in virtual space, without needing to look at the two-dimensional plane content in real time while operating, achieving a 'blind operation' effect with the finger.
In the method for controlling virtual content provided by the above embodiment, relative spatial position information between the terminal device and the interactive device is acquired, a virtual three-dimensional model is rendered according to that information, and the position at which the model is superimposed in real space lies in a region outside the interaction area. Virtual two-dimensional plane content corresponding to the three-dimensional model is then generated based on the relative spatial position information, with its display position corresponding to the interaction area. Operation data sent by the interactive device according to a touch operation detected in the interaction area is then received, and the first target content of the two-dimensional plane content on which the control operation is performed is acquired from the operation data. Finally, the second target content corresponding to the first target content in the three-dimensional model is acquired, and the control operation is performed on it. In this way, the virtual two-dimensional plane content corresponding to the three-dimensional model can be operated through the two-dimensional interaction area of the touch-enabled interactive device to control the three-dimensional model, while the controlled target content in the model is precisely located through the plane content. This achieves precise control of a three-dimensional model through a two-dimensional interaction method, improves the interaction effect, and enhances the interactivity between the user and the virtual content.
Referring to fig. 5, another embodiment of the present application provides a method for controlling virtual content, which is applied to a terminal device, and the method may include:
step S210: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
Step S220: and rendering a virtual three-dimensional model according to the relative space position information, wherein the position of the three-dimensional model superposed in the real space is positioned in a region outside the interaction region.
In the embodiment of the present application, step S210 and step S220 can refer to the foregoing embodiments, and are not described herein again.
Step S230: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interactive area.
In some embodiments, the two-dimensional plane content may correspond to the display angle and display posture of the three-dimensional model currently displayed by the terminal device. In one approach, the two-dimensional plane content is kept consistent with what the user sees when looking at the three-dimensional model through the head-mounted display device, so that when operating the two-dimensional plane content the user can accurately control the model against it. Specifically, referring to fig. 6, step S230 may include:
step S231: a left eye display image and a right eye display image for displaying the three-dimensional model are acquired.
In order to ensure that the two-dimensional plane content can be consistent with the content of the three-dimensional model when the user wears the head-mounted display device to look at the three-dimensional model, in some embodiments, the terminal device may acquire a left-eye display image and a right-eye display image for displaying the three-dimensional model, so as to determine the two-dimensional plane content through the left-eye display image and the right-eye display image.
In the embodiments of the application, when the head-mounted display device is in use, the left-eye display image produced by the image source is projected through the optical elements into the user's left eye, and the right-eye display image into the right eye. The left-eye and right-eye display images carry parallax, and after being fused by the user's brain they form a stereoscopic image, so that the user sees the display effect of a stereoscopic three-dimensional model.
In some embodiments, when displaying the virtual three-dimensional model, the head-mounted display device needs to render the virtual content according to the model's rendering coordinates. The rendering coordinates of the three-dimensional model may be the spatial coordinates of each of its points in a virtual space with the virtual camera as the origin. The virtual camera is a camera used in a 3D software system to simulate the viewpoint of the human eye; the apparent motion of the three-dimensional model in virtual space follows the motion of the virtual camera (that is, the motion of the head), and after rendering, the corresponding left-eye and right-eye display images are generated and projected onto the optical lenses to achieve three-dimensional display.
Specifically, the virtual cameras comprise a left virtual camera, simulating the left eye, and a right virtual camera, simulating the right eye. The rendering coordinates of the three-dimensional model therefore include left rendering coordinates in a first spatial coordinate system with the left virtual camera as origin, and right rendering coordinates in a second spatial coordinate system with the right virtual camera as origin. The head-mounted display device renders the three-dimensional model according to the left rendering coordinates to obtain the model's left-eye display image; similarly, rendering according to the right rendering coordinates yields the right-eye display image.
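A minimal sketch of the two virtual cameras, assuming a typical interpupillary distance: each eye's viewpoint is the head position shifted by half that distance, and render stands in for the real rendering pipeline.

```python
import numpy as np

IPD = 0.063  # assumed interpupillary distance in metres

def eye_position(head_position, eye):
    """Viewpoint of the left or right virtual camera: head position +/- half the IPD."""
    shift = np.array([-IPD / 2.0 if eye == "left" else IPD / 2.0, 0.0, 0.0])
    return np.asarray(head_position, dtype=float) + shift

# left_image  = render(model, eye_position(head_pos, "left"))   # 'render' is assumed
# right_image = render(model, eye_position(head_pos, "right"))
```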
Therefore, if the terminal device is a head-mounted display device, the terminal device can directly acquire the left-eye display image and the right-eye display image for displaying the three-dimensional model when displaying the virtual three-dimensional model. If the terminal device is an intelligent terminal such as a mobile phone connected with the head-mounted display device, when the head-mounted display device displays the virtual three-dimensional model, the left eye display image and the right eye display image for displaying the three-dimensional model can be transmitted to the terminal device, so that the terminal device can acquire the left eye display image and the right eye display image for displaying the three-dimensional model.
Step S232: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and a target display image, wherein the target display image comprises at least one of a left-eye display image and a right-eye display image.
In some embodiments, the terminal device may generate the virtual two-dimensional plane content corresponding to the three-dimensional model directly from the image data of the left-eye display image and the relative spatial position information between the terminal device and the interactive device, so that the plane content corresponds to the interaction area of the interactive device. The two-dimensional plane content is then consistent with what the user sees of the three-dimensional model when looking at it through the head-mounted display device. As one approach, the generated two-dimensional plane content may contain only the left-eye display image.
Alternatively, the generated two-dimensional plane content may include the left-eye display image and other views when the left-eye display image is the main view. Specifically, the terminal device may determine a display angle of the current three-dimensional model according to the left-eye display image, and use the display angle as an angle of the main view, so that the terminal device may determine angles of other views of the three-dimensional model, and further obtain other views when the left-eye display image is used as the main view. Therefore, when a user wears the head-mounted display device to look at the virtual three-dimensional model, the user can see parts such as the back, the side and the like which cannot be seen by the three-dimensional model through the two-dimensional plane contents, and meanwhile, the function of selecting a certain object of the parts can be realized, so that the interaction effect between the user and the virtual contents is improved.
In other embodiments, the terminal device may also generate the two-dimensional plane content directly according to the image data of the right-eye display image and the relative spatial position information, so that the two-dimensional plane content may be consistent with the content of the three-dimensional model seen when the user wears the head-mounted display device to look at the three-dimensional model. As one way, the generated two-dimensional plane content may contain only the right-eye display image. Alternatively, the generated two-dimensional plane content may include the right-eye display image and other views when the right-eye display image is the main view.
In still other embodiments, the terminal device may generate the two-dimensional plane content from the image data of both the left-eye and the right-eye display images together with the relative spatial position information, which is not limited here.
In some embodiments, the terminal device may update the two-dimensional plane content in time when a change in the relative spatial location information is detected. Specifically, referring to fig. 6 again, the method for controlling the virtual content may further include:
step S233: when the change of the relative spatial position information is detected, the rotation direction and the rotation angle of the terminal device relative to the reference direction are acquired based on the changed relative spatial position information.
In some embodiments, when the terminal device is a head-mounted display device, changes in the spatial position and posture of the wearer's head (lowering the head, tilting it, and so on) change the relative spatial position information between the terminal device and the interactive device. The three-dimensional model generated by rendering then differs, and the two-dimensional plane content generated to correspond to it must be updated as well, so that when the user views the model from a different angle the terminal device updates the plane content accordingly; this ensures that operating the two-dimensional plane content accurately controls the three-dimensional model at the current viewing angle. Therefore, when the terminal device detects that the relative spatial position information has changed, it can acquire the rotation direction and rotation angle of the terminal device relative to the reference direction based on the changed information. The reference direction is the baseline against which the terminal device's rotation direction and angle are determined.
In some embodiments, the reference direction may be the direction in which the head-mounted display device points toward the three-dimensional model when the model is first displayed. In one approach, this is the direction in which a two-dimensional view camera in virtual space points at a target point in that space. Specifically, before step S233, the control method may further include: when the three-dimensional model is rendered for the first time, acquiring the initial relative position between the two-dimensional view camera and the target point in virtual space, and taking the direction of that initial relative position as the reference direction, where the target point is any fixed point in virtual space. In some embodiments, the target point may be the center point of the three-dimensional model.
The two-dimensional view camera simulates the human viewing angle in virtual space: the apparent motion of the three-dimensional model follows the motion of the camera (that is, the head's motion), and the corresponding two-dimensional plane content is generated after rendering. In other words, the two-dimensional plane content may be rendered from the portion of the three-dimensional model 'seen' within the camera's field of view.
In one approach, the two-dimensional view camera may be the left virtual camera corresponding to the left eye, or the right virtual camera corresponding to the right eye, in which case the terminal device can obtain the two-dimensional plane content directly from the left-eye and/or right-eye display image (i.e., the target display image). In another approach, the two-dimensional view camera may be a virtual camera corresponding to the midpoint between the two eyes; the rendering coordinates of the three-dimensional model then further include new rendering coordinates in a third spatial coordinate system with the two-dimensional view camera as origin. After the head-mounted display device renders the three-dimensional model according to these new rendering coordinates, a new display image of the model, namely the two-dimensional plane content, is obtained.
In some embodiments, where the target point is the center point of the three-dimensional model, the initial relative position between the two-dimensional view camera and the target point at first rendering can be understood as the relative position P between the user's eye and the model when the user, wearing the head-mounted display device, looks at the model as first rendered. When the user's head turns downward to look at the two-dimensional plane content in the interaction area, P inevitably changes. The direction in which the two-dimensional view camera points at the target point at first rendering can therefore serve as the reference direction: the change of P is determined against it, and from that the rotation angle and rotation direction of the head-mounted display device are judged.
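A sketch of recording the reference direction at first render, under the assumption that the camera and target positions are available as 3D vectors:

```python
import numpy as np

def capture_reference_direction(camera_position, target_point):
    """Unit vector from the two-dimensional view camera to the target point."""
    v = np.asarray(target_point, dtype=float) - np.asarray(camera_position, dtype=float)
    return v / np.linalg.norm(v)  # stored at first render as the reference direction
```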
For example, referring to fig. 7, when the target point is the center point A of the three-dimensional model and the two-dimensional view camera is the right virtual camera 500 corresponding to the right eye in virtual space, the initial relative position between the camera and target point A can be understood as the relative position P between the user's eyes and the virtual automobile model 300 when the user looks at the model as first rendered; the direction of that initial relative position, i.e., the direction in which the camera points at target point A, is taken as the reference direction. From the relative position between the camera and target point A, the terminal device can determine what the camera's view range 600 'sees' of the virtual automobile model 300, namely one side of it in fig. 7, so the two-dimensional plane content rendered by the camera may be a two-dimensional image of that side of the model.
In this embodiment, referring to fig. 8, the step S233 of "acquiring the rotation direction and the rotation angle of the terminal device relative to the reference direction based on the changed relative spatial position information" may include:
step S2331: and acquiring the current position and the posture of the two-dimensional visual angle camera based on the changed relative spatial position information.
In some embodiments, when the terminal device is a head-mounted display device, the position of the user's eyes relative to the head-mounted display device worn on the head is fixed. Therefore, when the changed relative spatial position information is acquired, the terminal device may determine the current position of the eyes, and thereby the current position and posture of the two-dimensional view camera used to simulate them.
Step S2332: and acquiring the current relative position between the two-dimensional visual angle camera and the target point based on the target point and the current position and the posture of the two-dimensional visual angle camera.
The relative position between the two-dimensional view camera and the target point changes as the camera's position changes. Therefore, in some embodiments, the terminal device may obtain the current relative position between the two-dimensional view camera and the target point based on the target point and the camera's current position and posture, determine the specific change of the relative position from it, and thereby determine the rotation direction and rotation angle of the terminal device.
Step S2333: and acquiring the rotation direction and the rotation angle of the terminal equipment relative to the reference direction according to the initial relative position and the current relative position.
In some embodiments, the terminal device may determine its rotation direction and rotation angle with respect to the reference direction based on the initial relative position and the current relative position between the two-dimensional view camera and the target point. Specifically, the terminal device may determine, from the current relative position, the direction in which the two-dimensional view camera currently points to the target point, and then determine, from this direction and the reference direction, the rotation direction and rotation angle of the terminal device with respect to the reference direction.
For example, referring to fig. 7 and 9, after the head wearing the head-mounted display device rotates, the current relative position between the two-dimensional view camera and the target point A (fig. 9) changes relative to the initial relative position between them (fig. 7), and the terminal device may determine its rotation direction and rotation angle from the initial relative position and the current relative position. At this time, what the two-dimensional view camera can "see" is the top of the virtual automobile model 300.
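A sketch of steps S2331 to S2333 under simplifying assumptions: the camera-to-target vectors are known, the world y-axis points up, and the interactive device lies below the initial line of sight, so that a downward change of direction approximates "rotating toward the interactive device" (none of these names or conventions come from the original):

    import numpy as np

    def rotation_relative_to_reference(initial_rel, current_rel):
        """initial_rel / current_rel: (3,) vectors from the two-dimensional
        view camera to the target point, before and after the head moves.
        Returns the rotation angle in degrees relative to the reference
        direction and whether the rotation is downward."""
        ref_dir = initial_rel / np.linalg.norm(initial_rel)   # reference direction
        cur_dir = current_rel / np.linalg.norm(current_rel)   # current direction
        cos_angle = np.clip(ref_dir @ cur_dir, -1.0, 1.0)
        angle = float(np.degrees(np.arccos(cos_angle)))
        is_downward = cur_dir[1] < ref_dir[1]  # y-up convention (assumed)
        return angle, is_downward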
Step S234: and when the rotating direction faces the interactive equipment and the rotating angle is larger than a preset angle, recording a target display image corresponding to the relative spatial position information before change.
The preset angle may be the angle condition that the rotation angle needs to satisfy for the terminal device, while rotating toward the interactive device, to pass from having the virtual three-dimensional model within its field of view (model visible) to not having it (model invisible), that is, a rotation critical value. As one embodiment, take the reference direction as the horizontal direction (the user looks straight ahead at the virtual three-dimensional model when wearing the head-mounted display device). As the terminal device rotates toward the interactive device, the further it rotates, the less of the virtual three-dimensional model remains visible within its field of view. When the rotation angle relative to the horizontal direction reaches the preset angle (e.g., 45°), it can be approximately assumed that the user can no longer observe the virtual three-dimensional model through the head-mounted display device. The preset angle may be derived theoretically, set by the user, and stored in the terminal device in advance, or the terminal device may determine and update it automatically according to the actual display of the virtual three-dimensional model.
It can be understood that when the rotation direction of the terminal device is toward the interactive device and the rotation angle is greater than the preset angle, the user can be considered to be looking down at the two-dimensional plane content of the interaction region at a certain angle, so that the user may no longer see the three-dimensional model through the worn head-mounted display device and can only see the two-dimensional plane content. However, since the rendered three-dimensional model and the two-dimensional plane content are normally updated in real time with the position and posture of the head-mounted display device, the two-dimensional plane content currently seen by the user would be generated from the current pose of the head-mounted display device and correspond to the currently rendered three-dimensional model. That model may not be consistent with the three-dimensional model the user saw before lowering the head, which easily causes a visual error for the user.
In some embodiments, when the rotation direction is toward the interactive device and the rotation angle is greater than the preset angle, the terminal device may record the target display image corresponding to the relative spatial position information before the change, so that the two-dimensional plane content currently seen by the user is generated from the three-dimensional model as it was before the rotation. This ensures that the two-dimensional plane content remains consistent with the content of the three-dimensional model the user saw when looking straight at it through the head-mounted display device before lowering the head. The target display image corresponding to the relative spatial position information before the change may be the content "seen" within the field of view of the two-dimensional view camera before the relative spatial position information changed; it may be determined from the relative position and posture between the terminal device and the three-dimensional model, or more specifically from the relative position and posture between the two-dimensional view camera and the target point. In one embodiment, when the two-dimensional view camera is the left virtual camera or the right virtual camera, the target display image may be the left-eye display image or the right-eye display image. In another embodiment, when the two-dimensional view camera is the virtual camera corresponding to the center of the two eyes, the target display image may be the new display image.
Step S235: and generating virtual two-dimensional plane content corresponding to the three-dimensional model rendered according to the relative spatial position information before the change based on the changed relative spatial position information and the recorded target display image.
In some embodiments, after obtaining the target display image corresponding to the relative spatial position information before the change, the terminal device may generate the corresponding two-dimensional plane content from the target display image. For specific ways of generating two-dimensional plane content from a target display image, refer to the foregoing embodiments.
In some embodiments, the terminal device may determine the rendering position of the virtual two-dimensional planar content according to the changed relative spatial position information between the terminal device and the interactive device. The terminal device can render the virtual two-dimensional plane content in the virtual space according to the rendering position, so that the position of the virtual two-dimensional plane content superposed in the real space corresponds to the interactive device.
In other embodiments, the interactive area may include a touch screen, and the terminal device may send content data corresponding to the virtual two-dimensional plane content to the interactive device, so that the interactive device may control the touch screen to display the two-dimensional plane content.
In some embodiments, when the downward angle of the user's head is such that the user can see both the three-dimensional model and the two-dimensional plane content through the worn head-mounted display device, the terminal device may keep the two-dimensional plane content corresponding to the three-dimensional model currently seen. Specifically, referring to fig. 6 again, after the rotation direction and rotation angle of the terminal device relative to the reference direction are obtained, the method for controlling virtual content may further include:
step S236: when the rotating direction faces the interactive equipment and the rotating angle is smaller than a preset angle, recording a target display image corresponding to the changed relative spatial position information;
step S237: based on the changed relative spatial position information and the recorded target display image, virtual two-dimensional plane content corresponding to the three-dimensional model corresponding to the changed relative spatial position information is generated.
In some embodiments, when the rotation direction is toward the interactive device and the rotation angle is smaller than the preset angle, the user may be able to see both the three-dimensional model and the two-dimensional plane content through the worn head-mounted display device. The terminal device can therefore update the two-dimensional plane content in real time according to the position changes of the head-mounted display device, ensuring that the two-dimensional plane content stays consistent with the content of the three-dimensional model as seen when the user looks straight at it. Specifically, when the rotation direction of the terminal device relative to the reference direction is toward the interactive device and the rotation angle relative to the reference direction is smaller than the preset angle, the target display image corresponding to the changed relative spatial position information may be recorded in real time, so that the two-dimensional plane content is generated from the changed target display image and the changed relative spatial position information and corresponds to the interaction region.
In some embodiments, the terminal device may also fix the position of the two-dimensional view camera. For example, when the rotation direction is toward the interactive device and the rotation angle is smaller than the preset angle, the average value of the relative positions may be recorded, and the camera position corresponding to this average used as the fixed position of the two-dimensional view camera, which improves the accuracy of the user's control operations. Further, if the relative position between the two-dimensional view camera and the target point keeps changing over a long period, the fixed position of the two-dimensional view camera may be updated.
In some embodiments, when the rotation direction is toward the interactive device and the rotation angle is smaller than the preset angle, the user may also be able to see only the three-dimensional model through the worn head-mounted display device. In this case, the terminal device may likewise record the target display image corresponding to the changed relative spatial position information in real time, so as to generate two-dimensional plane content, corresponding to the interaction region, from the changed target display image and the changed relative spatial position information. As one implementation, if the rotation angle of the head-mounted display device is detected to reach the preset angle within a short time, that is, the user quickly lowers the head to view the two-dimensional plane content in the interaction region, the terminal device may generate the two-dimensional plane content from the target display image recorded before the rotation. In some embodiments, if the terminal device renders the two-dimensional plane content to the interaction region of the interactive device, the terminal device may skip generating the virtual two-dimensional plane content whenever the user cannot see it through the worn head-mounted display device, reducing the terminal device's processing load and power consumption.
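The freeze-or-track decision described in steps S234 to S237 can be condensed into a small sketch; the 45° threshold, the state dictionary, and the function names are illustrative assumptions, not part of the original method:

    PRESET_ANGLE = 45.0  # degrees; the example threshold mentioned above

    def image_for_plane_content(state, toward_device, angle, current_image):
        """Return the display image the two-dimensional plane content should
        be generated from for the current head pose."""
        if toward_device and angle > PRESET_ANGLE:
            # Model no longer visible: reuse the image recorded before the
            # rotation so the plane content matches what the user last saw,
            # even if the head was lowered quickly.
            return state.get("recorded_image", current_image)
        # Model still (partly) visible: track the head pose in real time.
        state["recorded_image"] = current_image
        return current_image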
Step S240: and receiving operation data sent by the interactive equipment according to the touch operation detected in the interactive area.
Step S250: according to the operation data, first target content of the two-dimensional plane content, on which the control operation is performed, is acquired.
In the embodiment of the present application, step S240 and step S250 can refer to the foregoing embodiments, and are not described herein again.
In some embodiments, the operation data may include a touch position, and when multiple items of content lie close together near the touch position in the two-dimensional plane content, the two-dimensional plane content may be enlarged so that the target content can be selected accurately. Specifically, before step S250, the method for controlling virtual content may further include: when the touch position corresponds to designated content in the two-dimensional plane content, generating virtual two-dimensional enlarged content corresponding to the enlarged two-dimensional plane content and displaying the two-dimensional enlarged content overlaid on the two-dimensional plane content. The designated content may be content corresponding to a dense content area in the two-dimensional plane content.
In this embodiment, step S250 may include:
when the touch position corresponds to the display position of the two-dimensional enlarged content, acquiring the first target content of the two-dimensional enlarged content on which the control operation is performed.
In some embodiments, a window for displaying the two-dimensional enlarged content may be generated over the two-dimensional plane content, so that the user can perform a touch operation within the window to select the first target content of the two-dimensional enlarged content for the control operation. When a touch operation is detected outside the window, or no touch operation is detected within the window for a preset period, it can be regarded as a cancel operation and the window is dismissed. Further, in some embodiments, to inform the user of the enlarged range, the terminal device may display the border of the window on the three-dimensional model in the virtual space while generating the window. For example, referring to fig. 10, the tablet computer overlays a window displaying the enlarged front wheel 410 on the displayed image 400, and at the same time a frame 310 is displayed at the virtual front wheel of the virtual automobile model 300.
In some embodiments, the enlarged two-dimensional plane content may be an enlargement of the content near the touch position, or an enlargement of the entire two-dimensional plane content, which is not limited here. As one embodiment, when the entire two-dimensional plane content is enlarged, the content near the touch position may be displayed in the central area of the window. As another embodiment, when only the content near the touch position is enlarged, the terminal device may further separate the content near the touch position into a list and rearrange it, so that the items no longer block one another and the user can select easily. For example, referring to fig. 11, the tablet computer overlays a window displaying the enlarged and rearranged model contents 430 on the displayed overlapping content, and a frame 310 is displayed around the virtual three-dimensional model.
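A minimal, self-contained sketch of this enlarge-and-rearrange flow; the Item type, coordinates, radii, and density threshold are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        x: float
        y: float

    def items_near(items, pos, radius=20.0):
        """All items within `radius` of the touch position."""
        return [i for i in items
                if (i.x - pos[0]) ** 2 + (i.y - pos[1]) ** 2 <= radius ** 2]

    def handle_touch(pos, plane_items, window_items=None, dense_threshold=3):
        """Returns (selected item or None, window items or None)."""
        if window_items is not None:
            # A window is open: a touch inside it selects, outside cancels.
            picked = items_near(window_items, pos, radius=10.0)
            return (picked[0] if picked else None), None
        crowd = items_near(plane_items, pos)
        if len(crowd) >= dense_threshold:
            # Dense region touched: rearrange the crowded items into a
            # non-overlapping row shown in a magnification window.
            window_items = [Item(i.name, 50.0 + 30.0 * k, 50.0)
                            for k, i in enumerate(crowd)]
            return None, window_items
        return (crowd[0] if crowd else None), None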
Step S260: and acquiring second target content corresponding to the first target content in the three-dimensional model, and controlling the second target content.
In the embodiment of the present application, step S260 may refer to the foregoing embodiments, and is not described herein again.
According to the control method of virtual content provided by this embodiment, the virtual two-dimensional plane content corresponding to the three-dimensional model can be generated from the left-eye display image and the right-eye display image that display the three-dimensional model. This ensures that the two-dimensional plane content currently seen by the user is consistent with the content of the three-dimensional model the user saw when looking straight at it through the head-mounted display device before lowering the head. The controlled target content in the three-dimensional model can thus be located accurately through the two-dimensional plane content, realizing precise control of the three-dimensional model with a two-dimensional interaction mode, improving the interaction effect between the user and the virtual content, and enhancing their interactivity.
Referring to fig. 12, another embodiment of the present application provides a method for controlling virtual content, where the method is applied to the terminal device, the terminal device is connected to an interactive device, and the interactive device includes an interactive area, and the method may include:
step S310: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
Step S320: and rendering a virtual three-dimensional model according to the relative space position information, wherein the position of the three-dimensional model superposed in the real space is positioned in a region outside the interaction region.
In the embodiment of the present application, step S310 and step S320 can refer to the foregoing embodiments, and are not described herein again.
Step S330: when a combination of a plurality of parts is detected in the three-dimensional model, a virtual hierarchical list corresponding to the plurality of parts is generated according to the hierarchical relationship between the plurality of parts and the plurality of parts.
In some embodiments, when it is detected that a combination of multiple components is included in the three-dimensional model, the terminal device may generate a hierarchical operation Interface (UI) corresponding to the three-dimensional model, so that a User may select a target component in the three-dimensional model through the hierarchical UI, which is convenient for operation. Specifically, when a combination of a plurality of components is detected in the three-dimensional model, the terminal device may generate a virtual hierarchical list corresponding to the plurality of components according to the hierarchical relationship between the plurality of components and the plurality of components. The hierarchical relationship may be an inclusive relationship from inside to outside, or a parallel relationship, and is not limited herein.
In some embodiments, other UI controls, such as various gizmos, may also be generated to manipulate the virtual two-dimensional plane content.
Step S340: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and the hierarchical list, wherein the two-dimensional plane content comprises the hierarchical list and plane content corresponding to the plurality of parts.
In some embodiments, the terminal device may generate virtual two-dimensional planar content corresponding to the three-dimensional model based on the relative spatial position information between the terminal device and the interaction device and the hierarchical list such that a display position of the two-dimensional planar content corresponds to the interaction region. The two-dimensional plane content comprises a hierarchical list and plane content corresponding to the plurality of parts. Since the hierarchical list corresponds to the plurality of components in the three-dimensional model, the plane contents corresponding to the plurality of components may also correspond to the hierarchical list, so that a user may select a target hierarchy from the hierarchical list through a touch operation in the interaction region, and control the target plane contents corresponding to the target hierarchy and the target components in the three-dimensional model. In some embodiments, the hierarchical list may also contain thumbnails of the flat content of the parts corresponding to each hierarchy.
For example, referring to fig. 13, when the virtual three-dimensional model 320 is a combination of 3 components, the two-dimensional plane content displayed on the tablet computer includes the plane content 420 corresponding to the 3 components and a hierarchical list 440.
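A sketch of building such a hierarchical list from a component tree; the dict-based model structure is an assumption, since the source does not specify how components are stored:

    def build_hierarchical_list(component, depth=0, out=None):
        """Flatten a component tree into hierarchical-list entries, ordered
        from the outermost component inward (the inside-to-outside inclusion
        relationship named above)."""
        if out is None:
            out = []
        out.append({"level": len(out), "depth": depth, "name": component["name"]})
        for child in component.get("children", []):
            build_hierarchical_list(child, depth + 1, out)
        return out

    # Illustrative 3-component model, loosely following the fig. 13 example:
    model = {"name": "shell", "children": [
        {"name": "frame", "children": [
            {"name": "core", "children": []}]}]}
    print(build_hierarchical_list(model))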
Step S350: and receiving operation data sent by the interactive equipment according to the touch operation detected in the interactive area.
Step S360: according to the operation data, first target content of the two-dimensional plane content, on which the control operation is performed, is acquired.
In the embodiment of the present application, step S350 and step S360 can refer to the foregoing embodiments, and are not described herein again.
In some embodiments, the hierarchical list may be hidden in the initial state and displayed when a specified touch operation by the user is detected, or when the touch position is detected to correspond to overlapping content in the two-dimensional plane content. The specified touch operation may be a long press, a hard press, or the like.
In some embodiments, referring to fig. 14, when the operation data includes a touch position, step S360 may include:
step S361: and when the touch position corresponds to the display position of the hierarchy list, acquiring a target hierarchy corresponding to the touch position.
Step S362: and acquiring the plane content of the part corresponding to the target level, and taking the plane content as the first target content of the two-dimensional plane content on which the control operation is executed.
In some embodiments, when the two-dimensional plane content contains overlapping or dense content, the user cannot accurately touch a particular item with a finger. The hierarchical list therefore allows the control object to be selected accurately from the two-dimensional plane content and mapped to the corresponding component of the three-dimensional model. Specifically, when the touch position is detected to correspond to the display position of the hierarchical list, the terminal device may acquire the target level corresponding to the touch position. Then, according to the correspondence between the hierarchical list and the components of the three-dimensional model, and between those components and the plane content, the terminal device may acquire the plane content of the component corresponding to the target level and take it as the first target content of the two-dimensional plane content on which the control operation is performed. As one mode, when the terminal device acquires the target level corresponding to the touch position, the plane content and component corresponding to that level may be highlighted, and the other components faded or darkened, to show the user the currently selected target component of the three-dimensional model and target content of the plane content.
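A sketch of mapping a touch on the hierarchical list to a target level; the list's row geometry is an assumption made for illustration:

    def select_from_hierarchical_list(touch_y, hierarchy, list_top=0.0, row_height=40.0):
        """Map the vertical touch coordinate to a hierarchical-list entry;
        the caller would then highlight the matching component and plane
        content and fade the others."""
        index = int((touch_y - list_top) // row_height)
        if 0 <= index < len(hierarchy):
            return hierarchy[index]
        return None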
In some embodiments, when the operation data includes a touch force, the terminal device may also determine the target level according to the magnitude of the force. For example, if the hierarchical list is generated according to the inside-to-outside component relationship of the three-dimensional model, a greater touch force selects a target level corresponding to content further "inside" the three-dimensional model, with vibration feedback as an aid. When a sudden large drop in touch force is detected (that is, the user lifts the finger quickly), or another touch operation is detected (confirmation with a second finger), or a press of a UI confirmation button is detected, the currently selected level can be confirmed as the target level, and the component corresponding to it as the selected component.
In some embodiments, when the operation data includes a touch duration, the terminal device may also determine the target level according to its length. For example, when the touch position is detected on overlapping content in the two-dimensional plane content and the touch duration exceeds a preset length, the terminal device may cycle the selection through the levels of the hierarchical list corresponding to the overlapping content. When the touch is detected to end, the currently selected level can be confirmed as the target level.
In some embodiments, when the touch position is detected on overlapping content in the two-dimensional plane content, the terminal device may also determine the target level from the corresponding hierarchical list through a touch operation of another finger. For example, the other finger slides downward, and the further down it slides, the more "inward" the content of the selected target level; when the slide is detected to stop, the currently selected level is confirmed as the target level.
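A sketch of the force-based variant described above, with a simple linear force-to-depth mapping; the normalization constant and the confirm-on-release details are assumptions:

    def level_from_force(force, hierarchy, max_force=1.0):
        """Harder presses select deeper ('more inward') levels; the caller
        confirms the selection when the force drops sharply or a second
        finger touches."""
        ratio = min(max(force / max_force, 0.0), 0.999)
        return hierarchy[int(ratio * len(hierarchy))]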
Step S370: and acquiring second target content corresponding to the first target content in the three-dimensional model, and controlling the second target content.
In the embodiment of the present application, step S370 can refer to the foregoing embodiments, and is not described herein again.
According to the control method of virtual content provided by this embodiment, a virtual hierarchical list corresponding to a plurality of components can be generated according to the components and the hierarchical relationship among them. When the touch position corresponds to the display position of the hierarchical list, the target level corresponding to the touch position is acquired, and the plane content of the component corresponding to the target level is acquired and taken as the first target content of the two-dimensional plane content on which the control operation is performed, so that the second target content corresponding to the first target content in the three-dimensional model is acquired and controlled. Thus, when the three-dimensional model contains a combination of multiple components, the controlled target object can be selected conveniently and accurately through the hierarchical UI, improving the interaction effect between the user and the virtual content and enhancing their interactivity.
Referring to fig. 15, a further embodiment of the present application provides a method for controlling virtual content, where the method is applied to the terminal device, the terminal device is connected to an interactive device, and the interactive device includes an interactive area, and the method may include:
step S410: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
Step S420: and rendering a virtual three-dimensional model according to the relative space position information, wherein the position of the three-dimensional model superposed in the real space is positioned in a region outside the interaction region.
Step S430: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interactive area.
Step S440: and receiving operation data sent by the interactive equipment according to the touch operation detected in the interactive area, wherein the operation data comprises a touch position.
In the embodiment of the present application, steps S410 to S440 can refer to the foregoing embodiments, and are not described herein again.
Step S450: when the touch position corresponds to overlapping content in the two-dimensional plane content, the three-dimensional model is rotated by a specified angle.
In some embodiments, when the touch position corresponds to overlapping content in the two-dimensional plane content, an overlapping structure of the three-dimensional model may have been selected. The three-dimensional model can then be rotated so that the structures that overlapped before the rotation no longer overlap after it. The terminal device generates the corresponding two-dimensional plane content, in which the previously overlapping content no longer overlaps, so that the specific form of the three-dimensional model can be seen clearly and the target content can be selected accurately from the now-separated content. Specifically, when the touch position is detected to correspond to overlapping content in the two-dimensional plane content, the terminal device may rotate the three-dimensional model by a specified angle for display. The specified angle may be the angle at which the overlapping content no longer overlaps, or a fixed value such as 30° or 90°.
In some embodiments, rotating the three-dimensional model by a specified angle may mean rotating it in increments of the specified angle. As one mode, after one rotation, the next rotation may be performed automatically once a predetermined time has elapsed. As another mode, the next rotation may be triggered by a touch operation of the user.
In some embodiments, when other content exists near the overlapping content in the two-dimensional plane content, the overlapping structure may be blocked by other components after the three-dimensional model is rotated, making it impossible to select the target content accurately. Therefore, the components that might cause occlusion can be hidden after the rotation. Specifically, the terminal device may determine, from the overlapping content, the corresponding overlapping structure in the three-dimensional model, and through this overlapping structure determine the structures to be hidden. The structures to be hidden may be the structures of the three-dimensional model other than the overlapping structure, or the structures that block the overlapping structure in the rotated three-dimensional model, which is not limited here.
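A sketch of the rotate-and-hide step, assuming the model's vertices are available as an array and the rotation is about the vertical axis (both assumptions of this sketch):

    import numpy as np

    def rotate_model_y(vertices, degrees=30.0):
        """Rotate (N, 3) model vertices about the y-axis by the specified
        angle; 30 degrees matches one of the example values above."""
        a = np.radians(degrees)
        rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                        [ 0.0,       1.0, 0.0      ],
                        [-np.sin(a), 0.0, np.cos(a)]])
        return vertices @ rot.T

    def structures_to_hide(all_parts, overlapping_parts):
        """One of the hiding policies described above: hide every structure
        that is not part of the overlapping structure."""
        return [p for p in all_parts if p not in overlapping_parts]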
Step S460: and generating virtual two-dimensional target content corresponding to the rotated three-dimensional model based on the rotated three-dimensional model, and taking the two-dimensional target content as new two-dimensional plane content.
In some embodiments, the terminal device may generate, based on the rotated three-dimensional model, virtual two-dimensional target content corresponding to the rotated three-dimensional model, and use the two-dimensional target content as new two-dimensional plane content, that is, a display position of the new two-dimensional plane content corresponds to an interaction area of the interaction device. For example, after the three-dimensional model is rotated by 90 °, the generated two-dimensional plane content is updated from the front content of the three-dimensional model to the side content of the three-dimensional model.
In other embodiments, the terminal device may also generate a display window for displaying the two-dimensional target content. The display window is overlaid on the two-dimensional plane content, so that the user can perform a touch operation within the window to select specific content of the two-dimensional target content for the control operation.
For example, referring to fig. 16, when the user taps the overlapping content 450 with a finger on the tablet computer, the three-dimensional model 320 in the virtual space rotates automatically, and the tablet computer updates and displays the plane content corresponding to the rotated three-dimensional model 320.
Step S470: according to the operation data, first target content of the two-dimensional plane content, on which the control operation is performed, is acquired.
Step S480: and acquiring second target content corresponding to the first target content in the three-dimensional model, and controlling the second target content.
In the embodiment of the present application, step S470 and step S480 may refer to the foregoing embodiments, and are not described herein again.
According to the control method of virtual content provided by this embodiment, when the touch position corresponds to overlapping content in the two-dimensional plane content, the three-dimensional model can be rotated by a specified angle, virtual two-dimensional target content corresponding to the rotated model is generated and taken as the new two-dimensional plane content, and the first target content on which the control operation is performed is then acquired from the operation data, so that the second target content corresponding to it in the three-dimensional model is acquired and controlled. Thus, when the three-dimensional model contains overlapping parts, the user can see their specific form from the side by rotating the model, and can conveniently and accurately select the controlled target object by operating on the virtual two-dimensional target content of the rotated model, improving the interaction effect between the user and the virtual content and enhancing their interactivity.
Referring to fig. 17, a further embodiment of the present application provides a method for controlling virtual content, where the method is applied to the terminal device, the terminal device is connected to an interactive device, and the interactive device includes an interactive area, and the method may include:
step S510: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
Step S520: and rendering a virtual three-dimensional model according to the relative space position information, wherein the position of the three-dimensional model superposed in the real space is positioned in a region outside the interaction region.
Step S530: and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interactive area.
Step S540: and receiving operation data sent by the interactive equipment according to the touch operation detected in the interactive area, wherein the operation data comprises a touch position.
In the embodiment of the present application, steps S510 to S540 may refer to the foregoing embodiments, and are not described herein again.
Step S550: and when the touch position corresponds to the overlapped content in the two-dimensional plane content, determining an overlapped structure corresponding to the overlapped content in the three-dimensional model.
Step S560: the overlapping structures are split into single structures and arranged.
In some embodiments, when the touch position corresponds to overlapping content in the two-dimensional plane content, the overlapping structure corresponding to that content in the three-dimensional model can be determined, and the overlapping structure is then split into single structures and arranged so as to separate them. The specific form of each structure after separation can thus be seen clearly, and the target content can be selected accurately.
In some embodiments, the overlapping structure may be split into single structures and arranged according to the hierarchical relationship of the overlapping structure, ensuring that the user can still understand its original logical structure. For example, the terminal device may split the overlapping structure into single structures according to their hierarchical relationship and arrange them sequentially from left to right.
In some embodiments, when other content exists near the overlapping content in the two-dimensional plane content, other components of the three-dimensional model that might cause occlusion can be hidden after the overlapping structure is split into single structures and arranged.
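A sketch of the split-and-arrange step; the dict-based part representation, the depth key, and the left-to-right spacing are assumptions for illustration:

    def explode_overlapping_structure(parts, spacing=1.5):
        """Split the overlapping structure into single structures and line
        them up left to right in hierarchy order, as described above."""
        arranged = []
        for k, part in enumerate(sorted(parts, key=lambda p: p["depth"])):
            moved = dict(part)
            moved["position"] = (k * spacing, 0.0, 0.0)  # spread along x
            arranged.append(moved)
        return arranged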
Step S570: and generating virtual two-dimensional display content corresponding to the arranged overlapped structure based on the arranged overlapped structure, and taking the two-dimensional display content as new two-dimensional plane content.
In some embodiments, the terminal device may generate, based on the arranged overlapping structure, virtual two-dimensional display content corresponding to the arranged overlapping structure, and use the two-dimensional display content as new two-dimensional plane content, that is, a display position of the new two-dimensional display content corresponds to an interaction area of the interaction device.
In other embodiments, the terminal device may also generate a display window for displaying the two-dimensional display content. The display window is overlaid on the two-dimensional plane content, so that the user can perform a touch operation within the window to select specific content of the two-dimensional display content for the control operation. For example, referring to fig. 18, when the user taps the overlapping content 450 with a finger on the tablet computer, the three-dimensional model 320 in the virtual space is automatically split into single structures that are rearranged to face the user, and the tablet computer updates and displays the plane content corresponding to the split and arranged three-dimensional model 320.
Step S580: according to the operation data, first target content of the two-dimensional plane content, on which the control operation is performed, is acquired.
Step S590: and acquiring second target content corresponding to the first target content in the three-dimensional model, and controlling the second target content.
In the embodiment of the present application, step S580 and step S590 refer to the foregoing embodiments, and are not described herein again.
According to the control method of virtual content provided by this embodiment, when the touch position corresponds to overlapping content in the two-dimensional plane content, the overlapping structure corresponding to it in the three-dimensional model can be determined. The overlapping structure is then split into single structures and arranged, virtual two-dimensional display content corresponding to the arranged structure is generated and taken as the new two-dimensional plane content, and the first target content on which the control operation is performed is acquired from the operation data, so that the second target content corresponding to it in the three-dimensional model is acquired and controlled. Thus, when the three-dimensional model contains overlapping parts, the user can see the specific form of the separated parts by splitting the overlapping structure, and can conveniently and accurately select the controlled target object by operating on the virtual two-dimensional display content of the separated model, improving the interaction effect between the user and the virtual content and enhancing their interactivity.
Referring to fig. 19, a block diagram of an apparatus 500 for controlling virtual content according to an embodiment of the present application is shown. The apparatus is applied to a terminal device, the terminal device is connected to an interactive device, and the interactive device includes an interaction region. The apparatus may include: an information acquisition module 510, a model rendering module 520, a two-dimensional generation module 530, an operation receiving module 540, a content acquisition module 550, and a target control module 560. The information acquisition module 510 is configured to acquire relative spatial position information between the terminal device and the interactive device; the model rendering module 520 is configured to render a virtual three-dimensional model according to the relative spatial position information, wherein the position of the three-dimensional model superposed in the real space is in a region outside the interaction region; the two-dimensional generation module 530 is configured to generate virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interaction region; the operation receiving module 540 is configured to receive operation data sent by the interactive device according to the touch operation detected in the interaction region; the content acquisition module 550 is configured to acquire, according to the operation data, the first target content of the two-dimensional plane content on which the control operation is performed; and the target control module 560 is configured to acquire the second target content corresponding to the first target content in the three-dimensional model and perform the control operation on the second target content.
In some embodiments, the two-dimensional generation module 530 may be specifically configured to: acquire a left-eye display image and a right-eye display image for displaying the three-dimensional model; and generate virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and a target display image, wherein the target display image includes at least one of the left-eye display image and the right-eye display image.
In some embodiments, the control device 500 of the virtual content may further include: a rotation acquisition module, a pre-change recording module, and a pre-change generation module. The rotation acquisition module is used for acquiring the rotation direction and rotation angle of the terminal device relative to the reference direction based on the changed relative spatial position information when a change of the relative spatial position information is detected; the pre-change recording module is used for recording the target display image corresponding to the relative spatial position information before the change when the rotation direction is toward the interactive device and the rotation angle is greater than a preset angle; and the pre-change generation module is used for generating, based on the changed relative spatial position information and the recorded target display image, virtual two-dimensional plane content corresponding to the three-dimensional model rendered according to the relative spatial position information before the change.
In some embodiments, the control device 500 of the virtual content may further include: a post-change recording module and a post-change generation module. The post-change recording module is used for recording the target display image corresponding to the changed relative spatial position information when the rotation direction is toward the interactive device and the rotation angle is smaller than the preset angle; and the post-change generation module is used for generating, based on the changed relative spatial position information and the recorded target display image, virtual two-dimensional plane content corresponding to the three-dimensional model rendered according to the changed relative spatial position information.
In some embodiments, the control device 500 of the virtual content may further include: and a position acquisition module. The position acquisition module is used for acquiring an initial relative position between a two-dimensional visual angle camera and a target point in a virtual space when a three-dimensional model is rendered for the first time, and taking the direction of the initial relative position as a reference direction, wherein the target point is any fixed point in the virtual space.
In this embodiment, the rotation acquisition module may be specifically configured to: acquire the current position and posture of the two-dimensional view camera based on the changed relative spatial position information; acquire the current relative position between the two-dimensional view camera and the target point based on the target point and the current position and posture; and acquire the rotation direction and rotation angle of the terminal device relative to the reference direction according to the initial relative position and the current relative position.
In some embodiments, the two-dimensional generation module 530 may be specifically configured to: when a combination of a plurality of components is detected in the three-dimensional model, generate a virtual hierarchical list corresponding to the components according to the hierarchical relationship among them; and generate virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and the hierarchical list, wherein the two-dimensional plane content includes the hierarchical list and the plane content corresponding to the plurality of components.
In this embodiment, the operation data may include a touch position, and the content acquisition module 550 may be specifically configured to: when the touch position corresponds to the display position of the hierarchical list, acquire the target level corresponding to the touch position; and acquire the plane content of the component corresponding to the target level, taking this plane content as the first target content of the two-dimensional plane content on which the control operation is performed.
In some embodiments, the operation data may include a touch position, and the control device 500 of the virtual content may further include an enlargement generation module, used for generating, when the touch position corresponds to designated content in the two-dimensional plane content, virtual two-dimensional enlarged content corresponding to the enlarged two-dimensional plane content, the two-dimensional enlarged content being displayed overlaid on the two-dimensional plane content.
In this embodiment, the content acquisition module 550 may be specifically configured to: when the touch position corresponds to the display position of the two-dimensional enlarged content, acquire the first target content of the two-dimensional enlarged content on which the control operation is performed.
In some embodiments, the operation data may include a touch position, and the control device 500 of the virtual content may further include: a model rotation module and a rotation generation module. The model rotation module is used for rotating the three-dimensional model by a specified angle when the touch position corresponds to overlapping content in the two-dimensional plane content; and the rotation generation module is used for generating, based on the rotated three-dimensional model, virtual two-dimensional target content corresponding to it and taking the two-dimensional target content as the new two-dimensional plane content.
In some embodiments, the operation data may include a touch position, and the control device 500 of the virtual content may further include: a structure determining module, a structure rearrangement module, and a rearrangement generating module. The structure determining module is used for determining the overlapping structure corresponding to overlapping content in the three-dimensional model when the touch position corresponds to that content in the two-dimensional plane content; the structure rearrangement module is used for splitting the overlapping structure into single structures and arranging them; and the rearrangement generating module is used for generating, based on the arranged overlapping structure, virtual two-dimensional display content corresponding to it and taking the two-dimensional display content as the new two-dimensional plane content.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
In summary, the control apparatus for virtual content provided by the embodiments of the present application obtains the relative spatial position information between the terminal device and the interactive device and renders a virtual three-dimensional model according to it, the position of the three-dimensional model superposed in the real space being in a region outside the interaction region. Virtual two-dimensional plane content corresponding to the three-dimensional model is then generated based on the relative spatial position information, with its display position corresponding to the interaction region. Operation data sent by the interactive device according to the touch operation detected in the interaction region is then received, the first target content of the two-dimensional plane content on which the control operation is performed is obtained from the operation data, and finally the second target content corresponding to it in the three-dimensional model is acquired and controlled. Thus, the three-dimensional model can be controlled by operating, through the two-dimensional interaction region of the touch interactive device, the virtual two-dimensional plane content corresponding to it; the controlled target content in the three-dimensional model can be located accurately through the two-dimensional plane content, realizing precise control of the three-dimensional model with a two-dimensional interaction mode, improving the interaction effect between the user and the virtual content, and enhancing their interactivity.
In some embodiments, the terminal device 100 may be an external/tethered head-mounted display device connected to the interactive device. In this case, the head-mounted display device may be responsible only for displaying the virtual image: all processing operations such as generating the virtual image and adjusting its display position may be completed by the interactive device, which, after generating the virtual image, transmits the corresponding display image to the head-mounted display device to complete the display.
Referring to fig. 20, which shows a schematic structural diagram of a display system provided in an embodiment of the present application, the display system 10 may include the terminal device 100 and the interactive device 200, the interactive device 200 being communicatively connected with the terminal device 100 and including an interaction region. Wherein:
the terminal device 100 is configured to obtain the relative spatial position information between the terminal device 100 and the interactive device 200, render a virtual three-dimensional model according to it such that the position of the three-dimensional model superposed in the real space is in a region outside the interaction region, and generate, based on the relative spatial position information, virtual two-dimensional plane content corresponding to the three-dimensional model, the display position of which corresponds to the interaction region.
The interactive device 200 is configured to control the interaction region to display the two-dimensional plane content.
The interactive device 200 is further configured to detect touch operations through the interaction region and to send operation data to the terminal device when a touch operation is detected.
The terminal device 100 is further configured to receive the operation data, obtain, according to the operation data, the first target content of the two-dimensional plane content on which a control operation is performed, obtain the second target content corresponding to the first target content in the three-dimensional model, and perform the control operation on the second target content.
Referring to fig. 21, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running applications, such as a head-mounted display device. The terminal device 100 in the present application may include one or more of the following components: a processor 110 and a memory 120, wherein the memory 120 stores one or more applications configured to be executed by the one or more processors 110 and to perform the methods described in the foregoing method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, applications, and the like; the GPU renders and draws display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the terminal device 100 in use, and the like.
In some embodiments, the terminal device 100 may further include an image sensor 130 for capturing images of real objects and scene images of the target scene. The image sensor 130 may be an infrared camera or a visible light camera; the specific type is not limited in the embodiments of the present application.
In one embodiment, the terminal device is a head-mounted display device and may further include, in addition to the processor, the memory, and the image sensor described above, one or more of the following components: a display module, an optical module, a communication module, and a power supply.
The display module may include a display control unit. The display control unit is used to receive the display image of the virtual content rendered by the processor and project that display image onto the optical module, so that the user can view the virtual content through the optical module. The display module may be a display screen, a projection device, or the like capable of displaying images.
The optical module may adopt an off-axis optical system or a waveguide optical system; a display image shown by the display module can be projected into the user's eyes after passing through the optical module, so that the user sees the display image cast by the display module through the optical module. In some embodiments, the user can also observe the real environment through the optical module and experience the augmented reality effect of the virtual content superimposed on the real environment.
The communication module may be a module such as Bluetooth, WiFi (Wireless Fidelity), or ZigBee, through which the head-mounted display device can establish a communication connection with the terminal device. A head-mounted display device that is communicatively connected to the terminal device can exchange information and instructions with it. For example, the head-mounted display device may receive image data transmitted from the terminal device via the communication module and generate and display virtual content of a virtual world from the received image data.
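As a rough illustration of such image-data transfer, the sketch below reads one length-prefixed display frame from a blocking socket-style link. The 4-byte big-endian length header is an assumed framing scheme chosen for the example, not one specified by this disclosure.

```python
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from a blocking socket-like object."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("link closed mid-frame")
        buf += chunk
    return buf

def read_frame(sock):
    """Read one display frame: a 4-byte big-endian length, then the image bytes."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)  # hand the raw bytes to the display module
```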
The power supply powers the entire head-mounted display device and ensures the normal operation of each of its components.
Referring to fig. 22, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code 810 may be read from or written to one or more computer program products, and may be compressed in a suitable form, for example.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A method for controlling virtual content, applied to a terminal device, wherein the terminal device is connected to an interactive device and the interactive device comprises an interaction area, the method comprising the following steps:
acquiring relative spatial position information between the terminal device and the interactive device;
rendering a virtual three-dimensional model according to the relative spatial position information, wherein the position at which the three-dimensional model is superimposed in the real space is located in a region outside the interaction area;
generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, wherein the display position of the two-dimensional plane content corresponds to the interactive area;
receiving operation data sent by the interactive device according to a touch operation detected in the interaction area;
acquiring, according to the operation data, first target content of the two-dimensional plane content on which a control operation is performed;
and acquiring, in the three-dimensional model, second target content corresponding to the first target content, and performing the control operation on the second target content.
2. The method of claim 1, wherein generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information comprises:
acquiring a left eye display image and a right eye display image for displaying the three-dimensional model;
and generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and a target display image, wherein the target display image comprises at least one of the left eye display image and the right eye display image.
3. The method of claim 2, further comprising:
when a change in the relative spatial position information is detected, acquiring a rotation direction and a rotation angle of the terminal device relative to a reference direction based on the changed relative spatial position information;
when the rotation direction is toward the interactive device and the rotation angle is larger than a preset angle, recording the target display image corresponding to the relative spatial position information before the change;
and generating, based on the changed relative spatial position information and the recorded target display image, virtual two-dimensional plane content corresponding to the three-dimensional model rendered according to the relative spatial position information before the change.
4. The method according to claim 3, wherein after the obtaining of the rotation direction and the rotation angle of the terminal device relative to the reference direction, the method further comprises:
when the rotation direction is toward the interactive device and the rotation angle is smaller than the preset angle, recording the target display image corresponding to the changed relative spatial position information;
and generating, based on the changed relative spatial position information and the recorded target display image, virtual two-dimensional plane content corresponding to the three-dimensional model rendered according to the changed relative spatial position information.
5. The method according to claim 3, wherein before the acquiring of the rotation direction and the rotation angle of the terminal device relative to the reference direction when the change in the relative spatial position information is detected, the method further comprises:
when the three-dimensional model is rendered for the first time, acquiring an initial relative position between a two-dimensional view camera in a virtual space and a target point, and taking the direction of the initial relative position as the reference direction, wherein the target point is any fixed point in the virtual space;
the acquiring of the rotation direction and the rotation angle of the terminal device relative to the reference direction based on the changed relative spatial position information includes:
acquiring the current position and posture of the two-dimensional view camera based on the changed relative spatial position information;
acquiring a current relative position between the two-dimensional view camera and the target point based on the target point and the current position and posture;
and acquiring the rotation direction and the rotation angle of the terminal device relative to the reference direction according to the initial relative position and the current relative position.
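The rotation direction and angle of claims 3 to 5 can be recovered from the initial and current relative positions with standard vector geometry. The sketch below assumes three-dimensional position vectors and an example preset angle of 30 degrees; neither the representation nor the threshold value is mandated by the claims.

```python
import math

def rotation_from_reference(initial_rel, current_rel):
    """Angle in degrees between the initial and current relative-position vectors."""
    dot = sum(a * b for a, b in zip(initial_rel, current_rel))
    norm_i = math.sqrt(sum(a * a for a in initial_rel))
    norm_c = math.sqrt(sum(b * b for b in current_rel))
    cos_angle = max(-1.0, min(1.0, dot / (norm_i * norm_c)))
    return math.degrees(math.acos(cos_angle))

PRESET_ANGLE = 30.0  # assumed example value for the claims' "preset angle"

initial = (0.0, 0.0, 1.0)      # reference direction fixed at first render
current = (0.707, 0.0, 0.707)  # camera-to-target direction after the user turns
angle = rotation_from_reference(initial, current)  # about 45 degrees
print(angle > PRESET_ANGLE)    # decides which target display image is recorded
```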
6. The method of claim 1, wherein generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information comprises:
when it is detected that the three-dimensional model contains a combination of a plurality of components, generating a virtual hierarchical list corresponding to the plurality of components according to the plurality of components and the hierarchical relationship among them;
generating virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information and the hierarchical list, wherein the two-dimensional plane content includes the hierarchical list and plane content corresponding to the plurality of components;
the operation data comprises a touch position, and the acquiring of the first target content of the two-dimensional plane content, on which the control operation is executed, comprises:
when the touch position corresponds to a display position of the hierarchical list, acquiring a target level corresponding to the touch position;
and acquiring the plane content of the component corresponding to the target level, and taking the plane content as the first target content of the two-dimensional plane content on which the control operation is executed.
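A minimal sketch of the hierarchical list of claim 6, assuming the component assembly is given as a nested dictionary: the component tree is flattened into ordered list rows, and a touch on a row yields the target level whose component plane content becomes the first target content. The tree format and names are illustrative assumptions.

```python
def flatten_hierarchy(node, depth=0, rows=None):
    """Turn a nested component tree into ordered (depth, name) list rows."""
    if rows is None:
        rows = []
    rows.append((depth, node["name"]))
    for child in node.get("children", []):
        flatten_hierarchy(child, depth + 1, rows)
    return rows

engine = {"name": "engine",
          "children": [{"name": "piston"}, {"name": "crankshaft"}]}
rows = flatten_hierarchy(engine)  # [(0, 'engine'), (1, 'piston'), (1, 'crankshaft')]
touched_row = 1                   # the user taps the second list entry
print(rows[touched_row])          # -> (1, 'piston'): the target level
```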
7. The method of claim 1, wherein the operation data comprises a touch position, and before the obtaining of the first target content of the two-dimensional plane content on which the control operation is performed, the method further comprises:
when the touch position corresponds to designated content in the two-dimensional plane content, generating virtual two-dimensional magnified content corresponding to a magnified version of the two-dimensional plane content, and displaying the two-dimensional magnified content superimposed on the two-dimensional plane content;
the acquiring of the first target content of the two-dimensional plane content on which the control operation is executed includes:
and when the touch position corresponds to the display position of the two-dimensional magnified content, acquiring the first target content of the two-dimensional magnified content on which the control operation is performed.
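The magnified overlay of claim 7 can be illustrated by scaling the touched content's rectangle about its own center; the scale factor of 2 and the example coordinates are assumed values, not parameters from the claims.

```python
def magnify_rect(rect, scale=2.0):
    """Return rect scaled about its center; used to place the magnified overlay."""
    left, top, right, bottom = rect
    cx, cy = (left + right) / 2, (top + bottom) / 2
    half_w, half_h = (right - left) * scale / 2, (bottom - top) * scale / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(magnify_rect((100, 20, 180, 60)))  # -> (60.0, 0.0, 220.0, 80.0)
```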
8. The method of claim 1, wherein the operation data comprises a touch position, and before the obtaining of the first target content of the two-dimensional plane content on which the control operation is performed, the method further comprises:
when the touch position corresponds to overlapped content in the two-dimensional plane content, rotating the three-dimensional model by a specified angle;
and generating virtual two-dimensional target content corresponding to the rotated three-dimensional model based on the rotated three-dimensional model, and taking the two-dimensional target content as new two-dimensional plane content.
9. The method of claim 1, wherein the operation data comprises a touch position, and before the obtaining of the first target content of the two-dimensional plane content on which the control operation is performed, the method further comprises:
when the touch position corresponds to overlapped content in the two-dimensional plane content, determining an overlapped structure corresponding to the overlapped content in the three-dimensional model;
splitting the overlapped structure into individual structures and arranging them;
and generating, based on the arranged structures, virtual two-dimensional display content corresponding to them, and taking the two-dimensional display content as new two-dimensional plane content.
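The splitting of claim 9 can be sketched as giving each structure of the overlapped group its own offset so that none of them overlap after arrangement; the x-axis layout and spacing value are assumptions made for illustration.

```python
def split_and_arrange(overlapping_parts, spacing=1.5):
    """Assign each part of an overlapped structure its own slot along the x axis."""
    return [{"part": part, "offset": (i * spacing, 0.0, 0.0)}
            for i, part in enumerate(overlapping_parts)]

print(split_and_arrange(["outer_housing", "inner_gear"]))
```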
10. A display system, comprising a terminal device and an interactive device, wherein the interactive device is connected to the terminal device and comprises an interaction area, wherein:
the terminal device is configured to obtain relative spatial position information between the terminal device and the interactive device, render a virtual three-dimensional model according to the relative spatial position information such that the position at which the three-dimensional model is superimposed in the real space lies in a region outside the interaction area, and generate virtual two-dimensional plane content corresponding to the three-dimensional model based on the relative spatial position information, where the display position of the two-dimensional plane content corresponds to the interaction area;
the interactive device is used for controlling the interactive area to display the two-dimensional plane content;
the interactive device is further configured to detect a touch operation through the interactive area, and send operation data to the terminal device when the touch operation is detected;
the terminal device is further configured to receive the operation data, obtain, according to the operation data, a first target content of the two-dimensional plane content on which a control operation is performed, obtain, in the three-dimensional model, a second target content corresponding to the first target content, and perform the control operation on the second target content.
11. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-9.
12. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 9.
CN201911137088.0A 2019-11-19 2019-11-19 Virtual content control method, device, terminal equipment and storage medium Active CN111161396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911137088.0A CN111161396B (en) 2019-11-19 2019-11-19 Virtual content control method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111161396A true CN111161396A (en) 2020-05-15
CN111161396B CN111161396B (en) 2023-05-16

Family

ID=70556027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911137088.0A Active CN111161396B (en) 2019-11-19 2019-11-19 Virtual content control method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111161396B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112076470A (en) * 2020-08-26 2020-12-15 北京完美赤金科技有限公司 Virtual object display method, device and equipment
CN113126770A (en) * 2021-04-30 2021-07-16 塔普翊海(上海)智能科技有限公司 Interactive three-dimensional scenery system based on augmented reality
CN115033133A (en) * 2022-05-13 2022-09-09 北京五八信息技术有限公司 Progressive information display method and device, electronic equipment and storage medium
US20230179757A1 (en) * 2021-12-03 2023-06-08 Honda Motor Co., Ltd. Control device, control method, and recording medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061700A1 (en) * 2015-02-13 2017-03-02 Julian Michael Urbach Intercommunication between a head mounted display and a real world object
US20180108147A1 (en) * 2016-10-17 2018-04-19 Samsung Electronics Co., Ltd. Method and device for displaying virtual object
CN108700942A (en) * 2016-05-17 2018-10-23 谷歌有限责任公司 Change the technology of object's position in virtual/augmented reality system
US20190004684A1 (en) * 2017-06-30 2019-01-03 Microsoft Technology Licensing, Llc Annotation using a multi-device mixed interactivity system
CN110456907A (en) * 2019-07-24 2019-11-15 广东虚拟现实科技有限公司 Control method, device, terminal device and the storage medium of virtual screen

Also Published As

Publication number Publication date
CN111161396B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN111161396B (en) Virtual content control method, device, terminal equipment and storage medium
US10671239B2 (en) Three dimensional digital content editing in virtual reality
JP6551502B2 (en) Head mounted display, information processing method, and program
JP4679661B1 (en) Information presenting apparatus, information presenting method, and program
JP3926837B2 (en) Display control method and apparatus, program, and portable device
JP6057396B2 (en) 3D user interface device and 3D operation processing method
CN109743892B (en) Virtual reality content display method and device
CN111766937B (en) Virtual content interaction method and device, terminal equipment and storage medium
US9766793B2 (en) Information processing device, information processing method and program
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
KR20170031733A (en) Technologies for adjusting a perspective of a captured image for display
JP6479199B2 (en) Information processing device
CN108474950A (en) HMD device and its control method
EP3819752A1 (en) Personalized scene image processing method and apparatus, and storage medium
KR20200138349A (en) Image processing method and apparatus, electronic device, and storage medium
WO2015093130A1 (en) Information processing device, information processing method, and program
KR20180010845A (en) Head mounted display and method for controlling the same
JP2016122392A (en) Information processing apparatus, information processing system, control method and program of the same
CN111813214A (en) Virtual content processing method and device, terminal equipment and storage medium
JP6065908B2 (en) Stereoscopic image display device, cursor display method thereof, and computer program
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
JP2022058753A (en) Information processing apparatus, information processing method, and program
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
JP2013168120A (en) Stereoscopic image processing apparatus, stereoscopic image processing method, and program
CN111818326B (en) Image processing method, device, system, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant