WO2018098720A1 - A virtual reality-based data processing method and *** - Google Patents

A virtual reality-based data processing method and ***

Info

Publication number
WO2018098720A1
WO2018098720A1 (PCT/CN2016/108118, CN2016108118W)
Authority
WO
WIPO (PCT)
Prior art keywords
teaching
virtual
virtual reality
reality device
operation instruction
Prior art date
Application number
PCT/CN2016/108118
Other languages
English (en)
French (fr)
Inventor
熊益冲
Original Assignee
深圳益强信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳益强信息科技有限公司 filed Critical 深圳益强信息科技有限公司
Priority to PCT/CN2016/108118 priority Critical patent/WO2018098720A1/zh
Publication of WO2018098720A1 publication Critical patent/WO2018098720A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Definitions

  • the present invention relates to the field of computer applications, and in particular, to a virtual reality-based data processing method and system.
  • in the prior art, the user can only visually view two-dimensional particle motion on a plane and cannot explore the simulated environment from different angles, so the simulation resources available for the user experience are extremely limited, and the user cannot fully enjoy an immersive experience.
  • the technical problem to be solved by the embodiments of the present invention is to provide a virtual reality-based data processing method and data processing system, which can enrich user experience resources and provide users with a realistic teaching scene so that users can fully enjoy an immersive experience.
  • the first aspect of the embodiments of the present invention provides a data processing method based on virtual reality, where the data processing method includes:
  • the first virtual reality device captures the first environment information through the camera, and records the first capture time
  • the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting Command to the background server;
  • the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device caches the received teaching environment parameter;
  • the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
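The first-aspect flow above can be sketched in Python. This is a minimal illustration only; every function name, instruction string, and data shape below is an assumption for exposition, not the patent's actual implementation.

```python
import time

# Illustrative teaching environment parameters (virtual projector, desk,
# seat, classroom), as listed in the claims.
TEACHING_ENV_PARAMS = {
    "virtual_projector": {}, "virtual_desk": {},
    "virtual_seat": {}, "virtual_classroom": {},
}

def background_server(setting_instruction):
    # The server acquires the teaching environment parameters for the
    # received setting instruction and sends them back.
    assert setting_instruction == "virtual_teaching_setup"
    return dict(TEACHING_ENV_PARAMS)

def first_device_flow(camera_frame):
    capture_time = time.time()                 # record the first capture time
    first_env_info = {"frame": camera_frame}   # first environment information
    params = background_server("virtual_teaching_setup")
    cache = dict(params)                       # cache the returned parameters
    # Fuse the cached parameters with the captured environment information
    # to form the first virtual teaching scene.
    scene = {"capture_time": capture_time, "env": first_env_info, **cache}
    return scene

scene = first_device_flow(camera_frame="raw_pixels")
```

A real device would replace the dictionaries with camera buffers, a network round trip, and a renderer; the sketch only shows the order of operations claimed.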
  • a second aspect of the embodiments of the present invention provides a virtual reality-based data processing system, including: a first virtual reality device and a background server;
  • the first virtual reality device is configured to capture first environment information by using a camera, and record a first capture time
  • the first virtual reality device is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
  • the background server is configured to acquire a teaching environment parameter according to the setting instruction, and send the teaching environment parameter to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device is further configured to cache the received teaching environment parameter;
  • the first virtual reality device is further configured to fuse the cached teaching environment parameter with the captured first environment information, generate a first virtual teaching scenario according to the first capture time, and display the first virtual teaching scene;
  • the first virtual reality device is further configured to receive an operation instruction for the target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model.
  • the implementation of the embodiments of the present invention has the following beneficial effects: the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience, fully engage the user's senses and thinking, and greatly improve the user's learning efficiency.
  • FIG. 1 is a schematic flowchart of a data processing method based on virtual reality according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart diagram of still another virtual reality-based data processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a data processing system based on virtual reality according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another data processing system based on virtual reality according to an embodiment of the present invention.
  • the execution of the virtual reality-based data processing method mentioned in the embodiments of the present invention depends on a computer program and can run on a computer system of the von Neumann architecture.
  • the computer program can be integrated into the application or run as a standalone tool class application.
  • the computer system can be a terminal device such as a personal computer, a tablet computer, a notebook computer, or a smart phone.
  • FIG. 1 is a schematic flowchart of a data processing method based on virtual reality according to an embodiment of the present invention. As shown in FIG. 1 , the data processing method includes at least:
  • Step S101 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also has functions such as video calling and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, integrates the spatial data corresponding to each angle to generate the environment data, and uses the environment data as the first environment information.
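The per-angle integration step can be sketched as a small merge function; the angle/data pairs and the merge policy (latest sample wins per angle) are illustrative assumptions, not the patent's method.

```python
# Merge per-angle spatial data samples into one "first environment
# information" record, keeping the latest sample seen for each angle.
def integrate_spatial_data(samples):
    """samples: list of (head_rotation_angle_deg, spatial_data) tuples."""
    by_angle = {}
    for angle, data in samples:
        by_angle[angle] = data   # later samples overwrite earlier ones
    return {"angles": sorted(by_angle), "data": by_angle}

first_env_info = integrate_spatial_data(
    [(0, "front_view"), (90, "right_view"), (0, "front_view_v2")]
)
```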
  • Step S102 the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
  • the first virtual reality device may be a head-mounted device, including virtual reality glasses or a virtual reality helmet; the first virtual reality device is configured to receive the user's setting instruction for the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, the first virtual reality device sends the setting instruction to the background server;
  • the setting instruction means that the user performs a click operation on the virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, various touch-screen operations, such as a pressing operation, a double-tap operation, or a swipe operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S103 the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device;
  • when receiving the setting request information sent by the first virtual reality device, the background server returns setting response information according to the setting request information, acquires the teaching environment parameter, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
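The request/response exchange in Step S103 can be sketched as follows. The field names (`type`, `status`, `params`) and the parameter values are hypothetical placeholders, not taken from the patent.

```python
# Hypothetical store of teaching environment parameters on the background
# server, keyed by the elements the claims enumerate.
TEACHING_PARAMS = {
    "virtual_projector": "projector.model",
    "virtual_desk": "desk.model",
    "virtual_seat": "seat.model",
    "virtual_classroom": "classroom.model",
}

def handle_setting_request(request):
    # Reject anything that is not a setting request.
    if request.get("type") != "setting":
        return {"status": "error", "params": None}
    # Return a setting response together with the teaching environment
    # parameters, which the device will cache.
    return {"status": "ok", "params": dict(TEACHING_PARAMS)}

response = handle_setting_request({"type": "setting", "app": "virtual_teaching"})
```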
  • Step S104 the first virtual reality device caches the received teaching environment parameter
  • the teaching scenario of the virtual teaching application may be set according to the teaching environment parameter; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • Step S105 the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device may capture, by using the camera, the environment data of the real environment in which the user is currently located, estimate and build a model of the corresponding teaching environment parameter from that environment data, extract the corresponding teaching environment parameter from the server, fuse it with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
  • for example, the first virtual reality device captures, through the camera, the user's current position in room A and detects that the user's head is slightly turned to the left; the first virtual reality device takes the user's position and head offset angle as the environment data, and extracts the teaching environment parameters corresponding to that environment data, so that the environment data and the teaching environment parameters are fused and the virtual teaching environment parameters are applied to the real environment; the real environment data and the virtual teaching scene are thus superimposed on the same picture or space in real time, with both existing at the same time. As a result, the user is positioned in the middle of the classroom in the virtual teaching environment, and the projection display interface is positioned slightly to the left, with the user as the dividing line.
  • Step S106 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives, by voice recognition, an operation instruction for the target subject in the first virtual teaching scene; wherein the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction;
  • at this time, the first virtual reality device may select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat according to the seat selection operation instruction.
  • for example, the first virtual reality device may receive, by voice recognition, a model selection operation instruction for a target subject (e.g., one of a plurality of subjects) in the first virtual teaching scene, so as to select the user's favorite teaching model; the teaching model may be a cartoon animal character model or a realistic celebrity simulation model.
  • the first capture time is then read, and a virtual target seat in the teaching environment parameters is arranged according to the seat selection operation instruction and the first capture time (e.g., a position slightly left of center, facing the projection screen).
  • the first capture time may be used to record the learning duration of the target subject, and may also be used in selecting the virtual target seat, so that other users who subsequently enter the English subject cannot occupy the selected target seat during that learning duration.
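The seat-holding behavior described above can be sketched as a small registry keyed by capture time and learning duration. The class, its method names, and the occupancy rule (a seat is held from its capture time for the recorded learning duration) are illustrative assumptions.

```python
# Hypothetical seat registry: a seat reserved at a capture time stays
# occupied for the recorded learning duration, so users who join later
# cannot take it until the duration elapses.
class SeatRegistry:
    def __init__(self):
        self._reservations = {}   # seat_id -> (capture_time, learning_duration)

    def reserve(self, seat_id, capture_time, learning_duration):
        held = self._reservations.get(seat_id)
        if held is not None:
            start, duration = held
            if capture_time < start + duration:   # seat still occupied
                return False
        self._reservations[seat_id] = (capture_time, learning_duration)
        return True

registry = SeatRegistry()
first_ok = registry.reserve("left_of_center", capture_time=100.0, learning_duration=60.0)
second_ok = registry.reserve("left_of_center", capture_time=130.0, learning_duration=60.0)  # blocked
third_ok = registry.reserve("left_of_center", capture_time=200.0, learning_duration=60.0)   # free again
```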
  • in summary, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience, fully engage the user's senses and thinking, and greatly improve the user's learning efficiency.
  • FIG. 2 is a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention. As shown in FIG. 2, the data processing method includes at least:
  • Step S201 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also has functions such as video calling and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, integrates the spatial data corresponding to each angle to generate the environment data, and uses the environment data as the first environment information.
  • Step S202 The first virtual reality device receives a setting instruction for the virtual teaching application, and receives and caches a teaching environment parameter returned by the background server according to the setting instruction.
  • the first virtual reality device receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server, so that the background server acquires the teaching environment parameter according to the setting instruction and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom; after the first virtual reality device receives the teaching environment parameter, it caches the teaching environment parameter.
  • the first virtual reality device may be a head-mounted device, including virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive the user's setting instruction for the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, the first virtual reality device sends the setting instruction to the background server;
  • the setting instruction may be used to send setting request information to the background server and cause the background server to return setting response information according to the setting request information, so as to extract the teaching environment parameter stored on the background server and set the teaching scenario of the virtual teaching application according to the teaching environment parameter; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the setting instruction means that the user performs a click operation on the virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, various touch-screen operations, such as a pressing operation, a double-tap operation, or a swipe operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S203 the first virtual reality device converts the teaching environment parameter cached in the graphics card cache into three-dimensional classroom interface data based on the active split-screen technology, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene; wherein the first capture time is used to record the learning duration of the target subject;
  • the first virtual reality device performs split-screen processing on the teaching environment parameters buffered in the underlying graphics card cache according to the active split-screen technology, so that the teaching environment parameters displayed by the system are equally divided; the divided data can then be converted into three-dimensional teaching interface data, after which the first virtual reality device fuses the captured first environment information with the three-dimensional teaching interface data to generate the first virtual teaching scene and display it;
  • the active split-screen technology implements split-screen processing through the system's underlying driver: it splits the screen starting from the system's underlying display buffer, applying an equal-division split in the FrameBuffer layer, so the screen can be split and, when paired with virtual reality glasses, achieve a 3D display effect; in addition, the first virtual reality device can capture, through the camera, the environment data of the real environment in which the user is currently located.
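A toy model of the equal-division split screen can make the idea concrete: each scan line is downscaled to half width and duplicated, so the headset's two lens regions receive matched images. Real drivers operate on framebuffer pixel memory; the list arithmetic here is purely illustrative.

```python
# Equal-division split screen, modeled on lists of pixel values.
def split_row(row):
    half = row[::2]       # crude horizontal downscale to half width
    return half + half    # left half + right half: equal division

def split_frame(frame):
    """frame: list of scan lines, each a list of pixel values."""
    return [split_row(row) for row in frame]

stereo = split_frame([[1, 2, 3, 4], [5, 6, 7, 8]])
# stereo == [[1, 3, 1, 3], [5, 7, 5, 7]]
```

A production implementation would render two slightly offset viewpoints rather than duplicating one image, which is what gives the stereoscopic depth the passage attributes to the glasses.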
  • Step S204 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives, by voice recognition, an operation instruction for the target subject in the first virtual teaching scene; wherein the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction; at this time, the first virtual reality device may select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat in the teaching environment parameters according to the seat selection operation instruction and the first capture time.
  • for example, the first virtual reality device may receive, by voice recognition, a model selection operation instruction for a target subject (e.g., a physics subject) in the first virtual teaching scene, so as to select the user's favorite teaching model; the teaching model may be a cartoon animal character model or a realistic celebrity simulation model.
  • regarding the teaching content call operation instruction: the first virtual reality device can receive, by voice recognition, an operation instruction for the teaching content of the target subject in the first virtual teaching scene, so that the first virtual reality device can overlay the virtual image extracted from the background server onto the real-world picture; that is, within a certain projection distance, the computer-generated virtual image is fused and anchored in the real-world picture at the user's position through the active split-screen technology, and the superimposed three-dimensional image is output. For example, a model of a long-extinct dinosaur can be placed around the user's location, or a simulated scene of the launch of the Shenzhou-11 spacecraft can be reproduced before the user's eyes; such vivid 3D images allow more users to get to know the technology around them.
  • further, the data processing method includes the following steps:
  • the first virtual reality device may further send the first virtual teaching scenario to a user terminal having a wireless connection relationship with the first virtual reality device based on a wireless video transmission technology, so that the user terminal Displaying the first virtual teaching scene;
  • the user terminal includes: a smart TV, a laptop, a PDA, a gaming peripheral, or a tablet.
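The wireless mirroring step above can be sketched as a simple broadcaster fan-out. The class and method names are assumptions, and the list append stands in for an actual wireless video stream to each terminal.

```python
# Illustrative broadcast of the rendered scene to connected user terminals.
class SceneBroadcaster:
    def __init__(self):
        self._terminals = []    # e.g. smart TV, laptop, tablet sinks

    def connect(self, terminal_sink):
        self._terminals.append(terminal_sink)

    def broadcast(self, scene_frame):
        # Send the same frame to every connected terminal.
        for sink in self._terminals:
            sink.append(scene_frame)

smart_tv, tablet = [], []
broadcaster = SceneBroadcaster()
broadcaster.connect(smart_tv)
broadcaster.connect(tablet)
broadcaster.broadcast("first_virtual_teaching_scene")
```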
  • in summary, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device converts the teaching environment parameter cached in the graphics card cache into three-dimensional classroom interface data based on the active split-screen technology, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, where the first capture time is used to record the learning duration of the target subject; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. It can be seen that, with the present invention, the teaching content displayed by the system can be equally divided at the underlying driver level, so that the application interface in the system achieves the active split-screen effect.
  • FIG. 3 is a schematic flowchart of still another method for processing data based on virtual reality according to an embodiment of the present invention.
  • the data processing method includes at least:
  • Step S301 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also has functions such as video calling and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, integrates the spatial data corresponding to each angle to generate the environment data, and uses the environment data as the first environment information.
  • Step S302 the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
  • Step S303 the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • Step S304 the first virtual reality device caches the received teaching environment parameter
  • the first virtual reality device may be a head-mounted device, including virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive the user's setting instruction for the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, send the setting instruction to the background server;
  • the setting instruction may be used to send setting request information to the background server and cause the background server to return setting response information according to the setting request information, so as to extract the teaching environment parameter stored on the background server and set the teaching scenario of the virtual teaching application according to the teaching environment parameter; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the setting instruction means that the user performs a click operation on the virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, various touch-screen operations, such as a pressing operation, a double-tap operation, or a swipe operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S305 the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device may capture, by using the camera, the environment data of the real environment in which the user is currently located, estimate and build a model of the corresponding teaching environment parameter from that environment data, extract the corresponding teaching environment parameters from the server, fuse the teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
  • Step S306 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives, by voice recognition, an operation instruction for the target subject in the first virtual teaching scene; wherein the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction; at this time, the first virtual reality device may select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat in the teaching environment parameters according to the seat selection operation instruction and the first capture time.
  • Step S307: the second virtual reality device sends a join request to the background server;
  • Step S308: the background server forwards the received join request to the first virtual reality device;
  • Step S309: the first virtual reality device generates a confirmation response message corresponding to the join request and sends the confirmation response message to the second virtual reality device;
  • Step S310: the second virtual reality device uploads second environment information to the background server according to the confirmation response message and records a second capture time;
  • Specifically, the second virtual reality device may capture the current environment data through a front or rear camera, use the environment data as the second environment information, and record the second capture time corresponding to the second environment information; in addition, the camera also supports functions such as video calling and projection. After receiving the confirmation response message sent by the first virtual reality device, the second virtual reality device uploads the second environment information to the background server;
  • The second environment information is obtained by the second virtual reality device detecting the user's head rotation angle in real time, capturing the spatial data within the corresponding camera area, integrating the spatial data to generate the environment data, and uploading the environment data to the background server as the second environment information to obtain the second teaching scene data of the target subject corresponding to the second capture time.
  • Step S311: the background server fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device, and generates second teaching scene data corresponding to the second virtual reality device;
  • Specifically, after receiving the second environment information uploaded by the second virtual reality device, the background server fuses the second environment information with the first virtual teaching scene to generate the first teaching scene data corresponding to the first virtual reality device and sends it to the first virtual reality device; at the same time, it sends the generated second teaching scene data corresponding to the second virtual reality device to the second virtual reality device, so that the two virtual reality devices can each display the simulated teaching scene from a different orientation or viewing angle within their corresponding virtual teaching scenes.
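The server-side fusion step above — one shared scene, two per-device scene-data records with different viewpoints — can be sketched minimally. The dictionary shapes and field names are assumptions for illustration, not the patent's data format.

```python
# Minimal sketch of the background server's fusion step: merge the first
# virtual teaching scene with the second device's environment information and
# emit one scene-data record per device, each carrying its own viewpoint.

def fuse_scenes(first_scene: dict, second_env: dict) -> tuple:
    # fields shared by both devices after fusion
    shared = {
        "classroom": first_scene["classroom"],
        "capture_time": second_env["capture_time"],
    }
    first_data = {**shared, "viewpoint": first_scene["viewpoint"]}
    second_data = {**shared, "viewpoint": second_env["viewpoint"]}
    return first_data, second_data

first_scene = {"classroom": "room-A", "viewpoint": "front-left"}
second_env = {"capture_time": "10:05", "viewpoint": "rear-right"}
d1, d2 = fuse_scenes(first_scene, second_env)
```

Each device then renders the same classroom and capture time but from its own orientation, matching the "different orientations or viewing angles" behavior described above.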
  • Step S312: the first virtual reality device receives the first teaching scene data sent by the background server and, in combination with the second capture time, updates and displays the first virtual teaching scene;
  • Step S313: the second virtual reality device receives the second teaching scene data sent by the background server and, in combination with the second capture time, generates and displays a second virtual teaching scene.
  • Optionally, the data processing method further includes the following step: based on wireless video transmission technology, the first virtual reality device may also send the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene;
  • The user terminal includes smart TVs, laptops, handheld computers, gaming peripherals, and tablets.
  • As can be seen from the above, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; next, it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; it then receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; afterwards, upon receiving the join request from the second virtual reality device, the background server forwards the join request to the first virtual reality device, so that the second virtual reality device uploads the second environment information after receiving the confirmation response message;
  • Finally, the background server fuses the first virtual teaching scene with the second environment information, generates the first teaching scene data corresponding to the first virtual reality device and the second teaching scene data corresponding to the second virtual reality device, and sends them to the two devices respectively.
  • Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also provides a rich virtual interactive platform for multiple users, thereby creating a richer and more diverse visual experience, greatly helping users understand the teaching content and improving their learning efficiency.
  • FIG. 4 is a schematic structural diagram of a data processing system based on virtual reality according to an embodiment of the present invention.
  • The data processing system 1 includes a first virtual reality device 10 and a background server 20;
  • the first virtual reality device 10 is configured to capture first environment information by using a camera, and record a first capture time;
  • Specifically, the first virtual reality device 10 may capture the environment data of the user's current surroundings through a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
  • the first virtual reality device 10 is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
  • The background server 20 is configured to acquire teaching environment parameters according to the setting instruction and send the teaching environment parameters to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
  • The first virtual reality device 10 is further configured to cache the received teaching environment parameters;
  • Specifically, the first virtual reality device 10 receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20, so that the background server 20 obtains the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device 10; subsequently, the first virtual reality device 10 caches the received teaching environment parameters.
  • the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • The first virtual reality device 10 may be a head-mounted device such as virtual reality glasses or a virtual reality helmet; it may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the background server;
  • The first virtual reality device 10 sends setting request information to the background server 20 and causes the background server 20 to return setting response information according to the setting request information, so as to extract the teaching environment parameters stored on the background server 20 and set the teaching scene of the virtual teaching application according to those parameters, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
  • The setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, various types of touch-screen operations, such as a press, a double-click, or a swipe.
  • Typically, the touch screen has at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing related technologies already enable the touch panel layer and the display panel layer to be integrated.
  • The first virtual reality device 10 is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
  • Specifically, the first virtual reality device 10 may be configured to capture, through the camera, the environment data of the real environment in which the user is currently located, estimate and form a model of the corresponding teaching environment parameters according to the environment data so as to extract the corresponding teaching environment parameters from the background server 20, and fuse the teaching environment parameters with the captured first environment information, thereby generating the first virtual teaching scene according to the first capture time and displaying the first virtual teaching scene;
  • The first virtual reality device 10 is further configured to receive an operation instruction for a target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model;
  • Specifically, the first virtual reality device 10 is configured to receive, through voice recognition, an operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction; at this point, the first virtual reality device 10 is further configured to select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
  • As can be seen from the above, the first virtual reality device 10 first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server 20 according to the setting instruction; then it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, it receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also creates a richer and more diverse visual experience, fully engages users' senses and thinking, and greatly improves users' learning efficiency.
  • FIG. 5 is a schematic structural diagram of another virtual-reality-based data processing system according to an embodiment of the present invention.
  • The data processing system 1 includes the first virtual reality device 10 and the background server 20 of the embodiment corresponding to FIG. 4; further, the data processing system 1 also includes a second virtual reality device 30;
  • the second virtual reality device 30 is configured to send a join request to the background server 20;
  • the background server 20 is further configured to forward the received join request to the first virtual reality device 10;
  • the first virtual reality device 10 is further configured to generate an acknowledgment response message corresponding to the join request, and send the acknowledgment response message to the second virtual reality device 30;
  • the second virtual reality device 30 is further configured to upload the second environment information to the background server 20 according to the confirmation response message, and record the second capture time;
  • the background server 20 is further configured to merge the first virtual teaching scenario with the second environment information, generate first teaching scene data corresponding to the first virtual reality device 10, and generate and The second teaching scene data corresponding to the second virtual reality device 30;
  • the first virtual reality device 10 is further configured to receive the first teaching scene data sent by the background server 20, and update and display the first virtual teaching scene according to the second capturing time;
  • the second virtual reality device 30 is further configured to receive the second teaching scene data sent by the background server 20, and generate and display a second virtual teaching scene according to the second capturing time.
  • Optionally, the first virtual reality device 10 is further configured to send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device 10, so that the user terminal displays the first virtual teaching scene.
  • As can be seen, the first virtual reality device 10 first receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20; secondly, the background server 20 obtains the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device 10; then, the first virtual reality device 10 caches the received teaching environment parameters, fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; next, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; finally, upon receiving the join request of the second virtual reality device 30, the background server 20 forwards the join request to the first virtual reality device 10, so that the second virtual reality device 30 uploads the second environment information after receiving the confirmation response message; subsequently, the background server 20 fuses the first virtual teaching scene with the second environment information, generates the first teaching scene data corresponding to the first virtual reality device 10, and generates the second teaching scene data corresponding to the second virtual reality device 30.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A virtual-reality-based data processing method and system, wherein the method includes: a first virtual reality device captures first environment information through a camera and records a first capture time; the first virtual reality device receives a setting instruction for a virtual teaching application, and receives and caches teaching environment parameters returned by a background server according to the setting instruction; the first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, and generates and displays a first virtual teaching scene according to the first capture time; the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. A realistic and rich simulated teaching experience can thereby be provided to improve users' learning efficiency.

Description

Virtual-Reality-Based Data Processing Method and System — Technical Field
The present invention relates to the field of computer applications, and in particular to a virtual-reality-based data processing method and system.
Background Art
With the progress of human civilization, electronic devices with data processing and data communication functions have increasingly become part of people's daily lives; for example, smartphones, tablet computers, and other electronic devices have become an indispensable part of everyday life and, to a certain extent, satisfy users' needs for games, movies, music, and other entertainment. For a smartphone, however, when a user operates a teaching application on its screen through the touch screen, the display of the teaching content is limited by the phone screen: the presentation is mediocre, even monotonous, and differs greatly from learning in a real environment. For example, when displaying the teaching content of molecular collisions in physics, the user can only visually observe particle motion on a two-dimensional plane and cannot explore the virtual display environment from different angles, so the user's simulation resources are extremely limited and the user cannot fully enjoy an immersive experience.
Summary of the Invention
The technical problem to be solved by the embodiments of the present invention is to provide a virtual-reality-based data processing method and data processing system that can enrich users' experience resources and provide users with realistic teaching scenes, so that users can fully enjoy an immersive experience.
To solve the above technical problem, a first aspect of the embodiments of the present invention provides a virtual-reality-based data processing method, the data processing method including:
a first virtual reality device captures first environment information through a camera and records a first capture time;
the first virtual reality device receives a setting instruction for a virtual teaching application and sends the setting instruction to a background server;
the background server obtains teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
the first virtual reality device caches the received teaching environment parameters;
the first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
A second aspect of the embodiments of the present invention provides a virtual-reality-based data processing system, including a first virtual reality device and a background server;
the first virtual reality device is configured to capture first environment information through a camera and record a first capture time;
the first virtual reality device is further configured to receive a setting instruction for a virtual teaching application and send the setting instruction to the background server;
the background server is configured to obtain teaching environment parameters according to the setting instruction and send the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
the first virtual reality device is further configured to cache the received teaching environment parameters;
the first virtual reality device is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
the first virtual reality device is further configured to receive an operation instruction for a target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model.
As can be seen from the above, implementing the embodiments of the present invention has the following beneficial effects: the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; then it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, it receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also creates a richer and more diverse visual experience, fully engages users' senses and thinking, and greatly improves users' learning efficiency.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a virtual-reality-based data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another virtual-reality-based data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of yet another virtual-reality-based data processing method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a virtual-reality-based data processing system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another virtual-reality-based data processing system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The terms "including" and "having" and any variants thereof in the specification, claims, and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or other steps or units inherent to the process, method, product, or device.
The execution of the virtual-reality-based data processing method mentioned in the embodiments of the present invention depends on a computer program and can run on a computer system of the von Neumann architecture. The computer program may be integrated into an application or run as an independent utility application. The computer system may be a terminal device such as a personal computer, tablet computer, laptop, or smartphone.
Detailed descriptions are given below.
Referring to FIG. 1, which is a schematic flowchart of a virtual-reality-based data processing method according to an embodiment of the present invention, as shown in FIG. 1, the data processing method at least includes:
Step S101: a first virtual reality device captures first environment information through a camera and records a first capture time;
Specifically, the first virtual reality device may capture the environment data of the user's current surroundings through a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
The first environment information is obtained by the first virtual reality device detecting the user's head rotation angle in real time, capturing in real time the spatial data within the corresponding camera area according to the rotation angle, integrating the spatial data corresponding to each angle to generate the environment data, and using the environment data as the first environment information.
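The per-angle integration described above can be sketched as follows. This is only an illustrative assumption about the data structures: the patent does not specify how the spatial captures are keyed or merged.

```python
# Sketch of assembling the first environment information: the device samples
# the head rotation angle, captures spatial data for each angle, and merges
# the per-angle pieces into one environment-data record.

def integrate_environment(samples: list) -> dict:
    """Merge per-angle spatial captures into a single environment record."""
    merged = {}
    for sample in samples:
        # a later capture of the same angle overwrites an earlier one
        merged[sample["angle"]] = sample["spatial_data"]
    return {"coverage_deg": sorted(merged), "spatial": merged}

samples = [
    {"angle": -30, "spatial_data": "left-wall"},
    {"angle": 0, "spatial_data": "desk"},
    {"angle": 30, "spatial_data": "window"},
]
env = integrate_environment(samples)
```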
Step S102: the first virtual reality device receives a setting instruction for a virtual teaching application and sends the setting instruction to a background server;
Specifically, the first virtual reality device may be a head-mounted device such as virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the background server;
The setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, various types of touch-screen operations, such as a press, a double-click, or a swipe. Typically, in a terminal with a touch-screen function, the touch screen has at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing related technologies already enable the touch panel layer and the display panel layer to be integrated.
Step S103: the background server obtains teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device;
Specifically, upon receiving the setting request information sent by the first virtual reality device, the background server returns setting response information according to the setting request information, obtains the teaching environment parameters, and sends the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
Step S104: the first virtual reality device caches the received teaching environment parameters;
Specifically, upon receiving the teaching environment parameters returned by the background server, the first virtual reality device may set the teaching scene of the virtual teaching application according to the teaching environment parameters, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
Step S105: the first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
Specifically, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, estimate and form a model of the corresponding teaching environment parameters according to the environment data so as to extract the corresponding teaching environment parameters from the background server, and fuse the teaching environment parameters with the captured first environment information, thereby generating the first virtual teaching scene according to the first capture time and displaying the first virtual teaching scene;
For example, if the first virtual reality device captures through the camera that the user is currently in the middle of room A and detects that the user's head is offset slightly to the left, the first virtual reality device takes the user's position, head-offset angle, and so on as the environment data, and extracts the teaching environment parameters corresponding to the environment data, so that the environment data and the teaching environment parameters are fused: the virtual teaching environment parameters are applied to the real teaching environment, and the real environment data and the virtual teaching scene are superimposed in real time into the same picture or space, with both existing simultaneously. At this point, the user is located in the middle of the classroom in the virtual teaching environment, and the projection display interface is positioned slightly to the left of the user.
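The placement example above — user position plus head-offset angle deciding where the virtual projection screen lands — can be sketched with simple trigonometry. The function name, the fixed anchoring distance, and the coordinate convention are assumptions for illustration only.

```python
import math

# Illustrative sketch: the user's position and head-offset angle (the
# "environment data") decide where the virtual projection screen is anchored
# in the combined picture.

def place_projection(user_pos, head_yaw_deg, distance=2.0):
    """Anchor the virtual screen `distance` metres ahead of the user,
    shifted by the head yaw (negative yaw = turned left)."""
    x, y = user_pos
    yaw = math.radians(head_yaw_deg)
    return (round(x + distance * math.sin(yaw), 3),
            round(y + distance * math.cos(yaw), 3))

# Head turned slightly left -> screen lands slightly left of centre.
pos = place_projection(user_pos=(0.0, 0.0), head_yaw_deg=-10.0)
```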
Step S106: the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
Specifically, the first virtual reality device receives, through voice recognition, an operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction;
At this point, the first virtual reality device may select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
For example, taking the model selection operation instruction, the first virtual reality device may receive, through voice recognition, a model selection operation instruction for a target subject (for example, mathematics) in the first virtual teaching scene to select the user's favorite character teaching model. The character teaching model may be a vivid cartoon animal model or a lifelike celebrity simulation model.
As another example, taking the seat selection operation instruction, after receiving through voice recognition a seat selection instruction for the English subject in the first virtual teaching scene, the first virtual reality device reads the first capture time and arranges a virtual target seat among the teaching environment parameters (for example, a position slightly left of center directly facing the projection screen) according to the seat selection operation instruction and the first capture time. In addition, the first capture time may be used to record the learning duration of the target subject and to select the virtual target seat, so that other users who subsequently enter the English subject cannot repeatedly occupy the seat already selected within that learning duration.
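The seat-selection rule above — a seat held for a recorded learning duration cannot be re-occupied until that duration elapses — can be sketched as follows. The registry class and the minute-based time model are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of the seat-reservation rule: the first capture time starts a
# learning duration, and the chosen seat stays blocked for other users until
# that duration has elapsed.

class SeatRegistry:
    def __init__(self):
        self._taken = {}  # seat id -> (start_minute, duration_minutes)

    def reserve(self, seat: str, start_minute: int, duration: int) -> bool:
        held = self._taken.get(seat)
        if held is not None:
            held_start, held_duration = held
            if start_minute < held_start + held_duration:
                return False  # still inside another user's learning duration
        self._taken[seat] = (start_minute, duration)
        return True

registry = SeatRegistry()
ok_first = registry.reserve("center-left", start_minute=0, duration=45)
ok_overlap = registry.reserve("center-left", start_minute=30, duration=45)
ok_later = registry.reserve("center-left", start_minute=50, duration=45)
```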
As can be seen from the above, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; then it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, it receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also creates a richer and more diverse visual experience, fully engages users' senses and thinking, and greatly improves users' learning efficiency.
Further, referring to FIG. 2, which is a schematic flowchart of another virtual-reality-based data processing method according to an embodiment of the present invention, as shown in FIG. 2, the data processing method at least includes:
Step S201: a first virtual reality device captures first environment information through a camera and records a first capture time;
Specifically, the first virtual reality device may capture the environment data of the user's current surroundings through a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
The first environment information is obtained by the first virtual reality device detecting the user's head rotation angle in real time, capturing in real time the spatial data within the corresponding camera area according to the rotation angle, integrating the spatial data corresponding to each angle to generate the environment data, and using the environment data as the first environment information.
Step S202: the first virtual reality device receives a setting instruction for a virtual teaching application, and receives and caches the teaching environment parameters returned by a background server according to the setting instruction;
Specifically, the first virtual reality device receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server, so that the background server obtains the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom; after receiving the teaching environment parameters, the first virtual reality device caches them.
The first virtual reality device may be a head-mounted device such as virtual reality glasses or a virtual reality helmet; it may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the background server;
The setting instruction may be used to send setting request information to the background server and cause the background server to return setting response information according to the setting request information, so as to extract the teaching environment parameters stored on the background server, and the teaching scene of the virtual teaching application may be set according to the teaching environment parameters, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
In addition, the setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, various types of touch-screen operations, such as a press, a double-click, or a swipe. Typically, in a terminal with a touch-screen function, the touch screen has at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing related technologies already enable the touch panel layer and the display panel layer to be integrated.
Step S203: based on active split-screen technology, the first virtual reality device converts the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, wherein the first capture time is used to record the learning duration of the target subject;
Specifically, based on active split-screen technology, the first virtual reality device performs split-screen processing on the teaching environment parameters cached in the system's underlying graphics card buffer, so that after equal-ratio split-screen processing the teaching environment parameters displayed by the system can be converted into three-dimensional teaching interface data; subsequently, the first virtual reality device fuses the captured first environment information with the three-dimensional teaching interface data to generate the first virtual teaching scene and display the first virtual teaching scene;
Active split-screen technology implements split-screen processing through the system's underlying driver: the split is performed from the system's underlying display buffer, and a proprietary algorithm performs equal-ratio split-screen processing at the FrameBuffer layer, so that all content displayed by the system can be split and, viewed through virtual reality glasses, achieves a 3D display effect. In addition, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, estimate and form a corresponding teaching environment parameter model according to the environment data so as to extract the corresponding teaching environment parameters from the background server, and fuse the teaching environment parameters with the captured first environment information, thereby generating the first virtual teaching scene according to the first capture time and displaying the first virtual teaching scene;
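The "equal-ratio split-screen" idea described above can be illustrated in miniature: one frame is scaled by a fixed ratio and duplicated into side-by-side left/right views, one per eye. Plain nested lists stand in for the FrameBuffer layer here; a real driver would operate on GPU memory, and the column-dropping scale below is only an assumed stand-in for the proprietary algorithm.

```python
# Minimal sketch of equal-ratio split-screen: scale the frame horizontally by
# a fixed ratio (keep every second column) and place two copies side by side,
# producing a stereo frame of the original width.

def split_screen(frame):
    half = [row[::2] for row in frame]        # equal-ratio horizontal scale
    return [left + left[:] for left in half]  # left and right eye views

frame = [[1, 2, 3, 4], [5, 6, 7, 8]]
stereo = split_screen(frame)
```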
Step S204: the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model;
Specifically, the first virtual reality device receives, through voice recognition, an operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction; at this point, the first virtual reality device may select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
For example, taking the model selection operation instruction, the first virtual reality device may receive, through voice recognition, a model selection operation instruction for a target subject (for example, physics) in the first virtual teaching scene to select the user's favorite character teaching model. The character teaching model may be a vivid cartoon animal model or a lifelike celebrity simulation model.
As another example, taking the teaching content invocation operation instruction, after receiving through voice recognition a teaching content invocation operation instruction for the target subject in the first virtual teaching scene, the first virtual reality device can overlay virtual images extracted from the background server onto the real-world picture: within a certain projection distance, active split-screen technology fuses and anchors the computer-generated virtual image into the real-world picture at the user's location, and finally outputs the superimposed three-dimensional image. For example, a model of a long-extinct dinosaur can be shown to the user and placed around the user's location, or a simulated scene of the launch and ascent of the Shenzhou-11 spacecraft can be recreated before the user's eyes, giving the user vivid 3D images and letting more users learn about the technology around them in the most lifelike way.
Optionally, after the above steps S201 to S204 are performed, the data processing method further includes the following step:
Step S205: based on wireless video transmission technology, the first virtual reality device may also send the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene;
The user terminal includes smart TVs, laptops, handheld computers, gaming peripherals, and tablets.
As can be seen, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; then, based on active split-screen technology, it converts the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, wherein the first capture time is used to record the learning duration of the target subject; finally, it receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. It can be seen that with the present invention, the teaching content displayed by the system is split at equal ratio in the underlying driver, so that application interfaces in the system achieve an active split-screen effect; this fundamentally improves the 3D display effect, enriches users' learning resources, and provides users with diverse, realistic virtual teaching scenes.
Further, referring to FIG. 3, which is a schematic flowchart of yet another virtual-reality-based data processing method according to an embodiment of the present invention, as shown in FIG. 3, the data processing method at least includes:
Step S301: a first virtual reality device captures first environment information through a camera and records a first capture time;
Specifically, the first virtual reality device may capture the environment data of the user's current surroundings through a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
The first environment information is obtained by the first virtual reality device detecting the user's head rotation angle in real time, capturing in real time the spatial data within the corresponding camera area according to the rotation angle, integrating the spatial data corresponding to each angle to generate the environment data, and using the environment data as the first environment information.
Step S302: the first virtual reality device receives a setting instruction for a virtual teaching application and sends the setting instruction to a background server;
Step S303: the background server obtains teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
Step S304: the first virtual reality device caches the received teaching environment parameters;
Specifically, the first virtual reality device may be a head-mounted device such as virtual reality glasses or a virtual reality helmet; it may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the background server;
The setting instruction may be used to send setting request information to the background server and cause the background server to return setting response information according to the setting request information, so as to extract the teaching environment parameters stored on the background server, and the teaching scene of the virtual teaching application may be set according to the teaching environment parameters, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
In addition, the setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, various types of touch-screen operations, such as a press, a double-click, or a swipe. Typically, in a terminal with a touch-screen function, the touch screen has at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing related technologies already enable the touch panel layer and the display panel layer to be integrated.
Step S305: the first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
Specifically, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, estimate and form a model of the corresponding teaching environment parameters according to the environment data so as to extract the corresponding teaching environment parameters from the background server, and fuse the teaching environment parameters with the captured first environment information, thereby generating the first virtual teaching scene according to the first capture time and displaying the first virtual teaching scene;
Step S306: the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
Specifically, the first virtual reality device receives, through voice recognition, an operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction; at this point, the first virtual reality device may select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
Step S307: a second virtual reality device sends a join request to the background server;
Step S308: the background server forwards the received join request to the first virtual reality device;
Step S309: the first virtual reality device generates a confirmation response message corresponding to the join request and sends the confirmation response message to the second virtual reality device;
Step S310: the second virtual reality device uploads second environment information to the background server according to the confirmation response message and records a second capture time;
Specifically, the second virtual reality device may capture the current environment data through a front or rear camera, use the environment data as the second environment information, and record the second capture time corresponding to the second environment information; in addition, the camera also supports functions such as video calling and projection. After receiving the confirmation response message sent by the first virtual reality device, the second virtual reality device uploads the second environment information to the background server;
The second environment information is obtained by the second virtual reality device detecting the user's head rotation angle in real time, capturing the spatial data within the corresponding camera area, integrating the spatial data to generate the environment data, and uploading the environment data to the background server as the second environment information to obtain the second teaching scene data of the target subject corresponding to the second capture time.
Step S311: the background server fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device, and generates second teaching scene data corresponding to the second virtual reality device;
Specifically, after receiving the second environment information uploaded by the second virtual reality device, the background server fuses the second environment information with the first virtual teaching scene to generate the first teaching scene data corresponding to the first virtual reality device and sends it to the first virtual reality device; at the same time, it sends the generated second teaching scene data corresponding to the second virtual reality device to the second virtual reality device, so that the two virtual reality devices can each display the simulated teaching scene from a different orientation or viewing angle within their corresponding virtual teaching scenes.
Step S312: the first virtual reality device receives the first teaching scene data sent by the background server and, in combination with the second capture time, updates and displays the first virtual teaching scene;
Step S313: the second virtual reality device receives the second teaching scene data sent by the background server and, in combination with the second capture time, generates and displays a second virtual teaching scene.
Optionally, after the above steps S301 to S313 are performed, the data processing method further includes the following step: based on wireless video transmission technology, the first virtual reality device may also send the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene;
The user terminal includes smart TVs, laptops, handheld computers, gaming peripherals, and tablets.
As can be seen from the above, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; next, it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; it then receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; afterwards, upon receiving the join request of the second virtual reality device, the background server forwards the join request to the first virtual reality device, so that the second virtual reality device uploads the second environment information after receiving the confirmation response message; finally, the background server fuses the first virtual teaching scene with the second environment information, generates the first teaching scene data corresponding to the first virtual reality device and the second teaching scene data corresponding to the second virtual reality device, and sends them to the first virtual reality device and the second virtual reality device respectively. Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also provides a rich virtual interactive platform for multiple users, thereby creating a richer and more diverse visual experience, greatly helping users understand the teaching content and improving their learning efficiency.
Further, referring to FIG. 4, which is a schematic structural diagram of a virtual-reality-based data processing system according to an embodiment of the present invention, as shown in FIG. 4, the data processing system 1 includes a first virtual reality device 10 and a background server 20;
The first virtual reality device 10 is configured to capture first environment information through a camera and record a first capture time;
Specifically, the first virtual reality device 10 may capture the environment data of the user's current surroundings through a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
The first environment information is obtained by the first virtual reality device 10 detecting the user's head rotation angle in real time, capturing in real time the spatial data within the corresponding camera area according to the rotation angle, integrating the spatial data corresponding to each angle to generate the environment data, and using the environment data as the first environment information.
The first virtual reality device 10 is further configured to receive a setting instruction for a virtual teaching application and send the setting instruction to the background server;
The background server 20 is configured to obtain teaching environment parameters according to the setting instruction and send the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
The first virtual reality device 10 is further configured to cache the received teaching environment parameters;
Specifically, the first virtual reality device 10 receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20, so that the background server 20 obtains the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device 10; subsequently, the first virtual reality device 10 caches the received teaching environment parameters.
The teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
In addition, the first virtual reality device 10 may be a head-mounted device such as virtual reality glasses or a virtual reality helmet; it may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the background server;
The first virtual reality device 10 sends setting request information to the background server 20 and causes the background server 20 to return setting response information according to the setting request information, so as to extract the teaching environment parameters stored on the background server 20, and the teaching scene of the virtual teaching application may be set according to the teaching environment parameters, wherein the teaching environment parameters include a virtual projector, virtual desks, virtual seats, and a virtual classroom;
The setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, various types of touch-screen operations, such as a press, a double-click, or a swipe. Typically, in a terminal with a touch-screen function, the touch screen has at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing related technologies already enable the touch panel layer and the display panel layer to be integrated.
The first virtual reality device 10 is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
Specifically, the first virtual reality device 10 may be configured to capture, through the camera, the environment data of the real environment in which the user is currently located, estimate and form a model of the corresponding teaching environment parameters according to the environment data so as to extract the corresponding teaching environment parameters from the background server 20, and fuse the teaching environment parameters with the captured first environment information, thereby generating the first virtual teaching scene according to the first capture time and displaying the first virtual teaching scene;
The first virtual reality device 10 is further configured to receive an operation instruction for a target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model;
Specifically, the first virtual reality device 10 is configured to receive, through voice recognition, an operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction; at this point, the first virtual reality device 10 is further configured to select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
As can be seen from the above, the first virtual reality device 10 first captures the first environment information through the camera and records the first capture time; secondly, it receives a setting instruction for the virtual teaching application and receives and caches the teaching environment parameters returned by the background server 20 according to the setting instruction; then it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, it receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention not only provides users with realistic simulated teaching scenes, but also creates a richer and more diverse visual experience, fully engages users' senses and thinking, and greatly improves users' learning efficiency.
Further, referring to FIG. 5, which shows another virtual-reality-based data processing system according to an embodiment of the present invention, the data processing system 1 includes the first virtual reality device 10 and the background server 20 of the embodiment corresponding to FIG. 4; further, the data processing system 1 also includes a second virtual reality device 30;
The second virtual reality device 30 is configured to send a join request to the background server 20;
The background server 20 is further configured to forward the received join request to the first virtual reality device 10;
The first virtual reality device 10 is further configured to generate a confirmation response message corresponding to the join request and send the confirmation response message to the second virtual reality device 30;
The second virtual reality device 30 is further configured to upload second environment information to the background server 20 according to the confirmation response message and record a second capture time;
The background server 20 is further configured to fuse the first virtual teaching scene with the second environment information, generate first teaching scene data corresponding to the first virtual reality device 10, and generate second teaching scene data corresponding to the second virtual reality device 30;
The first virtual reality device 10 is further configured to receive the first teaching scene data sent by the background server 20 and, in combination with the second capture time, update and display the first virtual teaching scene;
The second virtual reality device 30 is further configured to receive the second teaching scene data sent by the background server 20 and, in combination with the second capture time, generate and display a second virtual teaching scene.
Optionally, in the embodiments given in FIG. 4 or FIG. 5, the first virtual reality device 10 may also be configured to send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device 10, so that the user terminal displays the first virtual teaching scene.
As can be seen, the first virtual reality device 10 first receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20; secondly, the background server 20 obtains the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device 10; then, the first virtual reality device 10 caches the received teaching environment parameters, fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; next, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; finally, upon receiving the join request of the second virtual reality device 30, the background server 20 forwards the join request to the first virtual reality device 10, so that the second virtual reality device 30 uploads the second environment information after receiving the confirmation response message; subsequently, the background server 20 fuses the first virtual teaching scene with the second environment information, generates the first teaching scene data corresponding to the first virtual reality device 10, and generates the second teaching scene data corresponding to the second virtual reality device 30. It can be seen that the present invention can provide multiple users with a realistic virtual interactive platform, enrich users' learning resources, and let users fully enjoy an immersive teaching experience, thereby improving their learning efficiency.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
What is disclosed above is only a preferred embodiment of the present invention, which of course cannot be used to limit the scope of rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (10)

  1. A virtual-reality-based data processing method, characterized by comprising:
    a first virtual reality device capturing first environment information through a camera and recording a first capture time;
    the first virtual reality device receiving a setting instruction for a virtual teaching application and sending the setting instruction to a backend server;
    the backend server obtaining teaching environment parameters according to the setting instruction and sending the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
    the first virtual reality device caching the received teaching environment parameters;
    the first virtual reality device fusing the cached teaching environment parameters with the captured first environment information, generating a first virtual teaching scene according to the first capture time, and displaying the first virtual teaching scene;
    the first virtual reality device receiving an operation instruction for a target subject in the first virtual teaching scene, selecting a corresponding character teaching model according to the operation instruction, and presenting the teaching content of the target subject based on the character model.
  2. The method according to claim 1, characterized in that the first virtual reality device fusing the cached teaching environment parameters with the captured first environment information, generating the first virtual teaching scene according to the first capture time, and displaying the first virtual teaching scene comprises:
    converting, based on active split-screen technology, the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and fusing the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, wherein the first capture time is used to record the study duration of the target subject.
  3. The method according to claim 1, characterized in that the first virtual reality device receiving the operation instruction for the target subject in the first virtual teaching scene, selecting the corresponding character teaching model according to the operation instruction, and presenting the teaching content of the target subject based on the character model comprises:
    the first virtual reality device receiving, through speech recognition, the operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection instruction, a teaching content invocation instruction, and a seat selection instruction;
    the first virtual reality device selecting the corresponding character teaching model according to the model selection instruction, presenting the teaching content of the target subject according to the teaching content invocation instruction, and arranging a virtual target seat among the teaching environment parameters according to the seat selection instruction and the first capture time.
  4. The method according to claim 1, characterized in that after the first virtual reality device receives the operation instruction for the target subject in the first virtual teaching scene and selects the corresponding character teaching model according to the operation instruction so that the character model controls the teaching progress of the target subject through speech recognition, the method further comprises:
    a second virtual reality device sending a join request to the backend server;
    the backend server forwarding the received join request to the first virtual reality device;
    the first virtual reality device generating a confirmation response message corresponding to the join request and sending the confirmation response message to the second virtual reality device;
    the second virtual reality device uploading second environment information to the backend server according to the confirmation response message and recording a second capture time;
    the backend server fusing the first virtual teaching scene with the second environment information, generating first teaching scene data corresponding to the first virtual reality device and second teaching scene data corresponding to the second virtual reality device;
    the first virtual reality device receiving the first teaching scene data sent by the backend server and, in combination with the second capture time, updating the display of the first virtual teaching scene;
    the second virtual reality device receiving the second teaching scene data sent by the backend server and, in combination with the second capture time, generating and displaying a second virtual teaching scene.
  5. The method according to any one of claims 1 to 4, characterized by further comprising:
    the first virtual reality device sending, based on wireless video transmission technology, the first virtual teaching scene to a user terminal wirelessly connected to the first virtual reality device, so that the user terminal displays the first virtual teaching scene.
  6. A virtual-reality-based data processing system, characterized in that the data processing system comprises a first virtual reality device and a backend server;
    the first virtual reality device is configured to capture first environment information through a camera and record a first capture time;
    the first virtual reality device is further configured to receive a setting instruction for a virtual teaching application and send the setting instruction to the backend server;
    the backend server is configured to obtain teaching environment parameters according to the setting instruction and send the teaching environment parameters back to the first virtual reality device, wherein the teaching environment parameters include a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
    the first virtual reality device is further configured to cache the received teaching environment parameters;
    the first virtual reality device is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
    the first virtual reality device is further configured to receive an operation instruction for a target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and present the teaching content of the target subject based on the character model.
  7. The data processing system according to claim 6, characterized in that:
    the first virtual reality device is specifically further configured to convert, based on active split-screen technology, the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and to fuse the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene;
    wherein the first capture time is used to record the study duration of the target subject.
  8. The data processing system according to claim 6, characterized in that:
    the first virtual reality device is further configured to receive, through speech recognition, the operation instruction for the target subject in the first virtual teaching scene, wherein the operation instruction includes a model selection instruction, a teaching content invocation instruction, and a seat selection instruction;
    the first virtual reality device is further configured to select the corresponding character teaching model according to the model selection instruction, present the teaching content of the target subject according to the teaching content invocation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection instruction and the first capture time.
  9. The data processing system according to claim 6, characterized in that the data processing system further comprises a second virtual reality device;
    the second virtual reality device is configured to send a join request to the backend server;
    the backend server is configured to forward the received join request to the first virtual reality device;
    the first virtual reality device is further configured to generate a confirmation response message corresponding to the join request and send the confirmation response message to the second virtual reality device;
    the second virtual reality device is further configured to upload second environment information to the backend server according to the confirmation response message and record a second capture time;
    the backend server is further configured to fuse the first virtual teaching scene with the second environment information, generating first teaching scene data corresponding to the first virtual reality device and second teaching scene data corresponding to the second virtual reality device;
    the first virtual reality device is further configured to receive the first teaching scene data sent by the backend server and, in combination with the second capture time, update the display of the first virtual teaching scene;
    the second virtual reality device is further configured to receive the second teaching scene data sent by the backend server and, in combination with the second capture time, generate and display a second virtual teaching scene.
  10. The data processing system according to any one of claims 6 to 9, characterized in that:
    the first virtual reality device is specifically further configured to send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal wirelessly connected to the first virtual reality device, so that the user terminal displays the first virtual teaching scene.
PCT/CN2016/108118 2016-11-30 2016-11-30 Virtual-reality-based data processing method and system WO2018098720A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108118 WO2018098720A1 (zh) 2016-11-30 2016-11-30 Virtual-reality-based data processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108118 WO2018098720A1 (zh) 2016-11-30 2016-11-30 Virtual-reality-based data processing method and system

Publications (1)

Publication Number Publication Date
WO2018098720A1 (zh)

Family

ID=62241079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108118 WO2018098720A1 (zh) 2016-11-30 2016-11-30 Virtual-reality-based data processing method and system

Country Status (1)

Country Link
WO (1) WO2018098720A1 (zh)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460482A (zh) * 2018-11-15 2019-03-12 平安科技(深圳)有限公司 Courseware presentation method and apparatus, computer device, and computer-readable storage medium
CN109817052A (zh) * 2019-03-19 2019-05-28 河南理工大学 Virtual-reality-based experimental system and method for measuring coal seam gas content
CN110413112A (zh) * 2019-07-11 2019-11-05 安徽皖新研学教育有限公司 Virtual-reality-based safety experience education system and method
CN110413130A (zh) * 2019-08-15 2019-11-05 泉州师范学院 Motion-capture-based virtual reality sign language learning, testing, and evaluation method
CN110794952A (zh) * 2018-08-01 2020-02-14 北京鑫媒世纪科技发展有限公司 Virtual reality collaborative processing method, apparatus, and system
CN111464577A (zh) * 2019-01-21 2020-07-28 阿里巴巴集团控股有限公司 Device control method and apparatus
CN111538412A (zh) * 2020-04-21 2020-08-14 北京恒华伟业科技股份有限公司 VR-based safety training method and apparatus
CN111540057A (zh) * 2020-04-24 2020-08-14 湖南翰坤实业有限公司 VR scene action display method and system based on servo electric cylinder technology
CN111862346A (zh) * 2020-07-29 2020-10-30 重庆邮电大学 Virtual-reality- and Internet-based experimental teaching method for preparing oxygen from potassium permanganate
CN112286354A (zh) * 2020-10-28 2021-01-29 上海盈赞通信科技有限公司 Virtual-reality-based education and teaching method and system
CN112347507A (zh) * 2020-10-29 2021-02-09 北京市商汤科技开发有限公司 Online data processing method, electronic device, and storage medium
CN112445808A (zh) * 2020-11-18 2021-03-05 傲普(上海)新能源有限公司 Method for updating a monitoring system based on remote sensing data
CN112740280A (zh) * 2018-09-28 2021-04-30 苹果公司 Computationally efficient model selection
CN113507599A (zh) * 2021-07-08 2021-10-15 四川纵横六合科技股份有限公司 Education cloud service platform based on big data analytics
CN113936516A (zh) * 2021-09-30 2022-01-14 国能神东煤炭集团有限责任公司 Virtual-reality-based collaborative drill system
CN114170859A (zh) * 2021-10-22 2022-03-11 青岛虚拟现实研究院有限公司 Virtual-reality-based online teaching system and method
CN114327220A (zh) * 2021-12-24 2022-04-12 软通动力信息技术(集团)股份有限公司 Virtual display system and method
CN114697755A (zh) * 2022-03-31 2022-07-01 北京百度网讯科技有限公司 Virtual scene information interaction method, apparatus, device, and storage medium
CN114779942A (zh) * 2022-05-23 2022-07-22 广州芸荟数字软件有限公司 Immersive virtual reality interaction system, device, and method
CN115035278A (zh) * 2022-06-06 2022-09-09 北京新唐思创教育科技有限公司 Avatar-based teaching method, apparatus, device, and storage medium
CN116506559A (zh) * 2023-04-24 2023-07-28 江苏拓永科技有限公司 Virtual reality panoramic multimedia processing system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573592A (zh) * 2016-01-29 2016-05-11 北京宝贝星球科技有限公司 Intelligent interactive preschool education system and method
CN105654800A (zh) * 2016-04-05 2016-06-08 瞿琛 Simulation teaching system based on immersive virtual reality technology
CN105872575A (zh) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Virtual-reality-based live streaming method and apparatus
CN106023693A (zh) * 2016-05-25 2016-10-12 北京九天翱翔科技有限公司 Education system and method based on virtual reality technology and pattern recognition technology


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16922988

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 16922988

Country of ref document: EP

Kind code of ref document: A1