WO2020078354A1 - Video streaming system, video streaming method and device - Google Patents

Video streaming system, video streaming method and device

Info

Publication number: WO2020078354A1
Authority: WO (WIPO PCT)
Prior art keywords: data, software, positioning, streaming, server
Application number: PCT/CN2019/111315
Other languages: English (en), French (fr)
Inventors: 冉瑞元, 张佳宁, 张道宁
Original assignee: 北京凌宇智控科技有限公司
Priority claimed from: CN201811203090.9A (CN111064985A), CN201811202640.5A (CN111064981B), CN201811203106.6A (CN111065053B)
Priority to: US17/286,387 (US11500455B2)

Classifications

    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G06F 3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • H04L 65/61: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 67/131: Protocols for games, networked simulations or virtual reality
    • H04W 4/02: Services making use of location information
    • H04W 4/029: Location-based management or tracking services

Definitions

  • The invention relates to a video streaming system, and also relates to a corresponding video streaming method and a video streaming device, belonging to the field of virtual reality technology.
  • Video streaming refers to a video playback technology in which a series of video data is compressed and transmitted in segments, so that audio and video can be transmitted over the network in real time for viewing.
  • Once-popular players such as QuickTime Player and Real Player adopted video streaming technology.
  • At present, with the in-depth development of industries such as online gaming and game live-streaming, video streaming has found increasingly wide application.
  • However, existing VR devices offer insufficient support for video streaming and are prone to many problems such as picture delay and picture jitter.
  • the primary technical problem to be solved by the present invention is to provide a video streaming system.
  • Another technical problem to be solved by the present invention is to provide a video streaming method.
  • Another technical problem to be solved by the present invention is to provide a video streaming device.
  • the present invention adopts the following technical solutions:
  • a video streaming system including a terminal and a VR device;
  • an application platform software and a streaming software server end are installed on the terminal;
  • A streaming software client is installed on the VR device; the streaming software client sends pose data to the streaming software server end on the terminal; the streaming software server end sends the pose data to the application platform software, and the application platform software renders the picture.
  • Preferably, the system further includes a positioning and tracking device;
  • the positioning tracking device is used to collect positioning data and send it to the VR device;
  • The streaming software client sends the pose data and the positioning data to the streaming software server end on the terminal; the streaming software server end sends the pose data and the positioning data to the application platform software, and the picture is rendered by the application platform software.
  • Preferably, the streaming software server end includes a control interface and a server driver; when the application platform software is started on the terminal, the server driver is loaded.
  • Preferably, the streaming software client wirelessly sends the pose data and/or positioning data to the streaming software server end.
  • Preferably, the streaming software server end includes a server driver; a positioning prediction unit is located in the server driver and is used to obtain predicted positioning data / predicted pose data based on the positioning data / pose data sent by the VR device.
  • a video streaming method including the following steps:
  • a video streaming device including a processor and a memory, where the processor is used to execute a program for implementing video streaming stored in the memory, so as to implement the foregoing video streaming method.
  • In this way, the VR application software runs only on the terminal, and the VR device is only responsible for displaying the picture.
  • the hardware configuration of the terminal itself is fully utilized for picture processing, and a satisfactory picture can be obtained on the screen of the VR device.
  • On the other hand, the positioning data / pose data used by the application platform software when rendering the picture is predicted; rendering the picture based on the predicted positioning data / predicted pose data can effectively reduce picture jitter and display delay.
  • FIG. 1 is a schematic structural diagram of a video streaming system in the first embodiment of the present invention
  • FIG. 2 is a flowchart of a video streaming method in the first embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a video streaming system in a second embodiment of the present invention.
  • FIG. 4 is a flowchart of a video streaming method in a second embodiment of the present invention.
  • FIG. 5 is a flowchart of a video streaming method in a third embodiment of the present invention.
  • FIG. 6 is a flowchart of the position prediction unit predicting posture data in the third embodiment of the present invention.
  • FIG. 7 is a schematic diagram of data delay in the streaming of the screen of the VR application software to the VR device in the third embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a video streaming system in a fourth embodiment of the present invention.
  • FIG. 9 is a flowchart of a video streaming method in a fourth embodiment of the present invention.
  • FIG. 10 is a flowchart of positioning data predicted by a positioning prediction unit in a fourth embodiment of the present invention.
  • FIG. 11 is a schematic diagram of data delay in streaming video from a VR application software to a VR device in a fourth embodiment of the present invention.
  • The video streaming system in the embodiment of the present invention includes a terminal and a VR device (such as a VR all-in-one machine).
  • The terminal, on which the application platform software and the streaming software server end are installed, is described by taking a PC (personal computer) as an example; it may also be a tablet computer, a smart TV, a smart phone, or another terminal with data processing capability.
  • The application platform software installed on the PC is the Steam VR platform software (or the corresponding app on a smart phone); it may also be another application platform such as the VIVEPORT platform, the HYPEREAL platform, the Antview VR application software platform, Dapeng Assistant, Tencent WEGAME, the OGP application platform, and so on.
  • The VR application software in the application platform software uses an application engine (such as Unreal Engine 4 or Unity 3D) and integrates the SDK provided by the OpenVR data interface, such as the SDK provided by the OpenVR data interface of the Steam platform software, so that the screen of the VR application software can be seen on the PC.
  • the streaming software server can be set to the A end of NOLOHOME software.
  • the server side of the streaming software includes two parts, one is the control interface and the other is the server driver.
  • the server driver is preferably a dll file, but may also be in other implementation forms, such as SDK, API file, and so on.
  • When the application platform software, such as the Steam VR platform software, is started on the PC, the above server driver is loaded accordingly.
  • a streaming software client is installed on the VR device, for example, it can be set as the B end of the NOLOHOME software.
  • VR devices are equipped with various sensors, such as nine-axis sensors, inertial sensors, etc., which can sense attitude actions, that is, pitch, roll, and yaw.
  • the VR device sends its posture data to the streaming software server on the PC through the streaming software client; and to the application platform software through the streaming software server on the PC, so that the application platform software renders real-time images.
  • The VR device may be a VR all-in-one machine; the streaming software client is installed in the system of the VR all-in-one machine, the picture is displayed on the display screen of the VR all-in-one machine, and the sensors are fixedly installed on board the VR all-in-one machine.
  • The VR device may also be a mobile VR device; the streaming software client is installed in the smartphone of the mobile VR device, and the picture may be displayed on the smartphone of the mobile VR device or on the display screen of the mobile VR device. The sensors may be fixed in the casing of the mobile VR device, or the sensors of the smartphone installed in the mobile VR device may be borrowed.
  • The above-mentioned PC and the VR device are connected in a wired or wireless manner.
  • When the wireless method is adopted, it preferably operates in a WLAN (wireless local area network) or 5G communication environment. Because the 5G communication environment features a high data rate and low delay, the actual delay between the PC and the VR device in a 5G communication environment is basically negligible.
  • The core modules that need to be implemented are: the server driver of the streaming software server end, the VR device, and the streaming software client installed on the VR device.
  • the VR device is used to obtain its own attitude data; the streaming software client and server driver are used for data transmission and processing.
  • The video streaming method includes the following steps: starting the streaming software client on the VR device, such as the B end of the NOLOHOME software, and starting the streaming software server end on the PC, such as the A end of the NOLOHOME software.
  • the control interface UI on the server side of the streaming software includes various control buttons, and the streaming software is started through the control buttons to connect the A terminal and the B terminal.
  • the VR device can send data such as attitude and control information to the server driver of the streaming software server on the PC through the streaming software client.
  • the server driver processes the received data and sends it to the application platform software for screen rendering.
  • the server driver sends the rendered screen to the VR device for screen display.
  • The pose data of the VR device is obtained by sensors installed on the VR device, such as a nine-axis sensor, an inertial sensor, a six-axis sensor, a gyroscope, a geomagnetometer, and the like.
  • The pose data of the VR device is transmitted to the streaming software client installed on the VR device, and then sent by the streaming software client to the server driver of the streaming software server end using the UDP protocol.
  • UDP (User Datagram Protocol) is a connectionless transport layer protocol in the Open Systems Interconnection reference model, which provides a simple, unreliable, transaction-oriented message delivery service.
  • After this step, the streaming software server end has obtained the pose data of the VR device.
  • Control information of the VR device can also be obtained and likewise sent to the server driver of the streaming software server end using the UDP protocol through the streaming software client.
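  • The patent does not specify the NOLOHOME wire format, so the sketch below is only a minimal Python illustration of this step: the streaming software client packs a timestamped pose sample into a UDP datagram, and the server driver receives and unpacks it. The field layout, port number, and function names are assumptions made for the example.

```python
import socket
import struct
import time

SERVER_ADDR = ("192.168.1.10", 9000)  # assumed address/port of the streaming software server end

def send_pose(sock: socket.socket, pitch: float, roll: float, yaw: float) -> None:
    """Client side: pack one timestamped pose sample and send it over UDP.

    Assumed wire layout: little-endian double timestamp + three float angles.
    """
    packet = struct.pack("<dfff", time.time(), pitch, roll, yaw)
    sock.sendto(packet, SERVER_ADDR)

def serve_poses() -> None:
    """Server-driver side: receive pose datagrams and unpack them."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("0.0.0.0", 9000))
    while True:
        data, _addr = srv.recvfrom(64)
        ts, pitch, roll, yaw = struct.unpack("<dfff", data)
        # Hand the sample to the rendering path (e.g. via the OpenVR data interface).
        print(f"t={ts:.3f} pitch={pitch:.2f} roll={roll:.2f} yaw={yaw:.2f}")

if __name__ == "__main__":
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(client, 1.0, 0.0, -0.5)
```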
  • S21 Send the acquired pose data of the VR device to the data interface, and then transmit it to the VR application software via the data interface.
  • control information of the VR device acquired by the server driver on the server side of the streaming software is also sent to the VR application software to perform picture rendering.
  • the control information obtained by the streaming software server is sent to the data interface, and then transmitted to the VR application software via the data interface.
  • Based on the obtained pose data and application logic, the VR application software passes the exact content of the picture to be rendered to the application engine and renders the picture.
  • The application engine may be Unreal Engine 4, Unity 3D, or the like.
  • The VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered and render the picture.
  • The data rendered by the application engine is stored in the video memory of the graphics card, such as the video memory of an Nvidia graphics card; the VR application software is notified that the picture has been rendered, the VR application software notifies the data interface OpenVR, and OpenVR notifies the server driver of the streaming software server end of the rendering-completed event.
  • When the server driver of the streaming software server end learns that the picture has been rendered, it finds the corresponding texture data in the video memory through the texture address provided by OpenVR (this is the data of one frame) and encodes the frame into multiple data packets.
  • For encoding, the NvCodec library, a dedicated library for video encoding and decoding provided by Nvidia, is used.
  • When initializing, the encoding format and picture format are given to the NvCodec library in advance.
  • The H.264 standard is used to encode the data.
  • As the picture format, the image in NV_ENC_BUFFER_FORMAT_ABGR format is used.
  • the NvCodec library will encode a frame of pictures into multiple small data packets as required.
  • S32 Send the encoded multiple data packets to the VR device for decoding and display.
  • The server driver of the streaming software server end sends the encoded data packets to the streaming software client installed on the VR device.
  • The streaming software client then passes them to the VR device.
  • After the VR device has received the complete data of a frame, it decodes the received data packets to form a complete image on the VR device and displays it.
  • the method and related hardware for the screen display of the VR device can use any existing method and hardware, and no specific requirements are made here.
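  • The encoding and display steps above amount to: on the server side, split one encoded frame into several UDP-sized packets; on the client side, collect packets until a whole frame has arrived, then decode. The sketch below illustrates one way to do this chunking with a small header carrying a frame id, chunk index, and chunk count; the patent only requires that the NvCodec output be split into multiple small packets, so the header layout and chunk size here are assumptions.

```python
import struct

CHUNK_SIZE = 1400  # assumed payload size that fits comfortably in one UDP datagram

def packetize(frame_id: int, encoded_frame: bytes) -> list[bytes]:
    """Server side: split one H.264-encoded frame into numbered packets."""
    chunks = [encoded_frame[i:i + CHUNK_SIZE]
              for i in range(0, len(encoded_frame), CHUNK_SIZE)]
    return [struct.pack("<IHH", frame_id, index, len(chunks)) + chunk
            for index, chunk in enumerate(chunks)]

class FrameAssembler:
    """Client side: collect packets until a whole frame has arrived."""

    def __init__(self) -> None:
        self.partial: dict[int, dict[int, bytes]] = {}

    def feed(self, packet: bytes) -> bytes | None:
        """Return the complete frame once its last packet arrives, else None."""
        frame_id, index, total = struct.unpack("<IHH", packet[:8])
        parts = self.partial.setdefault(frame_id, {})
        parts[index] = packet[8:]
        if len(parts) == total:  # all chunks of this frame received
            del self.partial[frame_id]
            return b"".join(parts[i] for i in range(total))
        return None  # frame still incomplete
```

  • A real implementation would also discard stale incomplete frames and tolerate packet loss and reordering; those concerns are omitted here for brevity.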
  • In the above process, the streaming software server end first obtains the pose data and control information of the VR device; the obtained pose data and control information are sent to the VR application software for picture rendering; the rendered picture is then obtained and sent to the VR device for display.
  • the streaming software server is installed on the terminal. This method makes the terminal responsible for running the VR application software, and the VR device is only responsible for the screen display. Therefore, picture processing can be performed through the hardware configuration of the terminal itself, and a satisfactory picture can be obtained on the screen of the VR device.
  • the 5G router wirelessly connects the terminal to the VR device, which solves the technical problem of "VR wireless" that plagues many manufacturers.
  • the video streaming system in the second embodiment of the present invention includes a terminal, a VR device, and a location tracking device.
  • the terminal is installed with application platform software and streaming software server; in the embodiment of the present invention, the terminal uses a PC as an example to illustrate, it can also be a tablet computer, smart TV, smart phone and the like with data processing capabilities terminal.
  • the application platform software installed on the PC is Steam VR platform software (the corresponding APP on the smart phone).
  • it can also be other application platforms such as VIVEPORT platform, HYPEREAL platform, Antview VR application software platform, Dapeng assistant, Tencent WEGAME, OGP application platform, etc.
  • The VR application software in the application platform software uses an application engine (such as Unreal Engine 4 or Unity 3D) and has integrated the SDK provided by the data interface, such as the SDK provided by the OpenVR data interface of the Steam platform software, so that the application screen can be seen on the monitor of the PC.
  • the server side of the streaming software may be set as the A end of the NOLOHOME software, for example.
  • the server side of the streaming software includes two parts, one is the control interface and the other is the server driver.
  • the server driver is preferably a dll file, but may also be in other implementation forms, such as SDK, API file, and so on.
  • When the application platform software, such as the Steam VR platform software, is started on the PC, the above server driver is loaded accordingly.
  • a streaming software client is installed on the VR device, for example, it can be set as the B end of the NOLOHOME software.
  • the VR device may be a VR all-in-one machine, and the streaming software client is installed in the system of the VR all-in-one machine, the screen is also displayed on the display screen of the VR all-in-one machine, and the sensor is fixedly mounted on the VR all-in-one machine.
  • The VR device may be a mobile VR device; the streaming software client is installed in the smartphone of the mobile VR device, and the picture can be displayed on the smartphone of the mobile VR device or on the display screen of the mobile VR device. The sensors may be fixed in the housing of the mobile VR device, or the sensors of the smartphone installed in the mobile VR device may be borrowed.
  • The above-mentioned PC and the VR device are connected in a wired or wireless manner.
  • When the wireless method is adopted, it preferably operates in a WLAN (wireless local area network) or 5G communication environment. Because 5G communication features a high data rate and low delay, the actual delay between the PC and the VR device in a 5G communication environment is basically negligible.
  • The positioning and tracking device is used to track the position of the user. For example, it may include a handle held in the user's hand to track the position of the user's hand, and a locator installed on the VR device in a built-in or peripheral manner to track the position of the user's head.
  • The handle may transmit its positioning data to the locator, and the locator then transmits both its own positioning data and the handle's positioning data to the VR device; alternatively, both the handle and the locator transmit their positioning data directly to the VR device.
  • The VR device obtains the positioning data collected by the positioning and tracking device, as well as the pose data of the positioning and tracking device and its own pose data, and then uses the streaming software client to send the positioning data and pose data to the streaming software server end on the terminal using the UDP protocol.
  • the streaming software server on the terminal sends the positioning data and attitude data to the application platform software, so that the application platform software renders a real-time picture.
  • the system architecture shown in FIG. 3 is used to achieve this requirement.
  • The core modules that need to be implemented are: the server driver of the streaming software server end, the VR device, the streaming software client installed on the VR device, and the positioning and tracking device.
  • the positioning and tracking device is used to collect positioning data and posture data of the user's body;
  • the VR device is used to obtain positioning data and posture data and transmit the data to the server driver;
  • The streaming software client and the server driver are used to perform data transmission and processing.
  • The video streaming method includes the following steps: starting the streaming software client on the VR device, such as the B end of the NOLOHOME software, and starting the streaming software server end on the PC, such as the A end of the NOLOHOME software.
  • the control interface UI on the server side of the streaming software includes various control buttons, and the streaming software is started through the control buttons to connect the A terminal and the B terminal.
  • the streaming software client on the VR device can send posture data, control information, positioning data, etc. to the server driver of the PC's streaming software server.
  • The server driver processes the received data and sends it to the application platform software for picture rendering; the server driver then sends the rendered picture to the VR device for display.
  • Obtaining pose data and positioning data includes the following steps:
  • the positioning tracking device may include a locator installed on the VR device, a handle held on the user's hand, etc. By obtaining positioning data of the locator and / or handle, the positioning data of the user's head and / or hand can be obtained.
  • the user's positioning data can be obtained through the three-dimensional space positioning method and system with the patent application number 201610917518.0, or can be obtained using other existing known three-dimensional space positioning methods and systems.
  • The pose data of the user's head can be obtained by a sensor installed on the VR device or on the locator of the VR device, such as a nine-axis sensor, an inertial sensor, a six-axis sensor, a gyroscope, a geomagnetometer, and the like.
  • the posture data of other parts of the user, such as the hand, is obtained by sensors installed on the handle of the positioning and tracking device.
  • S12 Send the collected attitude data and positioning data to the VR device.
  • VR devices can read attitude data and positioning data through wired methods such as OTG data lines, and can also read attitude data and positioning data through wireless methods such as Bluetooth and Wifi.
  • the data is directly sent to the system of the VR all-in-one; for the mobile VR device, the data can be sent to the smartphone installed in the mobile VR device housing.
  • The VR device transmits the acquired pose data and positioning data to the streaming software client installed on the VR device, which then sends them to the server driver of the streaming software server end installed on the terminal over 5G using the UDP protocol. After this step, the streaming software server end has obtained the positioning data and pose data.
  • the streaming software server can also obtain control information, and the control information can also be sent to the server driver of the streaming software server by using the UDP protocol through the streaming software client.
  • the control information can be from a VR device or a location tracking device.
  • S21 Send the acquired posture data and positioning data to the data interface, and then transmit it to the VR application software via the data interface.
  • control information obtained by the server driver on the server side of the streaming software is also sent to the VR application software.
  • the control information obtained by the server driver of the streaming software server is sent to the data interface OpenVR, and then transmitted to the VR application software via the data interface OpenVR.
  • the application engine renders the screen.
  • Based on the obtained positioning data, pose data, and application logic, the VR application software passes the exact content to be rendered to the application engine and renders the picture.
  • The application engine is, for example, Unreal Engine 4 or Unity 3D.
  • The VR application software also passes the obtained control information to the application engine to determine the exact content of the picture and render it.
  • the data rendered by the application engine is stored in the video memory of the graphics card, and the VR application software is notified that the picture has been rendered.
  • The VR application software notifies the data interface, and the data interface notifies the server driver of the streaming software of the rendering-completed event.
  • Through the texture address provided by the data interface, the server driver finds the corresponding texture data in the video memory (the data of one frame) and encodes the frame into multiple data packets.
  • For encoding, the NvCodec library provided by Nvidia is used. When initializing, the encoding format and picture format are given to the NvCodec library in advance.
  • The H.264 standard is used to encode the data.
  • As the picture format, the image in NV_ENC_BUFFER_FORMAT_ABGR format is used.
  • the NvCodec library will encode a frame of pictures into multiple small data packets as required.
  • S32 Send the encoded multiple data packets to the VR device for decoding and display.
  • The server driver of the streaming software server end sends the encoded data packets to the streaming software client installed on the VR device.
  • The streaming software client then passes them to the VR device.
  • After the VR device has received the complete data of a frame, it decodes the received data packets to form a complete image on the VR device and displays it.
  • the method and related hardware for the screen display of the VR device can use any existing method and hardware, and no specific requirements are made here.
  • In the above process, the streaming software server end first obtains the positioning data and pose data; the obtained pose data and positioning data are sent to the VR application software for picture rendering; the rendered picture is then obtained and sent to the VR device for display.
  • the streaming software server is installed on the terminal, so that it is only the terminal that is responsible for running the VR application software, and the VR device itself is only responsible for the screen display.
  • the 5G router wirelessly connects the terminal to the VR device, which solves the technical problem of "VR wireless" that plagues many manufacturers.
  • The video streaming system used in the third embodiment is the same as that in the first embodiment and includes a terminal and a VR device (such as a VR all-in-one machine), which will not be repeated here.
  • In the video streaming system, if data is fed into the application platform software without restriction, that is, whenever a piece of data is received it is directly handed to the application platform software, then because each device runs at a different frequency (for example, the VR device sends data at a frequency X while the application platform software samples data at a frequency Y, with X not equal to Y) and each link has a different delay, problems such as picture delay and picture jitter eventually arise.
  • a positioning prediction unit is installed on the terminal.
  • the positioning prediction unit is installed in the server driver of the streaming software server in the form of software.
  • the positioning prediction unit is used to predict the posture data required by the application platform software for image rendering based on the posture data of the VR device, and the application platform software renders the real-time image according to the prediction data.
  • Obtaining the predicted pose data through the positioning prediction unit can accurately predict the pose data of the application platform software at the next moment, thereby reducing picture jitter and display delay.
  • The predicted pose data for the next moment is rendered in the VR application software, and the terminal transmits the rendered picture, through the streaming software server end and using the UDP protocol, to the streaming software client in the VR device for display. This process will be explained in detail later.
  • This requirement is implemented according to the system architecture shown in FIG. 1. The core modules that need to be implemented are: the server driver installed in the streaming software server end on the terminal, the VR device, the streaming software client installed on the VR device, and the positioning prediction unit.
  • The VR device is used to obtain pose data and transmit the data to the server driver.
  • The streaming software client and the server driver are used for data transmission and processing.
  • the positioning prediction unit is used to predict the posture data required by the application platform software for screen rendering based on the posture data sent by the VR device.
  • the location prediction unit is located in the server driver on the server side of the streaming software.
  • FIG. 5 is a flowchart of the video streaming method provided by the third embodiment. The implementation process of video streaming is specifically described below.
  • the attitude data of the VR device is obtained by sensors installed on the VR device, such as a nine-axis sensor, an inertial sensor, a six-axis sensor, a gyroscope, and a geomagnetometer.
  • the posture data of other parts of the user, such as the hand, is obtained by sensors installed on the handle of the positioning and tracking device.
  • The pose data of the VR device is transmitted to the streaming software client installed on the VR device, and then sent by the streaming software client to the server driver of the streaming software server end installed on the terminal using the UDP protocol; the server driver thereby obtains the pose data of the VR device.
  • the server driver includes a positioning prediction unit, and the positioning prediction unit may be installed in the server driver on the server side of the streaming software in the form of software. As shown in FIG. 6, the positioning prediction unit obtains predicted pose data according to the acquired pose data, which specifically includes the following steps:
  • S21 Obtain a first timestamp and a second timestamp, where the first timestamp is the moment when the streaming software server end receives the i-th pose data, and the second timestamp is the moment when the streaming software server end receives the (i+1)-th pose data.
  • The frequency at which the VR device sends pose data is X Hz, and the frequency at which the application platform software samples data when rendering is Y Hz.
  • The data delay M is the total delay from the moment the action occurs until the server driver receives the pose data.
  • The data delay M can be obtained by the following formula: M = T0 + (t2 - t1) + ΔT, where:
  • T0 is the delay from when the action is generated until the sensor acquires the action; for example, the delay from the user's head movement until the sensor acquires the pose data of that head movement;
  • t1 is the moment when the sensor acquires the pose data;
  • t2 is the moment when the pose data is sent to the streaming software server end;
  • ΔT is the network delay.
  • Figure 7 shows all the data delays involved in the process from action generation to server driver getting data.
  • During the entire video streaming process, the data delay ΔT caused by the network delay is fixed and needs to be calculated only once.
  • the process of obtaining the data delay caused by the network delay includes the following steps:
  • the server driver of the streaming software server sends request data to the VR device.
  • the server driver on the server side of the streaming software receives the reply message sent by the VR device.
  • The network delay is obtained using the following formula: ΔT = (T_receive - T_send) / 2, where T_send is the moment the server driver sends the request data and T_receive is the moment it receives the reply message.
  • That is, the network delay ΔT can be obtained from the request and response times between the server driver and the VR device.
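  • A minimal sketch of the measurement just described: the server driver timestamps a request, the VR device echoes a reply, and half the round-trip time is taken as the one-way delay. The symbols follow the reconstruction above; the assumption of a symmetric link and the socket details are illustrative, not part of the patent.

```python
import socket
import time

def measure_network_delay(sock: socket.socket, device_addr: tuple[str, int]) -> float:
    """Estimate the one-way network delay dT = (t_receive - t_send) / 2."""
    sock.settimeout(1.0)
    t_send = time.monotonic()
    sock.sendto(b"ping", device_addr)   # request data sent to the VR device
    sock.recvfrom(16)                   # reply message echoed by the VR device
    t_receive = time.monotonic()
    return (t_receive - t_send) / 2.0   # assumes a symmetric up/down link
```

  • Averaging several such measurements would smooth out network jitter; since the patent treats ΔT as fixed for the whole session, a single measurement suffices.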
  • The frequency of data transmission by the VR device is X, the frequency of data sampling by the application platform software is Y, and X is not equal to Y.
  • The predicted pose data at the third timestamp is obtained by linear extrapolation using the following formula:
  • V_j′ = V_{i+1} + (V_{i+1} - V_i) × (T_j′ + M - T_{i+1}) / (T_{i+1} - T_i), where:
  • V_j′ is the predicted pose data at time T_j′;
  • T_i is the first timestamp;
  • V_i is the pose data at the first timestamp;
  • T_{i+1} is the second timestamp;
  • V_{i+1} is the pose data at the second timestamp;
  • T_j′ is the third timestamp;
  • M is the data delay.
  • In this way, the pose data at time T_j′ can be predicted more accurately, thereby reducing picture jitter and display delay.
  • Then, the predicted pose data at time T_j′ is transmitted to the application platform software to render the picture, and the rendered picture is transmitted to the VR device for display.
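  • Combining the delay formula and the extrapolation formula, the positioning prediction unit reduces to a few lines of arithmetic. The sketch below treats the pose as one scalar channel (pitch, roll, and yaw can each be predicted this way); the linear-extrapolation form of the formula is reconstructed from the variable definitions above, so treat this as an illustration rather than the patent's exact implementation.

```python
def predict_pose(v_i: float, t_i: float,
                 v_i1: float, t_i1: float,
                 t_j: float, m: float) -> float:
    """Predict the pose V_j' at the platform's sampling time T_j'.

    v_i, v_i1 : pose samples received at timestamps t_i and t_i1
    t_j       : third timestamp, the moment the platform software samples
    m         : total data delay M = T0 + (t2 - t1) + dT
    """
    velocity = (v_i1 - v_i) / (t_i1 - t_i)      # rate of change between the two samples
    return v_i1 + velocity * (t_j + m - t_i1)   # extrapolate past the newest sample

# Example: yaw was 10.0 deg at t=0.000 s and 10.5 deg at t=0.010 s; the
# platform samples at t=0.015 s and the measured delay M is 0.004 s.
predicted_yaw = predict_pose(10.0, 0.000, 10.5, 0.010, 0.015, 0.004)
print(predicted_yaw)  # 10.95: 9 ms beyond the newest sample at 50 deg/s
```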
  • the application platform software performs screen rendering based on the predicted pose data, and sends the rendered screen to the VR device for screen display, specifically including the following steps:
  • the predicted posture data is sent to the data interface, and then transmitted to the VR application software in the application platform software via the data interface.
  • the predicted attitude data obtained by the positioning prediction unit in the server driver of the streaming software server is passed to the data interface.
  • the VR application software in the application platform software SteamVR uses the application engine and integrates the SDK provided by the data interface OpenVR.
  • The data interface OpenVR transfers the pose data to the VR application software.
  • S32 Determine the content of the screen rendered by the application engine according to the predicted pose data and application logic obtained by the VR application software, and perform the rendering of the screen.
  • Based on the pose data and application logic obtained by the VR application software, the exact content to be rendered is passed to the application engine and the picture is rendered.
  • The application engine can be Unreal Engine 4, Unity 3D, or the like.
  • control information obtained by the server driver on the server side of the streaming software is also sent to the VR application software to perform picture rendering.
  • the control information obtained by the streaming software server is sent to the data interface OpenVR, and then transmitted to the VR application software via the data interface OpenVR.
  • The VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered and render the picture.
  • the data rendered by the application engine is stored in the video memory of the graphics card, for example, the video memory of the Nvidia graphics card, and the VR application software is notified that the picture has been rendered.
  • OpenVR notifies the server driver of the streaming software server end of the rendering-completed event.
  • When the server driver of the streaming software server end learns that the picture has been rendered, it finds the corresponding texture data in the video memory through the texture address provided by the data interface OpenVR (this is the data of one frame) and encodes the frame into multiple data packets.
  • For encoding, the NvCodec library provided by Nvidia is used. When initializing, the encoding format and picture format are given to the NvCodec library in advance.
  • The H.264 standard is used to encode the data.
  • As the picture format, the image in NV_ENC_BUFFER_FORMAT_ABGR format is used.
  • the NvCodec library will encode a frame of pictures into multiple small data packets as required.
  • S42 Send the encoded multiple data packets to the VR device for decoding and display.
  • The server driver of the streaming software server end sends the encoded data packets to the streaming software client installed on the VR device.
  • The streaming software client then passes them to the VR device.
  • After the VR device has received the complete data of a frame, it decodes the received data packets to form a complete image on the VR device and displays it.
  • the method and related hardware for the screen display of the VR device can use any existing method and hardware, and no specific requirements are made here.
  • the streaming software server installed on the terminal can obtain the control information sent by the VR device, and the control information may come from the VR device, or may come from a controller that cooperates with the VR device.
  • the streaming software server sends the predicted pose data to the application platform software for screen rendering, and also sends the control information to the application platform software for screen rendering.
  • In summary, the video streaming method calculates the data delay of the pose data received by the streaming software server end and, combining the pose data sent by the VR device, predicts the pose data used by the application platform software when rendering the picture.
  • The picture is rendered according to the predicted pose data, and the rendered picture is sent to the VR device for display.
  • This method can predict pose data more accurately, thereby reducing picture jitter and display delay.
  • the video streaming system in the fourth embodiment of the present invention includes a terminal, a VR device, and a location tracking device.
  • the application platform software and streaming software server are installed on the terminal.
  • the terminal is described by taking a PC as an example, and may also be a tablet computer, a smart TV, a smart phone, or the like with data processing capabilities.
  • the application platform software installed on the PC is Steam VR platform software (the corresponding APP on the smart phone).
  • It can also be another game platform such as the VIVEPORT platform, the HYPEREAL platform, the Antview VR game platform, Dapeng Assistant, Tencent WEGAME, the OGP game platform, and so on.
  • The VR application software in the application platform software uses an application engine (such as Unreal Engine 4 or Unity 3D) and integrates the SDK provided by the OpenVR data interface, such as the SDK provided by the OpenVR data interface of the Steam platform software, so that the application screen can be seen on the PC.
  • the streaming software server can be set to the A end of NOLOHOME software.
  • the server side of the streaming software includes two parts, one is the control interface and the other is the server driver.
  • the server driver is preferably a dll file, but may also be in other implementation forms, such as SDK, API file, and so on.
  • When the application platform software, such as the Steam VR platform software, is started on the PC, the above server driver is loaded accordingly.
  • a streaming software client is installed on the VR device, for example, it can be set as the B end of the NOLOHOME software.
  • the VR device may be a VR all-in-one machine, and the streaming software client is installed in the system of the VR all-in-one machine, the screen is also displayed on the display screen of the VR all-in-one machine, and the sensor is fixedly mounted on the VR all-in-one machine.
  • The VR device may be a mobile VR device; the streaming software client is installed in the smartphone of the mobile VR device, and the picture can be displayed on the smartphone of the mobile VR device or on the display screen of the mobile VR device.
  • the sensor can be fixed in the housing of the mobile VR device, or it can borrow the sensor of the smartphone installed in the mobile VR device.
  • The above-mentioned PC and the VR device are connected in a wired or wireless manner.
  • When the wireless method is adopted, it preferably operates in a WLAN (wireless local area network) or 5G communication environment. Because the 5G communication environment features a high data rate and low delay, the actual delay between the PC and the VR device in a 5G communication environment is basically negligible.
  • the positioning tracking device may include a handle and a locator installed on the VR device.
  • the handle is held on the user's hand.
  • the handle can transmit positioning data to the locator.
  • The locator transmits the positioning data of the locator and the positioning data of the handle to the VR device.
  • the locator can be installed on the VR device in a built-in or peripheral manner.
  • the positioning tracking device and the VR device are connected through a USB interface, and are used to collect positioning data of the user's head and / or hand.
  • the VR device obtains the positioning data collected by the positioning tracking device, and then sends the positioning data to the streaming software server on the PC using the UDP protocol.
  • the streaming software server on the PC sends the positioning data to the application platform software, so that the application platform software renders real-time images.
  • If data is sent to the application platform software without restriction, that is, whenever a piece of data is received it is directly handed to the application platform software, then because each device runs at a different frequency (for example, the VR device transmits data at a frequency X while the application platform software samples data at a frequency Y, with X not equal to Y) and each link has a different delay, problems such as picture delay and picture jitter eventually arise.
  • Therefore, the data must be reasonably predicted so that the rendered picture is more stable and smooth. To this end, in the video streaming system of the embodiment of the present invention, a positioning prediction unit is installed on the terminal; the positioning prediction unit is set, in the form of software, in the server driver of the streaming software server end.
  • the positioning prediction unit is used to predict the positioning data required by the application platform software for image rendering based on the positioning data collected by the positioning tracking device, and the application platform software renders a real-time image according to the prediction data. Obtaining predicted positioning data through the positioning prediction unit can more accurately predict the positioning data of the application platform software at the next moment, thereby reducing picture jitter and display delay.
  • The terminal then transmits the rendered picture from the streaming software server end, using the UDP protocol, to the streaming software client on the VR device for display. This process will be explained in detail later.
  • The VR application software in the application platform software uses an application engine (such as Unreal Engine 4 or Unity 3D) and has integrated the SDK provided by the data interface, such as the SDK of OpenVR, so that the screen of the VR application software can be seen on the monitor.
  • the system architecture shown in FIG. 8 is used to achieve this requirement.
  • the core modules that need to be implemented are: The server driver installed on the server side of the streaming software on the terminal, the VR device, the streaming software client installed on the VR device, the positioning prediction unit, and the positioning tracking device.
  • the positioning tracking device is used to collect the positioning data of the head and / or hands;
  • the VR device is used to obtain the collected positioning data and transmit the data to the server driver;
  • The streaming software client and the server driver are used to perform data transmission and processing.
  • the positioning prediction unit is used to predict the positioning data required for the application platform software to render the screen based on the positioning data sent by the VR device.
  • the location prediction unit is located in the server driver on the server side of the streaming software.
  • FIG. 9 is a flowchart of the video streaming method provided by the fourth embodiment. The following describes the entire process of video streaming in detail.
  • obtaining positioning data collected by the positioning and tracking device specifically includes the following steps:
  • the positioning tracking device may include a locator installed on the VR device, a handle held on the user's hand, etc. By obtaining positioning data of the locator and / or handle, the positioning data of the user's head and / or hand can be obtained.
  • The locator can be installed on the VR device in a built-in or peripheral manner. When built in, the locator can be integrated into the VR device during the manufacturing process; when peripheral, it can be externally connected to the VR device through wireless or wired means.
  • The positioning data of the user can be obtained through the three-dimensional space positioning method and system of patent application number 201610917518.0, or using other existing known three-dimensional space positioning methods and systems, such as the multi-camera multi-marker positioning method, the SLAM method, and the like.
  • S12 Send the positioning data collected by the positioning and tracking device to the VR device.
  • VR devices can read positioning data through wired methods such as OTG data lines, and can also read positioning data through wireless methods such as Bluetooth and Wifi.
  • the data is directly sent to the system of the VR all-in-one; for the mobile VR device, the data can be sent to the smartphone installed in the mobile VR device housing.
  • S13 Send the positioning data acquired by the VR device to the streaming software server using the UDP protocol.
  • The VR device transmits the obtained positioning data to the streaming software client installed on the VR device, which then sends it to the server driver of the streaming software server end installed on the terminal using the UDP protocol. After this step, the streaming software server end has obtained the positioning data.
  • the streaming software server can also obtain control information, and the control information can also be sent to the server driver of the streaming software server by using the UDP protocol through the streaming software client.
  • the control information can be from a VR device or a location tracking device.
  • the server driver includes a positioning prediction unit, and the positioning prediction unit may be installed in the server driver on the server side of the streaming software in the form of software. As shown in FIG. 10, the positioning prediction unit obtains predicted positioning data according to the obtained positioning data, which specifically includes the following steps:
  • S21 Obtain a first timestamp and a second timestamp, where the first timestamp is the moment when the streaming software server end receives the i-th positioning data, and the second timestamp is the moment when the streaming software server end receives the (i+1)-th positioning data.
  • The frequency at which the VR device sends positioning data is X Hz, and the frequency at which the application platform software samples data when rendering is Y Hz.
  • The data delay M is the total delay from the moment the action occurs until the server driver receives the positioning data.
  • The data delay M can be obtained by the following formula: M = T0 + (t2 - t1) + ΔT, where:
  • T0 is the delay from the action until the sensor acquires the action
  • t1 is the time when the sensor acquires the positioning data
  • t2 is the time when the positioning data is sent to the streaming software server
  • ΔT is the network delay.
  • Figure 11 shows all the data delays involved in the process from the action being generated until the server driver obtains the data.
  • The data delay ΔT caused by the network delay during the entire video streaming process is fixed and needs to be calculated only once.
  • the process of obtaining the data delay caused by the network delay includes the following steps:
  • The server driver of the streaming software server end sends request data to the VR device or the positioning and tracking device.
  • the server driver on the server side of the streaming software receives the reply message sent by the VR device or the location tracking device.
  • The network delay ΔT can be obtained from the request and response times between the server driver and the VR device or the positioning and tracking device, as ΔT = (T_receive - T_send) / 2.
  • With ΔT known, the data delay is M = T0 + (t2 - t1) + ΔT.
  • The frequency of data transmission by the VR device is X, the frequency of data sampling by the application platform software is Y, and X is not equal to Y.
  • After acquiring the i-th and (i+1)-th positioning data sent by the VR device to the streaming software server end, together with the corresponding first timestamp T_i and second timestamp T_{i+1}, the positioning prediction unit obtains the third timestamp T_j′, which is the moment at which the application platform software samples data from the streaming software server end.
  • The predicted positioning data at the third timestamp is obtained by linear extrapolation using the following formula:
  • V_j′ = V_{i+1} + (V_{i+1} - V_i) × (T_j′ + M - T_{i+1}) / (T_{i+1} - T_i), where:
  • V_j′ is the predicted positioning data at time T_j′;
  • T_i is the first timestamp;
  • V_i is the positioning data at the first timestamp;
  • T_{i+1} is the second timestamp;
  • V_{i+1} is the positioning data at the second timestamp;
  • T_j′ is the third timestamp;
  • M is the data delay.
  • In this way, the positioning data at time T_j′ can be predicted more accurately, thereby reducing picture jitter and display delay.
  • Then, the predicted positioning data at time T_j′ is transmitted to the application platform software to render the picture, and the rendered picture is transmitted to the VR device for display.
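  • For positioning data the predicted quantity is a 3-D position rather than a scalar. Applying the same formula to each coordinate independently is a natural reading, sketched below; the component-wise treatment is an assumption, since the patent states the formula only once.

```python
def predict_position(v_i, t_i, v_i1, t_i1, t_j, m):
    """Predict the 3-D position at sampling time T_j', one coordinate at a time.

    v_i and v_i1 are (x, y, z) tuples received at timestamps t_i and t_i1;
    m is the data delay M measured for the positioning path.
    """
    dt = t_i1 - t_i
    lead = t_j + m - t_i1  # how far past the newest sample to extrapolate
    return tuple(b + (b - a) / dt * lead for a, b in zip(v_i, v_i1))

# Example: the head moved from (0.00, 1.60, 0.00) m to (0.01, 1.60, 0.00) m
# over 10 ms; predict its position 9 ms past the newest sample.
print(predict_position((0.00, 1.60, 0.00), 0.000,
                       (0.01, 1.60, 0.00), 0.010, 0.015, 0.004))
```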
  • the application platform software performs screen rendering according to the predicted positioning data, and sends the rendered screen to the VR device for screen display, which specifically includes the following steps:
  • S31 Send the predicted positioning data to the data interface, and then transmit it to the VR application software in the application platform software via the data interface.
  • the predicted positioning data obtained by the positioning prediction unit in the server driver of the streaming software server is transmitted to the data interface.
  • the VR application software in the application platform software SteamVR uses the application engine and integrates the SDK provided by the data interface OpenVR.
  • The data interface OpenVR transmits the positioning data to the VR application software.
  • S32 Determine the content of the screen rendered by the application engine according to the predicted positioning data and application logic obtained by the VR application software, and perform the rendering of the screen.
  • Based on the positioning data and application logic obtained by the VR application software, the exact content to be rendered is passed to the application engine and the picture is rendered.
  • The application engine is, for example, Unreal Engine 4 or Unity 3D.
  • control information obtained by the server driver on the server side of the streaming software is also sent to the VR application software to perform picture rendering.
  • the control information obtained by the streaming software server is sent to the data interface OpenVR, and then transmitted to the VR application software via the data interface OpenVR.
  • The VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered and render the picture.
  • The application engine stores the rendered data in the video memory of the graphics card, such as the video memory of an Nvidia graphics card, and notifies the VR application software that the picture has been rendered; the VR application software notifies the data interface OpenVR, and OpenVR notifies the server driver of the streaming software server end of the rendering-completed event.
  • When the server driver of the streaming software server end learns that the picture has been rendered, it finds the corresponding texture data in the video memory through the texture address provided by the data interface OpenVR (this is the data of one frame) and encodes the frame into multiple data packets.
  • For encoding, the NvCodec library, a dedicated library for video encoding and decoding provided by Nvidia, is used.
  • When initializing, the encoding format and picture format are given to the NvCodec library in advance.
  • The H.264 standard is used to encode the data.
  • As the picture format, the image in NV_ENC_BUFFER_FORMAT_ABGR format is used.
  • the NvCodec library will encode a frame of pictures into multiple small data packets as required.
  • S42 Send the encoded multiple data packets to the VR device for decoding and display.
  • The server driver of the streaming software server end sends the encoded data packets to the streaming software client installed on the VR device.
  • The streaming software client then passes them to the VR device.
  • After the VR device has received the complete data of a frame, it decodes the received data packets to form a complete image on the VR device and displays it.
  • the method and related hardware for the screen display of the VR device can use any existing method and hardware, and no specific requirements are made here.
The streaming-software server side installed on the terminal can also obtain the control information sent by the VR device.
The control information may come from the VR device or from the positioning and tracking device. While sending the predicted positioning information to the application platform software for picture rendering, the streaming-software server side also sends the control information to the application platform software for picture rendering.
The video streaming method in the above embodiment calculates the data delay with which the streaming-software server side receives the positioning data and, based on the positioning data collected by the positioning and tracking device, predicts the positioning data at the moment the application platform software renders the picture; the picture is rendered according to the predicted data and sent to the VR device for display.
This method predicts the positioning data fairly accurately, thereby reducing picture jitter and display delay.
In addition, an embodiment of the present invention also provides a video streaming apparatus.
The apparatus includes a processor and a memory; the processor is used to execute a video-streaming program stored in the memory to implement the video streaming method described above.
The memory here stores one or more programs that implement video streaming.
The memory may include volatile memory, such as random-access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state drive.
The memory may also include a combination of the aforementioned types of memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention discloses a video streaming system, together with a corresponding video streaming method and video streaming apparatus. The video streaming system includes a terminal and a VR device. Application platform software and a streaming-software server side are installed on the terminal; a streaming-software client is installed on the VR device. The streaming-software client sends attitude data to the streaming-software server side on the terminal, and the streaming-software server side sends the attitude data to the application platform software, which renders the picture. The present invention can use the hardware configuration of the terminal itself for picture processing, so that a satisfactory picture is obtained on the screen of the VR device. Furthermore, rendering the picture according to predicted positioning data / predicted attitude data effectively reduces picture jitter and display delay.

Description

Video streaming system, video streaming method and apparatus
Technical Field
The present invention relates to a video streaming system, and to a corresponding video streaming method and video streaming apparatus, belonging to the technical field of virtual reality.
Background
Video streaming is a video playback technology in which a sequence of video data is compressed and then transmitted in segments, so that audio and video are delivered over a network in real time for viewing. Once-popular players such as QuickTime Player and Real Player adopted video streaming technology. At present, with the continuing development of online gaming, game live-streaming and related industries, video streaming is finding ever wider application. However, existing VR devices support video streaming poorly, and problems such as picture delay and picture jitter readily occur.
Summary of the Invention
In view of the deficiencies of the prior art, the primary technical problem to be solved by the present invention is to provide a video streaming system.
Another technical problem to be solved by the present invention is to provide a video streaming method.
Yet another technical problem to be solved by the present invention is to provide a video streaming apparatus.
To achieve the above objectives, the present invention adopts the following technical solutions:
According to a first aspect of the embodiments of the present invention, a video streaming system is provided, including a terminal and a VR device;
wherein application platform software and a streaming-software server side are installed on the terminal;
a streaming-software client is installed on the VR device, and the streaming-software client sends attitude data to the streaming-software server side on the terminal; the streaming-software server side sends the attitude data to the application platform software, and the application platform software renders the picture.
Preferably, the system further includes a positioning and tracking device;
the positioning and tracking device is used to collect positioning data and send it to the VR device;
the streaming-software client sends the attitude data and the positioning data to the streaming-software server side on the terminal; the streaming-software server side sends the attitude data and the positioning data to the application platform software, and the application platform software renders the picture.
Preferably, the streaming-software server side includes a control interface and a server driver; when the application platform software is started on the terminal, the server driver is loaded.
Preferably, the streaming-software client sends the attitude data and/or positioning data to the streaming-software server side wirelessly.
Preferably, the streaming-software server side includes a server driver; a positioning prediction unit located in the server driver is used to obtain predicted positioning data / predicted attitude data from the positioning data / attitude data sent by the VR device.
According to a second aspect of the embodiments of the present invention, a video streaming method is provided, including the following steps:
obtaining attitude data of a VR device;
sending the obtained attitude data to VR application software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
According to a third aspect of the embodiments of the present invention, a video streaming method is provided, including the following steps:
obtaining attitude data and positioning data;
sending the obtained attitude data and positioning data to VR application software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
According to a fourth aspect of the embodiments of the present invention, a video streaming method is provided, including the following steps:
obtaining attitude data of a VR device;
obtaining predicted attitude data according to the obtained attitude data;
sending the predicted attitude data to application platform software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
According to a fifth aspect of the embodiments of the present invention, a video streaming method is provided, including the following steps:
obtaining positioning data collected by a positioning and tracking device;
obtaining predicted positioning data according to the obtained positioning data;
sending the predicted positioning data to application platform software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
According to a sixth aspect of the embodiments of the present invention, a video streaming apparatus is provided, including a processor and a memory, the processor being used to execute a video-streaming program stored in the memory to implement the above video streaming method.
With the present invention, on the one hand, the VR application software can run only on the terminal, while the VR device is responsible only for picture display. The hardware configuration of the terminal itself is thus fully used for picture processing, and a satisfactory picture can be obtained on the screen of the VR device. On the other hand, by calculating the data delay of the positioning data / attitude data received by the streaming-software server side and combining it with the positioning data / attitude data sent by the VR device, the positioning data / attitude data at the moment the application platform software renders the picture is predicted, and the picture is rendered according to the predicted positioning data / predicted attitude data, which effectively reduces picture jitter and display delay.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of the video streaming system in the first embodiment of the present invention;
Fig. 2 is a flowchart of the video streaming method in the first embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the video streaming system in the second embodiment of the present invention;
Fig. 4 is a flowchart of the video streaming method in the second embodiment of the present invention;
Fig. 5 is a flowchart of the video streaming method in the third embodiment of the present invention;
Fig. 6 is a flowchart of the positioning prediction unit predicting attitude data in the third embodiment of the present invention;
Fig. 7 is a schematic diagram of the data delay incurred when the picture of the VR application software is streamed to the VR device in the third embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the video streaming system in the fourth embodiment of the present invention;
Fig. 9 is a flowchart of the video streaming method in the fourth embodiment of the present invention;
Fig. 10 is a flowchart of the positioning prediction unit predicting positioning data in the fourth embodiment of the present invention;
Fig. 11 is a schematic diagram of the data delay incurred when the picture of the VR application software is streamed to the VR device in the fourth embodiment of the present invention.
Detailed Description of the Embodiments
The technical content of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
First Embodiment
As shown in Fig. 1, the video streaming system in this embodiment of the present invention includes a terminal and a VR device (for example an all-in-one VR headset). Application platform software and the server side of the streaming software are installed on the terminal. In the embodiments of the present invention, a PC (personal computer) is taken as the example terminal; the terminal may also be a tablet computer, smart TV, smartphone or similar terminal with data-processing capability. Illustratively, the application platform software installed on the PC is the Steam VR platform software (the corresponding APP on a smartphone); it may also be another application platform, such as the VIVEPORT platform, the HYPEREAL platform, the ANTVR application software platform, DeePoon Assistant, Tencent WEGAME, or the OGP application platform. The VR application software in the application platform software uses an application engine (for example Unreal Engine 4 or Universal 3D) and integrates the SDK provided by the data interface OpenVR, for example the SDK provided by the OpenVR data interface of the Steam VR platform software, so that the picture of the VR application software can be seen on the PC's display. The streaming-software server side can be configured as the A side of the NOLOHOME software.
The streaming-software server side consists of two parts: a control interface and a server driver. The server driver is preferably a dll file, but may also take other forms, such as an SDK or API file. When the application platform software, for example the Steam VR platform software, is started on the PC, the server driver is loaded accordingly.
A streaming-software client is installed on the VR device; it can for example be configured as the B side of the NOLOHOME software. The VR device carries various sensors, such as nine-axis sensors and inertial sensors, and can sense attitude motion, i.e. pitch, roll and yaw. The VR device sends its attitude data through the streaming-software client to the streaming-software server side on the PC, which sends it to the application platform software so that the application platform software renders the picture in real time. In the embodiment shown in Fig. 1, the VR device may be an all-in-one VR headset: the streaming-software client is installed in the headset's system, the picture is displayed on the headset's screen, and the sensors are fixed in the headset. In other embodiments, the VR device may be a mobile VR device: the streaming-software client is installed in the smartphone of the mobile VR device, the picture may be displayed on that smartphone or on the display of the mobile VR device, and the sensors may be fixed in the housing of the mobile VR device or borrowed from the smartphone installed in it.
The PC and the VR device are connected in a wired or wireless manner; when a wireless connection is used, operation in a WLAN (wireless local area network) or 5G communication environment is preferred. Since 5G communication offers high data rates and low latency, the actual delay between the PC and the VR device in a 5G environment is essentially negligible.
To stream the picture of the VR application software to the VR device, the core modules to be implemented in the streaming system shown in Fig. 1 are: the server driver on the streaming-software server side, the VR device, and the streaming-software client installed in the VR device. The VR device obtains its own attitude data; the streaming-software client and the server driver handle data transmission and processing.
Fig. 2 is a flowchart of the video streaming method provided by the first embodiment. The method includes the following steps: start the streaming-software client on the VR device, for example the B-side NOLOHOME software, and start the streaming-software server side on the PC, for example the A-side NOLOHOME software. The control interface UI of the streaming-software server side includes various control buttons, through which the streaming software is started and the A side and B side are connected. The VR device can send attitude data, control information and other data through the streaming-software client to the server driver of the streaming-software server side on the PC; the server driver processes the received data and sends it to the application platform software for picture rendering, and then sends the rendered picture to the VR device for display. This is explained in detail below.
S1: Obtain the attitude data of the VR device.
The attitude data of the VR device is obtained through sensors installed on the VR device, such as a nine-axis sensor, inertial sensor, six-axis sensor, gyroscope or magnetometer.
The attitude data of the VR device is passed to the streaming-software client installed on the VR device, and the client then sends it, using the UDP protocol, to the server driver of the streaming-software server side. UDP (User Datagram Protocol) is a connectionless transport-layer protocol of the Open Systems Interconnection reference model that provides a simple, transaction-oriented, unreliable message delivery service. Through this step, the streaming-software server side obtains the attitude data of the VR device.
Preferably, the control information of the VR device can also be obtained; it too can be sent through the streaming-software client, using the UDP protocol, to the server driver of the streaming-software server side.
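As an illustration of this UDP path, the C++ sketch below sends a single attitude packet from the client side to the server driver using POSIX sockets. The packet layout, port number and address are assumptions made for the example, not the actual NOLOHOME protocol.

```cpp
// Minimal sketch of the client-side UDP send; the wire format is hypothetical.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

struct AttitudePacket {           // hypothetical wire format
    float    pitch, roll, yaw;    // three-degree-of-freedom attitude data
    uint64_t t1_us;               // moment the sensor captured the data (t1)
    uint64_t t2_us;               // moment the packet is sent to the server side (t2)
};

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   // UDP: connectionless, best effort
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(9000);             // assumed server-driver port
    inet_pton(AF_INET, "192.168.1.10", &server.sin_addr);  // assumed PC address

    AttitudePacket pkt{0.0f, 0.0f, 90.0f, 0, 0}; // placeholder sensor values
    sendto(sock, &pkt, sizeof(pkt), 0,
           reinterpret_cast<sockaddr*>(&server), sizeof(server));
    close(sock);
    return 0;
}
```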
S2: Send the obtained attitude data to the VR application software for picture rendering.
In this embodiment of the present invention, this specifically includes the following steps:
S21: Send the obtained attitude data of the VR device to the data interface, which passes it to the VR application software.
The attitude data obtained by the server driver of the streaming-software server side is passed to the data interface. The VR application software in the application platform software SteamVR uses an application engine and has already integrated the SDK provided by the data interface OpenVR, so OpenVR can pass the attitude data to the VR application software.
Preferably, the control information of the VR device obtained by the server driver of the streaming-software server side is also sent to the VR application software for picture rendering: the control information obtained by the streaming-software server side is sent to the data interface and passed on to the VR application software.
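The hand-off from the server driver to the data interface can be pictured with the following sketch, based on the openvr_driver.h API (IVRServerDriverHost::TrackedDevicePoseUpdated), which is how an OpenVR server driver publishes poses to SteamVR applications. The device index and pose values are placeholders; how the NOLOHOME driver actually packages its data is not specified in the text.

```cpp
// Sketch: inside the streaming software's server driver, pushing a received pose
// into OpenVR so that SteamVR applications can render with it. Assumes this is
// called from within an active OpenVR driver context.
#include <openvr_driver.h>

void SubmitPose(uint32_t deviceIndex, double x, double y, double z,
                const vr::HmdQuaternion_t& q) {
    vr::DriverPose_t pose = {};
    pose.poseIsValid = true;
    pose.deviceIsConnected = true;
    pose.result = vr::TrackingResult_Running_OK;
    pose.qWorldFromDriverRotation.w = 1.0;   // identity world/driver transforms
    pose.qDriverFromHeadRotation.w = 1.0;
    pose.vecPosition[0] = x;                 // positioning data (6DoF case)
    pose.vecPosition[1] = y;
    pose.vecPosition[2] = z;
    pose.qRotation = q;                      // attitude data from the headset sensors

    // OpenVR forwards the pose to the VR application for the next rendered frame.
    vr::VRServerDriverHost()->TrackedDevicePoseUpdated(deviceIndex, pose, sizeof(pose));
}
```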
S22: Render the picture through the application engine according to the attitude data obtained by the VR application software and the application logic.
The VR application software passes the obtained attitude data, together with the application logic, to the application engine to determine the exact content of the picture to be rendered, and the picture is rendered. The application engine is Unreal Engine 4, Universal 3D, or the like.
Preferably, the VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered, and the picture is rendered.
S23: Store the data rendered by the application engine in the video memory of the graphics card.
In this embodiment of the present invention, the data rendered by the application engine is stored in the video memory of the graphics card, for example the video memory of an Nvidia graphics card, and the VR application software is notified that the picture has been rendered; the VR application software notifies the data interface OpenVR, and the data interface OpenVR notifies the server driver of the streaming-software server side of the rendering-completed event.
S3: Obtain the rendered picture and send it to the VR device for display.
In this embodiment of the present invention, this specifically includes the following steps:
S31: Obtain the texture data corresponding to the rendered picture, and encode one frame into multiple data packets.
When the server driver of the streaming-software server side learns of the rendering-completed event, it uses the texture address passed by OpenVR to locate the corresponding texture data in the video memory; this is the data of one frame, and the frame is encoded into multiple data packets.
In this embodiment of the present invention, the NvCodec library, Nvidia's dedicated video encoding and decoding library, is used. During initialization, the NvCodec library is informed in advance of the encoding format and the picture format. In this embodiment, the H.264 standard is used to encode the data. As for the picture format, images in the NV_ENC_BUFFER_FORMAT_ABGR format are used; for the current frame, the NvCodec library encodes one frame into multiple small data packets as required.
S32: Send the multiple encoded data packets to the VR device for decoding and display.
After encoding is complete, the server driver of the streaming-software server side sends the encoded data packets to the streaming-software client installed on the VR device, and the client passes them on to the VR device. After receiving the data of one complete frame, the VR device decodes the received packets, forms a complete image on the VR device, and displays it.
Any existing method and hardware may be used for picture display on the VR device; no specific requirements are imposed here.
In summary, in the video streaming method provided by the above embodiment, the streaming-software server side first obtains the attitude data and control information of the VR device; the obtained attitude data and control information are sent to the VR application software for picture rendering; and the rendered picture is obtained and sent to the VR device for display. The streaming-software server side is installed on the terminal, so the terminal is responsible for running the VR application software, and the VR device is responsible only for picture display. Picture processing can therefore rely on the hardware configuration of the terminal itself, and a satisfactory picture is obtained on the screen of the VR device. Moreover, connecting the terminal and the VR device wirelessly through a 5G router solves the technical problem of "wireless VR" that has troubled many manufacturers.
Second Embodiment
As shown in Fig. 3, the video streaming system in the second embodiment of the present invention includes a terminal, a VR device and a positioning and tracking device. Application platform software and the streaming-software server side are installed on the terminal. In this embodiment, a PC is taken as the example terminal; the terminal may also be a tablet computer, smart TV, smartphone or similar terminal with data-processing capability. Illustratively, the application platform software installed on the PC is the Steam VR platform software (the corresponding APP on a smartphone); it may of course also be another application platform, such as the VIVEPORT platform, the HYPEREAL platform, the ANTVR application software platform, DeePoon Assistant, Tencent WEGAME, or the OGP application platform. The VR application software in the application platform software uses an application engine (for example Unreal Engine 4 or Universal 3D) and has integrated the SDK provided by the data interface, for example the SDK provided by the OpenVR data interface of the Steam VR platform software, so that the application's picture can be seen on the PC's display. The streaming-software server side can for example be configured as the A side of the NOLOHOME software.
The streaming-software server side consists of two parts: a control interface and a server driver. The server driver is preferably a dll file, but may also take other forms, such as an SDK or API file. When the application platform software, for example the Steam VR platform software, is started on the PC, the server driver is loaded accordingly.
A streaming-software client is installed on the VR device; it can for example be configured as the B side of the NOLOHOME software. The VR device may be an all-in-one VR headset, in which case the streaming-software client is installed in the headset's system, the picture is displayed on the headset's screen, and the sensors are fixed in the headset. The VR device may also be a mobile VR device, in which case the streaming-software client is installed in the smartphone of the mobile VR device, the picture may be displayed on that smartphone or on the display of the mobile VR device, and the sensors may be fixed in the housing of the mobile VR device or borrowed from the smartphone installed in it.
The PC and the VR device are connected in a wired or wireless manner; when a wireless connection is used, operation in a WLAN (wireless local area network) or 5G communication environment is preferred. Since 5G communication offers high data rates and low latency, the actual delay between the PC and the VR device in a 5G environment is essentially negligible.
In the prior art, most VR devices can only be used to watch video, i.e. they provide only three-degree-of-freedom attitude tracking (pitch, roll and yaw). If six-degree-of-freedom head and hand positioning is required (pitch, roll and yaw plus the spatial X, Y, Z coordinates), a positioning and tracking device must be provided. The positioning and tracking device tracks the user's position and may include, for example, a handle held in the user's hand to track the hand position, and a base station installed on the VR device, built-in or as a peripheral, to track the head position. The handle may pass its positioning data to the base station, which then passes both its own and the handle's positioning data to the VR device; alternatively, both the handle and the base station pass their positioning data directly to the VR device.
The VR device obtains the positioning data collected by the positioning and tracking device, together with the attitude data of the positioning and tracking device and its own attitude data, and sends the positioning data and attitude data, using the UDP protocol, through the streaming-software client to the streaming-software server side on the terminal. The streaming-software server side on the terminal sends the positioning data and attitude data to the application platform software, so that the application platform software renders the picture in real time.
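A hypothetical wire format for the combined attitude and positioning data sent in this step might look as follows; the field layout, sizes and device identifiers are illustrative assumptions only, not the NOLOHOME protocol.

```cpp
// Sketch: one six-degree-of-freedom sample as the streaming-software client
// might bundle it for the server driver (second embodiment). Entirely hypothetical.
#include <cstdint>

#pragma pack(push, 1)                 // fixed layout so both ends agree
struct SixDofSample {
    uint8_t  deviceId;                // 0 = headset, 1 = left handle, 2 = right handle (assumed)
    float    pitch, roll, yaw;        // attitude data from the sensors
    float    x, y, z;                 // positioning data from base station / handle
    uint64_t t1_us;                   // moment the sensor captured the data (t1)
    uint64_t t2_us;                   // moment the packet was sent to the server side (t2)
};
#pragma pack(pop)

static_assert(sizeof(SixDofSample) == 1 + 6 * 4 + 2 * 8, "unexpected padding");
```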
To stream the picture of the VR application software to the VR device with the system architecture shown in Fig. 3, the core modules to be implemented in the video streaming system of Fig. 3 are: the server driver on the streaming-software server side, the VR device, the streaming-software client installed on the VR device, and the positioning and tracking device. The positioning and tracking device collects the positioning data and attitude data of the user's body; the VR device obtains the positioning data and attitude data and transmits them to the server driver; the streaming-software client and the server driver handle data transmission and processing.
Fig. 4 is a flowchart of the video streaming method provided by the second embodiment. The method includes the following steps: start the streaming-software client on the VR device, for example the B-side NOLOHOME software, and start the streaming-software server side on the PC, for example the A-side NOLOHOME software. The control interface UI of the streaming-software server side includes various control buttons, through which the streaming software is started and the A side and B side are connected. The streaming-software client on the VR device can send attitude data, control information, positioning data and the like to the server driver of the streaming-software server side on the PC; the server driver processes the received data and sends it to the application platform software for picture rendering, and then sends the rendered picture to the VR device for display. This is explained in detail below.
S1: Obtain attitude data and positioning data.
Obtaining the attitude data and positioning data specifically includes the following steps:
S11: Collect the user's positioning data and/or attitude data through the positioning and tracking device.
The positioning and tracking device may include a base station installed on the VR device, a handle held in the user's hand, and the like; by obtaining the positioning data of the base station and/or handle, the positioning data of the user's head and/or hands is obtained. The user's positioning data can be obtained with the three-dimensional spatial positioning method and system of patent application No. 201610917518.0, or with other known three-dimensional spatial positioning methods and systems.
The attitude data of the user's head can be obtained through sensors installed on the VR device, or through sensors on the base station installed on the VR device; such sensors are, for example, a nine-axis sensor, inertial sensor, six-axis sensor, gyroscope or magnetometer. The attitude data of other body parts, for example the hands, is obtained through sensors installed on the handle of the positioning and tracking device.
S12: Send the collected attitude data and positioning data to the VR device.
The VR device may read the attitude data and positioning data in a wired manner, for example via an OTG data cable, or wirelessly, for example via Bluetooth or WiFi. For an all-in-one VR headset, the data is sent directly into the headset's system; for a mobile VR device, the data can be sent to the smartphone installed in the housing of the mobile VR device.
S13: Send the obtained attitude data and positioning data to the streaming-software server side.
The VR device passes the obtained attitude data and positioning data to the streaming-software client installed on it, which then sends the data, using the UDP protocol over a 5G wireless connection, to the server driver of the streaming-software server side installed on the terminal. Through this step, the streaming-software server side obtains the positioning data and attitude data.
Preferably, the streaming-software server side can also obtain control information, which can likewise be sent through the streaming-software client, using the UDP protocol, to the server driver of the streaming-software server side. The control information may come from the VR device or from the positioning and tracking device.
S2: Send the obtained attitude data and positioning data to the VR application software for picture rendering, which specifically includes the following steps:
S21: Send the obtained attitude data and positioning data to the data interface, which passes them to the VR application software.
The attitude data and positioning data obtained by the server driver of the streaming-software server side are passed to the data interface. The VR application software in the application platform software SteamVR uses an application engine and has already integrated the SDK provided by the data interface OpenVR; the data interface OpenVR passes the attitude data and positioning data to the VR application software.
Preferably, the control information obtained by the server driver of the streaming-software server side is also sent to the VR application software: it is sent to the data interface OpenVR and passed on to the VR application software via the data interface OpenVR.
S22: Render the picture through the application engine according to the positioning data and attitude data obtained by the VR application software and the application logic.
The VR application software passes the obtained positioning data and attitude data, together with the application logic, to the application engine to determine the exact content of the picture to be rendered, and the picture is rendered. The application engine is Unreal Engine 4, Universal 3D, or the like.
Preferably, the VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered.
S23: Store the data rendered by the application engine in the video memory of the graphics card.
In this embodiment of the present invention, the data rendered by the application engine is stored in the video memory of the graphics card, and the VR application software is notified that the picture has been rendered; the VR application software notifies the data interface, and the data interface notifies the server driver of the streaming-software server side of the rendering-completed event.
S3: Obtain the rendered picture and send it to the VR device for display.
In this embodiment of the present invention, this specifically includes the following steps:
S31: Obtain the texture data corresponding to the rendered picture, and encode one frame into multiple data packets.
When the server driver of the streaming-software server side learns of the rendering-completed event, it uses the texture address passed by the data interface to locate the corresponding texture data in the video memory; this is the data of one frame, and the frame is encoded into multiple data packets.
In this embodiment of the present invention, this is implemented with the NvCodec library, Nvidia's dedicated video encoding and decoding library. During initialization, the NvCodec library is informed in advance of the encoding format and the picture format. In this embodiment, the H.264 standard is used to encode the data. As for the picture format, images in the NV_ENC_BUFFER_FORMAT_ABGR format are used; for the current frame, the NvCodec library encodes one frame into multiple small data packets as required.
S32: Send the multiple encoded data packets to the VR device for decoding and display.
After encoding is complete, the server driver of the streaming-software server side sends the encoded data packets to the streaming-software client installed on the VR device, and the client passes them on to the VR device. After receiving the data of one complete frame, the VR device decodes the received packets, forms a complete image on the VR device, and displays it. Any existing method and hardware may be used for picture display on the VR device; no specific requirements are imposed here.
In the video streaming method provided by the above embodiment, the streaming-software server side first obtains the positioning data and attitude data; the obtained attitude data and positioning data are sent to the VR application software for picture rendering; and the rendered picture is obtained and sent to the VR device for display. The streaming-software server side is installed on the terminal, so only the terminal is responsible for running the VR application software, and the VR device itself is responsible only for picture display. Moreover, connecting the terminal and the VR device wirelessly through a 5G router solves the technical problem of "wireless VR" that has troubled many manufacturers.
Third Embodiment
The video streaming system used in the third embodiment is the same as in the first embodiment, including a terminal and a VR device (for example an all-in-one VR headset), and is not described again here. When this system is in use, if data is fed into the application platform software without restriction, i.e. every piece of data is put into the application platform software as soon as it is received, then because each device runs at a different frequency (for example the VR device transmits data at frequency X while the application platform software samples data at frequency Y, with X not equal to Y) and with a different delay, problems such as picture delay and picture jitter eventually result.
To solve this problem, the data must be reasonably predicted so that the rendered picture is more stable, smooth and fluent. Therefore, in the video streaming system of this embodiment of the present invention, a positioning prediction unit is installed on the terminal, provided in software form in the server driver of the streaming-software server side. The positioning prediction unit predicts, from the attitude data of the VR device, the attitude data that the application platform software needs for picture rendering, and the application platform software renders the real-time picture from the predicted data. Obtaining predicted attitude data through the positioning prediction unit makes it possible to predict the application platform software's attitude data at the next moment fairly accurately, thereby reducing picture jitter and display delay. The predicted attitude data for the next moment is used for picture rendering in the VR application software, and the terminal then transmits the rendered picture through the streaming-software server side, using the UDP protocol, to the streaming-software client in the VR device for display. This process is described in detail below.
To stream the picture of the VR application software to the VR device, the requirement is implemented with the system architecture shown in Fig. 1. In the video streaming system of Fig. 1, the core modules to be implemented are: the server driver of the streaming-software server side installed on the terminal, the VR device, the streaming-software client installed on the VR device, and the positioning prediction unit. The VR device obtains attitude data and transmits it to the server driver; the streaming-software client and the server driver handle data transmission and processing; the positioning prediction unit predicts, from the attitude data sent by the VR device, the attitude data that the application platform software needs for picture rendering. The positioning prediction unit is located in the server driver of the streaming-software server side.
Fig. 5 is a flowchart of the video streaming method provided by the third embodiment; the implementation of video streaming is described in detail below.
S1: Obtain the attitude data of the VR device.
The attitude data of the VR device is obtained through sensors installed on the VR device, such as a nine-axis sensor, inertial sensor, six-axis sensor, gyroscope or magnetometer. The attitude data of other body parts, for example the hands, is obtained through sensors installed on the handle of the positioning and tracking device.
The attitude data of the VR device is passed to the streaming-software client installed on the VR device, and the client then sends it, using the UDP protocol, to the server driver of the streaming-software server side installed on the terminal; the server driver thereby obtains the attitude data of the VR device.
S2: Obtain predicted attitude data according to the obtained attitude data.
For a good user experience, the attitude data sent by the VR device must be reasonably predicted so that the rendered picture is more stable, smooth and fluent. Therefore, in the video streaming system of this embodiment of the present invention, the server driver includes a positioning prediction unit, which can be provided in software form in the server driver of the streaming-software server side. As shown in Fig. 6, the positioning prediction unit obtains the predicted attitude data from the obtained attitude data through the following steps:
S21: Obtain a first timestamp and a second timestamp, where the first timestamp is the moment the streaming-software server side receives the i-th attitude data, and the second timestamp is the moment the streaming-software server side receives the (i+1)-th attitude data.
In this embodiment of the present invention, the positioning prediction unit obtains the first timestamp Ti (i = 1, 2 ... N, N a positive integer, N >= 1), which is the timestamp obtained by signing the i-th attitude data sent by the VR device together with the time at which that attitude data was received. The positioning prediction unit obtains the second timestamp Ti+1 (i = 1, 2 ... N, N a positive integer, N >= 1), which is the timestamp obtained by signing the (i+1)-th attitude data sent by the VR device together with the time at which that attitude data was received.
S22: Obtain the data delay M with which the streaming-software server side receives the attitude data.
When streaming video between different devices, the application platform software samples data for rendering at X hertz, while the VR device sends attitude data at Y hertz. The data delay M is the total delay from the occurrence of a motion to the server driver receiving the attitude data.
The data delay M is obtained with the following formula:
M = T0 + (t2 - t1) + ΔT
where T0 is the delay from the occurrence of the motion to the sensor capturing it. Specifically, for a VR device worn on the user's head, this is the delay from the head motion to the sensor capturing the attitude data of that motion; t1 is the moment the sensor captures the attitude data; t2 is the moment the attitude data is sent to the streaming-software server side; and ΔT is the network delay.
Fig. 7 shows all the data delays involved from the occurrence of a motion to the server driver obtaining the data.
In the third embodiment, the data delay ΔT caused by network latency is fixed over the whole video streaming process and only needs to be computed once. Obtaining the data delay caused by network latency specifically includes the following steps:
S221: At a first sending moment t3, the server driver of the streaming-software server side sends request data to the VR device.
S222: At a first receiving moment t4, the server driver of the streaming-software server side receives the reply message sent back by the VR device.
S223: Obtain the network delay from the first receiving moment and the first sending moment.
The network delay is given by the following formula:
ΔT = (t4 - t3) / 2
That is, the network delay ΔT is obtained from the request and response times between the server driver and the VR device.
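Steps S221 to S223 amount to a one-time ping measurement. A minimal sketch, with the UDP exchange abstracted behind two hypothetical helpers, is:

```cpp
// Sketch: one-time estimate of the network delay as half the request/response
// round trip. SendRequest() and WaitReply() are hypothetical helpers standing
// in for the sendto()/recvfrom() exchange between server driver and VR device.
#include <chrono>

void SendRequest();   // assumed: sends the request data to the VR device
void WaitReply();     // assumed: blocks until the reply message arrives

double EstimateNetworkDelaySeconds() {
    using clock = std::chrono::steady_clock;
    auto t3 = clock::now();   // first sending moment
    SendRequest();
    WaitReply();
    auto t4 = clock::now();   // first receiving moment
    // Delta T = (t4 - t3) / 2, assuming a symmetric link.
    return std::chrono::duration<double>(t4 - t3).count() / 2.0;
}
```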
S23: Obtain a third timestamp, where the third timestamp is the time at which the application platform software samples from the streaming-software server side.
The VR device transmits data at frequency X, while the application platform software samples data at frequency Y, X being unequal to Y. After obtaining the i-th and (i+1)-th attitude data sent by the VR device to the streaming-software server side, together with the corresponding first timestamp Ti and second timestamp Ti+1, the positioning prediction unit then obtains the third timestamp Tj′, the moment at which the application platform software samples from the streaming-software server side.
S24: Obtain the predicted attitude data for the third timestamp from the first timestamp and the attitude data at the first timestamp, the second timestamp and the attitude data at the second timestamp, and the data delay.
The predicted attitude data for the third timestamp is obtained from the first timestamp and the attitude data obtained at the first timestamp, the second timestamp and the attitude data obtained at the second timestamp, and the data delay, with the following formula (published as an image; the form given here is the linear extrapolation implied by the variable definitions):
Vj′ = Vi + (Vi+1 - Vi) / (Ti+1 - Ti) × (Tj′ + M - Ti)
where Vj′ is the predicted attitude data at moment Tj′; Ti is the first timestamp; Vi is the attitude data at the first timestamp; Ti+1 is the second timestamp; Vi+1 is the attitude data at the second timestamp; Tj′ is the third timestamp; and M is the data delay.
In this way, the attitude data at moment Tj′ can be predicted fairly accurately, thereby reducing picture jitter and display delay.
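Under the variable definitions above, step S24 can be sketched as the following linear extrapolation in C++. Since the published formula appears only as an image, the exact form shown here is an inference from those definitions, not a verbatim transcription.

```cpp
// Sketch of step S24: extrapolate the pose to the sampling moment Tj', using
// the last two received samples (Vi at Ti, Vi+1 at Ti+1) and the data delay M.
struct Sample {
    double t;   // receive timestamp at the server side (seconds)
    double v;   // one attitude component, e.g. yaw in degrees
};

double PredictAtSampling(Sample vi, Sample vi1, double tj, double m) {
    // Rate of change between the two most recent samples.
    double slope = (vi1.v - vi.v) / (vi1.t - vi.t);
    // Receive timestamps lag the motion by M, so the motion time matching the
    // sampling moment tj lies at tj + m on the receive-time axis.
    return vi.v + slope * (tj + m - vi.t);
}
```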
S3: Send the predicted attitude data to the VR application software for picture rendering.
The predicted attitude data for moment Tj′ is passed to the application platform software for picture rendering, and the rendered picture is then transmitted to the VR device for display.
The application platform software renders the picture according to the predicted attitude data and sends the rendered picture to the VR device for display, which specifically includes the following steps:
S31: Send the predicted attitude data to the data interface, which passes it to the VR application software in the application platform software.
The predicted attitude data obtained by the positioning prediction unit in the server driver of the streaming-software server side is passed to the data interface. The VR application software in the application platform software SteamVR uses an application engine and has integrated the SDK provided by the data interface OpenVR; the data interface OpenVR passes the attitude data to the VR application software.
S32: Determine the picture content to be rendered by the application engine according to the predicted attitude data obtained by the VR application software and the application logic, and render the picture.
The attitude data obtained by the VR application software, together with the application logic, is passed to the application engine to determine the exact content of the picture to be rendered, and the picture is rendered. The application engine may be Unreal Engine 4, Universal 3D, or the like.
Preferably, the control information obtained by the server driver of the streaming-software server side is also sent to the VR application software for picture rendering: the control information obtained by the streaming-software server side is sent to the data interface OpenVR and passed on to the VR application software, and the VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered.
In this embodiment of the present invention, the data rendered by the application engine is stored in the video memory of the graphics card, for example the video memory of an Nvidia graphics card, and the VR application software is notified that the picture has been rendered; the VR application software notifies the data interface OpenVR, and the data interface OpenVR notifies the server driver of the streaming-software server side of the rendering-completed event.
S4: Obtain the rendered picture and send it to the VR device for display.
In this embodiment of the present invention, this specifically includes the following steps:
S41: Obtain the texture data corresponding to the rendered picture, and encode one frame into multiple data packets.
When the server driver of the streaming-software server side learns of the rendering-completed event, it uses the texture address passed by the data interface OpenVR to locate the corresponding texture data in the video memory; this is the data of one frame, and the frame is encoded into multiple data packets.
In this embodiment of the present invention, this is implemented with the NvCodec library, Nvidia's dedicated video encoding and decoding library. During initialization, the NvCodec library is informed in advance of the encoding format and the picture format. In this embodiment, the H.264 standard is used to encode the data. As for the picture format, images in the NV_ENC_BUFFER_FORMAT_ABGR format are used; for the current frame, the NvCodec library encodes one frame into multiple small data packets as required.
S42: Send the multiple encoded data packets to the VR device for decoding and display.
After encoding is complete, the server driver of the streaming-software server side sends the encoded data packets to the streaming-software client installed on the VR device, and the client passes them on to the VR device. After receiving the data of one complete frame, the VR device decodes the received packets, forms a complete image on the VR device, and displays it.
Any existing method and hardware may be used for picture display on the VR device; no specific requirements are imposed here.
In this embodiment of the present invention, the streaming-software server side installed on the terminal can obtain the control information sent by the VR device; this control information may come from the VR device or from a controller used with the VR device. While sending the predicted attitude data to the application platform software for picture rendering, the streaming-software server side also sends the control information to the application platform software for picture rendering.
In summary, the video streaming method provided by the above embodiment calculates the data delay of the attitude data received by the streaming-software server side and, combining it with the attitude data sent by the VR device, predicts the attitude data at the moment the application platform software renders the picture; the picture is rendered according to the predicted attitude data and sent to the VR device for display. This method predicts the attitude data fairly accurately, thereby reducing picture jitter and display delay.
Fourth Embodiment
As shown in Fig. 8, the video streaming system in the fourth embodiment of the present invention includes a terminal, a VR device and a positioning and tracking device. Application platform software and the streaming-software server side are installed on the terminal. In this embodiment, a PC is taken as the example terminal; the terminal may also be a tablet computer, smart TV, smartphone or similar terminal with data-processing capability. Illustratively, the application platform software installed on the PC is the Steam VR platform software (the corresponding APP on a smartphone); it may of course also be another game platform, such as the VIVEPORT platform, the HYPEREAL platform, the ANTVR game platform, DeePoon Assistant, Tencent WEGAME, or the OGP game platform. The VR application software in the application platform software uses an application engine (for example Unreal Engine 4 or Universal 3D) and integrates the SDK provided by the data interface OpenVR, for example the SDK provided by the OpenVR data interface of the Steam VR platform software, so that the application's picture can be seen on the PC's display. The streaming-software server side can be configured as the A side of the NOLOHOME software.
The streaming-software server side consists of two parts: a control interface and a server driver. The server driver is preferably a dll file, but may also take other forms, such as an SDK or API file. When the application platform software, for example the Steam VR platform software, is started on the PC, the server driver is loaded accordingly.
A streaming-software client is installed on the VR device; it can for example be configured as the B side of the NOLOHOME software. The VR device may be an all-in-one VR headset, in which case the streaming-software client is installed in the headset's system, the picture is displayed on the headset's screen, and the sensors are fixed in the headset. The VR device may also be a mobile VR device, in which case the streaming-software client is installed in the smartphone of the mobile VR device, and the picture may be displayed on that smartphone or on the display of the mobile VR device. The sensors may be fixed in the housing of the mobile VR device, or borrowed from the smartphone installed in it.
The PC and the VR device are connected in a wired or wireless manner; when a wireless connection is used, operation in a WLAN (wireless local area network) or 5G communication environment is preferred. Since 5G communication offers high data rates and low latency, the actual delay between the PC and the VR device in a 5G environment is essentially negligible.
Most existing VR devices can only be used to watch video, i.e. they provide only three-degree-of-freedom attitude tracking (pitch, roll and yaw). If six-degree-of-freedom head and hand positioning is required (pitch, roll and yaw plus the spatial X, Y, Z coordinates), a positioning and tracking device must be provided. The positioning and tracking device may include a handle and a base station installed on the VR device: the handle is held in the user's hand and may pass its positioning data to the base station, which then passes both its own and the handle's positioning data to the VR device; alternatively, both the handle and the base station pass their positioning data directly to the VR device. The base station may be installed on the VR device as a built-in or peripheral component: built-in, it can be integrated and assembled into the system during manufacture of the VR device; as a peripheral, it can be attached to the VR device wirelessly or by wire. The positioning and tracking device is connected to the VR device through a USB interface and collects the positioning data of the user's head and/or hands. The VR device obtains the positioning data collected by the positioning and tracking device and sends it, using the UDP protocol, to the streaming-software server side on the PC, which sends the positioning data to the application platform software so that the application platform software renders the picture in real time.
For a good user experience: if data is fed into the application platform software without restriction, i.e. every piece of data is put into the application platform software as soon as it is received, then because each device runs at a different frequency (for example the VR device transmits data at frequency X while the application platform software samples data at frequency Y, with X not equal to Y) and with a different delay, problems such as picture delay and picture jitter eventually result. To solve this problem, the data must be reasonably predicted so that the rendered picture is more stable, smooth and fluent. Therefore, in the video streaming system of this embodiment of the present invention, a positioning prediction unit is installed on the terminal, provided in software form in the server driver of the streaming-software server side. The positioning prediction unit predicts, from the positioning data collected by the positioning and tracking device, the positioning data that the application platform software needs for picture rendering, and the application platform software renders the real-time picture from the predicted data. Obtaining predicted positioning data through the positioning prediction unit makes it possible to predict the application platform software's positioning data at the next moment fairly accurately, thereby reducing picture jitter and display delay. The terminal then transmits the rendered picture through the streaming-software server side, using the UDP protocol, to the VR device for display via the streaming-software client. This process is described in detail below.
In this embodiment of the present invention, the VR application software in the application platform software uses an application engine (for example Unreal Engine 4 or Universal 3D) and has integrated the SDK provided by the data interface, for example the OpenVR SDK, so that the picture of the VR application software can be seen on the PC's display.
To stream the picture of the VR application software to the VR device with the system architecture shown in Fig. 8, the core modules to be implemented in the video streaming system of Fig. 8 are: the server driver of the streaming-software server side installed on the terminal, the VR device, the streaming-software client installed on the VR device, the positioning prediction unit, and the positioning and tracking device. The positioning and tracking device collects the positioning data of the head and/or hands; the VR device obtains the collected positioning data and transmits it to the server driver; the streaming-software client and the server driver handle data transmission and processing; the positioning prediction unit predicts, from the positioning data sent by the VR device, the positioning data that the application platform software needs for picture rendering. The positioning prediction unit is located in the server driver of the streaming-software server side.
Fig. 9 is a flowchart of the video streaming method provided by the fourth embodiment; the whole video streaming process is described in detail below.
S1: Obtain the positioning data collected by the positioning and tracking device.
Obtaining the positioning data collected by the positioning and tracking device specifically includes the following steps:
S11: Collect the user's positioning data through the positioning and tracking device.
The positioning and tracking device may include a base station installed on the VR device, a handle held in the user's hand, and the like; by obtaining the positioning data of the base station and/or handle, the positioning data of the user's head and/or hands is obtained. The base station may be installed on the VR device as a built-in or peripheral component: built-in, it can be integrated and assembled into the system during manufacture of the VR device; as a peripheral, it can be attached to the VR device wirelessly or by wire. The user's positioning data can be obtained with the three-dimensional spatial positioning method and system of patent application No. 201610917518.0, or with other known three-dimensional spatial positioning methods and systems, such as multi-camera multi-marker positioning methods or SLAM methods.
S12: Send the positioning data collected by the positioning and tracking device to the VR device.
The VR device may read the positioning data in a wired manner, for example via an OTG data cable, or wirelessly, for example via Bluetooth or WiFi. For an all-in-one VR headset, the data is sent directly into the headset's system; for a mobile VR device, the data can be sent to the smartphone installed in the housing of the mobile VR device.
S13: Send the positioning data obtained by the VR device to the streaming-software server side using the UDP protocol. The VR device passes the obtained positioning data to the streaming-software client installed on it, which then sends it, using the UDP protocol, to the server driver of the streaming-software server side installed on the terminal. Through this step, the streaming-software server side obtains the positioning data.
Preferably, the streaming-software server side can also obtain control information, which can likewise be sent through the streaming-software client, using the UDP protocol, to the server driver of the streaming-software server side. The control information may come from the VR device or from the positioning and tracking device.
S2: Obtain predicted positioning data according to the obtained positioning data.
For a good user experience, the positioning data sent by the VR device must be reasonably predicted so that the rendered picture is more stable, smooth and fluent. Therefore, in the video streaming system of this embodiment of the present invention, the server driver includes a positioning prediction unit, which can be provided in software form in the server driver of the streaming-software server side. As shown in Fig. 10, the positioning prediction unit obtains the predicted positioning data from the obtained positioning data through the following steps:
S21: Obtain a first timestamp and a second timestamp, where the first timestamp is the moment the streaming-software server side receives the i-th positioning data, and the second timestamp is the moment the streaming-software server side receives the (i+1)-th positioning data.
In this embodiment of the present invention, the positioning prediction unit obtains the first timestamp Ti (i = 1, 2 ... N, N a positive integer, N >= 1), which is the timestamp obtained by signing the i-th positioning data sent by the VR device together with the time at which that positioning data was received. The positioning prediction unit obtains the second timestamp Ti+1 (i = 1, 2 ... N, N a positive integer, N >= 1), which is the timestamp obtained by signing the (i+1)-th positioning data sent by the VR device together with the time at which that positioning data was received.
S22: Obtain the data delay M with which the streaming-software server side receives the positioning data.
When streaming video between different devices, the application platform software samples data for rendering at X hertz, while the VR device sends positioning data at Y hertz. The data delay M is the total delay from the occurrence of a motion to the server driver receiving the positioning data.
The data delay M is obtained with the following formula:
M = T0 + (t2 - t1) + ΔT
where T0 is the delay from the occurrence of the motion to the sensor capturing it; t1 is the moment the sensor captures the positioning data; t2 is the moment the positioning data is sent to the streaming-software server side; and ΔT is the network delay.
Fig. 11 shows all the data delays involved from the occurrence of a motion to the server driver obtaining the data.
In the fourth embodiment, the data delay ΔT caused by network latency is fixed over the whole video streaming process and only needs to be computed once. Obtaining the data delay caused by network latency specifically includes the following steps:
S221: At a first sending moment t3, the server driver of the streaming-software server side sends request data to the VR device or the positioning and tracking device.
S222: At a first receiving moment t4, the server driver of the streaming-software server side receives the reply message sent by the VR device or the positioning and tracking device.
S223: Obtain the network delay from the first receiving moment and the first sending moment. The network delay is given by the following formula:
ΔT = (t4 - t3) / 2
That is, the network delay ΔT is obtained from the request and response times between the server driver and the VR device or positioning and tracking device.
From the above, the total delay from the motion to the server driver, i.e. the data delay M, can be determined as:
M = T0 + (t2 - t1) + (t4 - t3) / 2.
S23: Obtain a third timestamp, where the third timestamp is the time at which the application platform software samples from the streaming-software server side.
The VR device transmits data at frequency X, while the application platform software samples data at frequency Y, X being unequal to Y. After obtaining the i-th and (i+1)-th positioning data sent by the VR device to the streaming-software server side, together with the corresponding first timestamp Ti and second timestamp Ti+1, the positioning prediction unit then obtains the third timestamp Tj′, the moment at which the application platform software samples from the streaming-software server side.
S24: Obtain the predicted positioning data for the third timestamp from the first timestamp and the positioning data at the first timestamp, the second timestamp and the positioning data at the second timestamp, and the data delay.
The predicted positioning data for the third timestamp is obtained from the first timestamp and the positioning data obtained at the first timestamp, the second timestamp and the positioning data obtained at the second timestamp, and the data delay, using the same linear extrapolation as in the third embodiment:
Vj′ = Vi + (Vi+1 - Vi) / (Ti+1 - Ti) × (Tj′ + M - Ti)
where Vj′ is the predicted positioning data at moment Tj′; Ti is the first timestamp; Vi is the positioning data at the first timestamp; Ti+1 is the second timestamp; Vi+1 is the positioning data at the second timestamp; Tj′ is the third timestamp; and M is the data delay.
In this way, the positioning data at moment Tj′ can be predicted fairly accurately, thereby reducing picture jitter and display delay.
S3: Send the predicted positioning data to the application platform software for picture rendering.
The predicted positioning data for moment Tj′ is passed to the application platform software for picture rendering, and the rendered picture is then transmitted to the VR device for display.
The application platform software renders the picture according to the predicted positioning data and sends the rendered picture to the VR device for display, which specifically includes the following steps:
S31: Send the predicted positioning data to the data interface, which passes it to the VR application software in the application platform software.
The predicted positioning data obtained by the positioning prediction unit in the server driver of the streaming-software server side is passed to the data interface. The VR application software in the application platform software SteamVR uses an application engine and integrates the SDK provided by the data interface OpenVR; the data interface OpenVR passes the predicted positioning data to the VR application software.
S32: Determine the picture content to be rendered by the application engine according to the predicted positioning data obtained by the VR application software and the application logic, and render the picture.
The positioning data obtained by the VR application software, together with the application logic, is passed to the application engine to determine the exact content of the picture to be rendered, and the picture is rendered. The application engine is Unreal Engine 4, Universal 3D, or the like.
Preferably, the control information obtained by the server driver of the streaming-software server side is also sent to the VR application software for picture rendering: the control information obtained by the streaming-software server side is sent to the data interface OpenVR and passed on to the VR application software, and the VR application software also passes the obtained control information to the application engine to determine the exact content of the picture to be rendered.
In this embodiment of the present invention, the application engine stores the rendered data in the video memory of the graphics card, for example the video memory of an Nvidia graphics card, and notifies the VR application software that the picture has been rendered; the VR application software notifies the data interface OpenVR, and the data interface OpenVR notifies the server driver of the streaming-software server side of the rendering-completed event.
S4: Obtain the rendered picture and send it to the VR device for display.
In this embodiment of the present invention, this specifically includes the following steps:
S41: Obtain the texture data corresponding to the rendered picture, and encode one frame into multiple data packets.
When the server driver of the streaming-software server side learns of the rendering-completed event, it uses the texture address passed by the data interface OpenVR to locate the corresponding texture data in the video memory; this is the data of one frame, and the frame is encoded into multiple data packets.
In this embodiment of the present invention, the NvCodec library, Nvidia's dedicated video encoding and decoding library, is used. During initialization, the NvCodec library is informed in advance of the encoding format and the picture format. In this embodiment, the H.264 standard is used to encode the data. As for the picture format, images in the NV_ENC_BUFFER_FORMAT_ABGR format are used; for the current frame, the NvCodec library encodes one frame into multiple small data packets as required.
S42: Send the multiple encoded data packets to the VR device for decoding and display.
After encoding is complete, the server driver of the streaming-software server side sends the encoded data packets to the streaming-software client installed on the VR device, and the client passes them on to the VR device. After receiving the data of one complete frame, the VR device decodes the received packets, forms a complete image on the VR device, and displays it.
Any existing method and hardware may be used for picture display on the VR device; no specific requirements are imposed here.
In this embodiment of the present invention, the streaming-software server side installed on the terminal can also obtain the control information sent by the VR device; this control information may come from the VR device or from the positioning and tracking device. While sending the predicted positioning information to the application platform software for picture rendering, the streaming-software server side also sends the control information to the application platform software for picture rendering.
In summary, the video streaming method in the above embodiment calculates the data delay of the positioning data received by the streaming-software server side and, based on the positioning data collected by the positioning and tracking device, predicts the positioning data at the moment the application platform software renders the picture; the picture is rendered according to the predicted data and sent to the VR device for display. This method predicts the positioning data fairly accurately, thereby reducing picture jitter and display delay.
In addition, an embodiment of the present invention further provides a video streaming apparatus. The apparatus includes a processor and a memory; the processor is used to execute a video-streaming program stored in the memory to implement the above video streaming method. The memory stores one or more programs that implement video streaming. The memory may include volatile memory, such as random-access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state drive; and it may also include a combination of the above types of memory. When the one or more video-streaming programs in the memory are executed by the processor, some or all of the steps of the video streaming method described in the above method embodiments can be implemented.
The video streaming system, method and apparatus provided by the present invention have been described in detail above. Any obvious modification made to the invention by a person of ordinary skill in the art without departing from the essential spirit of the invention will constitute an infringement of the patent rights of the invention, and the corresponding legal liability shall be borne.

Claims (20)

1. A video streaming system, characterized by comprising a terminal and a VR device;
wherein application platform software and a streaming-software server side are installed on the terminal;
a streaming-software client is installed on the VR device, and the streaming-software client sends attitude data to the streaming-software server side on the terminal; the streaming-software server side sends the attitude data to the application platform software, and the application platform software renders the picture.
2. The video streaming system according to claim 1, characterized by further comprising a positioning and tracking device;
the positioning and tracking device is used to collect positioning data and send it to the VR device;
the streaming-software client sends the attitude data and the positioning data to the streaming-software server side on the terminal; the streaming-software server side sends the attitude data and the positioning data to the application platform software, and the application platform software renders the picture.
3. The video streaming system according to claim 1 or 2, characterized in that:
the streaming-software server side comprises a control interface and a server driver, and when the application platform software is started on the terminal, the server driver is loaded.
4. The video streaming system according to claim 1 or 2, characterized in that:
the streaming-software client sends the attitude data and/or positioning data to the streaming-software server side wirelessly.
5. The video streaming system according to claim 1 or 2, characterized in that:
the streaming-software server side comprises a server driver; a positioning prediction unit is located in the server driver and is used to obtain predicted positioning data / predicted attitude data according to the positioning data / attitude data sent by the VR device; the streaming-software server side sends the predicted positioning data / predicted attitude data to the application platform software, and the application platform software renders the picture according to the predicted positioning data / predicted attitude data.
6. A video streaming method, characterized by comprising the following steps:
obtaining attitude data of a VR device;
sending the obtained attitude data to VR application software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
7. The video streaming method according to claim 6, characterized in that obtaining the attitude data of the VR device comprises the following steps:
a streaming-software client installed on the VR device obtains the attitude data of the VR device;
a streaming-software server side installed on a terminal obtains the attitude data of the VR device sent by the streaming-software client.
8. The video streaming method according to claim 6, characterized in that sending the obtained attitude data to the VR application software for picture rendering comprises the following steps:
sending the obtained attitude data to a data interface, which passes it to the VR application software;
rendering the picture through an application engine according to the attitude data obtained by the VR application software and the application logic;
storing the data rendered by the application engine in video memory.
9. A video streaming method, characterized by comprising the following steps:
obtaining attitude data and positioning data;
sending the obtained attitude data and positioning data to VR application software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
10. The video streaming method according to claim 9, characterized in that obtaining the attitude data and positioning data comprises the following steps:
collecting the user's positioning data and/or attitude data through a positioning and tracking device;
sending the positioning data and/or attitude data collected by the positioning and tracking device to the VR device;
the streaming-software server side obtaining the positioning data and attitude data sent by the VR device.
11. A video streaming method, characterized by comprising the following steps:
obtaining attitude data of a VR device;
obtaining predicted attitude data according to the obtained attitude data;
sending the predicted attitude data to application platform software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
12. The video streaming method according to claim 11, characterized in that obtaining the predicted attitude data according to the obtained attitude data comprises the following steps:
obtaining a first timestamp and a second timestamp, wherein the first timestamp is the moment the streaming-software server side receives the i-th attitude data, and the second timestamp is the moment the streaming-software server side receives the (i+1)-th attitude data;
obtaining the data delay with which the streaming-software server side receives the attitude data;
obtaining a third timestamp, wherein the third timestamp is the time at which the application platform software samples from the streaming-software server side;
obtaining the predicted attitude data for the third timestamp according to the first timestamp and the attitude data at the first timestamp, the second timestamp and the attitude data at the second timestamp, and the data delay.
13. The video streaming method according to claim 12, characterized in that:
the data delay is obtained with the following formula:
M = T0 + (t2 - t1) + ΔT;
where M is the data delay; T0 is the delay from the occurrence of the motion to the sensor capturing it; t1 is the moment the sensor captures the attitude data; t2 is the moment the attitude data is sent to the streaming-software server side; and ΔT is the network delay.
14. The video streaming method according to claim 12, characterized in that:
the predicted attitude data for the third timestamp is obtained according to the first timestamp and the attitude data at the first timestamp, the second timestamp and the attitude data at the second timestamp, and the data delay, with the following formula:
Vj′ = Vi + (Vi+1 - Vi) / (Ti+1 - Ti) × (Tj′ + M - Ti)
where Vj′ is the predicted attitude data at moment Tj′; Ti is the first timestamp; Vi is the attitude data at the first timestamp; Ti+1 is the second timestamp; Vi+1 is the attitude data at the second timestamp; Tj′ is the third timestamp; and M is the data delay.
15. A video streaming method, characterized by comprising the following steps:
obtaining positioning data collected by a positioning and tracking device;
obtaining predicted positioning data according to the obtained positioning data;
sending the predicted positioning data to application platform software for picture rendering;
obtaining the rendered picture and sending it to the VR device for display.
16. The video streaming method according to claim 15, characterized in that obtaining the positioning data collected by the positioning and tracking device means sending the positioning data collected by the positioning and tracking device to the streaming-software server side of the terminal, comprising the following steps:
collecting the user's positioning data through the positioning and tracking device;
sending the positioning data collected by the positioning and tracking device to the VR device;
sending the positioning data obtained by the VR device to the streaming-software server side.
17. The video streaming method according to claim 15, characterized in that obtaining the predicted positioning data according to the obtained positioning data comprises the following steps:
obtaining a first timestamp and a second timestamp, wherein the first timestamp is the moment the streaming-software server side receives the i-th positioning data, and the second timestamp is the moment the streaming-software server side receives the (i+1)-th positioning data;
obtaining the data delay with which the streaming-software server side receives the positioning data;
obtaining a third timestamp, wherein the third timestamp is the time at which the application platform software samples from the streaming-software server side;
obtaining the predicted positioning data for the third timestamp according to the first timestamp and the positioning data at the first timestamp, the second timestamp and the positioning data at the second timestamp, and the data delay.
18. The video streaming method according to claim 17, characterized in that:
the data delay is obtained with the following formula:
M = T0 + (t2 - t1) + ΔT;
where M is the data delay; T0 is the delay from the occurrence of the motion to the sensor capturing it; t1 is the moment the sensor captures the positioning data; t2 is the moment the positioning data is sent to the streaming-software server side; and ΔT is the network delay.
19. The video streaming method according to claim 17, characterized in that:
the predicted positioning data for the third timestamp is obtained according to the first timestamp and the positioning data at the first timestamp, the second timestamp and the positioning data at the second timestamp, and the data delay, with the following formula:
Vj′ = Vi + (Vi+1 - Vi) / (Ti+1 - Ti) × (Tj′ + M - Ti)
where Vj′ is the predicted positioning data at moment Tj′; Ti is the first timestamp; Vi is the positioning data at the first timestamp; Ti+1 is the second timestamp; Vi+1 is the positioning data at the second timestamp; Tj′ is the third timestamp; and M is the data delay.
20. A video streaming apparatus, comprising a processor and a memory, the processor being used to execute a video-streaming program stored in the memory to implement the video streaming method according to any one of claims 6 to 19.
PCT/CN2019/111315 2018-10-16 2019-10-15 Video streaming system, video streaming method and apparatus WO2020078354A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/286,387 US11500455B2 (en) 2018-10-16 2019-10-15 Video streaming system, video streaming method and apparatus

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811203106.6 2018-10-16
CN201811203090.9A CN111064985A (zh) System, method and apparatus for implementing video streaming
CN201811202640.5A CN111064981B (zh) Video streaming system and method
CN201811203090.9 2018-10-16
CN201811202640.5 2018-10-16
CN201811203106.6A CN111065053B (zh) Video streaming system and method

Publications (1)

Publication Number Publication Date
WO2020078354A1 true WO2020078354A1 (zh) 2020-04-23

Family

ID=70283686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111315 WO2020078354A1 (zh) 2018-10-16 2019-10-15 视频串流***、视频串流方法及装置

Country Status (2)

Country Link
US (1) US11500455B2 (zh)
WO (1) WO2020078354A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI779336B (zh) * 2020-08-24 2022-10-01 Acer Incorporated Display system and method for playing autostereoscopic images

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130117377A1 (en) * 2011-10-28 2013-05-09 Samuel A. Miller System and Method for Augmented and Virtual Reality
CN106383596A * 2016-11-15 2017-02-08 Beijing Danghong Qitian International Cultural Development Group Co., Ltd. Virtual reality anti-dizziness system and method based on spatial positioning
CN106454322A * 2016-11-07 2017-02-22 Jinling Institute of Technology VR image processing system and method
CN106710002A * 2016-12-29 2017-05-24 Shenzhen Dilepu Digital Technology Co., Ltd. AR implementation method and system based on observer-viewpoint positioning
CN107024995A * 2017-06-05 2017-08-08 Hebei Maya Film & Television Co., Ltd. Multi-user virtual reality interaction system and control method
CN107577045A * 2013-05-30 2018-01-12 Oculus VR, LLC Method, apparatus and storage medium for predictive tracking of a head-mounted display
CN108052364A * 2017-12-13 2018-05-18 Shanghai Manheng Digital Technology Co., Ltd. Image display method, apparatus, device and storage medium based on remote operation

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080209048A1 (en) * 2007-02-28 2008-08-28 Microsoft Corporation Loading A Mirror Driver In Remote Terminal Server Session
JP2010212947A (ja) 2009-03-10 2010-09-24 Sony Corp Information processing apparatus and method, information processing system, and program
CN104035760A (zh) 2014-03-04 2014-09-10 Suzhou Tianhun Network Technology Co., Ltd. System for implementing immersive virtual reality across mobile platforms
CN106899860B (zh) 2015-12-21 2019-10-11 Ubitus Inc. System and method for transmitting media over a network
US9910282B2 (en) 2015-12-28 2018-03-06 Oculus Vr, Llc Increasing field of view of head-mounted display using a mirror
US11017712B2 (en) * 2016-08-12 2021-05-25 Intel Corporation Optimized display image rendering
CN107979763B (zh) 2016-10-21 2021-07-06 Alibaba Group Holding Ltd. Video generation and playback method, apparatus and system for a virtual reality device
CN206541288U (zh) 2017-01-07 2017-10-03 Beijing Guocheng Wantong Information Technology Co., Ltd. Virtual reality system, host and head-mounted display device
CN106998409B (zh) 2017-03-21 2020-11-27 Huawei Technologies Co., Ltd. Image processing method, head-mounted display and rendering device
CN107315470B (zh) 2017-05-25 2018-08-17 Tencent Technology (Shenzhen) Co., Ltd. Graphics processing method, processor and virtual reality system
US11158101B2 (en) * 2017-06-07 2021-10-26 Sony Interactive Entertainment Inc. Information processing system, information processing device, server device, image providing method and image generation method
CN107943287A (zh) 2017-11-16 2018-04-20 FiberHome Telecommunication Technologies Co., Ltd. System and method for resolving VR picture jitter based on an Android set-top-box system
CN108111839A (zh) 2017-12-22 2018-06-01 Beijing Qingwei Technology Co., Ltd. Streaming wireless virtual reality headset
US10534454B2 (en) * 2018-02-02 2020-01-14 Sony Interactive Entertainment Inc. Head-mounted display to controller clock synchronization over EM field
US10726765B2 (en) * 2018-02-15 2020-07-28 Valve Corporation Using tracking of display device to control image display

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130117377A1 (en) * 2011-10-28 2013-05-09 Samuel A. Miller System and Method for Augmented and Virtual Reality
CN107577045A * 2013-05-30 2018-01-12 Oculus VR, LLC Method, apparatus and storage medium for predictive tracking of a head-mounted display
CN106454322A * 2016-11-07 2017-02-22 Jinling Institute of Technology VR image processing system and method
CN106383596A * 2016-11-15 2017-02-08 Beijing Danghong Qitian International Cultural Development Group Co., Ltd. Virtual reality anti-dizziness system and method based on spatial positioning
CN106710002A * 2016-12-29 2017-05-24 Shenzhen Dilepu Digital Technology Co., Ltd. AR implementation method and system based on observer-viewpoint positioning
CN107024995A * 2017-06-05 2017-08-08 Hebei Maya Film & Television Co., Ltd. Multi-user virtual reality interaction system and control method
CN108052364A * 2017-12-13 2018-05-18 Shanghai Manheng Digital Technology Co., Ltd. Image display method, apparatus, device and storage medium based on remote operation

Also Published As

Publication number Publication date
US11500455B2 (en) 2022-11-15
US20210357020A1 (en) 2021-11-18

Similar Documents

Publication Publication Date Title
US10469820B2 (en) Streaming volumetric video for six degrees of freedom virtual reality
US11509825B2 (en) Image management system, image management method, and computer program product
CN111314724B (zh) Cloud game live-streaming method and apparatus
JP6961612B2 (ja) Three-dimensional model distribution method and three-dimensional model distribution device
CN110213616B (zh) Video providing and obtaining method, apparatus and device
WO2018177314A1 (zh) Panoramic image display control method, apparatus and storage medium
CN107888987B (zh) Panoramic video playing method and apparatus
US20120293613A1 (en) System and method for capturing and editing panoramic images
WO2019105274A1 (zh) Media content display method and apparatus, computing device and storage medium
US10652284B2 (en) Method and apparatus for session control support for field of view virtual reality streaming
US10684696B2 (en) Mechanism to enhance user experience of mobile devices through complex inputs from external displays
US11450053B1 (en) Efficient 5G transmission of volumetric data using 3D character rigging techniques
US10558261B1 (en) Sensor data compression
WO2018078986A1 (ja) Information processing device, information processing method, and program
US11500413B2 (en) Headset clock synchronization
US20240098344A1 (en) Video modification and transmission using tokens
CN111064981B (zh) Video streaming system and method
US20220311970A1 (en) Communication management device, image communication system, communication management method, and recording medium
WO2020078354A1 (zh) Video streaming system, video streaming method and apparatus
CN109766006B (zh) Display method, apparatus and device for virtual reality scenes
JP2016194783A (ja) Image management system, communication terminal, communication system, image management method, and program
JP5861684B2 (ja) Information processing device and program
CN111065053B (zh) Video streaming system and method
CN111064985A (zh) System, method and apparatus for implementing video streaming
US20150295783A1 (en) Method for real-time multimedia interface management sensor data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19873659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09.06.2021.)

122 Ep: pct application non-entry in european phase

Ref document number: 19873659

Country of ref document: EP

Kind code of ref document: A1