WO2020114297A1 - Method for controlling VR video playback and related apparatus - Google Patents

Method for controlling VR video playback and related apparatus

Info

Publication number
WO2020114297A1
WO2020114297A1 (PCT/CN2019/121439)
Authority
WO
WIPO (PCT)
Prior art keywords: video, jump, video image, input, icon
Application number
PCT/CN2019/121439
Other languages
English (en)
French (fr)
Inventor
蒯多慈
彭冰洁
何薇
吴亮
杨勇
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to EP19892508.3A (EP3873099A4)
Publication of WO2020114297A1
Priority to US17/338,261 (US11418857B2)

Classifications

    • H04N21/2393: Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/8455: Structuring of content, e.g. decomposing content into time segments, involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/04817: Interaction techniques based on graphical user interfaces [GUI] using icons
    • H04N21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2387: Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/4131: Peripherals receiving signals from specially adapted client devices, e.g. home appliance
    • H04N21/435: Processing of additional data, e.g. decrypting of additional data
    • H04N21/43615: Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H04N21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/44222: Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/4668: Learning process for intelligent management, e.g. learning user preferences for recommending content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/4725: End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H04N21/475: End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/8541: Content authoring involving branching, e.g. to different story endings
    • H04N21/8549: Creating video summaries, e.g. movie trailer

Definitions

  • This application relates to the field of virtual reality technology, in particular to a method and related device for controlling VR video playback.
  • Virtual reality (VR) technology is a computer simulation system through which a virtual world can be created and experienced. A computer generates a simulated environment, a system simulation of multi-source information fusion, interactive 3D dynamic scenes, and entity behavior, that immerses the user in that environment.
  • VR panoramic video, that is, VR 360-degree video, is a typical application scenario of VR technology.
  • In the prior art, a VR device can only play a video straight through from beginning to end, or fast-forward and rewind to a particular moment when the user drags the progress bar; it cannot switch the playback scene according to the user's interests and preferences, cannot interact with the user, and cannot provide personalized services.
  • Embodiments of the present invention provide a method and related device for controlling VR video playback, which can provide personalized services for users and improve the user viewing experience of VR videos.
  • an embodiment of the present invention provides a method for controlling VR video playback, which is executed by a video server device.
  • the method includes:
  • the video server device sends a first video image, which is one frame of a first video, to the terminal device, where the first video image includes a jump icon and the first video is the video that the video server device is playing for the terminal device;
  • the video server device receives, from the terminal device, an input for selecting the jump icon, and obtains, based on the input, the jump time of the jump target video corresponding to the jump icon; the video server device then sends a second video image to the terminal device, where the second video image is a frame of the jump target video corresponding to the jump time.
  • In this method, the VR video image includes a jump icon, which prompts the user, while watching, that a video jump is available at the icon's location.
  • Based on the video content being viewed, the user can form an input by selecting the jump icon; the terminal device sends the input to the video server device, and the video server completes the video jump, quickly jumping the video to a scene the user is interested in.
  • This method provides personalized services for users, forms a viewing interaction with them, and improves the user experience of VR videos.
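The server-side flow described above (send an image containing an icon, receive a selection input, look up the corresponding jump time) can be sketched as follows. All class and field names here are illustrative assumptions for the sketch, not terms defined by this publication.

```python
class JumpIcon:
    """One jump icon embedded in a video image (illustrative shape)."""
    def __init__(self, icon_id, position, jump_time):
        self.icon_id = icon_id      # identifier of the icon
        self.position = position    # (yaw, pitch) placement in the panorama
        self.jump_time = jump_time  # target playback time, in seconds

class VideoServer:
    """Minimal sketch of the video server's jump handling."""
    def __init__(self, icons):
        # icon_id -> JumpIcon, e.g. loaded from pre-authored jump information
        self.icons = {i.icon_id: i for i in icons}

    def handle_selection(self, icon_id):
        """Given a selection input, return the jump time for that icon."""
        icon = self.icons.get(icon_id)
        if icon is None:
            return None             # the input did not select a known icon
        return icon.jump_time       # server then sends the frame at this time
```

In this sketch the server would next fetch the frame of the jump target video at the returned time and send it to the terminal device.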
  • Receiving the input for selecting the jump icon sent by the terminal device specifically includes: receiving the input position information of the input sent by the terminal device, and determining, according to the input position information, that the jump icon is selected by the input.
  • The method determines which jump icon the user selected based on the input position information and obtains the corresponding jump time, which ensures that the video server device performs the video jump based on the user's selection and forms a viewing interaction with the user.
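One plausible way to decide, from the input position information, whether a jump icon was selected is an angular hit-test on the panoramic sphere. The (yaw, pitch) representation and the 5-degree selection tolerance below are assumptions for illustration; the publication does not specify a concrete test.

```python
import math

def icon_hit(input_pos, icon_pos, radius_deg=5.0):
    """Return True if the input position falls within the icon's angular radius.

    Positions are (yaw, pitch) in degrees on the panoramic sphere; the
    tolerance radius is an assumed value, not taken from the publication.
    """
    # Wrap the yaw difference into [-180, 180) so that 179 vs -179 is close.
    dyaw = (input_pos[0] - icon_pos[0] + 180.0) % 360.0 - 180.0
    dpitch = input_pos[1] - icon_pos[1]
    return math.hypot(dyaw, dpitch) <= radius_deg
```

The server (or terminal) would run this test against each icon rendered in the current frame and treat the first hit as the selected jump icon.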
  • Before sending the first video image to the terminal device, the method further includes: the video server device rendering the jump icon at the jump position information of the first video image, where the first video image is a frame of video image corresponding to a video frame identifier in the first video.
  • This method displays the corresponding jump icon at the corresponding position in the video image according to preset jump position information. The user can therefore decide whether to perform the video jump based on the video content near the jump icon's location, which makes the video jump more intuitive and effective.
  • Before sending the first video image to the terminal device, the method further includes: the video server device rendering jump video prompt information at the jump position information of the first video image.
  • The jump video prompt information prompts the user about the video content after the jump, and it may be video image information or a text description of that content.
  • This method displays the corresponding jump video prompt information at the corresponding position in the video image according to preset jump video prompt information.
  • Because the prompt information tells the user what the video content after the jump will be, the user can conveniently choose whether to jump according to their own interests, which makes the video jump more intuitive and effective.
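Pre-authored jump information for one icon might bundle the frame identifier, the jump position, and the prompt information described above. The publication does not fix a concrete format, so the following shape is purely illustrative.

```python
# Illustrative, assumed shape of one pre-authored jump-information entry;
# field names and values are examples, not defined by the publication.
jump_info = {
    "frame_id": 1500,                                # video frame the icon appears in
    "jump_position": {"yaw": 30.0, "pitch": -10.0},  # where to render the icon
    "icon": "door.png",                              # icon asset to render
    "prompt": {                                      # jump video prompt information
        "type": "text",                              # or "image" for a thumbnail
        "content": "Jump to the backstage scene",
    },
    "target": {"video": "first", "jump_time": 812.0},
}
```

When preparing a frame whose identifier matches `frame_id`, the renderer would draw the icon and the prompt at `jump_position` before the frame is sent or displayed.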
  • Optionally, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video.
  • This method realizes a video jump within one video, that is, a jump between different scenes of the same video. It helps users jump to scenes of interest based on their own preferences and continue watching, without being constrained by the video's playback time sequence, which improves the viewing experience.
  • Optionally, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video.
  • Obtaining, based on the input, the jump time of the jump target video corresponding to the jump icon further includes: acquiring, based on the input, the playback address of the second video corresponding to the jump icon, and determining the second video image based on the playback address and the jump time of the second video.
  • This method realizes a video jump across different videos, that is, between scenes of different videos. It helps users jump to scenes of interest based on their own preferences and continue watching, without being constrained by the playback time sequence or by the video content currently being played.
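The cross-video case adds one lookup step: the icon's entry must be resolved to a playback address before seeking to the jump time. A minimal sketch, assuming a catalog mapping video identifiers to playback addresses (all names and URLs here are hypothetical):

```python
def resolve_jump(jump_entry, catalog):
    """Resolve a jump entry to (playback_address, jump_time).

    `jump_entry` is an assumed dict with a "target" field naming the jump
    target video and time; `catalog` maps video ids to playback addresses.
    """
    target = jump_entry["target"]
    address = catalog[target["video_id"]]   # playback address of the target video
    return address, target["jump_time"]

# Hypothetical usage: jump from the current video into a second video.
catalog = {
    "first": "https://example.com/first.mpd",
    "second": "https://example.com/second.mpd",
}
entry = {"target": {"video_id": "second", "jump_time": 45.0}}
address, t = resolve_jump(entry, catalog)
```

For a same-video jump the resolved address is simply the current video's own; only the jump time changes.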
  • an embodiment of the present invention provides a method for controlling VR video playback, which is executed by a terminal device.
  • The method includes: the terminal device plays a first video image, which is one frame of a first video, and the first video image includes a jump icon, where the first video is the video being played by the terminal device; the terminal device then receives an input for selecting the jump icon and obtains, based on the input, jump information corresponding to the jump icon, where the jump information includes a jump time of the jump target video; the terminal device then plays a second video image, which is a video image of the jump target video at the jump time.
  • In this method, the VR video image includes a jump icon, which prompts the user, while watching, that a video jump is available at the icon's location.
  • Based on the video content being viewed, the user can form an input by selecting the jump icon; after receiving the input, the terminal device completes the video jump, quickly jumping the video to a scene the user is interested in.
  • This method provides personalized services for users, forms a viewing interaction with them, and improves the user experience of VR videos.
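In this second aspect the terminal device holds the jump information itself and performs the seek locally, without a round-trip to the server. A minimal sketch of that client-side flow, with all names assumed for illustration:

```python
class VRPlayer:
    """Illustrative terminal-side player that performs jumps locally."""
    def __init__(self, jump_table):
        self.jump_table = jump_table  # icon_id -> jump time in seconds
        self.position = 0.0           # current playback time

    def on_icon_selected(self, icon_id):
        """Seek to the jump time if the selected icon is known."""
        t = self.jump_table.get(icon_id)
        if t is not None:
            self.position = t         # next rendered frame is at time t
        return self.position
```

The contrast with the first aspect is where the jump logic runs: here the terminal both stores the jump table and executes the seek, so only the video stream itself needs to come from the server.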
  • Receiving the input for selecting the jump icon specifically includes:
  • the terminal device receives the input, obtains the input position information of the input in the first video image, and determines, according to the received input position information, that the jump icon is selected by the input.
  • The method determines which jump icon the user selected based on the input position information and obtains the corresponding jump information, which ensures that the terminal device performs the video jump based on the user's selection and forms a viewing interaction with the user.
  • Before playing the first video image, the method further includes: the terminal device rendering the jump icon at the jump position of the first video image, where the first video image is a frame of video image corresponding to a video frame identifier in the first video.
  • This method displays the corresponding jump icon at the corresponding position in the video image according to preset jump position information. The user can therefore decide whether to perform the video jump based on the video content near the jump icon's location, which makes the video jump more intuitive and effective.
  • Before playing the first video image, the method further includes: the terminal device rendering jump video prompt information at the jump position of the first video image.
  • The jump video prompt information prompts the user about the video content after the jump, and it may be video image information or a text description of that content.
  • This method displays the corresponding jump video prompt information at the corresponding position in the video image according to preset jump video prompt information.
  • Because the prompt information tells the user what the video content after the jump will be, the user can conveniently choose whether to jump according to their own interests, which makes the video jump more intuitive and effective.
  • Optionally, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video.
  • This method realizes a video jump within one video, that is, a jump between different scenes of the same video. It helps users jump to scenes of interest based on their own preferences and continue watching, without being constrained by the video's playback time sequence, which improves the viewing experience.
  • Optionally, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video.
  • Obtaining, based on the input, the jump time of the jump target video corresponding to the jump icon further includes: acquiring, based on the input, the playback address of the second video corresponding to the jump icon, and determining the second video image based on the playback address and the jump time of the second video.
  • This method realizes a video jump across different videos, that is, between scenes of different videos. It helps users jump to scenes of interest based on their own preferences and continue watching, without being constrained by the playback time sequence or by the video content currently being played.
  • an embodiment of the present invention provides a video server device.
  • the device has a function to realize the behavior in the method example of the first aspect described above.
  • The function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • The structure of the video server device includes a sending module, a receiving module, and an acquiring module, and these modules can perform the corresponding functions in the method example of the first aspect described above. For details, see the detailed description in the method example; they are not repeated here.
  • an embodiment of the present invention provides a terminal device.
  • the device has a function to realize the behavior in the method example of the second aspect.
  • The function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • The structure of the terminal device includes a playback module, a reception module, and an acquisition module, and these modules can perform the corresponding functions in the method example of the second aspect described above. For details, see the detailed description in the method example; they are not repeated here.
  • an embodiment of the present invention further provides a video server device.
  • the structure of the device includes a processor, and may further include a transceiver or a memory.
  • the processor is configured to support the video server device to perform the corresponding function in the method of the first aspect.
  • the memory is coupled to the processor, which stores necessary program instructions and data of the device.
  • the transceiver is used to communicate with other devices.
  • an embodiment of the present invention also provides a terminal device.
  • the structure of the device includes a processor, and may further include a transceiver or a memory.
  • the processor is configured to support the terminal device to perform the corresponding function in the method of the second aspect.
  • the memory is coupled to the processor, which stores necessary program instructions and data of the device.
  • the transceiver is used to communicate with other devices.
  • the present invention also provides a computer-readable storage medium, where the computer storage medium includes a set of program codes for performing the method described in any implementation manner of the first aspect of the embodiments of the present invention.
  • the present invention also provides a computer-readable storage medium, where the computer storage medium includes a set of program codes for performing the method described in any implementation manner of the second aspect of the embodiments of the present invention.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for creating jump information according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for controlling VR video playback according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of another method for controlling VR video playback according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of the composition of a video server device provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of the composition of a terminal device according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another composition of a video server device according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another composition of a terminal device according to an embodiment of the present invention.
  • The embodiments of the present invention provide a method and related device for controlling VR video playback, which are used to solve the prior-art problem that a VR video cannot switch the playback scene according to the user's preferences during playback and cannot interact with the user.
  • FIG. 1 is a schematic diagram of a system architecture provided in an embodiment of the present invention.
  • The typical application scenario includes a video processing device, a video server device, and a terminal device.
  • the video processing device may be a computer device, and the video processing device may have strong video processing functions and data calculation functions, for example, it may extract position information in a video image where auxiliary tools in the video are located.
  • the video processing device may process the recorded video to generate VR video, and make jump information.
  • the VR video and jump information processed by the video processing device can be uploaded to the video server device, and the video server device can control the playback of the video, or the terminal device can download and control the playback of the video.
  • the video server device can be a local high-performance host or a remote server deployed in the cloud.
  • the video server device can have strong image processing functions and data calculation functions, such as rendering operations and logical operation functions; the video server device can be an ultra-multi-core server, a computer deployed with a graphics processing unit (GPU) cluster, a large distributed computer, a clustered computer with pooled hardware resources, etc.
  • the video server device may render a jump icon at a corresponding position in the video image according to the jump information, may respond to an input of the user selecting the jump icon, and play a target jump video to the user.
  • the terminal device may be a device worn on the user's head, such as VR glasses or a VR helmet, and may also include devices worn on other parts of the user's body, such as devices worn on the user's hands, elbows, feet, and knees, for example, gamepads.
  • the terminal device can display the video image of the VR video to the user through the display.
  • the terminal device can locally save the data of the VR video and the jump information, can render the jump icon at the corresponding position in the video image according to the jump information, can respond to the input of the user selecting the jump icon, and play it to the user Target jump video.
  • the terminal device may not save the data of the VR video and jump information locally, but store the relevant data in the video server device.
  • in this case, the terminal device displays the video image of the VR video and sends the user's input information to the video server device.
  • Jump information is the collection of information required in the first video to jump to the jump target video, and may include the video frame identifier (Identifier, ID) of the video image of the first video, jump position information, and jump time.
  • when the jump target video is not the first video, the jump information also includes the playback address of the second video; it may also include jump video prompt information and so on.
  • the jump information is an exemplary collection; the video frame identifier, jump position information, jump time, and other information included in the jump information may or may not appear in the form of a collection, which is not limited in the embodiment of the present invention.
  • the first video is an exemplary name used to indicate the currently playing video; the jump target video is the video played after the jump, which may be the first video or another video; the second video is also an exemplary name, used to represent a video different from the first video.
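The fields described above can be gathered into a single record. As a minimal sketch (in Python, with hypothetical field names that are not taken from the embodiments), one piece of jump information might be modeled as:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class JumpInfo:
    # Video frame identifier (ID) of the video image of the first video.
    video_frame_id: int
    # Jump position information within that frame, e.g. (x, y, z) coordinates.
    jump_position: Tuple[float, float, float]
    # Jump time: the playback time point in the jump target video.
    jump_time: float
    # Playback address of the second video; None when the jump target is the first video itself.
    second_video_address: Optional[str] = None
    # Optional jump video prompt information (image reference or text description).
    prompt: Optional[str] = None

# A jump within the first video: no second-video address is needed.
info = JumpInfo(video_frame_id=120, jump_position=(1.0, 2.0, 0.5), jump_time=35.0)
assert info.second_video_address is None
```

Whether such records are stored together as one collection or kept separately is, as the text notes, not limited by the embodiments.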
  • the video processing device can produce jump information. Specific methods include:
  • when recording a video, an auxiliary tool is added, where the video is the first video or the jump target video.
  • the method of adding an auxiliary tool may be to add a positioning tool on the camera position of the first video and the jump target video, such as a high-precision GPS locator and a gyroscope.
  • the positioning tool can periodically generate video frame identifiers and corresponding camera position information captured by the camera.
  • the method of adding an auxiliary tool may also be to put an auxiliary object that is easy to be identified by the program, such as an auxiliary object of a specific shape or a specific color, at a position where jump information needs to be set. Since multiple jump information can be set in a video image, easily distinguishable auxiliary items, such as red triangle cones and yellow triangle cones, can be placed, which can be used to distinguish different jump information during post-video processing.
  • the video processing device obtains the jump time of the jump target video.
  • jump video prompt information may be generated according to the video content that starts playing at the jump time of the jump target video.
  • the jump video prompt information is used to prompt the user of the video content after the jump, so that the user can choose whether to jump the video according to their own interests.
  • the jump video prompt information may be video image information, or text description information on the video content.
  • when the jump target video is the first video, the jump time is another playback time point of the first video; when the jump target video is not the first video, that is, when the jump is to play the second video, the jump time is a playback time point of the second video.
  • the embodiments of the present invention are described by taking the jump in the first video to the jump time of the jump target video as an example, which does not constitute a limitation on the embodiments of the present invention.
  • the video processing device acquires the video frame identifier of the video image in the first video and the jump position information in the video image according to the auxiliary tool.
  • the positioning tool can be used to obtain the video frame identifier of the first video image generated when the camera records the first video and the corresponding first camera position information, or to obtain the video frame identifier of the second video image generated when recording the jump target video and the corresponding target camera position information; the first camera position information and the target camera position information are processed to obtain the jump position information. For example, the relative position information obtained by subtracting the two pieces of position information may be used as the jump position information.
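The subtraction step could be sketched as follows (a toy illustration; the coordinate values are invented):

```python
def relative_position(first_cam, target_cam):
    # Subtract the first camera position from the target camera position
    # component-wise; the result may serve as the jump position information.
    return tuple(t - f for f, t in zip(first_cam, target_cam))

# Invented positions: camera at (10, 4, 0) when recording the first video,
# camera at (12, 7, 1) when recording the jump target video.
assert relative_position((10.0, 4.0, 0.0), (12.0, 7.0, 1.0)) == (2.0, 3.0, 1.0)
```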
  • the second video image may be a video image of a jump time from the first video image of the first video to the jump target video.
  • when the auxiliary tool added in step 201 is an auxiliary item that is easy for a program to identify, the characteristic image is extracted according to the characteristics of the auxiliary item; an image recognition program is then used to identify the characteristic image in the first video image of the first video, the position information of the characteristic image in the first video image, that is, the corresponding jump position information, is calculated, and the video frame identifier of the first video image is recorded at the same time.
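One very simplified way to locate such an auxiliary item is to scan the frame for pixels of the marker's color and take their centroid as the jump position. The sketch below represents frames as 2D grids of color labels, a deliberate simplification of real image recognition, so it is only illustrative:

```python
def find_marker(frame, marker_color):
    # Collect coordinates of all pixels matching the marker color
    # (e.g. a red triangle cone placed at the desired jump position).
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, color in enumerate(row)
            if color == marker_color]
    if not hits:
        return None  # marker absent in this frame
    n = len(hits)
    # Centroid of the marker pixels, used as the jump position information.
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

frame = [["bg", "red", "red"],
         ["bg", "red", "bg"],
         ["bg", "bg", "bg"]]
assert find_marker(frame, "red") == (4 / 3, 1 / 3)
assert find_marker(frame, "yellow") is None
```

Searching separately for each distinguishable color (red cones, yellow cones, and so on) would yield one position per piece of jump information, matching the text's use of easily distinguished auxiliary items.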
  • the video processing device matches the video frame identifier, the jump position information, and the jump time of the jump target video to generate jump information.
  • the video frame identifier and the corresponding jump position information can be matched when they are recorded in step 203, and then matched with the jump time of the jump target video. The matching here considers at which position information of which frame of video image of the first video, and from which jump time of which jump target video, playback continues.
  • the second video image is related to the video content at the jump position information of the first video image.
  • for example, the video content at the jump position information (x1, y1, z1) in the M-th frame video image of the first video is a door, the exhibition hall entered through the door is the calligraphy and painting exhibition hall, and the video scene played at the jump time t of the jump target video is the calligraphy and painting exhibition hall; then the video frame identifier of the M-th frame video image of the first video and the corresponding jump position information (x1, y1, z1) are matched with the jump time t of the jump target video to generate the jump information.
  • after the jump information is generated, it is stored together with the first video; that is, the video data of a video is stored as two associated data files. It should be noted that in the embodiment of the present invention, the stored file is described as the jump information and the first video; the file may be a single file combining the jump information and the first video data, or may be two related files, which is not limited in the embodiment of the present invention.
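As one possible realization of the "two related files" option, the jump information could be written to a sidecar file named after the video file. The naming scheme and record fields below are assumptions for illustration, not part of the embodiments:

```python
import json
import os
import tempfile

def save_jump_info(video_path, jump_infos):
    # Store jump information in a file associated with the first video.
    sidecar = video_path + ".jumps.json"
    with open(sidecar, "w") as f:
        json.dump(jump_infos, f)
    return sidecar

def load_jump_info(video_path):
    # Retrieve the jump information associated with a video.
    with open(video_path + ".jumps.json") as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    video = os.path.join(d, "first_video.mp4")
    records = [{"frame_id": 120, "position": [1.0, 2.0, 0.5], "jump_time": 35.0}]
    save_jump_info(video, records)
    loaded = load_jump_info(video)
assert loaded == records
```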
  • FIG. 3 is a schematic flowchart of a method for controlling VR video playback according to an embodiment of the present invention.
  • the video server device plays the video image of the first video to the terminal device, and the jump position of the video image includes a jump icon; when the user selects the jump icon, the video server device obtains the jump time of the corresponding target jump video, and plays the target jump video to the terminal device from the jump time.
  • the method includes:
  • the terminal device sends a first video playback request to the video server device, including the first video playback address.
  • the user orders the first video through the terminal device, and the terminal device sends the first video playback request to the video server device after receiving the input.
  • the video server device obtains the first video and the jump information according to the first video playback address.
  • the video server device receives the first video playback request sent by the terminal device, and obtains the first video and the jump information according to the first video playback address therein.
  • the first video and the jump information are stored in the video server device, and the video server device may directly obtain it.
  • the video processing device uploads the processed first video and jump information to the video server.
  • the first video and the jump information are not in the video server device, and the video server device requests related resources from other devices according to the address.
  • the video server device renders the jump icon at the jump position of the first video image.
  • the first video and the jump information acquired by the video server device may be a combined file or two related files.
  • the video server device parses the acquired first video and jump information into video frame data and jump information data.
  • the video server device renders and generates a video image according to the video frame data and the jump information, and the video image includes a jump icon.
  • the first video image is a frame of video image in the first video.
  • the jump information of the first video image includes a video frame identifier, and the first video image and the jump information can be associated according to the video frame identifier. It can be understood that the first video image is a frame of video image corresponding to the video frame identifier.
  • the jump information of the first video image further includes jump position information, and a jump icon can be rendered at the jump position of the first video image according to the jump position information.
  • the jump information may further include jump video prompt information, and according to the jump position information, the jump video prompt information may also be rendered at the jump position of the first video image.
  • for example, the first video image is the n-th frame video image, and the video frame identifier of the n-th frame video image may have multiple corresponding pieces of jump information.
  • one piece of jump information is taken as an example for description; it should be understood that when there are multiple pieces of jump information, the processing method for the other pieces of jump information is the same.
  • the jump video prompt information can also be obtained.
  • the jump video prompt information refer to step 202 in the embodiment shown in FIG. 2, for example, information of the calligraphy and painting exhibition hall.
  • the video server device renders and generates the n-th frame video image according to the video frame data and the information of the user's perspective, and renders a jump icon, such as a small arrow, at the corresponding jump position (x, y, z) in the n-th frame video image, which is used to prompt the user that a video jump can be performed interactively. Optionally, the video server device can also render the jump video prompt information at the corresponding jump position (x, y, z) in the n-th frame video image; optionally, the jump video prompt information can also be used as the jump icon.
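The per-frame rendering step might, in greatly simplified form, look up the jump information for the current frame identifier and overlay an icon at each jump position. Frames are again mocked as 2D grids; real rendering would composite the icon into the projected VR image:

```python
def render_frame(frame_id, frame_pixels, jump_table):
    rendered = [row[:] for row in frame_pixels]   # copy the decoded frame
    # A frame identifier may map to zero, one, or several jump infos.
    for info in jump_table.get(frame_id, []):
        x, y = info["position"]
        rendered[y][x] = "icon"                   # draw e.g. a small arrow here
    return rendered

jump_table = {7: [{"position": (1, 0), "jump_time": 35.0}]}
out = render_frame(7, [["bg", "bg"], ["bg", "bg"]], jump_table)
assert out[0][1] == "icon"   # icon rendered at the jump position
assert out[1][0] == "bg"     # rest of the frame untouched
```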
  • the video server device sends a frame of first video image of the first video to the terminal device, where the first video image includes a jump icon.
  • the video server device displays the first video image to the user through the display of the terminal device, and the first video image includes a jump icon.
  • the first video image includes jump video prompt information.
  • the video server device may animate the jump icon, for example, make the jump icon jump, highlight, etc., to better remind the user that video jump can be performed interactively here.
  • when a video server device plays a video through a terminal device, it may send video images to the terminal device frame by frame, or it may send multiple frames of video images at a time; no matter in what form the video server device sends video data to the terminal device, it can be understood as sending a video stream.
  • in the embodiment of the present invention, the video server device sending one frame of video image to the terminal device is described from the perspective of video display, and may include the above various specific sending forms.
  • the terminal device receives user input.
  • the user can select the jump icon in the first video image to jump to the corresponding video scene to play based on his preference.
  • the jump icon is in the video image, and the user can jump the video based on the video content in the video image, especially the video content where the jump icon is located.
  • the user can select the jump icon through a VR control device, such as an air mouse, ray gun, or handle, to form the input, or stare at the jump icon to form the input.
  • the terminal device receives user input and obtains input position information of the input.
  • the input position information may be the posture of the VR control device, the position in the video picture displayed on the terminal device determined according to the input of the VR control device, or the position (such as coordinates) of the user's gaze in the video picture displayed on the terminal device, etc.
  • the video server device when the video server device is playing video, if no user input is received, the VR video is played normally, which is not limited in this embodiment of the present invention.
  • the terminal device sends input to the video server device.
  • the terminal device sends the input to the video server device; specifically, the terminal device sends the input position information of the input to the video server device. Optionally, the terminal device may also send the input video frame identifier to the video server device, where the input video frame identifier is the video frame identifier corresponding to the third video image being displayed by the terminal device when the user clicks to form the input.
  • the third video image may be the same frame of video image as the first video image, or may be two frames of video image with the same or similar content of the video image.
  • the video server device receives input for selecting the jump icon, and obtains the jump time of the jump target video corresponding to the jump icon based on the input.
  • the video server device receives the input sent by the terminal device, specifically, it may receive input position information of the input, and determine that the jump icon is selected by the input according to the input position information. Specific methods include:
  • the user sees the video image with the jump icon displayed on the terminal device, which may be the first video image; after consideration, the user selects the jump icon on the terminal device based on the video content, his preference, or the jump video prompt information, at which time the terminal device may be displaying the third video image.
  • the terminal device receives the user's input and sends the input position information to the video server device; the video image being processed when the video server device receives the input position information may be the fourth video image.
  • the first video image, the third video image, and the fourth video image may be the same frame of video image (scenario one), may be several frames of video images with the same or similar video content (scenario two), or may happen to fall at the time point when the shot is switched, so that the content of the video images differs greatly (scenario three).
  • the video server device generates relative position information of the fourth video image according to the received input position information, which may be coordinate information obtained by converting into the coordinate axis of the fourth video image.
  • the video server device compares the relative position information with the jump position information in the jump information of the fourth video image. If the distance difference between the two pieces of position information is outside the acceptable range, the user has not clicked on the effective interactive area, which may be a misoperation by the user; the video server device can continue to play the video without an interactive response.
  • if the distance difference between the two pieces of position information is within the acceptable range, the video server device determines that the jump icon corresponding to the jump information is selected by the user's input, and obtains the jump information to obtain the jump time of the jump target video.
  • the video server device may also obtain a playback address of the second video.
  • the acceptable range represents a range in which a jump icon can interact with the user in the video image. It should be noted that when there are multiple pieces of jump information in the fourth video image, the jump position information therein may also be used to determine which jump icon is selected by the user.
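The acceptable-range check can be sketched as a nearest-icon hit test: compare the input's relative position against every jump position in the frame's jump information and accept the closest one within a threshold. The threshold value and record fields here are assumptions for illustration:

```python
import math

def select_jump(relative_pos, jump_infos, max_distance):
    # Return the jump info whose position is closest to the input position,
    # provided it lies within the acceptable range; None means the user
    # clicked outside every interactive area (possibly a misoperation).
    best, best_d = None, max_distance
    for info in jump_infos:
        d = math.dist(relative_pos, info["position"])
        if d <= best_d:
            best, best_d = info, d
    return best

infos = [{"position": (1.0, 2.0, 0.5), "jump_time": 35.0},
         {"position": (9.0, 9.0, 9.0), "jump_time": 80.0}]
assert select_jump((1.1, 2.0, 0.5), infos, max_distance=0.5)["jump_time"] == 35.0
assert select_jump((5.0, 5.0, 5.0), infos, max_distance=0.5) is None
```

Because the test iterates over all entries, it also covers the case where one frame carries several pieces of jump information.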
  • in scenario two, the video jumps indicated by the jump icons of the first video image, the third video image, and the fourth video image are the same, and they may be considered the same jump icon; from the perspective of the video server device, the jump target video and the jump time in the jump information of the first video image, the third video image, and the fourth video image are the same, so the jump information corresponding to the jump icon of the first video image or the third video image may also be the jump information of the fourth video image, and the video server device may still operate according to the method of scenario one.
  • in scenario three, the video jumps indicated by the jump icons of the first video image, the third video image, and the fourth video image are different, so the jump icon selected by the user cannot be matched with the jump information of the fourth video image. If the video server device cannot find the corresponding jump information in the third video image, it may continue to play the video without an interactive response, or give a prompt that the jump icon is not selected.
  • if the terminal device also sends the input video frame identifier, the video server device finds the corresponding third video image according to the video frame identifier, and then obtains the jump information according to the input position information to obtain the jump time of the jump target video. Specifically, the video server device generates the relative position information of the third video image according to the received input position information, which may be coordinate information obtained by conversion into the corresponding coordinate axis, and compares the relative position information with the jump position information in the jump information of the third video image.
  • if the distance difference between the two pieces of position information is within the acceptable range, the video server device determines that the user's input selects the jump icon corresponding to the jump information, and obtains the jump information to obtain the jump time of the jump target video.
  • the video server device may also obtain the playback address of the second video.
  • the probability of scenario three occurring may be extremely small: because the jump icon is used to prompt the user to perform a video jump, the user needs to be given time to find the jump icon and to consider whether to proceed with the video jump, so the jump icons of adjacent multiple frames of video images are the same.
  • the processing method of the video server device described in scenario 1 and scenario 2 can realize interaction with the user based on the video content.
  • the video server device sends a second video image to the terminal device, where the second video image is a frame of video image corresponding to the jump time of the jump target video.
  • the video server device may obtain data of the video image corresponding to the jump time, and render and generate a second video image. It should be understood that when the second video image has jump information, a corresponding jump icon is rendered in the second video image, and the method of controlling the VR video playback next is the same as the method from step 304 to step 308.
  • if the jump information further includes the playback address of the second video, the video server device may acquire the second video based on the playback address of the second video and, optionally, may also obtain the jump information of the second video.
  • the video server device may also determine the second video image corresponding to the jump time of the second video. Specifically, the video server device renders and generates the second video image, and when the second video image has corresponding jump information, further includes rendering a jump icon in the second video image.
  • the method of controlling the VR video playback is the same as the method from step 304 to step 308.
  • the video content of the second video image is related to the video content at the jump icon of the first video image.
  • that is, the second video image played from the jump time of the jump target video is related to the video content at the jump position information in the first video image of the first video, that is, to the video content at the jump icon in the first video image.
  • the naming of the first video image, the second video image, the first video, the second video, etc. is an example name taken to clearly explain the technical solution, and does not constitute a limitation on the embodiment of the present invention.
  • the VR video image includes a jump icon, which can prompt the user to jump to the video at the jump icon when the user views the video. Users can quickly jump to other scenes of interest by selecting the jump icon based on the video content they watched, combined with their own interests.
  • This method provides personalized services for users, forms a viewing interaction with users, and improves the user experience of VR videos.
  • FIG. 4 is a schematic flowchart of another method for controlling VR video playback according to an embodiment of the present invention.
  • the terminal device plays the video image of the first video, and the jump position of the video image includes a jump icon; when the user selects the jump icon, the terminal device obtains the jump time of the target jump video corresponding to the jump icon, and plays the target jump video from the jump time.
  • Steps 401 and 402 are the same as steps 301 and 302 in the embodiment shown in FIG. 3, and will not be repeated here.
  • the method also includes:
  • the video server device sends the first video and the jump information to the terminal device.
  • in this embodiment, the terminal device controls the playback of the VR video; therefore, the video server device sends the first video and the jump information to the terminal device. It should be noted that the first video and the jump information may also be stored in the terminal device, or may be stored in a storage device such as a DVD or hard disk; in that case, steps 401-403 need not be performed, and the terminal device directly obtains the first video and the jump information.
  • the terminal device renders a jump icon at the jump position of the first video image.
  • step 404 is the same as step 303 of the embodiment shown in FIG. 3, except that the main body of the method is changed from the video server device to the terminal device.
  • the first video and the jump information acquired by the terminal device may be a combined file or two related files.
  • the terminal device parses the acquired first video and jump information into video frame data and jump information data.
  • the terminal device renders a video image according to video frame data and jump information data, and the video image includes a jump icon.
  • the first video image is a frame of video image in the first video.
  • the jump information of the first video image includes a video frame identifier, and the first video image and the jump information can be associated according to the video frame identifier. It can be understood that the first video image is a frame of video image corresponding to the video frame identifier.
  • the jump information of the first video image further includes jump position information, and a jump icon can be rendered at the jump position of the first video image according to the jump position information.
  • the jump information may further include jump video prompt information, and according to the jump position information, the jump video prompt information may also be rendered at the jump position of the first video image.
  • for example, the first video image is the n-th frame video image, and the video frame identifier of the n-th frame video image may have multiple corresponding pieces of jump information.
  • one piece of jump information is taken as an example for description; it should be understood that when there are multiple pieces of jump information, the processing method for the other pieces of jump information is the same.
  • the jump video prompt information can also be obtained.
  • the jump video prompt information refer to step 202 in the embodiment shown in FIG. 2, for example, information of the calligraphy and painting exhibition hall.
  • the terminal device renders and generates the n-th frame video image according to the video frame data and the information of the user's perspective, and renders a jump icon, such as a small arrow, at the corresponding jump position (x, y, z) in the n-th frame video image, which is used to prompt the user that a video jump can be performed interactively. Optionally, the terminal device can also render the jump video prompt information at the corresponding jump position (x, y, z) in the n-th frame video image; optionally, the jump video prompt information is used as the jump icon.
  • the terminal device plays a frame of first video image of the first video, and the first video image includes a jump icon.
  • the terminal device displays the first video image to the user through the display, and the first video image includes a jump icon.
  • the first video image may further include jump video prompt information.
  • the terminal device can animate the jump icon, for example, make the jump icon jump, highlight, etc., to better remind the user that video jump can be performed interactively here.
  • the terminal device receives input for selecting the jump icon.
  • the user can click the jump icon in the first video image to jump to the corresponding video scene to play based on his own preference.
  • the jump icon is in the video image, and the user can jump the video based on the video content in the video image, especially the video content where the jump icon is located.
  • the user can select the jump icon through a VR control device, such as an air mouse, ray gun, or handle, to form the input, or stare at the jump icon to form the input.
  • the terminal device receives user input and obtains input position information of the input.
  • the input position information may be the posture of the VR control device, the position in the video picture displayed on the terminal device determined according to the input of the VR control device, or the position (such as coordinates) of the user's gaze in the video picture displayed on the terminal device, etc.
  • when the user sees the video image with the jump icon displayed on the terminal device, it may be the first video image; after consideration, the user selects the jump icon on the terminal device based on the video content, his own preferences, or the jump video prompt information, at which time the terminal device may be displaying the third video image.
  • the first video image and the third video image may be the same frame of video image (scenario one), may be two frames of video images with the same or similar video content (scenario two), or may happen to fall at the time point when the shot is switched, so that the content of the video images differs greatly (scenario three).
  • the terminal device generates relative position information of the third video image according to the received input position information, which may be coordinate information obtained by converting into the coordinate axis of the third video image.
  • the terminal device compares the relative position information with the jump position information in the jump information of the third video image. If the distance difference between the two pieces of position information is outside the acceptable range, the user has not clicked on the effective interaction area, which may be a misoperation by the user; the terminal device may continue to play the video without an interactive response, or give a prompt that the jump icon is not selected. If the distance difference between the two pieces of position information is within the acceptable range, the terminal device determines that the jump icon corresponding to the jump information is selected by the user's input.
  • the acceptable range represents a range in which a jump icon can interact with the user in the video image. It should be noted that when there are multiple pieces of jump information in the third video image, the jump position information therein may be used to determine which jump icon is selected by the user.
  • in scenario two, the video jumps indicated by the jump icons of the first video image and the third video image are the same, and they may be considered the same jump icon; from the perspective of the terminal device, the jump target video and the jump time in the jump information of the first video image and the third video image are the same, so the jump information corresponding to the jump icon of the first video image may also be the jump information of the third video image, and the terminal device can still determine whether the jump icon is selected by the user's input according to the method of scenario one.
  • in scenario three, the video jumps indicated by the jump icons of the first video image and the third video image are different, so the jump icon selected by the user and the jump information of the third video image cannot be matched. If the terminal device cannot find the corresponding jump information in the third video image, it may continue to play the video without an interactive response, or give a prompt that the jump icon is not selected.
  • the probability of scenario three occurring may be extremely small: because the jump icon is used to prompt the user to perform a video jump, the user needs to be given time to find the jump icon and to consider whether to proceed with the video jump, so the jump icons of adjacent multiple frames of video images are the same.
  • the method executed by the terminal device in scenario 1 and scenario 2 can realize interaction with the user based on the video content.
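Putting steps 405-408 together, a terminal-side control loop might take roughly this shape. This is a sketch only; the state fields, the injected hit-test function, and the 30 fps frame advance are assumptions for illustration:

```python
def playback_step(state, user_input, jump_table, hit_test):
    # One control step: if the input selects a jump icon on the frame being
    # displayed, switch to the jump target video at the jump time; otherwise
    # continue normal playback.
    if user_input is not None:
        info = hit_test(user_input, jump_table.get(state["frame_id"], []))
        if info is not None:
            state["video"] = info.get("second_video", state["video"])
            state["time"] = info["jump_time"]
            return state
    state["time"] += 1 / 30   # no jump selected: advance one frame at 30 fps
    return state

jump_table = {42: [{"position": (0, 0), "jump_time": 12.0, "second_video": "hall.mp4"}]}
state = {"video": "first.mp4", "time": 1.4, "frame_id": 42}
exact_hit = lambda pos, infos: next((i for i in infos if i["position"] == pos), None)
state = playback_step(state, (0, 0), jump_table, exact_hit)
assert (state["video"], state["time"]) == ("hall.mp4", 12.0)
```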
  • if no input selecting the jump icon is received, the VR video is played normally, which is not limited in this embodiment of the present invention.
  • the terminal device obtains the jump time of the jump target video corresponding to the jump icon based on the input.
  • the terminal device may obtain the jump information to obtain the jump time of the jump target video therein.
  • the terminal device may also obtain a playback address of the second video.
  • the terminal device plays a second video image, and the second video image is a frame of video image corresponding to the jump time of the jump target video.
  • the terminal device may obtain data of the video image corresponding to the jump time, and render and generate a second video image. It should be understood that when the second video image has jump information, the corresponding jump icon is rendered in the second video image, and the method of controlling the VR video playback next is the same as the method from step 405 to step 408.
  • Optionally, the terminal device may acquire the second video according to the playback address of the second video, and optionally may also obtain the jump information of the second video. Based on the jump time obtained in step 407, the terminal device may determine the second video image corresponding to the jump time of the second video. Specifically, the terminal device renders and generates the second video image, and when the second video image has corresponding jump information, this further includes rendering a jump icon in the second video image.
  • the method of controlling the VR video playback is the same as the method from step 405 to step 408.
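The two branches above — seeking to the jump time within the first video versus loading a second video at its playback address — can be sketched as a small dispatcher. The player interface (`load`/`seek`), the `FakePlayer` stub, and the field names are assumptions for illustration, not part of the embodiment:

```python
def perform_jump(player, jump_info, first_video_url):
    """Seek within the current video when the jump info carries no playback
    address; otherwise load the second video and seek to its jump time."""
    url = jump_info.get("target_url") or first_video_url
    if url != first_video_url:
        player.load(url)          # jump target is a second video
    player.seek(jump_info["jump_time"])
    return url

class FakePlayer:
    """Minimal stand-in for a VR video player, for demonstration only."""
    def __init__(self):
        self.url, self.t = None, 0.0
    def load(self, url):
        self.url = url
    def seek(self, t):
        self.t = t

p = FakePlayer()
perform_jump(p, {"jump_time": 30.0}, "first.mp4")
# p.t == 30.0 and p.url is still None: the first video is not reloaded
```

Jumping to a different video only differs in the extra `load` call before the seek, which matches the description of obtaining the playback address first and then the jump time.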
  • the video content of the second video image is related to the video content at the jump icon of the first video image.
  • That is, the second video image played from the jump time of the jump target video is related to the video content at the jump position information (the jump icon) in the first video image of the first video.
  • the naming of the first video image, the second video image, the first video, the second video, etc. is an example name taken to clearly explain the technical solution, and does not constitute a limitation on the embodiment of the present invention.
  • In the method for controlling VR video playback provided by this embodiment of the present invention, the terminal device controls the VR video playback. This method reduces the delay of the video jump and enhances the user's interactive viewing experience.
  • FIG. 5 is a schematic diagram of a composition of a video server device according to an embodiment of the present invention.
  • the video server equipment includes:
  • A sending module 501, configured to send a frame of first video image of the first video to the terminal device, the first video image including a jump icon, where the first video is the video being played by the video server device for the terminal device;
  • A receiving module 502, configured to receive an input, sent by the terminal device, for selecting the jump icon. For a specific execution process, refer to the step description in the embodiment shown in FIG. 3, such as steps 305-307;
  • The receiving module 502 receives the input for selecting the jump icon sent by the terminal device; a specific method may include: a receiving submodule 5021, configured to receive the input position information sent by the terminal device; and a determining submodule 5022, configured to determine, according to the input position information, that the jump icon is selected by the input.
  • a specific execution process refer to the step description in the embodiment shown in FIG. 3, such as step 307.
  • the obtaining module 503 is used to obtain the jump time of the jump target video corresponding to the jump icon based on the input.
  • The sending module 501 is further configured to send a second video image to the terminal device, where the second video image is a frame of video image corresponding to the jump time of the jump target video. For the specific execution process, refer to the step description in the embodiment shown in FIG. 3, such as step 308.
  • Optionally, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video; that is, the jump is performed within the first video being watched. For the specific execution process, refer to the step description in the embodiment shown in FIG. 3, such as step 308.
  • Optionally, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video. In this case, obtaining the jump time of the jump target video further includes: the obtaining module 503 is further configured to obtain, based on the input, the playback address of the second video corresponding to the jump icon; and a determining module 504 is configured to determine the second video image based on the playback address of the second video and the jump time of the second video. That is, playback jumps to a second video different from the first video being played. For the specific execution process, refer to the step description in the embodiment shown in FIG. 3, such as step 308.
  • the video content of the second video image is related to the video content at the jump icon of the first video image.
  • Optionally, before the sending module 501 sends the first video image to the terminal device, the device further includes: a rendering module 505, configured to render the jump icon at the jump position of the first video image, where the first video image is the video image corresponding to the video frame identifier in the first video.
  • Optionally, the method further includes: the rendering module 505 is further configured to render jump video prompt information at the jump position of the first video image.
  • the jump video prompt information is used to prompt the user of the video content after the jump, and the jump video prompt information may be video image information or text description information on the video content.
  • The video server device provided by this embodiment of the present invention can be used for the method for controlling VR video playback; for the technical effects that can be obtained, refer to the foregoing method embodiments, and details are not described here again.
  • FIG. 6 is a schematic diagram of a composition of a terminal device according to an embodiment of the present invention.
  • the terminal equipment includes:
  • A playing module 601, configured to play a frame of first video image of the first video, the first video image including a jump icon.
  • the first video is a video being played by the terminal device.
  • the receiving module 602 is used to receive an input for selecting the jump icon.
  • For a specific execution process, refer to the step description in the embodiment shown in FIG. 4, such as step 406;
  • The receiving module 602 receives the input for selecting the jump icon, which specifically includes: a receiving submodule 6021, configured to receive the input; an obtaining submodule 6022, configured to obtain the input position information of the input in the first video image; and a determining submodule 6023, configured to determine, according to the input position information, that the jump icon is selected by the input.
  • An obtaining module 603, configured to obtain, based on the input, the jump time of the jump target video corresponding to the jump icon. For the specific execution process, refer to the step description in the embodiment shown in FIG. 4, such as step 407;
  • The playing module 601 is further configured to play a second video image, where the second video image is a frame of video image corresponding to the jump time of the jump target video. For the specific execution process, refer to the step description in the embodiment shown in FIG. 4, such as step 408.
  • Optionally, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video; that is, the video jump is performed within the first video being watched.
  • Optionally, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video. In this case, obtaining, based on the input, the jump time of the jump target video corresponding to the jump icon further includes: the obtaining module 603 is further configured to obtain, based on the input, the playback address of the second video corresponding to the jump icon; and a determining module 604 is configured to determine the second video image based on the playback address of the second video and the jump time of the second video. That is, playback jumps to a second video different from the first video being played. For the specific execution process, refer to the step description in the embodiment shown in FIG. 4, such as step 408.
  • the video content of the second video image is related to the video content at the jump icon of the first video image.
  • For the specific execution process, refer to the step description in the embodiment shown in FIG. 4, such as step 408.
  • Optionally, before the playing module 601 plays the first video image, the device further includes: a rendering module 605, configured to render the jump icon at the jump position of the first video image, where the first video image is a frame of video image corresponding to the video frame identifier in the first video.
  • Optionally, the method further includes: the rendering module 605 is further configured to render jump video prompt information at the jump position of the first video image.
  • the jump video prompt information is used to prompt the user of the video content after the jump, and the jump video prompt information may be video image information or text description information on the video content.
  • the terminal device provided by the embodiment of the present invention can be used for a method for controlling VR video playback, the technical effects that can be obtained can refer to the foregoing method embodiments, and details are not described herein again.
  • FIG. 7 is a schematic diagram of another composition of a video server device provided in an embodiment of the present invention.
  • The video server device includes at least one processor 701 and a transceiver 702, and optionally may also include a memory 703.
  • the memory 703 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory, such as a read-only memory, flash memory, hard disk drive (HDD), or solid-state drive (solid-state drive, SSD), or the memory 703 is any other medium that can be used to carry or store a desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory 703 may be a combination of the aforementioned memories.
  • the specific connection medium between the processor 701 and the memory 703 is not limited.
  • In FIG. 7, the memory 703 and the processor 701 are connected by a bus 704, indicated by a thick line; the connections between other components are only schematic and are not limiting. The bus 704 can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one thick line is used in FIG. 7, but this does not mean that there is only one bus or one type of bus.
  • The processor 701 may have a data transceiving function and can communicate with other devices. As in this embodiment of the present invention, the processor 701 may send a video image to the terminal device, or may receive input position information from the terminal device. In the video server device shown in FIG. 7, an independent data transceiving module, such as the transceiver 702, may also be provided for transceiving data; when the processor 701 communicates with other devices, data may also be transmitted through the transceiver 702. As in this embodiment of the present invention, the processor 701 may send a video image to the terminal device through the transceiver 702, or may receive input position information from the terminal device through the transceiver 702.
  • The processor 701 may have a video image rendering function, for example, rendering a jump icon in a video image; the processor 701 may also display a video image including a jump icon to the user through the terminal device; the processor 701 may also read the jump time in the memory 703 and perform the corresponding video jump.
  • the function of the transceiver 702 may be implemented through a transceiver circuit or a dedicated chip for transceiver.
  • the processor 701 may be realized by a dedicated processing chip, a processing circuit, a processor, or a general-purpose chip.
  • The processor 701 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the video server device may be considered to use a general-purpose computer to implement the video server device provided by the embodiment of the present invention.
  • the program codes that will realize the functions of the processor 701 and the transceiver 702 are stored in the memory 703, and the general processor implements the functions of the processor 701 and the transceiver 702 by executing the codes in the memory 703.
  • The processor 701 in FIG. 7 may call the computer-executable instructions stored in the memory 703, so that the video server device can execute the method performed by the video server device in the foregoing method embodiments.
  • FIG. 8 is a schematic diagram of another composition of a terminal device provided in an embodiment of the present invention.
  • The terminal device includes at least one processor 801 and a transceiver 802, and optionally may also include a memory 803.
  • The device 800 may further include a display 804, configured to display a video image including a jump icon to the user; the device may further include a sensor 805, configured to capture the user's gaze or acquire the input by which the user selects the jump icon.
  • The sensor 805 can also be embodied as a device with an input function, such as a mouse, an air mouse, a ray gun, or a handle.
  • the memory 803 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD), or a solid-state drive (solid-state drive, SSD), or the memory 803 is any other medium that can be used to carry or store a desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory 803 may be a combination of the aforementioned memories.
  • the specific connection medium between the processor 801 and the memory 803 is not limited.
  • In FIG. 8, the memory 803 and the processor 801 are connected by a bus 806, indicated by a thick line; the connections between other components are only schematic and are not limiting. The bus 806 can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one thick line is used in FIG. 8, but this does not mean that there is only one bus or one type of bus.
  • the processor 801 may have a data transmission and reception function and can communicate with other devices. As in the embodiment of the present invention, the processor 801 may request video data and jump information from the video server device, and receive data sent by the video server device. In the terminal device shown in FIG. 8, an independent data transceiving module may also be provided, such as a transceiver 802 for transceiving data; when the processor 801 communicates with other devices, data transmission may also be performed through the transceiver 802.
  • The processor 801 may have a video image rendering function, such as rendering a jump icon in the video image; the processor 801 may also display a video image including the jump icon to the user by controlling the display 804; the processor 801 may also receive an input from the sensor 805 and obtain the input position information; the processor 801 can also read the jump time in the memory 803 and perform the corresponding video jump.
  • the function of the transceiver 802 may be implemented through a transceiver circuit or a dedicated chip for transceiver.
  • the processor 801 may be realized by a dedicated processing chip, a processing circuit, a processor, or a general-purpose chip.
  • The processor 801 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the terminal device may be considered to use a general-purpose computer to implement the terminal device provided by the embodiment of the present invention.
  • the program codes to implement the functions of the processor 801 and the transceiver 802 are stored in the memory 803, and the general processor implements the functions of the processor 801 and the transceiver 802 by executing the codes in the memory 803.
  • The processor 801 in FIG. 8 may call the computer-executable instructions stored in the memory 803, so that the terminal device can execute the method performed by the terminal device in the foregoing method embodiments.
  • the disclosed system, device, and method may be implemented in other ways.
  • The device embodiments described above are only schematic. The division of the modules is only a division of logical functions; in actual implementation there may be other divisions. For example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or modules, and may be in electrical, mechanical, or other forms.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, Solid State Disk (SSD)) or the like.


Abstract

Embodiments of the present invention disclose a method for controlling VR video playback and a related apparatus. In this method, a video server device renders a jump icon in a video image; when the user wants to perform a video jump based on the video content, the user can select the jump icon corresponding to a scene of interest to form an input. Based on the user's input, the video server obtains the jump time of the jump target video corresponding to the jump icon and plays the jump target video for the user starting from that jump time. The jump target video may be the video currently being played, or another video. The embodiments of the present invention also disclose the above method for controlling VR video playback performed by a terminal device. The method can switch videos according to the user's interests and preferences, provide personalized services for the user, form viewing interaction with the user, and improve the user experience of VR video.

Description

A method for controlling VR video playback and a related apparatus
This application claims priority to Chinese Patent Application No. CN201811475821.5, filed with the China National Intellectual Property Administration on December 4, 2018 and entitled "Method for Controlling VR Video Playback and Related Apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of virtual reality technology, and in particular, to a method for controlling VR video playback and a related apparatus.
Background
Virtual Reality (VR) technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment, providing an interactive three-dimensional dynamic scene with multi-source information fusion and a system simulation of entity behavior, so that the user is immersed in that environment. VR panoramic video (VR 360-degree video) is a typical application scenario of VR technology.
During VR panoramic video playback, a 360-degree panoramic picture can be presented to the user. However, during playback, a VR device can only play the video straight through from beginning to end, or fast-forward or rewind to a certain moment when the user drags the progress bar. It cannot switch the played scene according to the user's interests and preferences, and thus cannot interact with the user or provide personalized services.
Summary of the Invention
Embodiments of the present invention provide a method for controlling VR video playback and a related apparatus, which can provide personalized services for users and improve the viewing experience of VR video.
In a first aspect, an embodiment of the present invention provides a method for controlling VR video playback, performed by a video server device. The method includes:
the video server device sends a frame of first video image of a first video to a terminal device, where the first video image contains a jump icon, and the first video is the video that the video server device is playing for the terminal device;
then, the video server device receives an input, sent by the terminal device, for selecting the jump icon, and obtains, based on the input, the jump time of the jump target video corresponding to the jump icon; next, the video server device sends a second video image to the terminal device, where the second video image is a frame of video image corresponding to the jump time of the jump target video.
In this method, the VR video image contains a jump icon, which can prompt the user, while watching the video, that a video jump can be performed at the jump icon. Based on the video content being watched, the user can select the jump icon to form an input; the terminal device sends the input to the video server device, which completes the video jump, quickly jumping the video to the scene the user is interested in. This method provides personalized services for the user, forms viewing interaction with the user, and improves the user experience of VR video.
In a possible solution, receiving the input, sent by the terminal device, for selecting the jump icon specifically includes: receiving input position information of the input sent by the terminal device, and determining, according to the input position information, that the input selects the jump icon.
This method determines the jump icon selected by the user based on the input position information of the user's input and obtains the corresponding jump time, which ensures that the video server device performs the video jump based on the user's selection, forming viewing interaction with the user.
In a possible solution, before sending the first video image to the terminal device, the method further includes: the video server device renders the jump icon at the jump position information of the first video image, where the first video image is a frame of video image corresponding to a video frame identifier in the first video.
This method can display the corresponding jump icon at the corresponding position in the video image according to preset jump position information. The user can therefore judge, based on the video content near the position of the jump icon in the video image, whether to perform a video jump here, which makes the video jump more intuitive and effective.
In a possible solution, before sending the first video image to the terminal device, the method further includes: the video server device renders jump video prompt information at the jump position information of the first video image. The jump video prompt information is used to prompt the user about the video content after the jump, and may be video image information or a text description of the video content.
This method can display the corresponding jump video prompt information at the corresponding position in the video image according to preset jump video prompt information. The jump video prompt information prompts the user about the video content after the jump, making it convenient for the user to choose, according to their interests and preferences, whether to perform a video jump here, which makes the video jump more intuitive and effective.
In a possible solution, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video.
This method can implement a video jump within one video, that is, a jump between different video scenes in the same video, helping the user jump, based on their preferences, to a video scene of interest and continue watching, without being restricted by the chronological order of video playback, improving the user's viewing experience.
In a possible solution, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video. Obtaining, based on the input, the jump time of the jump target video corresponding to the jump icon further includes: obtaining, based on the input, the playback address of the second video corresponding to the jump icon; and determining the second video image based on the playback address of the second video and the jump time of the second video.
This method can implement video jumps across different videos, that is, jumps between video scenes in different videos, helping the user jump, based on their preferences, to a video scene of interest and continue watching, without being restricted by the chronological order of video playback or by the content of the video currently being played. When the user is interested in a certain video scene, even if there is no related content left in the video currently being played, playback can jump to related content in another video, providing personalized services for the user and enriching the viewing interaction with the user.
In a second aspect, an embodiment of the present invention provides a method for controlling VR video playback, performed by a terminal device. The method includes: the terminal device plays a frame of first video image of a first video, where the first video image contains a jump icon, and the first video is the video being played by the terminal device; then, the terminal device receives an input for selecting the jump icon, and obtains, based on the input, jump information corresponding to the jump icon, where the jump information includes the jump time of a jump target video; next, the terminal device plays a second video image, where the second video image is the video image at the jump time of the jump target video.
In this method, the VR video image contains a jump icon, which can prompt the user, while watching the video, that a video jump can be performed at the jump icon. Based on the video content being watched, the user can select the jump icon to form an input; after receiving the input, the terminal device completes the video jump, quickly jumping the video to the scene the user is interested in. This method provides personalized services for the user, forms viewing interaction with the user, and improves the user experience of VR video.
In a possible solution, receiving the input for selecting the jump icon specifically includes:
the terminal device receives the input and obtains the input position information of the input in the first video image; and determines, according to the received input position information, that the input selects the jump icon.
This method determines the jump icon selected by the user based on the input position information of the user's input and obtains the corresponding jump information, which ensures that the terminal device performs the video jump based on the user's selection, forming viewing interaction with the user.
In a possible solution, before playing the first video image, the method further includes: the terminal device renders the jump icon at the jump position of the first video image, where the first video image is a frame of video image corresponding to a video frame identifier in the first video.
This method can display the corresponding jump icon at the corresponding position in the video image according to preset jump position information. The user can therefore judge, based on the video content near the position of the jump icon in the video image, whether to perform a video jump here, which makes the video jump more intuitive and effective.
In a possible solution, before playing the first video image, the method further includes: the terminal device renders jump video prompt information at the jump position of the first video image. The jump video prompt information is used to prompt the user about the video content after the jump, and may be video image information or a text description of the video content.
This method can display the corresponding jump video prompt information at the corresponding position in the video image according to preset jump video prompt information. The jump video prompt information prompts the user about the video content after the jump, making it convenient for the user to choose, according to their interests and preferences, whether to perform a video jump here, which makes the video jump more intuitive and effective.
In a possible solution, the jump target video is the first video, and the second video image is a frame of video image corresponding to the jump time of the first video.
This method can implement a video jump within one video, that is, a jump between different video scenes in the same video, helping the user jump, based on their preferences, to a video scene of interest and continue watching, without being restricted by the chronological order of video playback, improving the user's viewing experience.
In a possible solution, the jump target video is a second video, and the second video image is a frame of video image corresponding to the jump time of the second video. Obtaining, based on the input, the jump time of the jump target video corresponding to the jump icon further includes: obtaining, based on the input, the playback address of the second video corresponding to the jump icon; and determining the second video image based on the playback address of the second video and the jump time of the second video.
This method can implement video jumps across different videos, that is, jumps between video scenes in different videos, helping the user jump, based on their preferences, to a video scene of interest and continue watching, without being restricted by the chronological order of video playback or by the content of the video currently being played. When the user is interested in a certain video scene, even if there is no related content left in the video currently being played, playback can jump to related content in another video, providing personalized services for the user and enriching the viewing interaction with the user.
In a third aspect, an embodiment of the present invention provides a video server device; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The device has the function of implementing the behavior in the method example of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the structure of the video server device includes a sending module, a receiving module, and an obtaining module. These modules can perform the corresponding functions in the method example of the first aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In a fourth aspect, an embodiment of the present invention provides a terminal device; for beneficial effects, refer to the description of the second aspect, which is not repeated here. The device has the function of implementing the behavior in the method example of the second aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the structure of the terminal device includes a playing module, a receiving module, and an obtaining module. These modules can perform the corresponding functions in the method example of the second aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In a fifth aspect, an embodiment of the present invention further provides a video server device; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The structure of the device includes a processor, and may further include a transceiver or a memory. The processor is configured to support the video server device in performing the corresponding functions in the method of the first aspect. The memory is coupled to the processor and stores the program instructions and data necessary for the device. The transceiver is configured to communicate with other devices.
In a sixth aspect, an embodiment of the present invention further provides a terminal device; for beneficial effects, refer to the description of the second aspect, which is not repeated here. The structure of the device includes a processor, and may further include a transceiver or a memory. The processor is configured to support the terminal device in performing the corresponding functions in the method of the second aspect. The memory is coupled to the processor and stores the program instructions and data necessary for the device. The transceiver is configured to communicate with other devices.
In a seventh aspect, the present invention further provides a computer-readable storage medium. The computer storage medium includes a set of program code for performing the method described in any implementation of the first aspect of the embodiments of the present invention.
In an eighth aspect, the present invention further provides a computer-readable storage medium. The computer storage medium includes a set of program code for performing the method described in any implementation of the second aspect of the embodiments of the present invention.
These and other aspects of the present invention will be more clearly understood from the description of the following embodiments.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a method for producing jump information according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for controlling VR video playback according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another method for controlling VR video playback according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a composition of a video server device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a composition of a terminal device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another composition of a video server device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another composition of a terminal device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide a method for controlling VR video playback and a related apparatus, to solve the prior-art problem that a VR video cannot switch the played scene according to the user's preferences during playback and cannot interact with the user.
请参考图1,为本发明实施例中提供的***架构示意图。在典型应用场景中,包括视频处理设备、视频服务器设备和终端设备。
视频处理设备可以是计算机设备,视频处理设备可以具有较强的视频处理功能以及数据计算功能,例如可以提取视频中的辅助工具所在视频图像中的位置信息等。在发明实施例中,视频处理设备可以对录制好的视频进行处理生成VR视频,并且制作跳转信息。视频处理设备处理好的VR视频和跳转信息可以上传到视频服务器设备中由视频服务器设备控制视频的播放,也可以由终端设备下载并控制该视频的播放。
The video server device may be a local high-performance host or a remote server deployed in the cloud. The video server device may have strong image processing and data computing capabilities; for example, it can perform rendering operations and logical operations. The video server device may be a many-core server, a computer deployed with a graphics processing unit (GPU) cluster, a large distributed computer, a cluster computer with pooled hardware resources, and so on. In this embodiment of the present invention, the video server device can render a jump icon at the corresponding position in the video image according to the jump information, respond to the user's input selecting the jump icon, and play the jump target video to the user.
The terminal device may be a device worn on the user's head, such as VR glasses or a VR helmet, and may also include devices worn on other parts of the user's body, such as devices worn on the user's hands, elbows, feet, or knees, for example, a game handle. The terminal device can display the video images of the VR video to the user through a display. The terminal device may locally store the VR video and jump information data, render a jump icon at the corresponding position in the video image according to the jump information, respond to the user's input selecting the jump icon, and play the jump target video to the user. Alternatively, the terminal device may not store the VR video and jump information locally; instead, the relevant data is stored in the video server device, and when playing the VR video, the terminal device displays the video images of the VR video and sends the user's input information to the video server device.
Refer to FIG. 2, a schematic flowchart of a method for producing jump information according to an embodiment of the present invention. Jump information is the collection of information needed to jump from the first video to the jump target video. It may include the video frame identifier (ID) of a video image of the first video, jump position information, and the jump time; when the jump target video is not the first video, it also includes the playback address of the second video; it may further include jump video prompt information, and so on.
It should be noted that the jump information described here is an exemplary collection; the video frame identifier, jump position information, jump time, and other information may be contained in the jump information as a collection, or may not appear as a collection, and this is not limited in the embodiments of the present invention. "First video" is an exemplary name indicating the video currently being played; the jump target video is the video played after the jump, which may be the first video or another video; "second video" is also an exemplary name, indicating a video other than the first video.
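The jump-information collection described above can be illustrated with a minimal data structure. The field names and types below are assumptions for illustration; the embodiments do not prescribe a concrete encoding:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class JumpInfo:
    """One piece of jump information attached to a frame of the first video."""
    frame_id: int                               # video frame identifier of the first video image
    jump_position: Tuple[float, float, float]   # where the jump icon is rendered in the frame
    jump_time: float                            # playback start time in the jump target video (seconds)
    target_url: Optional[str] = None            # playback address of the second video; None means the jump stays in the first video
    prompt: Optional[str] = None                # optional jump video prompt information

# A frame may carry several jump infos, e.g. two doors leading to two halls.
frame_jumps = [
    JumpInfo(frame_id=120, jump_position=(1.0, 0.5, 2.0), jump_time=95.0,
             prompt="calligraphy and painting exhibition hall"),
    JumpInfo(frame_id=120, jump_position=(-1.5, 0.5, 2.0), jump_time=10.0,
             target_url="https://example.com/videos/hall_b"),
]
```

Keeping `target_url` optional mirrors the description: it is only present when the jump target video is a second video rather than the first video itself.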
The video processing device can produce the jump information. The specific method includes:
201. Before recording a video, add an auxiliary tool, where the video is the first video or the jump target video.
Before recording the video, plan the camera position path and the dwell time at each position. Add an auxiliary tool at the camera position, to be used for obtaining the jump position information during later video processing. Optionally, the auxiliary tool may be a positioning tool added at the camera positions of the first video and the jump target video, such as a high-precision GPS locator and gyroscope. The positioning tool can periodically generate the video frame identifier of the frames shot by the camera and the corresponding camera position information. Optionally, the auxiliary tool may also be a marker that is easy for a program to recognize, placed at the position where jump information needs to be set, for example, a marker of a specific shape or color. Since multiple pieces of jump information can be set in one video image, easily distinguishable markers can be placed, such as a red triangular cone and a yellow triangular cone, which can be used to distinguish different jump information during later video processing.
202. After the video is recorded, the video processing device obtains the jump time of the jump target video.
After the video is recorded, record the jump time of the jump target video, that is, the starting time point from which the jump target video is played after the jump. Optionally, jump video prompt information can be generated according to the video content played from the jump time of the jump target video. The jump video prompt information is used to prompt the user about the video content after the jump, making it convenient for the user to choose whether to perform the video jump according to their interests and preferences. The jump video prompt information may be video image information, or a text description of the video content.
When the jump target video is still the first video, the jump time is another playback time point of the first video; when the jump target video is not the first video, that is, when the jump is to a second video for playback, the jump time is a playback time point of the second video. It should be noted that the embodiments of the present invention can be used to jump between multiple time points in multiple videos; the present invention is described using the example of jumping from the first video to the jump time of the jump target video, which does not constitute a limitation on the embodiments of the present invention.
203. The video processing device obtains, according to the auxiliary tool, the video frame identifier of a video image in the first video and the jump position information in that video image.
When the auxiliary tool added in step 201 is a positioning tool, the positioning tool can be used to obtain the video frame identifier of the first video image generated when the camera records the first video and the corresponding first camera position information, as well as the video frame identifier of the second video image generated when recording the jump target video and the corresponding target camera position information. Processing the first camera position information and the target camera position information yields the jump position information; for example, the relative position information obtained by subtracting the two position values may serve as the jump position information. Record the video frame identifier of the first video image and the corresponding jump position information. The second video image may be the video image at the jump time of the jump target video to which playback jumps from the first video image of the first video.
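The "subtract the two position values" step can be sketched as a component-wise difference between the first camera position and the target camera position. The (x, y, z) coordinate convention is an assumption for illustration:

```python
def relative_position(first_cam, target_cam):
    """Jump position information as the offset of the target camera position
    from the first camera position, computed component-wise (x, y, z)."""
    return tuple(t - f for f, t in zip(first_cam, target_cam))

# e.g. first camera at (10, 0, 3), target camera at (12, 0, 7)
offset = relative_position((10.0, 0.0, 3.0), (12.0, 0.0, 7.0))
# offset == (2.0, 0.0, 4.0)
```

The resulting offset, paired with the frame identifier recorded at the same moment, is what later lets the renderer place the jump icon in the first video image.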
When the auxiliary tool added in step 201 is a marker that is easy for a program to recognize, a feature image is extracted according to the characteristics of the marker; an image recognition program is then used to recognize the feature image in the first video image of the first video; the position information of the feature image in that first video image, that is, the corresponding jump position information, is then computed, and the video frame identifier of the first video image is recorded at the same time.
204. The video processing device matches the video frame identifier, the jump position information, and the jump time of the jump target video to generate the jump information.
The video frame identifier and the corresponding jump position information are already matched when recorded in step 203. They are then matched with the jump time of the jump target video; this matching means deciding at which position information of which frame of the first video playback should jump to the jump time of which jump target video to continue playing. Optionally, the second video image is related to the video content at the jump position information of the first video image. For example, if the video content at jump position information (x1, y1, z1) in the M-th frame of the first video is a door, the exhibition hall entered through that door is a calligraphy and painting exhibition hall, and the video scene played at jump time t of the jump target video is that calligraphy and painting exhibition hall, then the video frame identifier of the M-th frame of the first video, the corresponding jump position information (x1, y1, z1), and the jump time t of the jump target video can be matched to generate the jump information.
After the jump information is generated, it is stored together with the first video. There are multiple storage methods: each piece of jump information can be stored together with the video image data corresponding to the video frame identifier in that jump information, or all the jump information of the first video and the video data of the first video can be stored as two associated data files. It should be noted that in the embodiments of the present invention, the stored file is described in terms of the jump information and the first video; the file may be one file combining the jump information and the first video data, or two associated files, which is not limited in the embodiments of the present invention.
Refer to FIG. 3, a schematic flowchart of a method for controlling VR video playback according to an embodiment of the present invention. In this embodiment, the video server device plays the video images of the first video to the terminal device, with a jump icon contained at the jump position of the video image; when the user selects the jump icon, the video server device obtains the jump time of the jump target video corresponding to the jump icon, and plays the jump target video to the terminal device starting from that jump time. The method includes:
301. The terminal device sends a first video playback request, containing the playback address of the first video, to the video server device.
The user requests the first video through the terminal device; after receiving the input, the terminal device sends the first video playback request to the video server device.
302. The video server device obtains the first video and the jump information according to the playback address of the first video.
The video server device receives the first video playback request sent by the terminal device and obtains the first video and the jump information according to the playback address of the first video in the request. Optionally, the first video and the jump information are stored in the video server device, and the video server device obtains them directly; for example, the video processing device uploads the processed first video and jump information to the video server. Optionally, if the first video and the jump information are not in the video server device, the video server device requests the relevant resources from other devices according to the address.
303.视频服务器设备在第一视频图像的跳转位置处渲染跳转图标。
由图2所示实施例步骤204可知,视频服务器设备获取的第一视频和跳转信息可以是组合的一个文件,也可以是相关联的两个文件。视频服务器设备将获取的第一视 频和跳转信息进行解析,解析成视频帧数据和跳转信息数据。视频服务器设备根据视频帧数据和跳转信息进行渲染生成视频图像,视频图像中包含跳转图标。
其中,第一视频图像是第一视频中的一帧视频图像。第一视频图像的跳转信息中包含视频帧标识符,根据所述视频帧标识符可以将第一视频图像和跳转信息关联起来。可以理解,第一视频图像为视频帧标识符对应的一帧视频图像。第一视频图像的跳转信息中还包含跳转位置信息,根据所述跳转位置信息可以在第一视频图像的跳转位置处渲染跳转图标。可选地,跳转信息中还可以包含跳转视频提示信息,根据所述跳转位置信息还可以在第一视频图像的跳转位置处渲染跳转视频提示信息。
例如,第一视频图像为第n帧视频图像,第n帧视频图像的视频帧标识符可以有对应的多个跳转信息,以有1个跳转信息为例进行说明,应理解,当有多个跳转信息时,对于其他跳转信息的处理方法也是相同的。首先,根据第n帧视频图像的视频帧标识符获取对应的跳转信息,根据该跳转信息得到跳转位置信息,例如三维坐标(x,y,z)。可选地,还可以得到跳转视频提示信息,跳转视频提示信息的解释参见图2所示实施例步骤202,例如字画展览厅的信息。视频服务器设备根据视频帧数据和用户视角等信息渲染生成第n帧视频图像,并在第n帧视频图像中(x,y,z)相应的跳转位置处渲染跳转图标,例如小箭头等,用于向用户提示此处可以互动进行视频跳转;可选地,视频服务器设备还可以在第n帧视频图像中(x,y,z)相应的跳转位置处渲染跳转视频提示信息;可选地,还可以以跳转视频提示信息作为跳转图标。
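上述"根据第n帧视频图像的视频帧标识符获取对应的跳转信息，并在相应跳转位置处渲染跳转图标"的检索逻辑可示意如下。渲染本身依赖具体的图形管线，此处仅示意由帧标识符到待渲染图标位置列表的查询，数据结构为示例性假设：

```python
def icon_positions_for_frame(frame_id, jump_infos):
    # 检索第 n 帧对应的所有跳转信息，返回该帧中需要渲染跳转图标的
    # 跳转位置信息列表（一个视频帧标识符可以对应多个跳转信息）
    return [info["jump_position"] for info in jump_infos
            if info["frame_id"] == frame_id]

jump_infos = [
    {"frame_id": "n",   "jump_position": (1.0, 2.0, 3.0)},
    {"frame_id": "n",   "jump_position": (4.0, 5.0, 6.0)},
    {"frame_id": "n+1", "jump_position": (7.0, 8.0, 9.0)},
]
print(icon_positions_for_frame("n", jump_infos))
```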
304.视频服务器设备向终端设备发送第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标。
视频服务器设备通过终端设备的显示器向用户显示第一视频图像,第一视频图像中包含跳转图标。可选地,第一视频图像中包含跳转视频提示信息。可选地,视频服务器设备可以对跳转图标设置动画,例如使跳转图标跳动、高亮等,用于更好地提醒用户此处可以互动进行视频跳转。应理解,视频服务器通过终端设备播放视频时,可以是向终端设备发送一帧一帧的视频图像,也可以是发送多帧视频图像,且无论视频服务器设备以哪种形式向终端设备发送视频数据,都可以理解为是以发送视频流的形式。本发明实施例中,视频服务器设备向终端设备发送一帧视频图像,是从视频显示的角度来阐述的,可以包括以上各种具体的发送形式。
305.终端设备接收用户的输入。
用户基于自己的喜好,可以选中第一视频图像中的跳转图标来跳转到对应的视频场景中进行播放。跳转图标是在视频图像中的,用户可以基于视频图像中的视频内容,特别是跳转图标所在的视频内容,来进行视频跳转。用户选中跳转图标形成输入的方法可以有很多,例如通过空鼠、射线枪、手柄等VR控制设备选中跳转图标形成输入,或者通过凝视跳转图标形成输入等。终端设备接收用户的输入,并获取所述输入的输入位置信息。所述输入位置信息可以是VR控制设备的姿态,也可以是根据VR控制设备的输入得到的在终端设备显示的视频画面中的位置,还可以是用户凝视的视线在终端设备显示的视频画面中的位置(如坐标)等。
需要说明的是,当视频服务器设备在播放视频时,如果没有接收到用户输入,则正常播放VR视频,本发明实施例不对此限定。
306.终端设备向视频服务器设备发送输入。
终端设备向视频服务器设备发送所述输入。具体地,终端设备向视频服务器设备发送所述输入的输入位置信息。可选地,终端设备还可以向视频服务器设备发送输入的视频帧标识符,所述输入的视频帧标识符是用户点击形成输入时,终端设备正在显示的第三视频图像对应的视频帧标识符。所述第三视频图像可以和第一视频图像是同一帧视频图像,也可以是视频图像内容相同或相近的两帧视频图像。
307.视频服务器设备接收用于选中所述跳转图标的输入,基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间。
视频服务器设备接收终端设备发送的所述输入,具体地,可以是接收所述输入的输入位置信息,并且根据所述输入位置信息,确定所述输入选中所述跳转图标。具体方法包括:
可选地,用户看到终端设备显示的有跳转图标的视频图像可以是第一视频图像;用户思考后基于视频内容、自己的喜好或者是跳转视频提示信息,选中跳转图标在终端设备中形成输入时,此时终端设备正在显示的可以是第三视频图像;终端设备接收用户的输入并向视频服务器设备发送输入位置信息,当视频服务器设备接收到所述输入位置信息时正在处理的视频图像可以是第四视频图像。需要说明的是,第一视频图像、第三视频图像以及第四视频图像可以是同一帧视频图像(情景一),也可以是视频图像的视频内容相同或相近的几帧视频图像(情景二),还可以出现由于刚好处于镜头切换的时间点,而导致视频图像的内容差别较大的情况(情景三)。
可选地，对于情景一，视频服务器设备根据接收到的输入位置信息生成第四视频图像的相对位置信息，可以是转化到第四视频图像的坐标轴中得到的坐标信息。视频服务器设备将该相对位置信息与第四视频图像的跳转信息中的跳转位置信息进行比较。若两个位置信息之间的距离差值在可以接受的范围之外，则表示用户没有点击到有效的互动区域，可能是用户的误操作，视频服务器设备可以不做互动响应，继续播放视频，或者给出未选中跳转图标的提示；若两个位置信息之间的距离差值在可以接受的范围之内，则视频服务器设备确定用户的输入选中了所述跳转信息对应的跳转图标，并获取所述跳转信息，得到其中的跳转目标视频的跳转时间。可选地，当所述跳转目标视频为第二视频，即需要跳转到不同于正在播放的第一视频的第二视频时，视频服务器设备还可以得到第二视频的播放地址。
应理解，可以接受的范围表示的是一个跳转图标在视频图像中可以与用户产生互动的范围。需要说明的是，当第四视频图像中存在多个跳转信息时，也可以通过其中的跳转位置信息来判断用户选中的是哪一个。
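上述"将输入的相对位置信息与各跳转位置信息比较，距离差值在可接受范围之内则判定选中跳转图标"的判断，可以示意为如下Python草案。半径阈值即一个跳转图标可与用户产生互动的范围，其取值与数据结构均为示例性假设：

```python
import math

def hit_test(input_pos, jump_infos, radius):
    # 在当前视频图像的多个跳转信息中，找出与输入位置的距离差值
    # 在可接受范围（radius）之内且最近的一个；都不满足时返回 None，
    # 表示用户未点击到有效的互动区域，可以不做互动响应
    best, best_dist = None, radius
    for info in jump_infos:
        d = math.dist(input_pos, info["jump_position"])
        if d <= best_dist:
            best, best_dist = info, d
    return best

jump_infos = [{"jump_position": (1.0, 2.0, 3.0), "jump_time": 42.0}]
hit = hit_test((1.1, 2.0, 3.0), jump_infos, radius=0.5)
print(hit)  # 命中：距离 0.1 在可接受范围之内
```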
可选地,对于情景二,从用户的角度来说,第一视频图像、第三视频图像和第四视频图像的跳转图标所指示的视频跳转是相同的,可以认为是相同的跳转图标;且从视频服务器设备的角度来说,第一视频图像、第三视频图像和第四视频图像的跳转信息中的跳转目标视频和跳转时间是相同的,则第一视频图像、第三视频图像的跳转图标对应的跳转信息也可以是第四视频图像的跳转信息,视频服务器设备依然可以按照情景一的方法进行操作。
可选地，对于情景三，第一视频图像、第三视频图像和第四视频图像的跳转图标所指示的视频跳转是不同的，则用户选中的跳转图标与第四视频图像的跳转信息是不能对应的。视频服务器设备无法在第四视频图像中找到对应的跳转信息，则可以不做互动响应，继续播放视频，或者给出未选中跳转图标的提示。
可选地，对于以上三个情景，当步骤306中终端设备还发送了视频帧标识符时，视频服务器设备根据所述视频帧标识符找到对应的第三视频图像，再根据所述输入位置信息获取跳转信息，从而获取跳转目标视频的跳转时间。具体地，视频服务器设备根据接收到的输入位置信息生成第三视频图像的相对位置信息，可以是转化到对应的坐标轴中得到的坐标信息。视频服务器设备将该相对位置信息与第三视频图像的跳转信息中的跳转位置信息进行比较。若两个位置信息之间的距离差值在可以接受的范围之外，则表示用户没有点击到有效的互动区域，可能是用户的误操作，视频服务器设备可以不做互动响应；若两个位置信息之间的距离差值在可以接受的范围之内，则视频服务器设备确定用户的输入选中了所述跳转信息对应的跳转图标，并获取所述跳转信息，得到其中的跳转目标视频的跳转时间。可选地，当所述跳转目标视频为第二视频，即需要跳转到第二视频时，视频服务器设备还可以得到第二视频的播放地址。
需要说明的是，情景三出现的可能性是极小的。由于跳转图标是用于给用户进行视频跳转做提示的，需要给用户发现跳转图标的时间，且还要给用户考虑是否进行视频跳转的时间，因此邻近的多帧视频图像的跳转图标都是相同的，情景一和情景二中所述的视频服务器设备的处理方法是可以实现与用户基于视频内容的互动的。
308.视频服务器设备向所述终端设备发送第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
基于步骤307中获得的跳转时间,视频服务器设备可以获得所述跳转时间对应的视频图像的数据,渲染生成第二视频图像。应理解,当第二视频图像有跳转信息时,在第二视频图像中渲染对应的跳转图标,接下来控制VR视频播放的方法与步骤304到步骤308的方法相同。
当需要跳转到第二视频时,也就是当步骤307中基于所述输入获取的跳转信息中包含第二视频的播放地址时,视频服务器设备可以基于所述第二视频的播放地址获取第二视频,可选地,还可以获取第二视频的跳转信息。基于步骤307中获得的跳转时间,视频服务器设备还可以确定第二视频的跳转时间对应的第二视频图像。具体地,视频服务器设备渲染生成第二视频图像,当第二视频图像有对应的跳转信息时,还包括在第二视频图像中渲染跳转图标。接下来控制VR视频播放的方法与步骤304到步骤308的方法相同。
可选地,第二视频图像的视频内容与第一视频图像的所述跳转图标处的视频内容相关。由图2所示实施例的步骤204所述方法可得,从跳转目标视频的跳转时间播放的第二视频图像是与第一视频的第一视频图像中的跳转位置信息处的视频内容相关的,也就是和第一视频图像中的跳转图标处的视频内容相关。
应理解,本发明实施例中,第一视频图像、第二视频图像、第一视频、第二视频等命名是为了清楚阐述技术方案而取的示例名称,并不构成对本发明实施例的限定。
在本发明实施例提供的控制VR视频播放的方法中，VR视频图像中包含跳转图标，可以在用户观看视频时，提示用户在跳转图标处可以进行视频跳转。用户能够基于观看到的视频内容，并结合自己的兴趣喜好，通过选中跳转图标，快速地跳转到其他感兴趣的场景中。该方法为用户提供个性化服务，与用户形成观影互动，提高了VR视频的用户体验。
请参考图4，为本发明实施例提供的另一种控制VR视频播放的方法的流程示意图。在本发明实施例中，终端设备播放第一视频的视频图像，视频图像的跳转位置处包含跳转图标；当用户选中跳转图标时，终端设备获取跳转图标对应的跳转目标视频的跳转时间，从所述跳转时间播放所述跳转目标视频。其中，步骤401、402与图3所示实施例步骤301、302相同，此处不再赘述。所述方法还包括：
403.视频服务器设备向终端设备发送第一视频和跳转信息。
图4所述实施例中,由终端设备控制VR视频的播放,因此,视频服务器设备将第一视频和跳转信息发送到终端设备。需要说明的是,第一视频和跳转信息也可以是存储在终端设备中,还可以是存储在DVD、硬盘等存储设备中,则终端设备可以不执行步骤401-403,而由终端设备直接获取第一视频和跳转信息。
404.终端设备在第一视频图像的跳转位置处渲染跳转图标。
步骤404具体方法和图3所示实施例步骤303相同,只是方法的执行主体从视频服务器设备换成了终端设备。由图2所示实施例步骤204可知,终端设备获取的第一视频和跳转信息可以是组合的一个文件,也可以是相关联的两个文件。终端设备将获取的第一视频和跳转信息进行解析,解析成视频帧数据和跳转信息数据。终端设备根据视频帧数据和跳转信息数据进行渲染生成视频图像,且视频图像中包含跳转图标。
其中,第一视频图像是第一视频中的一帧视频图像。第一视频图像的跳转信息中包含视频帧标识符,根据所述视频帧标识符可以将第一视频图像和跳转信息关联起来。可以理解,第一视频图像为视频帧标识符对应的一帧视频图像。第一视频图像的跳转信息中还包含跳转位置信息,根据所述跳转位置信息可以在第一视频图像的跳转位置处渲染跳转图标。可选地,跳转信息中还可以包含跳转视频提示信息,根据所述跳转位置信息还可以在第一视频图像的跳转位置处渲染跳转视频提示信息。
例如,第一视频图像为第n帧视频图像,第n帧视频图像的视频帧标识符可以有对应的多个跳转信息,以有1个跳转信息为例进行说明,应理解,当有多个跳转信息时,对于其他跳转信息处理方法也是相同的。首先,根据第n帧视频图像的视频帧标识符获取对应的跳转信息,根据该跳转信息得到跳转位置信息,例如三维坐标(x,y,z)。可选地,还可以得到跳转视频提示信息,跳转视频提示信息的解释参见图2所示实施例步骤202,例如字画展览厅的信息。终端设备根据视频帧数据和用户视角等信息渲染生成第n帧视频图像,并在第n帧视频图像中(x,y,z)相应的跳转位置处渲染跳转图标,例如小箭头等,用于向用户提示此处可以互动进行视频跳转;可选地,终端设备还可以在第n帧视频图像中(x,y,z)相应的跳转位置处渲染跳转视频提示信息;可选地,以跳转视频提示信息作为跳转图标。
405.终端设备播放第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标。
终端设备通过显示器向用户显示第一视频图像，第一视频图像中包含跳转图标。可选地，第一视频图像中还可以包含跳转视频提示信息。可选地，终端设备可以对跳转图标设置动画，例如使跳转图标跳动、高亮等，用于更好地提醒用户此处可以互动进行视频跳转。
406.终端设备接收用于选中所述跳转图标的输入。
用户基于自己的喜好,可以点击第一视频图像中的跳转图标来跳转到对应的视频场景中进行播放。跳转图标是在视频图像中的,用户可以基于视频图像中的视频内容,特别是跳转图标所在的视频内容,来进行视频跳转。用户选中跳转图标形成输入的方法可以有很多,例如通过空鼠、射线枪、手柄等VR控制设备选中跳转图标形成输入,或者通过凝视跳转图标形成输入等。终端设备接收用户的输入,并获取所述输入的输入位置信息。所述输入位置信息可以是VR控制设备的姿态,也可以是根据VR控制设备的输入得到的在终端设备显示的视频画面中的位置,还可以是用户凝视的视线在终端设备显示的视频画面中的位置(如坐标)等。
用户看到终端设备显示的有跳转图标的视频图像可以是第一视频图像;用户思考后基于视频内容、自己的喜好或者是跳转视频提示信息,选中跳转图标在终端设备中形成输入时,此时终端设备正在显示的可以是第三视频图像。需要说明的是,第一视频图像和第三视频图像可以是同一帧视频图像(情景一),也可以是视频图像的视频内容相同或相近的两帧视频图像(情景二),还可以出现由于刚好处于镜头切换的时间点,而导致视频图像的内容差别很大的情况(情景三)。
可选地，对于情景一，终端设备根据接收到的输入位置信息生成第三视频图像的相对位置信息，可以是转化到第三视频图像的坐标轴中得到的坐标信息。终端设备将该相对位置信息与第三视频图像的跳转信息中的跳转位置信息进行比较。若两个位置信息之间的距离差值在可以接受的范围之外，则表示用户没有点击到有效的互动区域，可能是用户的误操作，终端设备可以不做互动响应，继续播放视频，或者给出未选中跳转图标的提示；若两个位置信息之间的距离差值在可以接受的范围之内，则终端设备确定用户的输入选中了所述跳转信息对应的跳转图标。
应理解，可以接受的范围表示的是一个跳转图标在视频图像中可以与用户产生互动的范围。需要说明的是，当第三视频图像中存在多个跳转信息时，也可以通过其中的跳转位置信息来判断用户选中的是哪一个。
可选地,对于情景二,从用户的角度来说,第一视频图像和第三视频图像的跳转图标所指示的视频跳转是相同的,可以认为是相同的跳转图标;且从终端设备的角度来说,第一视频图像和第三视频图像的跳转信息中的跳转目标视频和跳转时间是相同的,则第一视频图像的跳转图标对应的跳转信息也可以是第三视频图像的跳转信息,终端设备依然可以按照情景一的方法来确定用户的输入是否选中了跳转图标。
可选地,对于情景三,第一视频图像和第三视频图像的跳转图标所指示的视频跳转是不同的,则用户选中的跳转图标与第三视频图像的跳转信息是不能对应的。终端设备无法在第三视频图像中找到对应的跳转信息,则可以不做互动响应,继续播放视频,或者给出未选中跳转图标的提示。
需要说明的是，情景三出现的可能性是极小的。由于跳转图标是用于给用户进行视频跳转做提示的，需要给用户发现跳转图标的时间，且还要给用户考虑是否进行视频跳转的时间，因此邻近的多帧视频图像的跳转图标都是相同的，情景一和情景二中所述的终端设备执行的方法是可以实现与用户基于视频内容的互动的。
当终端设备在播放视频时,如果没有接收到用户输入,则正常播放VR视频,本发明实施例不对此限定。
407.终端设备基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间。
步骤406中终端设备确定用户的输入选中了所述跳转图标之后,可以获取所述跳转信息,得到其中的跳转目标视频的跳转时间。可选地,当所述跳转目标视频为第二视频,即需要跳转到不同于正在播放的第一视频的第二视频时,终端设备还可以得到第二视频的播放地址。
408.终端设备播放第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
基于步骤407中获得的跳转时间,终端设备可以获得所述跳转时间对应的视频图像的数据,渲染生成第二视频图像。应理解,当第二视频图像有跳转信息时,在第二视频图像中渲染对应的跳转图标,接下来控制VR视频播放的方法与步骤405到步骤408的方法相同。
当需要跳转到第二视频时,也就是当步骤407中基于所述输入获取的跳转信息中包含第二视频的播放地址时,终端设备可以根据所述第二视频的播放地址获取第二视频,可选地,还可以获取第二视频的跳转信息。基于步骤407中获得的跳转时间,终端设备还可以确定第二视频的跳转时间对应的第二视频图像。具体地,终端设备渲染生成第二视频图像,当第二视频图像有对应的跳转信息时,还包括在第二视频图像中渲染跳转图标。接下来控制VR视频播放的方法与步骤405到步骤408的方法相同。
可选地,第二视频图像的视频内容与第一视频图像的跳转图标处的视频内容相关。由图2所示实施例的步骤204所述方法可得,从跳转目标视频的跳转时间播放的第二视频图像是与第一视频的第一视频图像中的跳转位置信息处的视频内容相关的,也就是和第一视频图像中的跳转图标处的视频内容相关。
应理解,本发明实施例中,第一视频图像、第二视频图像、第一视频、第二视频等命名是为了清楚讲解技术方案而取的示例名称,并不构成对本发明实施例的限定。
本发明实施例提供的控制VR视频播放的方法,与图3所示实施例相比,是由终端设备来执行控制VR视频播放的方法,在图3所示实施例的基础上,减小了视频跳转的时延,提升了用户的观影互动体验。
请参考图5,为本发明实施例提供的一种视频服务器设备的组成示意图。所述视频服务器设备包括:
-发送模块501,用于向终端设备发送第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标,其中,所述第一视频为视频服务器设备正在为终端设备播放的视频,具体执行过程可参见图3所示实施例中的步骤说明,如步骤304;
-接收模块502，用于接收所述终端设备发送的用于选中所述跳转图标的输入，具体执行过程可参见图3所示实施例中的步骤说明，如步骤305-307；
具体地，所述接收模块502接收所述终端设备发送的用于选中所述跳转图标的所述输入，具体方法可以包括：接收子模块5021，用于接收所述终端设备发送的所述输入的输入位置信息；确定子模块5022，用于根据所述输入位置信息，确定所述输入选中所述跳转图标，具体执行过程可参见图3所示实施例中的步骤说明，如步骤307。
-获取模块503,用于基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间,具体执行过程可参见图3所示实施例中的步骤说明,如步骤307;
-所述发送模块501,还用于向所述终端设备发送第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像,具体执行过程可参见图3所示实施例中的步骤说明,如步骤308。
具体地,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像,也就是在正在观看的第一视频中进行视频跳转,具体执行过程可参见图3所示实施例中的步骤说明,如步骤308。
具体地,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述获取模块503获取所述跳转目标视频的跳转时间,还包括:所述获取模块503,还用于基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;确定模块504,用于基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。也就是跳转到不同于正在播放的第一视频的第二视频中去继续播放视频,具体执行过程可参见图3所示实施例中的步骤说明,如步骤308。
可选地,所述第二视频图像的视频内容与所述第一视频图像的所述跳转图标处的视频内容相关,具体内容参见图3所示实施例中的步骤说明,如步骤308。
可选地,在所述发送模块501向所述终端设备发送所述第一视频图像之前,还包括:渲染模块505,用于在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的视频图像,具体内容参见图3所示实施例中的步骤说明,如步骤303。
可选地,在所述发送模块501向所述终端设备发送所述第一视频图像之前,还包括:所述渲染模块505,还用于在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息。所述跳转视频提示信息用于向用户提示跳转之后的视频内容,所述跳转视频提示信息可以是视频图像信息,也可以是对视频内容的文字描述信息。具体内容参见图3所示实施例中的步骤说明,如步骤303。
由于本发明实施例提供的视频服务器设备可用于控制VR视频播放的方法,因此其所能获得的技术效果可参考上述方法实施例,在此不再赘述。
请参考图6,为本发明实施例提供的一种终端设备的组成示意图。所述终端设备包括:
-播放模块601,用于播放第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标。其中,所述第一视频为终端设备正在播放的视频,具体执行过程可参见图4所示实施例中的步骤说明,如步骤405;
-接收模块602,用于接收用于选中所述跳转图标的输入,具体执行过程可参见图4所示实施例中的步骤说明,如步骤406;
具体地,所述接收模块602接收用于选中所述跳转图标的所述输入,具体包括:接收子模块6021,用于接收所述输入;获取子模块6022,用于获取所述输入在所述第一视频图像的输入位置信息;确定子模块6023,用于根据所述输入位置信息,确定所述输入选中所述跳转图标。具体执行过程可参见图4所示实施例中的步骤说明,如步骤406;
-获取模块603,用于基于所述输入获取所述跳转图标相对应的跳转目标视频的跳转时间,具体执行过程可参见图4所示实施例中的步骤说明,如步骤407;
-所述播放模块601,还用于播放第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像,具体执行过程可参见图4所示实施例中的步骤说明,如步骤408。
具体地,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像。也就是在正在观看的第一视频中进行视频跳转,具体执行过程可参见图4所示实施例中的步骤说明,如步骤408。
具体地,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述获取模块603用于基于所述输入获取所述跳转图标对应的所述跳转目标视频的所述跳转时间,还包括:所述获取模块603,还用于基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;确定模块604,用于基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。也就是跳转到不同于正在播放的第一视频的第二视频中去继续播放视频,具体执行过程可参见图4所示实施例中的步骤说明,如步骤408。
可选地,所述第二视频图像的视频内容与所述第一视频图像的所述跳转图标处的视频内容相关,具体执行过程可参见图4所示实施例中的步骤说明,如步骤408。
可选地,在所述播放模块601播放所述第一视频图像之前,还包括:渲染模块605,用于在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的一帧视频图像。具体执行过程可参见图4所示实施例中的步骤说明,如步骤404。
可选地,在所述播放模块601播放所述第一视频图像之前,还包括:所述渲染模块605,还用于在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息。所述跳转视频提示信息用于向用户提示跳转之后的视频内容,所述跳转视频提示信息可以是视频图像信息,也可以是对视频内容的文字描述信息。具体执行过程可参见图4所示实施例中的步骤说明,如步骤404。
由于本发明实施例提供的终端设备可用于控制VR视频播放的方法,因此其所能获得的技术效果可参考上述方法实施例,在此不再赘述。
请参考图7,为本发明实施例中提供的一种视频服务器设备的另一组成示意图。包括至少一个处理器701、收发器702,可选的,还可以包括存储器703。
存储器703可以是易失性存储器，例如随机存取存储器；存储器也可以是非易失性存储器，例如只读存储器，快闪存储器，硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)，或者存储器703是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质，但不限于此。存储器703可以是上述存储器的组合。
本发明实施例中不限定上述处理器701以及存储器703之间的具体连接介质。本发明实施例在图中以存储器703和处理器701之间通过总线704连接,总线704在图中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。该总线704可以分为地址总线、数据总线、控制总线等。为便于表示,图7中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
处理器701可以具有数据收发功能,能够与其他设备进行通信。如在本发明实施例中,处理器701可以向终端设备发送视频图像,也可以接收来自所述终端设备的输入位置信息。在如图7所示的视频服务器设备中,也可以设置独立的数据收发模块,例如收发器702,用于收发数据;处理器701在与其他设备进行通信时,也可以通过收发器702进行数据传输,如在本发明实施例中,处理器701可以通过收发器702向终端设备发送视频图像,也可以通过收发器702接收来自所述终端设备的输入位置信息。
处理器701可以具有视频图像渲染功能,例如在视频图像中渲染跳转图标;处理器701也可以通过终端设备向用户显示视频图像,视频图像中包含跳转图标;处理器701还可以读取存储器703中的跳转时间,并进行对应的视频跳转。
作为一种实现方式,收发器702的功能可以考虑通过收发电路或者收发的专用芯片实现。处理器701可以考虑通过专用处理芯片、处理电路、处理器或者通用芯片实现。例如,处理器701可以是中央处理单元(Central Processing Unit,简称为“CPU”),该处理器还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
作为另一种实现方式,可以考虑使用通用计算机的方式来实现本发明实施例提供的视频服务器设备。即将实现处理器701,收发器702功能的程序代码存储在存储器703中,通用处理器通过执行存储器703中的代码来实现处理器701,收发器702的功能。
当所述视频服务器设备采用图7所示的形式时，图7中的处理器701可以通过调用存储器703中存储的计算机执行指令，使得所述视频服务器设备可以执行上述方法实施例中的所述视频服务器设备执行的方法。具体步骤请参见前述方法或其他实施例中的描述，此处不做赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
请参考图8,为本发明实施例中提供的一种终端设备的另一组成示意图。包括至少一个处理器801、收发器802,可选的,还可以包括存储器803。
可选地，所述装置800还可以包括显示器804，用于向用户显示视频图像，视频图像包含跳转图标；所述装置还可以包括传感器805，用于捕捉用户选中跳转图标的凝视或者获取终端设备的姿态和位置，需要说明的是，传感器805也可以以有输入功能的器件来表示，如鼠标、空鼠、射线枪、手柄等。
存储器803可以是易失性存储器,例如随机存取存储器;存储器也可以是非易失性存储器,例如只读存储器,快闪存储器,硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)、或者存储器803是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器803可以是上述存储器的组合。
本发明实施例中不限定上述处理器801以及存储器803之间的具体连接介质。本发明实施例在图中以存储器803和处理器801之间通过总线806连接,总线806在图中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。该总线806可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
处理器801可以具有数据收发功能,能够与其他设备进行通信。如在本发明实施例中,处理器801可以向视频服务器设备请求视频数据和跳转信息,并接收视频服务器设备发送的数据。在如图8所示终端设备中,也可以设置独立的数据收发模块,例如收发器802,用于收发数据;处理器801在与其他设备进行通信时,也可以通过收发器802进行数据传输。
处理器801可以具有视频图像渲染功能,例如在视频图像中渲染跳转图标;处理器801也可以通过控制显示器804向用户显示视频图像,视频图像中包含跳转图标;处理器801还可以接收传感器805的输入,并获取输入位置信息;处理器801还可以读取存储器803中的跳转时间,并进行对应的视频跳转。
作为一种实现方式,收发器802的功能可以考虑通过收发电路或者收发的专用芯片实现。处理器801可以考虑通过专用处理芯片、处理电路、处理器或者通用芯片实现。例如,处理器801可以是中央处理单元(Central Processing Unit,简称为“CPU”),该处理器还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
作为另一种实现方式,可以考虑使用通用计算机的方式来实现本发明实施例提供的终端设备。即将实现处理器801,收发器802功能的程序代码存储在存储器803中,通用处理器通过执行存储器803中的代码来实现处理器801,收发器802的功能。
当所述终端设备采用图8所示的形式时，图8中的处理器801可以通过调用存储器803中存储的计算机执行指令，使得所述终端设备可以执行上述方法实施例中的所述终端设备执行的方法。具体步骤请参见前述方法或其他实施例中的描述，此处不做赘述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本发明实施例中涉及的各种数字编号仅为描述方便进行的区分，并不用来限制本发明实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后，各过程的执行顺序应以其功能和内在逻辑确定，而不应对本发明实施例的实施过程构成任何限定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各种说明性逻辑块(illustrative logical block)和步骤(step),能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
在本发明所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线（DSL））或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如，DVD）、或者半导体介质（例如固态硬盘Solid State Disk（SSD））等。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (24)

  1. 一种控制VR视频播放的方法,其特征在于,包括在视频服务器设备中执行以下步骤:
    向终端设备发送第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标;
    接收所述终端设备发送的用于选中所述跳转图标的输入;
    基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间;
    向所述终端设备发送第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
  2. 根据权利要求1所述的方法,其特征在于,所述接收所述终端设备发送的用于选中所述跳转图标的所述输入,具体包括:
    接收所述终端设备发送的所述输入的输入位置信息;
    根据所述输入位置信息,确定所述输入选中所述跳转图标。
  3. 根据权利要求1所述的方法,其特征在于,在所述向所述终端设备发送所述第一视频图像之前,还包括:
    在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的一帧视频图像。
  4. 根据权利要求1所述的方法,其特征在于,在所述向所述终端设备发送所述第一视频图像之前,还包括:
    在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息,所述第一视频图像为所述第一视频中所述视频帧标识符对应的一帧视频图像。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像。
  6. 根据权利要求1-4任一项所述的方法,其特征在于,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述基于所述输入获取所述跳转图标对应的所述跳转目标视频的所述跳转时间,还包括:
    基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;
    基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。
  7. 一种控制VR视频播放的方法,其特征在于,包括在终端设备中执行以下步骤:
    播放第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标;
    接收用于选中所述跳转图标的输入;
    基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间;
    播放第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
  8. 根据权利要求7所述的方法,其特征在于,所述接收用于选中所述跳转图标的所述输入,具体包括:
    接收所述输入,获取所述输入在所述第一视频图像的输入位置信息;
    根据所述输入位置信息,确定所述输入选中所述跳转图标。
  9. 根据权利要求7所述的方法,其特征在于,在所述播放所述第一视频图像之前,还包括:
    在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的一帧视频图像。
  10. 根据权利要求7所述的方法,其特征在于,在所述播放所述第一视频图像之 前,还包括:
    在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息,所述第一视频图像为所述第一视频中所述视频帧标识符对应的一帧视频图像。
  11. 根据权利要求7-10任一所述的方法,其特征在于,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像。
  12. 根据权利要求7-10任一所述的方法,其特征在于,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述基于所述输入获取所述跳转图标对应的所述跳转目标视频的所述跳转时间,还包括:
    基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;
    基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。
  13. 一种视频服务器设备,其特征在于,包括:
    发送模块,用于向终端设备发送第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标;
    接收模块,用于接收所述终端设备发送的用于选中所述跳转图标的输入;
    获取模块,用于基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间;
    所述发送模块,还用于向所述终端设备发送第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
  14. 根据权利要求13所述的视频服务器设备,其特征在于,所述接收模块用于接收所述终端设备发送的用于选中所述跳转图标的所述输入,具体包括:
    接收子模块,用于接收所述终端设备发送的所述输入的输入位置信息;
    确定子模块,用于根据所述输入位置信息,确定所述输入选中所述跳转图标。
  15. 根据权利要求13所述的视频服务器设备,其特征在于,在所述发送模块用于向所述终端设备发送所述第一视频图像之前,还包括:
    渲染模块,用于在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的视频图像。
  16. 根据权利要求13所述的视频服务器设备,其特征在于,在所述发送模块用于向所述终端设备发送所述第一视频图像之前,还包括:
    所述渲染模块,还用于在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息,所述第一视频图像为所述第一视频中所述视频帧标识符对应的一帧视频图像。
  17. 根据权利要求13-16任一项所述的视频服务器设备,其特征在于,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像。
  18. 根据权利要求13-16任一项所述的视频服务器设备,其特征在于,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述获取模块用于基于所述输入获取所述跳转图标对应的所述跳转目标视频的所述跳转时间,还包括:
    所述获取模块,还用于基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;
    确定模块,用于基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。
  19. 一种终端设备,其特征在于,包括:
    播放模块,用于播放第一视频的一帧第一视频图像,所述第一视频图像包含跳转图标;
    接收模块,用于接收用于选中所述跳转图标的输入;
    获取模块,用于基于所述输入获取所述跳转图标对应的跳转目标视频的跳转时间;
    所述播放模块,还用于播放第二视频图像,所述第二视频图像为所述跳转目标视频的所述跳转时间对应的一帧视频图像。
  20. 根据权利要求19所述的终端设备,其特征在于,所述接收模块接收用于选中所述跳转图标的所述输入,具体包括:
    接收子模块,用于接收所述输入;
    获取子模块,用于获取所述输入在所述第一视频图像的输入位置信息;
    确定子模块,用于根据所述输入位置信息,确定所述输入选中所述跳转图标。
  21. 根据权利要求19所述的终端设备,其特征在于,在所述播放模块用于播放所述第一视频图像之前,还包括:
    渲染模块,用于在所述第一视频图像的跳转位置处渲染所述跳转图标,所述第一视频图像为所述第一视频中视频帧标识符对应的一帧视频图像。
  22. 根据权利要求19所述的终端设备,其特征在于,在所述播放模块用于播放所述第一视频图像之前,还包括:
    所述渲染模块,还用于在所述第一视频图像的所述跳转位置处渲染跳转视频提示信息,所述第一视频图像为所述第一视频中所述视频帧标识符对应的一帧视频图像。
  23. 根据权利要求19-22任一所述的终端设备,其特征在于,所述跳转目标视频为所述第一视频,所述第二视频图像为所述第一视频的所述跳转时间对应的一帧视频图像。
  24. 根据权利要求19-22任一所述的终端设备,其特征在于,所述跳转目标视频为第二视频,所述第二视频图像为所述第二视频的所述跳转时间对应的一帧视频图像,所述获取模块用于基于所述输入获取所述跳转图标对应的所述跳转目标视频的所述跳转时间,还包括:
    所述获取模块,还用于基于所述输入获取所述跳转图标对应的所述第二视频的播放地址;
    确定模块,用于基于所述第二视频的所述播放地址和所述第二视频的所述跳转时间确定所述第二视频图像。
PCT/CN2019/121439 2018-12-04 2019-11-28 一种控制vr视频播放的方法及相关装置 WO2020114297A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19892508.3A EP3873099A4 (en) 2018-12-04 2019-11-28 METHOD OF CONTROLLING VR VIDEO PLAYBACK AND ASSOCIATED DEVICE
US17/338,261 US11418857B2 (en) 2018-12-04 2021-06-03 Method for controlling VR video playing and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811475821.5A CN111277866B (zh) 2018-12-04 2018-12-04 一种控制vr视频播放的方法及相关装置
CN201811475821.5 2018-12-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/338,261 Continuation US11418857B2 (en) 2018-12-04 2021-06-03 Method for controlling VR video playing and related apparatus

Publications (1)

Publication Number Publication Date
WO2020114297A1 true WO2020114297A1 (zh) 2020-06-11

Family

ID=70975146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121439 WO2020114297A1 (zh) 2018-12-04 2019-11-28 一种控制vr视频播放的方法及相关装置

Country Status (4)

Country Link
US (1) US11418857B2 (zh)
EP (1) EP3873099A4 (zh)
CN (1) CN111277866B (zh)
WO (1) WO2020114297A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709542B (zh) * 2020-10-09 2023-09-19 天翼数字生活科技有限公司 一种交互式全景视频播放的方法和***
CN114584840B (zh) * 2022-02-28 2024-02-23 北京梧桐车联科技有限责任公司 音视频播放方法、装置及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
US20150358650A1 (en) * 2014-06-06 2015-12-10 Samsung Electronics Co., Ltd. Electronic device, control method thereof and system
CN107547922A (zh) * 2016-10-28 2018-01-05 腾讯科技(深圳)有限公司 信息处理方法、装置及***
CN108376424A (zh) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 用于对三维虚拟环境进行视角切换的方法、装置、设备及存储介质
CN108769814A (zh) * 2018-06-01 2018-11-06 腾讯科技(深圳)有限公司 视频互动方法、装置及可读介质

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US6195497B1 (en) * 1993-10-25 2001-02-27 Hitachi, Ltd. Associated image retrieving apparatus and method
US20070180488A1 (en) * 2006-02-01 2007-08-02 Sbc Knowledge Ventures L.P. System and method for processing video content
US8667533B2 (en) * 2010-04-22 2014-03-04 Microsoft Corporation Customizing streaming content presentation
WO2015159128A1 (en) * 2014-04-16 2015-10-22 Telefonaktiebolaget L M Ericsson (Publ) System and method of providing direct access to specific timestamp points of streamed video content during consumption on a limited interaction capability device
CN104219584B (zh) * 2014-09-25 2018-05-01 广东京腾科技有限公司 基于增强现实的全景视频交互方法和***
US10181219B1 (en) * 2015-01-21 2019-01-15 Google Llc Phone control and presence in virtual reality
US9959677B2 (en) * 2015-05-26 2018-05-01 Google Llc Multidimensional graphical method for entering and exiting applications and activities in immersive media
US9824723B1 (en) * 2015-08-27 2017-11-21 Amazon Technologies, Inc. Direction indicators for panoramic images
CN108632674B (zh) * 2017-03-23 2021-09-21 华为技术有限公司 一种全景视频的播放方法和客户端
CN108882018B (zh) * 2017-05-09 2020-10-20 阿里巴巴(中国)有限公司 虚拟场景中的视频播放、数据提供方法、客户端及服务器
CN107908290A (zh) 2018-01-12 2018-04-13 四川超影科技有限公司 一种基于剧情选择的vr视频播放***

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20150358650A1 (en) * 2014-06-06 2015-12-10 Samsung Electronics Co., Ltd. Electronic device, control method thereof and system
CN107547922A (zh) * 2016-10-28 2018-01-05 腾讯科技(深圳)有限公司 信息处理方法、装置及***
CN108376424A (zh) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 用于对三维虚拟环境进行视角切换的方法、装置、设备及存储介质
CN108769814A (zh) * 2018-06-01 2018-11-06 腾讯科技(深圳)有限公司 视频互动方法、装置及可读介质

Non-Patent Citations (1)

Title
See also references of EP3873099A4 *

Also Published As

Publication number Publication date
CN111277866B (zh) 2022-05-10
EP3873099A4 (en) 2022-01-05
CN111277866A (zh) 2020-06-12
US11418857B2 (en) 2022-08-16
US20210297753A1 (en) 2021-09-23
EP3873099A1 (en) 2021-09-01

Similar Documents

Publication Publication Date Title
JP7017175B2 (ja) 情報処理装置、情報処理方法、プログラム
US20220155913A1 (en) Accessing item information for an item selected from a displayed image
US10449732B2 (en) Customized three dimensional (3D) printing of media-related objects
JP5732129B2 (ja) 拡大表示ナビゲーション
JP6277329B2 (ja) 立体広告枠決定システム、ユーザ端末および立体広告枠決定コンピュータ
WO2018103384A1 (zh) 一种360度全景视频的播放方法、装置及***
CN109743584B (zh) 全景视频合成方法、服务器、终端设备及存储介质
US20120144312A1 (en) Information processing apparatus and information processing system
US11418857B2 (en) Method for controlling VR video playing and related apparatus
CN104023181A (zh) 信息处理方法及装置
WO2020093862A1 (zh) 一种vr视频处理的方法及相关装置
JP6078476B2 (ja) メディアアセットに関する説明情報の表示をカスタマイズする方法
CN112188219B (zh) 视频接收方法和装置以及视频发送方法和装置
EP2942949A1 (en) System for providing complex-dimensional content service using complex 2d-3d content file, method for providing said service, and complex-dimensional content file therefor
TW201840196A (zh) 視訊推薦方法、伺服器及客戶端
US11336963B2 (en) Method and apparatus for playing a 360-degree video
JP5861684B2 (ja) 情報処理装置、及びプログラム
AU2020101686A4 (en) Technology adapted to provide a user interface via presentation of two-dimensional content via three-dimensional display objects rendered in a navigable virtual space
US20230419613A1 (en) Multi-camera toggling method and apparatus, device, and storage medium
US20240062496A1 (en) Media processing method, device and system
JP7038869B1 (ja) コンピュータプログラム、方法及びサーバ装置
US20220270368A1 (en) Interactive video system for sports media

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892508

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019892508

Country of ref document: EP

Effective date: 20210528

NENP Non-entry into the national phase

Ref country code: DE