WO2022078167A1 - Interactive video creation method, apparatus, device, and readable storage medium - Google Patents

Interactive video creation method, apparatus, device, and readable storage medium - Download PDF

Info

Publication number
WO2022078167A1
WO2022078167A1 (PCT/CN2021/119639; CN2021119639W)
Authority
WO
WIPO (PCT)
Prior art keywords: component, interactive, video, information, scene
Prior art date
Application number
PCT/CN2021/119639
Other languages
English (en)
French (fr)
Inventor
吕露
洪薇
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2022078167A1
Priority to US 17/981,127 (published as US20230057703A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N21/4725End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8541Content authoring involving branching, e.g. to different story endings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8545Content authoring for generating interactive applications

Definitions

  • the embodiments of the present application relate to the field of multimedia, and in particular, to a method, apparatus, device, and readable storage medium for creating an interactive video.
  • Interactive video is a video form that sets up interactive components and provides users with plot interaction.
  • the content being played in the interactive video shows character A talking with character B, and character A asks character B, "What shall we eat tonight, fried chicken or hot pot?" Two options, fried chicken and hot pot, are displayed during the playback of the interactive video, and different video clips of subsequent plot development are played according to the option the user chooses.
  • the interactive components are set according to the connection relationship of different video clips, so as to connect multiple video clips to form an interactive video with a complete plot.
  • the interactive component corresponding to a plot is selected, so as to choose how the plot of the interactive video develops.
  • Embodiments of the present application provide a method, apparatus, device, and readable storage medium for creating an interactive video.
  • the technical solution is as follows:
  • a method for creating an interactive video is provided, which is applied to a computer device, and the method includes:
  • displaying a creation interface of the interactive video, where the creation interface includes a video editing preview area and a component editing area, and the component editing area includes an information viewing option;
  • setting a first interactive segment in the video editing preview area, where the first interactive segment is used to provide an information collection scene;
  • in response to a selection operation on the information viewing option, setting an information viewing component at a target scene position in the first interactive segment, where the information viewing component is used to display information that can be collected in the information collection scene; and
  • generating the interactive video according to the information viewing component set in the first interactive segment.
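  • For illustration only, the following TypeScript sketch models the data implied by these steps; every type and function name here (InteractiveVideoProject, InfoViewingComponent, generateInteractiveVideo, and so on) is a hypothetical assumption and not part of the claimed method.

```typescript
// Hypothetical data model for the creation flow described above.
interface InfoViewingComponent {
  id: string;
  scenePosition: { x: number; y: number; z: number }; // target scene position
  title?: string;    // scene information, e.g. "view the note in the door crack"
  content?: string;  // component information, e.g. the text of the note
}

interface InteractiveSegment {
  id: string;
  kind: "video" | "vr-scene";          // ordinary clip or VR information collection scene
  infoComponents: InfoViewingComponent[];
}

interface InteractiveVideoProject {
  segments: InteractiveSegment[];
}

// Set a first interactive segment in the video editing preview area.
function addFirstInteractiveSegment(project: InteractiveVideoProject, segment: InteractiveSegment): void {
  project.segments.push(segment);
}

// In response to selecting the information viewing option, place an
// information viewing component at a target scene position in the segment.
function addInfoViewingComponent(segment: InteractiveSegment, component: InfoViewingComponent): void {
  segment.infoComponents.push(component);
}

// Generate the interactive video from the configured segments and components.
function generateInteractiveVideo(project: InteractiveVideoProject): string {
  return JSON.stringify(project); // placeholder for the real packaging step
}
```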
  • an interactive video creation device comprising:
  • a display module, configured to display the creation interface of the interactive video, where the creation interface includes a video editing preview area and a component editing area, and the component editing area includes an information viewing option;
  • a setting module, configured to set a first interactive segment in the video editing preview area, where the first interactive segment is used to provide an information collection scene;
  • the setting module is further configured to, in response to the selection operation on the information viewing option, set an information viewing component at the position of the target scene in the first interactive segment, and the information viewing component is used to display the information that can be collected in the information collection scene;
  • a generating module, configured to generate an interactive video according to the information viewing component set in the first interactive segment.
  • in another aspect, a computer device is provided, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the method for creating an interactive video as described in any one of the above aspect and its optional embodiments.
  • in another aspect, a computer-readable storage medium is provided, which stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the method for creating an interactive video as described in any one of the above aspect and its optional embodiments.
  • a computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for creating an interactive video described in any one of the above aspects and its optional embodiments.
  • a first interactive segment is set in the interactive video, and an information viewing component is set in the first interactive segment; the information that can be collected in the information collection scene provided by the first interactive segment is displayed through the information viewing component, which increases the amount of information and the interaction forms of the interactive segments in the interactive video. The user can view the information contained in the first interactive segment by selecting the information viewing component in the first interactive segment, which improves the efficiency with which users collect information during interaction with the interactive video.
  • FIG. 1 is a schematic diagram of an interaction method for an interactive video provided by an exemplary embodiment of the present application
  • FIG. 2 is a schematic diagram of an interaction method for an interactive video provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of an implementation environment of a method for creating an interactive video provided by an exemplary embodiment of the present application
  • FIG. 4 is a flowchart of a method for creating an interactive video provided by an exemplary embodiment of the present application
  • FIG. 5 is a schematic diagram of setting an information viewing component in a creation interface provided based on the embodiment shown in FIG. 4;
  • FIG. 6 is a flowchart of a method for creating an interactive video provided by another exemplary embodiment of the present application.
  • FIG. 7 is a schematic interface diagram of setting a voting component in a creation interface provided based on the embodiment shown in FIG. 6;
  • FIG. 8 is a flowchart of a method for creating an interactive video provided by another exemplary embodiment of the present application.
  • FIG. 9 is a schematic diagram of setting a character selection component in a creation interface provided based on the embodiment shown in FIG. 8;
  • FIG. 10 is a schematic diagram of a plot selection control setting interface provided based on the embodiment shown in FIG. 8;
  • FIG. 11 is a schematic diagram of a setting interface of a global control provided based on the embodiment shown in FIG. 8;
  • FIG. 12 is a structural block diagram of an interactive video playback system provided by an exemplary embodiment of the present application.
  • FIG. 13 is a structural block diagram of an interactive video playback system provided by another exemplary embodiment of the present application.
  • FIG. 14 is a schematic diagram of data interaction provided by an exemplary embodiment of the present application.
  • FIG. 15 is a structural block diagram of an apparatus for creating an interactive video provided by an exemplary embodiment of the present application;
  • FIG. 16 is a structural block diagram of an apparatus for creating an interactive video provided by another exemplary embodiment of the present application;
  • FIG. 17 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • FIG. 1 shows an exemplary interface diagram for creating an interactive video.
  • the interactive video creation interface 100 displays the connection relationship of the interactive video.
  • the video clip 120 is connected to four video clips, including video clip 131, video clip 132, video clip 133, and video clip 134; that is, after the video clip 120 is played, plot options are displayed in the interface, and the plot options can be displayed during the playback of the video clip 120, or the video clip 120 can be paused for display after the video clip 120 is played.
  • plot option A corresponds to video clip 131, plot option B corresponds to video clip 132, plot option C corresponds to video clip 133, and plot option D corresponds to video clip 134.
  • on the basis of the above-mentioned plot selection, the interactive video provided in the embodiments of the present application also provides interactive forms such as multi-person participation interaction, video content interaction, voting interaction, and information exchange interaction.
  • the user watches the background video clip in the interactive video.
  • the background video clip is used to tell the background of the story corresponding to the interactive video.
  • the computer device displays interactive form selection controls in the interactive video playback interface, including a participation control and a spectator control.
  • when the participation control is selected, it means that the user participates in the interactive process of the interactive video as a player.
  • An interactive video corresponds to at least two players.
  • when the spectator control is selected, it indicates that the user watches the interactive process of the interactive video.
  • after the participation control is selected, player character A, player character B, and player character C are displayed.
  • Players can choose the role to play among the three roles, or the system automatically assigns the corresponding role to each player.
  • At least two virtual reality (Virtual Reality, VR) scenes are also provided in the interactive video, and the VR scenes correspond to the player characters or are related to the plot.
  • Players can choose from at least two VR scenes, so as to display the selected VR scene, conduct information query in the VR scene, and discuss and analyze with other players according to the queried information.
  • Multiple players can also choose the plot direction, and determine the plot direction selected by most players as the plot direction of the interactive video.
  • Voting controls are also provided in the interactive video to vote for player characters, select the player character that best meets the voting requirements from multiple player characters, and play the ending video according to the voting results.
  • the sequence of the various stages in the above interaction process is only a schematic example; in the actual interaction process, the sequence of the above stages can be freely combined according to the creator's design, which is not limited in this embodiment of the present application.
  • the creation structure of the interactive video includes the video structure of the interactive video, which includes: a start stage 210, used to play the story background video of the interactive video; and an interactive form selection component 220, including a participation component 221 and a spectator component 222, where the participation component 221 is used to allow the player to participate in the interactive process of the interactive video, and the spectator component 222 is used to allow the user to watch the interactive process of the interactive video.
  • the participation component 221 is also provided with an identity selection component.
  • the identity selection component includes different options for different identities in the interactive video; among these options, players can choose the identity role they wish to play, and different identity roles also have different plot videos, which are used for players to understand the story plot corresponding to the identity role they play.
  • the interactive video also includes a VR video 230.
  • An interactive component is set in the VR video 230, and the interactive component is used to implement the information viewing function in the VR video 230. For example, if the VR video is implemented as a living room scene, an interactive component is set at the corner of the sofa in the living room; when the interactive component is clicked, the letter left at the corner of the sofa and the content of the letter are displayed.
  • the interactive video also includes a plot selection component 240; after candidate plots are selected in the plot selection component 240, the plot direction of the interactive video is controlled, and the plot is pushed toward the candidate plot with more selections.
  • the interactive video also includes a voting component 250 for selecting an identity role that meets the voting requirements among the identity roles played by multiple players, and correspondingly playing different result videos according to the selection of the identity role.
  • the start video 260 is first played, and the start video 260 is used to express the story background of the interactive video; after the start video 260 is played, or during its playback, the interactive form selection component 270 is displayed, which includes a participation component corresponding to a participation interaction mode 271 and a spectator component corresponding to a spectator mode 272.
  • the participation interaction mode 271 is used to indicate that the player participates in the interactive process of the interactive video, for example, viewing scene information in the interactive scenes of the interactive video.
  • the spectator mode 272 is used to indicate that the user watches the interactive process of the interactive video; during spectating, the spectator can view the perspective video of a character in the story line but cannot participate in voting; or, the spectator can check the direction of the story line according to the players' story line selections, watch content such as the players' speeches, and finally participate in the voting selection.
  • the player can also choose an identity: among the different options corresponding to different identities in the interactive video, the player can choose the identity role they want to play, and different identity roles also have different plot videos; after the player completes the identity role selection, the story line corresponding to the role they play can be understood through the plot video.
  • the interactive video also includes a VR scene 280 , and an interactive component is set in the VR scene 280 , and the interactive component is used to implement an information viewing function in the VR scene 280 .
  • the interactive video also includes a plot selection process 290 , in which the candidate plots are selected in the plot selection process 290 to control the plot direction of the interactive video.
  • the interactive video also includes a multi-person voting session 200 for selecting an identity role that meets the voting requirements among the identity roles played by multiple players, and correspondingly playing different result videos according to the selection of the identity role.
  • FIG. 3 is a schematic diagram of an implementation environment of a method for creating an interactive video provided by an exemplary embodiment of the present application, and the implementation environment includes: a terminal 310 and a server 320;
  • the terminal 310 is installed with a multimedia application, such as a video playback application or a video processing application; the multimedia application is provided with an interactive video creation function, through which the user creates an interactive video in the multimedia application and adds interactive video clips and components to the newly created interactive video.
  • the terminal 310 finally generates an interactive video after connecting the interactive video clips and components according to the relationship of the plot.
  • the terminal 310 uploads the interactive video to the server 320 through the communication network 330, so that the player terminal can obtain the interactive video from the server 320 and interact with other player terminals in the interactive video.
  • the player terminal creates a room for the interactive video, and invites other player terminals to participate in the interaction of the interactive video.
  • the above server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart TV, a smart watch, etc., but is not limited thereto.
  • the terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • with reference to the above description, the method for creating an interactive video provided by the embodiments of the present application is described below.
  • FIG. 4 is a flowchart of a method for creating an interactive video provided by an exemplary embodiment of the present application.
  • taking the method being executed by a terminal as an example for description, the method includes the following steps:
  • Step 401: An interactive video creation interface is displayed, where the creation interface includes a video editing preview area and a component editing area, and the component editing area includes an information viewing option.
  • the creation interface is used to create an interactive video, wherein the creation interface is used to determine the overall structure of the interactive video; and/or, used to set video clips and interactive components in the interactive video.
  • the video editing preview area is used to set and preview the video clips in the interactive video; the component editing area is used to set the interactive components based on the set video clips.
  • the video editing preview area is also used to preview the overall structure of the interactive video, such as: the connection relationship between each video clip, and/or the relationship between each video clip and the set components .
  • the above component editing area includes component editing options, and the component editing options include an information viewing option; the information viewing option is used to instruct adding an information viewing component to the video, so that when a selection operation on the information viewing component is received, the information contained in the component is displayed. Optionally, the information viewing component is used to be set in a VR scene; or, the information viewing component is used to be set at a specified position in a video clip.
  • the video editing preview area and the component editing area are two areas displayed side by side in the creation interface; illustratively, the video editing preview area is located on the left side of the creation interface, and the component editing area is located on the right side of the creation interface.
  • the video clip is first set in the video editing preview area, so that interactive components are set for the video clip; for example, an information viewing component is set at any position of the video clip, or a selection component is set at the end position of the video clip.
  • when no video clip is set in the video editing preview area, or the setting of the interactive component does not specify a corresponding video clip, the interactive component is set at the start position or end position of the interactive video, or within a specified time period.
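  • As a rough sketch of this fallback placement rule (not the actual interface of the application), the helper below anchors a component to a clip when one is specified and otherwise falls back to the start position, the end position, or a given time window; all names and the seconds-based time model are assumptions.

```typescript
// Hypothetical placement resolver: anchor a component to a clip if one is
// specified, otherwise fall back to the start, the end, or a given time window.
type Anchor =
  | { kind: "clip"; clipId: string }
  | { kind: "start" }
  | { kind: "end" }
  | { kind: "window"; fromSec: number; toSec: number };

function resolveAnchor(clipId?: string, windowSec?: [number, number], preferEnd = false): Anchor {
  if (clipId) return { kind: "clip", clipId };
  if (windowSec) return { kind: "window", fromSec: windowSec[0], toSec: windowSec[1] };
  return preferEnd ? { kind: "end" } : { kind: "start" };
}
```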
  • Step 402: A first interactive segment is set in the video editing preview area.
  • the first interactive segment is used to provide an information collection scene.
  • the setting manner of the first interactive segment includes at least one of the following manners:
  • in one manner, the first interactive segment has been stored locally in the terminal; after the terminal opens the multimedia application and displays the creation interface, the user drags the first interactive segment into the creation interface, thereby setting the first interactive segment in the creation interface;
  • after the drag is completed, the playback position of the first interactive segment is set; for example, the first video clip to be played is set as No. 1, and the second video clip to be played is set as No. 2;
  • in another manner, the first interactive segment has been stored locally on the terminal; after the terminal opens the multimedia application and displays the creation interface, the user selects the control for uploading a video clip in the video editing preview area and correspondingly selects the locally stored first interactive segment; after the upload of the first interactive segment is completed, the playback position of the first interactive segment is set.
  • alternatively, the 1st to the nth video clips are set in sequence according to the playback order, where n is a positive integer, and the video clips include the first interactive segment;
  • in another manner, an overall framework of the interactive video is first created in the creation interface, and the first interactive segment is uploaded at the playback position corresponding to the first interactive segment in the overall framework.
  • the above-mentioned setting manner of the first interactive segment is only a schematic example, and the specific setting manner of the first interactive segment is not limited in the embodiments of the present application.
  • the first interactive segment is an ordinary video segment; or, the first interactive segment is a VR video segment, that is, a VR scene is provided in the first interactive segment, and the player can interact with objects in the VR scene.
  • an information viewing component can also be set in the VR scene to indicate the information contained in the objects in the VR scene.
  • the first interactive segment is a virtual reality segment, and correspondingly, the information collection scene is a three-dimensional virtual scene presented by the virtual reality segment.
  • the first interactive segment may also be implemented as a VR three-dimensional virtual scene, and the VR three-dimensional virtual scene is set with a corresponding viewing duration limit; the player can search the VR three-dimensional virtual scene for information within this duration limit.
  • when it is set, the three-dimensional virtual scene model stored locally on the terminal is imported into the multimedia application, and the display position of the VR three-dimensional virtual scene is set.
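  • A minimal sketch of how such an imported VR scene entry with a viewing duration limit might be represented is given below; the field names, the example path, and the millisecond unit are assumptions made only for illustration.

```typescript
// Hypothetical description of an imported VR three-dimensional virtual scene
// used as the first interactive segment, with a viewing duration limit.
interface VrSceneSegment {
  modelPath: string;        // locally stored 3D scene model imported into the app
  displayPosition: number;  // playback position of the scene in the interactive video
  viewingLimitMs: number;   // players may search for information only within this limit
}

const exampleScene: VrSceneSegment = {
  modelPath: "scenes/room.glb",   // illustrative path
  displayPosition: 2,             // e.g. shown as the second item in playback order
  viewingLimitMs: 90_000,         // e.g. 90 seconds of free exploration
};
```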
  • Step 403: In response to a selection operation on the information viewing option, an information viewing component is set at the target scene position in the first interactive segment.
  • the information viewing component is used to display collectible information in the information collection scene.
  • the collectible information includes scene information and component information; that is, the information viewing component is set with scene information corresponding to the target scene position. For example, the information viewing component is set with the scene information "view the note in the door crack", and the information viewing component corresponds to the note at the door-crack position.
  • the information viewing component also corresponds to specific component information; for example, the above scene information "view the note in the door crack" also corresponds to the content of the note.
  • the target display scene is at least one scene included in the information collection scene.
  • the first interactive segment includes multiple information collection scenes, and each information collection scene corresponds to at least one display scene.
  • for example, the information collection scene may be an exhibition hall, where the rest area of the exhibition hall is one display scene and the commodity display zone of the exhibition hall is another display scene.
  • the target scene location is a 3D location in the 3D virtual scene.
  • as shown in FIG. 5, the information viewing component 521 is located in the VR three-dimensional virtual scene 511, and the target scene position where the information viewing component 521 is located is a three-dimensional position.
  • when setting the information viewing component for the first interactive segment, any one of the following situations is included:
  • in the first situation, the first interactive segment is implemented as a first interactive video, and the information viewing component is correspondingly displayed in the key frame (I frame) in which it is set and in the video frames (P frames, B frames) corresponding to that key frame;
  • in the second situation, the first interactive segment is implemented as a virtual reality segment; that is, if the first interactive segment is a VR three-dimensional virtual scene with a viewing duration limit, the information viewing component is set at the target position in the VR three-dimensional virtual scene, and the information viewing component is displayed at that target position.
  • the user drags the information viewing component to adjust the position of the information viewing component in the VR three-dimensional virtual scene.
  • an information viewing component is set at a preset position in the target display scene of the first interactive segment, and a drag operation on the information viewing component is received, where the drag operation is used to move the setting position of the information viewing component in the target display scene; the target scene position of the information viewing component in the target display scene is determined according to the drag operation, and the information viewing component is set at the target scene position.
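  • A minimal sketch of this drag interaction is shown below, assuming a simple delta-based drag event; the event shape and helper names are hypothetical.

```typescript
// Hypothetical drag handling: start from a preset position, apply the drag
// delta, and commit the result as the component's target scene position.
interface Vec3 { x: number; y: number; z: number }

interface DragOperation { deltaX: number; deltaY: number; deltaZ: number }

function applyDrag(preset: Vec3, drag: DragOperation): Vec3 {
  return {
    x: preset.x + drag.deltaX,
    y: preset.y + drag.deltaY,
    z: preset.z + drag.deltaZ,
  };
}

// Usage: the component is first placed at a preset position, then moved by dragging.
const preset: Vec3 = { x: 0, y: 1, z: 0 };
const targetScenePosition = applyDrag(preset, { deltaX: 1.2, deltaY: 0, deltaZ: -0.5 });
```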
  • the information content included in the information viewing component is set.
  • the content included in the information viewing component includes at least one of scene information and component information.
  • the scene information is used to indicate the title of the information content included in the information viewing component, and the component information is used to indicate the information content included in the information viewing component.
  • the information viewing component also corresponds to an information editing control; that is, the component editing area further includes a component editing control corresponding to the information viewing component. In response to a trigger operation on the component editing control, an information editing area is displayed, which includes an information title editing area and an information content editing area; a title input operation is received in the information title editing area to generate the title corresponding to the information viewing component, and a content input operation is received in the information content editing area to generate the information content corresponding to the information viewing component.
  • the information editing area may be part or all of the component editing area, and the editing order of the above-mentioned information title and information content is not limited.
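  • A sketch of this editing flow follows, with hypothetical handler names for the two input operations; either input may be handled first, matching the note above that the editing order is not limited.

```typescript
// Hypothetical editing handlers for the information title and information content.
interface InfoViewingComponentDraft {
  title?: string;    // scene information (title)
  content?: string;  // component information (body)
}

function onTitleInput(draft: InfoViewingComponentDraft, input: string): void {
  draft.title = input;   // generates the title corresponding to the component
}

function onContentInput(draft: InfoViewingComponentDraft, input: string): void {
  draft.content = input; // generates the information content of the component
}

// The order of the two calls is not constrained.
const draft: InfoViewingComponentDraft = {};
onContentInput(draft, "Meet me at the old pier at midnight.");
onTitleInput(draft, "View the note in the door crack");
```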
  • as shown in FIG. 5, the creation interface 500 includes a video editing preview area 510 and a component editing area 520, where the video editing preview area 510 displays a VR three-dimensional virtual scene 511 that has been set in the interactive video; the VR three-dimensional virtual scene 511 is set with the name "Kobayashi's room", and a top view 512 of the three-dimensional scene is also displayed correspondingly.
  • Components that can be set in the VR 3D virtual scene 511 are displayed in the component editing area 520, including the information viewing component 521.
  • When setting the information viewing component 521 in the VR three-dimensional virtual scene 511, the information viewing component 521 is dragged into the VR three-dimensional virtual scene 511 and then dragged to the corresponding target position.
  • one information viewing component 521 may be set in the VR three-dimensional virtual scene 511, or multiple information viewing components 521 may be set.
  • the number of information viewing components 521 that can be set corresponds to an upper limit, or an unlimited number of information viewing components 521 can be set.
  • Step 404: An interactive video is generated according to the information viewing component set in the first interactive segment.
  • that is, an interactive video is finally generated for video interaction according to the first interactive segment and the information viewing component set in the first interactive segment.
  • a first interactive segment is set in the interactive video, and an information viewing component is set in the first interactive segment; the information viewing component is set with scene information corresponding to the scene where it is located in the first interactive segment, thereby increasing the amount of information and the interaction forms of the interactive segments in the interactive video. The user can view the information contained in the first interactive segment by selecting the information viewing component in the first interactive segment, which improves the efficiency of human-computer interaction during the interaction with the interactive video.
  • the component editing options further include a result selection option; that is, the player can select a result according to the information viewed in the first interactive segment and view different interactive video endings according to the selected result.
  • FIG. 6 is a flowchart of a method for creating an interactive video provided by another exemplary embodiment of the present application. Taking the method being executed by a terminal as an example, the method includes:
  • Step 601: An interactive video creation interface is displayed, where the creation interface includes a video editing preview area and a component editing area, and the component editing area includes an information viewing option.
  • the component editing area also includes a result selection option, which is used to provide the player with selectable development results of the interactive video story line; that is, the component editing area includes component editing options, and the component editing options further include the result selection option.
  • Step 602: A first interactive segment is set in the video editing preview area.
  • Step 603: In response to a selection operation on the information viewing option, an information viewing component is set at the target scene position in the first interactive segment.
  • Step 604: A second interactive video clip is set in the video editing preview area.
  • the second interactive video clip is used to provide result presentation content.
  • a second interactive video clip is set in the video editing preview area, where the second interactive video clip is used to guide the development of the plot.
  • the setting manner of the second interactive video clip includes any one of the following manners:
  • a second interactive video clip is stored locally on the terminal. After the terminal opens the multimedia application and displays the creation interface, the user drags the second interactive video clip into the creation interface, thereby setting the second interactive video clip in the creation interface.
  • a second interactive video clip is stored locally on the terminal. After the terminal opens the multimedia application and displays the creation interface, the user selects the control for uploading the video clip in the video editing preview area, and correspondingly selects the locally stored second interactive video clip. Interactive video clips. After the upload of the second interactive video clip is completed, a play position of the second interactive video clip is set.
  • alternatively, the 1st to the nth video clips are set in sequence according to the playback order, where n is a positive integer, and the video clips include the second interactive video clip;
  • in another manner, an overall framework of the interactive video is first created in the creation interface, and the second interactive video clip is uploaded at the playback position corresponding to the second interactive video clip in the overall framework.
  • the second interactive video clip is used to guide the display of the result selection component. It is worth noting that the second interactive video clip and the above-mentioned first interactive segment can be implemented as the same clip or as different clips.
  • Step 605: In response to a selection operation on the result selection option, a result selection component corresponding to the second interactive video clip is set.
  • the result selection component is used for selecting the ending of the interactive video.
  • the result selection component includes at least two candidate items, where each candidate item corresponds to one plot development result; or, i candidate items correspond to k plot development results, where i and k are both positive integers and i ≥ k.
  • in this embodiment, the case where the second interactive video clip is set first and the result selection component is set afterwards is taken as an example for description; the result selection component can also be set first and the second interactive video clip set afterwards, that is, the setting order of the result selection component and the second interactive video clip is not limited.
  • the result selection component is set at the end of the second interactive video clip, or the result selection component is set at a connecting position between the second interactive video clip and the result video.
  • the result selection component includes at least two candidate items, and the at least two candidate items include a target candidate item; a result video setting operation on the target candidate item is received, and a target result video is set as the result video associated with the target candidate item according to the result video setting operation.
  • the setting method of the result video also differs depending on the interaction mode, including any one of the following situations:
  • in the first situation, the interactive video is a video in which a single player participates in the interaction; that is, during the interaction, the player views the information in the first interactive segment, continues to watch the second interactive video clip in the interactive video, then makes a selection among the at least two candidate items, and the corresponding result video is played according to the player's selection. Therefore, in the process of setting the candidate items and the result videos, it is only necessary to determine the correspondence between the candidate items and the result videos, so that when a selection operation on one of the candidate items is received, the corresponding result video is played;
  • in the second situation, the interactive video is a video in which multiple players participate in the interaction; that is, during the interaction, multiple players cooperate to view information in the first interactive segment, continue to watch the second interactive video clip, and then each makes a selection among the at least two candidate items. For the selections of the multiple players, the result video corresponding to the candidate item that is selected more times is played; therefore, in the setting process of the candidate items and the result videos, a score corresponding to the number of times each candidate item is selected is set, so that the candidate item with the highest score is determined from the scores respectively corresponding to the at least two candidate items, and the result video corresponding to that candidate item is played.
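  • As a rough illustration of the multi-player case described above, the sketch below keeps a score per candidate item based on the players' selections and returns the result video of the highest-scoring candidate; all names are hypothetical and ties are broken by list order here.

```typescript
// Hypothetical tally for the multi-player result selection described above.
interface Candidate {
  id: string;
  resultVideoId: string; // result video associated with this candidate item
}

function pickResultVideo(candidates: Candidate[], selections: string[]): string {
  // Score each candidate by how many players selected it.
  const scores = new Map<string, number>();
  for (const id of selections) {
    scores.set(id, (scores.get(id) ?? 0) + 1);
  }
  // Choose the candidate with the highest score (ties broken by list order).
  let best = candidates[0];
  let bestScore = scores.get(best.id) ?? 0;
  for (const c of candidates.slice(1)) {
    const s = scores.get(c.id) ?? 0;
    if (s > bestScore) {
      best = c;
      bestScore = s;
    }
  }
  return best.resultVideoId;
}

// Usage: three players vote, candidate "B" wins and its result video is played.
const resultVideo = pickResultVideo(
  [{ id: "A", resultVideoId: "ending-a" }, { id: "B", resultVideoId: "ending-b" }],
  ["B", "A", "B"],
);
```

  • The single-player situation is the degenerate case of the same logic with only one selection in the list.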
  • as shown in FIG. 7, the interactive video creation interface 700 includes a set voting component 710, which includes four options: control 711 (corresponding to Master Lin), control 712 (corresponding to Butcher Zheng), control 713 (corresponding to Zhou Banxian), and control 714 (corresponding to Uncle Zhao).
  • when setting the voting component 710, any one of the following methods is included: first, the voting component 710 is dragged to any position on the interface, and the name of the voting component 710 is set correspondingly; for example, the control 711 is dragged to any position, and the corresponding name of the control 711 is set as Master Lin.
  • Step 606: An interactive video is generated.
  • that is, an interactive video is finally generated for video interaction according to the first interactive segment and the information viewing component set in the first interactive segment.
  • a first interactive segment is set in the interactive video, and an information viewing component is set in the first interactive segment; the information viewing component is set with scene information corresponding to the scene where it is located in the first interactive segment, thereby increasing the amount of information and the interaction forms of the interactive segments in the interactive video. The user can view the information contained in the first interactive segment by selecting the information viewing component in the first interactive segment, which improves the efficiency of human-computer interaction during the interaction with the interactive video.
  • in addition, a result selection component is set in the creation process of the interactive video, so that the user can predict the result of the interactive video and view the corresponding result video clip according to the predicted result, which increases the amount of information and the interaction forms of the interactive segments in the interactive video.
  • FIG. 8 is a flowchart of a method for creating an interactive video provided by another exemplary embodiment of the present application. Taking the method being executed by a terminal as an example for description, the method includes:
  • Step 801: An interactive video creation interface is displayed, where the creation interface includes a video editing preview area and a component editing area, and the component editing area includes component editing options.
  • the above component editing area is used to set interactive components based on the set video clips.
  • the component editing area includes component editing options, and the component editing options include at least one of information viewing options, result selection options, character selection options, and plot selection options.
  • the information viewing option is used to set the information viewing component in the interactive segment in the interactive video;
  • the result selection option is used to provide the result selection component for predicting or selecting the ending of the interactive video;
  • the character selection option is used to provide the user with a character selection component for selecting a character in the interactive video;
  • the plot selection option is used to provide a plot selection component for selecting the story line of the interactive video.
  • the information viewing component is used for setting in the VR scene; or, the information viewing component is used for setting at a specified position in the video clip.
  • the information viewing component is set in the VR scene as an example for description.
  • the result selection component is used to be set in the preceding video segment before the result video segment; or, the result selection component is used to be set in a result selection interface before the result video segment.
  • the character selection component is used to provide the user with a control for selecting characters in the background video clip; or, the character selection component is used to be set in a character selection interface after the background video clip; or, the character selection component is used to be set in a character selection interface before the background video clip.
  • the plot selection component is used to be set in any video clip in the interactive video to provide a choice of the subsequent plot direction; or, the plot selection component is used to be set in a plot selection interface in the interactive video to provide a choice of the subsequent plot direction.
  • Step 802: A background video clip is set in the video editing preview area.
  • This background video clip is used to provide the story background for the interactive video.
  • the background video clip is the starting video clip of the interactive video; that is, the first clip played when the interactive video starts is the background video clip.
  • when setting the background video clip, the background video clip is directly dragged into the video editing preview area; or, in the interactive video structure displayed in the video editing preview area, the position corresponding to the background video clip is selected, and the background video clip is set at this position.
  • in this embodiment, setting a background video clip for the interactive video is taken as an example for description.
  • alternatively, the interactive video does not have a background video clip; that is, when the interactive video starts, the character selection interface is directly displayed.
  • Step 803: In response to a selection operation on the character selection option, a character selection component is set after the background video clip.
  • the character selection component is used for character selection in interactive videos.
  • the character selection component includes at least two character selection options.
  • each of the at least two character selection options corresponds to one character selection component; or, one character selection component is provided, and the character selection component includes the at least two character selection options.
  • the character selection component is superimposed and displayed at the end of the background video clip; optionally, when setting the display logic of the character selection component, the character selection component is set to be displayed continuously until the user selects a certain character selection option. If the background video clip has finished playing and the user has not selected a character selection option, a designated image frame remains displayed until the user selects a certain character selection option; or, an unselected character is randomly assigned to the user.
  • the character selection interface may be set during the creation process of the interactive video, or may be selected from the background video clip; for example, any image frame in the background video clip is selected as the character selection interface, or the first frame of the background video clip is used as the character selection interface, or the last frame of the background video clip is used as the character selection interface.
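  • A minimal sketch of this display logic, under the assumption of a simple state object and hypothetical helper names, is shown below.

```typescript
// Hypothetical display logic: keep the character selection visible until the
// user picks a character; if the background clip has ended without a pick,
// hold the designated frame, or fall back to randomly assigning a free character.
interface CharacterSelectionState {
  backgroundEnded: boolean;
  chosenCharacter?: string;
  availableCharacters: string[]; // characters not yet selected by other players
}

type SelectionResult = { kind: "character"; name: string } | { kind: "hold-frame" };

function resolveCharacter(state: CharacterSelectionState, autoAssign: boolean): SelectionResult {
  if (state.chosenCharacter) return { kind: "character", name: state.chosenCharacter };
  if (state.backgroundEnded && autoAssign && state.availableCharacters.length > 0) {
    // Randomly assign one of the characters that has not been selected yet.
    const idx = Math.floor(Math.random() * state.availableCharacters.length);
    return { kind: "character", name: state.availableCharacters[idx] };
  }
  // Otherwise keep displaying the selection (over the clip, or on the designated frame).
  return { kind: "hold-frame" };
}
```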
  • a matching component and a room creation component are also provided, where the matching component is used to instruct the player to be randomly matched with other players participating in the interactive video so as to select characters, and the room creation component is used to instruct the player to create a room and invite other players to interact with the interactive video.
  • FIG. 9 shows a schematic diagram of setting a character selection component in a creation interface provided by an exemplary embodiment of the present application.
  • a character selection option 910 is displayed in the interactive video creation interface 900, and an interface 930 currently set in the interactive video is displayed in the video editing preview area 920, in which a matching control 931 and a room creation control 932 are displayed.
  • the interactive video includes at least two characters for the player to choose to play.
  • the matching control 931 indicates that the player is randomly matched to a character among the at least two characters to interact with other players; the room creation control 932 indicates that the player creates a room by themselves and invites other players, or is matched with other players, to interact.
  • the component editing options further include an interactive viewing option; that is, the component editing area also includes the interactive viewing option. A character selection component is set after the background video clip.
  • Step 804: A first interactive segment is set in the video editing preview area.
  • the first interactive segment is respectively set for the at least two character selection options and the story scene of the interactive video.
  • Step 805: In response to a selection operation on the information viewing option, an information viewing component is set at the target scene position in the first interactive segment.
  • Step 806: In response to a selection operation on the plot selection option, a plot selection component is set in the video editing area.
  • each candidate plot option corresponds to one plot selection component; or, one plot selection component is set, and at least two candidate plot options are set in the plot selection component.
  • the plot selection component is set in any video clip in the interactive video; or, the plot selection component is set in the plot selection interface in the interactive video.
  • the candidate plot option that meets the selection conditions is used as the finally selected candidate plot option.
  • illustratively, the unlocking condition of a plot is set as follows: when the number of players unlocking the plot reaches n, the corresponding plot video clip is played, where n is a positive integer.
  • as shown in FIG. 10, an interactive video 1010 and a plot selection control 1020 in the video editing preview area are displayed in the interactive video creation interface 1000, where an unlocking condition is displayed in the editing area 1030 of the plot selection control 1020: when the number of players unlocking reaches 3, the plot is unlocked.
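  • The unlocking rule shown in FIG. 10 can be sketched as a simple threshold check; the counter-based shape below is an assumption made only for illustration.

```typescript
// Hypothetical unlock check: the plot video clip is played once the number of
// players choosing to unlock the plot reaches the configured threshold (n = 3 in FIG. 10).
interface PlotOption {
  plotVideoId: string;
  unlockThreshold: number; // n, a positive integer
  unlockCount: number;     // players who have chosen to unlock this plot
}

function isUnlocked(option: PlotOption): boolean {
  return option.unlockCount >= option.unlockThreshold;
}

const hiddenPlot: PlotOption = { plotVideoId: "plot-7", unlockThreshold: 3, unlockCount: 2 };
if (isUnlocked(hiddenPlot)) {
  // play the associated plot video clip here
}
```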
  • Step 807: For the at least two candidate plot options, the plot video clip associated with each candidate plot option is set respectively.
  • each candidate plot option corresponds to one plot video clip; or, among the at least two candidate plot options, two or more candidate plot options correspond to the same plot video clip.
  • the plot video clip associated with a candidate plot option means that, when a selection signal for the candidate plot option is received, the plot video clip associated with that candidate plot option is played.
  • the interactive video is further provided with a global component, which includes at least one of a message component and a voice component.
  • a global component is a component created globally for an interactive video and can be displayed at any time period in the interactive video.
  • illustratively, the global components are set by association with timestamps of the interactive video; or, the global components are set by a correspondence with individual video clips, as sketched below.
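  • a minimal TypeScript sketch of the two binding modes for a global component (by timestamp or by clip correspondence) is shown below; the field names and display parameters are assumptions, not the disclosed data model.

```ts
// Illustrative sketch of binding a global component (message or voice) to an
// interactive video either by a timestamp range or by specific video clips.

type GlobalComponentKind = "message" | "voice";

interface GlobalComponent {
  kind: GlobalComponentKind;
  binding:
    | { mode: "timestamp"; startSec: number; endSec: number } // seconds from video start
    | { mode: "clips"; clipIds: string[] };                   // clips the component is shown with
  displayParams: { position: "top" | "bottom"; opacity: number };
}

// Decides whether the component should be visible at a playback moment.
function isGlobalComponentVisible(
  component: GlobalComponent,
  currentClipId: string,
  currentTimeSec: number,
): boolean {
  const binding = component.binding;
  if (binding.mode === "timestamp") {
    return currentTimeSec >= binding.startSec && currentTimeSec <= binding.endSec;
  }
  return binding.clipIds.includes(currentClipId);
}

// Usage: a message component shown for the whole first ten minutes.
const messageBoard: GlobalComponent = {
  kind: "message",
  binding: { mode: "timestamp", startSec: 0, endSec: 600 },
  displayParams: { position: "bottom", opacity: 0.9 },
};
console.log(isGlobalComponentVisible(messageBoard, "clip-2", 125)); // true
```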
  • as shown in FIG. 11, an interactive video 1110 in the video editing preview area and a global control setting area 1120 are displayed in the interactive video creation interface 1100, where the global control setting area 1120 includes a message option 1121 and a voice option 1122.
  • when a selection operation on the message option 1121 is received, a message component is set for the interactive video 1110 in the video editing preview area, and display parameters are set for the message component; when a selection operation on the voice option 1122 is received, a voice component is set for the interactive video 1110 in the video editing preview area, and display parameters are set for the voice component.
  • Step 808: generate the interactive video according to the first interactive segment and the other video segments.
  • optionally, the interactive video is generated according to the above first interactive segment, background video clip, second interactive video clip and other video clips, together with the components set above, as sketched below. It is worth noting that, for both the global components and the interactive components, the creator can set custom components and configure corresponding functional parameters for them to implement different functions.
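  • the following TypeScript sketch illustrates assembling an interactive video descriptor from segments and component bindings; the descriptor shape is an assumption, since the disclosure only states that the interactive video is generated from the segments and the components set for them.

```ts
// Illustrative assembly of the final interactive video descriptor from the
// configured segments and the components bound to them.

interface VideoSegment {
  id: string;
  role: "background" | "first-interactive" | "second-interactive" | "plot" | "result" | "other";
  sourceUrl: string;
}

interface ComponentBinding {
  componentId: string;
  segmentId: string; // which segment the component is attached to
}

interface InteractiveVideo {
  segments: VideoSegment[];
  bindings: ComponentBinding[];
}

function generateInteractiveVideo(
  segments: VideoSegment[],
  bindings: ComponentBinding[],
): InteractiveVideo {
  const segmentIds = new Set(segments.map(s => s.id));
  for (const b of bindings) {
    if (!segmentIds.has(b.segmentId)) {
      throw new Error(`component ${b.componentId} bound to unknown segment ${b.segmentId}`);
    }
  }
  return { segments, bindings };
}

// Usage: background clip, first interactive segment, and one bound component
// (the URLs below are placeholders).
const video = generateInteractiveVideo(
  [
    { id: "bg-1", role: "background", sourceUrl: "https://example.com/bg.mp4" },
    { id: "int-1", role: "first-interactive", sourceUrl: "https://example.com/vr-scene.json" },
  ],
  [{ componentId: "info-view-1", segmentId: "int-1" }],
);
console.log(video.segments.length, video.bindings.length);
```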
  • to sum up, in the process of creating the interactive video, a first interactive segment is set in the interactive video, and an information viewing component is set in the first interactive segment; the information viewing component is provided with scene information corresponding to the scene where the first interactive segment is located, thereby increasing the amount of information and the interaction forms of the interactive segments in the interactive video.
  • by selecting the information viewing component in the first interactive segment, the user can view the information contained in the first interactive segment, which improves the efficiency of human-computer interaction during the interaction with the interactive video.
  • the interactive video playback system is mainly used to implement functions such as playing, buffering, rendering, and interaction of the interactive video, while collecting user interaction data.
  • the interactive video playback system 1200 is mainly composed of a three-layer structure: a player 1230, an interaction engine 1220, and interaction components 1210.
  • as shown in FIG. 12, the interaction components 1210 mainly provide platform standard components 1211 and creator self-built components 1212, such as a multi-player team-up component, a VR panoramic evidence-search function, a multi-player voting component, and a player information exchange component; these components mainly invoke relevant capabilities from the lower-layer interaction container 1221 and the player 1230.
  • the interaction engine 1220 includes an interaction container 1221 and a platform adaptation layer 1222.
  • the hierarchical structure can better decouple interaction control logic, playback logic and rendering logic, and can flexibly support real-time interaction between content and users.
  • the platform adaptation layer 1222 includes a playback control application programming interface (Application Programming Interface, API) encapsulation, a device capability API encapsulation, and a user API encapsulation.
  • the player 1230 includes the native code of the application, the H5 player, and the like.
  • as shown in FIG. 13, the interactive video playback system as a whole includes an interaction layer 1310, a playback layer 1320, and a platform layer 1330; the interaction layer 1310 is separated from the playback layer 1320 and kept independent, which reduces the scale and complexity of the playback layer 1320 and improves the smoothness of multi-video connection.
  • the interaction layer 1310 sits above the playback layer 1320 and does not block video playback; at the bottom, the connection between played videos is decoupled from the interaction layer 1310, enabling cross-video interactive gameplay designs, as sketched below.
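  • the following TypeScript sketch illustrates the layering described above, with an interaction component invoking capabilities only through the interaction container and the platform adaptation layer rather than calling the player directly; all interface and method names are assumptions for illustration.

```ts
// Illustrative three-layer split: interaction components call capabilities
// exposed by the interaction engine (container + platform adaptation layer),
// which in turn wraps the player.

interface PlayerApi {               // playback layer (native player / H5 player)
  play(clipId: string): void;
  pause(): void;
}

interface PlatformAdaptationLayer { // wraps playback control, device and user APIs
  playbackControl: PlayerApi;
  deviceCapability: { vibrate(ms: number): void };
  user: { currentUserId(): string };
}

interface InteractionContainer {    // hosts components and routes their calls downward
  adapt: PlatformAdaptationLayer;
  render(componentId: string): void;
}

// An interaction component (e.g. a multi-player voting component) only talks
// to the container, never to the player directly, keeping the layers decoupled.
function castVote(container: InteractionContainer, optionClipId: string): void {
  container.render("voting-component");
  container.adapt.deviceCapability.vibrate(30);       // light haptic feedback
  container.adapt.playbackControl.play(optionClipId); // jump to the chosen branch
}

// Usage with stub implementations of the lower layers.
const container: InteractionContainer = {
  adapt: {
    playbackControl: { play: id => console.log("play", id), pause: () => {} },
    deviceCapability: { vibrate: ms => console.log("vibrate", ms) },
    user: { currentUserId: () => "user-1" },
  },
  render: id => console.log("render", id),
};
castVote(container, "clip-plot-a");
```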
  • FIG. 14 shows a schematic diagram of a process of creating an interactive video and driving a data stream in an interactive process provided by an exemplary embodiment of the present application.
  • the process mainly involves two aspects of data synchronization: first, the configuration of the value groups of the same component and the same variable in the editor; and second, the synchronization and evaluation of multiple groups of data streams.
  • the application 1410 and the server 1420 are involved in the process.
  • the application 1410 reports the user's behaviors to the server 1420; the user's operation behavior at a certain interactive node is called a behavior event, such as a click, a slide, browsing duration, click speed, a user facial expression, shaking, blowing, and the like.
  • the operation behavior can be the behavior collected by the interactive terminal sensor, or a combination of multiple behaviors.
  • the server 1420 stores the behavior events. In order to process the diverse behaviors in a unified format, the reported behavior events are abstracted into behavior IDs and behavior values and stored in the log database of the server 1420.
  • the abstracted data is the basic data for feature extraction in subsequent formula calculation.
  • by extracting feature vectors from the behavior records, the statistical computing model generates new V values.
  • the server 1420 makes a decision based on the multi-dimensional V values and returns the corresponding plot branch information to the application 1410, as sketched below.
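  • a minimal TypeScript sketch of this data flow is given below; the (behaviorId, value) abstraction follows the description above, while the specific scoring and branching rule is a placeholder assumption.

```ts
// Illustrative data flow: the application reports behavior events, the server
// stores them in a unified (behaviorId, value) format, aggregates V values per
// interactive node, and returns a plot branch based on those values.

interface BehaviorEvent {
  userId: string;
  nodeId: string;     // the interactive node where the behavior occurred
  behaviorId: string; // e.g. "click", "slide", "shake"
  value: number;      // abstracted behavior value (duration, speed, count, ...)
}

const logDatabase: BehaviorEvent[] = []; // stands in for the server-side log store

function reportBehavior(event: BehaviorEvent): void {
  logDatabase.push(event); // unified (behaviorId, value) record
}

// Aggregates the log into per-behavior V values for one interactive node.
function computeVValues(nodeId: string): Map<string, number> {
  const v = new Map<string, number>();
  for (const event of logDatabase) {
    if (event.nodeId !== nodeId) continue;
    v.set(event.behaviorId, (v.get(event.behaviorId) ?? 0) + event.value);
  }
  return v;
}

// Placeholder decision rule: pick a branch from the multi-dimensional V values.
function decidePlotBranch(nodeId: string): string {
  const v = computeVValues(nodeId);
  return (v.get("click") ?? 0) >= (v.get("slide") ?? 0) ? "branch-a" : "branch-b";
}

// Usage
reportBehavior({ userId: "u1", nodeId: "node-3", behaviorId: "click", value: 2 });
reportBehavior({ userId: "u2", nodeId: "node-3", behaviorId: "slide", value: 1 });
console.log(decidePlotBranch("node-3")); // "branch-a"
```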
  • FIG. 15 is a structural block diagram of an apparatus for creating an interactive video provided by an exemplary embodiment of the present application; the apparatus can be implemented as part or all of a computer device through software, hardware, or a combination of the two.
  • the apparatus includes:
  • a display module 1510 configured to display an interactive video creation interface, the creation interface includes a video editing preview area and a component editing area, and the component editing area includes information viewing options;
  • a setting module 1520 configured to set a first interactive segment in the video editing preview area, where the first interactive segment is used to provide an information collection scene;
  • the setting module 1520 is further configured to, in response to the selection operation on the information viewing option, set an information viewing component at the position of the target scene in the first interactive segment, where the information viewing component is used to display the information that can be collected in the information collection scene;
  • the generating module 1530 is configured to generate an interactive video according to the information viewing component set by the first interactive segment.
  • the first interactive segment is a virtual reality segment
  • the information collection scene is a three-dimensional virtual scene presented by the virtual reality segment
  • the target scene position is a three-dimensional position in the three-dimensional virtual scene.
  • the setting module 1520 is further configured to set the information viewing component at a preset position in the target display scene in the first interactive segment, where the target display scene is at least one display scene included in the information collection scene;
  • the device also includes:
  • the receiving module 1540 is configured to receive a drag operation on the information viewing component, where the drag operation is used to move the setting position of the information viewing component in the target display scene, and to determine, according to the drag operation, the target scene position of the information viewing component in the target display scene;
  • the setting module 1520 is further configured to set the information viewing component at the target scene position, as sketched below.
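  • the following TypeScript sketch illustrates how a drag operation could be resolved into a target scene position in the three-dimensional virtual scene; the coordinate and clamping logic is an assumption for illustration.

```ts
// Illustrative resolution of a drag operation on the information viewing
// component into a target scene position inside the 3D virtual scene.

interface Vec3 { x: number; y: number; z: number }

interface InfoViewComponentPlacement {
  componentId: string;
  sceneId: string; // the target display scene within the information collection scene
  position: Vec3;  // three-dimensional position in the VR scene
}

// Starts from the preset position and applies the accumulated drag offset,
// clamping the result to the scene bounds.
function applyDrag(
  placement: InfoViewComponentPlacement,
  dragOffset: Vec3,
  sceneBounds: { min: Vec3; max: Vec3 },
): InfoViewComponentPlacement {
  const clamp = (v: number, lo: number, hi: number) => Math.min(Math.max(v, lo), hi);
  const p = placement.position;
  return {
    ...placement,
    position: {
      x: clamp(p.x + dragOffset.x, sceneBounds.min.x, sceneBounds.max.x),
      y: clamp(p.y + dragOffset.y, sceneBounds.min.y, sceneBounds.max.y),
      z: clamp(p.z + dragOffset.z, sceneBounds.min.z, sceneBounds.max.z),
    },
  };
}

// Usage: drag the component one unit along x within a 10x10x10 room.
const placed = applyDrag(
  { componentId: "info-1", sceneId: "living-room", position: { x: 2, y: 1, z: 3 } },
  { x: 1, y: 0, z: 0 },
  { min: { x: 0, y: 0, z: 0 }, max: { x: 10, y: 10, z: 10 } },
);
console.log(placed.position); // { x: 3, y: 1, z: 3 }
```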
  • the component editing area further includes a component editing control corresponding to the information viewing component
  • the display module 1510 is further configured to display an information editing area in response to a trigger operation on the component editing control, where the information editing area includes an information title editing area and an information content editing area;
  • the receiving module 1540 is further configured to receive a title input operation in the information title editing area and generate the title corresponding to the information viewing component, and to receive a content input operation in the information content editing area and generate the information content corresponding to the information viewing component, as sketched below.
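  • a minimal TypeScript sketch of the title and content editing flow is shown below; the field names are assumptions for illustration.

```ts
// Illustrative title / content editing for an information viewing component:
// the title comes from the information title editing area and the information
// content from the information content editing area.

interface InfoViewComponentContent {
  componentId: string;
  title: string;   // e.g. "Check the note under the door"
  content: string; // e.g. the text of the note itself
}

function editInfoComponent(
  componentId: string,
  titleInput: string,
  contentInput: string,
): InfoViewComponentContent {
  return {
    componentId,
    title: titleInput.trim(),
    content: contentInput.trim(),
  };
}

// Usage
const note = editInfoComponent(
  "info-1",
  "Check the note under the door",
  "Meeting in the conference room at 6 p.m.",
);
console.log(note.title, "->", note.content);
```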
  • the component editing area further includes a result selection option
  • the setting module 1520 is further configured to set a second interactive video clip in the video editing preview area
  • the setting module 1520 is further configured to, in response to a selection operation on the result selection option, set a result selection component corresponding to the second interactive video segment, where the result selection component is used for ending selection of the interactive video.
  • the result selection component includes at least two candidates
  • the device also includes:
  • a receiving module 1540 configured to receive a result video setting operation for the target candidate in the at least two candidates
  • the setting module 1520 is further configured to set, according to the result video setting operation, the target result video as the result video associated with the target candidate, as sketched below.
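  • the following TypeScript sketch illustrates how the result selection component could map candidates to result videos and, in a multi-player session, pick the result video of the candidate with the most votes; the tally rule is an assumption for illustration.

```ts
// Illustrative result selection component: each candidate is associated with a
// result video, and the candidate receiving the most votes determines which
// result video is played.

interface ResultCandidate {
  id: string;
  label: string;
  resultVideoId: string; // set via the result video setting operation
}

function pickResultVideo(
  candidates: ResultCandidate[],
  votes: string[], // one candidate id per player (a single-player video has one vote)
): string {
  if (candidates.length < 2) {
    throw new Error("a result selection component needs at least two candidates");
  }
  const counts = new Map<string, number>();
  for (const id of votes) counts.set(id, (counts.get(id) ?? 0) + 1);
  let best = candidates[0];
  let bestCount = counts.get(best.id) ?? 0;
  for (const c of candidates.slice(1)) {
    const n = counts.get(c.id) ?? 0;
    if (n > bestCount) {
      best = c;
      bestCount = n;
    }
  }
  return best.resultVideoId;
}

// Usage: three players vote; the majority candidate's result video is chosen.
const ending = pickResultVideo(
  [
    { id: "cand-a", label: "Master Lin", resultVideoId: "ending-a" },
    { id: "cand-b", label: "Butcher Zheng", resultVideoId: "ending-b" },
  ],
  ["cand-a", "cand-b", "cand-a"],
);
console.log(ending); // "ending-a"
```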
  • the component editing area further includes a character selection option
  • the setting module 1520 is further configured to set a background video clip in the video editing preview area, and the background video clip is used to provide the story background of the interactive video;
  • the setting module 1520 is further configured to, in response to a selection operation on the character selection option, set a character selection component after the background video clip, where the character selection component is used for character selection in the interactive video.
  • the component editing area further includes an interactive spectator option
  • the setting module 1520 is further configured to, in response to the selection operation on the interactive spectator option, set the spectator component and the participation component after the background video clip; and in response to the selection operation on the character selection option, set the character selection component associated with the participation component.
  • the setting module 1520 is further configured to set the first interactive segment for the at least two character selection options and the story scene of the interactive video in response to the completion of setting of the at least two character selection options.
  • the component editing area further includes a scenario selection option
  • the setting module 1520 is further configured to set a plot selection component in the video editing preview area in response to the selection operation on the plot selection option, where the plot selection component includes at least two candidate plot options, and to set, for the at least two candidate plot options, the plot video clip associated with each candidate plot option respectively.
  • to sum up, in the process of creating the interactive video, the apparatus sets a first interactive segment in the interactive video and sets an information viewing component in the first interactive segment; the information viewing component is provided with scene information corresponding to the scene where the first interactive segment is located, thereby increasing the amount of information and the interaction forms of the interactive segments in the interactive video.
  • by selecting the information viewing component in the first interactive segment, the user can view the information contained in the first interactive segment, which improves the efficiency of human-computer interaction during the interaction with the interactive video.
  • it should be noted that the apparatus for creating an interactive video provided in the above embodiment is illustrated only by the division of the above functional modules as an example; in practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the apparatus for creating an interactive video provided by the above embodiment and the embodiment of the method for creating an interactive video belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • the "plurality" in the above embodiments refers to two or more, that is, “at least two".
  • FIG. 17 shows a structural block diagram of a terminal 1700 provided by an exemplary embodiment of the present application.
  • the terminal 1700 includes: a processor 1701 and a memory 1702 .
  • the processor 1701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1701 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in a standby state.
  • the processor 1701 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • Memory 1702 may include one or more computer-readable storage media, which may be non-transitory. Memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices and flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1702 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1701 to implement the method for creating an interactive video provided by the method embodiments in this application.
  • the terminal 1700 may also optionally include:
  • Radio frequency circuits are used to receive and transmit RF (Radio Frequency, radio frequency) signals, also known as electromagnetic signals.
  • Radio frequency circuits communicate with communication networks and other communication devices through electromagnetic signals.
  • the radio frequency circuit converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit may communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area network and/or WiFi (Wireless Fidelity, Wireless Fidelity) network.
  • the display screen is used to display the UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the audio circuitry may include speakers.
  • the speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • the audio circuit may also include a headphone jack.
  • the power supply is used to power various components in the terminal 1700 .
  • the power source can be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. Wired rechargeable batteries are batteries that are charged through wired lines, and wireless rechargeable batteries are batteries that are charged through wireless coils.
  • the rechargeable battery can also be used to support fast charging technology.
  • those skilled in the art can understand that the structure shown in FIG. 17 does not constitute a limitation on the terminal 1700, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • An embodiment of the present application further provides a computer device. The computer device includes a memory and a processor, and the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded by the processor to implement the method for creating an interactive video described in any of the foregoing embodiments.
  • the present application also provides a computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for creating an interactive video described in any of the foregoing embodiments.
  • the present application also provides a computer program product comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for creating an interactive video described in any of the foregoing embodiments.
  • the medium may be a computer-readable storage medium included in the memory in the above-mentioned embodiments; it may also be a computer-readable storage medium that exists independently and is not assembled into the terminal.
  • the computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for creating an interactive video described in any of the embodiments of this application.
  • the computer-readable storage medium may include: Read Only Memory (ROM, Read Only Memory), Random Access Memory (RAM, Random Access Memory), Solid State Drive (SSD, Solid State Drives), or an optical disc.
  • the random access memory may include a resistive random access memory (ReRAM, Resistance Random Access Memory) and a dynamic random access memory (DRAM, Dynamic Random Access Memory).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

本申请公开了一种互动视频的创建方法、装置、设备及可读存储介质,涉及多媒体领域。该方法包括:显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域;在视频编辑预览区域中设置第一互动片段;响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件;根据第一互动片段所设置的信息查看组件,生成互动视频。在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,从而增加互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,提高了与互动视频的互动过程中的人机交互效率。

Description

互动视频的创建方法、装置、设备及可读存储介质
本申请要求于2020年10月16日提交的申请号为202011110802.X、发明名称为“互动视频的创建方法、装置、设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及多媒体领域,特别涉及一种互动视频的创建方法、装置、设备及可读存储介质。
背景技术
互动视频是一种设置互动组件,提供给用户进行剧情互动的视频形式,示意性的,互动视频中正在播放的内容为角色A与角色B正在进行沟通,角色A对角色B说“今晚吃炸鸡还是火锅”,则在互动视频的播放过程中显示炸鸡和火锅两个选项,根据用户对选项的选择播放不同的后续剧情发展的视频片段。
通常,在进行互动组件的设置时,针对剧情的发展情况,针对不同的视频片段的衔接关系进行互动组件的设置,从而将多个视频片段衔接起来,组成一个剧情完整的互动视频,用户对不同剧情对应的互动组件进行选择,从而选择互动视频剧情的发展走向。
然而,在上述互动组件的设置过程中,仅能够针对剧情发展走向进行选择。
发明内容
本申请实施例提供了一种互动视频的创建方法、装置、设备及可读存储介质。所述技术方案如下:
一方面,提供了一种互动视频的创建方法,应用于计算机设备中,该方法包括:
显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括信息查看选项;
在视频编辑预览区域中设置第一互动片段,第一互动片段用于提供信息收集场景;
响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件,信息查看组件用于展示信息收集场景内可收集的信息;
根据第一互动片段所设置的信息查看组件,生成互动视频。
另一方面,提供了一种互动视频的创建装置,该装置包括:
显示模块,用于显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括信息查看选项;
设置模块,用于在视频编辑预览区域中设置第一互动片段,第一互动片段用于提供信息收集场景;
设置模块,还用于响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件,信息查看组件用于展示信息收集场景内可收集的信息;
生成模块,用于根据第一互动片段所设置的信息查看组件,生成互动视频。
另一方面,提供了一种计算机设备,计算机设备包括处理器和存储器,存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上一个方面及其可选实施例中任一所述的互动视频的创建方法。
另一方面,提供了一种计算机可读存储介质,计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上一个方面及其可选实施例中任一所述的互动视频的创建方法。
另一方面,提供了一种计算机程序,该计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述一个方面及其可选实施例中任一所述的互动视频的创建方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,通过该信息查看组件来展示第一互动片段提供的信息收集场景中可供收集的信息,从而增加了互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,查看第一互动片段中所包含的信息,提高了与互动视频的互动过程中用户收集信息的效率。
附图说明
图1是本申请一个示例性实施例提供的互动视频的交互方法示意图;
图2是本申请一个示例性实施例提供的互动视频的交互方法的示意图;
图3是本申请一个示例性实施例提供的互动视频的创建方法的实施环境示意图;
图4是本申请一个示例性实施例提供的互动视频的创建方法的流程图;
图5是基于图4示出的实施例提供的在创建界面中设置信息查看组件的示意图;
图6是本申请另一个示例性实施例提供的互动视频的创建方法的流程图;
图7是基于图6示出的实施例提供的在创建界面中设置投票组件的界面示意图;
图8是本申请另一个示例性实施例提供的互动视频的创建方法的流程图;
图9是基于图8示出的实施例提供的在创建界面中设置人物选择组件的示意图;
图10是基于图8示出的实施例提供的剧情选择控件设置界面示意图;
图11是基于图8示出的实施例提供的全局控件的设置界面示意图;
图12是本申请一个示例性实施例提供的互动视频播放***的结构框图;
图13是本申请另一个示例性实施例提供的互动视频播放***的结构框图;
图14是本申请一个示例性实施例提供的数据交互示意图;
图15是本申请一个示例性实施例提供的互动视频的创建装置的结构框图;
图16是本申请另一个示例性实施例提供的互动视频的创建装置的结构框图;
图17是本申请一个示例性的实施例提供的终端的结构框图。
具体实施方式
如图1,其示出了一个示例性的创建互动视频的界面示意图,在互动视频的创建界面100 中显示有互动视频的衔接关系,视频片段110播放完毕后衔接视频片段120,而在视频片段120之后,衔接有四个视频片段,其中包括视频片段131、视频片段132、视频片段133和视频片段134,也即,在视频片段120播放后,在界面中显示剧情选项,剧情选项能够在视频片段120的播放过程中显示,也可以在视频片段120播放完毕后暂停视频播放过程进行显示。如:剧情选项A对应视频片段131,剧情选项B对应视频片段132,剧情选项C对应视频片段133,以及剧情选项D对应视频片段134。
本申请实施例中提供的互动视频,在上述剧情选择的基础上,还提供有如:多人参与互动、视频内容互动、投票互动、信息交流互动等互动形式。
结合上述互动形式,对本申请实施例提供的互动视频的互动整体流程进行示意性说明。首先,用户观看互动视频中的背景视频片段,该背景视频片段用于讲述互动视频对应的故事背景,在背景视频片段播放完毕后,计算机设备在互动视频的播放界面中显示互动形式选择控件,其中包括参与控件和旁观控件,当对参与控件进行选择时,则表示用户以玩家的形式参与到互动视频的互动过程中,一个互动视频对应有至少两个玩家,当对旁观控件进行选择时,则表示用户对互动视频的互动过程进行旁观。
示意性的,对参与控件进行选择后,显示有玩家角色A、玩家角色B以及玩家角色C。玩家能够在三个角色中对需要扮演的角色进行选择,或***自动分配给每个玩家对应的角色。
在角色选择完毕后,播放不同角色对应的剧***片段,供每个玩家了解自己的角色对应的剧情。根据每个玩家播放的剧***片段,玩家之间能够进行讨论,进行信息共享以及疑点询问等沟通。
互动视频中还提供有至少两个虚拟现实(Virtual Reality,VR)场景,该VR场景与玩家角色对应,或与剧情相关。玩家能够在至少两个VR场景中进行选择,从而显示选中的VR场景,在该VR场景中进行信息查询,并根据查询到的信息与其他玩家进行讨论分析。多个玩家还能够对剧情走向进行选择,并将多数玩家选择的剧情走向确定为互动视频的剧情走向。
互动视频中还提供有投票控件,针对玩家角色进行投票,从多个玩家角色中选出最符合投票要求的玩家角色,并根据投票结果播放结局视频。
值得注意的是,上述互动过程中各个阶段的顺序仅为示意性的举例,实际互动过程中,上述各个阶段的顺序根据创建者的设计进行自由搭配,本申请实施例对此不加以限定。
在一个示例性的实施例中,如图2,其示出了本申请一个示例性实施例提供的互动视频的交互方法的示意图,在互动视频的创建结构中包括了互动视频的视频架构,其中包括:开始阶段210,用于播放该互动视频的故事背景视频;互动形式选择组件220,包括参与组件221和旁观组件222,参与组件221用于使玩家参与到互动视频的互动过程中,旁观组件222用于使用户对互动视频的互动过程进行旁观,参与组件221中还对应设置有身份选择组件,身份选择组件中包括针对互动视频中不同身份设置的不同选项,玩家能够在选项中选择自己期望扮演的身份角色,且不同身份角色还对应有不同的剧***,用于供玩家了解所扮演的身份角色对应的故事情节。
互动视频中还包括VR视频230,VR视频230中设置有互动组件,该互动组件用于在VR视频230中实现信息查看功能,如:VR视频实现为客厅场景,则在客厅的沙发角落处设置有互动组件,当对互动组件进行点击时,显示遗落在沙发角落处的信件以及信件内容。
互动视频中还包括剧情选择组件240,在剧情选择组件240中对候选剧情进行选择后,控制互动视频的剧情走向。可选地,当存在多个玩家参与互动时,根据玩家在多个剧情中的选择情况,将剧情推动至被选择较多的候选剧情。
互动视频中还包括投票组件250,用于在多个玩家扮演的身份角色中选择出符合投票要求的身份角色,根据身份角色的被选择情况,对应播放不同的结果视频。
对应上述互动视频的创建结构,在互动视频的播放结构中,首先播放开始视频260,该开始视频260用于表达该互动视频的故事背景;在开始视频260播放完毕或者播放过程中,显示互动形式选择组件270,其中包括参与互动模式271对应的参与组件和旁观模式272对应的旁观组件,参与互动模式271用于表示玩家参与到互动视频的互动过程中,如:在互动视频的互动场景中进行场景信息的查看等,旁观模式272用于表示用户对互动视频的互动过程进行旁观,且在旁观过程中旁观者能够查看故事线中角色的视角视频,但无法参与投票选择;或,旁观者能够根据玩家的故事线选择查看故事线走向,并旁观玩家的发言等内容,最终参与投票选择。其中,参与互动模式271中玩家还能够对身份进行选择,在互动视频中不同身份对应的不同选项中,选择自己期望扮演的身份角色,且不同身份角色还对应有不同的剧***,玩家选择身份角色完毕后,能够通过剧***了解所扮演的身份角色对应的故事情节。
互动视频中还包括VR场景280,且VR场景280中设置有互动组件,该互动组件用于在VR场景280中实现信息查看功能。
互动视频中还包括剧情选择过程290,在剧情选择过程290中对候选剧情进行选择后,控制互动视频的剧情走向。
互动视频中还包括多人投票环节200,用于在多个玩家扮演的身份角色中选择出符合投票要求的身份角色,根据身份角色的被选择情况,对应播放不同的结果视频。
图3是本申请一个示例性实施例提供的互动视频的创建方法的实施环境示意图,该实施环境中包括:终端310和服务器320;
终端310中安装有多媒体应用程序,如:视频播放应用程序、视频处理应用程序,多媒体应用程序中提供有互动视频的创建功能,通过互动视频的创建功能,用户在多媒体应用程序中创建互动视频,并在新创建的互动视频中增加互动视频片段和组件。
终端310将互动视频片段和组件根据剧情关联关系衔接后,最终生成互动视频。终端310通过通信网络330将互动视频上传至服务器320,从而玩家终端能够从服务器320中获取该互动视频,并与其他玩家终端进行在该互动视频中的互动。
其中,玩家终端针对该互动视频创建房间,并邀请其他玩家终端共同参与该互动视频的互动。
示例性的,上述服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式***,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(Content Delivery Network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器。终端可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能电视、智能手表等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
结合上述实施环境说明,对本申请实施例提供的互动视频的创建方法进行说明,该方法可以由终端或者服务器执行,也可以由终端和服务器共同执行,也即,计算机设备可以包括终端和服务器中的至少一种。
图4是本申请一个示例性实施例提供的互动视频的创建方法的流程图,以该方法由终端执行为例进行说明,该方法包括:
步骤401,显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括信息查看选项。
可选地,创建界面用于创建互动视频,其中,创建界面用于确定互动视频的整体结构;和/或,用于设定互动视频中的视频片段和互动组件。
其中,视频编辑预览区域用于对互动视频中的视频片段进行设置和预览;组件编辑区域用于在已设定视频片段的基础上进行互动组件的设置。
在一些实施例中,视频编辑预览区域还用于对互动视频的整体结构进行预览,如:各个视频片段之间的衔接关系,和/或,各个视频片段与已设置的组件之间的关联关系。
上述组件编辑区域中包括组件编辑选项,组件编辑选项中包括信息查看选项,信息查看选项用于指示在视频中增加信息查看组件,从而当接收到对信息查看组件的选择操作时,展示该信息查看组件所包含的信息。其中,信息查看组件用于设置于VR场景中;或,信息查看组件用于设置于视频片段中的指定位置处。
视频编辑预览区域和组件编辑区域为并列显示在创建界面中的两个区域,示意性的,视频编辑预览区域位于创建界面的左侧,组件编辑区域位于创建界面的右侧。
在一些实施例中,首先在视频编辑预览区域中设置视频片段,从而针对视频片段进行互动组件的设置。如:在视频片段的任意位置设置信息查看组件;或,在视频片段的终止位置设置选择组件等。
在一些实施例中,当视频编辑预览区域中未设置视频片段,或互动组件的设置未指定对应的视频片段时,则互动组件设置在互动视频起始位置,或,终止位置,或指定时间段内。
步骤402,在视频编辑预览区域中设置第一互动片段。
第一互动片段用于提供信息收集场景。第一互动片段的设置方式包括如下方式中的至少一种:
第一,在终端本地中已存储有第一互动片段,在终端打开多媒体应用程序并显示创建界面后,用户将第一互动片段拖动至该创建界面中,从而在创建界面中设置第一互动片段;
其中,在拖动第一互动片段时,直接将第一互动片段拖动至第一互动片段在互动视频的预期播放位置处;或,将第一互动片段拖动至互动视频的视频编辑预览区域中,在第一互动片段的设置参数中,设置第一互动片段的播放位置。示意性的,将第一个播放的视频片段设置为1号,将第二个播放的视频片段设置为2号,第三阶段存在三个并列的视频片段,则分别设置为3-1号、3-2号以及3-3号,表示三个视频片段的并列关系。
第二,在终端本地中已存储有第一互动片段,在终端打开多媒体应用程序并显示创建界面后,用户在视频编辑预览区域中选择上传视频片段的控件,对应选择本地已存储的第一互动片段。在第一互动片段上传完毕后,设定该第一互动片段的播放位置。
第三,在创建界面中首先上传n个视频片段,n为正整数,其中包括第一互动片段;根 据播放顺序依次对视频片段进行设置,其中包括对该第一互动片段的设置。
第四,在创建界面中首先创建互动视频的整体框架,并针对整体框架在第一互动片段对应的播放位置处上传该第一互动片段。
值得注意的是,上述第一互动片段的设置方式仅为示意性的举例,本申请实施例中对第一互动片段的具体设置方式不加以限定。
示例性的,第一互动片段为普通视频片段;或者,第一互动片段为VR视频片段,也即,第一互动片段中提供有VR场景,玩家能够在VR场景中与场景中的物体进行交互。同理,VR场景中还可以设置信息查看组件,用于指示VR场景中的物体所包含的信息。
可选地,第一互动片段为虚拟现实片段,对应的,信息收集场景是虚拟现实片段所呈现的三维虚拟场景。示例性的,第一互动片段还可以实现为VR三维虚拟场景,且该VR三维虚拟场景设置有对应的查看时长限制,玩家能够在查看时长限制的时长范围内对该VR三维虚拟场景进行信息查找。
可选地,在VR三维虚拟场景的设置过程中,将终端本地已存储的三维虚拟场景模型导入至多媒体应用程序中,并设置该VR三维虚拟场景的显示位置。
步骤403,响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件。
信息查看组件用于展示信息收集场景内可收集的信息,比如,可收集的信息包括场景信息和组件信息。也即,信息查看组件对应设置有目标场景位置对应的场景信息;比如,对应信息查看组件设置场景信息“查看门缝字条”,该信息查看组件对应的是在门缝位置处的字条。信息查看组件还对应有具体的组件信息,如:上述场景信息“查看门缝字条”还对应设置有字条的内容。其中,目标显示场景是信息收集场景中包含的至少一个场景。示例性的,第一互动片段中包含多个信息收集场景,每个信息收集场景对应至少一个显示场景,比如,信息收集场景可以是一个展厅,展厅的休息区是一个显示场景,展厅的商品展示区是另一个显示场景。
可选地,在第一互动片段为虚拟现实片段,信息收集场景是虚拟现实片段所呈现的三维虚拟场景的情况下,目标场景位置是三维虚拟场景中的三维位置。示例性的,如图5,信息查看组件521位于VR三维虚拟场景511中,信息查看组件521所处的目标场景位置是一个三维位置。
可选地,在对第一互动片段设置信息查看组件时,包括如下情况中的任意一种:
第一,第一互动片段实现为第一互动视频片段,则在该第一互动视频片段的播放过程中,针对关键帧进行信息查看组件的设置,从而在第一互动视频片段的播放过程中,在播放到设置有信息查看组件的关键帧(I帧)以及与该关键帧对应的视频帧(P帧、B帧)时,对应显示该信息查看组件;
第二,第一互动片段实现为虚拟现实片段,也即,第一互动片段为包括时长限制的VR三维虚拟场景,则在VR三维虚拟场景中的目标位置处设置该信息查看组件,在玩家对VR三维虚拟场景进行查看时,当查看到目标位置处时,显示该信息查看组件。可选地,用户在信息查看组件上进行拖动,从而对信息查看组件在VR三维虚拟场景中的位置进行调整。
在一些实施例中,在第一互动片段的目标显示场景中的预设位置处设置信息查看组件,接收信息查看组件上的拖动操作,该拖动操作用于对信息查看组件在目标显示场景中的设置 位置进行移动,根据拖动操作确定信息查看组件在目标显示场景中的目标场景位置,并将信息查看组件设置于目标场景位置处。
可选地,在确定信息查看组件的设置位置之前,或,在确定信息查看组件的设置位置之后,对信息查看组件所包含的信息内容进行设置。其中,信息查看组件包含的内容包括场景信息和组件信息中的至少一种,场景信息用于指示信息查看组件包含的信息内容的标题,组件信息用于指示信息查看组件所包含的信息内容。结合上述举例,“查看门缝字条”即为场景信息,也即信息内容的标题;门缝字条的内容“晚上6点在会议室开会”为组件信息,也即信息内容。示例性的,信息查看组件还对应有信息编辑控件,比如,组件编辑区域还包括与信息查看组件对应的组件编辑控件;响应于组件编辑控件上的触发操作,显示信息编辑区域,该信息编辑区域中包括信息标题编辑区域和信息内容编辑区域,接收在信息标题编辑区域中的标题输入操作,生成信息查看组件对应的标题,接收在信息内容编辑区域的内容输入操作,生成信息查看组件对应的信息内容。示例性的,信息编辑区域可以是组件编辑区域中的部分或者全部区域。另外,本实施例中对上述信息标题与内容的编辑顺序不加以限定。
示意性的,请参考图5,在创建界面500中包括视频编辑预览区域510和组件编辑区域520,其中视频编辑预览区域510中显示有已设置在互动视频中的VR三维虚拟场景511,VR三维虚拟场景511设置有名称“小林的房间”,以及还对应显示有三维场景俯视图512。组件编辑区域520中显示有能够用于设置在VR三维虚拟场景511中的组件,其中包括信息查看组件521,在VR三维虚拟场景511中设置信息查看组件521时,将信息查看组件521拖动至VR三维虚拟场景511中,并将其拖动至对应的目标位置处。可选地,VR三维虚拟场景511中可以设置一个信息查看组件521,也可以设置多个信息查看组件521。信息查看组件521的设置数量对应有设置上限,或,信息查看组件521可以无限数量进行设置。
步骤404,根据第一互动片段所设置的信息查看组件,生成互动视频。
结合其他互动视频片段,根据该第一互动片段以及第一互动片段中设置的信息查看组件,最终生成互动视频进行视频互动。
综上所述,本实施例提供的互动视频的创建方法,在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,该信息查看组件中设置有第一互动片段所处场景对应的场景信息,从而增加了互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,查看第一互动片段中所包含的信息,提高了与互动视频的互动过程中的人机交互效率。
在一个可选的实施例中,组件编辑选项中还包括结果选择选项,也即玩家能够根据在第一互动片段中查看的信息进行结果选择,并根据选择的结果查看不同的互动视频结局。图6是本申请另一个示例性实施例提供的互动视频的创建方法的流程图,以该方法由终端执行为例进行说明,该方法包括:
步骤601,显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括信息查看选项。
上述组件编辑区域中还包括结果选择选项,该结果选择选项用于向玩家提供能够选择的互动视频故事线的发展结果。示例性的,组件编辑区域中包括组件编辑选项,组件编辑选项还包括结果选择选项。
步骤602,在视频编辑预览区域中设置第一互动片段。
步骤603,响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件。
步骤604,在视频编辑预览区域中设置第二互动视频片段。
第二互动视频片段用于提供结果展示内容。可选地,在视频编辑预览区域中设置第二互动视频片段,该第二互动视频片段用于引导剧情发展。
可选地,第二互动视频片段的设置方式包括如下方式中的任意一种:
第一,在终端本地中存储有第二互动视频片段,在终端打开多媒体应用程序并显示创建界面后,用户将第二互动视频片段拖动至该创建界面中,从而在创建界面中设置第二互动视频片段;
其中,在拖动第二互动视频片段时,直接将第二互动视频片段拖动至第二互动视频片段在互动视频的预期播放位置处;或,将第二互动视频片段拖动至互动视频的视频编辑预览区域中,在第二互动视频片段的设置参数中,设置第二互动视频片段的播放位置。
第二,在终端本地中存储有第二互动视频片段,在终端打开多媒体应用程序并显示创建界面后,用户在视频编辑预览区域中选择上传视频片段的控件,对应选择本地已存储的该第二互动视频片段。在第二互动视频片段上传完毕后,设定该第二互动视频片段的播放位置。
第三,在创建界面中首先上传n个视频片段,n为正整数,其中包括第二互动视频片段,根据播放顺序依次对视频片段进行设置,其中包括对该第二互动视频片段的设置。
第四,在创建界面中首先创建互动视频的整体框架,并针对整体框架在第二互动视频片段对应的播放位置处上传该第二互动视频片段。
值得注意的是,上述第二互动视频片段的设置方式仅为示意性的举例,本申请实施例中对第二互动视频片段的具体设置方式不加以限定。
可选地,第二互动视频片段用于引导展示结果选择组件,值得注意的是,该第二互动视频片段和上述第一互动片段可以实现为相同的或不同的片段。
步骤605,响应于在结果选择选项上的选择操作,对应第二互动视频片段设置结果选择组件。
结果选择组件用于互动视频的结局选择。结果选择组件中包括至少两个候选项,其中,每个候选项对应一个剧情发展结果;或,i个候选项对应k个剧情发展结果,i,k皆为正整数,且i≥k。
本实施例中以先设置第二互动视频片段,再设置结果选择组件为例进行说明,实际操作中,还可以先设置结果选择组件,再设置第二互动视频片段,本申请实施例对结果选择组件和第二互动视频片段的设置顺序不加以限定。
可选地,结果选择组件设置在第二互动视频片段的末尾,或,结果选择组件设置在第二互动视频片段与结果视频之间的衔接位置处。
在一些实施例中,结果选择组件包括至少两个候选项,至少两个候选项中包括目标候选项,其中,接收对目标候选项的结果视频设置操作,从而根据结果视频设置操作,将目标结果视频设置为与目标候选项关联的结果视频。
在一些实施例中,针对参与互动视频的玩家数量的不同,结果视频的设置方式也不同,其中,包括如下情况中的任意一种:
第一,互动视频为单个玩家参与互动的视频,也即,在互动视频的互动过程中,玩家在第一互动片段中进行信息查看,继续对互动视频中的第二互动视频片段进行查看,然后在至少两个候选项中进行选择,针对玩家的选择,播放对应的结果视频,故在候选项与结果视频的设置过程中,仅需要确定候选项与结果视频之间的对应关系,从而在接收到对其中某个候选项的选择操作时,播放对应的结果视频;
第二,互动视频为多个玩家参与互动的视频,也即,在互动视频的互动过程中,多个玩家在第一互动片段助攻进行信息查看,继续对第二互动视频片段进行查看,从而在至少两个候选项中分别进行选择,针对多个玩家的选择,播放被选择较多的候选项所对应的结果视频,从而在候选项与结果视频的设置过程中,需要针对候选项的被选次数所对应的分数进行设置,从而从至少两个候选项分别对应的分数中确定分数最高的候选项,播放该候选项对应的结果视频。
示意性的,请参考图7,在互动视频的创建界面700中包括已设置的投票组件710,其中包括四个选项:控件711(对应林师傅)、控件712(对应郑屠夫)、控件713(对应周半仙)以及控件714(对应赵大爷)。其中,在设置投票组件710时,包括如下方式中的任意一种:第一,将投票组件710拖动至界面的任意位置处,并对应设置投票组件710的名称,如:将控件711拖动至任意位置处,并对应设置控件711对应的名称为林师傅。第二,接收在投票组件设置控件上的选择操作,并根据选择操作显示设置项,其中包括投票组件的数量,每个投票组件对应的名称等,根据设置项中的设置内容,以及预先设定的投票组件排布方式,生成互动视频中的投票界面。
步骤606,生成互动视频。
结合其他互动视频片段,根据该第一互动片段以及第一互动片段中设置的信息查看组件,最终生成互动视频进行视频互动。
综上所述,本实施例提供的互动视频的创建方法,在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,该信息查看组件中设置有第一互动片段所处场景对应的场景信息,从而增加了互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,查看第一互动片段中所包含的信息,提高了与互动视频的互动过程中的人机交互效率。
本实施例提供的方法,还通过在互动视频的创建过程中设置结果选择组件,从而用户能够对互动视频的结果进行预测,并根据预测的结果查看结果视频片段,增加了互动视频中互动片段的信息量和交互形式。
在一个可选的实施例中,互动视频由多个玩家参与互动,图8是本申请另一个示例性实施例提供的互动视频的创建方法的流程图,以该方法由终端执行为例进行说明,该方法包括:
步骤801,显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括组件编辑选项。
上述组件编辑区域用于在已设定视频片段的基础上进行互动组件的设置。组件编辑区域中包括组件编辑选项,组件编辑选项中包括信息查看选项、结果选择选项、人物选择选项以及剧情选择选项中的至少一种。
其中,信息查看选项用于在互动视频中的互动片段中设置信息查看组件;结果选择选项 用于提供对互动视频的结局进行预测或选择的结果选择组件;人物选择选项用于向用户提供对在互动视频中的扮演角色进行选择的人物控件组件;剧情选择选项用于提供对互动视频的故事线走向进行选择的剧情选择组件。
其中,信息查看组件用于设置于VR场景中;或,信息查看组件用于设置于视频片段中的指定位置处。本实施例中,以信息查看组件设置于VR场景中为例进行说明。
结果选择组件用于在结果视频片段之前设置于前序视频片段中;或,结果选择组件用于在结果视频片段之前设置于结果选择界面中。
人物选择组件用于设置在背景视频片段中向用户提供选择人物的控件;或,人物选择组件用于设置在背景视频片段之后的人物选择界面中;或,人物选择组件用于设置在背景视频片段之前的人物选择界面中。
剧情选择组件用于设置在互动视频中的任意视频片段中,用于提供后续剧情走向的选择;或,剧情选择组件用于设置在互动视频中的剧情选择界面中,用于提供后续剧情走向的选择。
步骤802,在视频编辑预览区域中设置背景视频片段。
该背景视频片段用于提供互动视频的故事背景。可选地,背景视频片段为互动视频的开始视频片段。也即,互动视频开始播放时的第一个片段即为背景视频片段。
可选地,在设置背景视频片段时,直接将背景视频片段拖入视频编辑预览区域中;或,在视频编辑预览区域中显示的互动视频结构上,对背景视频片段对应的位置进行选择,将背景视频片段设置在该位置上。
示例性的,以针对互动视频设置背景视频片段为例进行说明,互动视频中未设置有背景视频片段,也即在互动视频开始时,直接显示人物选择界面。
步骤803,响应于在人物选择选项上的选择操作,在背景视频片段后设置人物选择组件。
人物选择组件用于互动视频中的扮演角色的选择。人物选择组件中包括至少两个人物选择选项。可选地,至少两个人物选择选项中每个人物选择选项对应一个人物选择组件;或,设置一个人物选择组件,该人物选择组件中包括至少两个人物选择选项。
在背景视频片段后设置人物选择组件时,包括如下情况中的任意一种:
第一,在背景视频片段的后部叠加显示人物选择组件;可选地,在设置人物选择组件的显示逻辑时,将人物选择组件设置为持续显示直至用户选中某个人物选择组件,若背景视频片段播放完毕后,用户未在人物选择组件上进行选择,则保持显示指定图像帧,直至用户选中某个人物选择组件;或,向用户随机分配一个未被选择的人物角色。
第二,在背景视频片段后衔接人物选择界面,在人物选择界面中设置人物选择组件。该人物选择界面可以是互动视频的创建过程中设置的,也可以是根据背景视频片段选取得到的。如:在背景视频片段中选择任意一帧图像帧作为人物选择界面;或,将背景视频片段的第一帧作为人物选择界面;或,将背景视频片段的最后一帧作为人物选择界面。
可选地,在互动视频设置人物选择组件之前,还设置有匹配组件和房间创建组件,其中,匹配组件用于指示玩家与其他参与互动视频的玩家随机匹配,进行人物选择;房间创建组件用于指示玩家创建房间,并邀请其他玩家进行互动视频的互动。
示意性的,请参考图9,其示出了本申请一个示例性实施例提供的在创建界面中设置人物选择组件的示意图。在互动视频的创建界面900中显示有人物选择选项910,而在视频编辑预览区域920中显示有当前设置在互动视频中的界面930,该界面930中显示有匹配控件 931和房间创建控件932。在人物选择选项910中进行人物选择选项的设置后,即互动视频中包括至少两个人物角色供玩家选择扮演。匹配控件931表示玩家在至少两个人物角色中随机匹配一个角色与其他玩家进行互动;房间创建控件932表示玩家自己创建房间,并邀请其他玩家,或匹配其他玩家进行互动。
在一些实施例中,组件编辑选项中还包括互动旁观选项,也即组件编辑区域中还包括互动旁观选项。在人物选择组件的设置过程中,包括如下情况中的至少一种:
第一,互动视频的互动过程能够旁观,则响应于在互动旁观选项上的选择操作,在背景视频片段后设置旁观组件和参与组件;响应于在人物选择选项上的选择操作时,设置与参与组件关联的人物选择组件。
第二,互动视频的互动过程无法旁观,则响应于在人物选择选项上的选择操作时,在背景视频片段后设置人物选择组件。
步骤804,在视频编辑预览区域中设置第一互动片段。
可选地,响应于至少两个人物选择选项设置完毕,针对至少两个人物选择选项和互动视频的故事场景分别设置第一互动片段。
步骤805,响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件。
步骤806,响应于在剧情选择选项上的选择操作,在视频编辑区域中设置剧情选择组件。
其中,至少两个候选剧情选项中,每个候选剧情选项分别对应一个剧情选择组件;或,设置一个剧情选择组件,该剧情选择组件中设置有至少两个候选剧情选项。
其中,剧情选择组件设置在互动视频中的任意一个视频片段中;或,剧情选择组件设置在互动视频中的剧情选择界面中。
当互动视频为多个玩家参与互动的视频时,至少两个候选剧情选项中,符合被选条件的候选剧情选项作为最终被选择的候选剧情选项。示意性的,将剧情的解锁条件设置为:当解锁玩家数达到n个时,播放对应的剧***片段,n为正整数。
示意性的,请参考图10,在互动视频的创建界面1000中显示有视频编辑预览区域中的互动视频1010和剧情选择控件1020,其中,在剧情选择控件1020的编辑区域1030中,显示有解锁条件为,当解锁人数达到3人时,解锁剧情。
步骤807,针对至少两个候选剧情选项,分别设置每个候选剧情选项关联的剧***片段。
可选地,每个候选剧情选项对应一个剧***片段;或,至少两个候选剧情选项中,存在两个或者多个候选剧情选项对应同一个剧***片段。
与候选剧情选项关联的剧***片段,即表示当接收到对候选剧情选项的选定信号时,播放与该候选剧情选项关联的剧***片段。
在一些实施例中,互动视频中还设置有全局组件,其中包括留言组件和语音组件中的至少一种。全局组件即为针对互动视频全局创建的,能够显示在互动视频中的任意时间段的组件。
示意性的,全局组件以与互动视频时间戳对应的方式进行设置;或,全局组件以与视频片段之间对应关系的方式进行设置。
示意性的,请参考图11,在互动视频的创建界面1100中显示有视频编辑预览区域中的 互动视频1110和全局控件设置区域1120,其中,全局控件设置区域1120中包括留言选项1121和语音选项1122,当接收到在留言选项1121上的选择操作时,在视频编辑预览区域中对互动视频1110设置留言组件,并对留言组件设置显示参数;当接收到在语音选项1122上的选择操作时,在视频编辑预览区域中对互动视频1110设置语音组件,并对语音组件设置显示参数。
步骤808,根据第一互动片段以及其他视频片段生成互动视频。
可选地,根据上述第一互动片段、背景视频片段、第二互动视频片段和其他视频片段,以及上述设置的组件,生成互动视频。
值得注意的是,针对全局组件和互动组件,创建者皆可以设置自定义组件,并为自定义组件设置对应的功能参数以实现不同的功能。
综上所述,本申请实施例提供的互动视频的创建方法,在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,该信息查看组件中设置有第一互动片段所处场景对应的场景信息,从而增加了互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,查看第一互动片段中所包含的信息,提高了与互动视频的互动过程中的人机交互效率。
本实施例提供的方法,通过设置人物选择选项从而在互动视频中设置人物选择组件,提供了多个玩家在互动视频中共同参与互动的条件,增加了互动视频的互动形式,提高了互动视频的互动效率;以及通过设置剧情选择组件,从而控制互动视频的剧情走向,在多个玩家共同参与互动的情况下,由多个玩家通过投票共同决定剧情走向,提高了互动效率,增加了互动形式。
在一些实施例中,互动视频播放***主要用于实现互动视频的播放、缓冲、渲染以及交互等功能,同时采集用户互动数据,互动视频播放***1200主要由播放器1230、互动引擎1220和互动组件1210三层结构组成。
如图12,互动组件1210主要提供平台标准组件1211和创作者自建组件1212,如:支持多人组队组件、VR全景搜证功能、多人投票组件、玩家信息交流组件等,主要从下层互动容器1221、播放器1230中调取相关能力。
互动引擎1220中包括互动容器1221和平台适配层1222,该分层结构可以较好的解耦互动控制逻辑、播放逻辑和渲染逻辑,可达到灵活支持内容与用户实时互动的目的。平台适配层1222包括播放控制应用程序接口(Application Programming Interface,API)封装、设备能力API封装以及用户API封装。
播放器1230中包括应用程序的原生代码、H5播放器等。
示意性的,请参考图13,整体而言,该互动视频播放***包括互动层1310、播放层1320和平台层1330,其中,互动层1310与播放层1320分离,互动层1310独立,降低播放层1320的规模和复杂度,提升多视频衔接的流畅度。
互动层1310在播放层1320之上,不遮挡视频播放,底部可实现视频播放衔接与互动层1310解耦,实现跨视频的互动玩法设计。
在一些实施例中,如图14,其示出了本申请一个示例性实施例提供的互动视频的创建和 互动过程中数据流的驱动的过程示意图,该过程主要分为两个方面的数据同步:1,编辑器内同个组件、同个变量的数值组的配置;2,多组数据流同步与判断。
该过程中涉及到应用程序1410和服务器1420;
应用程序1410上报用户的行为给服务器1420,将用户在某个互动节点的操作行为称为行为事件,例如点击、滑动、浏览时间长短、点击速度、用户面部表情、摇一摇、吹一吹等,操作行为可以是通过交互终端传感器采集的行为,或,多种行为的组合。
服务器1420对行为事件进行存储。为了将复制多样的行为统一格式处理,上述上报行为事件会抽象为行为id和行为值存在服务器1420的日志数据库中,抽象后的数据,是后续公式计算特征提取的基础数据。通过对行为记录进行特征向量提取,统计计算模型产出新的V数值。服务器1420根据多维度的V数值决策返回对应的剧情分支信息给应用程序1410。
图15是本申请一个示例性实施例提供的互动视频的创建装置的结构框图,该装置可以通过软件、硬件、或者二者结合的形式实现成为计算机设备的部分或者全部,该装置包括:
显示模块1510,用于显示互动视频的创建界面,创建界面中包括视频编辑预览区域和组件编辑区域,组件编辑区域中包括信息查看选项;
设置模块1520,用于在视频编辑预览区域中设置第一互动片段,第一互动片段用于提供信息收集场景;
设置模块1520,还用于响应于在信息查看选项上的选择操作,在第一互动片段中目标场景位置处设置信息查看组件,信息查看组件用于展示所述信息收集场景内可收集的信息;
生成模块1530,用于根据第一互动片段所设置的信息查看组件,生成互动视频。
在一个可选的实施例中,第一互动片段为虚拟现实片段,信息收集场景是虚拟现实片段所呈现的三维虚拟场景,目标场景位置是三维虚拟场景中的三维位置。
在一个可选的实施例中,
设置模块1520,还用于在第一互动片段中的目标显示场景中的预设位置处设置信息查看组件,目标显示场景是信息收集场景中包含的至少一个显示场景;
该装置还包括:
接收模块1540,用于接收在信息查看组件上的拖动操作,拖动操作用于对信息查看组件在目标显示场景中的设置位置进行移动;根据拖动操作确定信息查看组件在目标显示场景中的目标场景位置;
设置模块1520,还用于将信息查看组件设置于目标场景位置处。
在一个可选的实施例中,组件编辑区域还包括与信息查看组件对应的组件编辑控件;
显示模块1510,还用于响应于在组件编辑控件上的触发操作,显示信息编辑区域,信息编辑区域中包括信息标题编辑区域和信息内容编辑区域;
如图16所示,接收模块1540,还用于接收在信息标题编辑区域中的标题输入操作,生成信息查看组件对应的标题;接收在信息内容编辑区域的内容输入操作,生成信息查看组件对应的信息内容。
在一个可选的实施例中,组件编辑区域中还包括结果选择选项;
设置模块1520,还用于在视频编辑预览区域中设置第二互动视频片段;
设置模块1520,还用于响应于在结果选择选项上的选择操作,对应第二互动视频片段设 置结果选择组件,结果选择组件用于所述互动视频的结局选择。
在一个可选的实施例中,结果选择组件中包括至少两个候选项;
装置还包括:
接收模块1540,用于接收对至少两个候选项中目标候选项的结果视频设置操作;
设置模块1520,还用于根据结果视频设置操作,将目标结果视频设置为与目标候选项关联的结果视频。
在一个可选的实施例中,组件编辑区域中还包括人物选择选项;
设置模块1520,还用于在视频编辑预览区域中设置背景视频片段,背景视频片段用于提供互动视频的故事背景;
设置模块1520,还用于响应于在人物选择选项上的选择操作,在背景视频片段后设置人物选择组件,人物选择组件用于所述互动视频中的扮演角色的选择。
在一个可选的实施例中,组件编辑区域中还包括互动旁观选项;
设置模块1520,还用于响应于在互动旁观选项上的选择操作,在背景视频片段后设置旁观组件和参与组件;响应于在人物选择选项上的选择操作,设置与参与组件关联的人物选择组件。
在一个可选的实施例中,设置模块1520,还用于响应于至少两个人物选择选项设置完毕,针对至少两个人物选择选项和互动视频的故事场景分别设置第一互动片段。
在一个可选的实施例中,组件编辑区域中还包括剧情选择选项;
设置模块1520,还用于响应于在剧情选择选项上的选择操作,在视频编辑预览区域中设置剧情选择组件,剧情选择组件中包括至少两个候选剧情选项;针对至少两个候选剧情选项,分别设置每个候选剧情选项关联的剧***片段。
综上所述,本申请实施例提供的互动视频的创建装置,在创建互动视频的过程中,在互动视频中设置第一互动片段,并在第一互动片段中设置信息查看组件,该信息查看组件中设置有第一互动片段所处场景对应的场景信息,从而增加了互动视频中互动片段的信息量和交互形式,用户能够在第一互动片段中通过对信息查看组件的选择,查看第一互动片段中所包含的信息,提高了与互动视频的互动过程中的人机交互效率。
需要说明的是:上述实施例提供的互动视频的创建装置,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的互动视频的创建装置与互动视频的创建方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。另外,上述实施例中的“多个”是指两个及两个以上,也即“至少两个”。
图17示出了本申请一个示例性实施例提供的终端1700的结构框图。
通常,终端1700包括有:处理器1701和存储器1702。
处理器1701可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1701可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的 至少一种硬件形式来实现。处理器1701也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1701可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。
存储器1702可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1702还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1702中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1701所执行以实现本申请中方法实施例提供的互动视频的创建方法。
在一些实施例中,终端1700还可选包括有:
射频电路用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路通过电磁信号与通信网络以及其他通信设备进行通信。射频电路将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路包括:天线***、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。
显示屏用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。
音频电路可以包括扬声器。扬声器则用于将来自处理器1701或射频电路的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。在一些实施例中,音频电路还可以包括耳机插孔。
电源用于为终端1700中的各个组件进行供电。电源可以是交流电、直流电、一次性电池或可充电电池。当电源包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
本领域技术人员可以理解,图17中示出的结构并不构成对终端1700的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请实施例还提供一种计算机设备,该计算机设备包括存储器和处理器,存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并实现上述实施例中任一所述的互动视频的创建方法。
本申请还提供了一种计算机程序,该计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述实施例中任一所述的互动视频的创建方法。
本申请还提供了一种计算机程序产品,该计算机程序产品包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机 指令,处理器执行该计算机指令,使得该计算机设备执行上述实施例中任一所述的互动视频的创建方法。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,该计算机可读存储介质可以是上述实施例中的存储器中所包含的计算机可读存储介质;也可以是单独存在,未装配入终端中的计算机可读存储介质。该计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现本申请实施例中任一所述的互动视频的创建方法。
可选地,该计算机可读存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、固态硬盘(SSD,Solid State Drives)或光盘等。其中,随机存取记忆体可以包括电阻式随机存取记忆体(ReRAM,Resistance Random Access Memory)和动态随机存取存储器(DRAM,Dynamic Random Access Memory)。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (17)

  1. 一种互动视频的创建方法,其特征在于,应用于计算机设备中,所述方法包括:
    显示互动视频的创建界面,所述创建界面中包括视频编辑预览区域和组件编辑区域,所述组件编辑区域中包括信息查看选项;
    在所述视频编辑预览区域中设置第一互动片段,所述第一互动片段用于提供信息收集场景;
    响应于在所述信息查看选项上的选择操作,在所述第一互动片段中目标场景位置处设置所述信息查看组件,所述信息查看组件用于展示所述信息收集场景内可收集的信息;
    根据所述第一互动片段所设置的所述信息查看组件,生成所述互动视频。
  2. 根据权利要求1所述的方法,其特征在于,所述第一互动片段为虚拟现实片段,所述信息收集场景是所述虚拟现实片段所呈现的三维虚拟场景,所述目标场景位置是所述三维虚拟场景中的三维位置。
  3. 根据权利要求2所述的方法,其特征在于,所述在所述第一互动片段中目标场景位置处设置所述信息查看组件,包括:
    在所述第一互动片段中的目标显示场景中的预设位置处设置所述信息查看组件,所述目标显示场景是所述信息收集场景中包含的至少一个显示场景;
    接收所述信息查看组件上的拖动操作,所述拖动操作用于对所述信息查看组件在所述目标显示场景中的设置位置进行移动;
    根据所述拖动操作确定所述信息查看组件在所述目标显示场景中的所述目标场景位置;
    将所述信息查看组件设置于所述目标场景位置处。
  4. 根据权利要求3所述的方法,其特征在于,所述组件编辑区域还包括与所述信息查看组件对应的组件编辑控件;
    所述将所述信息查看组件设置于所述目标场景位置处之后,还包括:
    响应于所述组件编辑控件上的触发操作,显示信息编辑区域,所述信息编辑区域中包括信息标题编辑区域和信息内容编辑区域;
    接收在所述信息标题编辑区域中的标题输入操作,生成所述信息查看组件对应的标题;
    接收在所述信息内容编辑区域的内容输入操作,生成所述信息查看组件对应的信息内容。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述组件编辑区域中还包括结果选择选项;
    所述方法还包括:
    在所述视频编辑预览区域中设置第二互动视频片段;
    响应于在所述结果选择选项上的选择操作,对应所述第二互动视频片段设置结果选择组件,所述结果选择组件用于所述互动视频的结局选择。
  6. 根据权利要求5所述的方法,其特征在于,所述结果选择组件包括至少两个候选项;
    所述方法还包括:
    接收对所述至少两个候选项中目标候选项的结果视频设置操作;
    根据所述结果视频设置操作,将目标结果视频设置为与所述目标候选项关联的结果视频。
  7. 根据权利要求1至4任一所述的方法,其特征在于,所述组件编辑区域中还包括人物选择选项;
    所述方法还包括:
    在所述视频编辑预览区域中设置背景视频片段,所述背景视频片段用于提供所述互动视频的故事背景;
    响应于在所述人物选择选项上的选择操作,在所述背景视频片段后设置人物选择组件,所述人物选择组件用于所述互动视频中的扮演角色的选择。
  8. 根据权利要求7所述的方法,其特征在于,所述组件编辑区域中还包括互动旁观选项;
    所述响应于在所述人物选择选项上的选择操作,在所述背景视频片段后设置人物选择组件,包括:
    响应于在所述互动旁观选项上的选择操作,在所述背景视频片段后设置旁观组件和参与组件;
    响应于在所述人物选择选项上的选择操作,设置与所述参与组件关联的所述人物选择组件。
  9. 根据权利要求1至4任一所述的方法,其特征在于,所述在所述视频编辑预览区域中设置第一互动片段,包括:
    响应于至少两个人物选择选项设置完毕,针对所述至少两个人物选择选项和所述互动视频的故事场景分别设置所述第一互动片段。
  10. 根据权利要求1至4任一所述的方法,其特征在于,所述组件编辑区域中还包括剧情选择选项;
    所述方法还包括:
    响应于在所述剧情选择选项上的选择操作,在所述视频编辑预览区域中设置剧情选择组件,所述剧情选择组件中包括至少两个候选剧情选项;
    针对所述至少两个候选剧情选项,分别设置每个所述候选剧情选项关联的剧***片段。
  11. 一种互动视频的创建装置,其特征在于,所述装置包括:
    显示模块,用于显示互动视频的创建界面,所述创建界面中包括视频编辑预览区域和组件编辑区域,所述组件编辑区域中包括信息查看选项;
    设置模块,用于在所述视频编辑预览区域中设置第一互动片段,所述第一互动片段用于提供信息收集场景;
    所述设置模块,还用于响应于在所述信息查看选项上的选择操作,在所述第一互动片段 中目标场景位置处设置所述信息查看组件,所述信息查看组件用于展示所述信息收集场景内可收集的信息;
    生成模块,用于根据所述第一互动片段所设置的所述信息查看组件,生成所述互动视频。
  12. 根据权利要求11所述的装置,其特征在于,所述第一互动片段为虚拟现实片段,所述信息收集场景是所述虚拟现实片段所呈现的三维虚拟场景,所述目标场景位置是所述三维虚拟场景中的三维位置。
  13. 根据权利要求12所述的装置,其特征在于,
    所述设置模块,还用于在所述第一互动片段中的目标显示场景中的预设位置处设置所述信息查看组件,所述目标显示场景是所述信息收集场景中包含的至少一个显示场景;
    所述装置还包括:
    接收模块,用于接收在所述信息查看组件上的拖动操作,所述拖动操作用于对所述信息查看组件在所述目标显示场景中的设置位置进行移动;根据所述拖动操作确定所述信息查看组件在所述目标显示场景中的所述目标场景位置;
    所述设置模块,还用于将所述信息查看组件设置于所述目标场景位置处。
  14. 根据权利要求13所述的装置,其特征在于,所述组件编辑区域还包括与所述信息查看组件对应的组件编辑控件;
    所述显示模块,还用于响应于所述组件编辑控件上的触发操作,显示信息编辑区域,所述信息编辑区域中包括信息标题编辑区域和信息内容编辑区域;
    所述接收模块,还用于接收在所述信息标题编辑区域中的标题输入操作,生成所述信息查看组件对应的标题;接收在所述信息内容编辑区域的内容输入操作,生成所述信息查看组件对应的信息内容。
  15. 一种计算机设备,其特征在于,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至10任一所述的互动视频的创建方法。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至10任一所述的互动视频的创建方法。
  17. 一种计算机程序,其特征在于,所述计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中;计算机设备的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述计算机设备执行如权利要求1至10任一所述的互动视频的创建方法。
PCT/CN2021/119639 2020-10-16 2021-09-22 互动视频的创建方法、装置、设备及可读存储介质 WO2022078167A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/981,127 US20230057703A1 (en) 2020-10-16 2022-11-04 Method and apparatus for creating interactive video, device, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011110802.XA CN112261481B (zh) 2020-10-16 2020-10-16 互动视频的创建方法、装置、设备及可读存储介质
CN202011110802.X 2020-10-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/981,127 Continuation US20230057703A1 (en) 2020-10-16 2022-11-04 Method and apparatus for creating interactive video, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022078167A1 true WO2022078167A1 (zh) 2022-04-21

Family

ID=74244629

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119639 WO2022078167A1 (zh) 2020-10-16 2021-09-22 互动视频的创建方法、装置、设备及可读存储介质

Country Status (3)

Country Link
US (1) US20230057703A1 (zh)
CN (1) CN112261481B (zh)
WO (1) WO2022078167A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095413A (zh) * 2022-05-30 2023-05-09 荣耀终端有限公司 视频处理方法及电子设备

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261481B (zh) * 2020-10-16 2022-03-08 腾讯科技(深圳)有限公司 互动视频的创建方法、装置、设备及可读存储介质
CN112954479A (zh) * 2021-01-26 2021-06-11 广州欢网科技有限责任公司 基于电视终端的剧情类游戏实现方法及装置
CN115037960B (zh) * 2021-03-04 2024-04-02 上海哔哩哔哩科技有限公司 互动视频的生成方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108650555A (zh) * 2018-05-15 2018-10-12 优酷网络技术(北京)有限公司 视频界面的展示、交互信息的生成方法、播放器及服务器
US20190057722A1 (en) * 2017-08-18 2019-02-21 BON2 Media Services LLC Embedding interactive content into a shareable online video
CN110784752A (zh) * 2019-09-27 2020-02-11 腾讯科技(深圳)有限公司 一种视频互动方法、装置、计算机设备和存储介质
CN111711856A (zh) * 2020-08-19 2020-09-25 深圳电通信息技术有限公司 交互视频的制作方法、装置、终端、存储介质及播放器
CN111741367A (zh) * 2020-07-23 2020-10-02 腾讯科技(深圳)有限公司 视频互动方法、装置、电子设备及计算机可读存储介质
CN112261481A (zh) * 2020-10-16 2021-01-22 腾讯科技(深圳)有限公司 互动视频的创建方法、装置、设备及可读存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8631453B2 (en) * 2008-10-02 2014-01-14 Sony Corporation Video branching
CN104883627A (zh) * 2015-06-22 2015-09-02 田志明 一种情节影视及其播映装置与方法
CN107295359B (zh) * 2016-04-11 2020-05-01 腾讯科技(北京)有限公司 一种视频播放方法、装置、计算设备和存储介质
CN106254941A (zh) * 2016-10-10 2016-12-21 乐视控股(北京)有限公司 视频处理方法及装置
CN108156523A (zh) * 2017-11-24 2018-06-12 互影科技(北京)有限公司 交互视频播放的互动方法及装置
CN108124187A (zh) * 2017-11-24 2018-06-05 互影科技(北京)有限公司 交互视频的生成方法及装置
CN109446346A (zh) * 2018-09-14 2019-03-08 传线网络科技(上海)有限公司 多媒体资源编辑方法及装置
WO2020190736A1 (en) * 2019-03-15 2020-09-24 Rct Studio Inc. Methods, systems, and apparatuses for production of an interactive movie
CN111193960B (zh) * 2019-09-27 2022-12-27 腾讯科技(深圳)有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN111432277B (zh) * 2020-04-01 2022-10-14 咪咕视讯科技有限公司 视频播放方法、电子设备及计算机可读存储介质
CN111556370B (zh) * 2020-04-02 2022-10-25 北京奇艺世纪科技有限公司 一种互动视频交互方法、装置、***及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190057722A1 (en) * 2017-08-18 2019-02-21 BON2 Media Services LLC Embedding interactive content into a shareable online video
CN108650555A (zh) * 2018-05-15 2018-10-12 优酷网络技术(北京)有限公司 视频界面的展示、交互信息的生成方法、播放器及服务器
CN110784752A (zh) * 2019-09-27 2020-02-11 腾讯科技(深圳)有限公司 一种视频互动方法、装置、计算机设备和存储介质
CN111741367A (zh) * 2020-07-23 2020-10-02 腾讯科技(深圳)有限公司 视频互动方法、装置、电子设备及计算机可读存储介质
CN111711856A (zh) * 2020-08-19 2020-09-25 深圳电通信息技术有限公司 交互视频的制作方法、装置、终端、存储介质及播放器
CN112261481A (zh) * 2020-10-16 2021-01-22 腾讯科技(深圳)有限公司 互动视频的创建方法、装置、设备及可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095413A (zh) * 2022-05-30 2023-05-09 荣耀终端有限公司 视频处理方法及电子设备
CN116095413B (zh) * 2022-05-30 2023-11-07 荣耀终端有限公司 视频处理方法及电子设备

Also Published As

Publication number Publication date
CN112261481B (zh) 2022-03-08
US20230057703A1 (en) 2023-02-23
CN112261481A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2022078167A1 (zh) 互动视频的创建方法、装置、设备及可读存储介质
US9712862B2 (en) Apparatus, systems and methods for a content commentary community
RU2527199C2 (ru) Совместный выбор мультимедиа с интегрированными видеообразами
US9245020B2 (en) Collaborative media sharing
US20140380167A1 (en) Systems and methods for multiple device interaction with selectably presentable media streams
WO2020207106A1 (zh) 关注用户的信息展示方法、装置、设备及存储介质
US10897637B1 (en) Synchronize and present multiple live content streams
US20150296033A1 (en) Life Experience Enhancement Via Temporally Appropriate Communique
CN111279709B (zh) 提供视频推荐
CN102790922B (zh) 多媒体播放器及分享多媒体的方法
US20220201341A1 (en) Method, apparatus and device for game live-streaming
US20150294633A1 (en) Life Experience Enhancement Illuminated by Interlinked Communal Connections
JP2014082582A (ja) 視聴装置、コンテンツ提供装置、視聴プログラム、及びコンテンツ提供プログラム
CN114430494B (zh) 界面显示方法、装置、设备及存储介质
CN112188223B (zh) 直播视频播放方法、装置、设备及介质
US20230027035A1 (en) Automated narrative production system and script production method with real-time interactive characters
US20230156245A1 (en) Systems and methods for processing and presenting media data to allow virtual engagement in events
US11315607B2 (en) Information processing apparatus, information processing method, and non-transitory computer readable medium
US20230336838A1 (en) Graphically animated audience
KR102067360B1 (ko) 실시간 그룹 스트리밍 콘텐츠 처리 방법 및 장치
CN114827701A (zh) 多媒体信息互动方法、装置、电子设备以及存储介质
Sconce The golden age of badness
KR102615377B1 (ko) 방송 체험 서비스의 제공 방법
US20220124383A1 (en) Audio bullet screen processing method and device
WO2024125046A1 (zh) 虚拟角色的显示方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879213

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/09/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21879213

Country of ref document: EP

Kind code of ref document: A1