CN114915855A - Virtual video program loading method - Google Patents

Virtual video program loading method

Info

Publication number
CN114915855A
Authority
CN
China
Prior art keywords
virtual
data
video program
program
virtual video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210474902.3A
Other languages
Chinese (zh)
Inventor
王唯翔
宋丽婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202210474902.3A priority Critical patent/CN114915855A/en
Publication of CN114915855A publication Critical patent/CN114915855A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a loading method for a virtual video program. The method comprises the following steps: acquiring resource package data corresponding to a virtual video program in a virtual video program list item, wherein the resource package data comprises at least one of the following: character model data of a virtual character in the virtual video program, action characteristic data of the virtual character, and expression characteristic data of the virtual character; and rendering, displaying and loading the resource package data in real time, and configuring a virtual camera view angle for the virtual character according to the program type of the virtual video program. The invention thereby realizes a new video presentation mode and, combined with virtual rendering technology, provides more viewing angles and interaction modes, improving the entertainment effect and experience of video playing.

Description

Virtual video program loading method
Technical Field
The invention relates to the technical field of computers, in particular to a loading method of a virtual video program.
Background
In the related art, gala shows such as the annual Spring Festival Gala are presented by physical performers and props on a physical stage; some shows add a background screen that plays audio and video matching the foreground performance. Such presentations lack interest and interactivity for the audience in front of the screen. In a few special segments, virtual elements are introduced, for example virtual characters presented on stage through optical technologies such as AR, but these virtual performers cannot interact with the user, and the user can only passively accept the viewing angle chosen by the director and cannot flexibly select a viewing angle.
Video programs in the related art lack interest and interactivity.
In view of the above problems in the related art, no effective solution has been found at present.
Disclosure of Invention
The embodiment of the invention provides a loading method of a virtual video program.
According to an embodiment of the present invention, a method for loading a virtual video program is provided, including: acquiring resource package data corresponding to a virtual video program in a virtual video program list item, wherein the resource package data comprises at least one of the following: character model data of a virtual character in the virtual video program, action characteristic data of the virtual character, and expression characteristic data of the virtual character; and rendering, displaying and loading the resource package data in real time, and configuring a virtual camera view angle for the virtual character according to the program type of the virtual video program.
Optionally, configuring a virtual camera view angle for the virtual character includes: receiving a view angle selection instruction, wherein the view angle selection instruction comprises a first view angle selection instruction and/or a second view angle selection instruction, the first view angle selection instruction indicates a camera movement track of the virtual camera, the second view angle selection instruction indicates a camera position of the virtual camera, a plurality of camera positions are arranged in the virtual scene where the virtual character is located, and each camera position corresponds to a shooting view angle range; and configuring the virtual camera view angle of the virtual character according to the view angle selection instruction.
Optionally, while the resource package data is rendered, displayed and loaded in real time, and after the virtual camera view angle for the virtual character has been configured according to the program type of the virtual video program, the method further includes: after the resource package data is played at a playing end, storing the resource package data at the playing end; at the playing end, performing at least one of the following reconfiguration operations on the resource package data: editing character appearance parameters of the virtual character, and editing virtual camera parameters of the virtual video program; and storing the reconfigured resource package data.
Optionally, while the resource package data is rendered, displayed and loaded in real time, the method further includes: issuing control permission for the virtual character to a user account; receiving a control instruction initiated by the user account based on the control permission; and, in response to the control instruction, controlling the virtual character to perform a corresponding action and/or expression in the virtual video program.
Optionally, while the resource package data is rendered, displayed and loaded in real time, the method further includes: acquiring background material resources or stage material resources of the virtual video program; and using the background material resources or the stage material resources to render and display, in real time, the background picture of the loaded virtual video program.
Optionally, after acquiring the resource package data corresponding to the virtual video program in the virtual video program list item, the method further includes: receiving a user selection instruction for selecting custom feature data of the virtual character; and replacing the initial action characteristic data and/or expression characteristic data in the resource package data with the custom feature data.
Optionally, the method further includes: locating a game environment component in a virtual game scene, wherein the game environment component is embedded in a fixed area in the virtual game scene; and when the virtual video program is played in the virtual game scene, synchronously driving the character model data on the game environment component so as to enable the virtual character to execute the operation corresponding to the virtual video program on the game environment component.
Optionally, after the real-time rendering, displaying and loading of the resource package data, the method further includes: acquiring interaction information aiming at the virtual role or the virtual video program from a local playing end; and presenting the interactive information in a playing picture of the virtual video program.
Optionally, rendering, displaying and loading the resource package data in real time includes: loading the resource package data in a designated map area of a virtual game scene; determining, in the virtual game scene, the interactive virtual characters participating in the virtual video program, and determining the subscription accounts of the virtual video program; and rendering and displaying the playing picture of the virtual video program on the playing ends where the interactive virtual characters and the subscription accounts are located.
Optionally, acquiring the resource package data corresponding to the virtual video program in the virtual video program list item includes: displaying a list of performance elements available in a resource pool of the virtual video program, wherein the list of performance elements includes at least one of the following: character models, character clothing, props, sounds, scene objects, and face-pinching data; and receiving a selection instruction for a target performance element, and adding the target performance element to the resource package data.
Optionally, the method further includes: sharing the data packet of the action characteristic data to a playing end of the virtual video program; and after receiving the transfer instruction of the data packet, transferring the data packet to the user account of the playing end.
Optionally, the obtaining of the resource package data corresponding to the virtual video program in the virtual video program list item includes: analyzing the program style type of the virtual video program; searching the role data and the scene data matched with the program style types in a resource library; adding the character data and the scene data to the asset pack data.
Optionally, configuring, according to the program type of the virtual video program, a virtual camera view angle for the virtual character includes: if the program type of the virtual video program is a single-person program, configuring a lens view angle within a preset lens permission range for the virtual character; if the program type of the virtual video program is a multi-person program, configuring the view angle of a preset motion-trajectory lens for showing the interaction actions among the virtual characters; and if the program type of the virtual video program is a solo program, configuring a preset close-up view angle for the virtual character.
According to another embodiment of the present invention, there is provided an apparatus for loading a virtual video program, including: the acquisition module is used for acquiring resource packet data corresponding to a virtual video program in a virtual video program list item, wherein the resource packet data comprises at least one of the following data: the virtual video program comprises character model data of virtual characters in the virtual video program, action characteristic data of the virtual characters and expression characteristic data of the virtual characters; and the loading module is used for performing real-time rendering, display and loading on the resource packet data and configuring a virtual camera view angle aiming at the virtual role according to the program type of the virtual video program.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
By means of the invention, resource package data corresponding to a virtual video program in a virtual video program list item is obtained, the resource package data including character model data of the virtual characters in the virtual video program, action characteristic data of the virtual characters, and expression characteristic data of the virtual characters; the resource package data is rendered, displayed and loaded in real time, and a virtual camera view angle is configured for the virtual characters according to the program type of the virtual video program. By acquiring the resource package data of the virtual video program and configuring the virtual camera view angle for the characters in the program during real-time rendering, display and loading, a new video presentation mode is realized, and, combined with virtual rendering technology, more viewing angles and interaction modes are provided, thereby improving the entertainment effect and experience of video playing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a loading computer for virtual video programs according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a loading method of a virtual video program according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of laying out virtual cameras in a virtual scene according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an embodiment of configuring a lens view according to program types;
fig. 5 is a block diagram of a loading apparatus for virtual video programs according to an embodiment of the present invention;
fig. 6 is a structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile phone, a tablet, a server, a computer, or a similar electronic terminal. Taking an example of the virtual video program running on a computer, fig. 1 is a block diagram of a hardware structure of a virtual video program loading computer according to an embodiment of the present invention. As shown in fig. 1, the computer may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is illustrative only and is not intended to limit the configuration of the computer described above. For example, a computer may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the virtual video program loading method in an embodiment of the present invention; the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, thereby implementing the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. In this embodiment, the processor 102 is configured to control a target virtual character to perform a specified operation to complete a game task in response to a human-machine interaction instruction and a game policy. The memory 104 is used for storing program scripts, configuration information, attribute information of virtual characters, and the like.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Optionally, the input/output device 108 further includes a human-computer interaction screen, which acquires human-computer interaction instructions through a human-computer interaction interface and presents streaming media pictures.
in this embodiment, a method for loading a virtual video program is provided, and fig. 2 is a schematic flowchart of a method for loading a virtual video program according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring resource package data corresponding to the virtual video program in the virtual video program list item, wherein the resource package data comprises at least one of the following: character model data of the virtual characters in the virtual video program, action characteristic data of the virtual characters, and expression characteristic data of the virtual characters;
the virtual video program list item of this embodiment includes a plurality of virtual video programs configured in advance, a program identifier list of the virtual video program list item may be displayed on software interfaces of video playing software, game software, and the like of a playing end, where the program identifier may be a program title, a program introduction, a character introduction, and the like, and a user triggers a playing request for generating a virtual video program by clicking a certain program identifier at a playing end point, and obtains resource package data corresponding to the virtual video program in response to the playing request.
Optionally, the resource packet data corresponding to the virtual video program may be read locally from the playing end, or may be acquired from a network server, a cloud server, or the like, and may be adapted to different network architectures according to conditions such as different application scenarios and rendering capabilities of the playing end.
In some examples, besides the character model data of the virtual characters in the virtual video program, the action characteristic data of the virtual characters, the expression characteristic data of the virtual characters, and the like, the resource package data may also include background data of the virtual video program, which is used to render and display the character background picture, captions, and the like of the loaded virtual video program.
The resource package data in this embodiment is similar to the game resource data in the installation SDK (Software Development Kit) of a mobile game: the resource package data is rendered by the rendering engine software provided in the SDK to compute the image frames of the video. In this embodiment, the resource package data may be resource data such as models, actions, expressions, and scene-cut animations of virtual characters performing on a virtual stage, prepared for rendering the video frames of the virtual video program; it is preset in advance according to the video content of the virtual video program and/or updated in real time during rendering and display.
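For illustration only, the following Python sketch shows one possible in-memory layout of such resource package data and a loading routine that prefers the local playing end and falls back to a remote server; the class and function names (ResourcePackage, load_resource_package, fetch_remote) are hypothetical assumptions and are not prescribed by this embodiment.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CharacterAssets:
    """Per-character assets carried in the resource package (illustrative layout)."""
    character_id: str
    model_data: bytes                                         # character model: mesh, skeleton, textures
    action_features: dict = field(default_factory=dict)       # action name -> motion clip data
    expression_features: dict = field(default_factory=dict)   # expression name -> expression curve data

@dataclass
class ResourcePackage:
    """Resource package data for one virtual video program (illustrative layout)."""
    program_id: str
    program_type: str                                          # e.g. "single_person", "multi_person", "dance"
    characters: list = field(default_factory=list)             # list of CharacterAssets
    background_data: Optional[bytes] = None                    # optional background/stage material

def load_resource_package(program_id: str, local_cache: dict, fetch_remote) -> ResourcePackage:
    """Read the package locally at the playing end if cached, otherwise fetch it
    from a network or cloud server, as this embodiment allows either source."""
    if program_id in local_cache:
        return local_cache[program_id]
    package = fetch_remote(program_id)   # any transport callable, e.g. an HTTP request to a resource server
    local_cache[program_id] = package
    return package
```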
And step S204, rendering, displaying and loading the resource package data in real time, and configuring a virtual camera view angle for the virtual character according to the program type of the virtual video program.
Optionally, the program types of the virtual video program include: single-person programs, two-person programs, multi-person programs, crosstalk (xiangsheng) programs, dance programs, sports/game event programs, film and television programs, and the like. The virtual video program includes at least one virtual character. The virtual camera view angle may be configured automatically according to the program type, or configured according to a selection instruction of the user so as to correspond to that instruction. The virtual character of this embodiment may also be an object other than a human, such as an animal or an NPC (non-player character).
Through the above steps, resource package data corresponding to a virtual video program in a virtual video program list item is obtained, the resource package data including character model data of the virtual characters in the virtual video program, action characteristic data of the virtual characters, and expression characteristic data of the virtual characters; the resource package data is rendered, displayed and loaded in real time, and a virtual camera view angle is configured for the virtual characters according to the program type of the virtual video program. By acquiring the resource package data of the virtual video program and configuring the virtual camera view angle for the characters in the program during real-time rendering, display and loading, a new video presentation mode is realized, and, combined with virtual rendering technology, more viewing angles and interaction modes are provided, thereby improving the entertainment effect and experience of video playing.
In one implementation of this embodiment, configuring a virtual camera perspective for the virtual character includes:
s11, receiving a visual angle selection instruction;
in one example, receiving the perspective selection instruction includes at least one of: receiving a first visual angle selection instruction, wherein the first visual angle selection instruction is used for indicating a mirror moving track of a virtual camera; and receiving a second visual angle selection instruction, wherein the second visual angle selection instruction is used for indicating the camera positions of the virtual camera, a plurality of camera positions are arranged in the virtual scene where the virtual role is located, and each camera position corresponds to a shooting visual angle range.
The playing end of the virtual video program can receive a view angle selection instruction from the user, so that the performance can be watched with multiple camera movements and multiple camera positions: the user can define the camera movement track, camera position, and the like by himself for real-time viewing or for a second viewing, thereby presenting a custom view angle picture of the virtual video program.
Optionally, the first view angle selection instruction may be a gesture instruction, such as a sliding track; the sliding track is the same as or matched with the movement track controlling the virtual camera of the virtual video program, so that when the virtual video program is played and displayed, it presents picture switching corresponding to the sliding track.
In this embodiment, a plurality of camera positions are also arranged in the virtual scene corresponding to the virtual video program. For example, the virtual video program includes virtual character 1 and virtual character 2, and five camera positions are configured for virtual character 1 and for virtual character 2 respectively, for example in the front, back, left, right, and overhead directions, each camera position corresponding to its own shooting view angle.
Fig. 3 is a schematic diagram of laying out virtual cameras in a virtual scene according to an embodiment of the present invention, where the virtual scene includes a virtual camera 1, a virtual camera 2, and a virtual camera 3, where the virtual scene includes an object 1, an object 2, and the like, the virtual camera 1 may collect a picture of the virtual scene in a right view, the virtual camera 2 may collect a picture of the virtual scene in a middle view, and the virtual camera 3 may collect a picture of the virtual scene in a left view.
And S12, configuring the virtual camera view angle of the virtual character according to the view angle selection instruction.
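A minimal Python sketch of steps S11 and S12 follows, assuming a handful of camera positions laid out around the virtual character, each with its own shooting view angle range; the data structures, position names and numeric values are illustrative assumptions, not the embodiment's actual configuration format.

```python
from dataclasses import dataclass

@dataclass
class CameraPosition:
    name: str            # e.g. "front", "back", "left", "right", "top"
    offset: tuple        # camera offset relative to the virtual character
    fov_degrees: float   # shooting view angle range of this camera position

# Illustrative layout of several camera positions around one virtual character.
CAMERA_POSITIONS = {
    "front": CameraPosition("front", (0.0, 1.6, 3.0), 60.0),
    "back":  CameraPosition("back",  (0.0, 1.6, -3.0), 60.0),
    "left":  CameraPosition("left",  (-3.0, 1.6, 0.0), 60.0),
    "right": CameraPosition("right", (3.0, 1.6, 0.0), 60.0),
    "top":   CameraPosition("top",   (0.0, 4.0, 0.0), 75.0),
}

@dataclass
class VirtualCamera:
    offset: tuple = (0.0, 1.6, 3.0)
    fov_degrees: float = 60.0
    track: list = None   # optional camera movement track (list of keyframed poses)

def configure_view_angle(selection: dict, camera: VirtualCamera) -> VirtualCamera:
    """Step S12: apply a first-type instruction (camera movement track) and/or a
    second-type instruction (camera position) received in step S11."""
    if "track" in selection:                        # first view angle selection instruction
        camera.track = selection["track"]
    if "position" in selection:                     # second view angle selection instruction
        pos = CAMERA_POSITIONS[selection["position"]]
        camera.offset, camera.fov_degrees = pos.offset, pos.fov_degrees
    return camera
```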
By adopting the scheme of this embodiment, the user defines the image-capturing view angle of the virtual camera of at least one virtual character in the virtual video program, so that the same virtual video program can be presented differently to different viewers ("a thousand faces for a thousand viewers"). By customizing the camera movement track, which can be formed by switching and combining the virtual cameras of multiple virtual characters, a rendering result along a custom virtual camera view angle track is obtained. The user can thus compile virtual video programs of his own, editing a single program or combining and editing multiple virtual video programs in the virtual video program list items and sharing them.
In an embodiment of this embodiment, when performing real-time rendering, display, and loading on resource package data, after configuring a virtual camera view angle for a virtual character according to a program type of a virtual video program, the method further includes: after the resource packet data is played at the playing end, storing the resource packet data to the playing end; at the playing end, performing at least one of the following reconfiguration operations on the resource packet data: editing the character image parameters of the virtual character, and editing the virtual camera parameters of the virtual video program; and storing the reconfigured resource packet data.
In this embodiment, after the program is obtained (for example, the original virtual video program is resource package data generated from a professional director's cut) and the resource package data is downloaded to the client (or a resource backup is obtained from the cloud), the virtual characters in the resource package data can be re-edited, for example by face pinching or changing costumes; the user can also set the camera view angle and camera movement again, and the video is re-rendered and loaded, so that a reconfigured video is generated and then stored or shared. In this embodiment, the reconfiguration operation performed on the resource package data at the playing end may be dynamic or static: dynamic configuration reconfigures the resource package data while it is being played, whereas static configuration configures it outside of playback, for example configuring the character model parameters, clothing, props, and scene parameters of the virtual character and then updating and replacing them in the resource package data.
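A minimal sketch of such a reconfiguration operation, assuming the downloaded package is held as a plain dictionary; the keys (characters, face_params, camera_config) and function names are illustrative only.

```python
import copy
import json
import pathlib

def reconfigure_package(package: dict, new_face: dict = None, new_camera: dict = None) -> dict:
    """Re-edit a package already played and stored at the playing end: optionally
    replace face-pinching parameters of a character and/or the virtual camera
    parameters, then return the reconfigured copy."""
    edited = copy.deepcopy(package)
    if new_face is not None:
        edited["characters"][0]["face_params"] = new_face   # e.g. face pinching / costume change
    if new_camera is not None:
        edited["camera_config"] = new_camera                # e.g. new camera position and movement track
    return edited

def save_package(package: dict, path: str) -> None:
    """Persist the reconfigured resource package for later playback or sharing."""
    pathlib.Path(path).write_text(json.dumps(package))
```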
In an embodiment of this embodiment, when performing real-time rendering, displaying and loading on the resource package data, the method further includes: issuing the control authority of the virtual role to the user account; receiving a control instruction initiated by a user account based on a control authority; and responding to the control instruction, and controlling the virtual character to execute corresponding action and/or expression in the virtual video program.
In this embodiment, the control permission is the permission to control the virtual character to perform actions. Since the virtual character in the virtual video program consists of a character model, and the virtual character is intended to perform like a real person, the character model needs to be driven; the resource package data therefore includes driving data for the character model, such as the action characteristic data and expression characteristic data. This embodiment may offer a number of instruction options, such as "go left" and "go right", or "attack" and "escape", so that, based on the user's control instructions, the plot direction of the virtual video program can be determined within a predetermined range.
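The following sketch illustrates this permission check and instruction dispatch under assumed names; the instruction options and the character driver calls (play_action, play_expression) are hypothetical, not an API defined by this embodiment.

```python
# Illustrative mapping from the instruction options offered to the authorized user
# account to the action/expression the virtual character should perform.
INSTRUCTION_OPTIONS = {
    "go_left":  {"action": "walk_left"},
    "go_right": {"action": "walk_right"},
    "attack":   {"action": "attack", "expression": "angry"},
    "escape":   {"action": "run_away", "expression": "scared"},
}

def handle_control_instruction(account: str, authorized_accounts: set, instruction: str, character) -> bool:
    """Accept an instruction only from an account holding control permission, then
    drive the character model within the predetermined range of options."""
    if account not in authorized_accounts or instruction not in INSTRUCTION_OPTIONS:
        return False
    option = INSTRUCTION_OPTIONS[instruction]
    character.play_action(option["action"])              # assumed character driver call
    if "expression" in option:
        character.play_expression(option["expression"])  # assumed character driver call
    return True
```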
In an embodiment of this embodiment, when performing real-time rendering, displaying and loading on the resource package data, the method further includes: acquiring background material resources or stage material resources of the virtual video program; and rendering and displaying the background picture of the loaded virtual video program in real time by adopting the background material resource or the stage material resource.
By loading background resources in the stage, a macro modeling and stage effect can be realized in the virtual video program.
In an implementation manner of this embodiment, after acquiring the resource package data corresponding to the virtual video program in the virtual video program list item, the method further includes: receiving a user selection instruction, wherein the user selection instruction is used for selecting the user-defined feature data of the virtual role; and replacing the initial action characteristic data and/or expression characteristic data in the resource package data by the user-defined characteristic data.
The system can preset multiple groups of resource package data, such as performance action data of professional actors, performance action data imitated by netizens, performance action data of folk talents, or performance action data of professional artists, and the user can select among the different presentation pictures rendered from the custom feature data. For example, the virtual video program is "flute", and the package data of the program itself is data 1; for a certain segment, the system also provides two other interpretations of that segment, corresponding to data 2 and data 3. After a user selection instruction for data 2 is received, data 1 in the program's own package data can be replaced with data 2.
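A short sketch of this replacement, assuming the package stores feature data per segment; the dictionary keys are illustrative assumptions.

```python
def apply_custom_features(package: dict, segment_id: str, custom: dict) -> dict:
    """Replace the initial action/expression feature data of one segment (e.g. data 1)
    with the user-selected variant (e.g. data 2); key names are illustrative."""
    segment = package["segments"][segment_id]
    if "actions" in custom:
        segment["action_features"] = custom["actions"]
    if "expressions" in custom:
        segment["expression_features"] = custom["expressions"]
    return package
```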
In an example of this embodiment, the scheme further includes: positioning a game environment component in the virtual game scene, wherein the game environment component is embedded in a fixed area in the virtual game scene; and when the virtual video program is played in the virtual game scene, synchronously driving character model data on the game environment component so that the virtual character executes the operation corresponding to the virtual video program on the game environment component.
This example can be applied in a game scene: the resource package data of a virtual video program is loaded and played inside the game scene, forming an in-game stage scene (such as karaoke). The game environment component may be a screen, a group of curtain walls, or the like in the game scene, serving as an embedded playing interface for the virtual video program. Virtual characters exist both in the game scene and on the game environment component; the character shown on the game environment component is the program's performer, while the virtual character in the game scene is an in-game NPC or a PCC (Player-Controlled Character). Since the virtual character in the virtual video program comes from the game scene, "online" and "offline" linkage can be performed in the game scene: when a virtual character in the "online" virtual video program performs a certain action, the corresponding virtual character in the "offline" game scene is driven to perform the same action, such as dance movements.
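The following sketch illustrates this online/offline linkage under assumed names; GameEnvironmentComponent and the play_action driver call are hypothetical stand-ins for the game engine's own objects.

```python
class GameEnvironmentComponent:
    """Stand-in for a screen or curtain-wall component embedded in a fixed area of
    the game scene; it keeps references to in-scene characters that mirror the program."""

    def __init__(self):
        self.mirrored_characters = []   # in-game characters bound to this component

    def on_program_action(self, character_id: str, action_clip: str) -> None:
        # When the 'online' program character performs an action, synchronously drive
        # the same character model in the 'offline' game scene.
        for character in self.mirrored_characters:
            if character.character_id == character_id:
                character.play_action(action_clip)   # assumed character driver call
```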
In an implementation manner of this embodiment, after performing real-time rendering, displaying and loading on the resource package data, the method further includes: acquiring interaction information aiming at a virtual character or a virtual video program from a local playing end; and presenting the interactive information in the playing picture of the virtual video program.
In this embodiment, the virtual video program and the user may interact with each other, and the program playing end presents interaction information triggered by the user in the real world in a special effect in a form of 3D or the like.
This embodiment can realize synchronization and deep fusion of the real world and the virtual world: interactions in the real world are expressed in the picture of the virtual world. The picture of the virtual video program in the virtual world (similar to a game picture or a short-video picture) is interactive, that is, it receives the user's interaction information, such as likes and favorites, barrage (bullet comments), and votes, and this information can be displayed synchronously in the playing picture of the virtual video program. In this embodiment, while the virtual video program picture is being rendered, the local client/playing end is provided with UI design elements (e.g. interaction controls) for interacting with the viewer: for example, a like for a virtual character shown in the program, a gift for that character, or barrage information is sent to the server that pushes the resource package data, and the server then sends the collected UI interaction information to the clients of the other viewers of the virtual video program for synchronous display. UI interactions such as voting are designed on a similar principle and are not described again here. The UI interaction design may be specific to a particular virtual character in the video (since there may be multiple virtual characters, for example in a sketch-comedy program) and can be configured accordingly, so as to increase the degree of interaction and the entertainment value of the virtual video. Preferably, the UI interaction may be loaded directly as a trigger on the 3D model of a virtual character; for example, continuously clicking the model of one of the actors in a program triggers interaction information that is sent to the server.
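For illustration, a sketch of how the local playing end might package such an interaction and how every client might render the pushed result; the message fields and the transport/overlay calls are assumptions, not a specific product API.

```python
import json
import time

def send_interaction(server_send, program_id: str, character_id: str, kind: str, payload: str = "") -> None:
    """Package an interaction (like, gift, barrage, vote) triggered at the local playing
    end, optionally targeted at one virtual character, and send it to the server that
    pushes the resource package data; server_send is any transport callable."""
    message = {
        "program_id": program_id,
        "character_id": character_id,   # may single out one performer among several
        "kind": kind,                   # "like" | "gift" | "barrage" | "vote"
        "payload": payload,             # e.g. barrage text or gift name
        "timestamp": time.time(),
    }
    server_send(json.dumps(message))

def on_interaction_pushed(message: str, overlay) -> None:
    """At every viewer's client, render the pushed interaction into the playing picture,
    for example as a 3D special effect or an on-screen barrage."""
    data = json.loads(message)
    overlay.show(data["kind"], data["payload"])   # assumed overlay/UI call
```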
In some implementation scenarios of the present embodiment, the real-time rendering, displaying and loading the resource package data includes: loading resource package data in a designated map area of a virtual game scene; determining an interactive virtual role participating in a virtual video program in a virtual game scene, and determining a subscription account number of the virtual video program; and rendering and displaying a playing picture of the virtual video program on a playing end where the interactive virtual character and the subscription account are located.
In this embodiment, the subscription account of the virtual video program may belong to a server that distributes subscribable interactive virtual videos, such as "Beijing Satellite Television" or a variety-show channel. Preferably, a virtual video whose content centres on a virtual character can also serve as a subscription topic or website based on that virtual character. For example, a certain internet-celebrity IP produces virtual videos that can be subscribed to for updates in an app such as Douyin or WeChat Channels. The player at the client can then use the rendering engine to render the video data updated under the subscription account for loading and playing.
This implementation scenario is a game scene. The stage of the virtual video program is a map resource; only the NPCs and PCCs participating in the virtual video program can be loaded and used or interacted with in this map resource. When rendering output is performed, the playing picture of the virtual video program is displayed on the game clients that have unlocked or purchased the interactive virtual character in the game; it is loaded and rendered at a certain position of the game map and can also be shown to players who subscribe to the virtual video program. In this implementation scenario, the virtual stage of the virtual video program is treated as a map resource similar to a game scene and is opened for the audience to edit, then rendered to form a personalized stage for each viewer. A game scene may serve as the stage background of the virtual video program, and the user has the permission to obtain the stage background resources and edit them. Preferably, an NPC character may be purchased in the stage system to complete interactions of specific performance actions realized by AI, such as dance and martial arts, or to act as a performing character in a specific scenario; the purchasable NPC character is part of the game map, serves as an accessory resource of the map resource, and is bound to the map resource.
In one implementation of this embodiment, acquiring the resource package data corresponding to the virtual video program in the virtual video program list item includes: displaying a list of performance elements available in the resource library of the virtual video program, wherein the list of performance elements includes at least one of the following: character models, character clothing, props, sounds, scene objects, and face-pinching data; and receiving a selection instruction for a target performance element, and adding the target performance element to the resource package data.
In this embodiment, the performance elements used by the virtual video program may be sold or traded. For example, in the resource package data of the virtual video program itself, the virtual character has no hair accessory and the character model used is model 1; through user selection or purchase, a hair accessory may be added to the resource package data and model 1 adjusted to model 2, so that when the virtual video program is finally played, the virtual character is rendered based on model 2 and wears the hair accessory, and the performance element list includes the hair accessory and model 2. In this embodiment, entries for viewing, selecting and purchasing virtual digital assets such as the 3D model of a virtual character, or physical merchandise derived from it (figurines, posters and the like corresponding to the virtual character model), may also be added to the virtual video program, similar to the entry links of an e-commerce platform, so that the user can complete transactions in virtual digital assets or physical assets while watching the virtual video program.
In an implementation of this embodiment, the method further includes: sharing a data packet of the action characteristic data with a playing end of the virtual video program; and, after receiving a transfer instruction for the data packet, transferring the data packet to the user account of that playing end. In this embodiment, besides property data such as the character model data in the above embodiment, the action characteristic data of a virtual character may also be traded. For example, when the virtual video of a particular performer is loaded in the virtual video program, the action characteristic data of that performer as a virtual character may be sold to the audience, so that this action characteristic data can be rendered and loaded onto other virtual characters; the audience member's own virtual character model can then quickly reproduce that performer's specific action data, and the purchased action characteristic data becomes a digital asset under the audience user's account.
The data packet of the action characteristic data carries data such as dance gestures, martial arts and expression packets of virtual characters performed in the virtual video program, a user wants to acquire program data in the virtual video program and can acquire the program data through purchasing, exchanging and the like, the playing end transfers the program data to a user account of the playing end, and specifically, the configuration data, action driving data and the like of the expression packets and video segments on the surface layer or the virtual characters on the bottom layer are presented.
In one implementation of this embodiment, the acquiring the resource package data corresponding to the virtual video program in the virtual video program list item includes: analyzing the program style type of the virtual video program; searching role data and scene data matched with the program style types in a resource library; character data and scene data are added to the resource package data.
The virtual video program of this embodiment further includes a stage system, which is configured to load the character data and scene data of the virtual video program that match the program style type on stage, or to select the character data and scene data based on a user instruction and add them to the resource package data of the virtual video program. Programs can be loaded and switched in the stage system for the presentation of an evening gala; for example, program 1 conforms to style 1, and when the resource package data of program 1 is loaded, the data of style 1 (character data and scene data) are synchronously loaded and rendered for display.
The scheme of this embodiment may be applied to scenarios with multiple program types. Besides matching the style, the method may further match view angle parameters to the program type of the virtual video program; configuring the virtual camera view angle for the virtual character according to the program type then includes: if the program type of the virtual video program is a single-person program, configuring a lens view angle within a preset lens permission range for the virtual character; if the program type is a multi-person program, configuring the view angle of a preset motion-trajectory lens for showing the interaction actions of the multiple virtual characters; and if the program type is a solo program, configuring a preset close-up shot view angle for the virtual character. In this embodiment, preferably, after the resource data is acquired through the program list selected by the viewer, the configuration data of the virtual camera view angle corresponding to the virtual video program (camera position + view range + trajectory data) is acquired from the server at the same time. The server stores the camera configuration data corresponding to each virtual video program and, once the video list selected by the client is determined, automatically generates the camera configuration data corresponding to the virtual video program. Preferably, the server manages the stored virtual video programs and applies different secondary configurations to the stored camera configuration data for different types of virtual video programs, so as to optimize loading and display at the client and improve the display effect of the virtual video programs. Preferably, the camera configuration data may be opened to the user for editing; after being modified and edited, it may be stored as user data of the viewer and shared with other viewers, or sold together with the program video in a transaction.
Fig. 4 is a schematic diagram of configuring a view angle according to program type according to an embodiment of the present invention, where a range shot, a track shot, and a close-up shot are allocated: the range shot can rotate within a preset lens permission range, the track shot moves the camera along a preset movement track, and the close-up shot captures the picture at a fixed angle.
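A sketch of such per-program-type camera configuration, mirroring Fig. 4; the type labels, angle limits, and trajectory values are illustrative assumptions rather than values specified by this embodiment.

```python
def camera_config_for_program(program_type: str) -> dict:
    """Return an illustrative virtual camera configuration per program type, mirroring
    Fig. 4: a range shot constrained to a preset permission range, a track shot following
    a preset movement trajectory for multi-person interaction, and a fixed close-up shot."""
    if program_type == "multi_person":
        return {"shot": "track",
                "trajectory": [(0.0, 2.0, 6.0), (3.0, 2.0, 3.0), (0.0, 2.0, -6.0)]}   # preset movement track
    if program_type == "solo_close_up":
        return {"shot": "close_up", "offset": (0.0, 1.7, 1.2), "fov": 35.0}            # fixed close-up angle
    # Default: single-person program with a shot rotatable inside a preset permission range.
    return {"shot": "range", "yaw_limits": (-45.0, 45.0), "pitch_limits": (-10.0, 30.0)}
```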
In one implementation of this embodiment, program data for the virtual video program is also configured; this program data may be the video frames of a live-action video program. Before acquiring the resource package data corresponding to the virtual video program in the virtual video program list item, the scheme further includes: acquiring the program data of a target program from a program copy server, and acquiring the character model data of a target virtual character from a game server; and generating character control data for the target virtual character according to the program data, the resource package data corresponding to the virtual video program including this character control data. During the rendering phase, the character control data can drive the character model data so that the target virtual character performs frame actions corresponding to the video frames of the live-action video program.
In this embodiment, the virtual character may be controlled by an audio rhythm point to generate character control data, or the virtual character may be controlled by a switching point of a video screen to generate character control data.
In one example of this embodiment, the generating the character control data of the target virtual character from the program data includes: generating a rhythm point queue according to the program data, wherein each element in the rhythm point queue corresponds to a rhythm point type and rhythm time; aiming at each rhythm point in the rhythm point queue, searching a first control instruction matched with the type of the rhythm point in a preset instruction set; and replacing the rhythm point type of each element in the rhythm point queue by using a first control instruction to obtain a first control instruction queue, wherein each element in the first control instruction queue corresponds to one instruction type and instruction triggering time, and the instruction triggering time is the same as or lags behind the corresponding rhythm time by a fixed preset time length.
Optionally, generating the rhythm point queue according to the program data includes: analyzing the audio parameters in the program data to generate a frequency response curve and a harmonic curve; locating the following target points in the frequency response curve: beat points of percussion instruments and pitch fluctuation points; locating the following target point in the harmonic curve: a timbre switching point; and determining each target point as a rhythm point, then arranging all rhythm points in chronological order to obtain the rhythm point queue.
Taking the program data as an MV (Music Video) and the virtual character as an NPC as an example, the rhythm points of the audio can be of various types, such as the beat points of percussion instruments (e.g. drum hits), pitch fluctuation points, and timbre switching points. The control flow for controlling the NPC through audio rhythm points is as follows: obtain the rhythm point queue of the MV, where each element in the rhythm point queue corresponds to one rhythm point type and rhythm time; convert the rhythm point queue into a control instruction queue based on a preset mapping relationship, where each element in the control instruction queue corresponds to one instruction type and instruction trigger time, and the instruction trigger time is the same as the corresponding rhythm time or lags it by a fixed preset duration; monitor the MV progress; and, after an instruction trigger time is reached, trigger the control instruction for the NPC to control the NPC to perform and switch actions.
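A minimal Python sketch of this flow, assuming a preset instruction set mapping rhythm point types to actions; the type names, instruction names, and the NPC driver call are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RhythmPoint:
    point_type: str   # e.g. "drum_beat", "pitch_shift", "timbre_switch"
    time_sec: float   # rhythm time within the MV

# Preset instruction set: rhythm point type -> first control instruction (illustrative).
INSTRUCTION_SET = {
    "drum_beat":     "stomp",
    "pitch_shift":   "raise_arms",
    "timbre_switch": "switch_pose",
}

def build_instruction_queue(rhythm_points: list, delay_sec: float = 0.0) -> list:
    """Replace each rhythm point's type with the matching control instruction; the
    trigger time equals the rhythm time or lags it by a fixed preset duration."""
    return [(INSTRUCTION_SET[p.point_type], p.time_sec + delay_sec)
            for p in rhythm_points if p.point_type in INSTRUCTION_SET]

def drive_npc(progress_sec: float, queue: list, npc) -> list:
    """Monitor the MV progress, fire every instruction whose trigger time has been
    reached, and return the instructions that are still pending."""
    pending = []
    for instruction, trigger_time in queue:
        if progress_sec >= trigger_time:
            npc.play_action(instruction)   # assumed NPC driver call
        else:
            pending.append((instruction, trigger_time))
    return pending
```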
In another example of the present embodiment, the program data is video data, and the generating character control data of the target virtual character based on the program data includes: generating a picture switching point queue according to the program data, wherein each element in the picture switching point queue corresponds to a picture switching type and switching time; searching a second control instruction matched with the picture switching type of each picture switching point in the picture switching point queue in a preset instruction set; and replacing the picture switching type of each element in the picture switching point queue by using a second control instruction to obtain a second control instruction queue, wherein each element in the second control instruction queue corresponds to one instruction type and instruction triggering time, and the instruction triggering time is the same as the corresponding switching time or lags behind the corresponding switching time by a fixed preset time length.
Optionally, generating the picture switching point queue according to the program data includes: analyzing video frame pictures in the program data frame by frame, and extracting the starting frames of the following picture conversion from the program data: character switching, background switching, prop switching and plot bridge section switching; and determining each initial frame as a picture switching point, and arranging all the picture switching points according to time sequence combination to obtain a picture switching point queue.
Still taking the program data as an MV and the virtual character as an NPC, the switching points of the video picture may also be of various types, such as switching of characters in the picture, switching of backgrounds, switching of props, switching of plot segments, and general picture switching (if the pixel coincidence ratio between the current frame and the previous frame is less than 50%, a picture switch is determined). The control flow for controlling the NPC through video picture switching points is as follows: obtain the switching point queue of the MV, where each element corresponds to one picture switching type and switching time; convert the switching point queue into a control instruction queue based on a preset mapping relationship, where each element corresponds to one instruction type and instruction trigger time, and the instruction trigger time is the same as the corresponding switching time or lags it by a fixed preset duration; monitor the MV progress; and, after an instruction trigger time is reached, trigger the control instruction for the NPC to control the NPC to perform and switch actions.
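A sketch of the picture-switch detection and queue construction, using the embodiment's 50% pixel coincidence criterion; the per-pixel tolerance and the generic switch label are assumptions, since classifying a switch as a character, background, prop or segment switch would require additional recognition logic.

```python
import numpy as np

def is_picture_switch(prev_frame: np.ndarray, cur_frame: np.ndarray,
                      tolerance: int = 10, min_overlap: float = 0.5) -> bool:
    """Treat the current frame as a picture switching point when the pixel coincidence
    ratio with the previous frame drops below 50% (the embodiment's criterion); the
    per-pixel tolerance is an assumed implementation detail. Frames are uint8 arrays."""
    diff = np.abs(prev_frame.astype(int) - cur_frame.astype(int))
    if diff.ndim == 3:
        diff = diff.max(axis=-1)                      # reduce colour channels to one value per pixel
    coincidence_ratio = float((diff <= tolerance).mean())
    return coincidence_ratio < min_overlap

def build_switch_queue(frames, fps: float) -> list:
    """Scan decoded frames in order and collect (switch_time, switch_type) pairs; every
    detected switch is labelled generically here."""
    queue, prev = [], None
    for index, frame in enumerate(frames):
        if prev is not None and is_picture_switch(prev, frame):
            queue.append((index / fps, "picture_switch"))
        prev = frame
    return queue
```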
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
This embodiment further provides a loading apparatus for a virtual video program, which is used to implement the foregoing embodiments and preferred implementations; details that have already been described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a loading apparatus for a virtual video program according to an embodiment of the present invention. As shown in Fig. 5, the apparatus includes an obtaining module 50 and a loading module 52, wherein:
the obtaining module 50 is configured to obtain resource package data corresponding to a virtual video program in a virtual video program list item, where the resource package data includes at least one of: character model data of virtual characters in the virtual video program, action feature data of the virtual characters, and expression feature data of the virtual characters; and
the loading module 52 is configured to render, display, and load the resource package data in real time, and to configure a virtual camera view angle for the virtual character according to the program type of the virtual video program. A sketch of these two modules follows.
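A minimal Python sketch of the two modules, assuming a resource_store lookup, a renderer with a render() method, and a camera_rig with a configure() method (all illustrative names not given in the text):

```python
class ObtainingModule:
    """Counterpart of the obtaining module 50: fetches the resource package data
    for a virtual video program chosen from the program list item."""
    def __init__(self, resource_store):
        self.resource_store = resource_store   # assumed lookup: program_id -> package dict

    def obtain(self, program_id: str) -> dict:
        # The package may carry character model data, action feature data and/or
        # expression feature data of the program's virtual characters.
        return self.resource_store[program_id]


class LoadingModule:
    """Counterpart of the loading module 52: renders the package in real time and
    configures the virtual camera view angle according to the program type."""
    def __init__(self, renderer, camera_rig):
        self.renderer = renderer
        self.camera_rig = camera_rig

    def load(self, package: dict, program_type: str) -> None:
        self.renderer.render(package)            # real-time rendering / display / loading
        self.camera_rig.configure(program_type)  # camera view angle per program type
```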
It should be noted that the above modules may be implemented by software or by hardware; in the latter case, the following forms are possible but not limiting: the modules are all located in the same processor, or the modules are located in different processors in any combination.
Example 3
Fig. 6 is a structural diagram of an electronic device according to an embodiment of the present invention. As shown in Fig. 6, the electronic device includes a processor 61, a communication interface 62, a memory 63, and a communication bus 64, where the processor 61, the communication interface 62, and the memory 63 communicate with one another through the communication bus 64, and the memory 63 is configured to store a computer program.
The processor 61 is configured to implement the following steps when executing the program stored in the memory 63: acquiring resource package data corresponding to a virtual video program in a virtual video program list item, where the resource package data includes at least one of: character model data of virtual characters in the virtual video program, action feature data of the virtual characters, and expression feature data of the virtual characters; rendering, displaying, and loading the resource package data in real time; and configuring a virtual camera view angle for the virtual character according to the program type of the virtual video program.
Optionally, configuring the virtual camera view angle for the virtual character includes: receiving a view angle selection instruction; and configuring the virtual camera view angle of the virtual character according to the view angle selection instruction.
Optionally, receiving the view angle selection instruction includes at least one of: receiving a first view angle selection instruction, where the first view angle selection instruction indicates a camera movement track of the virtual camera; and receiving a second view angle selection instruction, where the second view angle selection instruction indicates a camera position of the virtual camera, a plurality of camera positions are arranged in the virtual scene where the virtual character is located, and each camera position corresponds to a shooting view angle range.
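A sketch of how the two kinds of selection instruction might be applied to the virtual camera; the camera methods (follow_track, move_to, set_view_range) and the preset camera positions are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class TrackSelection:
    """First view angle selection instruction: a camera movement track to follow."""
    waypoints: List[Tuple[float, float, float]]   # assumed world-space points over time

@dataclass
class PositionSelection:
    """Second view angle selection instruction: pick one of several preset camera
    positions, each covering its own shooting view angle range."""
    position_id: int

# Illustrative preset positions; the real layout would come from the virtual scene.
CAMERA_POSITIONS = {
    0: {"location": (0.0, 2.0, -8.0), "view_range_deg": (30.0, 90.0)},
    1: {"location": (6.0, 3.0, 0.0),  "view_range_deg": (60.0, 120.0)},
}

def configure_virtual_camera(instruction: Union[TrackSelection, PositionSelection], camera) -> None:
    """Apply whichever selection instruction was received to the virtual camera."""
    if isinstance(instruction, TrackSelection):
        camera.follow_track(instruction.waypoints)
    else:
        seat = CAMERA_POSITIONS[instruction.position_id]
        camera.move_to(seat["location"])
        camera.set_view_range(seat["view_range_deg"])
```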
Optionally, after the resource package data is rendered, displayed, and loaded in real time and the virtual camera view angle is configured for the virtual character according to the program type of the virtual video program, the method further includes: after the resource package data has been played at a playing end, storing the resource package data at the playing end; performing, at the playing end, at least one of the following reconfiguration operations on the resource package data: editing character appearance parameters of the virtual character and editing virtual camera parameters of the virtual video program; and saving the reconfigured resource package data.
Optionally, while the resource package data is being rendered, displayed, and loaded in real time, the method further includes: issuing control authority over the virtual character to a user account; receiving a control instruction initiated by the user account based on the control authority; and controlling, in response to the control instruction, the virtual character to perform the corresponding action and/or expression in the virtual video program.
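A minimal sketch of this authority gate, assuming the virtual character exposes play_action and play_expression methods (illustrative names):

```python
class AuthorityGate:
    """Tracks which user accounts currently hold control authority over a character."""
    def __init__(self):
        self.granted = set()

    def issue(self, account_id: str) -> None:
        """Issue control authority over the virtual character to a user account."""
        self.granted.add(account_id)

    def handle(self, account_id: str, command: dict, character) -> bool:
        """Apply an action/expression command only if the account holds authority."""
        if account_id not in self.granted:
            return False
        if "action" in command:
            character.play_action(command["action"])           # assumed character API
        if "expression" in command:
            character.play_expression(command["expression"])   # assumed character API
        return True
```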
Optionally, while the resource package data is being rendered, displayed, and loaded in real time, the method further includes: acquiring background material resources or stage material resources of the virtual video program; and using the background material resources or the stage material resources to render and display, in real time, the background picture on which the virtual video program is loaded.
Optionally, after acquiring the resource package data corresponding to the virtual video program in the virtual video program list item, the method further includes: receiving a user selection instruction for selecting custom feature data of the virtual character; and replacing the initial action feature data and/or expression feature data in the resource package data with the custom feature data.
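A sketch of the replacement step, assuming the resource package is a dictionary with action_features and expression_features keys (the key names are assumptions):

```python
def apply_custom_features(package: dict, selection: dict) -> dict:
    """Overwrite the initial action/expression feature data in the resource package
    with the custom feature data chosen by the user selection instruction."""
    updated = dict(package)
    if "action_features" in selection:
        updated["action_features"] = selection["action_features"]
    if "expression_features" in selection:
        updated["expression_features"] = selection["expression_features"]
    return updated
```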
Optionally, the method further includes: locating a game environment component in a virtual game scene, where the game environment component is embedded in a fixed area of the virtual game scene; and, when the virtual video program is played in the virtual game scene, synchronously driving the character model data on the game environment component so that the virtual character performs, on the game environment component, the operations corresponding to the virtual video program.
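A sketch of such a component, assuming per-character animation tracks that can be sampled at the current program time (the track and model interfaces are illustrative assumptions):

```python
class GameEnvironmentComponent:
    """A component embedded in a fixed area of the virtual game scene (a stage, say)
    on which the program's character models are driven in sync with playback."""
    def __init__(self, area_bounds):
        self.area_bounds = area_bounds    # assumed fixed region of the scene
        self.mounted_models = []

    def mount(self, character_model) -> None:
        self.mounted_models.append(character_model)

    def drive(self, program_time: float, animation_tracks: dict) -> None:
        """Sample each character's animation track at the current program time so the
        virtual characters perform the program's operations on this component."""
        for model in self.mounted_models:
            pose = animation_tracks[model.name].sample(program_time)  # assumed track API
            model.apply_pose(pose)                                    # assumed model API
```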
Optionally, after rendering, displaying, and loading the resource package data in real time, the method further includes: acquiring, from a local playing end, interaction information directed at the virtual character or the virtual video program; and presenting the interaction information in the playing picture of the virtual video program.
Optionally, rendering, displaying, and loading the resource package data in real time includes: loading the resource package data in a designated map area of a virtual game scene; determining, in the virtual game scene, the interactive virtual characters participating in the virtual video program, and determining the subscription accounts of the virtual video program; and rendering and displaying the playing picture of the virtual video program on the playing ends where the interactive virtual characters and the subscription accounts are located.
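A sketch of how the set of playing ends might be collected; the scene query, participation flag, and owner_client attribute are assumptions for illustration:

```python
def collect_playback_targets(scene, program_id: str, map_area: str,
                             subscriptions: dict) -> set:
    """Decide which playing ends should render the program: the ends whose characters
    are interacting with it inside the designated map area, plus the ends of the
    program's subscription accounts."""
    targets = set()
    for character in scene.characters_in_area(map_area):        # assumed scene query
        if character.is_interacting_with(program_id):            # assumed participation flag
            targets.add(character.owner_client)                   # assumed owning playing end
    targets.update(subscriptions.get(program_id, set()))         # subscription account ends
    return targets
```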
Optionally, obtaining the resource package data corresponding to the virtual video program in the virtual video program list item includes: displaying a list of performance elements available in a resource pool of the virtual video program, where the list of performance elements includes at least one of: a character model, character clothing, props, sounds, scene objects, and face-pinching (facial customization) data; and receiving a selection instruction for a target performance element and adding the target performance element to the resource package data.
Optionally, the method further includes: sharing a data packet of the action feature data with a playing end of the virtual video program; and, after receiving a transfer instruction for the data packet, transferring the data packet to the user account of that playing end.
Optionally, obtaining the resource package data corresponding to the virtual video program in the virtual video program list item includes: analyzing the program style type of the virtual video program; searching a resource library for character data and scene data matching the program style type; and adding the character data and the scene data to the resource package data.
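A sketch of the style-matching step, assuming the resource library is keyed by style tags (the tags and library layout are illustrative assumptions):

```python
def assemble_package_by_style(program, style_library: dict) -> dict:
    """Look up character data and scene data that match the program's style type and
    add them to the resource package (style tags and library layout are assumed)."""
    style = program.style_type                        # e.g. the hypothetical "hip-hop"
    matched = style_library.get(style, {})
    return {
        "character_data": matched.get("characters", []),
        "scene_data": matched.get("scenes", []),
    }
```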
Optionally, configuring the virtual camera view angle for the virtual character according to the program type of the virtual video program includes: if the program type of the virtual video program is a single-person program, configuring, for the virtual character, a lens view angle within a preset lens authority range; if the program type of the virtual video program is a multi-person program, configuring a view angle of a preset motion-track lens for displaying the interaction actions of the virtual characters; and if the program type of the virtual video program is a single-person program, configuring a preset close-up view angle for the virtual character.
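A sketch of the per-type configuration; the preset names are assumptions, and because the text lists two configurations for a single-person program (an authority-range lens and a close-up), the sketch applies both to that branch:

```python
def camera_config_for(program_type: str) -> dict:
    """Map a program type to an assumed camera preset, following the branches above."""
    if program_type == "single":
        # Single-person program: authority-range lens plus a preset close-up view.
        return {"mode": "authority_range_lens", "close_up": True}
    if program_type == "multi":
        # Multi-person program: motion-track lens framing the characters' interactions.
        return {"mode": "motion_track_lens", "frame_interactions": True}
    return {"mode": "default"}
```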
The communication bus mentioned above for the terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment provided by the present application, there is further provided a computer-readable storage medium, having stored therein instructions, which when executed on a computer, cause the computer to execute the loading method of the virtual video program described in any of the above embodiments.
In another embodiment provided by the present application, there is also provided a computer program product containing instructions, which when run on a computer, causes the computer to execute the loading method of the virtual video program described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method for loading a virtual video program, comprising:
acquiring resource package data corresponding to a virtual video program in a virtual video program list item, wherein the resource package data comprises at least one of the following: character model data of virtual characters in the virtual video program, action feature data of the virtual characters, and expression feature data of the virtual characters;
and rendering, displaying, and loading the resource package data in real time, and configuring a virtual camera view angle for the virtual character according to the program type of the virtual video program.
2. The method of claim 1, wherein configuring a virtual camera view for the virtual character comprises:
receiving a view angle selection instruction, wherein the view angle selection instruction comprises a first view angle selection instruction and/or a second view angle selection instruction, the first view angle selection instruction indicates a camera movement track of the virtual camera, the second view angle selection instruction indicates a camera position of the virtual camera, a plurality of camera positions are arranged in the virtual scene where the virtual character is located, and each camera position corresponds to a shooting view angle range;
and configuring the virtual camera view angle of the virtual character according to the view angle selection instruction.
3. The method of claim 1, wherein, after configuring the virtual camera view angle for the virtual character according to the program type of the virtual video program while the resource package data is rendered, displayed, and loaded in real time, the method further comprises:
after the resource package data has been played at a playing end, storing the resource package data at the playing end;
at the playing end, performing at least one of the following reconfiguration operations on the resource package data: editing the character image parameters of the virtual character, and editing the virtual camera parameters of the virtual video program;
and storing the reconfigured resource packet data.
4. The method of claim 1, wherein, while the resource package data is being rendered, displayed, and loaded in real time, the method further comprises:
issuing the control authority of the virtual role to a user account;
receiving a control instruction initiated by the user account based on the control authority;
and responding to the control instruction, and controlling the virtual character to execute corresponding action and/or expression in the virtual video program.
5. The method of claim 1, wherein after obtaining the package data corresponding to the virtual video program in the virtual video program list item, the method further comprises:
receiving a user selection instruction, wherein the user selection instruction is used for selecting the user-defined feature data of the virtual role;
and replacing the initial action characteristic data and/or expression characteristic data in the resource package data by the user-defined characteristic data.
6. The method of claim 1, further comprising:
locating a game environment component in a virtual game scene, wherein the game environment component is embedded in a fixed area in the virtual game scene;
and when the virtual video program is played in the virtual game scene, synchronously driving the character model data on the game environment component so as to enable the virtual character to execute the operation corresponding to the virtual video program on the game environment component.
7. The method of claim 1, wherein the real-time rendering, displaying and loading of the resource package data comprises:
loading the resource package data in a designated map area of a virtual game scene;
determining an interactive virtual role participating in the virtual video program in the virtual game scene, and determining a subscription account of the virtual video program;
and rendering and displaying a playing picture of the virtual video program on a playing end where the interactive virtual character and the subscription account are located.
8. The method of claim 1, wherein obtaining the package data corresponding to the virtual video program in the virtual video program list item comprises:
displaying a list of performance elements available in a resource pool of the virtual video program, wherein the list of performance elements includes at least one of: a character model, character clothing, props, sounds, scene objects, and face-pinching data;
and receiving a selection instruction of a target performance element, and adding the target performance element into the resource package data.
9. The method of claim 1, further comprising:
sharing the data packet of the action characteristic data to a playing end of the virtual video program;
and after receiving the transfer instruction of the data packet, transferring the data packet to the user account of the playing end.
10. The method of claim 1, wherein configuring the virtual camera view for the virtual character according to the program type of the virtual video program comprises:
if the program type of the virtual video program is a single-person program, configuring, for the virtual character, a lens view angle within a preset lens authority range;
if the program type of the virtual video program is a multi-person program, configuring a view angle of a preset motion-track lens for displaying the interaction actions of a plurality of virtual characters;
and if the program type of the virtual video program is a single-person program, configuring a preset close-up view angle for the virtual character.
CN202210474902.3A 2022-04-29 2022-04-29 Virtual video program loading method Pending CN114915855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210474902.3A CN114915855A (en) 2022-04-29 2022-04-29 Virtual video program loading method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210474902.3A CN114915855A (en) 2022-04-29 2022-04-29 Virtual video program loading method

Publications (1)

Publication Number Publication Date
CN114915855A true CN114915855A (en) 2022-08-16

Family

ID=82764900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210474902.3A Pending CN114915855A (en) 2022-04-29 2022-04-29 Virtual video program loading method

Country Status (1)

Country Link
CN (1) CN114915855A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9473758B1 (en) * 2015-12-06 2016-10-18 Sliver VR Technologies, Inc. Methods and systems for game video recording and virtual reality replay
CN111050189A (en) * 2019-12-31 2020-04-21 广州酷狗计算机科技有限公司 Live broadcast method, apparatus, device, storage medium, and program product
CN111277845A (en) * 2020-01-15 2020-06-12 网易(杭州)网络有限公司 Game live broadcast control method and device, computer storage medium and electronic equipment
CN111598983A (en) * 2020-05-18 2020-08-28 北京乐元素文化发展有限公司 Animation system, animation method, storage medium, and program product
CN111629225A (en) * 2020-07-14 2020-09-04 腾讯科技(深圳)有限公司 Visual angle switching method, device and equipment for live broadcast of virtual scene and storage medium
CN111698390A (en) * 2020-06-23 2020-09-22 网易(杭州)网络有限公司 Virtual camera control method and device, and virtual studio implementation method and system
CN112121423A (en) * 2020-09-24 2020-12-25 苏州幻塔网络科技有限公司 Control method, device and equipment of virtual camera
CN114155322A (en) * 2021-12-01 2022-03-08 北京字跳网络技术有限公司 Scene picture display control method and device and computer storage medium
CN114401442A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Video live broadcast and special effect control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11794102B2 (en) Cloud-based game streaming
JP6857251B2 (en) Video content switching and synchronization system, and how to switch between multiple video formats
US9254438B2 (en) Apparatus and method to transition between a media presentation and a virtual environment
CN105210373B (en) Provide a user the method and system of personalized channels guide
CN108986192B (en) Data processing method and device for live broadcast
CN113965811A (en) Play control method and device, storage medium and electronic device
CN113965812A (en) Live broadcast method, system and live broadcast equipment
CN114900678B (en) VR end-cloud combined virtual concert rendering method and system
CN105704502A (en) Live video interactive method and device
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
US20220219090A1 (en) DYNAMIC AND CUSTOMIZED ACCESS TIERS FOR CUSTOMIZED eSPORTS STREAMS
CN114615513A (en) Video data generation method and device, electronic equipment and storage medium
CN113784160A (en) Video data generation method and device, electronic equipment and readable storage medium
CN115151319A (en) Presenting pre-recorded game play video for in-game player assistance
CN114173173B (en) Bullet screen information display method and device, storage medium and electronic equipment
CN113274727B (en) Live interaction method and device, storage medium and electronic equipment
WO2024104333A1 (en) Cast picture processing method and apparatus, electronic device, and storage medium
CN110798692A (en) Video live broadcast method, server and storage medium
CN114168044A (en) Interaction method and device for virtual scene, storage medium and electronic device
US11845011B2 (en) Individualized stream customizations with social networking and interactions
US20240004529A1 (en) Metaverse event sequencing
KR20190035026A (en) Method, apparatus and computer program for providing video contents
CN111667313A (en) Advertisement display method and device, client device and storage medium
CN114915855A (en) Virtual video program loading method
CN115225949A (en) Live broadcast interaction method and device, computer storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination