CN110213640B - Virtual article generation method, device and equipment - Google Patents

Virtual article generation method, device and equipment

Info

Publication number
CN110213640B
CN110213640B CN201910578523.7A
Authority
CN
China
Prior art keywords
image data
virtual article
video
adjusted
transparent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578523.7A
Other languages
Chinese (zh)
Other versions
CN110213640A (en)
Inventor
刘伟
庞炳新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuomi Private Ltd
Original Assignee
Hong Kong LiveMe Corp ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hong Kong LiveMe Corp ltd filed Critical Hong Kong LiveMe Corp ltd
Priority to CN201910578523.7A priority Critical patent/CN110213640B/en
Publication of CN110213640A publication Critical patent/CN110213640A/en
Priority to PCT/CN2020/077034 priority patent/WO2020258907A1/en
Application granted granted Critical
Publication of CN110213640B publication Critical patent/CN110213640B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a method, an apparatus, and a device for generating a virtual article. The method is applied to a server: when a generation instruction for a virtual article is detected, the server acquires first image data, audio data, and a play timestamp of a first video corresponding to the instruction, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier that keeps the content of the first image data and the audio data synchronized. The server adjusts the first image data according to preset appearance information of the virtual article and transcodes the adjusted first image data to obtain the virtual article, then displays the virtual article and plays the audio data according to the play timestamp. The scheme guarantees audio-video synchronization while diversifying the display effect of the generated virtual article.

Description

Virtual article generation method, device and equipment
Technical Field
The invention relates to the technical field of virtual articles, and in particular to a method, an apparatus, and a device for generating a virtual article.
Background
Internet clients commonly provide virtual articles so that users can perform virtual activities with them, such as carrying out virtual transactions or decorating a personal network community: for example, purchasing a virtual gift for an anchor in a live-streaming client, or decorating an avatar and a personal homepage with a virtual pendant in social software. Such a virtual article is typically a static image, such as an image of flowers or fireworks. Because a static image has only one fixed picture, the display effect of the generated virtual article tends to be monotonous.
Sound effects can therefore be added to virtual articles to diversify their display effect. In the related art, however, a sound effect is added simply by playing music while the virtual article is displayed, so the virtual article and the sound effect easily fall out of step; that is, sound and picture are not synchronized. How to guarantee audio-video synchronization while diversifying the display effect of generated virtual articles is therefore an urgent problem.
Disclosure of Invention
The embodiments of the invention aim to provide a method, an apparatus, and a device for generating a virtual article, so as to guarantee audio-video synchronization in the diversified display effect of the generated virtual article. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating a virtual article, which is applied to a server, and the method includes:
when a generation instruction of a virtual article is detected, acquiring first image data, audio data and a playing time stamp of a first video corresponding to the generation instruction; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data;
adjusting the first image data according to preset appearance information of the virtual article, and transcoding the adjusted first image data to obtain the virtual article;
and displaying the virtual article and playing the audio data according to the playing time stamp.
Optionally, before the step of adjusting the first image data according to the preset appearance information of the virtual article and transcoding the adjusted first image data to obtain the virtual article, the method further includes:
acquiring second image data of a second video with transparent picture color; the second image data provides a picture of the second video;
the step of adjusting the first image data according to the preset appearance information of the virtual article, transcoding the adjusted first image data, and generating the virtual article includes:
adjusting the first image data and the second image data according to preset appearance information of the virtual article to obtain adjusted first image data and adjusted second image data;
acquiring a transparent position belonging to a transparent area in a display area for displaying the virtual article corresponding to the generation instruction;
masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data;
and obtaining a virtual article based on the masked image data.
Optionally, the step of masking the transparent position in the adjusted first image data with a transparent color by using the adjusted second image data to obtain masked image data includes:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is the transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position in the display area except the transparent position;
and setting the mask video and the article video to be displayed together to obtain masked image data.
Optionally, the step of masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data includes:
taking the transparent position of the adjusted first image data as a transparent channel;
filling the pixels at the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
taking pixels except the pixels at the transparent position in the adjusted first image data as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain masked image data.
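The alpha-channel variant above can be sketched in a few lines. The following is an illustrative model only, not the patented implementation: images are modeled as nested lists of single-channel pixels, and the function name `render_masked`, the `transparent_positions` set, and the (x, y) coordinate convention are all assumptions made for the sketch.

```python
def render_masked(first, second, transparent_positions):
    """Pixels of the first (adjusted) image data at transparent positions are
    treated as the transparent channel and filled from the second
    (transparent-colour) image data; all remaining pixels are kept as
    non-transparent channel data. Images are row-major lists of rows."""
    out = []
    for y, row in enumerate(first):
        out_row = []
        for x, pixel in enumerate(row):
            if (x, y) in transparent_positions:
                out_row.append(second[y][x])  # transparent channel data
            else:
                out_row.append(pixel)         # non-transparent channel data
        out.append(out_row)
    return out

first = [[1, 2], [3, 4]]
second = [[0, 9], [8, 0]]
masked = render_masked(first, second, {(0, 0), (1, 1)})
print(masked)  # [[0, 2], [3, 0]]
```

Rendering the two channel groups together then yields the masked image data described above.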
Optionally, before the step of transcoding the masked image data to obtain a virtual article, the method further includes:
acquiring third image data and a position relation between the third image data and the masked image data, where the third image data serves as a special-effect element of the virtual article;
adding the third image data to the masked image data according to the position relation to obtain special effect image data;
the step of transcoding the masked image data to obtain a virtual article comprises:
and transcoding the special effect image data to obtain a virtual article.
Optionally, each pixel of an image is taken as an element of a matrix, with the pixel's position in the image as the element's position in the matrix. The position relation is then the correspondence between elements of a second pixel matrix and elements of a first pixel matrix, where the second pixel matrix is the pixel matrix corresponding to the third image data and the first pixel matrix is the pixel matrix corresponding to the masked image data;
the step of adding the third image data to the masked image data according to the position relationship to obtain special effect image data includes:
converting the masked image data and the third image data into a first matrix and a second matrix, respectively;
adding elements in the second matrix in the first matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
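The matrix-based overlay described above admits a compact sketch. This is an illustrative model under stated assumptions: pixels are single-channel numbers, the position relation is a dict mapping (row, col) in the second matrix to (row, col) in the first, and `add_effect` is a hypothetical name. The patent text says elements of the second matrix are "added" in the first matrix, so element-wise addition is used here, though an overlay that replaces values would follow the same structure.

```python
def add_effect(first_matrix, second_matrix, relation):
    """Add the special-element matrix (third image data) into a copy of the
    masked image data's pixel matrix at the positions given by `relation`,
    producing the third matrix for the special-effect image data."""
    third = [row[:] for row in first_matrix]  # copy; first matrix is kept intact
    for (sr, sc), (fr, fc) in relation.items():
        third[fr][fc] += second_matrix[sr][sc]
    return third

first = [[0, 0, 0], [0, 0, 0]]
effect = [[7]]                 # a one-pixel special element
relation = {(0, 0): (1, 2)}    # place it at the bottom-right of the first matrix
third = add_effect(first, effect, relation)
print(third)  # [[0, 0, 0], [0, 0, 7]]
```

Converting `third` back into image data would give the special-effect image data to be transcoded.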
In a second aspect, an embodiment of the present invention provides an apparatus for generating a virtual article, which is applied to a server, and includes:
the data acquisition module is used for acquiring first image data, audio data and a playing time stamp of a first video corresponding to a generation instruction when the generation instruction of the virtual article is detected; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data;
the virtual article generation module is used for adjusting the first image data according to preset appearance information of a virtual article, and transcoding the adjusted first image data to obtain the virtual article;
and the virtual article display module is used for displaying the virtual article and playing the audio data according to the playing time stamp.
Optionally, the data obtaining module is specifically configured to:
acquire, before the first image data is adjusted according to the preset appearance information of the virtual article and the adjusted first image data is transcoded to obtain the virtual article, second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video;
the virtual article generation module includes: the mask submodule and the virtual article acquisition submodule are connected;
the mask submodule is used for adjusting the first image data and the second image data according to preset appearance information of the virtual article to obtain adjusted first image data and adjusted second image data; acquiring a transparent position belonging to a transparent area in a display area for displaying the virtual article corresponding to the generation instruction; masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data;
and the virtual article obtaining sub-module is used for obtaining a virtual article based on the masked image data.
Optionally, the mask sub-module is specifically configured to:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is the transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position in the display area except the transparent position;
and displaying the mask video and the article video together to obtain masked image data.
Optionally, the mask sub-module is specifically configured to:
taking the transparent position of the adjusted first image data as a transparent channel;
filling the pixels at the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
taking pixels except the pixels at the transparent position in the adjusted first image data as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain masked image data.
Optionally, the data obtaining module is specifically configured to:
acquiring third image data and a position relation between the third image data and the masked image data, where the third image data serves as a special-effect element of the virtual article;
adding the third image data to the masked image data according to the position relation to obtain special effect image data;
the virtual article obtaining sub-module is specifically configured to:
and transcoding the special effect image data to obtain a virtual article.
Optionally, each pixel of an image is taken as an element of a matrix, with the pixel's position in the image as the element's position in the matrix. The position relation is then the correspondence between elements of a second pixel matrix and elements of a first pixel matrix, where the second pixel matrix is the pixel matrix corresponding to the third image data and the first pixel matrix is the pixel matrix corresponding to the masked image data;
the data acquisition module is specifically configured to:
converting the masked image data and the third image data into a first matrix and a second matrix, respectively;
adding elements in the second matrix in the first matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the bus; a memory for storing a computer program; and a processor configured to execute the program stored in the memory to implement the steps of the virtual article generation method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the method for generating a virtual article provided in the first aspect.
In the scheme provided by this embodiment of the invention, when the server detects a generation instruction for a virtual article, it acquires the first image data, the audio data, and the play timestamp of the first video corresponding to the instruction; it then adjusts the first image data according to the preset appearance information of the virtual article and transcodes the adjusted first image data to obtain the virtual article, which it displays, while playing the audio data, according to the play timestamp. Because the virtual article is generated from the first image data of the first video, with the audio data of the first video as its sound effect, the picture effect of the virtual article matches the content of the first image data and the sound effect matches the content of the audio data. The play timestamp of the first video further guarantees that the first image data and the audio data remain synchronized in content, so displaying the virtual article and playing the audio data according to the play timestamp keeps the picture effect and the sound effect in step. The scheme therefore guarantees audio-video synchronization in the diversified display effect of the generated virtual article.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a schematic flow chart of a method for generating a virtual article according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a virtual article generation method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a virtual article generation apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a virtual article generation apparatus according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, a method for generating a virtual article according to an embodiment of the present invention will be described.
The method for generating a virtual article provided by the embodiments of the present invention may be applied to a server corresponding to an internet-related client. The server may be a desktop computer, a portable computer, an internet television, an intelligent mobile terminal, a wearable intelligent terminal, and so on; this is not limited here, and any server capable of implementing the embodiments of the present invention falls within their protection scope.
In particular applications, the internet-related client may take many forms: for example, a live-streaming client or a social client. Accordingly, the virtual article may also vary; for example, it may be a gift, a pendant, or the like.
As shown in fig. 1, a flow of a method for generating a virtual article according to an embodiment of the present invention may include the following steps:
s101, when a generation instruction of the virtual article is detected, first image data, audio data and a playing time stamp of a first video corresponding to the generation instruction are obtained. Wherein the first image data provides a picture of a first video; the audio data provides sound of the first video; the play time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data.
To generate diversified virtual articles, generation instructions for different virtual articles may be issued. Accordingly, the first image data, the audio data, and the play timestamp of the first video corresponding to each generation instruction need to be acquired, so that the virtual article corresponding to that instruction can be generated through the subsequent steps S102 to S103. The first videos used to generate different virtual articles may also differ.
The first video may specifically include the first image data and the audio data, where the first image data provides the picture of the first video and the audio data provides its sound. Accordingly, the first image data may specifically be an image-frame queue of the first video, and the audio data an audio-packet queue of the first video. The acquisition may proceed, for example, by reading the first video with a preset model to obtain the image-frame queue and the audio-packet queue as the first image data and the audio data, respectively: for instance, reading the first video with FFmpeg (a tool capable of recording, converting, and streaming audio and video) to obtain the first image data and the audio data.
In addition, to prevent the sound and picture of the first video from falling out of sync, the first video carries a play timestamp that serves as a play-order tag for the first image data and the audio data, ensuring that the displayed first image data and the played audio data remain synchronized in content during playback. The play timestamp of the first video is therefore acquired as well, so that a virtual article with a synchronized audio-video effect can be obtained subsequently.
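As an illustration of the data model in step S101, the following minimal Python sketch (standard library only) splits a mock video into an image-frame queue, an audio-packet queue, and the shared play timestamps. The names `demux`, `Frame`, and `Packet`, and the dict layout of the mock video, are assumptions made for this sketch, not part of FFmpeg's API or the patented method.

```python
from collections import namedtuple

# Frames and packets both carry a presentation timestamp (pts) so that
# picture and sound can later be rendered in step.
Frame = namedtuple("Frame", ["pts", "pixels"])
Packet = namedtuple("Packet", ["pts", "samples"])

def demux(video):
    """Split a mock video into first image data, audio data, and timestamps."""
    frames = [Frame(pts, px) for pts, px in video["frames"]]
    packets = [Packet(pts, s) for pts, s in video["audio"]]
    timestamps = sorted({f.pts for f in frames} | {p.pts for p in packets})
    return frames, packets, timestamps

video = {
    "frames": [(0, "img0"), (10, "img1")],
    "audio": [(0, "aud0"), (10, "aud1")],
}
frames, packets, ts = demux(video)
print(ts)  # [0, 10]
```

In a real pipeline the queues and timestamps would come from a demuxer such as FFmpeg rather than from a dict.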
S102, adjusting the first image data according to preset appearance information of the virtual article, and transcoding the adjusted first image data to obtain the virtual article.
The preset appearance information of the virtual article may take various forms; illustratively, it may include at least one of a preset size, color, brightness, and shape of the virtual article. Correspondingly, adjusting the first image data according to the preset appearance information may specifically mean adjusting the appearance of the first image data to match the appearance corresponding to that information.
In order to ensure in the subsequent step S103 that the picture effect and the sound effect of the virtual article are synchronized, the adjusted first image data may be transcoded to obtain a virtual article with the picture effect of a video. In specific applications, the adjusted first image data may be transcoded in various ways: for example, using inter-frame compression or intra-frame compression. Any method capable of transcoding the adjusted first image data may be used; this embodiment does not limit it.
Two consecutive image frames of a video are usually highly correlated and contain redundant data, so inter-frame compression can compare data between different image frames along the time axis, which raises the compression ratio and reduces the consumption of data-processing resources. Intra-frame compression, by contrast, considers only the data of the frame being compressed and ignores the redundancy between that frame and its neighbors; like the compression of a static image, its compression ratio is relatively low and its resource consumption relatively high. Accordingly, in scenarios where the consumption of data-processing resources must be kept low, such as gift generation during live streaming, inter-frame compression can be used to obtain the virtual article.
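The trade-off just described can be illustrated with a toy delta encoding. This is a deliberately crude sketch, not a real codec: "size" simply counts stored pixel values, frames are flat lists, and both function names are hypothetical.

```python
def intraframe_size(frames):
    # Intra-frame: every frame is compressed independently,
    # so all pixels of every frame are stored.
    return sum(len(f) for f in frames)

def interframe_size(frames):
    # Inter-frame: store the first frame fully, then only the pixels
    # that changed relative to the previous frame (a crude delta).
    if not frames:
        return 0
    size = len(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        size += sum(1 for a, b in zip(prev, cur) if a != b)
    return size

frames = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],  # one pixel changed
    [0, 0, 1, 1],  # one more pixel changed
]
print(intraframe_size(frames))  # 12
print(interframe_size(frames))  # 6
```

The more correlated consecutive frames are, the larger the gap between the two counts, which is why inter-frame compression suits resource-constrained live-streaming scenarios.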
And S103, displaying the virtual article and playing the audio data according to the playing time stamp.
Because the playing time stamp can ensure the content synchronization of the first image data and the audio data, the virtual article can be displayed according to the playing time stamp, and the audio data can be played, so that the picture effect and the sound effect synchronization of the virtual article can be ensured.
Illustratively, displaying the virtual article and playing the audio data according to the play timestamp may specifically include: sending the data representing the virtual article, that is, the data obtained by transcoding the adjusted first image data, to an image display device, so that the device selects and displays the data corresponding to the play timestamp; and synchronously sending the audio data to an audio output device, so that it plays the data corresponding to the play timestamp. For example, among the data representing the virtual article, the data corresponding to play timestamp T1 is the video frame VF0 covering seconds 0 to 10, and among the audio data it is the audio packet AP0 matching VF0; at the moment the image display device starts to show VF0, the audio playback device starts playing AP0.
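The timestamp-driven selection described above can be sketched as follows. This is a minimal model: `frame_for` is a hypothetical helper, and a real player would use decoder clocks and buffering rather than scanning a list.

```python
def frame_for(pts, items):
    """Pick the payload of the latest item whose timestamp is <= pts."""
    candidates = [it for it in items if it[0] <= pts]
    return max(candidates)[1] if candidates else None

frames = [(0, "VF0"), (10, "VF1")]   # (play timestamp, video frame)
audio  = [(0, "AP0"), (10, "AP1")]   # (play timestamp, audio packet)

# Because both streams are keyed by the same play timestamps, the display
# device and the audio device switch payloads together.
for t in (0, 5, 10):
    print(t, frame_for(t, frames), frame_for(t, audio))
```

At every probed time the selected frame and packet share the same timestamp key, which is the synchronization property the play timestamp provides.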
In the scheme provided by this embodiment of the invention, when the server detects a generation instruction for a virtual article, it acquires the first image data, the audio data, and the play timestamp of the first video corresponding to the instruction; it then adjusts the first image data according to the preset appearance information of the virtual article and transcodes the adjusted first image data to obtain the virtual article, which it displays, while playing the audio data, according to the play timestamp. Because the virtual article is generated from the first image data of the first video, with the audio data of the first video as its sound effect, the picture effect of the virtual article matches the content of the first image data and the sound effect matches the content of the audio data. The play timestamp of the first video further guarantees that the first image data and the audio data remain synchronized in content, so displaying the virtual article and playing the audio data according to the play timestamp keeps the picture effect and the sound effect in step. The scheme therefore guarantees audio-video synchronization in the diversified display effect of the generated virtual article.
As shown in fig. 2, a flow of a virtual article generation method according to another embodiment of the present invention may include:
s201, when a generation instruction of the virtual article is detected, first image data, audio data and a playing time stamp of a first video corresponding to the generation instruction are obtained.
S201 is the same as S101 in the embodiment of fig. 1 and is not repeated here; for details, see the description of the embodiment of fig. 1.
S202, acquiring second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video.
In specific applications, a transparent part can be provided in the picture of the virtual article, so that the display effect of the virtual article is more realistic and three-dimensional. For example, for a virtual gift automobile, the color of the picture except for the automobile itself can be set to the transparent color, so that the displayed picture content of the virtual article is the automobile itself, without large areas of black, white, or other content irrelevant to the automobile. To this end, data relating to the transparent color can be used to mask the areas of the virtual article that are required to have a transparent effect. The data relating to the transparent color may be the second image data of the second video whose picture color is the transparent color (the second image data providing the picture of the second video), or may be an image whose picture color is the transparent color.
The second image data, like the first image data, provides the picture of a video; the differences are that the picture color of the second image data is the transparent color and that it provides the picture of the second video. Therefore, when the second image data of the second video is used as the data relating to the transparent color, processing logic similar to that for acquiring the first image data can be reused; only the processing objects need to be replaced with the second video and the second image data, and no separate processing logic needs to be added. For example, the second video may be read using the preset model that acquires the first image data, to obtain the second image data.
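The reuse of one reading path for both videos can be sketched as follows; `decode` stands in for a real codec call and all names are illustrative:

```python
class FrameReader:
    """One reading path serves both videos: the first video (picture
    content) and the second video (transparent-color mask) go through
    the same decode logic; only the source differs."""
    def __init__(self, decode):
        self.decode = decode

    def read(self, packets):
        return [self.decode(p) for p in packets]

# A single reader instance handles both videos:
reader = FrameReader(decode=str.upper)          # toy stand-in "decoder"
first_image_data = reader.read(['f0', 'f1'])    # frames of the first video
second_image_data = reader.read(['m0', 'm1'])   # frames of the second video
```

The design choice here mirrors the text: no separate processing logic for the second video, only a different processing object.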
S203, adjusting the first image data and the second image data according to the preset shape information of the virtual article to obtain the adjusted first image data and the adjusted second image data.
Step S203 is similar to the process of adjusting the first image data in S102 of the embodiment of fig. 1, except that the objects of adjustment in S203 are the first image data and the second image data. Moreover, the two image data can be adjusted in parallel, which improves efficiency, or they can be adjusted in sequence. The same contents are not repeated here; for details, see the description of the embodiment of fig. 1.
And S204, acquiring a transparent position belonging to the transparent area in the display area for displaying the virtual article corresponding to the generation instruction.
In specific applications, in order to display the virtual article completely in the display area corresponding to the generation instruction, the picture size of the virtual article generally matches the display area. Therefore, for a given virtual article, the position of the transparent part is fixed and corresponds to the transparent position belonging to the transparent area in the display area. Accordingly, the transparent position belonging to the transparent area in the display area for displaying the virtual article corresponding to the generation instruction can be acquired, for masking in the subsequent step S205 to obtain masked image data having a transparent-color area. Illustratively, the transparent position may specifically be the two-dimensional coordinates corresponding to the transparent area in a two-dimensional coordinate system of the display area.
The transparent position belonging to the transparent area in the display area may be obtained in a plurality of manners. For example, the shape and size of the display area may be fixed, and the display position of the virtual article in the display area may also be preset; in this case, the display position of the virtual article shows the picture content of the virtual article itself, and the positions not occupied by the virtual article form the transparent part. Therefore, the positions in the display area other than the display position of the virtual article can be stored in advance as the transparent position, according to the display position of the virtual article and the shape and size of the display area, and the pre-stored transparent position can then be read directly. Alternatively, for example, the shape and size of the display area and the display position of the virtual article may be read in real time, and the positions other than the display position of the virtual article determined from the display area as the transparent position.
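Under the assumption that the virtual article occupies a preset rectangle in the display area, the transparent position can be derived as in the following sketch (the rectangle representation and all names are illustrative):

```python
import numpy as np

def transparent_positions(area_w, area_h, item_rect):
    """Boolean map of the display area: True marks a transparent
    position, i.e. a pixel outside the virtual article's preset
    display rectangle. item_rect = (x, y, w, h)."""
    x, y, w, h = item_rect
    mask = np.ones((area_h, area_w), dtype=bool)
    mask[y:y + h, x:x + w] = False   # positions displaying the article itself
    return mask
```

The coordinates of the True entries play the role of the two-dimensional coordinates of the transparent area mentioned above.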
Any manner capable of acquiring the transparent position may be used in the present invention, and this embodiment does not limit this.
And S205, masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data.
The adjusted second image data corresponds to the adjusted first image data pixel by pixel, having the same size and shape, and its picture color is the transparent color. Therefore, the transparent position in the adjusted first image data can be masked to the transparent color by using the adjusted second image data, obtaining the masked image data. In specific applications, there are various ways of doing so; these are explained below in the form of alternative embodiments.
In an optional implementation manner, the step S205: the method for masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data specifically includes the following steps:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is a transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position except the transparent position in the display area;
and displaying the mask video and the article video together to obtain masked image data.
The masked image data comprises the mask video and the article video. The mask video is obtained by transcoding the adjusted second image data, and its display effect is the transparent color; the article video is obtained by transcoding the adjusted first image data, and its display effect is the virtual article itself. Therefore, the mask video can be displayed at the transparent position of the display area, and the article video at the positions other than the transparent position; displayed together, the article video presents the picture content of the virtual article outside the transparent position while the mask video presents the transparent color at the transparent position, yielding the masked image data.
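The common display of the two videos can be sketched per frame as below, assuming a boolean map of transparent positions as in S204 (all names are illustrative):

```python
import numpy as np

def compose_frame(transparent_map, mask_frame, item_frame):
    """Per-pixel co-display: take the mask video's pixel at transparent
    positions and the article video's pixel elsewhere.
    transparent_map: (H, W) bool; frames: (H, W, C) arrays."""
    return np.where(transparent_map[..., None], mask_frame, item_frame)
```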
In a specific application, the mask video and the item video can be copied to a cache of the display module, so that the display module performs common display of the two videos in a display area for displaying the virtual item. For example, in an Android operating system, the mask video and the item video may be copied to a cache of the display module, so that the display module performs common display of the two videos in a native window.
In this optional embodiment, the masked image data may be obtained by transcoding the adjusted second image and the adjusted first image and displaying them together, without complicated image channel filling and rendering processes, so that the method has the advantage of relatively simple implementation process, and can improve the generation efficiency of the virtual object.
In another optional implementation, in step S205: the method for masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data specifically includes the following steps:
taking the adjusted transparent position of the first image data as a transparent channel;
filling pixels at the transparent position in the adjusted second image data into a transparent channel to obtain transparent channel data;
taking pixels except pixels at the transparent position in the adjusted first image data as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the first image data to obtain masked image data.
The position of each pixel in the adjusted first image data corresponds to the display position, in the display area, of the content that the pixel represents, so the transparent position of the display area can also be regarded as the transparent position of the adjusted first image data. Therefore, the transparent position of the adjusted first image data can be used as a transparent channel, so that pixels of the transparent color are filled into the channel in the subsequent steps to achieve the transparent effect. Since the adjusted second image data corresponds to the pixels of the adjusted first image data one by one, the pixels at the transparent position in the adjusted second image data can be filled into the transparent channel to obtain the transparent channel data, yielding adjusted first image data that comprises both a transparent channel and a non-transparent channel. On this basis, in order to obtain masked image data that shows the transparent color at the transparent position and the picture content of the virtual article elsewhere, the transparent channel data and the non-transparent channel data can be rendered according to the distribution positions of the pixels in the adjusted first image data.
Exemplarily, rendering transparent channel data and non-transparent channel data in the adjusted first image data to obtain masked image data may specifically include: and generating Texture data from the transparent channel data and the non-transparent channel data in the adjusted first image data, and rendering the Texture data by using OpenGL (Open Graphics Library) to obtain the masked image data. In addition, the execution subject for rendering may be specifically a GPU (Graphics Processing Unit), so as to improve the acquisition efficiency of the masked image.
In this optional embodiment, the obtaining of the masked image data is to fill the pixels of the adjusted second image into the transparent channel of the adjusted first image, so as to render the transparent channel data and the non-transparent channel data in the adjusted first image, and obtain the masked image data. Since the transparent channel data and the non-transparent channel data with different display effects can be specifically rendered, compared with the acquisition mode of the image data after the mask without rendering, the method is equivalent to performing secondary rendering, and the display quality of the virtual article obtained based on the image data after the mask can be improved.
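The channel-filling steps of this embodiment can be sketched with arrays standing in for the image data (shapes and names are illustrative; a real implementation would hand the result to OpenGL as texture data for rendering):

```python
import numpy as np

def mask_by_channel_filling(first_rgb, second_rgba, transparent_map):
    """Build RGBA masked image data: the alpha (transparent channel) at
    transparent positions is filled from the second image data, while the
    color everywhere comes from the first image data (the non-transparent
    channel data).
    first_rgb: (H, W, 3); second_rgba: (H, W, 4); transparent_map: (H, W) bool."""
    h, w, _ = first_rgb.shape
    out = np.empty((h, w, 4), dtype=first_rgb.dtype)
    out[..., :3] = first_rgb            # non-transparent channel data
    out[..., 3] = 255                   # opaque by default
    # Fill pixels at the transparent position into the transparent channel.
    out[transparent_map, 3] = second_rgba[transparent_map, 3]
    return out
```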
And S206, acquiring the virtual article based on the masked image data.
In specific applications, the manner of obtaining the virtual article based on the masked image data may be various. For example, when the masked image data is obtained by filling the transparent channel of the adjusted first image data, the masked image data may be transcoded to obtain the virtual article. In this case, step S206 is similar to the process of transcoding the adjusted first image data to obtain the virtual article in S102 of the embodiment of fig. 1, except that the object transcoded in S206 is the masked image data; the same contents are not repeated here, see the description of the embodiment of fig. 1. Alternatively, for example, when the masked image data is obtained by displaying the mask video and the article video together, as in the corresponding optional embodiment of step S205, the masked image data may be used directly as the virtual article.
Any method capable of obtaining a virtual object based on the masked image data can be used in the present invention, and the present embodiment does not limit this.
And S207, displaying the virtual article and playing the audio data according to the playing time stamp.
S207 is the same as S103 in the embodiment of fig. 1 and is not repeated here; for details, see the description of the embodiment of fig. 1.
In the embodiment of fig. 2, the second image data of the second video is acquired without adding independent processing logic for the second image, the transparent position of the adjusted first image data is masked to obtain the masked image data, and the virtual article is then obtained based on the masked image data, thereby realizing the transparent effect at the transparent position of the virtual article. When the virtual article is displayed, its stereoscopic impression and sense of reality are thus improved.
Optionally, in step S206: before obtaining the virtual article based on the masked image data, the method for generating the virtual article according to the embodiment of the present invention may further include the following steps:
acquiring third image data and the positional relationship between the third image data and the masked image data; the third image data serves as a specific element of the virtual article;
adding third image data to the masked image data according to the position relation to obtain special effect image data;
accordingly, the step S206: obtaining a virtual article based on the masked image data may specifically include: and transcoding the special effect image data to obtain the virtual article.
The third image data may be a vector image. The specific element of the virtual article may be varied; illustratively, it may be a user avatar, text input by the user, a special effect, and the like. The positional relationship between the third image data and the masked image data may be acquired in various ways. For example, the positional relationship may be fixed, such as the upper left corner or the upper right corner of the masked image data, and stored in advance so that it can be read directly when the special-effect image data is acquired. Alternatively, for example, an addition position corresponding to the type of the acquired third image data may be looked up in a preset correspondence between third-image-data types and addition positions, and used as the positional relationship between the third image data and the masked image data. For example, when the type of the third image data is user information, such as a user avatar or text input by the user, the corresponding addition position may be the transparent position, or a position where the virtual gift is not blocked, such as a boundary position of the virtual gift; when the type of the third image data is a special effect, such as a snowing effect, the corresponding addition position may be the transparent position or a designated position that does not block the virtual gift.
Any manner of obtaining the position relationship between the third image data and the masked image data can be used in the present invention, which is not limited in this embodiment.
On this basis, since the positional relationship indicates the addition position of the third image data in the masked image data, the special-effect image data can be obtained by adding the third image data to the masked image data according to the positional relationship; specifically, the third image data is added at the addition position indicated by the positional relationship. For example, if the third image data is a user avatar and the positional relationship is the upper left corner of the masked image data, the user avatar can be added at the upper left corner of the masked image data, yielding special-effect image data that has a transparent effect and contains the user avatar.
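This style of addition, with the positional relationship given as a fixed corner, can be sketched as follows (the element size and position are illustrative):

```python
import numpy as np

def add_specific_element(masked, third, top_left):
    """Paste the third image data (e.g. a user avatar) into the masked
    image data at the addition position indicated by the positional
    relationship. top_left = (row, col)."""
    r, c = top_left
    h, w = third.shape[:2]
    out = masked.copy()
    out[r:r + h, c:c + w] = third    # overwrite pixels at the addition position
    return out
```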
Correspondingly, transcoding the special-effect image data is required to obtain the virtual article. This step is similar to the process of transcoding the adjusted first image data to obtain the virtual article in S102 in the embodiment of fig. 1 of the present invention, and the difference is that the object transcoded in this step is special effect image data. The same contents are not repeated herein, and the detailed description of the embodiment of fig. 1 of the present invention is given.
In this optional embodiment, by adding a specific element to the virtual article, the richness of the content expressed by the virtual article can be increased, and the diversity of the display effect can be improved. Moreover, the specific element is in the form of the third image data, so that the specific element in the form of the third image data is added to the masked image data which is also the image data, which is relatively convenient.
Optionally, the position relationship is a corresponding relationship between elements of the second pixel matrix and elements of the first pixel matrix; the second pixel matrix is a pixel matrix corresponding to the third image data; the first pixel matrix is a pixel matrix corresponding to the masked image data;
correspondingly, the step of adding the third image data to the masked image data according to the position relationship to obtain the special-effect image data may specifically include:
respectively converting the masked image data and the third image data into a first matrix and a second matrix;
adding elements in the second matrix in the first matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
The second pixel matrix corresponding to the third image data is a matrix obtained by taking each pixel of the third image data as an element of the matrix and taking the position of each pixel of the third image data in the third image data as the position of the element of the matrix in the matrix. Similarly, the first pixel matrix is obtained by taking each pixel of the masked image data as an element of the matrix, and taking a position of each pixel of the masked image data in the masked image data as a position of the element in the matrix. Therefore, the correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix can be taken as the positional relationship between the third image data and the masked image data.
On this basis, in order to add the third image data to the masked image data according to the positional relationship and obtain the special-effect image data, the masked image data is converted into the first matrix and the third image data into the second matrix; the elements of the second matrix are then added to the first matrix according to the positional relationship to obtain the third matrix, and the third matrix is converted into image data to obtain the special-effect image data. Illustratively, if the positional relationship is that the elements S11 to S36 of the second matrix correspond one by one to the elements F11 to F36 in the upper left corner of the first matrix, the elements S11 to S36 can be added at the positions of F11 to F36 in the first matrix to obtain the third matrix.
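With the element-to-element correspondence expressed as index arrays, the matrix addition can be sketched as follows (all names are illustrative):

```python
import numpy as np

def add_by_correspondence(first_matrix, second_matrix, rows, cols):
    """Place each element of the second matrix at its corresponding
    position in the first matrix. rows/cols have the same shape as
    second_matrix and encode the correspondence relationship."""
    third_matrix = first_matrix.copy()
    third_matrix[rows, cols] = second_matrix
    return third_matrix
```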
In the present alternative embodiment, the third image data as the specific element is added in pixel positions, which relatively reduces the complexity and the amount of data to be processed at the time of addition and can reduce the consumption of data processing resources, compared with the conventional addition in a drawing manner.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a virtual article generating apparatus.
As shown in fig. 3, an apparatus for generating a virtual article according to an embodiment of the present invention is applied to a server, and the apparatus may include:
the data acquisition module 301 is configured to, when a generation instruction of a virtual article is detected, acquire first image data, audio data, and a play timestamp of a first video corresponding to the generation instruction; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data;
a virtual article generation module 302, configured to adjust the first image data according to preset shape information of a virtual article, and transcode the adjusted first image data to obtain the virtual article;
a virtual article display module 303, configured to display the virtual article and play the audio data according to the play timestamp.
In the scheme provided by the embodiment of the invention, when the server detects a generation instruction of a virtual article, the server acquires the first image data, the audio data and the play time stamp of the first video corresponding to the generation instruction, adjusts the first image data according to the preset appearance information of the virtual article, and transcodes the adjusted first image data to obtain the virtual article; the virtual article is then displayed, and the audio data played, according to the play time stamp. The server can thus generate a virtual article that conforms to the preset appearance information by using the first image data of the first video, with the audio data of the first video serving as the sound effect of the virtual article. Therefore, the picture effect of the virtual article is the same as the content of the first image data, and the sound effect is the same as the content of the audio data. On this basis, the play time stamp of the first video ensures that the contents of the first image data and the audio data are synchronized, so displaying the virtual article and playing the audio data according to the play time stamp guarantees that the picture effect and the sound effect of the virtual article stay in step. By this scheme, sound-picture synchronization can therefore be guaranteed among the diversified display effects of the generated virtual article.
As shown in fig. 4, an apparatus for generating a virtual article according to another embodiment of the present invention is applied to a server, and the apparatus may include:
the data acquisition module 401 is configured to, when a generation instruction of a virtual article is detected, acquire first image data, audio data, and a play timestamp of a first video corresponding to the generation instruction; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data; acquiring second image data of a second video with transparent picture color; the second image data provides a picture of the second video;
a virtual item generation module 402, comprising: a mask sub-module 4021 and a virtual item acquisition sub-module 4022;
the mask sub-module 4021 is configured to adjust the first image data and the second image data according to shape information of a preset virtual article to obtain adjusted first image data and adjusted second image data; acquiring a transparent position belonging to a transparent area in a display area for displaying the virtual article corresponding to the generation instruction; masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data;
the virtual article obtaining sub-module 4022 is configured to obtain a virtual article based on the masked image data;
a virtual article display module 403, configured to display the virtual article and play the audio data according to the play time stamp.
Optionally, the mask sub-module 4021 is specifically configured to:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is the transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position in the display area except the transparent position;
and displaying the mask video and the article video together to obtain masked image data.
Optionally, the mask sub-module 4021 is specifically configured to:
taking the transparent position of the adjusted first image data as a transparent channel;
filling the pixels at the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
taking pixels except the pixels at the transparent position in the adjusted first image data as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain masked image data.
Optionally, the data obtaining module 401 is specifically configured to:
acquiring third image data and the positional relationship between the third image data and the masked image data; the third image data serves as a specific element of the virtual article;
adding the third image data to the masked image data according to the position relation to obtain special effect image data;
correspondingly, the virtual article obtaining sub-module 4022 is specifically configured to:
and transcoding the special effect image data to obtain a virtual article.
Optionally, the position relationship is a corresponding relationship between elements of the second pixel matrix and elements of the first pixel matrix; the second pixel matrix is a pixel matrix corresponding to the third image data; the first pixel matrix is a pixel matrix corresponding to the masked image data;
the data obtaining module 401 is specifically configured to:
respectively converting the masked image data and the third image data into a first matrix and a second matrix;
adding elements in the second matrix in the first matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
Corresponding to the above embodiment, an embodiment of the present invention further provides an electronic device, as shown in fig. 5, where the electronic device may include:
a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with one another through the communication bus 504;
a memory 503 for storing a computer program;
the processor 501 is configured to implement the steps of the method for generating a virtual item according to any one of the above embodiments when executing the computer program stored in the memory 503.
It is understood that the electronic device in the embodiment of fig. 5 of the present invention may specifically be a server corresponding to a client related to the internet.
In the scheme provided by the embodiment of the invention, when the server detects a generation instruction of a virtual article, the server acquires the first image data, the audio data and the play time stamp of the first video corresponding to the generation instruction, adjusts the first image data according to the preset appearance information of the virtual article, and transcodes the adjusted first image data to obtain the virtual article; the virtual article is then displayed, and the audio data played, according to the play time stamp. The server can thus generate a virtual article that conforms to the preset appearance information by using the first image data of the first video, with the audio data of the first video serving as the sound effect of the virtual article. Therefore, the picture effect of the virtual article is the same as the content of the first image data, and the sound effect is the same as the content of the audio data. On this basis, the play time stamp of the first video ensures that the contents of the first image data and the audio data are synchronized, so displaying the virtual article and playing the audio data according to the play time stamp guarantees that the picture effect and the sound effect of the virtual article stay in step. By this scheme, sound-picture synchronization can therefore be guaranteed among the diversified display effects of the generated virtual article.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The computer-readable storage medium provided by an embodiment of the present invention is included in an electronic device, and a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for generating a virtual article in any of the above embodiments are implemented.
In the scheme provided by the embodiment of the invention, when the server detects a generation instruction of a virtual article, the server acquires the first image data, the audio data and the play time stamp of the first video corresponding to the generation instruction, adjusts the first image data according to the preset appearance information of the virtual article, and transcodes the adjusted first image data to obtain the virtual article; the virtual article is then displayed, and the audio data played, according to the play time stamp. The server can thus generate a virtual article that conforms to the preset appearance information by using the first image data of the first video, with the audio data of the first video serving as the sound effect of the virtual article. Therefore, the picture effect of the virtual article is the same as the content of the first image data, and the sound effect is the same as the content of the audio data. On this basis, the play time stamp of the first video ensures that the contents of the first image data and the audio data are synchronized, so displaying the virtual article and playing the audio data according to the play time stamp guarantees that the picture effect and the sound effect of the virtual article stay in step. By this scheme, sound-picture synchronization can therefore be guaranteed among the diversified display effects of the generated virtual article.
In yet another embodiment, the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the virtual article generation method of any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions which, when loaded and executed on a computer, produce, in whole or in part, the processes or functions described in the embodiments of the invention. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another over a wired (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, or microwave) connection. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD (Digital Versatile Disc)), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
In this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in an interrelated manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device and electronic equipment embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.
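The masking step recited in claim 1 below (using the second image data, whose picture is transparent, to mask the transparent positions of the adjusted first image data) can be sketched as follows. Representing a pixel as an RGBA tuple, and the function and variable names, are assumptions made here for illustration only:

```python
TRANSPARENT = (0, 0, 0, 0)  # RGBA value of a fully transparent pixel


def mask_image(first_image, transparent_positions):
    """Return a copy of the adjusted first image data in which every
    position inside the transparent area of the display region is set
    to a transparent color, leaving the rest of the article picture
    (the positions outside the transparent area) untouched.

    first_image: 2-D list of RGBA tuples.
    transparent_positions: iterable of (row, col) positions belonging
    to the transparent area of the display region.
    """
    masked = [row[:] for row in first_image]  # copy; input stays intact
    for r, c in transparent_positions:
        masked[r][c] = TRANSPARENT
    return masked
```

In the claimed scheme the transparent region is realized by transcoding the adjusted second image data into a separate mask video displayed at the transparent positions; the sketch above only shows the net visual effect of that composition on a single frame.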

Claims (8)

1. A virtual article generation method, applied to a server, the method comprising the following steps:
when a generation instruction of a virtual article is detected, acquiring first image data, audio data and a playing time stamp of a first video corresponding to the generation instruction; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data;
adjusting the first image data according to preset appearance information of the virtual article, and transcoding the adjusted first image data to obtain the virtual article;
displaying the virtual article and playing the audio data according to the playing time stamp;
before the step of adjusting the first image data according to the preset shape information of the virtual article, and transcoding the adjusted first image data to obtain the virtual article, the method further includes:
acquiring second image data of a second video whose picture color is transparent; the second image data provides a picture of the second video;
the step of adjusting the first image data according to the preset appearance information of the virtual article, and transcoding the adjusted first image data to obtain the virtual article includes:
adjusting the first image data and the second image data according to preset appearance information of the virtual article to obtain adjusted first image data and adjusted second image data;
acquiring a transparent position belonging to a transparent area in a display area for displaying the virtual article corresponding to the generation instruction;
masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data;
obtaining a virtual article based on the masked image data;
the step of masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data includes:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is the transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position in the display area except the transparent position;
and setting the mask video and the article video to be displayed together to obtain masked image data.
2. The method of claim 1, wherein prior to the step of obtaining a virtual item based on the masked image data, the method further comprises:
acquiring third image data and a position relation between the third image data and the masked image data; the third image data serves as a special effect element of the virtual article;
adding the third image data to the masked image data according to the position relation to obtain special effect image data;
the step of obtaining a virtual article based on the masked image data comprises:
and transcoding the special effect image data to obtain a virtual article.
3. The method according to claim 2, wherein the positional relationship is a correspondence between elements of the second pixel matrix and elements of the first pixel matrix; the second pixel matrix is a pixel matrix corresponding to the third image data; the first pixel matrix is a pixel matrix corresponding to the masked image data;
the step of adding the third image data to the masked image data according to the position relationship to obtain special effect image data includes:
respectively converting the masked image data and the third image data into a first pixel matrix and a second pixel matrix;
adding the elements of the second pixel matrix into the first pixel matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
4. An apparatus for generating a virtual article, applied to a server, the apparatus comprising:
the data acquisition module is used for acquiring first image data, audio data and a playing time stamp of a first video corresponding to a generation instruction when the generation instruction of the virtual article is detected; wherein the first image data provides a picture of the first video; the audio data provides sound of the first video; the playing time stamp is an identifier for ensuring the content synchronization of the first image data and the audio data;
the virtual article generation module is used for adjusting the first image data according to preset appearance information of a virtual article, and transcoding the adjusted first image data to obtain the virtual article;
the virtual article display module is used for displaying the virtual article and playing the audio data according to the playing time stamp;
the data acquisition module is specifically configured to:
before the first image data is adjusted according to the preset appearance information of the virtual article and the adjusted first image data is transcoded to obtain the virtual article, acquire second image data of a second video whose picture color is transparent; the second image data provides a picture of the second video;
the virtual article generation module includes: a mask submodule and a virtual article obtaining submodule;
the mask submodule is used for adjusting the first image data and the second image data according to preset appearance information of the virtual article to obtain adjusted first image data and adjusted second image data; acquiring a transparent position belonging to a transparent area in a display area for displaying the virtual article corresponding to the generation instruction; masking the transparent position in the adjusted first image data into a transparent color by using the adjusted second image data to obtain masked image data;
the virtual article obtaining sub-module is used for obtaining a virtual article based on the masked image data;
the mask sub-module is specifically configured to:
transcoding the adjusted second image data to obtain a mask video; the display position of the mask video is the transparent position;
transcoding the adjusted first image data to obtain an article video; the display position of the article video is a position in the display area except the transparent position;
and setting the mask video and the article video to be displayed together to obtain the masked image data.
5. The apparatus of claim 4, wherein the data acquisition module is specifically configured to:
acquiring third image data and a position relation between the third image data and the masked image data; the third image data serves as a special effect element of the virtual article;
adding the third image data to the masked image data according to the position relation to obtain special effect image data;
the virtual article obtaining sub-module is specifically configured to:
and transcoding the special effect image data to obtain a virtual article.
6. The apparatus according to claim 5, wherein the positional relationship is a correspondence between elements of the second pixel matrix and elements of the first pixel matrix; the second pixel matrix is a pixel matrix corresponding to the third image data; the first pixel matrix is a pixel matrix corresponding to the masked image data;
the data acquisition module is specifically configured to:
respectively converting the masked image data and the third image data into a first pixel matrix and a second pixel matrix;
adding the elements of the second pixel matrix into the first pixel matrix according to the position relation to obtain a third matrix;
and converting the third matrix into image data to obtain special effect image data.
7. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory so as to perform the method steps of any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method steps of any one of claims 1 to 3.
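The pixel-matrix composition recited in claim 3 (converting the masked image data and the third image data into a first and a second pixel matrix, then adding the elements of the second matrix into the first according to the position relation to obtain a third matrix) can be illustrated with a small sketch. Modeling the position relation as a simple (row, column) offset, and all names used, are assumptions for illustration, not details taken from the patent:

```python
def add_special_effect(base, effect, offset):
    """Add the elements of the effect matrix (from the third image data)
    into the base matrix (from the masked image data) at the positions
    given by the offset, producing the 'third matrix' that is then
    converted back into special effect image data.

    base, effect: 2-D lists of pixel values; offset: (row, col) where
    the top-left element of effect lands inside base.
    """
    row0, col0 = offset
    out = [row[:] for row in base]  # copy so the base matrix is unmodified
    for i, effect_row in enumerate(effect):
        for j, value in enumerate(effect_row):
            r, c = row0 + i, col0 + j
            if 0 <= r < len(out) and 0 <= c < len(out[0]):
                out[r][c] += value  # element-wise addition per the position relation
    return out
```

A real implementation would define the position relation as an arbitrary element-to-element correspondence rather than a rigid offset, and would clamp or blend pixel values instead of adding them raw; the sketch keeps only the matrix-addition structure of the claim.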
CN201910578523.7A 2019-06-28 2019-06-28 Virtual article generation method, device and equipment Active CN110213640B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910578523.7A CN110213640B (en) 2019-06-28 2019-06-28 Virtual article generation method, device and equipment
PCT/CN2020/077034 WO2020258907A1 (en) 2019-06-28 2020-02-27 Virtual article generation method, apparatus and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578523.7A CN110213640B (en) 2019-06-28 2019-06-28 Virtual article generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN110213640A CN110213640A (en) 2019-09-06
CN110213640B true CN110213640B (en) 2021-05-14

Family

ID=67795510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578523.7A Active CN110213640B (en) 2019-06-28 2019-06-28 Virtual article generation method, device and equipment

Country Status (2)

Country Link
CN (1) CN110213640B (en)
WO (1) WO2020258907A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110213640B (en) * 2019-06-28 2021-05-14 香港乐蜜有限公司 Virtual article generation method, device and equipment
CN112348969B (en) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
US11769289B2 (en) * 2021-06-21 2023-09-26 Lemon Inc. Rendering virtual articles of clothing based on audio characteristics

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950405A (en) * 2010-08-10 2011-01-19 浙江大学 Video content-based watermarks adding method
CN104995662A (en) * 2013-03-20 2015-10-21 英特尔公司 Avatar-based transfer protocols, icon generation and doll animation
KR20160064328A (en) * 2014-11-27 2016-06-08 정승화 Apparatus and method for supporting special effects with motion cartoon systems
CN107027046A (en) * 2017-04-13 2017-08-08 广州华多网络科技有限公司 Auxiliary live audio/video processing method and device
CN107169872A (en) * 2017-05-09 2017-09-15 北京龙杯信息技术有限公司 Method, storage device and terminal for generating virtual present
CN108093307A (en) * 2017-12-29 2018-05-29 广州酷狗计算机科技有限公司 Obtain the method and system of played file
CN108174227A (en) * 2017-12-27 2018-06-15 广州酷狗计算机科技有限公司 Display methods, device and the storage medium of virtual objects
CN109191549A (en) * 2018-11-14 2019-01-11 广州酷狗计算机科技有限公司 Show the method and device of animation
CN109300180A (en) * 2018-10-18 2019-02-01 看见故事(苏州)影视文化发展有限公司 A kind of 3D animation method and calculate producing device
CN109413338A (en) * 2018-09-28 2019-03-01 北京戏精科技有限公司 A kind of method and system of scan picture

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101005609B (en) * 2006-01-21 2010-11-03 腾讯科技(深圳)有限公司 Method and system for forming interaction video frequency image
CN102289339B (en) * 2010-06-21 2013-10-30 腾讯科技(深圳)有限公司 Method and device for displaying expression information
CN102663785B (en) * 2012-03-29 2014-12-10 上海华勤通讯技术有限公司 Mobile terminal and image processing method thereof
JP6080249B2 (en) * 2012-09-13 2017-02-15 富士フイルム株式会社 Three-dimensional image display apparatus and method, and program
CN105338410A (en) * 2014-07-07 2016-02-17 乐视网信息技术(北京)股份有限公司 Method and device for displaying barrage of video
CN106303653A (en) * 2016-08-12 2017-01-04 乐视控股(北京)有限公司 A kind of image display method and device
CN106713988A (en) * 2016-12-09 2017-05-24 福建星网视易信息***有限公司 Beautifying method and system for virtual scene live
WO2018116468A1 (en) * 2016-12-22 2018-06-28 マクセル株式会社 Projection video display device and method of video display therefor
CN108769826A (en) * 2018-06-22 2018-11-06 广州酷狗计算机科技有限公司 Live media stream acquisition methods, device, terminal and storage medium
CN110213640B (en) * 2019-06-28 2021-05-14 香港乐蜜有限公司 Virtual article generation method, device and equipment

Also Published As

Publication number Publication date
CN110213640A (en) 2019-09-06
WO2020258907A1 (en) 2020-12-30

Similar Documents

Publication Publication Date Title
CN106611435B (en) Animation processing method and device
CN110475150B (en) Rendering method and device for special effect of virtual gift and live broadcast system
CN111899155B (en) Video processing method, device, computer equipment and storage medium
US12022160B2 (en) Live streaming sharing method, and related device and system
CN110213640B (en) Virtual article generation method, device and equipment
CN111193876B (en) Method and device for adding special effect in video
CN111899322B (en) Video processing method, animation rendering SDK, equipment and computer storage medium
US11450044B2 (en) Creating and displaying multi-layered augemented reality
CN109327727A (en) Live streaming method for stream processing and plug-flow client in a kind of WebRTC
US9224156B2 (en) Personalizing video content for Internet video streaming
CN109831662B (en) Real-time picture projection method and device of AR (augmented reality) glasses screen, controller and medium
CN112804459A (en) Image display method and device based on virtual camera, storage medium and electronic equipment
US11151747B2 (en) Creating video augmented reality using set-top box
CN110012336B (en) Picture configuration method, terminal and device of live interface
TW201924317A (en) Video processing method and device based on augmented reality, and electronic equipment
CN108235055A (en) Transparent video implementation method and equipment in AR scenes
CN113141537A (en) Video frame insertion method, device, storage medium and terminal
CN111464828A (en) Virtual special effect display method, device, terminal and storage medium
CN112804460A (en) Image processing method and device based on virtual camera, storage medium and electronic equipment
WO2024087883A1 (en) Video picture rendering method and apparatus, device, and medium
CN110769241B (en) Video frame processing method and device, user side and storage medium
Kim et al. Design and implementation for interactive augmented broadcasting system
US20220210520A1 (en) Online video data output method, system, and cloud platform
WO2022024780A1 (en) Information processing device, information processing method, video distribution method, and information processing system
CN113301425A (en) Video playing method, video playing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210527

Address after: 25, 5th floor, shuangjingfang office building, 3 frisha street, Singapore

Patentee after: Zhuomi Private Ltd.

Address before: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China

Patentee before: HONG KONG LIVE.ME Corp.,Ltd.

TR01 Transfer of patent right