WO2021098151A1 - Special effect video synthesis method, apparatus, computer device and storage medium - Google Patents
Special effect video synthesis method, apparatus, computer device and storage medium
- Publication number: WO2021098151A1 (PCT application PCT/CN2020/087712)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- special effect
- template
- initiator
- effect video
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
Definitions
- This application relates to the field of computer technology, and in particular to a special effect video synthesis method, device, computer equipment and storage medium.
- In conventional approaches, template videos are provided by a special effects platform: the user selects a desired special effect template through a terminal, uploads a video to the platform based on that template, and the platform integrates the user's video into the template to obtain the user's special effect video.
- A special effect video synthesis method comprising: receiving a special effect video synthesis instruction sent by an initiator terminal, the instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator in the special effect video template with the initiator's video information to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's personal special effect video and sending it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to designated users; obtaining an image information selection instruction sent by a receiver terminal that received the video shooting invitation; sending the unoccupied image information in the special effect video template to the receiver terminal according to the image information selection instruction; obtaining the image information of the special effect video template selected by the receiver terminal from the unoccupied image information, together with the receiver's video information; and fusing, according to the receiver's image information in the special effect video template, the initiator's personal special effect video with the receiver's video information to obtain a multi-person special effect video.
- A special effect video synthesis apparatus comprising: a synthesis instruction receiving module, configured to receive a special effect video synthesis instruction sent by an initiator terminal, the instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; a first fusion module, configured to fuse the image information of the initiator in the special effect video template with the initiator's video information to obtain the initiator's personal special effect video; an invitation sending module, configured to generate a video shooting invitation according to the initiator's personal special effect video and send it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to designated users; an instruction receiving module, configured to obtain an image information selection instruction sent by a receiver terminal that received the video shooting invitation, send the unoccupied image information in the special effect video template to the receiver terminal, and obtain the image information selected by the receiver terminal from the unoccupied image information together with the receiver's video information; and a second fusion module, configured to fuse, according to the receiver's image information in the special effect video template, the initiator's personal special effect video with the receiver's video information to obtain a multi-person special effect video.
- A computer device comprising: one or more processors; a memory; and one or more computer programs stored in the memory and configured to be executed by the one or more processors, wherein the one or more computer programs are configured to perform a special effect video synthesis method comprising the following steps: receiving a special effect video synthesis instruction sent by the initiator terminal, the instruction including the special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator in the special effect video template with the initiator's video information to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's personal special effect video and sending it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to designated users; obtaining the image information selection instruction sent by the receiver terminal that received the video shooting invitation; sending the unoccupied image information in the special effect video template to the receiver terminal according to the image information selection instruction; obtaining the image information selected by the receiver terminal from the unoccupied image information, together with the receiver's video information; and fusing, according to the receiver's image information in the special effect video template, the initiator's personal special effect video with the receiver's video information to obtain a multi-person special effect video.
- The above special effect video synthesis method, apparatus, computer device and storage medium address the problems that special effect video production is constrained by time and space and that operation is inconvenient.
- FIG. 1 is an application scene diagram of a special effect video synthesis method in an embodiment;
- FIG. 2 is a schematic flowchart of a special effect video synthesis method in another embodiment;
- FIG. 3 is a schematic diagram of a special effect video template in a special effect video synthesis method in an embodiment;
- FIG. 4 is a schematic diagram of video frame images of a multi-person special effect video in a special effect video synthesis method in an embodiment;
- FIG. 5 is a structural block diagram of a special effect video synthesis apparatus in another embodiment;
- FIG. 6 is an internal structure diagram of a computer device in an embodiment.
- the special effect video synthesis method provided in this application is suitable for the field of artificial intelligence technology and can be applied to the application environment as shown in FIG. 1.
- The terminal 102 communicates with the server 104 through the network.
- The server 104 receives the special effect video synthesis instruction sent by the initiator terminal 102; the instruction includes the special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information. The server 104 fuses the image information of the initiator in the special effect video template with the initiator's video information to obtain the initiator's personal special effect video, generates a video shooting invitation according to the personal special effect video and sends it to the initiator terminal 102, and the invitation is sent from the initiator terminal 102 to designated users. The server 104 then obtains the image information selection instruction sent by a receiver terminal 102 that received the invitation, sends the unoccupied image information in the special effect video template to that receiver terminal 102 according to the instruction, obtains the image information selected by the receiver terminal 102 from the unoccupied image information together with the receiver's video information, and finally fuses, according to the receiver's image information in the template, the initiator's personal special effect video with the receiver's video information to obtain the multi-person special effect video.
- The terminal 102 includes initiator terminals and receiver terminals; there may be one or more of each.
- the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
- the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
- A special effect video synthesis method is provided. Taking the method applied to the server in FIG. 1 as an example, it includes the following steps:
- Step S220: Receive a special effect video synthesis instruction sent by the initiator terminal.
- the special effect video synthesis instruction includes the special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator video information.
- the initiator terminal refers to the terminal held by the initiator of the special effects video.
- The special effect video template can be preset, for example a fireworks scene, a game scene, an animation scene, or a basketball scene; FIG. 3 shows a special effect video template for a fireworks scene.
- A template may offer one or more pieces of selectable image information. If the special effect video initiator only wants to synthesize a personal special effect video, a template with a single piece of image information suffices; if the initiator wants to synthesize a multi-person special effect video, a template with multiple pieces of image information can be selected.
- the special effect video template identifier is used to identify each special effect video template, and each special effect video template corresponds to an identifier.
- The image information refers to a special effect image; for example, the pig image in the template picture of FIG. 3 is one piece of image information.
- The special effect video synthesis instruction is generated as follows: the special effect video initiator selects the template to be synthesized on the template selection page through the initiator terminal; the initiator terminal determines the corresponding template identifier from the selected template, generates the special effect video synthesis instruction, and sends it to the server. Alternatively, the initiator may select a custom video special effect template on the template selection page, in which case the initiator terminal generates a synthesis instruction containing the custom template and sends it to the server.
- the corresponding special effect video template can be obtained from the template database through the special effect video template identifier.
- the image information of the initiator in the special effect video template can be used to determine the image that the initiator wants to synthesize.
- the template database is used to store each special effect video template. According to the special effect video template identifier, a unique corresponding special effect video template can be found in the template database.
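The identifier-to-template lookup described above can be sketched as a simple keyed store. The field names and template contents below (`template_id`, `image_slots`) are illustrative assumptions, not the patent's actual data model:

```python
# Minimal sketch of a template database keyed by special effect video template
# identifier. Field names and contents are illustrative assumptions.

TEMPLATE_DB = {
    "fireworks_01": {"template_id": "fireworks_01", "image_slots": ["q", "w", "e", "r"]},
    "basketball_02": {"template_id": "basketball_02", "image_slots": ["a", "b"]},
}

def get_template(template_id):
    """Return the uniquely corresponding template, or None if the id is unknown."""
    return TEMPLATE_DB.get(template_id)
```

Because each template corresponds to exactly one identifier, a plain key lookup is sufficient; an unknown identifier simply yields no template.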
- When the received special effect video synthesis instruction carries a custom video special effect template, the server processes the custom template: it determines the selectable image information in the template and the synthesis area corresponding to each piece of image information, so that the format of the custom template matches that of the templates in the template database, forming a processed custom special effect video template. The processed custom template can also be stored in the template database and used as a personal template of the special effect video initiator.
- Before receiving the special effect video synthesis instruction, the server sends the available special effect video templates to the initiator terminal so that the initiator can select a preferred template and image information. Based on the selected template, the initiator terminal determines the template identifier and captures the initiator's video information (that is, the personal video of the special effect video initiator). The initiator's video information must contain at least a preset number of face video frames, and its duration must fall within a preset range. The initiator terminal then generates the special effect video synthesis instruction from the template identifier, the initiator's image information in the template, and the initiator's video information, and sends it to the server.
- The initiator terminal can also check whether the captured video meets these requirements by detecting face video frames and the video duration; if the video does not reach the preset number of face video frames or its duration is out of the preset range, the user is reminded to reshoot.
- The preset number can be 50 to 1000 frames, and the preset duration can be 10 s to 500 s.
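The pre-upload check described above can be sketched as follows; the function name and default thresholds (a 50-frame face minimum, a 10-500 s duration window, both within the ranges stated in the text) are assumptions for illustration:

```python
# Sketch of the initiator-terminal check: the captured video must contain at
# least a preset number of face video frames, and its duration must fall
# within a preset range; otherwise the user is reminded to reshoot.

def video_meets_requirements(face_frame_count, duration_s,
                             preset_frames=50, min_s=10.0, max_s=500.0):
    """Return True if the captured video may be uploaded, False to prompt a reshoot."""
    return face_frame_count >= preset_frames and min_s <= duration_s <= max_s
```

A video with 100 face frames and a 30 s duration passes; too few face frames or an out-of-range duration fails the check.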
- Step S240: The image information of the initiator in the special effect video template and the initiator's video information are fused to obtain the initiator's personal special effect video.
- the expression frame of the video information is recognized by the expression recognition model, and the facial expression frame is obtained.
- The number of facial expression frames obtained needs to reach a preset number determined by the total number of frames of the special effect video template; for example, if the template has 10 frames, 10 facial expression frames are needed. The face area of each facial expression frame is extracted, the composite area in each video frame of the template is determined according to the image information, and the face area of each expression frame is fused into the composite area of the corresponding template frame to obtain the personal special effect video.
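The frame-wise bookkeeping of this fusion can be sketched as below. Frames are represented as plain dicts rather than image data, so this only illustrates the one-face-area-per-template-frame pairing, not actual image compositing:

```python
# Sketch of frame-wise fusion: one extracted face area is merged into the
# composite area of each template frame. The dict-based frame representation
# is an illustrative assumption; a real system composites pixel data.

def fuse_personal_video(template_frames, face_areas):
    """Merge one face area into the composite area of each template frame.

    The text requires the number of expression frames to equal the template's
    total frame count, so the two lists must have the same length.
    """
    if len(face_areas) != len(template_frames):
        raise ValueError("need one expression frame per template frame")
    fused = []
    for frame, face in zip(template_frames, face_areas):
        merged = dict(frame)  # copy so the template itself stays reusable
        merged["composite_area_content"] = face
        fused.append(merged)
    return fused
```

Copying each frame keeps the template reusable for later participants, which matters because the same template is fused once per user.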
- Step S260: A video shooting invitation is generated according to the initiator's personal special effect video and sent to the initiator terminal; the invitation is sent from the initiator terminal to designated users.
- the generated video shooting invitation may be a link, or a QR code, etc.
- Through the video shooting invitation, the currently synthesized special effect video can be viewed and the synthesis can be joined.
- the generated video shooting invitation is sent to the initiator terminal.
- The special effect video initiator can send the video shooting invitation to designated users through the terminal; a designated user is an account designated by the initiator to receive the invitation.
- The initiator can limit who is invited to join. For example, if the initiator sends the video shooting invitation to the accounts of users A and B, then the designated users are those two accounts. The initiator may also leave the invitees open: if the invitation is shared to Moments, any user account that can see the invitation is a designated user.
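Generating the invitation as a link can be sketched as below. The URL scheme and token derivation are assumptions; the patent only states that the invitation may be a link or a QR code through which the current special effect video can be viewed and joined:

```python
# Sketch of producing a video shooting invitation link. The domain, path and
# token scheme are illustrative assumptions, not part of the patent.

import hashlib

def make_invitation_link(initiator_id, special_effect_video_id):
    """Derive a stable, shareable link for one special effect video synthesis."""
    token = hashlib.sha256(
        f"{initiator_id}:{special_effect_video_id}".encode()
    ).hexdigest()[:16]
    return f"https://example.invalid/invite/{special_effect_video_id}?t={token}"
```

The same link could equally be encoded as a QR code; the deterministic token means the invitation stays valid however many designated users receive it.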
- Step S280: Obtain the image information selection instruction sent by the receiver terminal that received the video shooting invitation.
- the recipient is the designated user who receives the video shooting invitation.
- After the receiver receives the video shooting invitation generated by the initiator, the receiver can view the initiator's personal special effect video through the invitation.
- If the receiver accepts the invitation, the receiver operates (e.g. clicks) the invitation on the receiver terminal, enters the page where the initiator's special effect video can be viewed, and sends an image information selection instruction to the server from that page.
- Step S300: Send the unoccupied image information in the special effect video template to the receiver terminal according to the image information selection instruction.
- When the server receives the image information selection instruction from the receiver terminal, it obtains the usage status of the image information of the special effect video template used in the current synthesis and determines the unoccupied image information accordingly. For example, if the template has four pieces of image information q, w, e and r, the initiator may choose any of them; if the initiator selected w, the unoccupied image information in the template is q, e and r, and these three pieces are sent to the receiver terminal.
- The unoccupied image information of the same template may be sent to multiple receiver terminals at the same time. If multiple receiver terminals select the same piece of image information, it is occupied by the receiver whose selection arrives first, and any later receiver is reminded to select again.
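The first-come-first-served slot occupancy described above can be sketched as follows; the function names and the occupancy dict are illustrative assumptions:

```python
# Sketch of image-information occupancy: the first receiver to send a given
# piece of image information occupies it; a later receiver choosing the same
# piece is told to select again.

def unoccupied_slots(all_slots, occupied):
    """List the image information not yet taken by any participant."""
    return [s for s in all_slots if s not in occupied]

def claim_image_info(occupied, image_info, user_id):
    """Try to occupy a slot; return (success, message)."""
    if image_info in occupied:
        return False, "image information already occupied, please select again"
    occupied[image_info] = user_id
    return True, "occupied"
```

On the server, the occupancy check and the write would need to happen atomically (e.g. under a lock or a database constraint) so that two simultaneous selections cannot both succeed; the sketch omits that concern.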
- Step S320: Obtain the image information of the special effect video template selected by the receiver terminal from the unoccupied image information, together with the receiver's video information.
- The receiver can view the unoccupied image information in the special effect video template through the receiver terminal, select the image information to be used for special effect video synthesis, and shoot a personal video; the selected image information and the video information are then sent to the server through the receiver terminal. After receiving them, the server marks the selected image information as occupied.
- Step S340: The initiator's personal special effect video and the receiver's video information are fused according to the receiver's image information in the special effect video template to obtain a multi-person special effect video.
- The receiver's personal special effect video can be obtained from the receiver's image information in the special effect video template and the receiver's video information; the initiator's and the receiver's personal special effect videos are then synthesized according to the template to obtain the multi-person special effect video.
- A video frame of the multi-person special effect video contains the personal special effect image of the initiator and the personal special effect images of the receivers, as in the video frame image shown in FIG. 4. The initiator's and receivers' personal special effect videos are fused according to the image information corresponding to each personal special effect video to obtain the multi-person special effect video.
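The per-slot merge of several personal special effect videos can be sketched as below. As before, frames are dicts standing in for image data, and the slot-keyed input shape is an illustrative assumption:

```python
# Sketch of merging personal special effect videos frame by frame: each
# participant occupies one image slot of the template, and frame i of the
# multi-person video combines frame i of every personal video.

def merge_personal_videos(personal_videos):
    """personal_videos maps image slot -> per-frame face content.

    All personal videos must match the template's frame count, so all the
    frame lists must have equal length. Returns the combined frame list,
    where each frame maps slot -> face content.
    """
    lengths = {len(frames) for frames in personal_videos.values()}
    if len(lengths) != 1:
        raise ValueError("all personal videos must have the same frame count")
    n = lengths.pop()
    return [{slot: frames[i] for slot, frames in personal_videos.items()}
            for i in range(n)]
```

Because the merge is keyed by image slot, adding recipient B later simply means re-running the merge with one more slot in the mapping.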
- In summary, the initiator sends a special effect video synthesis instruction to the server through the terminal, selecting the special effect video template and image information and uploading the video information. The server synthesizes the initiator's personal special effect video from the template, image information and video information, and generates a special effect video invitation for the initiator to send to designated users, so that the designated users can participate in the synthesis through the invitation. Each designated user only needs to upload a captured video and the selected image information through a terminal; the server then merges the video information of the initiator and each receiver into the same special effect video. Multi-person special effect video synthesis is thus achieved without requiring one camera to capture all participants in the same scene at the same time; each participant only needs to upload personal video information to the server from wherever they are.
- The step of fusing the initiator's personal special effect video and the receiver's video information according to the receiver's image information in the special effect video template to obtain the multi-person special effect video includes: fusing the receiver's image information in the template with the receiver's video information to obtain the receiver's personal special effect video; and synthesizing the initiator's and receiver's personal special effect videos according to the template to obtain the multi-person special effect video.
- The receiver's personal special effect video is obtained in the same way as the initiator's: expression frames of the video information are recognized by the expression recognition model, a number of facial expression frames equal to the template's total frame count is obtained, the face areas are extracted, and each face area is fused into the composite area of the corresponding template frame.
- A video frame of the multi-person special effect video contains the personal special effect image of the initiator and that of the recipient (see FIG. 4).
- The step of synthesizing the initiator's and recipient's personal special effect videos according to the special effect video template to obtain the multi-person special effect video includes: merging the initiator's personal special effect video and the recipient's personal special effect video according to the image information corresponding to each, to obtain the multi-person special effect video.
- The image information corresponding to the initiator's personal special effect video identifies which image the initiator was synthesized into, and the image information corresponding to the recipient's personal special effect video identifies which image the recipient was synthesized into. Taking the initiator's personal special effect video as a base, the face area fused into each video frame of the recipient's personal special effect video is obtained and correspondingly fused into the video frames of the initiator's personal special effect video, forming frames that contain both the image from the initiator's personal special effect video and the image from the recipient's (as shown in FIG. 4). Here, an image in a personal special effect video refers to the image after the face area in the video information has been fused into the synthesis area. The initiator's and recipient's personal special effect videos are thereby merged.
- Each time a multi-person special effect video is obtained, the currently synthesized special effect video referenced by the video shooting invitation is updated, so that the synthesis progress of the current special effect video can be seen. For example, after the initiator sends the invitation to the receivers, each user can view the currently synthesized special effect video through the invitation. When the image information and video information of recipient A are received, steps S340 to S360 are executed to obtain the multi-person special effect video of recipient A and the initiator; after the image information and video information of recipient B are received, the steps are executed again to obtain the multi-person special effect video of the initiator and recipients A and B.
- When the multi-person special effect video synthesis is ended, the final multi-person special effect video is generated and a completion reminder is sent to the users who participated. For example, the initiator sends an end-synthesis instruction through the terminal, and the server, based on the received instruction, takes the currently synthesized special effect video as the final multi-person special effect video and sends a completion reminder to the participating users.
- With the completion reminder, users do not need to track the synthesis progress through the special effect video invitation, and the initiator can end the multi-person special effect video synthesis at any time.
- the fusion method for personal special effect videos includes: recognizing the video information through an expression recognition model to obtain each expression video frame; extracting the face area in each expression video frame to obtain the face area corresponding to each expression video frame; determining the synthesis area in each video frame of the special effect video template according to the image information; and fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain a personal special effect video.
- the personal special effect video may be the personal special effect video of the initiator or the personal special effect video of the recipient.
- the expression recognition model is a model for recognizing expression video frames. It is built by collecting expression training pictures (web pictures, standard resource pictures, etc.); applying light and dark tone processing to the training pictures to enhance the model's generalization ability; classifying the training pictures by expression (smiling, blinking, funny faces, open mouth, etc.) to obtain each category of expression pictures; and feeding each category of expression pictures into a CNN (convolutional neural network) model based on the TensorFlow framework for training. The resulting expression recognition model can then identify which category each picture belongs to, for example: a smiling-expression picture, an open-mouth-expression picture, a funny-expression picture, or another kind of picture.
- an expression video frame refers to a video frame of the video information in which the facial expression is smiling, blinking, a funny face, an open mouth, and so on.
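Selecting the expression video frames from a video can be sketched as filtering frames by the label a recognizer assigns to each one. This is a minimal illustrative sketch: the label names and the `classify` callback stand in for the trained model's prediction function and are assumptions, not part of the patent.

```python
def select_expression_frames(frames, classify):
    """Keep only the frames whose predicted label is an expression of
    interest (smiling, blinking, funny face, open mouth)."""
    expressions = {"smile", "blink", "funny", "open_mouth"}
    return [frame for frame in frames if classify(frame) in expressions]


# Example with a stand-in classifier that looks labels up in a dict.
labels = {"f1": "smile", "f2": "neutral", "f3": "open_mouth"}
picked = select_expression_frames(["f1", "f2", "f3"], labels.get)
```

In a real pipeline, `classify` would wrap the trained TensorFlow model's inference call; the filter itself is unchanged.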
- the face area refers to the partial image of the face in the expression video frame, and the face area can be identified based on the face recognition technology.
- the synthesis area refers to the face area of the image information in the video frames of the special effect video template. It can be located through face recognition technology, or the face area of each image of the special effect video template can be pre-marked, so that the corresponding synthesis area is determined directly from the image information.
- fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template means replacing the synthesis area in each video frame of the special effect video template with the face area to obtain the personal special effect video.
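The replacement step above can be sketched as a rectangle copy, assuming frames are 2D arrays of pixel values and the synthesis area is given as a (top, left, height, width) rectangle. All names and the data layout are illustrative, not from the patent.

```python
def fuse_face_into_frame(template_frame, face_area, synthesis_area):
    """Return a copy of template_frame whose synthesis_area rectangle
    has been replaced by the pixels of face_area."""
    top, left, height, width = synthesis_area
    fused = [row[:] for row in template_frame]  # copy so the template stays reusable
    for r in range(height):
        for c in range(width):
            fused[top + r][left + c] = face_area[r][c]
    return fused


# Example: a 4x4 template of zeros, a 2x2 face of ones placed at (1, 1).
template = [[0] * 4 for _ in range(4)]
face = [[1, 1], [1, 1]]
fused = fuse_face_into_frame(template, face, (1, 1, 2, 2))
```

Copying the template before writing matters here, since the same template frames are reused for every participant's personal special effect video.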
- using the expression video frames in the video information makes the synthesized personal special effect video more interesting.
- the step of fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain a personal special effect video includes: determining the sequence of each expression video frame according to its order in the video information; determining the correspondence between each expression video frame and each video frame of the special effect video template according to the sequence of the template's video frames and the sequence of the expression video frames; and, according to that correspondence, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain a personal special effect video.
- the order of the expression video frames in the video information refers to the order in which those video frames are played.
- for example, if video frame l, video frame k, video frame j, and video frame h are displayed in sequence, the order of the expression video frames in the video information is: video frame l, video frame k, video frame j, video frame h.
- the sequence of the video frames of the special effect video template is determined in the same way as the sequence of the expression video frames, and is not repeated here.
- as an example of the correspondence between expression video frames and template video frames: suppose the special effect video template contains video frame p, video frame y, video frame i, and video frame u, in the order p, y, i, u, and the video information includes video frame l, video frame k, video frame j, and video frame h.
- if the order of the expression video frames is: video frame l, video frame k, video frame j, video frame h, then template video frame p corresponds to video frame l, video frame y corresponds to video frame k, video frame i corresponds to video frame j, and video frame u corresponds to video frame h.
- the face area of each expression video frame is then fused into the synthesis area of its corresponding template video frame, for example: the face area of video frame l is fused into the synthesis area of video frame p, the face area of video frame k into the synthesis area of video frame y, the face area of video frame j into the synthesis area of video frame i, and the face area of video frame h into the synthesis area of video frame u.
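The pairing described above (p↔l, y↔k, i↔j, u↔h) amounts to matching frames by playback position. A minimal sketch follows; the repeated-last-frame fallback for sequences of unequal length is an assumption, since the text does not specify that case.

```python
def match_frames(template_frames, expression_frames):
    """Pair each template frame with the expression frame at the same
    playback position; if the expression sequence is shorter, repeat
    its last frame (an assumed fallback, not specified in the text)."""
    pairs = []
    for index, template_frame in enumerate(template_frames):
        expression_frame = expression_frames[min(index, len(expression_frames) - 1)]
        pairs.append((template_frame, expression_frame))
    return pairs


# The example from the text: template frames p, y, i, u and video frames l, k, j, h.
pairs = match_frames(["p", "y", "i", "u"], ["l", "k", "j", "h"])
```

Each resulting pair is then handed to the fusion step, which copies the expression frame's face area into the template frame's synthesis area.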
- the step of fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video
- includes: according to the correspondence between each expression video frame and each video frame of the special effect video template, fusing the face area of each expression video frame into the synthesis area of the corresponding template video frame to obtain special effect video frames; feathering the edge of the synthesis area in each special effect video frame after the face area has been fused in, to obtain processed special effect video frames; and obtaining the personal special effect video from the processed special effect video frames.
- feathering refers to blurring the edges of a selected image region so that they fade gradually.
- the edges of the synthesis area, after the face area has been fused in, are feathered in each special effect video frame to obtain the processed special effect video frames (that is, the feathered special effect video frames). Feathering makes the synthesized special effect video frames look more natural.
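Feathering can be approximated by softening an alpha mask around the fused region, for instance by averaging each mask cell with its four neighbours. This is a crude stand-in for the Gaussian-style blur a real compositor would apply to the alpha channel; the mask layout and pass count are illustrative assumptions.

```python
def feather_mask(mask, passes=1):
    """Soften a mask (2D list of floats in [0, 1]) by averaging each
    cell with its in-bounds 4-neighbours, once per pass."""
    height, width = len(mask), len(mask[0])
    for _ in range(passes):
        softened = [[0.0] * width for _ in range(height)]
        for r in range(height):
            for c in range(width):
                total, count = mask[r][c], 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < height and 0 <= cc < width:
                        total += mask[rr][cc]
                        count += 1
                softened[r][c] = total / count
        mask = softened
    return mask


# A hard one-pixel mask spreads into its neighbours after one pass.
soft = feather_mask([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```

Blending the face pixels with the template pixels using such a softened mask is what removes the hard cut-out edge around the fused face.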
- the step of recognizing the video information through the expression recognition model to obtain each expression video frame includes: converting the format of the video information according to a preset video format to obtain converted video information; and recognizing the converted video information through the expression recognition model to obtain each expression video frame.
- the preset video format can be set according to production requirements.
- the special effect video synthesis method of this application uniformly uses a format that is conducive to network distribution, such as the F4V format, to synthesize multi-person special effect videos.
- the video formats uploaded by different users may differ, for example: AVI, WMV, RM, RMVB, MPEG1, MPEG2, F4V, and other formats,
- so the uploaded videos need to be converted to a unified format.
- converting the video information according to the preset video format can mean obtaining the video metadata according to the video's encoding method, and converting that metadata according to the preset video format to obtain the converted video information.
- in this way, multi-person special effect videos can be produced from videos in different formats, realizing special effect video synthesis that supports multiple formats.
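The unification step can be sketched as validating an upload's container format and mapping it to the preset target. The format list and F4V target come from the examples in the text; the actual transcoding (for instance via a tool like ffmpeg) is outside the scope of this sketch, which only derives the normalized name.

```python
SUPPORTED_FORMATS = {"avi", "wmv", "rm", "rmvb", "mpeg1", "mpeg2", "f4v"}


def normalized_name(filename, target="f4v"):
    """Return the filename with its extension replaced by the preset
    target format, validating that the source format is supported."""
    stem, _, ext = filename.rpartition(".")
    if not stem or ext.lower() not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported video format: {filename!r}")
    return f"{stem}.{target}"
```

Rejecting unknown containers up front keeps the later fusion steps working on a single, predictable format.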
- the step of sending the unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction includes: obtaining the user information in the image information selection instruction, where the user information includes user account information and user location information; and, when the user information verifies that the recipient is a designated user, sending the unoccupied image information in the special effect video template to the recipient terminal.
- the user account information can be used to identify the identity of each user.
- the user location information is the area where the current user is located.
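The verify-then-send flow can be sketched as a guard on the recipient's account followed by filtering out the template images that are already occupied. The field names and the None-means-send-nothing convention are assumptions for illustration, not from the patent.

```python
def unoccupied_images(template_images, occupied):
    """Images in the template that no participant has claimed yet."""
    return [image for image in template_images if image not in occupied]


def handle_selection(user_info, designated_accounts, template_images, occupied):
    """Return the unoccupied images only when the recipient's account is
    among those the initiator designated; otherwise return None so the
    server sends nothing."""
    if user_info.get("account") not in designated_accounts:
        return None
    return unoccupied_images(template_images, occupied)
```

A real server would also consult the location field, e.g. for region-restricted templates; the guard structure stays the same.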
- for example, the special effect video synthesis applet is associated with WeChat,
- and both initiator and recipient users can enter the special effect video synthesis applet through WeChat to perform special effect video synthesis.
- when a user enters the special effect video synthesis applet, it needs to obtain the user's WeChat account, which serves as the user account information, as well as the user's region.
- the terminal interacts with the server side (i.e., the server) of the special effect video synthesis applet through the client on the terminal.
- when the image information selection instruction is sent through the recipient terminal,
- the recipient's user information is obtained, and the image information selection instruction is generated based on that user information.
- similarly, when the initiator sends the special effect video synthesis instruction through the initiator terminal,
- the initiator's user information may also be obtained, so that the special effect video synthesis instruction also carries user information.
- in this way, the user information of each user can be collected and further analyzed with big data techniques, enabling product demand analysis and product recommendation to users. Obtaining user information through entertainment and recommending products on that basis improves work efficiency and the accuracy of product recommendations.
- although the steps in the flowchart of FIG. 2 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order for executing these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
- a special effect video synthesis device is provided, including: a synthesis instruction receiving module 310, a first fusion module 320, an invitation sending module 330, an instruction receiving module 340, an image information sending module 350, an information acquisition module 360, and a second fusion module 370, where:
- the synthesis instruction receiving module 310 is configured to receive the special effect video synthesis instruction sent by the initiator terminal.
- the special effect video synthesis instruction includes the special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator video information;
- the first fusion module 320 is used for fusing the image information of the initiator and the video information of the initiator in the special effect video template to obtain the personal special effect video of the initiator;
- the invitation sending module 330 is configured to generate a video shooting invitation according to the special effect video of the initiating party and send it to the initiating party terminal, and the video shooting invitation is sent to the designated user by the initiating party terminal;
- the instruction receiving module 340 is configured to obtain the image information selection instruction sent by the receiving terminal receiving the video shooting invitation;
- the image information sending module 350 is configured to send the unoccupied image information in the special effect video template to the receiving terminal according to the image information selection instruction;
- the information acquisition module 360 is configured to acquire the image information of the special effect video template sent by the receiver terminal based on the unoccupied image information in the special effect video template, and the receiver's video information;
- the second fusion module 370 is configured to merge the personal special effect video of the initiator and the video information of the recipient according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- the second fusion module 370 is further configured to: fuse the image information of the recipient and the video information of the recipient in the special effect video template to obtain the recipient's personal special effect video; and synthesize the initiator's personal special effect video and the recipient's personal special effect video according to the special effect video template to obtain a multi-person special effect video.
- the first fusion module 320 and the second fusion module 370 are further configured to: recognize the video information through the expression recognition model to obtain each expression video frame; extract the face area in each expression video frame to obtain the face area corresponding to each expression video frame; determine the synthesis area in each video frame of the special effect video template according to the image information; and fuse the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain a personal special effect video.
- the first fusion module 320 and the second fusion module 370 are further configured to: determine the sequence of each expression video frame according to its order in the video information; determine the correspondence between each expression video frame and each video frame of the special effect video template according to the sequence of the template's video frames and the sequence of the expression video frames; and, according to that correspondence, fuse the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain a personal special effect video.
- the first fusion module 320 and the second fusion module 370 are further configured to: according to the correspondence between each expression video frame and each video frame of the special effect video template, fuse the face area of each expression video frame into
- the synthesis area in the corresponding video frame of the special effect video template to obtain special effect video frames; feather the edge of the synthesis area in each special effect video frame after the face area has been fused in, to obtain processed special effect video frames; and obtain the personal special effect video from the processed special effect video frames.
- the first fusion module 320 and the second fusion module 370 are further configured to: convert the format of the video information according to the preset video format to obtain converted video information; and recognize the converted video information to obtain each expression video frame.
- the image information sending module 350 is further configured to: obtain the user information in the image information selection instruction, where the user information includes user account information and user location information; and, when the user information verifies that the recipient is a designated user, send the unoccupied image information in the special effect video template to the recipient terminal.
- each module in the above-mentioned special effect video synthesis device can be implemented in whole or in part by software, hardware, and a combination thereof.
- the above-mentioned modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to them.
- a computer device is provided.
- the computer device may be a server, and its internal structure diagram may be as shown in FIG. 6.
- the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
- the processor of the computer device is used to provide calculation and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, a computer program, and a database.
- the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
- the database of the computer equipment is used to store special effects video data.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer program, when executed by the processor, implements a special effect video synthesis method, which includes the following steps: receiving a special effect video synthesis instruction sent by the initiator terminal, the instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's special effect video and sending it to the initiator terminal.
- the video shooting invitation is sent from the initiator terminal to a designated user; obtaining the image information selection instruction sent by the recipient terminal that received the video shooting invitation;
- sending the unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; and obtaining the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the template,
- together with the recipient's video information; and fusing the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- FIG. 6 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied.
- a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- a storage medium storing computer-readable instructions.
- the storage medium is a volatile storage medium or a non-volatile storage medium.
- when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps: receive a special effect video synthesis instruction sent by the initiator terminal, the instruction including the special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fuse the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; generate a video shooting invitation based on the initiator's special effect video and send it to the initiator terminal, the invitation being sent from the initiator terminal to a designated user; obtain the image information selection instruction sent by the recipient terminal that received the video shooting invitation; send the unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; obtain the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the template, together with the recipient's video information; and fuse the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Studio Circuits (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (20)
- A special effect video synthesis method, wherein the method comprises: receiving a special effect video synthesis instruction sent by an initiator terminal, the special effect video synthesis instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's special effect video and sending it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to a designated user; obtaining an image information selection instruction sent by a recipient terminal that received the video shooting invitation; sending unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; obtaining the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the special effect video template, and the recipient's video information; and fusing the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- The method according to claim 1, wherein the step of fusing the initiator's personal special effect video and the recipient's video information according to the recipient's image information of the special effect video template to obtain a multi-person special effect video comprises: fusing the image information of the recipient and the video information of the recipient in the special effect video template to obtain the recipient's personal special effect video; and synthesizing the initiator's personal special effect video and the recipient's personal special effect video according to the special effect video template to obtain the multi-person special effect video.
- The method according to claim 1 or 2, wherein the fusion method for a personal special effect video comprises: recognizing the video information through an expression recognition model to obtain each expression video frame; extracting the face area in each expression video frame to obtain the face area corresponding to each expression video frame; determining the synthesis area in each video frame of the special effect video template according to the image information; and fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video.
- The method according to claim 3, wherein the step of fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video comprises: determining the sequence of each expression video frame according to its order in the video information; determining the correspondence between each expression video frame and each video frame of the special effect video template according to the sequence of the video frames of the special effect video template and the sequence of the expression video frames; and, according to the correspondence between each expression video frame and each video frame of the special effect video template, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video.
- The method according to claim 4, wherein the step of fusing, according to the correspondence between each expression video frame and each video frame of the special effect video template, the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video comprises: according to that correspondence, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain special effect video frames; feathering the edge of the synthesis area in each special effect video frame after the face area has been fused in, to obtain processed special effect video frames; and obtaining the personal special effect video from the processed special effect video frames.
- The method according to claim 3, wherein the step of recognizing the video information through the expression recognition model to obtain each expression video frame comprises: converting the format of the video information according to a preset video format to obtain converted video information; and recognizing the converted video information through the expression recognition model to obtain each expression video frame.
- The method according to claim 1, wherein the step of sending the unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction comprises: obtaining user information in the image information selection instruction, the user information including user account information and user location information; and, when the user information verifies that the recipient is the designated user, sending the unoccupied image information in the special effect video template to the recipient terminal.
- A special effect video synthesis device, wherein the device comprises: a synthesis instruction receiving module, configured to receive a special effect video synthesis instruction sent by an initiator terminal, the instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; a first fusion module, configured to fuse the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; an invitation sending module, configured to generate a video shooting invitation according to the initiator's special effect video and send it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to a designated user; an instruction receiving module, configured to obtain an image information selection instruction sent by a recipient terminal that received the video shooting invitation; an image information sending module, configured to send unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; an information acquisition module, configured to obtain the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the special effect video template, and the recipient's video information; and a second fusion module, configured to fuse the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- A computer device, comprising: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to perform a special effect video synthesis method, wherein the special effect video synthesis method comprises the following steps: receiving a special effect video synthesis instruction sent by an initiator terminal, the special effect video synthesis instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's special effect video and sending it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to a designated user; obtaining an image information selection instruction sent by a recipient terminal that received the video shooting invitation; sending unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; obtaining the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the special effect video template, and the recipient's video information; and fusing the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- The computer device according to claim 9, wherein the step of fusing the initiator's personal special effect video and the recipient's video information according to the recipient's image information of the special effect video template to obtain a multi-person special effect video comprises: fusing the image information of the recipient and the video information of the recipient in the special effect video template to obtain the recipient's personal special effect video; and synthesizing the initiator's personal special effect video and the recipient's personal special effect video according to the special effect video template to obtain the multi-person special effect video.
- The computer device according to claim 9 or 10, wherein the fusion method for a personal special effect video comprises: recognizing the video information through an expression recognition model to obtain each expression video frame; extracting the face area in each expression video frame to obtain the face area corresponding to each expression video frame; determining the synthesis area in each video frame of the special effect video template according to the image information; and fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video.
- The computer device according to claim 11, wherein the step of fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video comprises: determining the sequence of each expression video frame according to its order in the video information; determining the correspondence between each expression video frame and each video frame of the special effect video template according to the sequence of the video frames of the special effect video template and the sequence of the expression video frames; and, according to the correspondence between each expression video frame and each video frame of the special effect video template, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video.
- The computer device according to claim 12, wherein the step of fusing, according to the correspondence between each expression video frame and each video frame of the special effect video template, the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video comprises: according to that correspondence, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain special effect video frames; feathering the edge of the synthesis area in each special effect video frame after the face area has been fused in, to obtain processed special effect video frames; and obtaining the personal special effect video from the processed special effect video frames.
- The computer device according to claim 11, wherein the step of recognizing the video information through the expression recognition model to obtain each expression video frame comprises: converting the format of the video information according to a preset video format to obtain converted video information; and recognizing the converted video information through the expression recognition model to obtain each expression video frame.
- The computer device according to claim 9, wherein the step of sending the unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction comprises: obtaining user information in the image information selection instruction, the user information including user account information and user location information; and, when the user information verifies that the recipient is the designated user, sending the unoccupied image information in the special effect video template to the recipient terminal.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, a special effect video synthesis method is implemented, wherein the special effect video synthesis method comprises the following steps: receiving a special effect video synthesis instruction sent by an initiator terminal, the special effect video synthesis instruction including a special effect video template identifier, the image information of the initiator in the special effect video template, and the initiator's video information; fusing the image information of the initiator and the video information of the initiator in the special effect video template to obtain the initiator's personal special effect video; generating a video shooting invitation according to the initiator's special effect video and sending it to the initiator terminal, the video shooting invitation being sent by the initiator terminal to a designated user; obtaining an image information selection instruction sent by a recipient terminal that received the video shooting invitation; sending unoccupied image information in the special effect video template to the recipient terminal according to the image information selection instruction; obtaining the image information of the special effect video template sent by the recipient terminal based on the unoccupied image information in the special effect video template, and the recipient's video information; and fusing the initiator's personal special effect video and the recipient's video information according to the image information of the recipient in the special effect video template to obtain a multi-person special effect video.
- The computer-readable storage medium according to claim 16, wherein the step of fusing the initiator's personal special effect video and the recipient's video information according to the recipient's image information of the special effect video template to obtain a multi-person special effect video comprises: fusing the image information of the recipient and the video information of the recipient in the special effect video template to obtain the recipient's personal special effect video; and synthesizing the initiator's personal special effect video and the recipient's personal special effect video according to the special effect video template to obtain the multi-person special effect video.
- The computer-readable storage medium according to claim 16 or 17, wherein the fusion method for a personal special effect video comprises: recognizing the video information through an expression recognition model to obtain each expression video frame; extracting the face area in each expression video frame to obtain the face area corresponding to each expression video frame; determining the synthesis area in each video frame of the special effect video template according to the image information; and fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video.
- The computer-readable storage medium according to claim 18, wherein the step of fusing the face area corresponding to each expression video frame into the synthesis area in each video frame of the special effect video template to obtain the personal special effect video comprises: determining the sequence of each expression video frame according to its order in the video information; determining the correspondence between each expression video frame and each video frame of the special effect video template according to the sequence of the video frames of the special effect video template and the sequence of the expression video frames; and, according to the correspondence between each expression video frame and each video frame of the special effect video template, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video.
- The computer-readable storage medium according to claim 19, wherein the step of fusing, according to the correspondence between each expression video frame and each video frame of the special effect video template, the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain the personal special effect video comprises: according to that correspondence, fusing the face area corresponding to each expression video frame into the synthesis area of the corresponding video frame of the special effect video template to obtain special effect video frames; feathering the edge of the synthesis area in each special effect video frame after the face area has been fused in, to obtain processed special effect video frames; and obtaining the personal special effect video from the processed special effect video frames.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911147121.8A CN111147766A (zh) | 2019-11-21 | 2019-11-21 | 特效视频合成方法、装置、计算机设备和存储介质 |
CN201911147121.8 | 2019-11-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021098151A1 true WO2021098151A1 (zh) | 2021-05-27 |
Family
ID=70517212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/087712 WO2021098151A1 (zh) | 2019-11-21 | 2020-04-29 | 特效视频合成方法、装置、计算机设备和存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111147766A (zh) |
WO (1) | WO2021098151A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112153422B (zh) * | 2020-09-25 | 2023-03-31 | 连尚(北京)网络科技有限公司 | 视频融合方法和设备 |
CN112312163B (zh) * | 2020-10-30 | 2024-05-28 | 北京字跳网络技术有限公司 | 视频生成方法、装置、电子设备及存储介质 |
CN113806306B (zh) * | 2021-08-04 | 2024-01-16 | 北京字跳网络技术有限公司 | 媒体文件处理方法、装置、设备、可读存储介质及产品 |
CN114429611B (zh) * | 2022-04-06 | 2022-07-08 | 北京达佳互联信息技术有限公司 | 视频合成方法、装置、电子设备及存储介质 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000175171A (ja) * | 1998-12-03 | 2000-06-23 | Nec Corp | テレビ会議の映像生成装置及びその生成方法 |
CN102665026A (zh) * | 2012-05-03 | 2012-09-12 | 华为技术有限公司 | 一种利用视频会议实现远程合影的方法、设备及*** |
CN106331529A (zh) * | 2016-10-27 | 2017-01-11 | 广东小天才科技有限公司 | 一种图像拍摄方法及装置 |
CN106375193A (zh) * | 2016-09-09 | 2017-02-01 | 四川长虹电器股份有限公司 | 远程合照方法 |
CN107734257A (zh) * | 2017-10-25 | 2018-02-23 | 北京玩拍世界科技有限公司 | 一种群拍视频拍摄方法及装置 |
CN109040647A (zh) * | 2018-08-31 | 2018-12-18 | 北京小鱼在家科技有限公司 | 媒体信息合成方法、装置、设备及存储介质 |
CN109785229A (zh) * | 2019-01-11 | 2019-05-21 | 百度在线网络技术(北京)有限公司 | 基于区块链实现的智能合影方法、装置、设备和介质 |
CN110012352A (zh) * | 2019-04-17 | 2019-07-12 | 广州华多网络科技有限公司 | 图像特效处理方法、装置及视频直播终端 |
CN110166799A (zh) * | 2018-07-02 | 2019-08-23 | 腾讯科技(深圳)有限公司 | 直播互动方法、装置及存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005033532A (ja) * | 2003-07-14 | 2005-02-03 | Noritsu Koki Co Ltd | 写真処理装置 |
CN104680480B (zh) * | 2013-11-28 | 2019-04-02 | 腾讯科技(上海)有限公司 | 一种图像处理的方法及装置 |
US10057205B2 (en) * | 2014-11-20 | 2018-08-21 | GroupLuv, Inc. | Systems and methods for creating and accessing collaborative electronic multimedia compositions |
CN106355551A (zh) * | 2016-08-26 | 2017-01-25 | 北京金山安全软件有限公司 | 拼图处理方法、装置、电子设备及服务器 |
KR101894956B1 (ko) * | 2017-06-21 | 2018-10-24 | 주식회사 미디어프론트 | 실시간 증강 합성 기술을 이용한 영상 생성 서버 및 방법 |
CN108259788A (zh) * | 2018-01-29 | 2018-07-06 | 努比亚技术有限公司 | 视频编辑方法、终端和计算机可读存储介质 |
CN110121094A (zh) * | 2019-06-20 | 2019-08-13 | 广州酷狗计算机科技有限公司 | 视频合拍模板的显示方法、装置、设备及存储介质 |
-
2019
- 2019-11-21 CN CN201911147121.8A patent/CN111147766A/zh active Pending
-
2020
- 2020-04-29 WO PCT/CN2020/087712 patent/WO2021098151A1/zh active Application Filing
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000175171A (ja) * | 1998-12-03 | 2000-06-23 | Nec Corp | テレビ会議の映像生成装置及びその生成方法 |
CN102665026A (zh) * | 2012-05-03 | 2012-09-12 | 华为技术有限公司 | 一种利用视频会议实现远程合影的方法、设备及*** |
CN106375193A (zh) * | 2016-09-09 | 2017-02-01 | 四川长虹电器股份有限公司 | 远程合照方法 |
CN106331529A (zh) * | 2016-10-27 | 2017-01-11 | 广东小天才科技有限公司 | 一种图像拍摄方法及装置 |
CN107734257A (zh) * | 2017-10-25 | 2018-02-23 | 北京玩拍世界科技有限公司 | 一种群拍视频拍摄方法及装置 |
CN110166799A (zh) * | 2018-07-02 | 2019-08-23 | 腾讯科技(深圳)有限公司 | 直播互动方法、装置及存储介质 |
CN109040647A (zh) * | 2018-08-31 | 2018-12-18 | 北京小鱼在家科技有限公司 | 媒体信息合成方法、装置、设备及存储介质 |
CN109785229A (zh) * | 2019-01-11 | 2019-05-21 | 百度在线网络技术(北京)有限公司 | 基于区块链实现的智能合影方法、装置、设备和介质 |
CN110012352A (zh) * | 2019-04-17 | 2019-07-12 | 广州华多网络科技有限公司 | 图像特效处理方法、装置及视频直播终端 |
Also Published As
Publication number | Publication date |
---|---|
CN111147766A (zh) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021098151A1 (zh) | 特效视频合成方法、装置、计算机设备和存储介质 | |
US11670015B2 (en) | Method and apparatus for generating video | |
CN108322832B (zh) | 评论方法、装置、及电子设备 | |
CN111080759B (zh) | 一种分镜效果的实现方法、装置及相关产品 | |
CN112199016B (zh) | 图像处理方法、装置、电子设备及计算机可读存储介质 | |
CN105608715A (zh) | 一种在线合影方法及*** | |
WO2023011221A1 (zh) | 混合变形值的输出方法及存储介质、电子装置 | |
CN106375193A (zh) | 远程合照方法 | |
CN112492231B (zh) | 远程交互方法、装置、电子设备和计算机可读存储介质 | |
CN112004034A (zh) | 合拍方法、装置、电子设备及计算机可读存储介质 | |
CN114430494B (zh) | 界面显示方法、装置、设备及存储介质 | |
KR20170102570A (ko) | 소셜 네트워킹 툴들과의 텔레비전 기반 상호작용의 용이화 | |
CN107911601A (zh) | 一种拍照时智能推荐拍照表情和拍照姿势的方法及其*** | |
CN108961368A (zh) | 三维动画环境中实时直播综艺节目的方法和*** | |
US20150341541A1 (en) | Methods and systems of remote acquisition of digital images or models | |
CN109529350A (zh) | 一种应用于游戏中的动作数据处理方法及其装置 | |
CN108320331B (zh) | 一种生成用户场景的增强现实视频信息的方法与设备 | |
CN117011497A (zh) | 一种ar场景下基于ai通用助手的远程多方视频交互方法 | |
WO2023082737A1 (zh) | 一种数据处理方法、装置、设备以及可读存储介质 | |
CN115442658B (zh) | 直播方法、装置、存储介质、电子设备及产品 | |
CN116016837A (zh) | 一种沉浸式虚拟网络会议方法和装置 | |
Sun et al. | Video Conference System in Mixed Reality Using a Hololens | |
US20230138434A1 (en) | Extraction of user representation from video stream to a virtual environment | |
CN114125552A (zh) | 视频数据的生成方法及装置、存储介质、电子装置 | |
CN112734657A (zh) | 基于人工智能和三维模型的云合影方法、装置及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20889519 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20889519 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29-09-2022) |