WO2019149000A1 - Processing method, medium and terminal device for interactive special-effects video - Google Patents

Processing method, medium and terminal device for interactive special-effects video

Info

Publication number
WO2019149000A1
WO2019149000A1 · PCT/CN2018/123236 · CN2018123236W
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
effect
video
special
reference video
Prior art date
Application number
PCT/CN2018/123236
Other languages
English (en)
French (fr)
Inventor
袁少龙
周宇涛
Original Assignee
广州市百果园信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 广州市百果园信息技术有限公司
Priority to US16/965,454 (US11533442B2)
Priority to EP18903360.8A (EP3748954B1)
Priority to RU2020128552A (RU2758910C1)
Publication of WO2019149000A1

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318 Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region

Definitions

  • The invention relates to information processing technology, and in particular to a method, a medium and a terminal device for processing interactive special-effects video.
  • Existing video special effects are generally added in post-processing after the user has shot the video; in a video with multiple special effects, each effect is usually produced one by one, which is very time-consuming. When the effects must interact with one another, a high level of professional skill and precision is demanded of the user, making it hard for ordinary users to create videos with multiple, and especially interacting, special effects. This raises the production threshold and limits the video-entertainment options available to ordinary users.
  • The object of the present invention is to overcome at least one of the above technical drawbacks, and in particular to reduce the difficulty for a user of producing interactive video special effects.
  • The present invention provides a method for processing an interactive special-effects video, including: acquiring a reference video containing a first special effect; acquiring a second special effect that interacts with the first special effect; and processing the current image of the reference video according to the second special effect, resulting in a video containing the second special effect.
  • The reference video may carry first content information of the first special effect; acquiring the second special effect that interacts with the first special effect then includes: acquiring the first special effect from the first content information, and obtaining the second special effect that interacts with the first special effect from an interactive-effect correspondence table.
  • The reference video may carry second content information of the second special effect that interacts with the first special effect; acquiring the second special effect then includes: obtaining, from the second content information, the second special effect that interacts with the first special effect.
  • Acquiring the second special effect that interacts with the first special effect may include: identifying a feature of the first special effect in the reference video, and acquiring, according to that feature, the second special effect that interacts with the first special effect from the interactive-effect correspondence table.
  • Obtaining, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect may include: acquiring, from an interactive-effect correspondence table on the terminal or on the effect server, the effect group that interacts with the first special effect, where the effect group includes two or more second special effects, each with a color attribute and a user-feedback effect rating; determining whether a second-effect selection instruction input by the user is received; if the selection instruction is received, acquiring from the effect group the second special effect corresponding to it; if it is not received, determining whether effect color adaptation is set: if so, calculating the color average of the current frame of the reference video and acquiring the second special effect whose color attribute corresponds to that average; otherwise, acquiring the second special effect with the highest effect rating.
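  • The selection order above can be sketched in code. This is a hypothetical illustration, not the patent's implementation; all names (`EffectEntry`, `choose_second_effect`, and so on) are assumptions.

```python
# Hypothetical sketch of the second-effect selection logic described above.
from dataclasses import dataclass
from typing import List, Optional, Sequence, Tuple

Pixel = Tuple[int, int, int]  # (R, G, B)

@dataclass
class EffectEntry:
    name: str
    color_attribute: Pixel  # representative colour of this second effect
    rating: float           # user-feedback effect score

def frame_color_average(frame: Sequence[Pixel]) -> Tuple[float, float, float]:
    """Colour average of the reference video's current frame."""
    n = len(frame)
    return tuple(sum(p[c] for p in frame) / n for c in range(3))

def choose_second_effect(group: List[EffectEntry],
                         user_choice: Optional[str] = None,
                         color_adapt: bool = False,
                         frame: Optional[Sequence[Pixel]] = None) -> EffectEntry:
    """Order from the description: explicit user selection instruction,
    then colour adaptation, then the highest-rated effect."""
    if user_choice is not None:
        return next(e for e in group if e.name == user_choice)
    if color_adapt and frame:
        avg = frame_color_average(frame)
        # pick the effect whose colour attribute is closest to the frame average
        return min(group,
                   key=lambda e: sum((a - b) ** 2
                                     for a, b in zip(e.color_attribute, avg)))
    return max(group, key=lambda e: e.rating)
```

  • The closest-colour match here is a squared-distance comparison chosen for simplicity; the patent only states that the effect's colour attribute should "correspond to" the frame's colour average.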
  • the first special effect and the second special effect are the same, opposite, or similar special effects.
  • the first special effect and the second special effect are special effects interacting on a time axis with a starting time as a reference starting point.
  • Interacting on the time axis with the starting time as a reference starting point may mean that the interaction time points of the first special effect and the second special effect on that time axis are the same. Processing the current image of the reference video according to the second special effect then includes: acquiring the first time point of the first special effect in the reference video on the time axis with the starting time as the reference starting point, and processing, with the second special effect, the image of the reference video corresponding to that first time point.
  • Alternatively, the interaction time points of the first special effect and the second special effect on the time axis with the starting time as the reference starting point are in a sequential arrangement. Processing the current image of the reference video according to the second special effect then includes: acquiring the first time point of the first special effect in the reference video on that time axis; obtaining, according to the sequential arrangement, the second time point of the second special effect in the reference video; and processing, with the second special effect, the image of the reference video corresponding to that second time point.
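  • The timing relationship can be modelled as a simple offset from the first effect's time point; the helper name and signature below are assumptions for illustration.

```python
# Illustrative timing helper: given the first time point of the first effect
# (seconds from the starting time) and a preset offset, derive the second
# effect's time point on the same time axis.

def second_effect_time_point(first_time_point: float,
                             offset: float = 0.0) -> float:
    """offset == 0 models the 'same interaction time' case; a positive
    offset models the sequential (staggered) arrangement."""
    if first_time_point < 0 or offset < 0:
        raise ValueError("time points and offsets must be non-negative")
    return first_time_point + offset
```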
  • The method may further comprise: synthesizing the video containing the second special effect and the reference video containing the first special effect into one video.
  • The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the foregoing methods for processing an interactive special-effects video are implemented.
  • The present invention also provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of any of the foregoing methods for processing an interactive special-effects video are implemented.
  • Obtaining the second special effect from the first special effect of the reference video, so that a video containing the second special effect is generated automatically, simplifies the steps the user must take to create multiple special effects and reduces the difficulty of producing such videos; moreover, the second special effect has a variety of interactive effects with the first special effect, which makes creating video special effects more entertaining.
  • The present invention may carry the first content information and/or the second content information in the reference video so that the first special effect and/or the second special effect can be edited separately. This avoids editing the entire reference video, reduces the terminal memory occupied, and also makes it easier to replace the first effect and/or the second effect.
  • The invention may determine the effect group corresponding to the first special effect through an interactive-effect correspondence table, and determine the second special effect according to a user selection, a system setting or an effect rating, enriching the combinations of the first and second special effects and the interactions between them; the entertainment value of the video effects can be further enhanced through the interaction timing of the first and second special effects.
  • FIG. 1 is a schematic flow chart of a first embodiment of a processing method according to the present invention.
  • FIG. 2 is a schematic flow chart of a second embodiment of the processing method according to the present invention.
  • FIG. 3 is a schematic diagram of an embodiment of a terminal device according to the present invention.
  • the present invention provides a method for processing an interactive special effect video.
  • the first embodiment shown in FIG. 1 includes the following steps:
  • Step S10: acquiring a reference video including the first special effect;
  • Step S20: acquiring a second special effect that interacts with the first special effect;
  • Step S30: processing the current image of the reference video according to the second special effect to obtain a video including the second special effect.
  • each step is as follows:
  • Step S10: acquiring a reference video including the first special effect.
  • The reference video may be a video recorded in real time or a video pre-stored in the terminal. The first special effect contained in the reference video may be a visual effect or a sound effect. When the first special effect is a visual effect, it may be displayed in the picture of the reference video, or not displayed in the picture but only stored in a file associated with the reference video. The first special effect may belong to the same video stream file as the reference video, or be a separate file that is synthesized into the picture or sound of the same video only at playback, according to correspondence or matching information.
  • The first special effect may be an effect associated with a body motion in the video picture; for example, when a finger-snap motion appears in the picture, a first special effect matched to that motion is applied. The first special effect can be displayed in the picture of the reference video, hidden according to the user's preference, and displayed again when another motion that triggers display of the effect occurs.
  • Step S20: acquiring a second special effect that interacts with the first special effect.
  • the second special effect may be a visual special effect or a sound special effect;
  • A second special effect that interacts with the first special effect means that the two effects have a particular associated relationship. For example: a second snowflake effect may appear in the video picture simultaneously with, or delayed after, the first special effect; when the first special effect is a portal appearing on the left side of the character in the picture, a second effect of another portal may appear, simultaneously or delayed, on the right side of the character; when the first special effect is an explosion visual effect in the picture, a second special effect of an explosion sound may be output in the video, simultaneously or delayed.
  • The second special effect may be obtained from the terminal or from the effect server.
  • Step S30: processing the current image of the reference video according to the second special effect to obtain a video including the second special effect.
  • Processing the current image of the reference video may be done in several ways: outputting the first and second special effects simultaneously into the current image or sound of the reference video; displaying the first and second special effects in the picture of the reference video in sequence at certain times; displaying the first and second special effects in the reference video along an interactive motion trajectory; or presetting trigger logic for the first and second special effects so that they are output interactively later. When the first and second special effects are visual effects, a video containing the visual effects is formed; when they are sound effects, the current image of the reference video is processed according to the sound of the second effect to form a video carrying the sound effect.
  • Obtaining the second special effect according to the first special effect of the reference video, so that a video containing the second special effect is generated automatically, simplifies the process of creating multiple special effects and reduces its difficulty for the user; the interaction between the second and first special effects also makes the created effects more entertaining.
  • The present invention further provides another embodiment in which the reference video carries the first content information of the first special effect.
  • The first content information may directly contain the display effect or sound effect of the first special effect, or contain a local address or network address from which the first special effect can be acquired.
  • The first content information may further include the video author of the first special effect, the effect duration, the decoding manner of the reference video, and so on. Acquiring the first special effect from the first content information may mean obtaining it directly from that information, or obtaining it according to the local or network address it contains. Before the second special effect is acquired, the interactive-effect correspondence table may be preset so that the second special effect can be determined from the first special effect.
  • Including the first special effect in the first content information avoids synthesizing it directly into the reference video, which makes the first special effect easier to modify.
  • When the first special effect is a visual effect, it may not be displayed directly in the picture of the reference video but instead be read from the first content information. The first special effect can then be modified or edited separately within the first content information, avoiding editing the reference video itself; this saves the memory that editing the effect would occupy and shortens the time needed to modify or replace the first special effect.
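  • One possible shape for the content information described above is sketched below; every field name here is an assumption for illustration, since the patent does not fix a concrete format.

```python
# Hypothetical structure for the "content information" carried with a
# reference video, covering the alternatives the text mentions: the effect
# embedded directly, or located via a local or network address.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EffectContentInfo:
    effect_id: str
    embedded_data: Optional[bytes] = None  # effect carried directly, or...
    local_address: Optional[str] = None    # ...an address to fetch it from
    network_address: Optional[str] = None
    author: Optional[str] = None
    duration_s: Optional[float] = None

def resolve_effect_source(info: EffectContentInfo) -> str:
    """Prefer embedded data, then a local address, then a network address,
    mirroring the acquisition options in the text."""
    if info.embedded_data is not None:
        return "embedded"
    if info.local_address:
        return "local"
    if info.network_address:
        return "network"
    raise LookupError("content information does not locate the effect")
```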
  • The present invention provides a further embodiment in which the reference video carries second content information of the second special effect that interacts with the first special effect.
  • In this case the reference video carries not only information related to the first special effect but also information related to the second special effect.
  • The second content information may directly contain the display effect or sound effect of the second special effect, contain a local or network address from which the second special effect can be acquired, or contain the interactive-effect correspondence table relating the first and second special effects. It may further include the video author of the second special effect, the effect duration, and so on. Acquiring the second special effect from the second content information may mean obtaining it directly from that information, or obtaining it according to the local or network address it contains.
  • Because the second content information is carried in the reference video, it can be modified or replaced to change the second special effect without modifying the second special effect inside the video itself, which simplifies the operation.
  • The present invention provides a further embodiment of acquiring a second special effect that interacts with the first special effect, in which the second special effect is obtained from the interactive-effect correspondence table.
  • This embodiment may identify the feature of the first special effect in the reference video and then obtain the second special effect according to the interactive-effect correspondence table. The correspondence table may be pre-stored on the effect server or the local terminal to define the correspondence between the first and second special effects; when the second special effect corresponding to a first special effect needs to be changed, the table itself can be modified directly, without modifying the reference video, which simplifies the modification.
  • The reference video may also carry the first content information, or carry information indicating the correspondence between the first special effect and the reference video.
  • The feature of the first special effect may be contained directly in the reference video, or located at a corresponding local or network address.
  • The second special effect that interacts with the first special effect is obtained from the interactive-effect correspondence table according to that feature.
  • The second effect may likewise be contained directly in the reference video, or located at a corresponding local or network address.
  • When the second special effect is contained directly in the reference video, it need not be displayed or output immediately; the effect picture is displayed, or the effect audio output, once an instruction triggering display or output is received.
  • This saves the time of fetching the second special effect and avoids delays or failures in acquiring it.
  • The present invention also proposes another embodiment.
  • The second special effect may be obtained directly from the second content information, which is the fastest route. If there is no second content information, the first special effect is obtained from its first content information and the corresponding second special effect is then acquired from the interactive-effect correspondence table. If there is no content information at all, the feature of the first special effect is identified from the reference video and the second special effect is acquired based on that feature.
  • This embodiment thus has several ways of obtaining the second special effect: the fastest mode is tried first to speed up acquisition, and the remaining modes serve as fallbacks to ensure the second special effect can still be obtained.
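  • The tiered acquisition order can be sketched as a fall-through chain. All names below are illustrative, not the patent's; effects are represented as plain strings for simplicity.

```python
# Sketch of the tiered acquisition order: second content information first,
# then the correspondence table keyed by the first effect, then feature
# recognition followed by a table lookup.
from typing import Callable, Dict, Optional

def acquire_second_effect(second_info: Optional[str],
                          first_effect: Optional[str],
                          table: Dict[str, str],
                          recognize_feature: Callable[[], str]) -> str:
    """Fall through the sources in order of expected speed."""
    if second_info is not None:
        return second_info                 # carried in the reference video
    if first_effect is not None and first_effect in table:
        return table[first_effect]         # look up by the first effect
    return table[recognize_feature()]      # identify the feature, then look up
```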
  • The present invention provides a further embodiment of obtaining, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect: if no second-effect selection instruction is received, it is determined whether effect color adaptation is set; if so, the color average of the current frame of the reference video is calculated and the second special effect whose color attribute corresponds to that average is acquired; otherwise the second special effect with the highest effect rating is acquired.
  • For example, the first special effect may be a sound, and the second special effects that interact with it may be fruit visual effects of different colors; the user can choose which effect to apply. If the user does not choose, an effect can be selected automatically according to the color average of the current frame of the reference video, or according to the effect rating.
  • In this embodiment the color-based interaction can be decided by the user, which improves user participation and enlarges the creative range of the second special effect; the function of adding the second special effect automatically is also provided, simplifying operation and improving the user experience.
  • The second special effects in the effect group may be provided to the user by a third-party server, and the user may rate each second special effect in the group, improving user interactivity.
  • the first special effect and the second special effect may be the same, opposite, or similar special effects.
  • 'Same' means that the picture, sound or motion of the two effects is identical; for example, the first special effect adds blush to character A and the second adds the same blush to character B. 'Opposite' may mean a mirror image of the picture motion, or an opposite change of the picture; for example, when the first special effect enlarges character A, the second special effect shrinks character B. 'Similar' may mean adding comparable effects; for example, when the first effect adds a stun effect above character A's head, the second effect adds a different stun effect above character B's head.
  • By combining the first and second special effects in different ways, the invention can produce varied effects such as exaggeration and contrast, giving the user richer entertainment modes.
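  • The opposite pairings above (mirroring, enlarge versus shrink) can be illustrated with toy helpers, where a frame is modelled as a 2-D list of pixel values; the function names are hypothetical.

```python
# Toy illustration of 'opposite' effect pairings on a simple frame model.

def mirror_frame(frame):
    """Horizontal mirror: an 'opposite' counterpart of a picture motion."""
    return [list(reversed(row)) for row in frame]

def scale_factor_pair(factor):
    """Stand-in for enlarge/shrink: if the first effect enlarges character A
    by `factor`, the opposite second effect shrinks character B by 1/factor."""
    if factor <= 0:
        raise ValueError("factor must be positive")
    return factor, 1.0 / factor
```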
  • The present invention also proposes a second embodiment: the first special effect and the second special effect may be effects that interact on a time axis whose reference starting point is the starting time.
  • step S20 in the first embodiment may be changed to:
  • Step S21 Acquire a second special effect that interacts with the first special effect, where the first special effect and the second special effect are special effects interacting on a time axis with a starting time as a reference starting point.
  • The interaction time or trigger time of the first and second special effects may be referenced to this time axis: the effects may display their pictures or play their sounds at the same moment, one may be delayed relative to the other, or they may be interleaved along the time axis.
  • The present invention further provides an embodiment in which interacting on the time axis with the starting time as the reference starting point means that the interaction time points of the first and second special effects on that axis are the same. Processing the current image of the reference video according to the second special effect then includes processing, with the second special effect, the image of the reference video corresponding to the first time point.
  • the interaction time of the first special effect and the second special effect is the same.
  • For example, the first and second special effects may both be flame visual effects displayed simultaneously at different positions of the picture 5 seconds after the starting time; or the first special effect may be a flame visual effect and the second a flame sound effect, appearing together 5 seconds after the starting time to reinforce the impression of burning.
  • The present invention further provides an embodiment in which the interaction time points of the first and second special effects on the time axis, with the starting time as the reference starting point, are in a sequential relationship. Processing the current image of the reference video according to the second special effect then includes processing, with the second special effect, the image of the reference video corresponding to the second time point.
  • the interaction time points of the first special effect and the second special effect are staggered.
  • For example, when the first special effect is rain and the second special effect is an umbrella, the umbrella may be opened at a second time point derived from the first time point at which the rain effect appears in the reference video.
  • the method may further include:
  • The video containing the second special effect may or may not also contain the first special effect.
  • The video containing the second effect and the reference video containing the first effect may be synthesized into one video, forming two contrasting video pictures within it.
  • For example, the reference video may show food being offered to the user; the first special effect is an exaggerated expression of the user delighting in the delicious food, and the second special effect is an exaggerated expression of the user coldly rejecting it. The first video (the reference video, in which the user sees the delicious food) and the second video (the video containing the second special effect, in which the food is rejected) are combined into one video to achieve a contrast effect.
  • After the first video and the second video are combined into one video, they can be placed side by side or one above the other, or one video can play first and the other afterwards; the playback of the two videos may also be staggered by a preset time and/or offset by a preset distance. Users can therefore achieve more diverse combinations, giving them a richer mode of entertainment.
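  • The side-by-side and stacked layouts can be sketched with frames modelled as 2-D pixel lists; the function names are assumptions for illustration.

```python
# Sketch of combining the reference video's frame and the second-effect
# video's frame into one picture, side by side or stacked.

def compose_side_by_side(left, right):
    """Left/right layout: join two equal-height frames row by row."""
    if len(left) != len(right):
        raise ValueError("frames must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def compose_stacked(top, bottom):
    """Up/down layout: join two equal-width frames."""
    if top and bottom and len(top[0]) != len(bottom[0]):
        raise ValueError("frames must have the same width")
    return top + bottom
```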
  • The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the foregoing methods for processing an interactive special-effects video are implemented.
  • The present invention also provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of any of the foregoing methods for processing an interactive special-effects video are implemented.
  • FIG. 3 is a partial structural block diagram of a terminal device according to the present invention.
  • The terminal device may be any device capable of playing video, such as a mobile phone, tablet computer, notebook computer or desktop computer.
  • the following describes the working mode of the terminal device of the present invention by taking a mobile phone as an example.
  • the mobile phone includes components such as a processor, a memory, an input unit, a display unit, and the like.
  • the memory can be used to store computer programs and various functional modules, and the processor executes various functional applications and data processing of the mobile phone by running a computer program stored in the memory.
  • the memory may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the applications required for at least one function (such as playing video), and the data storage area may store data created through use of the mobile phone (such as video data).
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the input unit can be used to receive search keywords entered by the user and to generate signal inputs related to user settings and function controls of the handset.
  • the input unit may include a touch panel and other input devices.
  • the touch panel can collect the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel) and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, a joystick, and the like.
  • the display unit can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display unit may take the form of a liquid crystal display, an organic light emitting diode or the like.
  • the processor is the control center of the mobile phone; it connects the various parts of the entire device through various interfaces and lines, and performs various functions and processes data by running or executing the software programs and/or modules stored in the memory and invoking the data stored in the memory.
  • the processor included in the terminal device further has the following functions: acquiring a reference video containing a first special effect; acquiring a second special effect that interacts with the first special effect; and processing the current image of the reference video according to the second special effect to obtain a video containing the second special effect.
  • the modules in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module.
  • the integrated modules may be implemented in the form of hardware or in the form of software functional modules.
  • if the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Studio Devices (AREA)
  • Studio Circuits (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A method for processing interactive special-effect video, a medium, and a terminal device. The method for processing interactive special-effect video comprises: acquiring a reference video containing a first special effect; acquiring a second special effect that interacts with the first special effect; and processing a current image of the reference video according to the second special effect to obtain a video containing the second special effect. The method simplifies the steps by which a user creates multiple special effects, and the interaction between the second special effect and the first special effect makes the process more entertaining for the user.

Description

Method, medium, and terminal device for processing interactive special-effect video
Technical Field
The present invention relates to information processing technology, and in particular to a method, medium, and terminal device for processing interactive special-effect video.
Background Art
Existing video special effects are generally obtained by a user first shooting a video and then adding and compositing the effects in post-processing. For a video with multiple special effects, each effect is usually produced one by one. This way of compositing effects is very time-consuming, and when multiple effects have certain interactive relationships with one another, it places even higher demands on the professionalism and precision of the user's operations. As a result, ordinary users find it difficult to create videos with multiple special effects in everyday video entertainment, especially special-effect videos with interactive relationships, which raises the production barrier for users and limits the forms of video entertainment available to ordinary users.
Summary of the Invention
The object of the present invention is to solve at least one of the above technical defects, and in particular to reduce the difficulty for users of creating interactive video special effects.
The present invention provides a method for processing interactive special-effect video, comprising: acquiring a reference video containing a first special effect; acquiring a second special effect that interacts with the first special effect; and processing a current image of the reference video according to the second special effect to obtain a video containing the second special effect.
Preferably, the reference video carries first content information of the first special effect; and acquiring the second special effect that interacts with the first special effect comprises: acquiring the first special effect from the first content information; and acquiring, from an interactive-effect correspondence table, the second special effect that interacts with the first special effect.
Preferably, the reference video carries second content information of the second special effect that interacts with the first special effect; and acquiring the second special effect that interacts with the first special effect comprises: acquiring, from the second content information, the second special effect that interacts with the first special effect.
Preferably, acquiring the second special effect that interacts with the first special effect comprises: recognizing a feature of the first special effect in the reference video; and acquiring, from an interactive-effect correspondence table and according to the feature, the second special effect that interacts with the first special effect.
Preferably, acquiring, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect comprises: acquiring, from an interactive-effect correspondence table on the local terminal or on an effect server, an effect group that interacts with the first special effect, wherein the effect group comprises two or more second special effects, each second special effect having a color attribute and an effect score from user feedback; determining whether a second-effect selection instruction input by the user has been received; if the selection instruction has been received, acquiring, from the effect group, the second special effect corresponding to the instruction; if the selection instruction has not been received, determining whether effect-color adaptation is set, and if so, computing the average color value of the current frame picture of the reference video and acquiring the second special effect whose color attribute corresponds to the average color value, and otherwise acquiring the second special effect with the highest effect score.
Preferably, the first special effect and the second special effect are effects whose content is identical, opposite, or similar.
Preferably, the first special effect and the second special effect are effects that interact on a timeline whose reference origin is the playback start time.
Preferably, the effects that interact on the timeline whose reference origin is the playback start time include: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are the same; and processing the current image of the reference video according to the second special effect comprises: acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video; and processing, with the second special effect, the image of the reference video corresponding to the first time point.
Preferably, the effects that interact on the timeline whose reference origin is the playback start time include: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are in a sequential relationship; and processing the current image of the reference video according to the second special effect comprises: acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video; obtaining, according to the sequential relationship, a second time point of the second special effect in the reference video; and processing, with the second special effect, the image of the reference video corresponding to the second time point.
Preferably, after the video containing the second special effect is obtained, the method further comprises: compositing the video containing the second special effect and the reference video containing the first special effect into a single video.
The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the steps of any of the foregoing methods for processing interactive special-effect video.
The present invention further provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the steps of any of the foregoing methods for processing interactive special-effect video.
The beneficial effects of the present invention are as follows:
1. The present invention can acquire the second special effect according to the first special effect of the reference video, so as to automatically generate a video containing the second special effect. This simplifies the steps by which a user creates multiple special effects and reduces the difficulty of creating multiple video effects; moreover, the second special effect has various interactive effects with the first special effect, which makes creating video effects more entertaining for the user.
2. The present invention can carry the first content information and/or the second content information in the reference video so that the first special effect and/or the second special effect can be edited individually, thereby avoiding editing the entire reference video and reducing the occupation of the terminal's memory space; replacing the first special effect and/or the second special effect also becomes simpler.
3. The present invention can determine the effect group corresponding to the first special effect through the interactive-effect correspondence table, and determine the second special effect according to the user's selection, system settings, or effect scores, which increases the ways the first special effect and the second special effect can be combined and enriches the interactions among multiple effects; the interaction timing between the first special effect and the second special effect can further enhance the entertainment value of the video effects.
Additional aspects and advantages of the present invention will be given in part in the following description; they will become apparent from the description below, or will be learned through practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a first embodiment of the processing method of the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the processing method of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the terminal device of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended only to explain the present invention and should not be construed as limiting it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the", and "said" used herein may also include plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The term "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as herein, will not be interpreted in an idealized or overly formal sense.
The present invention provides a method for processing interactive special-effect video. As shown in FIG. 1, a first embodiment comprises the following steps:
Step S10: acquiring a reference video containing a first special effect;
Step S20: acquiring a second special effect that interacts with the first special effect;
Step S30: processing a current image of the reference video according to the second special effect to obtain a video containing the second special effect.
Each step is described in detail as follows:
Step S10: acquiring a reference video containing a first special effect.
The reference video may be a video recorded on the spot, or a video already stored on the local terminal. The first special effect included in the reference video may be a visual effect or a sound effect. When the first special effect is a visual effect, it may be displayed in the picture of the reference video, or not displayed in the picture but stored only in a file associated with the reference video. Meanwhile, the first special effect may belong to the same video stream file as the reference video, or be a file separate from the reference video that is composited into the picture or sound of the same video only at playback time, according to correspondence or matching information. The first special effect may be an effect associated with a body movement in the video picture. For example, when a finger-snap movement appears in the picture, a first special effect of switching the user's scene is matched to that movement; the scene-switching effect may be displayed in the picture of the reference video, or hidden according to the user's needs and displayed only when another movement that triggers effect display occurs.
Step S20: acquiring a second special effect that interacts with the first special effect.
The second special effect may be a visual effect or a sound effect. A second special effect that interacts with the first special effect means that the first special effect and the second special effect have some special associated behavior. For example: when the first special effect shows sparks in the video picture, a second special effect of snowflakes may appear in the picture at the same time or after a delay; when the first special effect shows a portal on the left side of a person in the video picture, a second special effect of another portal may appear on the person's right side at the same time or after a delay; when the first special effect is a visual effect of an explosion in the video picture, a second special effect of an explosion sound may be output in the video at the same time or after a delay. The second special effect may be obtained from the local terminal or from an effect server.
Step S30: processing the current image of the reference video according to the second special effect to obtain a video containing the second special effect.
Processing the current image of the reference video may take many forms, for example: outputting the first special effect and the second special effect simultaneously in the current image or sound of the reference video; displaying the first special effect and the second special effect in the picture of the reference video one after another at certain times; displaying the first special effect and the second special effect in the reference video along a certain interactive motion trajectory; or presetting certain trigger logic for the first special effect and the second special effect for subsequent interactive output. When at least one of the first special effect and the second special effect is a visual effect, a video including the visual effect can be formed; when both are sound effects, the current image of the reference video can be processed according to the sound of the second special effect, so as to form a video with image effects.
In this embodiment, the second special effect can be acquired according to the first special effect of the reference video, so that a video containing the second special effect is generated automatically. This simplifies the workflow by which a user creates multiple special effects and reduces the difficulty of creating multiple video effects; moreover, the interaction between the second special effect and the first special effect makes creating video effects more entertaining for the user.
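The three steps of this first embodiment can be sketched in code. The sketch below is only an illustration, not the patent's implementation: the frame and effect representations (dicts and strings) and the correspondence table are invented for the example.

```python
# Minimal sketch of steps S10-S30: a "video" is a list of frames, a frame
# is a dict of named fields, and an effect is identified by a string.

def get_reference_video():
    """S10: acquire a reference video carrying a first effect (stubbed)."""
    return [{"image": f"frame-{i}", "effects": ["sparks"]} for i in range(3)]

# Hypothetical interactive-effect correspondence table.
INTERACTION_TABLE = {"sparks": "snowflakes"}

def get_interactive_effect(first_effect):
    """S20: look up the second effect that interacts with the first."""
    return INTERACTION_TABLE[first_effect]

def apply_effect(frame, effect):
    """S30: process the current image so it also carries the second effect."""
    return {**frame, "effects": frame["effects"] + [effect]}

reference = get_reference_video()
second = get_interactive_effect("sparks")
result = [apply_effect(frame, second) for frame in reference]
```

A real implementation would operate on decoded frames and renderable effect assets; the point here is only the three-stage flow.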
Based on the first embodiment, the present invention further provides another embodiment: the reference video carries first content information of the first special effect;
acquiring the second special effect that interacts with the first special effect comprises:
acquiring the first special effect from the first content information;
acquiring, from an interactive-effect correspondence table, the second special effect that interacts with the first special effect.
In this embodiment, the first content information may directly include the display effect or sound effect of the first special effect, or may include a local or network address from which the first special effect can be obtained; in addition, the first content information may also include the author of the video with the first special effect, the effect duration, the decoding method of the reference video, and so on. Acquiring the first special effect from the first content information may mean reading it directly from the first content information, or fetching it from the local or network address of the first special effect. Before the second special effect is acquired, the interactive-effect correspondence table may be preset so that the second special effect can be determined from the first special effect. In this embodiment, the first special effect can be included in the first content information instead of being composited directly into the reference video, which makes the first special effect convenient to modify. For example, when the first special effect is a visual effect, it need not be displayed directly in the picture of the reference video but is read from the first content information, so the first special effect can be modified or edited in the first content information alone. This avoids having to edit the reference video in order to modify the first special effect it contains, saves the memory space occupied while editing effects, and shortens the time needed to modify or replace the first special effect.
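Resolving an effect from content information, either embedded directly or via a local or network address, can be sketched as follows. All field names (`effect`, `address`, `author`, `duration`) and the fetch stub are invented for the illustration; the URL is a placeholder:

```python
def resolve_first_effect(first_content_info):
    """Read the first effect from the content info, or fetch it from the
    local/network address the info points to (field names are invented)."""
    if "effect" in first_content_info:      # effect embedded directly
        return first_content_info["effect"]
    address = first_content_info["address"]  # local path or URL
    return fetch_effect(address)

def fetch_effect(address):
    """Stand-in for loading an effect asset from a local or network address."""
    return f"effect-loaded-from:{address}"

# Content info may carry the effect inline, or only point at it.
info_inline = {"effect": "sparks", "author": "alice", "duration": 1.5}
info_remote = {"address": "https://example.com/effects/42"}
```

Keeping the effect in the content info (or behind an address) is what lets it be edited or swapped without touching the reference video itself.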
Based on the previous embodiment, the present invention provides yet another embodiment: the reference video carries second content information of the second special effect that interacts with the first special effect;
acquiring the second special effect that interacts with the first special effect comprises:
acquiring, from the second content information, the second special effect that interacts with the first special effect.
In this embodiment, the second content information carries not only information related to the first special effect but also information related to the second special effect. The second content information may directly include the display effect or sound effect of the second special effect, may include a local or network address from which the second special effect can be obtained, and may also include the interactive-effect correspondence table recording the correspondence between the first special effect and the second special effect; in addition, the second content information may include the author of the video with the second special effect, the effect duration, and so on. Acquiring the second special effect from the second content information may mean reading it directly from the second content information, or fetching it from the local or network address of the second special effect. By carrying the second content information of the second special effect in the reference video, this embodiment makes it possible to modify the second special effect by modifying or replacing the second content information, without having to modify the second special effect within the reference video, which simplifies the operation of modifying the second special effect.
Based on the first embodiment, the present invention provides yet another embodiment: acquiring the second special effect that interacts with the first special effect comprises:
recognizing a feature of the first special effect in the reference video;
acquiring, from an interactive-effect correspondence table and according to the feature, the second special effect that interacts with the first special effect.
When the first special effect and the reference video belong to the same video stream file, this embodiment can recognize the feature of the first special effect from the reference video, so as to acquire the second special effect according to the interactive-effect correspondence table. The correspondence table may be pre-stored on an effect server or on the local terminal to establish the correspondence between the first special effect and the second special effect; when the second special effect corresponding to the first special effect needs to be modified, it can be modified directly in the correspondence table rather than in the reference video, which simplifies the modification steps.
When the first special effect and the reference video are separate files, the reference video may also carry the first content information, or carry information indicating the correspondence between the first special effect and the reference video, so that the feature of the first special effect can be recognized. The feature of the first special effect may be included directly in the reference video, or located at a corresponding local or network address. After the feature of the first special effect is recognized, the second special effect that interacts with the first special effect is acquired from the interactive-effect correspondence table according to the feature. The second special effect may likewise be included directly in the reference video, or located at a corresponding local or network address. When the second special effect is included directly in the reference video, it may be withheld from display or output for the time being, and the effect picture is displayed or the effect audio output only after an instruction triggering display or output is received. Including the second special effect in the reference video saves the time needed to fetch the second special effect and avoids abnormal delays or failures in fetching it.
Based on the previous embodiment, the present invention further provides yet another embodiment:
acquiring the reference video containing the first special effect and acquiring the second special effect that interacts with the first special effect comprise:
determining whether the reference video carries second content information of the second special effect that interacts with the first special effect;
if it carries the second content information of the second special effect, acquiring, from the second content information, the second special effect that interacts with the first special effect;
if it does not carry the second content information of the second special effect, determining whether the reference video carries first content information of the first special effect;
if it carries the first content information of the first special effect, acquiring the first special effect from the first content information, and acquiring, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect;
if it does not carry the first content information of the first special effect, recognizing the feature of the first special effect in the reference video, and acquiring, from the interactive-effect correspondence table and according to the feature, the second special effect that interacts with the first special effect.
In this embodiment, the second special effect can preferentially be acquired directly from the second content information, which shortens the time needed to acquire it. If there is no second content information of the second special effect, the first special effect is obtained from the first content information of the first special effect, and the second special effect corresponding to the first special effect is then obtained from the interactive-effect correspondence table. If there is no first content information of the first special effect either, the feature of the first special effect is recognized from the reference video, and the second special effect is acquired based on that feature. This embodiment offers multiple ways of acquiring the second special effect and ranks the fastest one first so as to speed up acquisition, while keeping the other ways as fallbacks to ensure that the second special effect can be obtained.
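The prioritized fallback just described can be sketched as a three-branch lookup. The dictionary field names (`second_content_info`, `first_content_info`, `frames`) and the feature-recognition stub are assumptions made for the sketch, not names from the patent:

```python
def acquire_second_effect(reference_video, table):
    """Try the fastest source first, then fall back, in the order given
    in the text. `reference_video` is assumed to be a plain dict here."""
    # 1. Second content info carried in the video: use it directly.
    info2 = reference_video.get("second_content_info")
    if info2 is not None:
        return info2["effect"]
    # 2. First content info: read the first effect, then consult the table.
    info1 = reference_video.get("first_content_info")
    if info1 is not None:
        return table[info1["effect"]]
    # 3. Last resort: recognize the first effect's feature from the frames.
    feature = recognize_feature(reference_video["frames"])
    return table[feature]

def recognize_feature(frames):
    """Stand-in for real feature recognition on the video frames."""
    return frames[0]["dominant_effect"]

TABLE = {"rain": "umbrella"}  # hypothetical correspondence table
```

Each branch returns as soon as it succeeds, so the cheapest available source always wins.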
Based on the previous embodiment, the present invention provides yet another embodiment: acquiring, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect comprises:
acquiring, from an interactive-effect correspondence table on the local terminal or on an effect server, an effect group that interacts with the first special effect, wherein the effect group comprises two or more second special effects, each second special effect having a color attribute and an effect score from user feedback;
determining whether a second-effect selection instruction input by the user has been received;
if the second-effect selection instruction has been received, acquiring, from the effect group, the second special effect corresponding to the instruction;
if the second-effect selection instruction has not been received, determining whether effect-color adaptation is set; if so, computing the average color value of the current frame picture of the reference video and acquiring the second special effect whose color attribute corresponds to the average color value; otherwise, acquiring the second special effect with the highest effect score.
This embodiment provides a concrete implementation based on color effects. For example, the first special effect is a sound, and the second special effects that interact with it are fruit visual effects in different colors. When the first special effect triggers the second special effect, since the effect group contains fruit visual effects in different colors, the user may choose which effect to apply; if the user does not choose, the effect may be selected automatically according to the average color value of the current frame picture of the reference video, or according to the effect scores. The color-effect interaction in this embodiment lets the user determine the second special effect, which increases user participation and the creative forms of the second special effect, and also provides a function for adding the second special effect automatically, which simplifies user operations and improves the user experience.
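The branching in this embodiment (explicit user choice, then color adaptation, then highest score) can be sketched as follows. The effect-group shape, the per-pixel RGB averaging, and nearest-color matching by squared distance are all assumptions of the sketch:

```python
def select_second_effect(group, user_choice=None, color_adaptive=False, frame=None):
    """Pick a second effect from the group per the branching in the text.

    `group` maps effect name -> {"color": (r, g, b), "score": float};
    `frame` is a list of (r, g, b) pixels. Both shapes are invented here.
    """
    if user_choice is not None:
        return user_choice                     # explicit user selection wins
    if color_adaptive and frame:
        n = len(frame)
        avg = tuple(sum(px[c] for px in frame) / n for c in range(3))
        # choose the effect whose color attribute is closest to the average
        return min(group, key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(group[name]["color"], avg)))
    return max(group, key=lambda name: group[name]["score"])  # best-rated

# Hypothetical effect group of differently colored fruit effects.
FRUITS = {
    "red_apple": {"color": (200, 30, 30), "score": 4.2},
    "green_pear": {"color": (40, 180, 60), "score": 4.8},
}
```

On a reddish frame the color-adaptive branch picks the red fruit; with no user input and no color adaptation, the best-rated effect is used.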
The second special effects in the effect group may be provided to users by a third-party server, and users may score each second special effect in the group, which increases user interactivity.
In the present invention, the first special effect and the second special effect may be effects whose content is identical, opposite, or similar. Identical content means the effects have the same picture, sound, or manner of movement; for example, when the first special effect adds a blush to person A, the second special effect adds the same blush to person B. Opposite content may include a mirror image of an action in the picture, or an opposite change in the picture; for example, when the first special effect enlarges person A, the second special effect shrinks person B. Similar content may include adding a comparable effect; for example, when the first special effect adds a dizziness effect above person A's head, the second special effect adds a different dizziness effect above person B's head. Through different combinations of the first special effect and the second special effect, the present invention can form exaggeration, comparison, contrast, and other varied effects, offering users more and richer forms of entertainment.
The present invention further provides a second embodiment: the first special effect and the second special effect may also be effects that interact on a timeline whose reference origin is the playback start time. In this embodiment, step S20 of the first embodiment becomes:
Step S21: acquiring a second special effect that interacts with the first special effect, where the first special effect and the second special effect are effects that interact on a timeline whose reference origin is the playback start time.
The interaction time or trigger time of the first special effect and the second special effect may take the timeline as its reference: the effects' displayed pictures or played sounds may be triggered simultaneously, or one of the effects may be delayed, or the effects may be staggered along the timeline.
Based on the foregoing second embodiment, the present invention further provides the following embodiment: the effects that interact on the timeline whose reference origin is the playback start time include: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are the same;
processing the current image of the reference video according to the second special effect comprises:
acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video;
processing, with the second special effect, the image of the reference video corresponding to the first time point.
In this embodiment, the interaction time points of the first special effect and the second special effect are the same. For example, if the first special effect and the second special effect are both flame effects, they may be displayed simultaneously at different positions in the video picture 5 seconds after the playback start time; or the first special effect may be a flame visual effect and the second special effect a flame sound effect, appearing together 5 seconds after the playback start time to intensify the burning effect.
Based on the foregoing second embodiment, the present invention further provides the following embodiment: the effects that interact on the timeline whose reference origin is the playback start time include: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are in a sequential relationship;
processing the current image of the reference video according to the second special effect comprises:
acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video;
obtaining, according to the sequential relationship, a second time point of the second special effect in the reference video;
processing, with the second special effect, the image of the reference video corresponding to the second time point.
In this embodiment, the interaction time points of the first special effect and the second special effect are staggered. For example, if the first special effect is rain and the second special effect is opening an umbrella, the appearance time of the second special effect can be delayed according to the acquired first time point of the first special effect in the reference video, so as to express the logical relationship between the two effects, or to achieve more varied entertainment through the time-difference (sequential) relationship between the two effects.
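The sequential relationship can be sketched as a simple computation on the timeline. Representing the relationship as an offset in seconds, a video as a list of frame strings, and a fixed frame rate are assumptions of the sketch:

```python
def schedule_second_effect(first_time, relation):
    """Derive the second effect's time point from the first time point plus
    the sequential relation, modeled here as an offset in seconds."""
    return first_time + relation

def process_at(video, t, effect, fps=25):
    """Apply `effect` to the frame of `video` at time point `t` (seconds)."""
    index = int(t * fps)
    video[index] = effect(video[index])
    return video

video = [f"frame-{i}" for i in range(100)]
first_time = 2.0                   # e.g. "rain" appears 2 s after playback start
second_time = schedule_second_effect(first_time, relation=1.0)  # umbrella 1 s later
video = process_at(video, second_time, lambda f: f + "+umbrella")
```

A relation of 0 would reduce this to the same-time-point embodiment above it.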
Based on all of the above embodiments, after the video containing the second special effect is obtained, the method may further include:
compositing the video containing the second special effect and the reference video containing the first special effect into a single video.
The video containing the second special effect may or may not include the first special effect. To enhance the entertainment effect, the video containing the second special effect and the reference video containing the first special effect may be composited again into one video, so as to form two contrasting video pictures within that video. For example: the reference video shows a user encountering a delicacy; the first special effect is the user's exaggerated expression of devouring the food with relish, and the second special effect is the user's exaggerated expression of coldly refusing it. The first video (the reference video), in which the user devours the food, and the second video (containing the second special effect), in which the user coldly refuses it, can be composited into one video to achieve a contrasting effect.
There are many ways of compositing. For example, the positional relationship after the first video and the second video are combined into one video may be set side by side or one above the other, or set so that one video plays first and the other continues after it finishes; the playback times of the first video and the second video may also be staggered by a preset interval, and/or their positions offset by a preset distance, so that the user can achieve more diverse combined effects through various combinations and enjoy richer forms of entertainment.
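The compositing layouts described here (paired side-by-side playback, sequential playback, staggered start) can be sketched over toy frame lists. The frame model, the `blank` filler, and measuring the stagger in frames are assumptions of the sketch:

```python
def compose(frames_a, frames_b, layout="side_by_side", stagger=0):
    """Combine two frame lists according to the layouts described above.

    Frames are plain strings here; `stagger` delays video B's start by
    that many frames. Side-by-side output pairs frames up per time step.
    """
    if layout == "sequential":          # play A fully, then B
        return frames_a + frames_b
    # pad B's start to stagger its playback, then pair frames up
    padded_b = ["blank"] * stagger + frames_b
    length = max(len(frames_a), len(padded_b))
    pad = lambda fs: fs + ["blank"] * (length - len(fs))
    return [(a, b) for a, b in zip(pad(frames_a), pad(padded_b))]

a = ["a0", "a1"]
b = ["b0", "b1"]
```

A real compositor would also place the paired frames spatially (left/right or top/bottom); the pairing step is the part this sketch shows.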
The present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the steps of any of the foregoing methods for processing interactive special-effect video.
The present invention further provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the steps of any of the foregoing methods for processing interactive special-effect video.
FIG. 3 is a partial structural block diagram of the terminal device of the present invention; for ease of description, only the parts relevant to the embodiments of the present invention are shown. The terminal device may be any terminal device capable of playing video programs, such as a mobile phone, a tablet computer, a notebook computer, or a desktop computer. The working mode of the terminal device of the present invention is described below using a mobile phone as an example.
Referring to FIG. 3, the mobile phone includes components such as a processor, a memory, an input unit, and a display unit. Those skilled in the art will understand that the phone structure shown in FIG. 3 does not constitute a limitation on all phones; a phone may include more or fewer components than shown, or combine certain components. The memory may be used to store computer programs and functional modules, and the processor executes the phone's various functional applications and data processing by running the computer programs stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the applications required for at least one function (such as playing video), and the data storage area may store data created through use of the phone (such as video data). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit may be used to receive search keywords input by the user and to generate signal inputs related to the user settings and function control of the phone. Specifically, the input unit may include a touch panel and other input devices. The touch panel can collect the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel) and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, a joystick, and the like. The display unit may be used to display information input by the user, information provided to the user, and the various menus of the phone; it may take the form of a liquid crystal display, organic light-emitting diodes, or the like. The processor is the control center of the phone: it connects the various parts of the entire device through various interfaces and lines, and performs various functions and processes data by running or executing the software programs and/or modules stored in the memory and invoking the data stored in the memory.
In the embodiments of the present invention, the processor included in the terminal device further has the following functions:
acquiring a reference video containing a first special effect;
acquiring a second special effect that interacts with the first special effect;
processing a current image of the reference video according to the second special effect to obtain a video containing the second special effect.
In addition, the modules in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.

Claims (12)

  1. A method for processing interactive special-effect video, characterized by comprising:
    acquiring a reference video containing a first special effect;
    acquiring a second special effect that interacts with the first special effect;
    processing a current image of the reference video according to the second special effect to obtain a video containing the second special effect.
  2. The processing method according to claim 1, characterized in that the reference video carries first content information of the first special effect;
    the acquiring a second special effect that interacts with the first special effect comprises:
    acquiring the first special effect from the first content information;
    acquiring, from an interactive-effect correspondence table, the second special effect that interacts with the first special effect.
  3. The processing method according to claim 2, characterized in that the reference video carries second content information of the second special effect that interacts with the first special effect;
    the acquiring the second special effect that interacts with the first special effect comprises:
    acquiring, from the second content information, the second special effect that interacts with the first special effect.
  4. The processing method according to claim 1, characterized in that the acquiring a second special effect that interacts with the first special effect comprises:
    recognizing a feature of the first special effect in the reference video;
    acquiring, from an interactive-effect correspondence table and according to the feature, the second special effect that interacts with the first special effect.
  5. The processing method according to claim 2 or 4, characterized in that the acquiring, from the interactive-effect correspondence table, the second special effect that interacts with the first special effect comprises:
    acquiring, from an interactive-effect correspondence table on the local terminal or on an effect server, an effect group that interacts with the first special effect, wherein the effect group comprises two or more second special effects, each second special effect having a color attribute and an effect score from user feedback;
    determining whether a second-effect selection instruction input by a user has been received;
    if the second-effect selection instruction has been received, acquiring, from the effect group, the second special effect corresponding to the second-effect selection instruction;
    if the second-effect selection instruction has not been received, determining whether effect-color adaptation is set; if so, computing an average color value of a current frame picture of the reference video and acquiring the second special effect whose color attribute corresponds to the average color value; otherwise, acquiring the second special effect with the highest effect score.
  6. The processing method according to claim 1, characterized in that the first special effect and the second special effect are effects whose content is identical, opposite, or similar.
  7. The processing method according to claim 1 or 5, characterized in that the first special effect and the second special effect are effects that interact on a timeline whose reference origin is the playback start time.
  8. The processing method according to claim 7, characterized in that the effects that interact on the timeline whose reference origin is the playback start time comprise: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are the same;
    the processing a current image of the reference video according to the second special effect comprises:
    acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video;
    processing, with the second special effect, an image of the reference video corresponding to the first time point.
  9. The processing method according to claim 7 or 8, characterized in that the effects that interact on the timeline whose reference origin is the playback start time comprise: on the timeline whose reference origin is the playback start time, the interaction time points of the first special effect and the second special effect are in a sequential relationship;
    the processing a current image of the reference video according to the second special effect comprises:
    acquiring, on the timeline whose reference origin is the playback start time, a first time point of the first special effect in the reference video;
    obtaining, according to the sequential relationship, a second time point of the second special effect in the reference video;
    processing, with the second special effect, an image of the reference video corresponding to the second time point.
  10. The processing method according to claim 1, characterized in that after the obtaining a video containing the second special effect, the method further comprises:
    compositing the video containing the second special effect and the reference video containing the first special effect into a single video.
  11. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the steps of the method for processing interactive special-effect video according to any one of claims 1 to 10 are implemented.
  12. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the processor executes the computer program, the steps of the method for processing interactive special-effect video according to any one of claims 1 to 10 are implemented.
PCT/CN2018/123236 2018-01-30 2018-12-24 Method, medium, and terminal device for processing interactive special-effect video WO2019149000A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/965,454 US11533442B2 (en) 2018-01-30 2018-12-24 Method for processing video with special effects, storage medium, and terminal device thereof
EP18903360.8A EP3748954B1 (en) 2018-01-30 2018-12-24 Processing method for achieving interactive special effects for video, medium, and terminal apparatus
RU2020128552A RU2758910C1 (ru) 2018-01-30 2018-12-24 Способ обработки взаимоувязанных спецэффектов для видео, носитель данных и терминал

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810089957.6A 2018-01-30 2018-01-30 Method, medium, and terminal device for processing interactive special-effect video
CN201810089957.6 2018-01-30

Publications (1)

Publication Number Publication Date
WO2019149000A1 true WO2019149000A1 (zh) 2019-08-08

Family

ID=62669780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123236 WO2019149000A1 (zh) 2018-01-30 2018-12-24 Method, medium, and terminal device for processing interactive special-effect video

Country Status (5)

Country Link
US (1) US11533442B2 (zh)
EP (1) EP3748954B1 (zh)
CN (1) CN108234903B (zh)
RU (1) RU2758910C1 (zh)
WO (1) WO2019149000A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108234903B (zh) 2018-01-30 2020-05-19 广州市百果园信息技术有限公司 Method, medium, and terminal device for processing interactive special-effect video
CN109104586B (zh) * 2018-10-08 2021-05-07 北京小鱼在家科技有限公司 特效添加方法、装置、视频通话设备以及存储介质
CN109529329B (zh) * 2018-11-21 2022-04-12 北京像素软件科技股份有限公司 游戏特效处理方法及装置
CN109710255B (zh) * 2018-12-24 2022-07-12 网易(杭州)网络有限公司 特效处理方法、特效处理装置、电子设备及存储介质
CN112181572B (zh) * 2020-09-28 2024-06-07 北京达佳互联信息技术有限公司 互动特效展示方法、装置、终端及存储介质
CN112291590A (zh) * 2020-10-30 2021-01-29 北京字节跳动网络技术有限公司 视频处理方法及设备
CN112906553B (zh) * 2021-02-09 2022-05-17 北京字跳网络技术有限公司 图像处理方法、装置、设备及介质
CN114885201B (zh) * 2022-05-06 2024-04-02 林间 视频对比查看方法、装置、设备及存储介质
CN115941841A (zh) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 关联信息展示方法、装置、设备、存储介质和程序产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779028A (zh) * 2011-05-09 2012-11-14 腾讯科技(深圳)有限公司 一种客户端特效合成引擎的实现方法及装置
CN103389855A (zh) * 2013-07-11 2013-11-13 广东欧珀移动通信有限公司 一种移动终端交互的方法及装置
CN104703043A (zh) * 2015-03-26 2015-06-10 努比亚技术有限公司 一种添加视频特效的方法和装置
CN104780458A (zh) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 一种即时视频中的特效加载方法和电子设备
WO2016091172A1 (en) * 2014-12-12 2016-06-16 Huawei Technologies Co., Ltd. Systems and methods to achieve interactive special effects
CN108234903A (zh) * 2018-01-30 2018-06-29 广州市百果园信息技术有限公司 Method, medium, and terminal device for processing interactive special-effect video

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030052909A1 (en) * 2001-06-25 2003-03-20 Arcsoft, Inc. Real-time rendering of edited video stream
US7102643B2 (en) 2001-11-09 2006-09-05 Vibe Solutions Group, Inc. Method and apparatus for controlling the visual presentation of data
JP4066162B2 (ja) * 2002-09-27 2008-03-26 富士フイルム株式会社 画像編集装置、画像編集プログラム並びに画像編集方法
JP4820136B2 (ja) 2005-09-22 2011-11-24 パナソニック株式会社 映像音声記録装置及び映像音声記録方法
US20150339010A1 (en) * 2012-07-23 2015-11-26 Sudheer Kumar Pamuru System and method for producing videos with overlays
KR101580237B1 (ko) 2013-05-15 2015-12-28 씨제이포디플렉스 주식회사 4d 컨텐츠 제작 서비스 제공 방법 및 시스템, 이를 위한 컨텐츠 제작 장치
US20150113408A1 (en) * 2013-10-18 2015-04-23 Apple Inc. Automatic custom sound effects for graphical elements
US20160173960A1 (en) * 2014-01-31 2016-06-16 EyeGroove, Inc. Methods and systems for generating audiovisual media items
CN103905885B (zh) * 2014-03-25 2018-08-31 广州华多网络科技有限公司 视频直播方法及装置
WO2016030879A1 (en) 2014-08-26 2016-03-03 Mobli Technologies 2010 Ltd. Distribution of visual content editing function
CN104394331A (zh) * 2014-12-05 2015-03-04 厦门美图之家科技有限公司 一种画面视频中添加匹配音效的视频处理方法
CN104618797B (zh) * 2015-02-06 2018-02-13 腾讯科技(北京)有限公司 信息处理方法、装置及客户端
CN104954848A (zh) * 2015-05-12 2015-09-30 乐视致新电子科技(天津)有限公司 智能终端的显示图形用户界面的控制方法及装置
CN105491441B (zh) * 2015-11-26 2019-06-25 广州华多网络科技有限公司 一种特效管理控制方法及装置
CN105959725A (zh) * 2016-05-30 2016-09-21 徐文波 视频中媒体特效的加载方法和装置
CN106385591B (zh) * 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 视频处理方法及视频处理装置
CN107195310A (zh) 2017-03-05 2017-09-22 杭州趣维科技有限公司 一种声音驱动粒子特效的视频处理方法
CN107168606B (zh) * 2017-05-12 2019-05-17 武汉斗鱼网络科技有限公司 对话框控件显示方法、装置及用户终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779028A (zh) * 2011-05-09 2012-11-14 腾讯科技(深圳)有限公司 一种客户端特效合成引擎的实现方法及装置
CN103389855A (zh) * 2013-07-11 2013-11-13 广东欧珀移动通信有限公司 一种移动终端交互的方法及装置
WO2016091172A1 (en) * 2014-12-12 2016-06-16 Huawei Technologies Co., Ltd. Systems and methods to achieve interactive special effects
CN104703043A (zh) * 2015-03-26 2015-06-10 努比亚技术有限公司 一种添加视频特效的方法和装置
CN104780458A (zh) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 一种即时视频中的特效加载方法和电子设备
CN108234903A (zh) * 2018-01-30 2018-06-29 广州市百果园信息技术有限公司 Method, medium, and terminal device for processing interactive special-effect video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3748954A4 *

Also Published As

Publication number Publication date
EP3748954A4 (en) 2021-03-24
EP3748954A1 (en) 2020-12-09
RU2758910C1 (ru) 2021-11-03
US20210058564A1 (en) 2021-02-25
CN108234903A (zh) 2018-06-29
CN108234903B (zh) 2020-05-19
US11533442B2 (en) 2022-12-20
EP3748954B1 (en) 2024-07-03

Similar Documents

Publication Publication Date Title
WO2019149000A1 (zh) Method, medium, and terminal device for processing interactive special-effect video
US9743145B2 (en) Second screen dilemma function
US9576334B2 (en) Second screen recipes function
US9583147B2 (en) Second screen shopping function
WO2019085574A1 (zh) 视频播放控制方法、装置及终端
WO2020029523A1 (zh) 视频生成方法、装置、电子设备及存储介质
WO2020015334A1 (zh) 视频处理方法、装置、终端设备及存储介质
WO2016124095A1 (zh) 生成视频的方法、装置及终端
US9578370B2 (en) Second screen locations function
TW202007142A (zh) 視頻檔案的生成方法、裝置及儲存媒體
US10468004B2 (en) Information processing method, terminal device and computer storage medium
CN103686200A (zh) 智能电视视频资源搜索的方法和***
WO2022078167A1 (zh) 互动视频的创建方法、装置、设备及可读存储介质
WO2020108045A1 (zh) 视频播放方法、装置和多媒体数据播放方法
JP2011035837A (ja) 電子機器および画像データの表示方法
CN110072138A (zh) 视频播放方法、设备及计算机可读存储介质
CN112291615A (zh) 音频输出方法、音频输出装置
WO2017181595A1 (zh) 一种视频显示方法及装置
CN114302221A (zh) 一种虚拟现实设备及投屏媒资播放方法
US20170155943A1 (en) Method and electronic device for customizing and playing personalized programme
WO2013080407A1 (en) Server device, terminal device, and program
CN116017082A (zh) 一种信息处理方法和电子设备
WO2020248682A1 (zh) 一种显示设备及虚拟场景生成方法
CN104185043A (zh) 网络媒体播放***与方法
JP2008178449A (ja) パズルゲームシステム、テンキーキャラクター

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018903360

Country of ref document: EP

Effective date: 20200831