WO2022068478A1 - Video-based interaction and video processing method, apparatus, device, and storage medium


Info

Publication number
WO2022068478A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image frame
video
display
interactive control
Prior art date
Application number
PCT/CN2021/114669
Other languages
English (en)
French (fr)
Inventor
袁野
嵇鹏飞
王宇晨
买昭一丁
石嘉祎
聂婷
张景宇
Original Assignee
北京字跳网络技术有限公司
北京有竹居网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司, 北京有竹居网络技术有限公司
Priority to JP2023507659A, published as JP2024507605A
Priority to EP21874139.5A, published as EP4175309A4
Publication of WO2022068478A1
Priority to US18/090,323, published as US20230161471A1

Classifications

    • H04N 21/42204 - User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • G06F 3/0486 - Drag-and-drop
    • H04N 21/4722 - End-user interface for requesting additional data associated with the content
    • G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
    • G06F 9/453 - Help systems
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 - Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/47202 - End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/485 - End-user interface for client configuration
    • H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • G06F 3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • the present disclosure relates to the field of data processing, and in particular, to a video-based interaction, video processing method, apparatus, device, and storage medium.
  • the present disclosure provides a video-based interaction, video processing method, apparatus, device and storage medium, which can switch from video playback to the display of a target page while ensuring the user experience.
  • the present disclosure provides a video-based interaction method, the method comprising:
  • a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
  • in response to a triggering operation for the interactive control, the target page is displayed; wherein the content displayed on the target page is related to the content displayed on the target image frame.
  • the method further includes:
  • the method further includes:
  • before the target page is displayed in response to the triggering operation for the interactive control, the method further includes:
  • An operation prompt corresponding to the interactive control is displayed on the mask layer; the operation prompt is used to guide the user to trigger an operation for the interactive control.
  • the method further includes:
  • a sound prompt or a vibration prompt is played.
  • the triggering operation for the interactive control includes: a click operation for the interactive control, or an operation for dragging the interactive control to a target area.
  • the present disclosure provides a video processing method, the method comprising:
  • an interactive control is generated on the mask layer; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, the display image of the interactive control has a corresponding relationship with the target element, and the interactive control is used for triggering the display of a target page, where the content displayed on the target page is related to the content displayed on the target image frame;
  • the mask layer is inserted into a position between the target image frame and the first image frame in the video to be processed, to obtain a target video corresponding to the video to be processed; wherein the first image frame is the next frame adjacent to the target image frame in the video to be processed.
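The insertion step can be sketched in Python. This is an illustrative model only (the disclosure specifies no implementation): the video is treated as an ordered list of frame objects, and the mask frame is placed between the target image frame and its adjacent next frame.

```python
def insert_mask(frames, target_index, mask_frame):
    """Place mask_frame between the target image frame and its adjacent
    next frame (the 'first image frame'), yielding the target video."""
    if not 0 <= target_index < len(frames):
        raise IndexError("target image frame index out of range")
    return frames[:target_index + 1] + [mask_frame] + frames[target_index + 1:]

# "f1" is the target image frame; the mask lands between "f1" and "f2".
target_video = insert_mask(["f0", "f1", "f2", "f3"], 1, "mask")
# → ["f0", "f1", "mask", "f2", "f3"]
```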
  • generating an interactive control on a mask layer based on the target element on the target image frame includes:
  • an interactive control corresponding to the target element is generated on the mask layer.
  • after the interactive control is generated on the mask layer, the method further includes:
  • the target area corresponding to the interactive control is determined on the mask layer; wherein the operation of dragging the interactive control to the target area is used to trigger the display of the target page.
  • the method further includes:
  • a preset dwell duration corresponding to the mask layer is determined; wherein the preset dwell duration is used to control the display duration of the mask layer.
  • the present disclosure provides a video-based interaction device, the device comprising:
  • a mask display module configured to display a mask layer including an interactive control on the video playback interface corresponding to the target image frame when the target video is played to the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
  • a page display module configured to display a target page in response to a triggering operation for the interactive control; wherein, the target page has a corresponding relationship with the target image frame.
  • the present disclosure provides a video processing device, the device comprising:
  • a generating module configured to generate an interactive control on the mask layer based on the target element on the target image frame after receiving a selection operation for the target element on the target image frame in the video to be processed;
  • wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, the display image of the interactive control is determined based on the target element, and the interactive control is used to trigger the display of a target page, where the target page has a corresponding relationship with the target image frame;
  • an inserting module configured to insert the mask layer into a position between the target image frame and the first image frame in the target video to obtain a processed target video; wherein the first image frame is the next frame adjacent to the target image frame in the target video.
  • the present disclosure provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the terminal device is made to implement the above method.
  • the present disclosure provides a device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the above method when executing the computer program.
  • the embodiments of the present disclosure provide a video-based interaction method, in response to the target video being played to a target image frame, the target video is paused, and a mask layer including interactive controls is displayed on the video playback interface corresponding to the target image frame.
  • the display area of the interactive control on the mask layer has a corresponding relationship with the display area of the target element on the target image frame
  • the display image on the interactive control has a corresponding relationship with the target element.
  • an interactive control whose display area corresponds to the display area of the target element on the target image frame is displayed on the video playback interface, and its display image has a corresponding relationship with the target element, which realizes switching from the video playback interface to the display of the target page.
  • the display of the interactive control in the embodiments of the present disclosure is related to the native display content on the target image frame; therefore, a mask layer with an interactive control is displayed on the video playback interface of the target video, and the interactive control related to the native display content on the target image frame triggers the display of the target page related to the target image frame, which can carry the user immersively from the video playback interface into the browsing of the target page, ensuring the user experience.
  • FIG. 1 is a flowchart of a video-based interaction method provided by an embodiment of the present disclosure
  • FIG. 2 is an interface effect diagram of a target image frame of a target video according to an embodiment of the present disclosure
  • FIG. 3 is an interface effect diagram for displaying a mask layer including an interactive control according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of a video processing method according to an embodiment of the present disclosure.
  • FIG. 5 is an interface effect diagram of a masking layer provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a video-based interaction device according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a video-based interaction device according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a video processing device according to an embodiment of the present disclosure.
  • the video-based interaction method is one of the important factors in ensuring the user experience of short-video software, and has therefore attracted much attention.
  • the present disclosure provides a video-based interaction method.
  • a target video is played to a target image frame
  • the target video is paused, and a mask layer including interactive controls is displayed on the video playback interface corresponding to the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as that of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element.
  • an interactive control is displayed on the video playback interface at the same position as the display area of the target element on the target image frame, and its display image has a corresponding relationship with the target element, which realizes switching from the video playback interface to the display of the target page.
  • the display of the interactive control in the embodiments of the present disclosure is implemented based on the display area of the target element on the target image frame, that is, the display of the interactive control is related to the native display content on the target image frame. Therefore, a mask layer with an interactive control is displayed on the video playback interface of the target video, and the interactive control related to the native display content triggers the display of the target page related to the target image frame, which can carry the user immersively from the video playback interface into the browsing of the target page, ensuring the user experience.
  • an embodiment of the present disclosure provides a video-based interaction method.
  • a flowchart of a video-based interaction method provided by an embodiment of the present disclosure includes:
  • S101 In response to the target video being played to a target image frame, display a mask including interactive controls on a video playback interface corresponding to the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as that of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element.
  • the target video refers to a video or a video segment having a certain playback duration.
  • one or more image frames adjacent to and following the target image frame in the target video are image frames used to display the mask layer; that is, the target video contains at least one image frame that displays the mask layer.
  • for example, the image frames between the 5th-second and 7th-second playback time points in the target video are used to display the mask layer.
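As a sketch of this example, the mapping from a playback-time window to the frame indices used for mask display might look like the following. The helper is a hypothetical illustration, assuming a constant frame rate:

```python
def mask_frame_range(start_s, end_s, fps):
    """Frame indices whose playback time falls in [start_s, end_s),
    e.g. the 5th-to-7th-second window mentioned above."""
    return range(int(start_s * fps), int(end_s * fps))

frames = mask_frame_range(5, 7, 30)  # at 30 fps: frames 150..209
```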
  • a mask layer may be displayed on the playback interface corresponding to the target image frame, and playback of the target video may be paused.
  • An interactive control is displayed on the mask displayed on the video playing interface, and the display area of the interactive control on the mask has a corresponding relationship with the display area of the target element on the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame. Since the display of the mask can have a certain degree of transparency, for the user, the interactive controls displayed on the mask overlap the display area of the target element on the target image frame.
  • the displayed image on the interactive control has a corresponding relationship with the target element.
  • the displayed image on the interactive control and the displayed image of the target element may be the same or similar.
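One way to realize the "same position" requirement is to reuse the target element's coordinates when laying out the control, scaling from frame coordinates to the player viewport. The helper below is an assumption for illustration, not part of the disclosure:

```python
def control_rect_for(element_rect, frame_size, viewport_size):
    """Map an element rect (x, y, w, h) given in target-image-frame pixels
    to mask-layer viewport coordinates, so the control visually overlaps
    the target element when the semi-transparent mask layer is shown."""
    sx = viewport_size[0] / frame_size[0]
    sy = viewport_size[1] / frame_size[1]
    x, y, w, h = element_rect
    return (x * sx, y * sy, w * sx, h * sy)
```

When the viewport matches the frame resolution the scale factors are 1 and the control's rect equals the element's rect, giving the exact overlap the text describes.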
  • one interactive control may be displayed on the mask layer alone, or multiple interactive controls may be displayed simultaneously, and the number of interactive controls may be determined based on the display requirements of the target page.
  • different interactive controls are used to trigger the display of different target pages.
  • the target page is an advertisement page designed based on the display content on the target image frame, and different interactive controls can be used to trigger different advertisement pages designed based on the display content on the target image frame.
  • the display of the mask layer may have a certain degree of transparency. Therefore, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the image on the target image frame can also be vaguely displayed.
  • the target image frame in the target video is determined based on the setting requirements of the mask.
  • FIG. 2 an interface effect diagram of a target image frame of a target video provided by an embodiment of the present disclosure is shown.
  • a mask layer including interactive controls is displayed on the video playback interface corresponding to the target image frame.
  • FIG. 3 is an interface effect diagram for displaying a mask layer including an interactive control provided by an embodiment of the present disclosure. Specifically, the display area of the interactive control "composited" on the mask layer in FIG. 3 overlaps with the display area of the target element "composited" on the target image frame in FIG. 2.
  • the display image on the interactive control is the same as the display image corresponding to the target element on the target image frame; for example, the display image on the interactive control "composited" in FIG. 3 is the same as the display image of the target element "composited" in the target image frame in FIG. 2.
  • the interactive control displayed on the mask layer is set based on the native display elements in the target image frame, so that the content displayed on the mask layer does not deviate from the content of the target video and does not exceed the user's sensory acceptance, thereby ensuring the user experience.
  • not only the interactive control but also an operation prompt corresponding to the interactive control may be displayed on the mask layer.
  • for example, the operation prompt "click to synthesize, download and synthesize the ** character" is displayed to guide the user to trigger the interactive control, so that the user can anticipate the content of the target page displayed after the interactive control is triggered.
  • S102 In response to the triggering operation for the interactive control, display a target page, wherein the content displayed on the target page is related to the content displayed on the target image frame.
  • the video playback interface jumps to the target page to realize the display of the target page.
  • the target page has a corresponding relationship with the target image frame.
  • the target page is set based on the display content on the target image frame, for example, an advertisement page, a personal homepage, a public account homepage, and the like.
  • the trigger operation for the interactive control may include: a click operation for the interactive control, or an operation of dragging the interactive control to the target area.
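The two triggering operations can be modeled as a small dispatch. The event shape and names below are hypothetical, chosen only to make the distinction concrete:

```python
def _contains(rect, pos):
    """True when point pos lies inside rect (x, y, w, h)."""
    x, y, w, h = rect
    px, py = pos
    return x <= px <= x + w and y <= py <= y + h

def is_trigger(event, control_rect, target_rect):
    """True when the event is a triggering operation for the control:
    a click on the control, or a drag that ends inside the target area."""
    if event["type"] == "click":
        return _contains(control_rect, event["pos"])
    if event["type"] == "drag_end":
        return _contains(target_rect, event["pos"])
    return False
```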
  • the user can jump to the target page from the playback interface for playing the target video by triggering the interactive controls in the mask.
  • the user can also jump from the target page to the video playback interface for playing the target video by triggering a return operation.
  • the following step S103 may be performed.
  • the user can trigger the return operation on the target page to continue playing the subsequent video frames of the target video.
  • the continued playback of the target video may start from the target image frame, or from the next frame of the target image frame; that is, the image frame from which playback of the target video resumes may be determined based on the target image frame.
  • in the embodiments of the present disclosure, a sound prompt or a vibration prompt is played when the triggering operation for the interactive control is received, so as to prompt the user that the triggering operation has taken effect and ensure the user's interactive experience.
  • the display duration of the mask layer on the video playback interface may also be timed, and the target video continues to be played when the display duration of the mask layer on the video playback interface reaches the preset dwell duration.
  • the embodiment of the present disclosure switches from the display interface of the mask layer to the video playback interface, and continues to play the subsequent image frames of the target video.
  • the user can actively trigger the continued playing of the target video. Specifically, when a continue playing operation triggered on the mask layer on the video playing interface is received, the target video is continued to be played based on the target image frame.
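The pause/mask/resume flow described above (pause at the target frame, show the mask layer, then resume either when the preset dwell duration elapses or when the user triggers a continue operation) can be sketched as a minimal state machine. All names are illustrative, not from the disclosure:

```python
class MaskPlayer:
    """Toy model of the playback states: 'playing' -> 'mask' at the
    target frame -> back to 'playing' after the dwell or on user action."""

    def __init__(self, target_frame, dwell_s):
        self.state = "playing"
        self.target_frame = target_frame
        self.dwell_s = dwell_s          # preset dwell duration in seconds
        self.mask_elapsed = 0.0
        self.resume_frame = None

    def on_frame(self, index):
        if self.state == "playing" and index == self.target_frame:
            self.state = "mask"          # pause and display the mask layer

    def tick(self, dt):
        if self.state == "mask":
            self.mask_elapsed += dt
            if self.mask_elapsed >= self.dwell_s:
                self.resume()            # preset dwell duration reached

    def resume(self):
        # continue from the frame adjacent to the target image frame;
        # also callable directly for a user-triggered continue operation
        self.resume_frame = self.target_frame + 1
        self.state = "playing"
```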
  • when the target video is played to the target image frame, the target video is paused, and a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as that of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element.
  • the target page is displayed, wherein the content displayed on the target page is related to the content displayed on the target image frame.
  • the display jumps from the target page back to the video playback interface, and the target video continues to be played based on the target image frame on the video playback interface.
  • an interactive control whose display area corresponds to the display area of the target element on the target image frame is displayed on the video playback interface, and its display image has a corresponding relationship with the target element, which realizes switching from the video playback interface to the display of the target page.
  • the display of the interactive control in the embodiments of the present disclosure is related to the native display content on the target image frame; therefore, a mask layer with an interactive control is displayed on the video playback interface of the target video, and the interactive control related to the native display content on the target image frame triggers the display of the target page related to the target image frame, which can carry the user immersively from the video playback interface into the browsing of the target page, ensuring the user experience.
  • the present disclosure also provides a video processing method, which processes a video to be processed to obtain the target video used in the video-based interaction method provided by the above embodiments.
  • FIG. 4 a flowchart of a video processing method provided by an embodiment of the present disclosure, wherein the video processing method includes:
  • S401 After receiving a selection operation on a target element on a target image frame in the video to be processed, generate an interactive control on a mask layer based on the target element on the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image of the interactive control has a corresponding relationship with the target element,
  • the interactive control is used to trigger the display of a target page, and the content displayed on the target page is related to the content displayed on the target image frame.
  • the video to be processed may be any type of video or video clip with a certain playback duration, such as a game video clip, a travel video clip, and the like.
  • the target image frame in the video to be processed is determined first.
  • the target image frame is determined based on the setting requirements of the mask layer; for example, if the mask layer needs to be displayed on the 25th frame of the video to be processed, the 25th frame of the video to be processed can be determined as the target image frame.
  • the user may determine the image frame in the video to be processed as the target image frame based on the setting requirements.
  • the target element on the target image frame is determined.
  • the target element is determined based on the setting requirements of the interactive control displayed on the mask layer; for example, assuming that the interactive control on the mask layer needs to be set to an image style displayed on the target image frame, such as the "composite" image style in FIG. 2, the "composite" image style can be determined as the target element on the target image frame.
  • the target image frame may include at least one display element, and after the target image frame is determined, at least one display element on the target image frame is selected as the target element for generating interactive controls. Specifically, after receiving the selection operation of the target element on the target image frame, the interactive control is generated based on the target element.
  • an interactive control having a corresponding relationship with the display area of the target element on the target image frame is generated on the mask layer, so that when the mask layer is displayed on the video playback interface corresponding to the target image frame, the interactive control displayed on the mask layer overlaps or is close to the display area of the target element on the target image frame.
  • an interactive control with the same position as the display area of the target element on the target image frame is generated on the mask.
  • the display position information corresponding to the target element on the target image frame of the video to be processed may be determined, where the display position information corresponding to the target element is used to determine the display position of the target element on the target image frame. Specifically, the display position information may include the coordinates of each pixel on the boundary of the target element, or may include the coordinates of the center point of the target element and its display shape.
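The two representations of display position information just described can be sketched in code. This is only an illustrative sketch under the assumption that positions are integer pixel coordinates; the function names (`bounding_box`, `center_and_shape`) are inventions for illustration, not part of the disclosure:

```python
# Hedged sketch: two interchangeable descriptions of a target element's
# position on an image frame -- the boundary-pixel form and the
# center-point-plus-shape form mentioned above.

def bounding_box(boundary_pixels):
    """Collapse (x, y) boundary coordinates into (left, top, right, bottom)."""
    xs = [x for x, _ in boundary_pixels]
    ys = [y for _, y in boundary_pixels]
    return (min(xs), min(ys), max(xs), max(ys))

def center_and_shape(box):
    """Derive the center point and (width, height) shape of a box."""
    left, top, right, bottom = box
    center = ((left + right) / 2, (top + bottom) / 2)
    return center, (right - left, bottom - top)

boundary = [(10, 20), (110, 20), (110, 60), (10, 60)]
box = bounding_box(boundary)            # (10, 20, 110, 60)
center, shape = center_and_shape(box)   # (60.0, 40.0), (100, 40)
```

Either form suffices to place an interactive control at the same position on the mask layer.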
  • an interactive control corresponding to the target element is generated on the mask layer.
  • the display position of the interactive control on the mask layer is the same as the display position of the target element on the target image frame; that is, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the interactive control displayed on the mask layer overlaps the display area of its corresponding target element.
  • the interactive controls generated on the mask can be used to trigger the display of the target page.
  • a click operation or a move operation on the interactive control can be used to trigger the display of the target page.
  • the target page may be displayed in response to a click operation triggered by the user on the interactive control.
  • in response to a drag operation triggered by the user on the interactive control, for example moving the interactive control to a target area, the target page may be displayed.
  • the trigger operation for the interactive control can be set to be any type of interactive operation, which is not limited in the embodiments of the present disclosure.
  • the target page may be a page determined based on the content on the target image frame, for example, an advertisement page, a homepage of an official account, and the like.
  • when a click operation on the interactive control is set to trigger the display of the target page, the click range of that click operation also needs to be set. Specifically, the display area corresponding to the interactive control can be determined as the click range of the click operation on the interactive control, and when a click operation triggered by the user is detected within the click range, the display of the target page corresponding to the interactive control is triggered.
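A minimal sketch of the click-range check described above, assuming the control's display area can be approximated by an axis-aligned rectangle; `in_click_range` and `handle_click` are hypothetical names, not from the disclosure:

```python
def in_click_range(point, control_area):
    """True if the click point falls inside the control's display area.

    `control_area` is assumed to be an axis-aligned (left, top, right,
    bottom) rectangle matching the control's display area on the mask layer.
    """
    x, y = point
    left, top, right, bottom = control_area
    return left <= x <= right and top <= y <= bottom

def handle_click(point, control_area, show_target_page):
    # Only clicks inside the click range trigger the target page.
    if in_click_range(point, control_area):
        show_target_page()
        return True
    return False

opened = []
handle_click((50, 40), (10, 20, 110, 60), lambda: opened.append("target_page"))
handle_click((5, 5), (10, 20, 110, 60), lambda: opened.append("target_page"))
# opened == ["target_page"]  (only the in-range click triggered the page)
```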
  • in order to enrich the content displayed on the mask layer, other display elements on the target image frame may also be displayed on the mask layer; specifically, for the same display element, the position of its display area on the mask layer is the same as the position of its display area on the target image frame.
  • when an operation of dragging the interactive control to a target area is set to trigger the display of the target page, the display position information of the target area may also be set. Specifically, a display element having a corresponding relationship with the target element on the target image frame may be determined, and then the display position information of that display element on the target image frame may be determined. As shown in FIG. 5,
  • an interface effect diagram of a mask layer provided by an embodiment of the present disclosure, card 1 is the target element on the target image frame, and the display element having a corresponding relationship with card 1 is the card frame. The display position information of the card frame on the target image frame is determined, and then, based on the display position information of the card frame, the target area corresponding to card 1 on the mask layer, that is, the display area where the card frame is located, is determined, and it is set that when an operation of dragging the interactive control into the target area is detected, the display of the target page is triggered.
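The drag-to-target-area trigger condition described for card 1 might be sketched as follows, assuming rectangular areas and using the dragged control's center as the drop point; all names and coordinates here are hypothetical illustrations:

```python
def control_center(control_area):
    """Center point of a (left, top, right, bottom) control rectangle."""
    left, top, right, bottom = control_area
    return ((left + right) / 2, (top + bottom) / 2)

def drop_in_target(control_area, target_area):
    """Trigger condition: the dragged control's center ends up inside the
    target area (e.g. card 1 released over the card frame)."""
    cx, cy = control_center(control_area)
    left, top, right, bottom = target_area
    return left <= cx <= right and top <= cy <= bottom

card1_after_drag = (60, 320, 160, 400)    # hypothetical dropped position
card_frame = (40, 300, 200, 420)          # hypothetical target area
assert drop_in_target(card1_after_drag, card_frame)
```

When this condition evaluates true at the end of the drag, the display of the target page would be triggered.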
  • drag operation prompts for interactive controls can also be set on the mask layer, which can include drag gesture animation prompts, drag text prompts, etc.
  • the mask layer is provided with a drag gesture animation prompt for the interactive control "card 1", used to guide the user to drag card 1 to the target area. Specifically, the number of times the drag gesture animation is displayed on the display interface of the mask layer can be set.
  • the mask layer is also provided with a text prompt of "drag the card, unlock the game", which is used to guide the user to trigger the interactive control and to give the user an expectation of the content of the target page after the interactive control is triggered, so as to ensure the user's interactive experience.
  • to avoid degrading the user experience by staying too long on the interface displaying the mask layer, a dwell duration can also be set to control the display duration of the mask layer.
  • the preset dwell duration corresponding to the mask layer can be determined, and when the display time of the mask layer reaches the preset dwell duration, a switch is triggered from the display interface of the mask layer to the playback interface of the next image frame, so that the subsequent video continues to play for the user.
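The preset dwell duration logic can be sketched as a small decision function. This is a hedged illustration only; the state names and the millisecond granularity are assumptions, not taken from the disclosure:

```python
def next_action(mask_shown_ms, preset_dwell_ms, control_triggered):
    """Decide what the player should do while the mask layer is shown.

    Purely illustrative state logic: trigger wins over the dwell timer,
    and an expired dwell with no interaction resumes playback.
    """
    if control_triggered:
        return "show_target_page"
    if mask_shown_ms >= preset_dwell_ms:
        return "resume_playback"      # dwell expired with no interaction
    return "keep_mask"

assert next_action(1000, 3000, False) == "keep_mask"
assert next_action(3000, 3000, False) == "resume_playback"
assert next_action(500, 3000, True) == "show_target_page"
```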
  • the embodiment of the present disclosure may also set a corresponding sound prompt or vibration prompt to be played after the interactive control on the mask layer is triggered, so as to remind the user that the trigger operation has taken effect, ensuring the user's interactive experience.
  • the embodiments of the present disclosure can also set corresponding visual effects after triggering for the interactive controls on the mask layer, such as the effect of the mask layer disappearing, to prompt the user to trigger the effectiveness of the interactive control operation and improve the user's interactive experience.
  • S402 Insert the mask layer into a position between the target image frame and the first image frame in the video to be processed, to obtain a target video corresponding to the video to be processed; wherein the first image frame is the next adjacent image frame after the target image frame in the video to be processed.
  • an interactive control is generated on the mask layer, and an interactive operation is set for the interactive control, and the mask layer is inserted into the position between the target image frame and the first image frame in the video to be processed to obtain the video to be processed the corresponding target video.
  • the first image frame is an adjacent next frame image of the target image frame in the video to be processed.
  • for example, if the target image frame is the fifth frame of the video to be processed, the first image frame is the sixth frame of the video to be processed, and the mask layer is inserted between the fifth and sixth frames of the video to be processed.
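Treating the video as a sequence of frames, the insertion step might be sketched as follows; `insert_mask_layer` and the frame labels are illustrative assumptions, not the disclosed implementation:

```python
def insert_mask_layer(frames, target_frame_number, mask):
    """Insert the mask between the target frame and the next frame.

    `target_frame_number` is 1-based, matching the "fifth frame" wording
    above, so the mask lands right after the target image frame.
    """
    return frames[:target_frame_number] + [mask] + frames[target_frame_number:]

video = ["f1", "f2", "f3", "f4", "f5", "f6", "f7"]
target = insert_mask_layer(video, 5, "mask")
# target == ["f1", "f2", "f3", "f4", "f5", "mask", "f6", "f7"]
```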
  • it can also be set to detect a return operation triggered by the user after entering the target page. In response to the user triggering the return operation, the user can return to the playback interface of the fifth frame image or the sixth frame image as described above, and continue to play the target video.
  • after the user selects the target element on the target image frame in the video to be processed, an interactive control for jumping to the target page can be automatically generated for the user on the mask layer, and after the mask layer is inserted into the video to be processed, the target video is automatically generated for the user. It can be seen that the video processing method provided by the embodiments of the present disclosure lowers the operation threshold for users to perform video processing and improves the user's video processing experience.
  • the present disclosure also provides a video-based interaction device.
  • a schematic structural diagram of a video-based interaction device provided by the embodiments of the present disclosure is provided, and the device includes:
  • the mask display module 601 is configured to display a mask layer including an interactive control on the video playback interface corresponding to the target image frame in response to the target video being played to the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
  • the page display module 602 is configured to display a target page in response to a trigger operation for the interactive control; wherein, the content displayed on the target page is related to the content displayed on the target image frame.
  • the device further includes:
  • a jumping module configured to jump from the target page to a video playback interface in response to a return operation triggered on the target page, and continue to play the target based on the target image frame on the video playback interface video.
  • the device further includes:
  • the first playback module is configured to continue to play the target video based on the target image frame in response to the display duration of the mask layer on the video playback interface reaching a preset duration; or, when a continue-playback operation triggered on the mask layer on the video playback interface is received, to continue to play the target video based on the target image frame.
  • the device further includes:
  • the display module is configured to display the operation prompt corresponding to the interactive control on the mask layer; the operation prompt is used to guide the user to trigger the operation of the interactive control.
  • the device further includes:
  • a prompting module configured to play a sound prompt or a vibration prompt in response to a trigger operation for the interactive control.
  • the triggering operation for the interactive control includes: a click operation for the interactive control, or an operation for dragging the interactive control to a target area.
  • the video-based interaction device pauses the playback of the target video when the target video reaches the target image frame, and displays a mask including interactive controls on the video playback interface corresponding to the target image frame.
  • the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element.
  • an interactive control whose display area has a corresponding relationship with the display area of the target element on the target image frame, and whose display image has a corresponding relationship with the target element, is displayed on the video playback interface, and the trigger operation on the interactive control implements the function of switching from the video playback interface to the display of the target page.
  • when a return operation triggered on the target page is received, the target page jumps back to the video playback interface, and the target video continues to play based on the target image frame on the video playback interface.
  • the display of the interactive controls in the embodiment of the present disclosure is implemented based on the display area of the target element on the target image frame, that is, the display of the interactive controls is related to the native display content on the target image frame; therefore, displaying a mask layer with interactive controls on the video playback interface of the target video, and triggering, based on interactive controls related to the native display content on the target image frame, the display of the target page related to the target image frame, can immersively carry the user from the video playback interface into browsing the target page, ensuring the user's experience.
  • an embodiment of the present disclosure further provides a video processing apparatus.
  • a schematic structural diagram of a video processing apparatus provided by an embodiment of the present disclosure, the apparatus includes:
  • the generating module 701 is configured to generate an interactive control on the mask layer based on the target element on the target image frame after receiving the selected operation for the target element on the target image frame in the video to be processed; wherein, The position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image of the interactive control has a corresponding relationship with the target element, The interactive control is used to trigger the display of a target page, and the content displayed on the target page is related to the content displayed on the target image frame;
  • An inserting module 702 configured to insert the mask layer into a position between the target image frame and the first image frame in the target video to obtain a processed target video; wherein the first image frame is the next adjacent image frame after the target image frame in the target video.
  • the generation module includes:
  • a first determination submodule configured to determine the display position information corresponding to the target element on the target image frame
  • a generating sub-module is configured to generate an interactive control corresponding to the target element on the mask layer based on the display position information.
  • the generation module further includes:
  • a second determination submodule configured to determine the display position information of the display element on the target image frame that has a corresponding relationship with the target element
  • a third determination submodule configured to determine the target area corresponding to the interactive control on the mask layer based on the display position information of the display element; wherein, the operation of dragging the interactive control to the target area Used to trigger the display of the target page.
  • the device further includes:
  • a determination module configured to determine a preset stay duration corresponding to the mask layer; wherein the preset stay duration is used to control the display duration of the mask layer.
  • the video processing apparatus provided by the embodiment of the present disclosure can automatically generate, on the mask layer, an interactive control for the user to jump to the target page after the user selects the target element on the target image frame in the video to be processed, and after the mask layer is inserted into the video to be processed, the target video is automatically generated for the user. It can be seen that the video processing method provided by the embodiments of the present disclosure lowers the operation threshold for users to perform video processing and improves the user's video processing experience.
  • embodiments of the present disclosure also provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a terminal device, the terminal device is caused to implement the video-based interaction method or the video processing method according to the embodiments of the present disclosure.
  • an embodiment of the present disclosure also provides a video-based interaction device, as shown in FIG. 8 , which may include:
  • Processor 801, memory 802, input device 803 and output device 804. The number of processors 801 in the video-based interactive device may be one or more, and one processor is taken as an example in FIG. 8.
  • the processor 801 , the memory 802 , the input device 803 , and the output device 804 may be connected by a bus or in other ways, wherein the connection by a bus is taken as an example in FIG. 8 .
  • the memory 802 can be used to store software programs and modules, and the processor 801 executes various functional applications and data processing of the video-based interactive device by running the software programs and modules stored in the memory 802 .
  • the memory 802 may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, an application program required for at least one function, and the like. Additionally, memory 802 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input device 803 may be used to receive input numerical or character information, and to generate signal input related to user settings and functional control of the video-based interactive device.
  • the processor 801 loads the executable files corresponding to the processes of one or more application programs into the memory 802 according to the following instructions, and the processor 801 runs the executable files stored in the memory 802 application, so as to realize various functions of the above-mentioned video-based interactive device.
  • an embodiment of the present disclosure also provides a video processing device, as shown in FIG. 9 , which may include:
  • Processor 901, memory 902, input device 903 and output device 904. The number of processors 901 in the video processing device may be one or more, and one processor is taken as an example in FIG. 9.
  • the processor 901 , the memory 902 , the input device 903 , and the output device 904 may be connected by a bus or in other ways, wherein the connection by a bus is taken as an example in FIG. 9 .
  • the memory 902 can be used to store software programs and modules, and the processor 901 executes various functional applications and data processing of the video processing device by running the software programs and modules stored in the memory 902 .
  • the memory 902 may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system, an application program required for at least one function, and the like. Additionally, memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input device 903 may be used to receive input numerical or character information, and to generate signal input related to user settings and function control of the video processing device.
  • the processor 901 loads the executable files corresponding to the processes of one or more application programs into the memory 902 according to the following instructions, and the processor 901 runs the executable files stored in the memory 902 application program, so as to realize various functions of the above-mentioned video processing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a video-based interaction method, a video processing method, an apparatus, a device, and a storage medium. The method includes: in response to a target video being played to a target image frame, displaying, on the video playback interface corresponding to the target image frame, a mask layer including an interactive control, where the position of the display area of the interactive control on the mask layer has a corresponding relationship with the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element. When a trigger operation for the interactive control is received, a target page is displayed. The present disclosure displays, on the video playback interface, an interactive control whose display area has a corresponding relationship with the display area of the target element on the target image frame and whose display image has a corresponding relationship with the target element, and implements, through the trigger operation on the interactive control, the function of switching from the video playback interface to the display of the target page.

Description

Video-based interaction and video processing method, apparatus, device, and storage medium
This application claims priority to Chinese Patent Application No. 202011060471.3, filed with the China National Intellectual Property Administration on September 30, 2020 and entitled "Video-based interaction, video processing method, apparatus, device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of data processing, and in particular to a video-based interaction method, a video processing method, and a corresponding apparatus, device, and storage medium.
Background
With the continually growing user base of short-video applications, video-based interaction methods are attracting more and more attention.
Among them, how to switch from video playback to the display of a target page while ensuring the user experience is a technical problem that urgently needs to be solved.
Summary
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a video-based interaction method, a video processing method, an apparatus, a device, and a storage medium, which can switch from video playback to the display of a target page while ensuring the user experience.
In a first aspect, the present disclosure provides a video-based interaction method, the method including:
in response to a target video being played to a target image frame, displaying, on the video playback interface corresponding to the target image frame, a mask layer including an interactive control; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of a target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
in response to a trigger operation for the interactive control, displaying a target page; wherein the content displayed on the target page is related to the content displayed on the target image frame.
In an optional implementation, the method further includes:
in response to a return operation triggered on the target page, jumping from the target page to the video playback interface, and continuing to play the target video on the video playback interface based on the target image frame.
In an optional implementation, the method further includes:
in response to the display duration of the mask layer on the video playback interface reaching a preset dwell duration, continuing to play the target video based on the target image frame;
or, when a continue-playback operation triggered on the mask layer on the video playback interface is received, continuing to play the target video based on the target image frame.
In an optional implementation, before displaying the target page in response to the trigger operation for the interactive control, the method further includes:
displaying, on the mask layer, an operation prompt corresponding to the interactive control; the operation prompt is used to guide the user's trigger operation for the interactive control.
In an optional implementation, the method further includes:
in response to a trigger operation for the interactive control, playing a sound prompt or a vibration prompt.
In an optional implementation, the trigger operation for the interactive control includes: a click operation on the interactive control, or an operation of dragging the interactive control to a target area.
In a second aspect, the present disclosure provides a video processing method, the method including:
after receiving a selection operation on a target element on a target image frame in a video to be processed, generating an interactive control on a mask layer based on the target element on the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, the display image of the interactive control has a corresponding relationship with the target element, the interactive control is used to trigger the display of a target page, and the content displayed on the target page is related to the content displayed on the target image frame;
inserting the mask layer into a position between the target image frame and a first image frame in the video to be processed, to obtain a target video corresponding to the video to be processed; wherein the first image frame is the next adjacent image frame after the target image frame in the video to be processed.
In an optional implementation, generating the interactive control on the mask layer based on the target element on the target image frame includes:
determining display position information corresponding to the target element on the target image frame;
generating, based on the display position information, an interactive control corresponding to the target element on the mask layer.
In an optional implementation, after generating the interactive control on the mask layer based on the display position information, the method further includes:
determining display position information of a display element on the target image frame that has a corresponding relationship with the target element;
determining, based on the display position information of the display element, a target area corresponding to the interactive control on the mask layer; wherein an operation of dragging the interactive control to the target area is used to trigger the display of the target page.
In an optional implementation, the method further includes:
determining a preset dwell duration corresponding to the mask layer; wherein the preset dwell duration is used to control the display duration of the mask layer.
In a third aspect, the present disclosure provides a video-based interaction apparatus, the apparatus including:
a mask display module, configured to display, in response to a target video being played to a target image frame, a mask layer including an interactive control on the video playback interface corresponding to the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of a target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
a page display module, configured to display a target page in response to a trigger operation for the interactive control; wherein the target page has a corresponding relationship with the target image frame.
In a fourth aspect, the present disclosure provides a video processing apparatus, the apparatus including:
a generating module, configured to generate, after receiving a selection operation on a target element on a target image frame in a video to be processed, an interactive control on a mask layer based on the target element on the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, the display image of the interactive control is determined based on the target element, the interactive control is used to trigger the display of a target page, and the target page has a corresponding relationship with the target image frame;
an inserting module, configured to insert the mask layer into a position between the target image frame and a first image frame in the target video, to obtain a processed target video; wherein the first image frame is the next adjacent image frame after the target image frame in the target video.
In a fifth aspect, the present disclosure provides a computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to implement the above method.
In a sixth aspect, the present disclosure provides a device, including: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above method when executing the computer program.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
An embodiment of the present disclosure provides a video-based interaction method: in response to a target video being played to a target image frame, playback of the target video is paused, and a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame, wherein the display area of the interactive control on the mask layer has a corresponding relationship with the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element. When a trigger operation for the interactive control is received, a target page is displayed, where the target page has a corresponding relationship with the target image frame. The embodiments of the present disclosure display, on the video playback interface, an interactive control whose display area has a corresponding relationship with the display area of the target element on the target image frame and whose display image has a corresponding relationship with the target element, and implement, through the trigger operation on the interactive control, the function of switching from the video playback interface to the display of the target page.
In addition, since the display of the interactive control in the embodiments of the present disclosure is related to the native display content on the target image frame, displaying a mask layer carrying the interactive control on the video playback interface of the target video, and triggering, based on an interactive control related to the native display content on the target image frame, the display of a target page related to the target image frame, can immersively carry the user from the video playback interface into browsing the target page, ensuring the user's experience.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure.
In order to describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a video-based interaction method provided by an embodiment of the present disclosure;
FIG. 2 is an interface effect diagram of a target image frame of a target video provided by an embodiment of the present disclosure;
FIG. 3 is an interface effect diagram of displaying a mask layer including an interactive control provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of a video processing method provided by an embodiment of the present disclosure;
FIG. 5 is an interface effect diagram of a mask layer provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a video-based interaction apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a video-based interaction device provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a video processing device provided by an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features, and advantages of the present disclosure may be understood more clearly, the solutions of the present disclosure are further described below. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described herein; obviously, the embodiments in this specification are only some, not all, of the embodiments of the present disclosure.
With the rapid growth of the user base of short-video applications, how to ensure the user experience in short-video applications is attracting more and more attention from developers. Video-based interaction is one of the important factors in ensuring the user experience of short-video applications; therefore, video-based interaction methods have also received much attention.
To this end, the present disclosure provides a video-based interaction method: when a target video is played to a target image frame, playback of the target video is paused, and a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame, wherein the display area of the interactive control on the mask layer has the same position as the display area of a target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element. When a trigger operation for the interactive control is received, a target page having a corresponding relationship with the target image frame is displayed. The embodiments of the present disclosure display, on the video playback interface, an interactive control whose position is the same as that of the display area of the target element on the target image frame and whose display image has a corresponding relationship with the target element, and implement, through the trigger operation on the interactive control, the function of switching from the video playback interface to the display of the target page.
In addition, since the display of the interactive control in the embodiments of the present disclosure is implemented based on the display area of the target element on the target image frame, i.e., the display of the interactive control is related to the native display content on the target image frame, displaying a mask layer carrying the interactive control on the video playback interface of the target video, and triggering, based on an interactive control related to the native display content on the target image frame, the display of a target page related to the target image frame, can immersively carry the user from the video playback interface into browsing the target page, ensuring the user's experience.
Based on this, an embodiment of the present disclosure provides a video-based interaction method. Referring to FIG. 1, a flowchart of a video-based interaction method provided by an embodiment of the present disclosure, the method includes:
S101: In response to a target video being played to a target image frame, display, on the video playback interface corresponding to the target image frame, a mask layer including an interactive control.
Here, the display area of the interactive control on the mask layer has the same position as the display area of a target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element.
In the embodiments of the present disclosure, the target video refers to a video or video clip with a certain playback duration. One or more image frames immediately following the target image frame in the target video are used to display the mask layer; that is, the target video contains at least one image frame on which the mask layer is displayed. For example, the image frames at playback time points of seconds 5 to 7 of the target video are used to display the mask layer.
In practical applications, during playback of the target video, when playback reaches the target image frame, the mask layer may be displayed on the playback interface corresponding to the target image frame, and playback of the target video is paused. An interactive control is displayed on the mask layer shown on the video playback interface, and the display area of this interactive control on the mask layer has a corresponding relationship with the display area of the target element on the target image frame. Specifically, the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame. Since the mask layer may be displayed with a certain transparency, from the user's point of view the interactive control displayed on the mask layer overlaps the display area of the target element on the target image frame.
In addition, the display image on the interactive control has a corresponding relationship with the target element; specifically, the display image on the interactive control and the display image of the target element may be the same or similar.
In an optional implementation, a single interactive control may be displayed on the mask layer, or multiple interactive controls may be displayed at the same time; the number of interactive controls may be determined based on the display requirements of the target pages. Specifically, different interactive controls are used to trigger the display of different target pages. For example, if the target page is an advertisement page designed based on the display content on the target image frame, different interactive controls may be used to trigger different advertisement pages designed based on that display content.
In practical applications, the mask layer may be displayed with a certain transparency; therefore, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the image on the target image frame can still be faintly seen. In addition, the target image frame in the target video is determined based on the setting requirements of the mask layer.
To introduce the embodiments of the present disclosure more vividly, FIG. 2 shows an interface effect diagram of a target image frame of a target video provided by an embodiment of the present disclosure. In practical applications, when the target video is played to this target image frame, a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame, as shown in FIG. 3, an interface effect diagram of displaying a mask layer including an interactive control provided by an embodiment of the present disclosure. Specifically, the display area of the interactive control "composite" on the mask layer in FIG. 3 overlaps the display area of the target element "composite" on the target image frame in FIG. 2.
In the embodiments of the present disclosure, the display image on the interactive control is the same as the display image corresponding to the target element on the target image frame; for example, the interactive control "composite" in FIG. 3 displays the word "composite" on a yellow background, the same as the display image of the target element "composite" in the target image frame in FIG. 2.
In the embodiments of the present disclosure, the interactive control displayed on the mask layer is set based on a native display element in the target image frame, so that the content displayed for the user on the mask layer does not depart from the content of the target video, avoiding exceeding the user's sensory tolerance and ensuring the user's experience.
In an optional implementation, not only the interactive control but also an operation prompt corresponding to the interactive control may be displayed on the mask layer. As shown in FIG. 3, the mask layer displays not only the interactive control "composite" but also the operation prompt "tap composite to download and composite the ** character", to guide the user's trigger operation for the interactive control and to give the user an expectation of the content of the target page after the interactive control is triggered.
S102: In response to a trigger operation for the interactive control, display a target page, where the content displayed on the target page is related to the content displayed on the target image frame.
In the embodiments of the present disclosure, when the user's trigger operation for the interactive control is detected on the mask layer displayed on the video playback interface corresponding to the target image frame, the interface jumps from the video playback interface to the target page, implementing the display of the target page. The target page has a corresponding relationship with the target image frame; typically the target page is set based on the display content on the target image frame, for example an advertisement page, a personal homepage, or the homepage of an official account.
The trigger operation for the interactive control may include: a click operation on the interactive control, or an operation of dragging the interactive control to a target area. Specific implementations are introduced in the subsequent embodiments.
As shown in FIG. 3, when the user's click operation on the interactive control "composite" is detected, the interface jumps from the current video playback interface to the "composite ** character" download page and displays it; based on the displayed download page, the user can download the "composite ** character". In addition, during video playback, guiding the user to tap the "composite" button to enter the "composite ** character" download page can also increase the download volume of the "composite ** character".
In the above possible implementations, the user can jump from the playback interface of the target video to the target page by triggering the interactive control in the mask layer. Optionally, the user can also jump from the target page back to the video playback interface of the target video by triggering a return operation. Specifically, the following step S103 may be performed.
S103: In response to a return operation triggered on the target page, jump from the target page to the video playback interface, and continue to play the target video on the video playback interface based on the target image frame.
In the embodiments of the present disclosure, during the display of the target page, the user may trigger a return operation on the target page so as to continue playing the subsequent video frames of the target video. Specifically, when a return operation triggered on the target page is received, the interface jumps from the target page back to the video playback interface and continues to play the subsequent video frames of the target video on the video playback interface. Specifically, playback may resume from the target image frame, or from the frame following the target image frame; that is, the image frame from which playback resumes may be determined based on the target image frame.
In another optional implementation, in order to prompt the user of the effectiveness of the trigger operation for the interactive control, the embodiments of the present disclosure play a sound prompt or a vibration prompt when a trigger operation for the interactive control is received, so as to prompt the user that the trigger operation has taken effect and ensure the user's interactive experience.
In another optional implementation, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the display duration of the mask layer on the video playback interface may also be timed; when it is determined that the display duration of the mask layer on the video playback interface reaches a preset dwell duration, playback of the target video continues.
If the display duration of the mask layer on the video playback interface reaches the preset dwell duration, this indicates that the user has not triggered the interactive control displayed on the mask layer within the display time of the mask layer. Therefore, to ensure the user's playback experience, the embodiments of the present disclosure switch from the display interface of the mask layer to the video playback interface and continue to play the subsequent image frames of the target video.
In another optional implementation, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the user may actively trigger continued playback of the target video. Specifically, when a continue-playback operation triggered on the mask layer on the video playback interface is received, the target video continues to play based on the target image frame.
In the video-based interaction method provided by the embodiments of the present disclosure, when the target video is played to the target image frame, playback of the target video is paused, and a mask layer including an interactive control is displayed on the video playback interface corresponding to the target image frame, wherein the display area of the interactive control on the mask layer has the same position as the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element. When a trigger operation for the interactive control is received, a target page is displayed, where the content displayed on the target page is related to the content displayed on the target image frame. When a return operation triggered on the target page is received, the target page jumps back to the video playback interface, and the target video continues to play on the video playback interface based on the target image frame. The embodiments of the present disclosure display, on the video playback interface, an interactive control whose display area has a corresponding relationship with the display area of the target element on the target image frame and whose display image has a corresponding relationship with the target element, and implement, through the trigger operation on the interactive control, the function of switching from the video playback interface to the display of the target page.
In addition, since the display of the interactive control in the embodiments of the present disclosure is related to the native display content on the target image frame, displaying a mask layer carrying the interactive control on the video playback interface of the target video, and triggering, based on an interactive control related to the native display content on the target image frame, the display of a target page related to the target image frame, can immersively carry the user from the video playback interface into browsing the target page, ensuring the user's experience.
Based on the above embodiments, the present disclosure further provides a video processing method for processing a video to be processed so as to obtain the target video used in the video-based interaction method provided by the above embodiments.
Referring to FIG. 4, a flowchart of a video processing method provided by an embodiment of the present disclosure, the video processing method includes:
S401: After receiving a selection operation on a target element on a target image frame in the video to be processed, generate an interactive control on a mask layer based on the target element on the target image frame.
Here, the display area of the interactive control on the mask layer has the same position as the display area of the target element on the target image frame, the display image of the interactive control has a corresponding relationship with the target element, the interactive control is used to trigger the display of a target page, and the content displayed on the target page is related to the content displayed on the target image frame.
In the embodiments of the present disclosure, the video to be processed may be any type of video or video clip with a certain playback duration, such as a game video clip or a travel video clip.
In the embodiments of the present disclosure, after the video to be processed is determined, the target image frame in the video to be processed is determined first. Typically, the target image frame is determined based on the setting requirements of the mask layer; for example, if the mask layer needs to be displayed on the 25th image frame of the video to be processed, the 25th image frame of the video to be processed may be determined as the target image frame.
In an optional implementation, the user may determine an image frame in the video to be processed as the target image frame based on setting requirements.
In addition, after the target image frame in the video to be processed is determined, the target element on the target image frame is determined. Typically, the target element is determined based on the setting requirements of the interactive control displayed on the mask layer. For example, if the interactive control on the mask layer needs to be set to an image style displayed on the target image frame, such as the "composite" image style in FIG. 2, the "composite" image style may be determined as the target element on the target image frame.
In practical applications, the target image frame may include at least one display element; after the target image frame is determined, at least one display element on the target image frame is selected as the target element for generating the interactive control. Specifically, after a selection operation on the target element on the target image frame is received, the interactive control is generated based on the target element.
In the embodiments of the present disclosure, after the target element on the target image frame of the video to be processed is determined, an interactive control having a corresponding relationship with the display area of the target element on the target image frame is generated on the mask layer, so that when the mask layer is displayed on the video playback interface corresponding to the target image frame, the interactive control displayed on the mask layer overlaps or is close to the display area of the target element on the target image frame. Specifically, an interactive control whose position is the same as that of the display area of the target element on the target image frame is generated on the mask layer.
In practical applications, before the interactive control is generated on the mask layer, the display position information corresponding to the target element on the target image frame of the video to be processed may first be determined, where the display position information corresponding to the target element is used to determine the display position of the target element on the target image frame. Specifically, the display position information may include the coordinates of each pixel on the boundary of the target element, or may include the coordinates of the center point of the target element and its display shape.
After the display position information corresponding to the target element is determined, an interactive control corresponding to the target element is generated on the mask layer based on the display position information. Specifically, the display position of the interactive control on the mask layer is the same as the display position of the target element on the target image frame; that is, when the mask layer is displayed on the video playback interface corresponding to the target image frame, the interactive control displayed on the mask layer overlaps the display area of its corresponding target element.
In addition, the interactive control generated on the mask layer can be used to trigger the display of the target page. Specifically, a click operation or a move operation on the interactive control may be used to trigger the display of the target page. For example, the target page may be displayed in response to a click operation triggered by the user on the interactive control; or the target page may be displayed in response to a drag operation triggered by the user on the interactive control, for example moving the interactive control to a target area. Specifically, the trigger operation for the interactive control may be set to be any type of interactive operation, which is not limited in the embodiments of the present disclosure.
In addition, the target page may be a page determined based on the content on the target image frame, for example an advertisement page or the homepage of an official account.
In an optional implementation, when a click operation on the interactive control is set to trigger the display of the target page, the click range of the click operation on the interactive control also needs to be set. Specifically, the display area corresponding to the interactive control may be determined as the click range of the click operation on the interactive control; when a click operation triggered by the user is detected within the click range, the display of the target page corresponding to the interactive control is triggered.
In addition, in order to enrich the content displayed on the mask layer, other display elements on the target image frame may also be displayed on the mask layer; specifically, for the same display element, the position of its display area on the mask layer is the same as the position of its display area on the target image frame.
In another optional implementation, when an operation of dragging the interactive control to a target area is set to trigger the display of the target page, the display position information of the target area may also be set. Specifically, a display element on the target image frame that has a corresponding relationship with the target element may be determined, and then the display position information of that display element on the target image frame may be determined. As shown in FIG. 5, an interface effect diagram of a mask layer provided by an embodiment of the present disclosure, card 1 is the target element on the target image frame, and the display element having a corresponding relationship with card 1 is the card frame. The display position information of the card frame on the target image frame is determined; then, based on the display position information of the card frame, the target area corresponding to card 1 on the mask layer, i.e., the display area where the card frame is located, is determined, and it is set that when an operation of dragging the interactive control into the target area is detected, the display of the target page is triggered.
In addition, a drag operation prompt for the interactive control may also be set on the mask layer, which may specifically include a drag gesture animation prompt, a drag text prompt, and the like. As shown in FIG. 5, the mask layer is provided with a drag gesture animation prompt for the interactive control "card 1", used to guide the user to drag card 1 to the target area; specifically, the number of times the drag gesture animation is displayed on the display interface of the mask layer may be set. The mask layer is also provided with a text prompt "drag the card to unlock the ** game", used to guide the user's trigger operation for the interactive control and to give the user an expectation of the content of the target page after the interactive control is triggered, ensuring the user's interactive experience.
In an optional implementation, in order to avoid degrading the user experience by staying too long on the interface displaying the mask layer, a dwell duration may also be set to control the display duration of the mask layer. Specifically, a preset dwell duration corresponding to the mask layer may be determined according to the setting requirements of the mask layer, and it may be set that, when the display time of the mask layer reaches the preset dwell duration, a switch is triggered from the display interface of the mask layer to the playback interface of the next image frame, so that the subsequent video continues to play for the user.
In addition, in order to prompt the user of the effectiveness of the trigger operation for the interactive control, the embodiments of the present disclosure may also set a corresponding sound prompt or vibration prompt to be played after the interactive control on the mask layer is triggered, so as to prompt the user that the trigger operation has taken effect and ensure the user's interactive experience.
In addition, the embodiments of the present disclosure may also set a corresponding visual effect after the interactive control on the mask layer is triggered, for example an effect of the mask layer dissipating, to prompt the user of the effectiveness of the operation of triggering the interactive control and improve the user's interactive experience.
S402: Insert the mask layer into a position between the target image frame and a first image frame in the video to be processed, to obtain a target video corresponding to the video to be processed; wherein the first image frame is the next adjacent image frame after the target image frame in the video to be processed.
In the embodiments of the present disclosure, after the interactive control is generated on the mask layer and an interactive operation is set for the interactive control, the mask layer is inserted into the position between the target image frame and the first image frame in the video to be processed, to obtain the target video corresponding to the video to be processed.
Here, the first image frame is the next adjacent image frame after the target image frame in the video to be processed. For example, if the target image frame is the 5th image frame of the video to be processed, the first image frame is the 6th image frame of the video to be processed, and the mask layer is inserted between the 5th and 6th image frames of the video to be processed.
In addition, it may also be set that, after the target page is entered, a return operation triggered by the user is detected. In response to the user triggering the return operation, the playback interface of the 5th or 6th image frame described above may be returned to, and playback of the target video continues.
In the video processing method provided by the embodiments of the present disclosure, after the user selects the target element on the target image frame in the video to be processed, an interactive control for jumping to the target page can be automatically generated for the user on the mask layer, and after the mask layer is inserted into the video to be processed, the target video is automatically generated for the user. It can be seen that the video processing method provided by the embodiments of the present disclosure lowers the operation threshold for users to perform video processing and improves the user's video processing experience.
Based on the same inventive concept as the above method embodiments, the present disclosure further provides a video-based interaction apparatus. Referring to FIG. 6, a schematic structural diagram of a video-based interaction apparatus provided by an embodiment of the present disclosure, the apparatus includes:
a mask display module 601, configured to display, in response to a target video being played to a target image frame, a mask layer including an interactive control on the video playback interface corresponding to the target image frame; wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of a target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element;
a page display module 602, configured to display a target page in response to a trigger operation for the interactive control; wherein the content displayed on the target page is related to the content displayed on the target image frame.
In an optional implementation, the apparatus further includes:
a jumping module, configured to jump from the target page to the video playback interface in response to a return operation triggered on the target page, and continue to play the target video on the video playback interface based on the target image frame.
In an optional implementation, the apparatus further includes:
a first playback module, configured to continue to play the target video based on the target image frame in response to the display duration of the mask layer on the video playback interface reaching a preset dwell duration; or, when a continue-playback operation triggered on the mask layer on the video playback interface is received, to continue to play the target video based on the target image frame.
In an optional implementation, the apparatus further includes:
a display module, configured to display, on the mask layer, an operation prompt corresponding to the interactive control; the operation prompt is used to guide the user's trigger operation for the interactive control.
In an optional implementation, the apparatus further includes:
a prompting module, configured to play a sound prompt or a vibration prompt in response to a trigger operation for the interactive control.
In an optional implementation, the trigger operation for the interactive control includes: a click operation on the interactive control, or an operation of dragging the interactive control to a target area.
The video-based interaction apparatus provided by the embodiments of the present disclosure pauses playback of the target video when the target video is played to the target image frame, and displays a mask layer including an interactive control on the video playback interface corresponding to the target image frame, wherein the position of the display area of the interactive control on the mask layer is the same as the position of the display area of the target element on the target image frame, and the display image on the interactive control has a corresponding relationship with the target element. When a trigger operation for the interactive control is received, a target page is displayed, where the content displayed on the target page is related to the content displayed on the target image frame. The embodiments of the present disclosure display, on the video playback interface, an interactive control whose display area has a corresponding relationship with the display area of the target element on the target image frame and whose display image has a corresponding relationship with the target element, and implement, through the trigger operation on the interactive control, the function of switching from the video playback interface to the display of the target page. When a return operation triggered on the target page is received, the target page jumps back to the video playback interface, and the target video continues to play on the video playback interface based on the target image frame.
In addition, since the display of the interactive control in the embodiments of the present disclosure is implemented based on the display area of the target element on the target image frame, i.e., the display of the interactive control is related to the native display content on the target image frame, displaying a mask layer carrying the interactive control on the video playback interface of the target video, and triggering, based on an interactive control related to the native display content on the target image frame, the display of a target page related to the target image frame, can immersively carry the user from the video playback interface into browsing the target page, ensuring the user's experience.
另外,与上述方法实施例基于同一个发明构思,本公开实施例还提供了一种视频处理装置,参考图7,为本公开实施例提供的一种视频处理装置的结构示意图,所述装置包括:
生成模块701,用于在接收到针对待处理视频中目标图像帧上的目标元素的选定操作后,基于所述目标图像帧上的所述目标元素,在蒙层上生成交互控件;其中,所述交互控件在所述蒙层上的显示区域的位置与所述目标元素在所述目标图像帧上的显示区域的位置相同,所述交互控件的显示图像与所述目标元素具有对应关系,所述交互控件用于触发目标页面的展示,所述目标页面上展示的内容与所述目标图像帧上展示的内容相关;
***模块702,用于将所述蒙层***至所述目标视频中所述目标图像帧与第一图像帧之间的位置,得到处理后的目标视频;其中,所述第一图像帧为所述目标视频中所述目标图像帧的相邻下一帧图像。
一种可选的实施方式中,所述生成模块,包括:
第一确定子模块,用于确定所述目标图像帧上的所述目标元素对应的显示位置信息;
生成子模块,用于基于所述显示位置信息,在蒙层上生成所述目标元素对应的交互控件。
In an optional implementation, the generating module further includes:
a second determining submodule, configured to determine display position information of a display element on the target image frame that corresponds to the target element;
a third determining submodule, configured to determine, based on the display position information of the display element, a target region corresponding to the interactive control on the mask layer; wherein an operation of dragging the interactive control to the target region is used to trigger display of the target page.
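At drop time, the drag-to-target-region trigger reduces to a point-in-rectangle test; the tuple layout below is an assumption for illustration only:

```python
def drag_triggers_page(drop_x, drop_y, target_region):
    """Return True when the interactive control is dropped inside the
    target region (x, y, width, height) determined on the mask layer,
    i.e. when the drag operation should trigger display of the target page."""
    x, y, width, height = target_region
    # Treat the region edges as inside, so a drop exactly on the border counts.
    return x <= drop_x <= x + width and y <= drop_y <= y + height
```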
In an optional implementation, the apparatus further includes:
a determining module, configured to determine a preset dwell duration corresponding to the mask layer; wherein the preset dwell duration is used to control the display duration of the mask layer.
In the video processing apparatus provided by the embodiments of the present disclosure, after the user selects a target element on a target image frame of the video to be processed, an interactive control for jumping to the target page can be automatically generated for the user on the mask layer, and after the mask layer is inserted into the video to be processed, the target video can be automatically generated for the user. It can be seen that the video processing apparatus provided by the embodiments of the present disclosure lowers the operational threshold of video processing for users and improves the user's video processing experience.
In addition to the above methods and apparatuses, an embodiment of the present disclosure further provides a computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to implement the video-based interaction method or the video processing method described in the embodiments of the present disclosure.
In addition, an embodiment of the present disclosure further provides a video-based interaction device, which, as shown in FIG. 8, may include:
a processor 801, a memory 802, an input apparatus 803, and an output apparatus 804. The number of processors 801 in the video-based interaction device may be one or more, and one processor is taken as an example in FIG. 8. In some embodiments of the present disclosure, the processor 801, the memory 802, the input apparatus 803, and the output apparatus 804 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 8.
The memory 802 may be used to store software programs and modules, and the processor 801 performs the various functional applications and data processing of the video-based interaction device by running the software programs and modules stored in the memory 802. The memory 802 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, the application program required by at least one function, and the like. In addition, the memory 802 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The input apparatus 803 may be used to receive input numeric or character information and to generate signal inputs related to user settings and function control of the video-based interaction device.
Specifically, in this embodiment, the processor 801 loads executable files corresponding to the processes of one or more application programs into the memory 802 according to the following instructions, and runs the application programs stored in the memory 802, thereby implementing the various functions of the above video-based interaction device.
In addition, an embodiment of the present disclosure further provides a video processing device, which, as shown in FIG. 9, may include:
a processor 901, a memory 902, an input apparatus 903, and an output apparatus 904. The number of processors 901 in the video processing device may be one or more, and one processor is taken as an example in FIG. 9. In some embodiments of the present disclosure, the processor 901, the memory 902, the input apparatus 903, and the output apparatus 904 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 9.
The memory 902 may be used to store software programs and modules, and the processor 901 performs the various functional applications and data processing of the video processing device by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, the application program required by at least one function, and the like. In addition, the memory 902 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The input apparatus 903 may be used to receive input numeric or character information and to generate signal inputs related to user settings and function control of the video processing device.
Specifically, in this embodiment, the processor 901 loads executable files corresponding to the processes of one or more application programs into the memory 902 according to the following instructions, and runs the application programs stored in the memory 902, thereby implementing the various functions of the above video processing device.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above description covers only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

  1. A video-based interaction method, wherein the method comprises:
    in response to a target video playing to a target image frame, displaying, on a playback interface corresponding to the target image frame, a mask layer comprising an interactive control; wherein a position of a display region of the interactive control on the mask layer is the same as a position of a display region of a target element on the target image frame, and an image displayed on the interactive control corresponds to the target element;
    in response to a trigger operation on the interactive control, displaying a target page; wherein content displayed on the target page is related to content displayed on the target image frame.
  2. The method according to claim 1, wherein the method further comprises:
    in response to a return operation triggered on the target page, jumping from the target page to a video playback interface, and continuing to play the target video on the video playback interface from the target image frame.
  3. The method according to claim 1, wherein before the displaying a target page in response to a trigger operation on the interactive control, the method further comprises:
    in response to a display duration of the mask layer on the video playback interface reaching a preset dwell duration, continuing to play the target video from the target image frame;
    or, when a continue-playback operation triggered on the mask layer on the video playback interface is received, continuing to play the target video from the target image frame.
  4. The method according to claim 1, wherein before the displaying a target page in response to a trigger operation on the interactive control, the method further comprises:
    displaying, on the mask layer, an operation prompt corresponding to the interactive control; the operation prompt being used to guide a user to trigger the interactive control.
  5. The method according to claim 1, wherein the method further comprises:
    in response to a trigger operation on the interactive control, playing a sound prompt or a vibration prompt.
  6. The method according to any one of claims 1 to 5, wherein the trigger operation on the interactive control comprises:
    a click operation on the interactive control, or an operation of dragging the interactive control to a target region.
  7. A video processing method, wherein the method comprises:
    after receiving a selection operation on a target element on a target image frame in a video to be processed, generating an interactive control on a mask layer based on the target element on the target image frame; wherein a position of a display region of the interactive control on the mask layer is the same as a position of a display region of the target element on the target image frame, an image displayed on the interactive control corresponds to the target element, the interactive control is used to trigger display of a target page, and content displayed on the target page is related to content displayed on the target image frame;
    inserting the mask layer into the video to be processed at a position between the target image frame and a first image frame, to obtain a target video corresponding to the video to be processed; wherein the first image frame is a frame immediately following the target image frame in the video to be processed.
  8. The method according to claim 7, wherein the generating an interactive control on a mask layer based on the target element on the target image frame comprises:
    determining display position information corresponding to the target element on the target image frame;
    generating the interactive control on the mask layer based on the display position information.
  9. The method according to claim 8, wherein after the generating the interactive control on the mask layer based on the display position information, the method further comprises:
    determining display position information of a display element on the target image frame that corresponds to the target element;
    determining, based on the display position information of the display element, a target region corresponding to the interactive control on the mask layer; wherein an operation of dragging the interactive control to the target region is used to trigger display of the target page.
  10. The method according to claim 7, wherein the method further comprises:
    determining a preset dwell duration corresponding to the mask layer; wherein the preset dwell duration is used to control a display duration of the mask layer.
  11. A video-based interaction apparatus, wherein the apparatus comprises:
    a mask layer display module, configured to display, in response to a target video playing to a target image frame, a mask layer comprising an interactive control on a video playback interface corresponding to the target image frame; wherein a position of a display region of the interactive control on the mask layer is the same as a position of a display region of a target element on the target image frame, and an image displayed on the interactive control corresponds to the target element;
    a page display module, configured to display a target page in response to a trigger operation on the interactive control; wherein content displayed on the target page is related to content displayed on the target image frame.
  12. A video processing apparatus, wherein the apparatus comprises:
    a generating module, configured to, after a selection operation on a target element on a target image frame in a video to be processed is received, generate an interactive control on a mask layer based on the target element on the target image frame; wherein a position of a display region of the interactive control on the mask layer is the same as a position of a display region of the target element on the target image frame, an image displayed on the interactive control is determined based on the target element, the interactive control is used to trigger display of a target page, and content displayed on the target page is related to content displayed on the target image frame;
    an insertion module, configured to insert the mask layer into the target video at a position between the target image frame and a first image frame, to obtain a processed target video; wherein the first image frame is a frame immediately following the target image frame in the target video.
  13. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions that, when run on a terminal device, cause the terminal device to implement the method according to any one of claims 1 to 10.
  14. A device, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 10.
PCT/CN2021/114669 2020-09-30 2021-08-26 Video-based interaction and video processing method, apparatus, device and storage medium WO2022068478A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023507659A JP2024507605A (ja) 2020-09-30 2021-08-26 ビデオによるインタラクション、ビデオ処理方法、装置、機器及び記憶媒体
EP21874139.5A EP4175309A4 (en) 2020-09-30 2021-08-26 METHODS OF VIDEO PROCESSING AND VIDEO-BASED INTERACTION, APPARATUS, DEVICE AND RECORDING MEDIUM
US18/090,323 US20230161471A1 (en) 2020-09-30 2022-12-28 Video-based interaction and video processing methods, apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011060471.3 2020-09-30
CN202011060471.3A 2020-09-30 2020-09-30 Video-based interaction and video processing method, apparatus, device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/090,323 Continuation US20230161471A1 (en) 2020-09-30 2022-12-28 Video-based interaction and video processing methods, apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022068478A1 true WO2022068478A1 (zh) 2022-04-07

Family

ID=73946303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114669 WO2022068478A1 (zh) 2020-09-30 2021-08-26 基于视频的交互、视频处理方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20230161471A1 (zh)
EP (1) EP4175309A4 (zh)
JP (1) JP2024507605A (zh)
CN (1) CN112188255A (zh)
WO (1) WO2022068478A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188255A (zh) * 2020-09-30 2021-01-05 北京字跳网络技术有限公司 Video-based interaction and video processing method, apparatus, device and storage medium
CN113031842B * 2021-04-12 2023-02-28 北京有竹居网络技术有限公司 Video-based interaction method and apparatus, storage medium, and electronic device
CN113127128B * 2021-05-07 2024-06-18 广州酷狗计算机科技有限公司 Interface control method and apparatus, computer device, and storage medium
CN113656130B * 2021-08-16 2024-05-17 北京百度网讯科技有限公司 Data display method, apparatus, device, and storage medium
CN114003326B * 2021-10-22 2023-10-13 北京字跳网络技术有限公司 Message processing method, apparatus, device, and storage medium
CN114415928A * 2022-01-17 2022-04-29 北京字跳网络技术有限公司 Page control method, apparatus, device, and storage medium
CN114513705A * 2022-02-21 2022-05-17 北京字节跳动网络技术有限公司 Video display method and apparatus, and storage medium
CN114786063B * 2022-03-11 2023-12-05 北京字跳网络技术有限公司 Audio and video processing method, apparatus, device, and storage medium
CN116126331B * 2023-02-10 2023-07-25 安芯网盾(北京)科技有限公司 Interactive flowchart generation method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108769814A * 2018-06-01 2018-11-06 腾讯科技(深圳)有限公司 Video interaction method and apparatus, and readable medium
US20190160376A1 * 2017-11-30 2019-05-30 Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.) Video game processing program, video game processing method, and video game processing system
CN110062270A * 2019-04-24 2019-07-26 北京豆萌信息技术有限公司 Advertisement display method and apparatus
CN111669639A * 2020-06-15 2020-09-15 北京字节跳动网络技术有限公司 Method and apparatus for displaying an activity entry, electronic device, and storage medium
CN111698566A * 2020-06-04 2020-09-22 北京奇艺世纪科技有限公司 Video playback method and apparatus, electronic device, and storage medium
CN112188255A (zh) * 2020-09-30 2021-01-05 北京字跳网络技术有限公司 Video-based interaction and video processing method, apparatus, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8526782B2 (en) * 2010-12-22 2013-09-03 Coincident.Tv, Inc. Switched annotations in playing audiovisual works
US20140013196A1 (en) * 2012-07-09 2014-01-09 Mobitude, LLC, a Delaware LLC On-screen alert during content playback
US20160011758A1 (en) * 2014-07-09 2016-01-14 Selfie Inc. System, apparatuses and methods for a video communications network
US9799134B2 (en) * 2016-01-12 2017-10-24 Indg Method and system for high-performance real-time adjustment of one or more elements in a playing video, interactive 360° content or image
JP6997338B2 * 2018-03-28 2022-01-17 華為技術有限公司 Video preview method and electronic device
WO2020198238A1 (en) * 2019-03-24 2020-10-01 Apple Inc. User interfaces for a media browsing application
CN111026392B * 2019-11-14 2023-08-22 北京金山安全软件有限公司 Guide page generation method and apparatus, and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4175309A4

Also Published As

Publication number Publication date
EP4175309A4 (en) 2024-03-13
JP2024507605A (ja) 2024-02-21
EP4175309A1 (en) 2023-05-03
US20230161471A1 (en) 2023-05-25
CN112188255A (zh) 2021-01-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21874139

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023507659

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021874139

Country of ref document: EP

Effective date: 20230126

NENP Non-entry into the national phase

Ref country code: DE