CN113766294A - Video processing method, device and storage medium - Google Patents

Video processing method, device and storage medium

Info

Publication number
CN113766294A
CN113766294A (application CN202110332648.9A)
Authority
CN
China
Prior art keywords
video
playing
control layer
transparent
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110332648.9A
Other languages
Chinese (zh)
Other versions
CN113766294B (en)
Inventor
李毛毛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202110332648.9A
Publication of CN113766294A
Application granted
Publication of CN113766294B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a video processing method, a video processing device, and a storage medium. On a current interface, a touch operation jumps to a video playing interface and sends a video request instruction to a server. In a video barrier-free (accessibility) mode, in response to a playing instruction, a target video is played based on video comprehensive data, and a transparent control layer and a control layer are displayed overlaid on the target video. In response to a trigger operation on the transparent control layer and/or the control layer, a content prompt and/or a play-condition prompt for the target video is given according to the video comprehensive data. When the target object needs a video prompt, triggering the transparent control layer or the control layer yields, respectively, a short content prompt or a play-condition prompt.

Description

Video processing method, device and storage medium
Technical Field
Embodiments of the present invention relate to the field of video processing technology, and in particular to a video processing method, a video processing device, and a storage medium.
Background
In China there are 17.31 million visually impaired people; worldwide there are roughly 100 million blind people, and about 1.3 billion people have visual impairment of varying degrees. According to a 2018 mobile insight report on visually impaired netizens by the Information Accessibility Research Association, mobile phones are the most commonly used internet device among these users, at 95%, and browsing videos accounts for nearly 6% of their mobile phone use.
Currently, when visually impaired users browse videos on a mobile phone, the phone's barrier-free (accessibility) mode mostly announces only how long the current video is and the current playing time after the video is tapped. Some phones instead concatenate the total duration, the current playing time, and information about tappable controls into a single prompt with a large amount of information. As a result, current prompt content is either too limited or too long.
Disclosure of Invention
The video processing method, video processing device, and storage medium provided by the embodiments of the present invention can enrich the video prompt content while keeping the prompts concise.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides a video processing method, which comprises the following steps:
skipping to a video playing interface through touch operation on the current interface, and sending a video request instruction to a server;
in a video barrier-free mode, triggering a playing instruction on the video playing interface according to video comprehensive data fed back by the server responding to the video request instruction;
responding to the playing instruction, playing a target video based on the video comprehensive data, and overlaying and displaying a transparent control layer and a control layer on a display interface of the target video; the transparent control layer and the control layer are positioned in different display areas on the video playing interface;
and responding to the triggering operation aiming at the transparent control layer and/or the control layer, and carrying out content prompt and/or play condition prompt on the target video according to the video comprehensive data.
In the foregoing solution, the video comprehensive data includes: the target video, transparent-control related data of the transparent control layer, control related data of the control layer, a content correspondence between a plurality of time periods of the target video and content prompt information, and a playing-state correspondence between playing states of the target video and the corresponding playing-condition information.
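The composition of the video comprehensive data described above can be sketched as a simple data structure. All field names, coordinates, and sample prompts below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class VideoComprehensiveData:
    """Hypothetical container for the data the server feeds back."""
    target_video: bytes              # encoded video payload (placeholder)
    transparent_control_data: dict   # position/size of the transparent control layer
    control_data: dict               # position/size of the control layer
    # content correspondence: (start_sec, end_sec) -> content prompt text
    content_map: Dict[Tuple[int, int], str] = field(default_factory=dict)
    # playing-state correspondence: state name -> prompt text
    state_map: Dict[str, str] = field(default_factory=dict)

data = VideoComprehensiveData(
    target_video=b"",
    transparent_control_data={"x": 0, "y": 0, "w": 1080, "h": 1920},
    control_data={"x": 0, "y": 1700, "w": 1080, "h": 220},
    content_map={(0, 10): "Opening scene", (10, 30): "Product demo"},
    state_map={"paused": "Playback paused", "playing": "Now playing"},
)
```

A client that receives such a structure has everything it needs for the two prompt types: the time-period table for content prompts and the state table for play-condition prompts.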
In the foregoing solution, the responding to the play instruction, playing a target video based on the video integrated data, and displaying a transparent control layer and a control layer in an overlay manner on a display interface of the target video includes:
and responding to the playing instruction, extracting the target video from the video comprehensive data for playing, and respectively displaying the transparent control layer and the control layer in a superposed manner on a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data.
In the foregoing solution, the responding to the play instruction, extracting the target video from the video integrated data to play, and respectively displaying the transparent control layer and the control layer in a superimposed manner on a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, includes:
in response to the playing instruction, extracting the target video from the video comprehensive data for playing, and sequentially superposing the transparent control layer and the control layer on the display interface of the target video along a direction perpendicular to the display interface, according to the first display area corresponding to the transparent-control related data and the second display area corresponding to the control related data; the transparent control layer is displayed in the first display area and the control layer is displayed in the second display area;
wherein the second display area partially covers the first display area.
In the foregoing solution, the responding to the play instruction, extracting the target video from the video integrated data to play, and respectively displaying the transparent control layer and the control layer in a superimposed manner on a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, includes:
in response to the playing instruction, extracting the target video from the video comprehensive data for playing, and displaying the transparent control layer and the control layer arranged side by side in the same plane as the display interface of the target video, according to the first display area corresponding to the transparent-control related data and the second display area corresponding to the control related data; the transparent control layer is displayed in the first display area and the control layer is displayed in the second display area;
wherein the first display area and the second display area do not overlap.
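The two layout alternatives above — a control layer that partially covers the transparent layer's area, versus two areas that do not overlap at all — can be illustrated with a simple rectangle-intersection check. The coordinates are hypothetical.

```python
def rects_overlap(a, b):
    """Return True if rectangles a and b, given as (x, y, w, h), intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

full_screen = (0, 0, 1080, 1920)

# Alternative 1: control layer stacked above, partially covering the first area
stacked_control = (0, 1700, 1080, 220)
assert rects_overlap(full_screen, stacked_control)

# Alternative 2: side-by-side display areas that do not overlap
area1 = (0, 0, 1080, 1700)
area2 = (0, 1700, 1080, 220)
assert not rects_overlap(area1, area2)
```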
In the foregoing solution, before the responding to the trigger operation for the transparent control layer and/or the control layer and performing content prompt and/or play condition prompt on the target video according to the video integrated data, the method further includes:
and acquiring the playing information of the current moment of the target video, wherein the playing information represents the playing progress and/or the playing state of the current moment of the target video.
In the above scheme, the playing information includes: current playing time information and/or current playing state information;
the responding to the triggering operation aiming at the transparent control layer and/or the control layer, and performing content prompt and/or play condition prompt on the target video according to the video comprehensive data comprises the following steps:
in response to a first trigger operation aiming at the transparent control layer, finding the current content prompt information corresponding to the current playing time information in the content corresponding relation, and carrying out content prompt;
and/or, in response to a second trigger operation for the control layer, finding the current playing condition information corresponding to the current playing state information in the playing condition corresponding relation, and performing playing condition prompt.
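The two trigger handlers above amount to lookups in the two correspondences: the first trigger maps the current playing time to a content prompt, the second maps the current playing state to a play-condition prompt. This is a minimal sketch with assumed table contents and state names.

```python
content_map = {(0, 10): "Opening scene", (10, 30): "Product demo"}
state_map = {"paused": "Playback paused", "playing": "Now playing"}

def content_prompt(current_sec, content_map):
    """First trigger: find the prompt for the time period containing current_sec."""
    for (start, end), text in content_map.items():
        if start <= current_sec < end:
            return text
    return None  # no period matches the current playing time

def play_condition_prompt(current_state, state_map):
    """Second trigger: find the prompt for the current playing state."""
    return state_map.get(current_state)

assert content_prompt(12, content_map) == "Product demo"
assert play_condition_prompt("paused", state_map) == "Playback paused"
```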
In the foregoing solution, the searching for the current content prompt information corresponding to the current play time information in the content correspondence in response to the first trigger operation for the transparent control layer, and performing content prompt include:
in response to a first trigger operation aiming at the transparent control layer, counting the total prompting number of content prompting information which is already prompted before the current time;
and if the total number of prompts is less than a preset number of content prompt messages, finding the current content prompt information corresponding to the current playing time information in the content correspondence and giving a content prompt.
In the foregoing solution, after counting, in response to the first trigger operation for the transparent control layer, the total number of content prompt messages that have been prompted before the current time, the method further includes: if the total number of prompts is not less than the preset number of content prompt messages, giving no content prompt.
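The prompt-count limit above can be sketched as a small counter that suppresses content prompts once the preset number has been reached; the class name and limit value are illustrative.

```python
class PromptCounter:
    """Hypothetical helper enforcing the preset content-prompt limit."""

    def __init__(self, max_prompts):
        self.max_prompts = max_prompts  # preset number of content prompt messages
        self.total = 0                  # prompts already given before the current time

    def try_prompt(self, text):
        """Return the prompt if still under the limit, otherwise None."""
        if self.total < self.max_prompts:
            self.total += 1
            return text
        return None  # limit reached: no content prompt is given

counter = PromptCounter(max_prompts=2)
assert counter.try_prompt("Opening scene") == "Opening scene"
assert counter.try_prompt("Product demo") == "Product demo"
assert counter.try_prompt("Closing scene") is None
```

Limiting repeated prompts keeps the accessibility output short, which matches the stated aim of simplifying prompt content.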
In the foregoing solution, the searching, in response to the second trigger operation for the control layer, for the current playing condition information corresponding to the current playing state information in the playing condition correspondence, and performing playing condition prompt includes:
responding to a second trigger operation aiming at the control layer, and converting the playing state of the target video into a preset state;
if the preset state is a pause state, finding the current playing-condition information corresponding to the pause state in the playing-condition correspondence and giving a pause prompt, the current playing-condition information indicating that playing is paused, and then converting the pause state into the playing state.
In the foregoing solution, after the response to the second trigger operation for the control layer is made, and the playing state of the target video is converted into a preset state, the method further includes:
if the preset state is a playing state, finding the current playing-condition information corresponding to the playing state in the playing-condition correspondence and giving a playing prompt, the current playing-condition information indicating that the video is playing, and then converting the playing state into the pause state.
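The second-trigger behavior described in the last two paragraphs — announce the preset state, then toggle it — can be sketched as a tiny state machine. State names and prompt texts are assumptions.

```python
state_map = {"paused": "Playback paused", "playing": "Now playing"}

def on_second_trigger(preset_state):
    """Announce the preset state, then toggle it, returning (prompt, next_state)."""
    prompt = state_map[preset_state]
    # per the description: a pause prompt is followed by a switch back to
    # playing, and a playing prompt by a switch to paused
    next_state = "playing" if preset_state == "paused" else "paused"
    return prompt, next_state

assert on_second_trigger("paused") == ("Playback paused", "playing")
assert on_second_trigger("playing") == ("Now playing", "paused")
```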
An embodiment of the present invention further provides a video processing apparatus, including:
the first sending unit is used for jumping to a video playing interface through touch operation on the current interface and sending a video request instruction to the server;
the first triggering unit is used for triggering a playing instruction on the video playing interface according to the video comprehensive data fed back by the server responding to the video request instruction in a video barrier-free mode;
the playing unit is used for responding to the playing instruction, playing a target video based on the video comprehensive data, and superposing and displaying a transparent control layer and a control layer on a display interface of the target video; the transparent control layer and the control layer are positioned in different display areas on the video playing interface;
and the second trigger unit is used for responding to the trigger operation aiming at the transparent control layer and/or the control layer and carrying out content prompt and/or play condition prompt on the target video according to the video comprehensive data.
An embodiment of the present invention further provides a video processing apparatus, which includes a first memory and a first processor, where the first memory stores a computer program that can be executed on the first processor, and the first processor implements the steps in the method when executing the computer program.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a first processor, implements the steps in the method as described above.
In the embodiment of the invention, on the current interface, a touch operation jumps to a video playing interface and sends a video request instruction to a server. In the video barrier-free mode, a playing instruction is triggered on the video playing interface according to the video comprehensive data fed back by the server in response to the video request instruction. In response to the playing instruction, a target video is played based on the video comprehensive data, and a transparent control layer and a control layer are displayed overlaid on the target video; the transparent control layer and the control layer are located in different display areas on the video playing interface. In response to a trigger operation on the transparent control layer and/or the control layer, a content prompt and/or play-condition prompt for the target video is given according to the video comprehensive data. When the target object needs a video prompt, triggering the transparent control layer or the control layer yields, respectively, a short content prompt or a play-condition prompt.
Drawings
Fig. 1 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating an optional effect of a video processing method according to an embodiment of the present invention;
fig. 3 is a schematic side cross-sectional view of a terminal of a video processing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating an alternative effect of a video processing method according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating an alternative effect of a video processing method according to an embodiment of the present invention;
fig. 8 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 9 is a schematic diagram illustrating an alternative effect of a video processing method according to an embodiment of the present invention;
fig. 10 is a schematic diagram illustrating an alternative effect of a video processing method according to an embodiment of the present invention;
fig. 11 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 12 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 13 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 14 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 15 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 16 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 17 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 18 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 19 is a schematic flow chart of an alternative video processing method according to an embodiment of the present invention;
fig. 20 is an interaction diagram of a video processing method according to an embodiment of the present invention;
fig. 21 is a first schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 22 is a first hardware entity diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 23 is a second schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 24 is a hardware entity diagram of a video processing apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the invention are described in further detail below with reference to the drawings and embodiments. The described embodiments should not be construed as limiting the invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where descriptions such as "first/second" appear in this patent document, the terms "first", "second", and "third" are used merely to distinguish similar objects and do not imply a particular ordering. It should be understood that "first/second/third" objects may be interchanged where permitted, so that the embodiments of the invention described herein can be practiced in an order other than that illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Fig. 1 is an optional flowchart of a video processing method according to an embodiment of the present invention, which is applied to a terminal and will be described with reference to the steps shown in fig. 1.
And S01, jumping to a video playing interface through touch operation on the current interface, and sending a video request instruction to the server.
In the embodiment of the invention, the terminal receives the touch operation of the target object on the current interface. The terminal jumps to the video playing interface specified by the touch operation and simultaneously sends a video request instruction to the server over a communication link pre-established with the server. The video request instruction is used to request the corresponding video comprehensive data from the server, and may be a character string or encoded information.
In the embodiment of the invention, the terminal receives an operation instruction from the target object and opens a video application. The terminal displays one interface of the application as the current interface and acquires the touch operation of the target object on that interface. In response to the touch operation, the terminal jumps from the current interface to the video playing interface while sending a video request instruction to the server.
In the embodiment of the invention, the touch operation can be a single-click on the current interface or a double-click on the current interface.
Fig. 2 is a schematic diagram illustrating an optional effect of the video processing method according to the embodiment of the present invention. The current interface 10 includes: picture 1 and link 1 corresponding to a first video, picture 2 and link 2 corresponding to a second video, and picture 3 and link 3 corresponding to a third video. The terminal acquires the touch operation of the target object on the current interface 10 for the first video; the touch operation may be a single-click on picture 1 or link 1 of the first video. The terminal jumps to the video playing interface 20 in response to the touch operation and sends a video request instruction to the server.
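Since the description notes that the video request instruction may be a character string or encoded information, the following sketch shows both hypothetical forms. The command format and field names are assumptions, not from the patent.

```python
import json

def build_request_string(video_id):
    """Character-string form of the video request instruction (assumed format)."""
    return f"REQ_VIDEO:{video_id}"

def build_request_encoded(video_id, accessible_mode):
    """Encoded form: a JSON payload serialized to bytes (assumed format)."""
    payload = {
        "cmd": "request_video",
        "video_id": video_id,
        "accessible": accessible_mode,  # lets the server pick what to feed back
    }
    return json.dumps(payload).encode("utf-8")

assert build_request_string("video-1") == "REQ_VIDEO:video-1"
decoded = json.loads(build_request_encoded("video-1", True))
assert decoded["accessible"] is True
```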
And S02, in the video barrier-free mode, according to the video comprehensive data fed back by the server responding to the video request instruction, triggering a playing instruction on a video playing interface.
In the embodiment of the invention, the terminal receives the video comprehensive data fed back by the server in response to the video request instruction. The target object may set the terminal to the barrier-free mode in advance in the terminal's settings, or may enable the barrier-free mode on the current interface. In the barrier-free mode, the terminal receives a playing instruction triggered by the target object on the video playing interface, after which the video playing interface starts to play the target video.
The playing instruction can be a single click or double click on a video playing button in the video playing interface.
In the embodiment of the invention, if the terminal is not in the video barrier-free mode when it sends the video request instruction to the server, the server feeds back only the target video itself to the terminal. The terminal then receives the playing instruction triggered for the target video on the video playing interface and plays the target video.
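The mode-dependent server response above can be sketched as a simple branch: in barrier-free mode the full comprehensive data is returned, otherwise only the target video. The dictionary layout of the comprehensive data is an assumption.

```python
def serve_video_request(accessible_mode, comprehensive_data):
    """Return the full comprehensive data in barrier-free mode,
    otherwise only the target video."""
    if accessible_mode:
        return comprehensive_data  # video + layer data + correspondences
    return {"target_video": comprehensive_data["target_video"]}

data = {"target_video": b"", "content_map": {}, "state_map": {}}
assert serve_video_request(True, data) is data
assert set(serve_video_request(False, data)) == {"target_video"}
```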
And S03, responding to the playing instruction, playing the target video based on the video comprehensive data, and superposing and displaying the transparent control layer and the control layer on the display interface of the target video.
In the embodiment of the invention, the terminal receives the playing instruction corresponding to the target video and responds to the playing instruction. And the terminal extracts the target video from the video comprehensive data to play. And the terminal superposes and displays the transparent control layer and the control layer on the display interface of the target video.
In the embodiment of the present invention, the video comprehensive data includes: the target video, transparent control related data of the transparent control layer, control related data of the control layer, the content correspondence between a plurality of time periods of the target video and the corresponding content prompt information, and the playing situation correspondence between the playing state of the target video and the corresponding playing situation information. The transparent control related data can represent position information or size information of the transparent control layer, or relative position information of the transparent control layer and the control layer, or the logic of how to perform a content prompt after the terminal acquires the trigger operation of the target object on the transparent control layer. The control related data can represent position information or size information of the control layer, or relative position information of the control layer and the transparent control layer, or the logic of how to prompt the playing situation after the terminal acquires the trigger operation of the target object on the control layer.
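The composition of the video comprehensive data described above can be sketched as follows. This is a minimal Python illustration only; all field names are hypothetical and are not part of the embodiment.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class VideoComprehensiveData:
    # Illustrative structure of the packaged "video comprehensive data".
    target_video: str                       # e.g. a video resource identifier
    transparent_control_data: dict          # position/size/logic of the transparent control layer
    control_data: dict                      # position/size/logic of the control layer
    # content correspondence: (start_s, end_s) time period -> content prompt text
    content_correspondence: Dict[Tuple[int, int], str] = field(default_factory=dict)
    # playing situation correspondence: playing state -> playing situation text
    play_situation_correspondence: Dict[str, str] = field(default_factory=dict)
```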
In the embodiment of the invention, the terminal extracts the target video from the video comprehensive data. The terminal constructs a transparent control layer and a control layer on the display interface of the target video. Fig. 3 is a schematic diagram illustrating an optional effect of the video processing method according to the embodiment of the present invention. The terminal first constructs a transparent control layer 12 with the same size as the target video display interface on the display interface of the target video 13; secondly, the terminal constructs a control layer 11 smaller than the target video display interface on the transparent control layer 12. Fig. 4 is a schematic diagram illustrating an optional effect of the video processing method according to the embodiment of the present invention. The control layer 11 is at the bottom of the display interface, and the target object can watch the target video 13 through the transparent control layer 12.
The transparent control layer is of a transparent structure, and a target object can watch videos conveniently. The control layer may be a semi-transparent or opaque structure.
And S04, responding to the trigger operation aiming at the transparent control layer and/or the control layer, and carrying out content prompt and/or play condition prompt on the target video according to the video comprehensive data.
In the embodiment of the invention, the terminal receives the trigger operation of the target object on the transparent control layer and/or the control layer. And the terminal responds to the trigger operation, extracts the current content prompt information and/or the current playing condition information of the target object at the current moment from the video comprehensive data and prompts.
The trigger operation can be that the target object clicks the transparent control layer or double-clicks the control layer. The current content prompt information may be the subject of the content played in the target video at the current moment. For example, if weather-related information is played at the current moment of the target video, the current content prompt information may be: "the weather is clear, the temperature is 15 °C, suitable for going out". The current playing situation information may be the playing state information of the target video at the current moment; for example, if the playing state of the target video at the current moment is pause, the current playing situation information may be: "video pause, one click can play the video".
In the embodiment of the invention, the terminal receives the click operation of the target object on the transparent control layer. The terminal responds to the click operation. And the terminal executes the transparent logic data and searches the current content prompt information corresponding to the current moment in the video comprehensive data according to the current moment of the target video. The terminal can prompt the current content prompt information on a display interface through voice prompt or through characters. The target video comprises a plurality of time periods, and each time period corresponds to respective content prompt information.
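The lookup of the current content prompt information by the current playback moment can be sketched as follows. This is an illustrative Python sketch assuming half-open `[start, end)` time periods in seconds; the function name and data shape are assumptions, not from the embodiment.

```python
def find_content_prompt(content_correspondence, current_time_s):
    """Return the content prompt whose time period contains current_time_s.

    content_correspondence maps (start_s, end_s) -> prompt text; periods are
    assumed half-open [start, end). Returns None if no period matches.
    """
    for (start, end), prompt in content_correspondence.items():
        if start <= current_time_s < end:
            return prompt
    return None
```

The terminal could then announce the returned text by voice or display it on the interface, as the embodiment describes.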
In the embodiment of the invention, the terminal receives the click operation of the target object on the control layer. The terminal responds to the click operation. The terminal executes the control logic data and, according to the playing state of the target video at the current moment, searches the video comprehensive data for the current playing situation information corresponding to that playing state.
In the embodiment of the invention, on the current interface, skipping to a video playing interface through touch operation and sending a video request instruction to a server; in the video barrier-free mode, triggering a playing instruction on a video playing interface according to video comprehensive data fed back by a server responding to a video request instruction; responding to a playing instruction, playing a target video based on the video comprehensive data, and overlaying and displaying a transparent control layer and a control layer on the target video; the transparent control layer and the control layer are positioned in different display areas on the video playing interface; responding to the triggering operation aiming at the transparent control layer and/or the control layer, and carrying out content prompt and/or play condition prompt on the target video according to the video comprehensive data; when the target object needs video prompt, triggering operation can be carried out on the transparent control layer or the control layer, and then short content prompt or play condition prompt can be obtained respectively.
In some embodiments, referring to fig. 5, fig. 5 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S03 shown in fig. 1 may be implemented by S05, which will be described with reference to the steps.
And S05, responding to the playing instruction, extracting the target video from the video comprehensive data to play, and respectively displaying the transparent control layer and the control layer in a superposition manner on the display interface of the target video according to the first display area corresponding to the transparent control related data and the second display area corresponding to the control related data.
In the embodiment of the invention, the terminal responds to the playing instruction and extracts the target video from the video comprehensive data for playing. The terminal executes the program in the transparent control related data to construct a first display area on the display interface of the target video, and displays the transparent control layer in the first display area. Meanwhile, the terminal executes the program in the control related data to construct a second display area on the display interface of the target video, and displays the control layer in the second display area.
Wherein the first display area is larger than the second display area. The first display area and the second display area can be sequentially distributed in a superposed manner in the vertical direction of the display interface; at this time, since the transparent control layer is of a transparent structure, the target object can clearly see the content of the target video. In other embodiments, the first display area and the second display area may be distributed in the horizontal direction of the display interface; since the first display area is larger than the second display area and the transparent control layer is of a transparent structure, the target object can also clearly see the content of the target video.
In some embodiments, referring to fig. 6, fig. 6 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S05 shown in fig. 5 may be implemented by S06, which will be described with reference to the steps.
And S06, responding to the playing instruction, extracting the target video from the video comprehensive data and playing it; on the display interface of the target video, sequentially superposing the transparent control layer and the control layer along a direction perpendicular to the display interface, displaying the transparent control layer in a first display area corresponding to the transparent control related data, and displaying the control layer in a second display area corresponding to the control related data.
In the embodiment of the invention, the terminal responds to the playing instruction of the target object and extracts the target video from the video comprehensive data for playing. Meanwhile, the terminal constructs a first display area on the display interface according to the related data of the transparent control, and the terminal constructs a second display area on the display interface according to the related data of the control. And the first display area and the second display area are sequentially distributed in a superposition manner along a direction perpendicular to the display interface. And the terminal displays the transparent control layer in the first display area and displays the control layer in the second display area.
Wherein, the area of the first display region may be the same as the area of the display interface. The area of the second display region may be one third or one fourth of the area of the display interface, which is not limited herein. The first display area partially covers the second display area.
Fig. 7 is a schematic diagram illustrating an optional effect of the video processing method according to the embodiment of the present invention. With the target video 13 at the bottom layer, the transparent control layer 12 and the control layer 11 are sequentially displayed on the display interface of the target video 13 in a stacked manner. The area of the control layer 11 is much smaller than the area of the transparent control layer 12. Since the transparent control layer 12 is of a transparent structure, the target object can see the target video 13 through the transparent control layer 12.
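The stacked arrangement described above can be sketched as a layer list, bottom to top. This is an illustrative Python sketch; the layer dictionary keys, the `control_fraction` default of one third, and the zero-alpha convention for full transparency are all assumptions for illustration.

```python
def build_stacked_layers(video_w, video_h, control_fraction=1 / 3):
    # Bottom-to-top z-order: target video, transparent control layer (same
    # size as the video interface, fully transparent), and control layer
    # (a fraction of the interface, e.g. one third or one fourth).
    return [
        {"name": "target_video", "size": (video_w, video_h), "z": 0},
        {"name": "transparent_control_layer", "size": (video_w, video_h),
         "z": 1, "alpha": 0.0},
        {"name": "control_layer",
         "size": (video_w, round(video_h * control_fraction)), "z": 2},
    ]
```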
In some embodiments, referring to fig. 8, fig. 8 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S05 shown in fig. 5 may be implemented by S07, which will be described with reference to the steps.
And S07, responding to the playing instruction, extracting the target video from the video comprehensive data and playing it; arranging and displaying the transparent control layer and the control layer in sequence along the same horizontal plane of the display interface of the target video, according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data; displaying the transparent control layer in the first display area and displaying the control layer in the second display area.
In the embodiment of the invention, the terminal responds to the playing instruction of the target object and extracts the target video from the video comprehensive data for playing. Meanwhile, the terminal constructs a first display area on the display interface according to the related data of the transparent control, and the terminal constructs a second display area on the display interface according to the related data of the control. And the first display area and the second display area are sequentially arranged and displayed on the same horizontal plane of the display interface of the target video. And the terminal displays the transparent control layer in the first display area and displays the control layer in the second display area.
The area of the first display area is smaller than that of the display interface. The area of the first display region may be 0.8 times the area of the display interface. The area of the second display region may be 0.2 times the area of the display interface. The first display area and the second display area do not overlap.
Fig. 9 is a schematic diagram illustrating an optional effect of the video processing method according to the embodiment of the present invention. The target video 13 is arranged at the bottom layer, and the transparent control layer 12 and the control layer 11 are displayed on the display interface of the target video 13 in a horizontal arrangement. The area of the transparent control layer 12 is smaller than that of the display interface, and the area of the control layer 11 is far smaller than that of the transparent control layer 12. The sum of the areas of the transparent control layer 12 and the control layer 11 is equal to the area of the target video 13. In the embodiment of the present invention, the control layer 11 is displayed at the bottom of the display interface; in other embodiments, as shown in fig. 10, the control layer 11 may also be displayed at one side of the display interface.
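The side-by-side partition with the 0.8 / 0.2 area split mentioned above can be sketched as follows. This is an illustrative Python sketch; the function name and the width-wise split are assumptions (the embodiment only fixes the area ratio, not the axis).

```python
def split_horizontal(interface_width, interface_height, transparent_ratio=0.8):
    # First display area (transparent control layer) takes transparent_ratio
    # of the interface width; the second (control layer) takes the remainder.
    # The two areas do not overlap and together cover the interface.
    first = (round(interface_width * transparent_ratio), interface_height)
    second = (interface_width - first[0], interface_height)
    return first, second
```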
In some embodiments, referring to fig. 11, fig. 11 is an optional flowchart of a video processing method according to an embodiment of the present invention, and S04 shown in fig. 1 may be implemented through S08-S10, which will be described with reference to the steps.
And S08, acquiring the playing information of the current moment of the target video.
In the embodiment of the invention, the terminal acquires the playing information of the current moment of the target video through a preset program. The playing information represents the playing progress and/or the playing state of the target video at the current moment.
For example, the playing information may be time information that the current playing progress of the target video is 13 minutes and 18 seconds, or playing state information that the playing state of the target video is paused.
And S09, in response to the first trigger operation aiming at the transparent control layer, finding out the current content prompt information corresponding to the current playing time information in the content corresponding relation, and carrying out content prompt.
In the embodiment of the invention, the terminal responds to the first trigger operation of the target object aiming at the transparent control layer, and the terminal determines the current content prompt information corresponding to the time period to which the current playing time information belongs in the content corresponding relation in the video comprehensive data. And the terminal prompts the current content prompt information.
In the embodiment of the invention, the content correspondence includes the correspondence between a plurality of time periods of the target video and the corresponding content prompt information. The content prompt information represents the content information played by the target video in the corresponding time period.
In the embodiment of the invention, the contents played in the multiple time periods of the target video can be different, so that the content prompt messages corresponding to the multiple time periods can also be different.
The first trigger operation can be that the target object clicks or double clicks the transparent control layer.
And S10, responding to the second trigger operation aiming at the control layer, finding the current playing condition information corresponding to the current playing state information in the playing condition corresponding relation, and prompting the playing condition.
In the embodiment of the invention, the terminal responds to the second trigger operation of the target object aiming at the control layer, and the terminal determines the current playing condition information corresponding to the current playing state information in the playing condition corresponding relation in the video comprehensive data. And the terminal prompts the current playing situation information.
In the embodiment of the present invention, the current playing state information may be characterized as follows: a video play state or a video pause state. The current playing situation information corresponding to the video playing state may be: "video play, one click can pause play". The current playing situation information corresponding to the video pause state may be: "video pause, single click can start play".
The second trigger operation can be that the target object clicks or double-clicks the control layer.
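The playing situation correspondence for S10 can be sketched as a simple state-to-text mapping. This is an illustrative Python sketch; the state keys `"playing"` and `"paused"` are assumed names, while the prompt strings follow the examples given above.

```python
# Hypothetical playing situation correspondence: playing state -> prompt text.
PLAY_SITUATION = {
    "playing": "video play, one click can pause play",
    "paused": "video pause, single click can start play",
}

def situation_prompt(state):
    # Look up the current playing situation information for the given state.
    return PLAY_SITUATION[state]
```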
In some embodiments, referring to fig. 12, fig. 12 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S09-S10 shown in fig. 11 can be implemented by S11-S14, which will be described with reference to the steps.
And S11, in response to the first trigger operation aiming at the transparent control layer, counting the total prompting number of the content prompting information which is already prompted before the current time.
In the embodiment of the invention, the terminal responds to the first trigger operation of the target object aiming at the transparent control layer, and the terminal counts the total prompting quantity of the content prompting information which is already prompted before the current moment.
For example, if the terminal has already prompted 8 pieces of content prompting information before the current time, the total number of prompts may be 8.
And S12, if the total number of prompts is less than the number of preset content prompt messages, finding the current content prompt message corresponding to the current playing time information in the content corresponding relation, and prompting the content.
In the embodiment of the invention, the terminal compares the total prompt quantity with the preset content prompt information quantity in the content correspondence in the video comprehensive data. If the total prompt quantity is less than the preset content prompt information quantity, the terminal searches the content correspondence for the current content prompt information corresponding to the time period to which the current playing moment information belongs. The terminal then prompts the current content prompt information.
In the embodiment of the present invention, the content correspondence includes a correspondence between a plurality of time periods of the target video and the corresponding content prompt information, and if the target video has 10 time periods, the number of the content prompt information is preset to be 10.
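The counting logic of S11-S12 (and S15 below, where prompting stops once the cap is reached) can be sketched as follows. This is an illustrative Python sketch; the class and method names are assumptions.

```python
class PromptCounter:
    """Tracks how many content prompts were already given (S11) and gates
    further prompts against the preset quantity (S12/S15)."""

    def __init__(self, preset_count):
        self.preset_count = preset_count  # e.g. the number of time periods
        self.prompted = 0

    def should_prompt(self):
        # Prompt only while fewer than preset_count prompts were given;
        # otherwise no content prompt is performed.
        if self.prompted < self.preset_count:
            self.prompted += 1
            return True
        return False
```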
And S13, responding to the second trigger operation aiming at the control layer, and converting the playing state of the target video into a preset state.
In the embodiment of the invention, the terminal responds to the second trigger operation of the target object aiming at the control layer, and the terminal converts the playing state of the target video into the preset state.
In the embodiment of the invention, the preset state is set according to the current playing state of the target video. And if the current playing state of the target video is the playing state, the terminal responds to the second trigger operation to convert the playing state of the target video into a pause state. The pause state is a preset state. And if the current playing state of the target video is the pause state, the terminal responds to the second trigger operation to convert the playing state of the target video into the playing state. The playing state is the preset state.
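The state conversion of S13 is a simple toggle between the two playing states, as the paragraph above explains. A minimal Python sketch, with assumed state names:

```python
def toggle_play_state(current_state):
    # S13 sketch: the second trigger operation on the control layer converts
    # the playing state to the preset state, i.e. play <-> pause.
    return "paused" if current_state == "playing" else "playing"
```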
And S14, if the preset state is a pause state, finding the current playing situation information corresponding to the pause state in the playing situation corresponding relation, and prompting the pause playing situation.
In the embodiment of the invention, if the preset state is the pause state, the terminal searches the current playing situation information corresponding to the pause state in the playing situation corresponding relation. And the terminal carries out the pause playing condition prompt according to the current playing condition information.
The current playing situation information may be: "video pause, single click can start video play".
In some embodiments, referring to fig. 13, fig. 13 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S12 shown in fig. 12 may be implemented by S15, which will be described with reference to the steps.
And S15, if the total prompting quantity is not less than the preset content prompting information quantity, not prompting the content.
In the embodiment of the invention, the terminal compares the total prompt quantity with the preset content prompt quantity. And if the total prompt quantity is not less than the preset content prompt information quantity, the terminal does not prompt the content.
In some embodiments, referring to fig. 13, fig. 13 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S14 shown in fig. 12 may be implemented by S16, which will be described with reference to the steps.
And S16, if the preset state is the playing state, finding the current playing situation information corresponding to the playing state in the corresponding relation of the playing situation, and prompting the playing situation.
In the embodiment of the present invention, if the preset state is the playing state, the terminal finds the current playing situation information corresponding to the playing state in the playing situation corresponding relationship, and the terminal prompts the playing situation according to the current playing situation information.
The current playing situation information may be: "video play, one click can pause video play".
Fig. 14 is an alternative flowchart of a video processing method according to an embodiment of the present invention, which is applied to a server and will be described with reference to the steps shown in fig. 14.
And S21, receiving a video request instruction sent by the terminal.
In the embodiment of the invention, the server receives the video request instruction sent by the terminal through the communication line pre-established with the terminal.
And S22, responding to the video request instruction, and feeding back the video comprehensive data corresponding to the video request instruction to the terminal.
In the embodiment of the invention, the server responds to the video request instruction, and finds the video comprehensive data corresponding to the information of the video request instruction in the database of the server according to the information in the video request instruction. The server transmits the video integrated data to the terminal through a communication line established in advance.
In the embodiment of the invention, while receiving the video request instruction of the terminal, the server also receives the information, sent by the terminal, representing that the terminal is in the video barrier-free mode. The server responds to the video request instruction and finds the video comprehensive data corresponding to the video request instruction in the database. If, while receiving the video request instruction of the terminal, the server does not receive the information representing that the terminal is in the video barrier-free mode, the server responds to the video request instruction and finds only the target video corresponding to the video request instruction in the database. The server then sends the target video to the terminal.
In the embodiment of the invention, the server sends the video comprehensive data including the target video to the terminal by receiving the video request instruction of the terminal, so that the terminal can extract the target video from the video comprehensive data, construct the corresponding transparent control layer and the control layer and further prompt the content and/or play condition.
In some embodiments, referring to fig. 15, fig. 15 is an optional flowchart of the video processing method according to the embodiment of the present invention; before S21 shown in fig. 14, S23-S30 may further be implemented, which will be described with reference to the steps.
And S23, acquiring the target video.
In the embodiment of the invention, the server acquires the target video from the original video resource library. The target video may be a movie, a short video, a news video, a music video, a documentary video, etc., and is not limited herein.
S24, dividing the target video into a plurality of sub-videos corresponding to a plurality of time segments.
In the embodiment of the invention, the server divides the target video into a plurality of sub-videos corresponding to a plurality of time periods. The server may split the target video into ten sub-videos corresponding to ten time periods, or split the target video into 100 sub-videos corresponding to 100 time periods.
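The division of S24 can be sketched as splitting the video duration into equal time periods. This is an illustrative Python sketch assuming equal-length periods expressed in seconds; the embodiment does not require the periods to be equal.

```python
def split_into_periods(duration_s, n_periods):
    # S24 sketch: divide a video of duration_s seconds into n_periods time
    # periods, returned as (start_s, end_s) pairs covering the whole video.
    step = duration_s / n_periods
    return [(round(i * step), round((i + 1) * step)) for i in range(n_periods)]
```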
S25, content presentation information corresponding to each of the plurality of sub-videos is acquired.
In the embodiment of the invention, the server acquires the content prompt information corresponding to the plurality of sub-videos from a third party.
And S26, establishing the content correspondence between the plurality of time periods of the target video and the corresponding content prompt information.
S27, acquiring the playing situation information corresponding to the playing state of the target video, and establishing the playing situation correspondence between the playing state of the target video and the corresponding playing situation information.
In the embodiment of the present invention, the playing state of the target video may be a playing state or a pause state. The server acquires play situation information respectively corresponding to the play state and the pause state. The server respectively establishes corresponding relations between corresponding playing states and corresponding playing situation information, and between pause states and corresponding playing situation information.
And S28, acquiring the related data of the transparent control layer of the target video and the related data of the control layer of the target video.
And S29, packaging the target video, the content corresponding relation, the playing condition corresponding relation, the transparent control related data and the control related data to form video comprehensive data.
And S30, storing the video comprehensive data in a video database.
In the embodiment of the invention, the video database stores the video comprehensive data corresponding to a plurality of videos. Each piece of video comprehensive data includes the corresponding video, together with the content correspondence, the playing situation correspondence, the transparent control related data and the control related data of that video.
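The packaging of S29 and the storage of S30 can be sketched as follows. This is an illustrative Python sketch; the function name, dictionary keys, and the string-keyed database are all assumptions for illustration.

```python
def package_video_comprehensive_data(target_video, content_corr,
                                     situation_corr, transparent_data,
                                     control_data):
    # S29 sketch: bundle the five parts into one video comprehensive data
    # record; key names are illustrative.
    return {
        "target_video": target_video,
        "content_correspondence": content_corr,
        "play_situation_correspondence": situation_corr,
        "transparent_control_data": transparent_data,
        "control_data": control_data,
    }

# S30 sketch: the record is stored in a video database keyed by video id.
video_database = {}
video_database["video_001"] = package_video_comprehensive_data(
    "video_001.mp4", {(0, 60): "intro"}, {"paused": "video pause"}, {}, {})
```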
In some embodiments, referring to fig. 16, fig. 16 is an optional flowchart of the video processing method according to the embodiment of the present invention, and S22 shown in fig. 14 can also be implemented through S31, which will be described in conjunction with this step.
And S31, responding to the video request instruction, finding the video comprehensive data requested by the video request instruction in the video database, and feeding back the video comprehensive data to the terminal.
In the embodiment of the invention, the server responds to the video request instruction. And the server searches the video comprehensive data corresponding to the coding information in the video database according to the coding information carried in the video request instruction. The server transmits the video integration data to the terminal through a communication line established in advance.
Fig. 17 is an alternative flowchart of a video processing method according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 17.
And S40, the terminal sets a transparent layer with the same size as the original video above the original video, and a prompt for the user is set in the transparent layer.
In the embodiment of the invention, the terminal sets a transparent layer with the same size as the original video above the display interface of the original video. The terminal sets prompt words corresponding to a plurality of time periods of the original video in the back end of the transparent layer.
The prompt words may be video content information played by the original video in the corresponding time period.
And S41, placing a control layer smaller than the original video above the transparent layer at the terminal, and setting a play and pause prompt for the user in the control layer.
In the embodiment of the invention, the terminal sets a control layer smaller than the original video above the display interface of the transparent layer, and the terminal sets prompts corresponding to the playing and pausing of the original video in the back end of the control layer.
And S42, the terminal judges whether to click the transparent layer.
In the embodiment of the invention, the terminal judges whether the target object clicks the transparent layer.
S43, the terminal prompts the current video-related summary.
If yes, the terminal prompts the current video related summary corresponding to the current moment.
S44, the terminal determines whether to click the control layer.
If not, the terminal judges whether the target object clicks the control layer or not.
And S45, the terminal prompts the video state and the gesture operation that can change the state.
If so, the terminal prompts the playing state of the original video at the current moment and the gesture operation for changing the state. For example, the playing state of the original video at the current time is a pause state, and the terminal prompts "video pause, and video play can be started by clicking".
In the embodiment of the invention, the transparent layer and the control layer are arranged in sequence above the original video: the control layer is smaller than the transparent layer, the transparent layer is the same size as the original video, the content prompts are set within the transparent layer, and the state prompts are set within the control layer.
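The layer arrangement described above can be sketched as follows. This is a minimal illustration only: the class names, the control-layer dimensions, and the z-order scheme are assumptions for the sketch, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str
    width: int
    height: int
    z_index: int  # higher values are drawn on top

def build_layer_stack(video_w: int, video_h: int) -> List[Layer]:
    """Stack a full-size transparent layer and a smaller control layer
    above the original video, in ascending z-order."""
    video = Layer("video", video_w, video_h, z_index=0)
    # The transparent layer matches the video exactly, so every tap on
    # the video area lands on it while the video remains visible.
    transparent = Layer("transparent", video_w, video_h, z_index=1)
    # The control layer is smaller than the video and sits on top.
    # Its size here (one third by one sixth) is an arbitrary example.
    control = Layer("control", video_w // 3, video_h // 6, z_index=2)
    return [video, transparent, control]
```

A renderer would draw these in ascending `z_index` order, so the control layer always receives taps inside its smaller area, and the transparent layer receives the rest.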
Fig. 18 is an alternative flowchart of a video processing method according to an embodiment of the present invention, which will be described with reference to the steps in fig. 18.
S46, the terminal arranges a transparent layer of the same size as the original video above the original video.
In the embodiment of the invention, the terminal arranges the transparent layer of the same size as the original video above the display interface of the original video. Because the layer is transparent, the target object can still conveniently watch the original video through it.
S47, the terminal sets content prompts corresponding to a plurality of time periods of the original video within the transparent layer.
In the embodiment of the invention, before the original video is sent to the terminal, the server divides the original video into sub-videos corresponding to a plurality of time periods, acquires the content prompt words corresponding to the sub-videos, and sends the original video together with the content prompt words to the terminal. The terminal then sets the content prompts corresponding to the plurality of time periods of the original video within the transparent layer.
S48, the terminal determines whether the number of prompt words given before the current time is less than a preset number.
In the embodiment of the invention, the terminal counts the number of prompts already given before the current moment and determines whether that number is less than the preset number. The preset number is the total number of content prompts corresponding to the plurality of time periods of the original video.
S49, the terminal prompts the content prompt words corresponding to the current time.
If the number is less than the preset number, the terminal prompts the content prompt words corresponding to the time period containing the current moment.
S50, the terminal does not prompt.
Otherwise, the terminal does not prompt the content.
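The S48–S50 check can be sketched as a single function. The function name and the representation of prompts as a list are illustrative assumptions; the patent only specifies the comparison against the preset number.

```python
from typing import List, Optional

def next_content_prompt(prompted_count: int, prompts: List[str]) -> Optional[str]:
    """Return the next content prompt, or None once the number of prompts
    already given reaches the preset number (here, the total number of
    content prompts for the video's time periods)."""
    if prompted_count < len(prompts):
        return prompts[prompted_count]
    return None  # S50: the terminal does not prompt further
```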
Fig. 19 is an alternative flowchart of a video processing method according to an embodiment of the present invention, which will be described with reference to the steps in fig. 19.
S51, the terminal places a control layer, smaller than the original video, over the transparent layer.
In the embodiment of the invention, the terminal sets a control layer smaller than the original video on the transparent layer. The control layer may be in a semi-transparent state.
S52, the terminal sets prompt words for playing and pausing for the user within the control layer.
In the embodiment of the invention, the terminal sets the prompt words corresponding to the playing and pause states of the original video within the control layer.
S53, the terminal determines whether the current state of the video is the pause state.
In the embodiment of the invention, the terminal determines whether the current state of the original video is the pause state.
S54, the terminal prompts that the video is paused and the gesture operation for playing it.
If the current state of the original video is the pause state, the terminal prompts: "video paused; one click plays the video".
S55, the terminal prompts that the video is playing and the gesture operation for pausing it.
If the current state of the original video is the playing state, the terminal prompts that the video is playing and that clicking pauses the playback.
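The S53–S55 branch amounts to a two-state prompt-and-toggle. A minimal sketch follows; the state strings and the exact prompt wording are assumptions for illustration.

```python
from typing import Tuple

def control_layer_prompt(state: str) -> Tuple[str, str]:
    """Return (prompt text, state after one click) for a tap on the
    control layer, given the current playback state."""
    if state == "paused":
        # S54: video is paused; one click plays it
        return ("Video paused; tap once to play.", "playing")
    if state == "playing":
        # S55: video is playing; one click pauses it
        return ("Video playing; tap once to pause.", "paused")
    raise ValueError(f"unknown playback state: {state!r}")
```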
Please refer to fig. 20, which is an interaction diagram of a video processing method according to an embodiment of the present invention, and will be described with reference to the steps in fig. 20.
S32, the terminal jumps to the video playing interface through a touch operation on the current interface and sends a video request instruction to the server.
The detailed implementation of step S32 is consistent with the implementation of step S01, and is not described here.
S33, in the video barrier-free mode, the terminal triggers a playing instruction on the video playing interface according to the video comprehensive data fed back by the server in response to the video request instruction.
The detailed implementation of step S33 is consistent with the implementation of step S02, and is not described here.
S34, the terminal responds to the playing instruction, plays the target video based on the video comprehensive data, and displays the transparent control layer and the control layer in an overlapping manner on the display interface of the target video.
The detailed implementation of step S34 is consistent with the implementation of step S03, and is not described here.
S35, the terminal responds to a trigger operation for the transparent control layer and/or the control layer, and performs a content prompt and/or a play condition prompt for the target video according to the video comprehensive data.
The detailed implementation of step S35 is consistent with the implementation of step S04, and is not described here.
Fig. 21 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention.
An embodiment of the present invention further provides a video processing apparatus, including: a first sending unit 803, a first triggering unit 804, a playing unit 805, and a second triggering unit 806.
A first sending unit 803, configured to jump to a video playing interface through touch operation on a current interface, and send a video request instruction to a server;
the first triggering unit 804 is configured to trigger a playing instruction on a video playing interface according to video comprehensive data fed back by a server in response to a video request instruction in a video barrier-free mode;
the playing unit 805 is configured to respond to a playing instruction, play the target video based on the video integrated data, and display a transparent control layer and a control layer in an overlapping manner on a display interface of the target video; the transparent control layer and the control layer are positioned in different display areas on the video playing interface;
and the second triggering unit 806 is configured to perform content prompt and/or play condition prompt on the target video according to the video comprehensive data in response to a triggering operation for the transparent control layer and/or the control layer.
In the embodiment of the present invention, the video integration data includes: the target video, the transparent control related data of the transparent control layer, the control related data of the control layer, the content correspondence between a plurality of time periods of the target video and the content prompt information, and the playing condition correspondence between the playing state of the target video and the corresponding playing condition information.
In this embodiment of the present invention, the playing unit 805 of the video processing apparatus 800 is configured to respond to a playing instruction, extract a target video from the video integrated data for playing, and respectively display a transparent control layer and a control layer in a superimposed manner on a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data.
In this embodiment of the present invention, the playing unit 805 of the video processing apparatus 800 is configured to respond to a playing instruction, extract a target video from video integrated data, play the target video, sequentially superimpose a transparent control layer and a control layer on a display interface of the target video along a direction perpendicular to the display interface according to a first display area corresponding to transparent control related data and a second display area corresponding to control related data, display the transparent layer in the first display area, and display the control layer in the second display area; wherein the second display area partially covers the first display area.
In this embodiment of the present invention, the playing unit 805 of the video processing apparatus 800 is configured to respond to a playing instruction, extract a target video from video integrated data, play the target video, sequentially arrange and display a transparent control layer and a control layer along a same horizontal plane of a display interface of the target video according to a first display area corresponding to transparent control related data and a second display area corresponding to control related data, display the transparent layer in the first display area, and display the control layer in the second display area; wherein the first display area and the second display area are not overlapped.
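The two layouts the playing unit supports — a stacked layout where the second display area partially covers the first, and a side-by-side layout where the two areas do not overlap — can be sketched geometrically. The rectangle sizes, mode names, and placement rules below are hypothetical, chosen only to make the overlap/no-overlap distinction concrete.

```python
from typing import NamedTuple, Tuple

class Rect(NamedTuple):
    x: int
    y: int
    w: int
    h: int

def overlaps(a: Rect, b: Rect) -> bool:
    """True when two axis-aligned rectangles share any area."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def layout(video: Rect, mode: str) -> Tuple[Rect, Rect]:
    """Return (first display area, second display area).
    'stacked': the control layer floats inside the video area, so the
    second area partially covers the first.
    'side_by_side': the control layer sits beside the video, so the
    two areas do not overlap."""
    transparent = video  # first display area matches the video
    if mode == "stacked":
        control = Rect(video.x + video.w - 200, video.y + video.h - 100, 200, 100)
    elif mode == "side_by_side":
        control = Rect(video.x + video.w, video.y, 200, 100)
    else:
        raise ValueError(f"unknown mode: {mode!r}")
    return transparent, control
```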
In this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to obtain the playing information of the current time of the target video, where the playing information represents the playing progress and/or the playing status of the current time of the target video.
In the embodiment of the present invention, the playing information includes: current playing time information and/or current playing state information;
in this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to, in response to the first triggering operation for the transparent control layer, find, in the content correspondence, the current content prompt information corresponding to the current play time information, and perform content prompt; and/or, in response to a second trigger operation aiming at the control layer, searching the current playing condition information corresponding to the current playing state information in the playing condition corresponding relation, and prompting the playing condition.
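Looking up the current content prompt in the content correspondence reduces to finding the time period that contains the current playing time. A minimal sketch, assuming the correspondence is stored as a list of ((start, end), prompt) pairs with half-open intervals — a representation the patent does not specify:

```python
from typing import List, Optional, Tuple

Period = Tuple[float, float]  # (start, end) of a time period, in seconds

def find_content_prompt(content_map: List[Tuple[Period, str]],
                        current_time: float) -> Optional[str]:
    """Find, in the content correspondence, the prompt whose time
    period contains the current playing time."""
    for (start, end), prompt in content_map:
        if start <= current_time < end:
            return prompt
    return None  # no period covers this moment; no content prompt
```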
In this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to, in response to the first triggering operation for the transparent control layer, count the total number of content prompt messages that have been prompted before the current time; and if the total prompt quantity is less than the preset content prompt information quantity, finding the current content prompt information corresponding to the current playing time information in the content corresponding relation, and prompting the content.
In this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to not perform content prompting if the total prompting number is not less than the preset content prompting information number.
In this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to respond to a second triggering operation for the control layer and convert the playing state of the target video into a preset state; if the preset state is the pause state, the unit finds the current playing condition information corresponding to the pause state in the playing condition correspondence, gives a pause prompt (the current playing condition information representing that playback is paused), and converts the pause state into the playing state.
In this embodiment of the present invention, the second triggering unit 806 of the video processing apparatus 800 is configured to, if the preset state is the playing state, find the current playing situation information corresponding to the playing state in the playing situation corresponding relationship, perform a playing situation prompt, where the current playing situation information represents the playing situation, and convert the playing state into the pause state.
In the embodiment of the invention, the first sending unit jumps to the video playing interface through a touch operation on the current interface and sends a video request instruction to the server; in the video barrier-free mode, the first triggering unit triggers a playing instruction on the video playing interface according to the video comprehensive data fed back by the server in response to the video request instruction; the playing unit responds to the playing instruction, plays the target video based on the video comprehensive data, and displays the transparent control layer and the control layer in an overlapping manner on the target video; and the second trigger unit responds to a trigger operation for the transparent control layer and/or the control layer and performs a content prompt and/or a play condition prompt for the target video according to the video comprehensive data. When the target object needs a video prompt, it can trigger an operation on the transparent control layer or the control layer to obtain a short content prompt or a play condition prompt, respectively.
Correspondingly, the embodiment of the present invention provides a computer readable storage medium, on which a computer program is stored, which when executed by the first processor 801 implements the steps in the method described above.
Correspondingly, the embodiment of the present invention provides a video processing apparatus 800, which includes a first memory 802 and a first processor 801, wherein the first memory 802 stores a computer program that can be executed on the first processor 801, and the first processor 801 executes the computer program to implement the steps of the method.
Here, it should be noted that: the above description of the storage medium and apparatus embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus according to the invention, reference is made to the description of the embodiments of the method according to the invention.
It should be noted that fig. 22 is a schematic diagram of a hardware entity of a video processing apparatus according to an embodiment of the present invention, as shown in fig. 22, the hardware entity of the video processing apparatus 800 includes: a first processor 801 and a first memory 802, wherein;
the first processor 801 generally controls the overall operation of the video processing apparatus 800.
The first Memory 802 is configured to store instructions and applications executable by the first processor 801, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by each module in the first processor 801 and the video processing apparatus 800, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Fig. 23 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention.
An embodiment of the present invention further provides a video processing apparatus 900, including: a receiving unit 903 and a second transmitting unit 904.
A receiving unit 903, configured to receive a video request instruction sent by a terminal.
And a second sending unit 904, configured to respond to the video request instruction, and feed back video comprehensive data corresponding to the video request instruction to the terminal.
In this embodiment of the present invention, the receiving unit 903 of the video processing apparatus 900 is configured to: obtain a target video; divide the target video into a plurality of sub-videos corresponding to a plurality of time periods; obtain the content prompt information corresponding to the plurality of sub-videos; establish the content correspondence between the plurality of time periods of the target video and the content prompt information; obtain the playing condition information corresponding to the playing states of the target video; establish the playing condition correspondence between the playing states of the target video and the corresponding playing condition information; acquire the transparent control related data of the transparent control layer of the target video and the control related data of the control layer of the target video; package the target video, the content correspondence, the playing condition correspondence, the transparent control related data, and the control related data into the video comprehensive data; and store the video comprehensive data in the video database.
In this embodiment of the present invention, the second sending unit 904 is configured to respond to the video request instruction, search the video database for the video integrated data requested by the video request instruction, and feed back the video integrated data to the terminal.
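The server-side packaging and lookup described above can be sketched as follows. The field names, the in-memory dict standing in for the video database, and the use of the video identifier as the coding information are all illustrative assumptions.

```python
from typing import Any, Dict, Optional

# In-memory stand-in for the video database (hypothetical).
VIDEO_DB: Dict[str, Dict[str, Any]] = {}

def store_video_data(video_id: str, video_bytes: bytes,
                     content_map: Dict[str, str],
                     state_map: Dict[str, str],
                     transparent_meta: Dict[str, Any],
                     control_meta: Dict[str, Any]) -> None:
    """Package the target video, the two correspondences, and the layer
    metadata into one comprehensive-data record, keyed by coding info."""
    VIDEO_DB[video_id] = {
        "video": video_bytes,
        "content_map": content_map,           # time period -> content prompt
        "state_map": state_map,               # play state  -> condition prompt
        "transparent_meta": transparent_meta, # first display area, etc.
        "control_meta": control_meta,         # second display area, etc.
    }

def handle_video_request(video_id: str) -> Optional[Dict[str, Any]]:
    """Second sending unit: look up the comprehensive data requested by
    the terminal, or None if nothing matches the coding information."""
    return VIDEO_DB.get(video_id)
```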
In the embodiment of the invention, the server receiving unit receives the video request instruction of the terminal and then sends the video comprehensive data including the target video to the terminal through the second sending unit, so that the terminal can extract the target video from the video comprehensive data and construct the corresponding transparent control layer and the control layer so as to prompt the content and/or play condition.
Correspondingly, the embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by the second processor 901 implements the steps in the above method.
Correspondingly, the embodiment of the present invention provides a video processing apparatus 900, which includes a second memory 902 and a second processor 901, where the second memory 902 stores a computer program that can be executed on the second processor 901, and the second processor 901 implements the steps in the above method when executing the program.
Here, it should be noted that: the above description of the storage medium and apparatus embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus according to the invention, reference is made to the description of the embodiments of the method according to the invention.
It should be noted that fig. 24 is a schematic diagram of a hardware entity of a video processing apparatus according to an embodiment of the present invention, as shown in fig. 24, the hardware entity of the video processing apparatus 900 includes: a second processor 901 and a second memory 902, wherein;
the second processor 901 generally operates as the overall video processing apparatus 900.
The second Memory 902 is configured to store instructions and applications executable by the second processor 901, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by each module in the second processor 901 and the video processing apparatus 900, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the unit is only a logical division, and there may be other divisions when the actual implementation is implemented, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a Read Only Memory (ROM), a magnetic disk, and an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media that can store program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (14)

1. A video processing method, comprising:
skipping to a video playing interface through touch operation on the current interface, and sending a video request instruction to a server;
in a video barrier-free mode, triggering a playing instruction on the video playing interface according to video comprehensive data fed back by the server responding to the video request instruction;
responding to the playing instruction, playing a target video based on the video comprehensive data, and overlaying and displaying a transparent control layer and a control layer on a display interface of the target video; the transparent control layer and the control layer are positioned in different display areas on the video playing interface;
and responding to the triggering operation aiming at the transparent control layer and/or the control layer, and carrying out content prompt and/or play condition prompt on the target video according to the video comprehensive data.
2. The video processing method according to claim 1, wherein the video integration data comprises: the target video, the transparent control related data of the transparent control layer, the control related data of the control layer, the content corresponding relation between a plurality of time periods of the target video and content prompt information, and the playing condition corresponding relation between the playing state of the target video and the corresponding playing condition information.
3. The video processing method according to claim 1 or 2, wherein the playing a target video based on the video integration data in response to the playing instruction, and displaying a transparent control layer and a control layer in an overlapping manner on a display interface of the target video, comprises:
and responding to the playing instruction, extracting the target video from the video comprehensive data for playing, and respectively displaying the transparent control layer and the control layer in a superposed manner on a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data.
4. The video processing method according to claim 3, wherein the extracting, in response to the play instruction, the target video from the video integrated data to play, and displaying the transparent control layer and the control layer in a display interface of the target video in an overlaid manner according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, respectively, includes:
responding to the playing instruction, extracting the target video from the video comprehensive data for playing, sequentially superposing the transparent control layer and the control layer on a display interface of the target video along a direction perpendicular to the display interface according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, displaying the transparent layer in the first display area, and displaying the control layer in the second display area;
wherein the second display area partially covers the first display area.
5. The video processing method according to claim 3, wherein the extracting, in response to the play instruction, the target video from the video integrated data to play, and displaying the transparent control layer and the control layer in a display interface of the target video in an overlaid manner according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, respectively, includes:
responding to the playing instruction, extracting the target video from the video comprehensive data for playing, sequentially arranging and displaying the transparent control layer and the control layer along the same horizontal plane of a display interface of the target video according to a first display area corresponding to the transparent control related data and a second display area corresponding to the control related data, displaying the transparent layer in the first display area, and displaying the control layer in the second display area;
wherein the first display area and the second display area do not overlap.
6. The video processing method according to claim 2, wherein before the responding to the trigger operation for the transparent control layer and/or the control layer and performing content prompt and/or play condition prompt on the target video according to the video integration data, the method further comprises:
and acquiring the playing information of the current moment of the target video, wherein the playing information represents the playing progress and/or the playing state of the current moment of the target video.
7. The video processing method of claim 6, wherein the playback information comprises: current playing time information and/or current playing state information;
the responding to the triggering operation aiming at the transparent control layer and/or the control layer, and performing content prompt and/or play condition prompt on the target video according to the video comprehensive data comprises the following steps:
in response to a first trigger operation aiming at the transparent control layer, finding the current content prompt information corresponding to the current playing time information in the content corresponding relation, and carrying out content prompt;
and/or, in response to a second trigger operation for the control layer, finding the current playing condition information corresponding to the current playing state information in the playing condition corresponding relation, and performing playing condition prompt.
8. The video processing method according to claim 7, wherein the searching for the current content prompt information corresponding to the current play time information in the content correspondence in response to the first trigger operation for the transparent control layer and performing content prompt includes:
in response to a first trigger operation aiming at the transparent control layer, counting the total prompting number of content prompting information which is already prompted before the current time;
if the total number of prompts is less than the number of preset content prompt messages, the current content prompt messages corresponding to the current playing time messages are found in the content corresponding relations, and content prompt is carried out.
9. The video processing method of claim 8, wherein after the counting of the total number of content prompts already given before the current moment in response to the first trigger operation on the transparent control layer, the method further comprises:
if the total number of prompts is not less than the preset number of content prompts, giving no content prompt.
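Claims 8 and 9 together amount to a running cap on how many content prompts are given. A minimal sketch of that counting logic, with all names hypothetical and the exact-key lookup a simplification:

```python
class ContentPrompter:
    """Suppresses content prompts once a preset maximum is reached."""

    def __init__(self, max_prompts):
        self.max_prompts = max_prompts   # preset number of content prompts
        self.total_prompted = 0          # prompts already given

    def on_first_trigger(self, content_map, current_time):
        # If the count of prompts already given is not less than the
        # preset number, give no content prompt (claim 9).
        if self.total_prompted >= self.max_prompts:
            return None
        # Otherwise look up and give the prompt for the current moment
        # (claim 8); the dict lookup stands in for the correspondence.
        prompt = content_map.get(current_time)
        if prompt is not None:
            self.total_prompted += 1
        return prompt
```

With `max_prompts=2`, the third trigger returns no prompt regardless of the correspondence contents.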
10. The video processing method of claim 7, wherein the responding to the second trigger operation on the control layer, finding, in the playing-situation correspondence, the current playing-situation information corresponding to the current playing state information, and performing the playing-situation prompt comprises:
in response to the second trigger operation on the control layer, converting the playing state of the target video into a preset state;
if the preset state is a pause state, finding, in the playing-situation correspondence, the current playing-situation information corresponding to the pause state and performing a playing-situation prompt, wherein the current playing-situation information indicates that playback is paused, and converting the pause state into the playing state.
11. The method of claim 10, wherein after the converting of the playing state of the target video into the preset state in response to the second trigger operation on the control layer, the method further comprises:
if the preset state is a playing state, finding, in the playing-situation correspondence, the current playing-situation information corresponding to the playing state and performing a playing-situation prompt, wherein the current playing-situation information indicates that playback is in progress, and converting the playing state into the pause state.
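At its core, the second-trigger behavior of claims 10 and 11 is a toggle with an announcement of the resulting situation. The sketch below simplifies the claimed two-step conversion into a single toggle; the state names and map are assumptions for illustration:

```python
def on_second_trigger(current_state, situation_map):
    """Switch the playback state and look up the announcement for the
    resulting situation. Returns (new_state, announcement)."""
    new_state = "paused" if current_state == "playing" else "playing"
    return new_state, situation_map[new_state]
```

So a trigger while playing yields the "paused" announcement and vice versa, which is what lets a user in barrier-free mode hear the effect of each tap.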
12. A video processing apparatus, comprising:
a first sending unit, configured to jump to a video playing interface through a touch operation on a current interface and to send a video request instruction to a server;
a first triggering unit, configured to trigger, in a video barrier-free mode, a playing instruction on the video playing interface according to the video comprehensive data fed back by the server in response to the video request instruction;
a playing unit, configured to play a target video based on the video comprehensive data in response to the playing instruction, and to display a transparent control layer and a control layer superimposed on a display interface of the target video, wherein the transparent control layer and the control layer are located in different display areas of the video playing interface; and
a second triggering unit, configured to perform, in response to a trigger operation on the transparent control layer and/or the control layer, a content prompt and/or a playing-situation prompt for the target video according to the video comprehensive data.
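One way to satisfy the claim-12 requirement that the transparent control layer and the control layer occupy different display areas is a fixed split of the playing interface. The geometry below (an eighth-of-screen control strip) is purely an illustrative assumption, not specified by the patent:

```python
def layout_layers(screen_w, screen_h):
    """Return (x, y, w, h) boxes: the transparent control layer covers
    the video area, the control layer covers the bottom strip."""
    control_h = screen_h // 8
    transparent_layer = (0, 0, screen_w, screen_h - control_h)
    control_layer = (0, screen_h - control_h, screen_w, control_h)
    return transparent_layer, control_layer

def regions_disjoint(a, b):
    """True when two (x, y, w, h) boxes do not overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax + aw <= bx or bx + bw <= ax or ay + ah <= by or by + bh <= ay
```

Keeping the two regions disjoint is what allows the first and second trigger operations to be distinguished purely by where the touch lands.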
13. A video processing apparatus comprising a first memory and a first processor, the first memory storing a computer program operable on the first processor, the first processor implementing the steps of the method of any one of claims 1 to 11 when executing the program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a first processor, carries out the steps of the method of any one of claims 1 to 11.
CN202110332648.9A 2021-03-26 2021-03-26 Video processing method, device and storage medium Active CN113766294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110332648.9A CN113766294B (en) 2021-03-26 2021-03-26 Video processing method, device and storage medium


Publications (2)

Publication Number Publication Date
CN113766294A true CN113766294A (en) 2021-12-07
CN113766294B CN113766294B (en) 2024-07-19

Family

ID=78786800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110332648.9A Active CN113766294B (en) 2021-03-26 2021-03-26 Video processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113766294B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079911A (en) * 2022-06-14 2022-09-20 北京字跳网络技术有限公司 Data processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170195743A1 (en) * 2015-12-30 2017-07-06 Roku, Inc. Controlling Display of Media Content
US20180048935A1 (en) * 2016-08-12 2018-02-15 International Business Machines Corporation System, method, and recording medium for providing notifications in video streams to control video playback
CN111263204A (en) * 2018-11-30 2020-06-09 青岛海尔多媒体有限公司 Control method and device for multimedia playing equipment and computer storage medium
CN111435999A (en) * 2019-01-12 2020-07-21 北京字节跳动网络技术有限公司 Method, device, equipment and storage medium for displaying information on video
CN112148408A (en) * 2020-09-27 2020-12-29 深圳壹账通智能科技有限公司 Barrier-free mode implementation method and device based on image processing and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Marc-André Carbonneau; Alexandre J. Raymond; Éric Granger; Ghyslain Gagnon: "Real-time visual play-break detection in sport events using a context descriptor", 2015 IEEE International Symposium on Circuits and Systems (ISCAS) *
GAO Enze; MAO Yajun; LI Jian: "Jointly building an information accessibility platform to promote library services for the disabled: the service status and prospects of the China Digital Library for the Blind", 新世纪图书馆 (New Century Library), no. 06 *


Also Published As

Publication number Publication date
CN113766294B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
CN110784752B (en) Video interaction method and device, computer equipment and storage medium
CN106570100B (en) Information search method and device
KR101664754B1 (en) Method, device, program and recording medium for information acquisition
CN109688475B (en) Video playing skipping method and system and computer readable storage medium
JP2020504475A (en) Providing related objects during video data playback
CN103108248B (en) A kind of implementation method of interactive video and system
CN110913241B (en) Video retrieval method and device, electronic equipment and storage medium
CN109829064B (en) Media resource sharing and playing method and device, storage medium and electronic device
CN103026681A (en) Video-based method, server and system for realizing value-added service
CN109766457A (en) A kind of media content search method, apparatus and storage medium
CN111444415B (en) Barrage processing method, server, client, electronic equipment and storage medium
KR20240042145A (en) Video publishing methods, devices, electronic equipment and storage media
JP2017538328A (en) Promotion information processing method, apparatus, device, and computer storage medium
CN111432288A (en) Video playing method and device, electronic equipment and storage medium
KR20150114386A (en) Apparatus and method for playing contents, and apparatus and method for providing contents
CN113766294A (en) Video processing method, device and storage medium
CN112954426B (en) Video playing method, electronic equipment and storage medium
KR20200008341A (en) Media play device and method for controlling screen and server for analyzing screen
CN112052315A (en) Information processing method and device
WO2017165253A1 (en) Modular communications
CN113301394B (en) Voice control method combined with user grade
CN113158094B (en) Information sharing method and device and electronic equipment
CN112073738B (en) Information processing method and device
CN115695844A (en) Display device, server and media asset content recommendation method
CN105208424A (en) Remote control method and device based on voices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant