CN114845152A - Display method and device of playing control, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114845152A
Authority
CN
China
Prior art keywords
video
target
segment
playing
length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110137898.7A
Other languages
Chinese (zh)
Other versions
CN114845152B (en)
Inventor
陈法圣 (Chen Fasheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110137898.7A
Publication of CN114845152A
Application granted
Publication of CN114845152B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display method and device of a play control, electronic equipment and a storage medium, and belongs to the technical field of computers. According to the method and the device, the target video is divided to obtain a plurality of video segments, a playing control comprising a plurality of sub-controls is drawn based on the starting time of each video segment, and the summary information of each video segment is rendered on the corresponding sub-control. A user can therefore see the summary information of each video segment at a glance when viewing the playing control, without clicking again to preview a key scenario, which simplifies the operation flow of the user when watching key scenarios, improves the intuitiveness of the playing control, and improves human-computer interaction efficiency.

Description

Display method and device of playing control, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a display method and apparatus for a play control, an electronic device, and a storage medium.
Background
With the development of computer technology and the diversification of terminal functions, a user can watch various videos on a terminal anytime and anywhere. To help the user control the video playing progress, the terminal can also display a playing progress bar at the bottom of the video interface when playing a video. A plurality of key points on the playing progress bar usually identify the start times of key scenarios in the video; after the user clicks a key point, a thumbnail frame is presented to preview the corresponding key scenario. In the above process, if the user wants to learn a key scenario, the user needs to click the key point and then preview it through the thumbnail frame, so watching key scenarios involves cumbersome operations, poor intuitiveness and low human-computer interaction efficiency.
Disclosure of Invention
The embodiment of the application provides a display method and device of a play control, electronic equipment and a storage medium, which can improve the intuitiveness of a user in watching a key plot, simplify the operation process and improve the human-computer interaction efficiency. The technical scheme is as follows:
in one aspect, a method for displaying a play control is provided, where the method includes:
acquiring a plurality of starting moments and a plurality of summary information of a plurality of video segments in a target video, wherein each video segment corresponds to one starting moment and one summary information;
based on the plurality of start moments, acquiring a playing control of the target video, wherein the playing control is used for controlling the playing progress of the target video and comprises a plurality of sub-controls respectively corresponding to the plurality of video clips;
and respectively displaying the plurality of summary information on the plurality of sub-controls.
In one aspect, a method for displaying a play control is provided, where the method includes:
sending a video playing request, wherein the video playing request is used for requesting to play a target video;
receiving the target video and display resources of a play control of the target video, wherein the play control comprises a plurality of sub-controls respectively corresponding to a plurality of video segments of the target video;
playing the target video in a video playing interface;
and displaying the plurality of sub-controls included in the playing control in the video playing interface based on the display resources of the playing control, wherein the plurality of sub-controls are used for displaying a plurality of summary information corresponding to the plurality of video segments.
In one aspect, a display device for playing a control is provided, the device including:
the first acquisition module is used for acquiring a plurality of starting moments and a plurality of summary information of a plurality of video segments in a target video, wherein each video segment corresponds to one starting moment and one summary information;
a second obtaining module, configured to obtain a playing control of the target video based on the multiple start moments, where the playing control is used to control a playing progress of the target video, and the playing control includes multiple sub-controls respectively corresponding to the multiple video segments;
and the display module is used for respectively displaying the plurality of summary information on the plurality of sub-controls.
In one possible implementation, the playing control is a target stripe, the plurality of sub-controls are a plurality of segments of the target stripe, and the second obtaining module includes:
a first determining unit, configured to determine a stripe length of the target stripe based on a video picture length of the target video;
a second determining unit, configured to determine a plurality of segment lengths of the plurality of segments based on the plurality of start moments and the stripe length, wherein the proportion of one segment length to the stripe length is equal to the proportion of the duration of the corresponding video segment to the video duration of the target video;
an obtaining unit, configured to obtain the target stripe including the plurality of segments based on the stripe length and the plurality of segment lengths.
In one possible implementation, the second determining unit is configured to:
for any video segment, subtracting the starting time of the any video segment from the starting time of the next video segment to obtain the segment duration of the any video segment;
dividing the segment duration of any video segment by the video duration of the target video to obtain a target proportion;
and multiplying the target proportion by the stripe length to obtain the corresponding segment length of any video segment.
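The three steps above (segment duration from consecutive start times, target proportion, segment length) can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the function name is invented, and the assumption that the last segment ends at the video duration follows the description of the video segments given later in this application.

```python
def segment_lengths(start_times, video_duration, stripe_length):
    """Compute per-segment lengths so that each segment's share of the
    stripe equals its video segment's share of the total duration.

    Assumes start_times is sorted and the last segment runs to the end
    of the video (illustrative sketch only).
    """
    lengths = []
    for i, start in enumerate(start_times):
        # Segment duration: next segment's start minus this one's start;
        # the last segment ends at the video duration.
        end = start_times[i + 1] if i + 1 < len(start_times) else video_duration
        target_proportion = (end - start) / video_duration
        lengths.append(target_proportion * stripe_length)
    return lengths
```

For example, a 302-second video split at 0 s, 30 s, 131 s and 238 s, rendered on a 604-pixel stripe, yields segments of roughly 60, 202, 214 and 128 pixels, and the segment lengths always sum to the stripe length.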
In one possible implementation, the display module includes:
a third determining unit, configured to determine, for any piece of summary information, a target character size of the any piece of summary information based on a number of characters of the any piece of summary information;
and the display unit is used for displaying any summary information on the corresponding segment of the target stripe based on the target character size.
In one possible implementation, the third determining unit includes:
a first determining subunit, configured to determine an initial character size of the any piece of summary information as a stripe height of the target stripe;
a second determining subunit, configured to determine a character length of the any piece of summary information based on the initial character size and the number of characters;
a third determining subunit, configured to determine the target character size based on the character length and a segment length of the corresponding segment.
In one possible implementation, the third determining subunit is configured to:
determining the initial character size as the target character size in response to the character length being less than or equal to the segment length;
determining a second target multiple of the initial character size as the target character size in response to the character length being greater than the segment length and less than or equal to a first target multiple of the segment length, the second target multiple being equal to a ratio between the segment length and character length;
determining one-half of the initial character size as the target character size in response to the character length being greater than the first target multiple of the segment length and less than or equal to a third target multiple of the segment length, wherein the first target multiple is greater than 1 and less than the third target multiple.
In one possible embodiment, the display module is further configured to:
in response to the character length being larger than the third target multiple of the segment length, displaying prompt information for prompting that the number of characters of any summary information exceeds the display capacity of the corresponding segment.
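A minimal sketch of the character-sizing rules above follows. The concrete values of the first and third target multiples are illustrative (the description only requires that the first target multiple be greater than 1 and less than the third), and the assumption that a character's width equals the font size is a simplification.

```python
def target_character_size(stripe_height, char_count, segment_length,
                          first_multiple=1.5, third_multiple=2.0):
    """Choose a font size for one segment's summary text.

    Returns None when the text exceeds the segment's display capacity,
    in which case the caller shows a prompt instead.
    """
    initial_size = stripe_height               # initial character size
    char_length = initial_size * char_count    # text width at the initial size
    if char_length <= segment_length:
        return initial_size                    # fits on one line as-is
    if char_length <= first_multiple * segment_length:
        # Shrink by the segment-length/character-length ratio to fit one line.
        return initial_size * segment_length / char_length
    if char_length <= third_multiple * segment_length:
        return initial_size / 2                # half size, wraps to two lines
    return None                                # too long for this segment
```

With a 20-pixel stripe height and a 200-pixel segment, a 5-character summary keeps the initial size, a 12-character summary is shrunk proportionally, an 18-character summary is halved, and a 25-character summary triggers the prompt.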
In one possible implementation, the first obtaining module is configured to:
displaying the editing areas of the plurality of video clips in the uploading interface of the target video, wherein the editing areas are used for editing the starting time and the summary information of the video clips;
and acquiring the plurality of starting moments and the plurality of summary information based on the editing area.
In one possible implementation, the first obtaining module is configured to:
calling a video segmentation model, and dividing the target video into the plurality of video segments, wherein the video segmentation model is used for dividing the video segments based on video content;
determining the starting moments based on the plurality of divided video clips;
and calling a summary generation model to extract the summary information of the video clips, wherein the summary generation model is used for extracting the summary information based on the video clips.
In one aspect, a display device for playing a control is provided, the device including:
the device comprises a sending module, a receiving module and a playing module, wherein the sending module is used for sending a video playing request which is used for requesting to play a target video;
a receiving module, configured to receive the target video and a display resource of a play control of the target video, where the play control includes multiple sub-controls corresponding to multiple video segments of the target video respectively;
the playing module is used for playing the target video in a video playing interface;
and a display module, configured to display, in the video playing interface, the multiple sub-controls included in the playing control based on the display resource of the playing control, where the multiple sub-controls are used to display multiple summary information corresponding to the multiple video segments.
In one possible implementation, the playing module is further configured to:
and responding to the triggering operation of any sub-control, and playing the video clip corresponding to the any sub-control.
In a possible implementation manner, the play control further displays a play progress of the target video, and the play module is further configured to:
responding to the dragging operation of the playing progress, and acquiring the stop position of the dragging operation;
and starting to play the target video from the moment corresponding to the stop position.
In one aspect, an electronic device is provided, which includes one or more processors and one or more memories, where at least one computer program is stored in the one or more memories, and loaded by the one or more processors and executed to implement the display method of the play control according to any one of the above possible implementations.
In one aspect, a storage medium is provided, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the display method of the play control according to any one of the above possible implementation manners.
In one aspect, a computer program product or computer program is provided that includes one or more program codes stored in a computer readable storage medium. One or more processors of the electronic device can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the electronic device can execute the display method of the play control according to any one of the above possible embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the target video is divided to obtain the plurality of video segments, the playing control comprising the plurality of sub-controls is drawn based on the starting time of each video segment, and the summary information of each video segment is rendered on the corresponding sub-control, so that a user can see the summary information of each video segment at a glance when viewing the playing control, without clicking again to preview a key scenario; this simplifies the operation flow of the user when watching key scenarios, improves the intuitiveness of the playing control, and improves human-computer interaction efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a display method of a play control according to an embodiment of the present application;
fig. 2 is a schematic architecture diagram of a video playing system according to an embodiment of the present application;
fig. 3 is a flowchart of a display method of a play control according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an editing area of a plurality of video clips provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an uploading interface of a target video provided by an embodiment of the present application;
fig. 6 is a schematic flowchart of a display method of a play control according to an embodiment of the present application;
fig. 7 is a schematic flowchart of uploading a target video according to an embodiment of the present application;
fig. 8 is a flowchart of a display method of a play control according to an embodiment of the present application;
fig. 9 is an interaction flowchart of a display method of a play control according to an embodiment of the present application;
fig. 10 is a schematic diagram of a play control of a target video according to an embodiment of the present application;
fig. 11 is a schematic diagram of a play control of a target video according to an embodiment of the present application;
fig. 12 is a schematic flowchart of a display method of a play control according to an embodiment of the present application;
FIG. 13 is a schematic flow chart illustrating a process for storing display resources of a playback control according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a display device of a play control according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a display device of a play control according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items that have substantially the same function; it should be understood that "first," "second," and "nth" imply no logical or temporal dependency and no limitation on the number or order of execution.
The term "at least one" in this application means one or more, and "a plurality" means two or more; for example, a plurality of first positions means two or more first positions.
Fig. 1 is a schematic diagram of an implementation environment of a display method of a play control according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 is installed and operated with an Application (APP) supporting uploading of video. Alternatively, the application may be a live application, a video-on-demand application, a short video application, a browser application, a social application, or the like. The first terminal 120 may be a terminal used by a first user, the first user logs in the application program by using the first terminal 120 and uploads the target video to the server 140 based on the application program, and the first user may edit the start time and the summary information of each video segment in an upload interface of the target video and preview a play control generated based on the start time and the summary information. The playing control is used for controlling the playing progress of the target video. In one example, when the application program is a browser application, then the first terminal 120 logs in a web page in the browser application and uploads the target video to the server 140 based on the web page.
The server 140 may be a server, a plurality of servers, a cloud computing platform, or a virtualization center, etc. The server 140 is used for providing background services for the application programs. Alternatively, the server 140 may undertake primary computational tasks and the first and second terminals 120, 160 may undertake secondary computational tasks; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
The second terminal 160 is installed and operated with an application program supporting the playing of the video. Alternatively, the application may be a live application, a video-on-demand application, a short video application, a browser application, a social application, or the like. The second terminal 160 may be a terminal used by a second user, the second user logs in the application program by using the second terminal 160, and sends a video playing request to the server 140 based on the application program, the server 140 returns a target video and a display resource of a playing control to the second terminal 160, the second terminal 160 plays the target video in a video playing interface, and when a triggering operation of the playing control by the second user is detected, the playing control is displayed in the video playing interface based on the display resource of the playing control.
The first terminal 120 and the second terminal 160 may be directly or indirectly connected to the server 140 through wired or wireless communication, and the connection manner is not limited in this embodiment of the application.
The server 140 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The first terminal 120 or the second terminal 160 may be a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart in-vehicle device, a smart watch, a smart handheld computer, a portable game device, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, an e-book reader, and the like, but is not limited thereto.
The applications installed on the first terminal 120 and the second terminal 160 may be the same or different.
The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 may be the same or different, and the first terminal 120 and the second terminal 160 may be the same terminal or different terminals. Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Fig. 2 is a schematic architecture diagram of a video playing system according to an embodiment of the present application, please refer to fig. 2, where the video playing system includes a video uploading side 201, a background service side 202, and a terminal playing side 203. Optionally, since the play control includes a plurality of sub-controls, summary information of one video clip can be displayed on each sub-control, the play control may also be referred to as an "outline progress bar".
The video uploading side 201 corresponds to the first terminal in the above implementation environment. When or after uploading a video, the first user (referred to as a video publisher, or colloquially an "uploader") can manually edit the start time and summary information (also called outline text) of each video segment in the currently uploaded target video, and can also preview the generated outline progress bar in the uploading interface.
The background service side 202 corresponds to the server in the above implementation environment. After the server collects video segmentation information (i.e., start times of video segments) and summary information labeled by a large number of users, it can use this labeled data to train machine learning models based on AI (Artificial Intelligence) technology, for example training a video segmentation model and a summary generation model respectively; the video segmentation model divides video segments based on video content, and the summary generation model extracts summary information based on video segments, achieving automatic production of the outline progress bar. On one hand, for videos that have not been labeled or edited by a user, the outline progress bar can still be generated by the machine learning models. On the other hand, after the first user uploads a video, the server can also provide the results automatically generated by the machine learning models, so that the first user can further edit and adjust them on this basis, which reduces the first user's operation difficulty.
The terminal playing side 203 corresponds to the second terminal in the above implementation environment. The second terminal downloads the target video from the server together with the display resource of the playing control, which is used for displaying the outline progress bar. When a second user (referred to as a video viewer) watches the target video, the second user can call up the outline progress bar at any time. Because the summary information of the corresponding video segment is displayed directly on each segment of the outline progress bar, the second user can intuitively grasp the video content; when the second user wants to skip ahead, the second user can directly click a segment in the outline progress bar, and playback jumps to the starting time of the corresponding video segment and starts automatically, so that the second user can quickly jump to a video segment of interest.
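The jump-to-segment behavior on the terminal playing side can be sketched as follows, assuming the terminal knows each segment's pixel length on the stripe and each video segment's start time; all names here are illustrative, not part of the claimed implementation.

```python
from bisect import bisect_right

def clicked_segment_start(click_x, segment_lengths, start_times):
    """Map a click position (in pixels) on the outline progress bar to
    the start time of the segment that was clicked."""
    # Cumulative right edge of each segment, in pixels.
    edges, right = [], 0.0
    for length in segment_lengths:
        right += length
        edges.append(right)
    index = bisect_right(edges, click_x)
    index = min(index, len(start_times) - 1)  # clamp a click on the far edge
    return start_times[index]
```

Playback then seeks to the returned start time and resumes automatically.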
Fig. 3 is a flowchart of a display method of a play control according to an embodiment of the present application. Referring to fig. 3, the embodiment is applied to an electronic device, and is described in detail below:
301. the electronic equipment acquires a plurality of starting moments and a plurality of summary information of a plurality of video segments in a target video, wherein each video segment corresponds to one starting moment and one summary information.
The electronic device may be an uploading terminal (also referred to as a first terminal) of the target video, or a watching terminal (also referred to as a second terminal) of the target video, or may also be a server, where the server may be a server cluster, a distributed system, a cloud server, and the like.
The sum of the segment durations of the plurality of video segments is equal to the video duration of the target video; that is, the plurality of video segments exactly make up the target video, the starting time of the first video segment is 00:00:00, and the ending time of the last video segment is the video duration.
In some embodiments, an application program is installed on the electronic device, the first user logs in the application program, an uploading interface of the video is displayed in the application program, and the first user can upload the target video to the server in the uploading interface and edit the start time and the summary information of each video segment in the target video. Alternatively, the application may be a live application, a video-on-demand application, a short video application, a browser application, a social application, or the like. In one example, the upload interface is a web page when the application is a browser application, and in another example, the upload interface is a functional interface within the application when the application is a video-on-demand application.
In some embodiments, the electronic device displays an editing area of the plurality of video segments in an uploading interface of the target video, wherein the editing area is used for editing the starting time and the summary information of the video segments; based on the editing area, the plurality of start times and the plurality of summary information are acquired. That is to say, the first user can manually input each start time and each summary information to the electronic device through the editing area, so as to improve the operability of the first user on the play control.
Fig. 4 is a schematic diagram of an editing area of multiple video segments according to an embodiment of the present application. Referring to fig. 4, taking a target video that shows how to make a chiffon cake as an example, the editing area 400 provides a start time input box and a summary information input box for each of five video segments (i.e., paragraphs), and the first user may add a new video segment or delete an existing one at any time. In one example, the five start times corresponding to the five video segments are: 00:00, 00:30, 02:11, 03:58 and 05:02, and the five pieces of summary information (i.e., paragraph outlines) are: introduction, food materials needed, early preparation, baking, and plate arrangement. In addition, the editing area 400 also provides a preview option for the play control; after the first user clicks the preview option, the electronic device may display the pre-generated play control in the uploading interface of the target video, so that the first user can conveniently check its display effect.
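Using the chiffon-cake example above, the "mm:ss" start times entered in the editing area can be converted to seconds with a small helper. This is a sketch; the helper name is invented, and the summary strings are the English glosses used in this description.

```python
def parse_start_time(stamp):
    """Convert an 'mm:ss' start time from the editing area to seconds."""
    minutes, seconds = stamp.split(":")
    return int(minutes) * 60 + int(seconds)

# The five paragraphs edited in the editing area of Fig. 4.
segments = [
    ("00:00", "Introduction"),
    ("00:30", "Food materials needed"),
    ("02:11", "Early preparation"),
    ("03:58", "Baking"),
    ("05:02", "Plate arrangement"),
]
start_times = [parse_start_time(stamp) for stamp, _ in segments]
print(start_times)  # → [0, 30, 131, 238, 302]
```

These start times are exactly the inputs the playing control is drawn from in the steps above.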
Fig. 5 is a schematic diagram of an upload interface of a target video provided in an embodiment of the present application, please refer to fig. 5, taking an application as a browser application as an example, a first user enters an upload interface 500 in the browser application, and can upload the target video to a server in the upload interface 500, where the upload interface 500 includes a title editing area 501, a paragraph editing area 502, a control preview area 503, a topic selection area 504, and a profile editing area 505. In an exemplary scenario, after the first user finishes uploading the target video, the first user inputs the title "how to bake chiffon cake" of the target video in the title editing area 501, sets the start time and summary information of each video segment in the paragraph editing area 502, after the first user clicks the "preview" option, the play control (i.e., outline progress bar) of the target video is displayed in the control preview area 503, then, the first user may click the alternative topics provided in the topic selection area 504, so as to add some topic labels to the target video, and finally, the first user may input the brief text of the target video in the brief editing area 505, and then upload the target video and the configuration information of the play control to the server for persistent storage through the "publish" option.
In some embodiments, in addition to the first user freely inputting each start time and each summary information, each start time may be intelligently generated by the video segmentation model, and similarly, each summary information may also be intelligently generated by the summary generation model, which can save the workload of the first user in making the outline progress bar.
In some embodiments, the electronic device invokes a video segmentation model to divide the target video into the plurality of video segments, the video segmentation model being configured to divide the video segments based on video content; determining a plurality of starting moments based on the plurality of divided video segments; and calling a summary generation model to extract the summary information of the video clips, wherein the summary generation model is used for extracting the summary information based on the video clips.
In the process, the video segments can be automatically divided through the video segmentation model, the abstract information can be automatically generated through the abstract generation model, the starting time of model output and the abstract information can be used as a reference result, and the first user can conduct fine adjustment on the basis, so that the workload of the first user is greatly reduced.
It should be noted that either the video segmentation model or the abstract generation model may be stored locally on the electronic device, so that the electronic device can call the model offline at any time, or may be stored on a remote server; after receiving the target video, the server calls the model to output a reference result and then sends the reference result to the electronic device, so as to save the computing resources of the electronic device.
302. The electronic equipment obtains a playing control of the target video based on the plurality of starting moments, the playing control is used for controlling the playing progress of the target video, and the playing control comprises a plurality of sub-controls respectively corresponding to the plurality of video clips.
In some embodiments, the playback control is a target stripe and the plurality of child controls are segments of the target stripe. Optionally, the target stripe is also referred to as an outline progress bar; the target stripe may be an elongated strip or an annular strip, and the shape of the target stripe is not specifically limited in the embodiments of the present application.
It should be noted that, in the embodiment of the present application, the summary information of each video segment is displayed on each sub-control of the play control, so that the summary information of each video segment can be observed at a glance based on the play control, and the play progress bar carrying the summary information may also be referred to as an outline progress bar, that is, a play progress bar with an outline.
In some embodiments, the electronic device determines a slice length of the target slice based on a video picture length of the target video; determining a plurality of segment lengths of the plurality of segments based on the plurality of start moments and the stripe length, wherein a proportion of a segment length to the stripe length is equal to a proportion of a duration of a corresponding video segment to the target video; based on the stripe length and the plurality of segment lengths, the target stripe including the plurality of segments is obtained.
In the process, the electronic device can draw a target band comprising a plurality of segments according to each starting moment, so that the target band is segmented according to the segment time length of the video segment, and the time length occupation ratio of the corresponding video segment in the whole target video is visually displayed through the segment length.
In some embodiments, the electronic device may directly determine the video frame length of the target video as the band length of the target band, or the electronic device may determine a numerical value obtained by scaling the video frame length according to the scaling size of the upload interface as the band length, which is not specifically limited in this embodiment of the present application.
In some embodiments, for any video segment, the electronic device may subtract the start time of the video segment from the start time of the next video segment to obtain the segment duration of the video segment; divide the segment duration by the video duration of the target video to obtain a target proportion; and multiply the target proportion by the stripe length to obtain the segment length corresponding to the video segment.
In an exemplary embodiment, taking the ith (i ≥ 1) video clip as an example, assume that the video picture length of the target video is W and the video picture height is H, so that the stripe length of the target stripe is equal to the video picture length W. The electronic equipment can subtract the start time t_i of the ith video segment from the start time t_(i+1) of the (i+1)th video segment to obtain the segment duration T_i of the ith video segment, divide the segment duration T_i by the video duration T of the target video to obtain a target proportion T_i/T, and multiply the target proportion T_i/T by the stripe length W to obtain the segment length L_i = (T_i/T) × W corresponding to the ith video segment.
In some embodiments, the electronic device repeatedly performs the above operation of determining the corresponding segment length on each video segment, and traverses all the video segments, so as to determine the segment lengths of all the segments in the target band.
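The traversal above can be sketched as follows; this is a minimal illustration of the L_i = (T_i/T) × W computation, where the function and parameter names (start_times, video_duration, strip_length) are assumptions, not from the embodiment:

```python
def segment_lengths(start_times, video_duration, strip_length):
    """Map each video segment to a proportional segment of the target strip.

    start_times: sorted start times (seconds) of each segment; the first is 0.
    video_duration: total duration T of the target video (seconds).
    strip_length: length W of the target strip (e.g., in pixels).
    """
    lengths = []
    for i, t_i in enumerate(start_times):
        # T_i = t_(i+1) - t_i; the last segment ends at the video duration.
        t_next = start_times[i + 1] if i + 1 < len(start_times) else video_duration
        duration_i = t_next - t_i
        # L_i = (T_i / T) * W
        lengths.append(duration_i / video_duration * strip_length)
    return lengths
```

By construction, the segment lengths sum to the strip length, so the segments exactly tile the target strip.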
303. The electronic equipment respectively displays the plurality of summary information on the plurality of sub-controls.
In some embodiments, for any piece of summary information, the electronic device may determine a target character size of the summary information based on its number of characters, and display the summary information on the corresponding segment of the target band based on the target character size.
In the above process, because the number of characters of the summary information usually differs between video segments, flexibly determining the target character size from the number of characters ensures that the summary information does not exceed the display capacity of the corresponding segment, avoiding incomplete display of the summary information.
In other embodiments, the electronic device may further set a character size corresponding to the band height of the target band for each piece of summary information, and scroll-display the summary information exceeding the display capacity, so that the computing resources of the electronic device can be saved.
In some embodiments, when determining the target character size, the electronic device may determine the initial character size of the summary information as the band height of the target band; determine the character length of the summary information based on the initial character size and the number of characters; and determine the target character size based on the character length and the segment length of the corresponding segment.
In an exemplary embodiment, taking the summary information of the ith video segment (i.e., the ith summary information) as an example, the electronic device initializes the character size p of the ith summary information based on the stripe height h of the target stripe, that is, determines the initial character size p = h, determines the width of each character in the ith summary information under the initial character size from the font library, and sums the widths of the individual characters to obtain the character length l_i of the ith summary information.
In the process, the character length of the summary information is compared with the length of the corresponding segment, whether the initial character size needs to be adjusted or not can be determined, the display attractiveness of the summary information can be improved, and the summary information is matched with the length of the corresponding segment in the target band when being displayed.
In some embodiments, the electronic device determines the initial character size as the target character size in response to the character length being less than or equal to the segment length; determining a second target multiple of the initial character size as the target character size in response to the character length being greater than the segment length and less than or equal to a first target multiple of the segment length, the second target multiple being equal to a ratio between the segment length and character length; in response to the character length being greater than the first target multiple of the segment length and less than or equal to a third target multiple of the segment length, determining one-half of the initial character size as the target character size, wherein the first target multiple is greater than 1 and less than the third target multiple.
The first target multiple may be any value greater than 1 and less than the third target multiple, for example, the first target multiple is 1.5; since the segment length is smaller than the character length, the second target multiple (the ratio between the segment length and the character length) is a numerical value larger than 0 and smaller than 1; the third target multiple may be any value greater than the first target multiple, for example, the third target multiple is 2.
In some embodiments, where one-half of the initial character size is determined to be the target character size, the electronic device may also add a line break after the middle character of the summary information, so that the summary information is displayed in two lines in the corresponding segment of the target band.
In the process, the initial character size can be adaptively adjusted according to the size relationship between the character length of the summary information and the corresponding segment length to obtain the final target character size, so that the situations that adjacent segments in the target band are overlapped in characters and the like are avoided.
In some embodiments, the electronic device displays a prompt message in response to the character length being greater than the third target multiple of the segment length, the prompt message being used to prompt that the number of characters of any summary information exceeds the display capacity of the corresponding segment.
In the above process, if the length of the character exceeds the third target multiple of the segment length, the first user may be prompted that the length of the summary information is too long, so as to prompt the first user to manually edit the summary information, thereby avoiding affecting the aesthetic degree of the playing control.
In one exemplary embodiment, taking the first target multiple as 1.5 and the third target multiple as 2: if the character length l_i of the ith summary information is less than or equal to the segment length L_i of the ith video segment, the length of the ith summary information is legal and it can be displayed normally in the corresponding segment, with the target character size equal to the initial character size; if l_i is greater than L_i and less than or equal to 1.5 times L_i, the length of the ith summary information is legal, but the target character size needs to be reduced to L_i/l_i times the original initial character size p; if l_i is greater than 1.5 times L_i and less than or equal to 2 times L_i, the length of the ith summary information is legal, but the target character size needs to be reduced to 1/2 of the original initial character size p, and a line break also needs to be added at the middle position of the summary information; if l_i is greater than 2 times L_i, prompt information is displayed, the prompt information being used to prompt that the content of the ith summary information is too long.
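The four-branch sizing rule can be sketched as below; this is a minimal illustration under the embodiment's example values (1.5 and 2), with hypothetical function and parameter names:

```python
def target_char_size(char_len, seg_len, initial_size,
                     first_multiple=1.5, third_multiple=2.0):
    """Return (char_size, needs_line_break, overflow) for one summary.

    char_len: rendered width l_i of the summary at the initial character size.
    seg_len: segment length L_i in the target strip.
    initial_size: p, initialized to the strip height h.
    """
    if char_len <= seg_len:
        return initial_size, False, False          # fits as-is
    if char_len <= first_multiple * seg_len:
        # shrink by L_i / l_i so the text exactly fits on one line
        return initial_size * seg_len / char_len, False, False
    if char_len <= third_multiple * seg_len:
        # halve the size and wrap onto two lines at the middle character
        return initial_size / 2, True, False
    return initial_size, False, True               # too long: prompt the user
```

When the third return value is true, the caller would display the prompt information instead of rendering the summary.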
In some embodiments, the step of displaying, by the server, each piece of summary information on each segment of the target band means that the target band is drawn according to the segment length and the band height of each segment, and each piece of summary information is rendered on a corresponding segment according to the corresponding target character size.
FIG. 6 is a schematic flowchart of a display method of a play control according to an embodiment of the present application. As shown in 600, taking the ith (i ≥ 1) video clip as an example, the electronic device subtracts the start time t_i of the ith video segment from the start time t_(i+1) of the (i+1)th video segment to obtain the segment duration T_i of the ith video segment, and determines, according to the segment duration T_i and the video duration T of the target video (namely the total video duration), the segment length L_i of the ith video segment in the target strip (namely the length of the progress bar segment). In addition, based on the summary information of the ith video segment, the electronic device determines the character length of the summary information (namely the length of the outline characters), judges whether the current length is legal and whether a line break is needed by comparing the progress bar segment length with the outline character length, and, based on the judgment result, renders a target strip carrying the summary information in the uploading interface.
Fig. 7 is a schematic flow chart of uploading a target video. As shown in 700, taking the first user as an up master as an example, after the up master uploads the target video in an uploading interface of a web page or APP, the number of video segments of the target video, the start time of each video segment, and the summary information (i.e., the segment outline) of each video segment are acquired by combining automatic algorithm detection (i.e., automatic model generation) and manual input. According to these three items of information, a play control (i.e., an outline progress bar) carrying the summary information is rendered in the uploading interface, together with prompts such as whether there are too many characters. The up master manually checks the display effect of the play control and whether the currently previewed outline progress bar is acceptable; if not, the up master can manually edit each start time and summary information and regenerate a new outline progress bar. Once the up master confirms that the outline progress bar is acceptable, the target video and the configuration information of the outline progress bar are uploaded to the server, where the configuration information at least includes the start time and summary information of each video segment.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiment of the application, the target video is divided to obtain the plurality of video segments, the playing control comprising the plurality of sub-controls is drawn based on the starting time of each video segment, the summary information of each video segment is rendered on each sub-control, so that a user can clearly see the summary information of each video segment when seeing the playing control, the key plot is not required to be clicked again to refine the key plot, the operation flow of the user when watching the key plot is simplified, the intuitiveness of the playing control is improved, and the man-machine interaction efficiency is improved.
Fig. 8 is a flowchart of a display method of a play control according to an embodiment of the present application. Referring to fig. 8, the embodiment is applied to an electronic device, and is described by taking the electronic device as a first terminal as an example, and includes:
801. and the first terminal uploads the target video in an uploading interface.
In some embodiments, an application is installed on the first terminal, the first user logs in the application, an uploading interface of the video is displayed in the application, and the first user can upload the target video to the server in the uploading interface. Alternatively, the application may be a live application, a video-on-demand application, a short video application, a browser application, a social application, or the like.
In one example, the upload interface is a web page when the application is a browser application, and in another example, the upload interface is a functional interface within the application when the application is a video-on-demand application.
802. And the first terminal displays the editing areas of the plurality of video clips of the target video in the uploading interface, wherein the editing areas are used for editing the starting time and the summary information of the video clips.
The sum of the segment durations of the plurality of video segments is equal to the video duration of the target video, that is, the plurality of video segments just form the target video, the starting time of the first video segment is 00:00:00, and the ending time of the last video segment is the video duration.
803. The first terminal acquires a plurality of start times and a plurality of summary information of the plurality of video segments based on the editing area, wherein each video segment corresponds to one start time and one summary information.
Steps 802 and 803 provide a possible implementation in which the first terminal obtains a plurality of start times and a plurality of pieces of summary information of the plurality of video segments in the target video. That is to say, the first user can manually input each start time and each piece of summary information to the first terminal through the editing area, which improves the operability of the first user on the play control.
In other embodiments, besides that the first user freely inputs each start time and each summary information, each start time may be intelligently generated by the video segmentation model, and similarly, each summary information may also be intelligently generated by the summary generation model, which can save the workload of the first user in making the outline progress bar.
Optionally, the first terminal invokes a video segmentation model to divide the target video into the plurality of video segments, where the video segmentation model is used to divide the video segments based on the video content; determining the plurality of starting moments based on the plurality of divided video clips; and calling a summary generation model to extract the summary information of the video clips, wherein the summary generation model is used for extracting the summary information based on the video clips.
In the process, the video segments can be automatically divided through the video segmentation model, the abstract information can be automatically generated through the abstract generation model, the starting time of model output and the abstract information can be used as a reference result, and the first user can conduct fine adjustment on the basis, so that the workload of the first user is greatly reduced.
It should be noted that either the video segmentation model or the abstract generation model may be stored locally on the first terminal, so that the first terminal can call the model offline at any time, or may be stored on a remote server; after receiving the target video, the server calls the model to output a reference result and sends the reference result to the first terminal, so as to save the computing resources of the first terminal.
804. The first terminal determines the stripe length of the target stripe based on the video picture length of the target video.
In some embodiments, the first terminal may directly determine the video frame length of the target video as the band length of the target band, or the first terminal may determine a value obtained by scaling the video frame length according to the scaling size of the upload interface as the band length, which is not specifically limited in this embodiment of the present application.
805. The first terminal determines a plurality of segment lengths of a plurality of segments of the target band based on the plurality of start times and the band length, wherein a proportion of a segment length to the band length is equal to a proportion of a duration of the corresponding video segment to the target video.
In some embodiments, for any video segment, the first terminal subtracts the start time of the video segment from the start time of the next video segment to obtain the segment duration of the video segment; divides the segment duration by the video duration of the target video to obtain a target proportion; and multiplies the target proportion by the stripe length to obtain the segment length corresponding to the video segment.
The first terminal repeatedly performs the above operations on each video clip, traverses all the video clips, and can determine the segment lengths of all the segments in the target band.
806. The first terminal acquires the target stripe including the plurality of segments based on the stripe length and the plurality of segment lengths.
Optionally, the target strip may be an elongated strip, or may also be an annular strip, and the shape of the target strip is not specifically limited in the embodiments of the present application.
The above steps 804-806, which take the playing control of the target video as the target stripe as an example, show a possible implementation of obtaining the playing control of the target video based on the plurality of start times, where the playing control is used for controlling the playing progress of the target video, the playing control includes a plurality of sub-controls respectively corresponding to the plurality of video segments, and the plurality of sub-controls are the plurality of segments of the target stripe.
807. The first terminal displays the plurality of summary information on a plurality of segments of the target band respectively.
Step 807 is similar to step 303, and is not described herein.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiment of the application, the target video is divided to obtain the plurality of video segments, the playing control comprising the plurality of sub-controls is drawn based on the starting time of each video segment, the summary information of each video segment is rendered on each sub-control, so that a user can clearly see the summary information of each video segment when seeing the playing control, the key plot is not required to be clicked again to refine the key plot, the operation flow of the user when watching the key plot is simplified, the intuitiveness of the playing control is improved, and the man-machine interaction efficiency is improved.
Fig. 9 is an interaction flowchart of a display method for a play control according to an embodiment of the present application, and with reference to fig. 9, the embodiment is applied to an interaction process between a second terminal and a server, and includes:
901. and the second terminal sends a video playing request to the server, wherein the video playing request is used for requesting to play the target video.
In some embodiments, the second terminal is installed with an application program, the second user logs in the application program, displays a plurality of video covers in the application program, determines a video corresponding to any video cover as a target video in response to a click operation of the second user on any video cover, acquires a video identifier of the target video, and sends a video playing request carrying the video identifier to the server.
902. And the server responds to the video playing request and sends the target video and the display resources of the playing control of the target video to the second terminal, wherein the playing control comprises a plurality of sub-controls respectively corresponding to a plurality of video clips of the target video.
Optionally, the display resource of the playing control includes a plurality of start times and a plurality of summary information of a plurality of video segments of the target video, so that the second terminal performs similar steps to those in the above embodiments to display the playing control, and thus the second terminal can adapt to different sizes of video playing screens to adaptively render the playing control with a suitable size.
Optionally, the display resource of the playing control includes layout information of the playing control, and the layout information specifies the width and height of each sub-control and the character size of each abstract information, so that the second terminal does not need to recalculate the layout information, and the playing control is directly displayed based on the layout information, thereby saving the processing resource of the second terminal.
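For illustration only, the layout information in such a display resource might be serialized along the following lines; every field name here is a hypothetical assumption, since the embodiment does not specify a format:

```python
# Hypothetical layout information the server might send so the second
# terminal can render the play control without recomputing geometry.
layout = {
    "strip": {"length": 720, "height": 24},
    "segments": [
        {"start": "00:00", "summary": "Introduction", "length": 60, "char_size": 24},
        {"start": "00:30", "summary": "Food materials needed", "length": 202, "char_size": 18},
        # ... one entry per remaining video segment
    ],
}
```

Sending precomputed widths and character sizes like this is what allows the terminal to skip the sizing computation entirely, at the cost of fixing one layout for all screen sizes.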
In some embodiments, the server receives the video playing request, analyzes the video playing request to obtain a video identifier of a target video, queries, using the video identifier as an index, a target video and display resources of a playing control stored corresponding to the index from a video library, and sends the target video and the display resources of the playing control to the second terminal. Optionally, the server performs streaming transmission on the target video based on a streaming transmission protocol, so that the second terminal can download the target video while watching the target video, thereby shortening the waiting time of the second user.
903. And the second terminal receives the target video and the display resource of the playing control of the target video.
In the process, the second terminal receives the target video and the display resource of the playing control returned by the server.
904. And the second terminal plays the target video in the video playing interface.
In some embodiments, the second terminal displays a video playing interface in which the target video is played based on a video player.
905. And the second terminal displays the plurality of sub-controls included by the playing control in the video playing interface based on the display resource of the playing control, wherein the plurality of sub-controls are used for displaying a plurality of summary information corresponding to the plurality of video segments.
In some embodiments, when the second terminal plays the target video, the plurality of sub-controls are displayed in the video playing interface directly based on the display resource of the playing control.
In other embodiments, while playing the target video, the second user may call up the display of the play control through some triggering operation, optionally including but not limited to: at least one of clicking the video playing interface, double-clicking the video playing interface, long-pressing the bottom area of the video playing interface, a voice instruction, or a gesture instruction; the type of the triggering operation is not specifically limited in the embodiment of the present application.
In some embodiments, the playback control is a target stripe, and the plurality of child controls are a plurality of segments included in the target stripe. On the basis, the second terminal may display the target stripe in a target area of the video playing picture, for example, the target area is a top area, or the target area is a bottom area, and the embodiment of the present application does not specifically limit the position of the target area.
In some embodiments, the playing control may be used as another outline progress bar besides the traditional playing progress bar, so that a second user can conveniently select a progress bar more conforming to the operation habit for interaction, or the playing control is directly used to replace the traditional playing progress bar, so that the video playing interface has a more concise display layout.
In some embodiments, the second terminal responds to the triggering operation of the second user on any sub-control, and plays the video segment corresponding to the any sub-control. That is, when the second user clicks the segment of the outline progress bar, the second user can directly jump to the start time of the corresponding video segment and play the video segment.
In some embodiments, the playing control also displays the playing progress of the target video, and the played portion (i.e. the segment before the playing progress) and the non-played portion (i.e. the segment after the playing progress) have different display manners.
In some embodiments, the second terminal responds to a dragging operation of the second user on the playing progress, and obtains a stop position of the dragging operation; the target video is played from a time corresponding to the stop position. That is, the second user may drag the play progress (commonly referred to as a play scale), thereby performing the same function of adjusting the play progress as the conventional play progress bar.
In some embodiments, if the distance between the touch position of the second user on the target strip and the playing progress is smaller than the distance threshold, it is determined that the second user drags the playing progress instead of clicking the corresponding segment, so that the second user can be prevented from mistakenly touching, the human-computer interaction efficiency is improved, and the user experience is optimized.
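The drag-versus-click disambiguation based on a distance threshold can be sketched as follows; the function name, coordinate convention, and threshold value are illustrative assumptions:

```python
def interpret_touch(touch_x, progress_x, segment_bounds, distance_threshold=12):
    """Decide whether a touch on the strip drags the playhead or clicks a segment.

    touch_x: touch position along the strip (pixels).
    progress_x: current playhead position along the strip (pixels).
    segment_bounds: list of (start_x, end_x) half-open intervals per segment.
    Returns ('drag', None) or ('click', segment_index).
    """
    if abs(touch_x - progress_x) < distance_threshold:
        return 'drag', None  # near the playhead: treat as a progress drag
    for idx, (lo, hi) in enumerate(segment_bounds):
        if lo <= touch_x < hi:
            return 'click', idx  # jump to this segment's start time
    return 'click', None
```

A 'drag' result adjusts the playing progress, while a 'click' result seeks to the start time of the indexed video segment.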
Fig. 10 is a schematic diagram of a playing control of a target video provided in an embodiment of the present application, please refer to fig. 10, where an outline progress bar 1001 (also called a playing control) is displayed in a bottom region of a video playing interface 1000, summary information (also called outline content) of a corresponding video segment is displayed on each segment of the outline progress bar 1001, a ratio of a length of each segment to a length of the outline progress bar is equal to a ratio of a segment duration of each video segment to a total duration of the video, in addition, a background color of the outline progress bar 1001 may change along with the playing progress to indicate a current playing progress of the target video, and the outline progress bar 1001 completely replaces a conventional playing progress bar to control a video playing progress and intuitively displays the outline content of each video segment.
Fig. 11 is a schematic diagram of a playing control of a target video provided in an embodiment of the present application. Referring to fig. 11, a traditional progress bar 1101 and an outline progress bar 1102 are displayed in the bottom region of a video playing interface 1100. The second user may adjust the playing progress of the target video based on the traditional progress bar 1101, and may also intuitively learn the outline content of each video segment from the outline progress bar 1102; of course, the playing progress of the target video may also be adjusted based on the outline progress bar 1102. In this way, a variety of options for controlling the video playing progress are provided to the second user, which allows the second user to select the control method that best matches his or her usage habits and improves human-computer interaction efficiency.
Fig. 12 is a schematic flowchart of a display method of a playing control according to an embodiment of the present application, as shown in 1200. After receiving a video playing request from the second terminal, the server queries the corresponding target video and the outline progress bar data (including at least a plurality of start times and a plurality of pieces of summary information). The server may use the outline progress bar data directly as the display resource of the playing control, or use layout information that the video uploader adjusted, based on the outline progress bar data, while previewing the video. The server then issues the data, that is, delivers the target video and the display resource of the playing control to the second terminal. The second terminal plays the target video in the video playing interface; if a triggering operation by which the second user invokes the progress bar is detected, the outline progress bar is rendered in the video playing interface, and if the second user clicks a segment of the progress bar, the second terminal jumps to the start time of the corresponding video segment and plays that video segment.
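As an illustration of the outline progress bar data described above, the display resource delivered to the second terminal might be assembled from the start times and the summary information as follows. This is a sketch only; the field names are assumptions, not the wire format of this application.

```python
def build_display_resource(video_id, start_times, summaries, video_duration):
    """Pair each start time with its summary and derive segment end times,
    producing a display resource for the playing control."""
    segments = []
    for i, (start, summary) in enumerate(zip(start_times, summaries)):
        # Each segment ends where the next one begins; the last segment
        # ends at the total video duration.
        end = start_times[i + 1] if i + 1 < len(start_times) else video_duration
        segments.append({"start": start, "end": end, "summary": summary})
    return {"video_id": video_id, "segments": segments}
```

A click on a sub-control would then seek to the `start` field of the corresponding segment entry.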
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiment of the present application, when the target video is played, the playing control is displayed in the video playing interface based on the display resource of the playing control, and the plurality of pieces of summary information are respectively displayed on the plurality of sub-controls included in the playing control. The user can therefore see the summary information of each video segment at a glance when looking at the playing control, without clicking repeatedly to locate the key plot, which simplifies the operation flow when watching key plots, improves the intuitiveness of the playing control, and improves human-computer interaction efficiency.
Fig. 13 is a schematic flowchart of storing a display resource of a playing control according to an embodiment of the present application, as shown in 1300, illustrating the processing flow on the background service side. After the first user uploads the target video, the plurality of start times and the plurality of pieces of summary information, the server performs operations such as transcoding and detection on the target video and assigns a video identifier to the target video. The server then performs a content review on the plurality of pieces of summary information, for example, a technician reviews them manually or a machine learning model automatically checks them, to detect whether they contain illegal content (for example, certain sensitive words). After the content review is passed, the target video is transcoded, and the video identifier, the plurality of start times and the plurality of pieces of summary information are correspondingly stored in a progress bar database.
In some embodiments, for a target video that passes the content review, since the plurality of start times and the plurality of pieces of summary information can be regarded as naturally annotated information, the target video can be fed into the training process of a machine learning model, to train a video segmentation model capable of automatically dividing video segments and a summary generation model capable of automatically extracting summaries. Optionally, the server performs embedding processing on the target video, trains a video segmentation model using the video features obtained by the embedding processing and the plurality of start times, and segments each video with the video segmentation model. A summary generation model is then trained based on the segmentation result (each video segment) and the embedding data of the corresponding video segment, so that for videos without user annotations, the outline progress bar can still be constructed using the video segmentation model and the summary generation model.
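One way to turn the uploader's start times into supervision for such a segmentation model is sketched below. This is an assumption about the training setup, not something stated in this application; in practice the labels would be paired with the frame-level video features obtained by the embedding processing.

```python
def boundary_labels(start_times, video_duration, fps=1):
    """Convert user-annotated segment start times into per-frame boundary
    labels (1 = a video segment starts at this frame), usable as training
    targets for a video segmentation model."""
    n = int(video_duration * fps)
    labels = [0] * n
    for t in start_times:
        # Clamp to the last frame in case a start time equals the duration.
        idx = min(int(t * fps), n - 1)
        labels[idx] = 1
    return labels
```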
Fig. 14 is a schematic structural diagram of a display device of a play control according to an embodiment of the present application, please refer to fig. 14, where the device includes:
a first obtaining module 1401, configured to obtain a plurality of start times and a plurality of summary information of a plurality of video segments in a target video, where each video segment corresponds to one start time and one summary information;
a second obtaining module 1402, configured to obtain a playing control of the target video based on the multiple start moments, where the playing control is used to control a playing progress of the target video, and the playing control includes multiple sub-controls corresponding to the multiple video segments respectively;
a display module 1403, configured to display the plurality of summary information on the plurality of child controls respectively.
In the apparatus provided by the embodiment of the present application, the target video is divided into a plurality of video segments, a playing control including a plurality of sub-controls is drawn based on the start time of each video segment, and the summary information of each video segment is rendered on the corresponding sub-control. The user can thus see the summary information of each video segment at a glance when looking at the playing control, without clicking repeatedly to locate the key plot, which simplifies the operation flow when watching key plots, improves the intuitiveness of the playing control, and improves human-computer interaction efficiency.
In a possible implementation manner, the playing control is a target stripe, the multiple sub-controls are multiple segments of the target stripe, and based on the apparatus composition in fig. 14, the second obtaining module 1402 includes:
a first determining unit, configured to determine a stripe length of the target stripe based on a video picture length of the target video;
a second determining unit, configured to determine, based on the plurality of start times and the stripe length, segment lengths of the plurality of segments, where the proportion of a segment length to the stripe length is equal to the proportion of the duration of the corresponding video segment to that of the target video;
an obtaining unit configured to obtain the target stripe including the plurality of segments based on the stripe length and the plurality of segment lengths.
In one possible implementation, the second determining unit is configured to:
for any video segment, subtracting the starting time of the next video segment from the starting time of the any video segment to obtain the segment duration of the any video segment;
dividing the segment duration of any video segment by the video duration of the target video to obtain a target proportion;
and multiplying the target proportion by the stripe length to obtain the corresponding segment length of any video segment.
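The three steps above (segment duration, target proportion, segment length) can be sketched in Python as follows. This is an illustrative sketch; the function and argument names are not part of this application.

```python
def segment_lengths(start_times, video_duration, stripe_length):
    """Compute the on-screen length of each segment so that its share of
    the stripe equals its share of the total video duration."""
    lengths = []
    for i, start in enumerate(start_times):
        # Segment duration: next start time minus this start time; the
        # last segment runs to the end of the video.
        nxt = start_times[i + 1] if i + 1 < len(start_times) else video_duration
        seg_duration = nxt - start
        # Target proportion: segment duration over total video duration.
        target_ratio = seg_duration / video_duration
        # Segment length: target proportion times the stripe length.
        lengths.append(target_ratio * stripe_length)
    return lengths
```

By construction the segment lengths sum to the stripe length, so the segments tile the target stripe exactly.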
In one possible implementation, based on the device composition of fig. 14, the display module 1403 includes:
a third determining unit, configured to determine, for any piece of summary information, a target character size of the piece of summary information based on the number of characters of the piece of summary information;
and a display unit, configured to display the piece of summary information on the corresponding segment of the target stripe based on the target character size.
In one possible embodiment, based on the apparatus composition of fig. 14, the third determining unit includes:
a first determining subunit, configured to determine the initial character size of the piece of summary information as the stripe height of the target stripe;
a second determining subunit, configured to determine the character length of the piece of summary information based on the initial character size and the number of characters;
a third determining subunit, configured to determine the target character size based on the character length and the segment length of the corresponding segment.
In one possible embodiment, the third determining subunit is configured to:
determining the initial character size as the target character size in response to the character length being less than or equal to the segment length;
determining a second target multiple of the initial character size as the target character size in response to the character length being greater than the segment length and less than or equal to a first target multiple of the segment length, the second target multiple being equal to the ratio of the segment length to the character length;
and determining one half of the initial character size as the target character size in response to the character length being greater than the first target multiple of the segment length and less than or equal to a third target multiple of the segment length, where the first target multiple is greater than 1 and less than the third target multiple.
In one possible implementation, the display module 1403 is further configured to:
and in response to the character length being greater than the third target multiple of the segment length, displaying prompt information for prompting that the number of characters of the piece of summary information exceeds the display capacity of the corresponding segment.
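Taken together, the sizing rules of the third determining subunit and the overflow prompt above can be sketched as follows. The concrete first and third target multiples (2 and 4 here) are illustrative assumptions; this application only requires that the first target multiple be greater than 1 and less than the third target multiple.

```python
def target_char_size(num_chars, stripe_height, seg_length,
                     first_multiple=2.0, third_multiple=4.0):
    """Pick a character size for a summary so it fits its segment,
    or return None when the text exceeds the segment's display capacity."""
    initial = stripe_height                 # initial size = stripe height
    char_length = initial * num_chars       # width of the text at that size
    if char_length <= seg_length:
        return initial
    if char_length <= first_multiple * seg_length:
        # Second target multiple = segment length / character length.
        return (seg_length / char_length) * initial
    if char_length <= third_multiple * seg_length:
        return initial / 2
    return None  # caller displays the "exceeds display capacity" prompt
```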
In one possible implementation, the first obtaining module 1401 is configured to:
displaying the editing areas of the plurality of video clips in the uploading interface of the target video, wherein the editing areas are used for editing the starting time and the summary information of the video clips;
based on the editing area, the plurality of start times and the plurality of summary information are acquired.
In one possible implementation, the first obtaining module 1401 is configured to:
calling a video segmentation model, and dividing the target video into the plurality of video segments, wherein the video segmentation model is used for dividing the video segments based on video content;
determining the plurality of starting moments based on the plurality of divided video clips;
and calling a summary generation model to extract the summary information of the video clips, wherein the summary generation model is used for extracting the summary information based on the video clips.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the display apparatus for a play control provided in the foregoing embodiment, when the play control is displayed, only the division of the function modules is illustrated, and in practical applications, the function distribution can be completed by different function modules according to needs, that is, the internal structure of the electronic device is divided into different function modules, so as to complete all or part of the functions described above. In addition, the display apparatus of the play control and the display method embodiment of the play control provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the display method embodiment of the play control, and are not described herein again.
Fig. 15 is a schematic structural diagram of a display device of a play control according to an embodiment of the present application, please refer to fig. 15, where the device includes:
a sending module 1501, configured to send a video playing request, where the video playing request is used to request to play a target video;
a receiving module 1502, configured to receive the target video and a display resource of a playing control of the target video, where the playing control includes a plurality of sub-controls corresponding to a plurality of video segments of the target video respectively;
the playing module 1503 is configured to play the target video in the video playing interface;
a display module 1504, configured to display, in the video playing interface, the multiple sub-controls included in the playing control based on the display resource of the playing control, where the multiple sub-controls are used to display multiple summary information corresponding to the multiple video segments.
In the apparatus provided by the embodiment of the present application, when the target video is played, the playing control is displayed in the video playing interface based on the display resource of the playing control, and a plurality of pieces of summary information are respectively displayed on the plurality of sub-controls included in the playing control. The user can thus clearly see the summary information of each video segment, without clicking repeatedly to locate the key plot, which simplifies the operation flow when watching key plots, improves the intuitiveness of the playing control, and improves human-computer interaction efficiency.
In a possible implementation, the playing module 1503 is further configured to:
and responding to the triggering operation of any sub-control, and playing the video clip corresponding to the any sub-control.
In a possible implementation manner, the playing control further displays a playing progress of the target video, and the playing module 1503 is further configured to:
responding to the dragging operation of the playing progress, and acquiring the stop position of the dragging operation;
the target video is played from a time corresponding to the stop position.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the display apparatus for a play control provided in the foregoing embodiment, when the play control is displayed, only the division of the function modules is illustrated, and in practical applications, the function distribution can be completed by different function modules according to needs, that is, the internal structure of the electronic device is divided into different function modules, so as to complete all or part of the functions described above. In addition, the display apparatus of the play control and the display method embodiment of the play control provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the display method embodiment of the play control, and are not described herein again.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 16, taking an electronic device as an example of a terminal 1600, the terminal 1600 may be a first terminal or a second terminal, and the terminal 1600 may also be referred to as other names such as a user equipment, a portable terminal, a laptop terminal, a desktop terminal, and the like.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
Optionally, processor 1601 includes one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. Alternatively, the processor 1601 is implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). In some embodiments, the processor 1601 includes a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 further includes an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, memory 1602 includes one or more computer-readable storage media, which are optionally non-transitory. Optionally, memory 1602 also includes high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1602 is used to store at least one program code for execution by the processor 1601 to implement a display method of a play control provided by various embodiments in the present application.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 can be connected via a bus or signal line. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a display 1605, a camera assembly 1606, audio circuitry 1607, a positioning assembly 1608, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or both of processor 1601, memory 1602 and peripheral interface 1603 are implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Optionally, the radio frequency circuit 1604 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 further comprises NFC (Near Field Communication) related circuitry, which is not limited by this application.
The display 1605 is used to display a UI (User Interface). Optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over its surface. The touch signal can be input to the processor 1601 as a control signal for processing. Optionally, the display 1605 is also used to provide virtual buttons and/or a virtual keyboard, also known as soft buttons and/or a soft keyboard. In some embodiments, there is one display 1605, disposed on the front panel of the terminal 1600; in other embodiments, there are at least two display screens 1605, respectively disposed on different surfaces of the terminal 1600 or in a foldable design; in still other embodiments, the display 1605 is a flexible display disposed on a curved or folded surface of the terminal 1600. Optionally, the display 1605 is arranged in a non-rectangular irregular pattern, i.e. a shaped screen. Optionally, the display 1605 is made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 further includes a flash. Optionally, the flash is a monochrome temperature flash, or a bi-color temperature flash. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and is used for light compensation under different color temperatures.
In some embodiments, the audio circuitry 1607 includes a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones are respectively arranged at different positions of the terminal 1600. Optionally, the microphone is an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. Alternatively, the speaker is a conventional membrane speaker, or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to human, but also the electric signal can be converted into a sound wave inaudible to human for use in distance measurement or the like. In some embodiments, the audio circuit 1607 also includes a headphone jack.
The positioning component 1608 is used to locate the current geographic location of the terminal 1600 for purposes of navigation or LBS (Location Based Service). Optionally, the positioning component 1608 is a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou System of China, the GLONASS System of Russia, or the Galileo System of the European Union.
Power supply 1609 is used to provide power to the various components of terminal 1600. Optionally, power supply 1609 is alternating current, direct current, a disposable battery, or a rechargeable battery. When power supply 1609 comprises a rechargeable battery, the rechargeable battery supports wired or wireless charging. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
In some embodiments, acceleration sensor 1611 detects acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 is used to detect components of the gravitational acceleration in three coordinate axes. Alternatively, the processor 1601 controls the display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 is also used for acquisition of motion data of a game or a user.
In some embodiments, the gyroscope sensor 1612 detects the body direction and the rotation angle of the terminal 1600, and the gyroscope sensor 1612 and the acceleration sensor 1611 cooperate to acquire the 3D motion of the user on the terminal 1600. The processor 1601 is configured to perform the following functions according to the data collected by the gyro sensor 1612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Optionally, pressure sensors 1613 are disposed on the side bezel of terminal 1600 and/or underlying display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, the holding signal of the user to the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1614 is configured to collect a fingerprint of the user, and the processor 1601 identifies the user based on the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 itself identifies the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. Optionally, the fingerprint sensor 1614 is provided on the front, back, or side of the terminal 1600. When a physical key or vendor logo is provided on the terminal 1600, the fingerprint sensor 1614 can be integrated with the physical key or vendor logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 controls the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is adjusted down. In another embodiment, the processor 1601 is further configured to dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
A proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 is gradually decreasing, the processor 1601 controls the display 1605 to switch from the bright-screen state to the rest-screen state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 is gradually increasing, the processor 1601 controls the display 1605 to switch from the rest-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and can include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1700 may vary greatly in configuration or performance. The electronic device 1700 includes one or more processors (CPUs) 1701 and one or more memories 1702, where the memory 1702 stores at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1701 to implement the display method of the playing control provided by the foregoing embodiments. Optionally, the electronic device 1700 further has components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the electronic device 1700 further includes other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including at least one computer program, which is executable by a processor in an electronic device to perform the display method of the play control in the above embodiments, is also provided. For example, the computer-readable storage medium includes a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, comprising one or more program codes, the one or more program codes being stored in a computer readable storage medium. One or more processors of the electronic device can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the electronic device can execute the method for displaying the play control in the above embodiments.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments can be implemented by hardware, or can be implemented by a program instructing relevant hardware, and optionally, the program is stored in a computer readable storage medium, and optionally, the above mentioned storage medium is a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A display method of a play control, the method comprising:
acquiring a plurality of start moments and a plurality of summary information of a plurality of video segments in a target video, wherein each video segment corresponds to one start moment and one summary information;
based on the plurality of start moments, acquiring a playing control of the target video, wherein the playing control is used for controlling the playing progress of the target video and comprises a plurality of sub-controls respectively corresponding to the plurality of video clips;
and displaying the plurality of pieces of summary information on the plurality of sub-controls respectively.
2. The method of claim 1, wherein the playing control is a target stripe, the plurality of sub-controls are a plurality of segments of the target stripe, and the obtaining the playing control of the target video based on the plurality of start times comprises:
determining a stripe length of the target stripe based on a video picture length of the target video;
determining a plurality of segment lengths of the plurality of segments based on the plurality of start times and the stripe length, wherein the proportion of each segment length to the stripe length is equal to the proportion of the duration of the corresponding video segment to the video duration of the target video;
based on the stripe length and the plurality of segment lengths, obtaining the target stripe including the plurality of segments.
3. The method of claim 2, wherein the determining a plurality of segment lengths for the plurality of segments based on the plurality of start times and the stripe length comprises:
for any video segment, subtracting the start moment of the video segment from the start moment of the next video segment to obtain the segment duration of the video segment;
dividing the segment duration of the video segment by the video duration of the target video to obtain a target proportion;
and multiplying the target proportion by the stripe length to obtain the segment length corresponding to the video segment.
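For illustration only (not part of the claims), the segment-length computation recited in claims 2 and 3 can be sketched in Python. All names (`start_times`, `video_duration`, `stripe_length`) are illustrative, and the last segment is assumed to end at the end of the video, which the claims leave implicit:

```python
def segment_lengths(start_times, video_duration, stripe_length):
    """Map each video segment to a proportional length on the target stripe.

    start_times    -- sorted start moments (seconds) of the video segments
    video_duration -- total duration of the target video (seconds)
    stripe_length  -- on-screen length of the target stripe (e.g. pixels)
    """
    lengths = []
    for i, start in enumerate(start_times):
        # Segment duration: next start moment minus this start moment;
        # the last segment is assumed to end at the end of the video.
        end = start_times[i + 1] if i + 1 < len(start_times) else video_duration
        segment_duration = end - start
        target_proportion = segment_duration / video_duration
        lengths.append(target_proportion * stripe_length)
    return lengths
```

Because the target proportions sum to 1, the segment lengths sum to the stripe length, so the segments tile the stripe exactly.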
4. The method of claim 2, wherein the displaying the plurality of pieces of summary information on the plurality of sub-controls respectively comprises:
for any piece of summary information, determining a target character size of the summary information based on the number of characters of the summary information;
and displaying the summary information on the corresponding segment of the target stripe based on the target character size.
5. The method according to claim 4, wherein the determining the target character size of any piece of summary information based on the number of characters of the summary information comprises:
determining an initial character size of the summary information as the stripe height of the target stripe;
determining a character length of the summary information based on the initial character size and the number of characters;
and determining the target character size based on the character length and the segment length of the corresponding segment.
6. The method of claim 5, wherein the determining the target character size based on the character length and a segment length of the corresponding segment comprises:
determining the initial character size as the target character size in response to the character length being less than or equal to the segment length;
determining a second target multiple of the initial character size as the target character size in response to the character length being greater than the segment length and less than or equal to a first target multiple of the segment length, wherein the second target multiple is equal to the ratio of the segment length to the character length;
and determining one half of the initial character size as the target character size in response to the character length being greater than the first target multiple of the segment length and less than or equal to a third target multiple of the segment length, wherein the first target multiple is greater than 1 and less than the third target multiple.
7. The method of claim 6, further comprising:
and in response to the character length being greater than the third target multiple of the segment length, displaying prompt information for prompting that the number of characters of the summary information exceeds the display capacity of the corresponding segment.
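For illustration only (not part of the claims), the character-size selection of claims 5 through 7 can be sketched as follows. The sketch assumes square glyphs, so the character length equals the initial size times the character count; the concrete values of the first and third target multiples (2.0 and 4.0 here) are illustrative defaults satisfying 1 < first < third, and returning `None` stands in for displaying the claim-7 prompt:

```python
def target_character_size(num_chars, stripe_height, segment_length,
                          first_multiple=2.0, third_multiple=4.0):
    """Pick a character size for one piece of summary information.

    Returns the chosen size, or None when the number of characters exceeds
    the display capacity of the segment (the claim-7 prompt case).
    """
    initial_size = stripe_height                 # claim 5: initial size = stripe height
    char_length = initial_size * num_chars       # assumes square glyphs
    if char_length <= segment_length:            # fits at full size
        return initial_size
    if char_length <= first_multiple * segment_length:
        # Shrink by the ratio of segment length to character length.
        second_multiple = segment_length / char_length
        return second_multiple * initial_size
    if char_length <= third_multiple * segment_length:
        return initial_size / 2                  # half-size fallback
    return None                                  # too long: show prompt instead
```

With half-size characters, text on two lines fits when the character length is at most four times the segment length, which motivates a third multiple around 4.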
8. The method of claim 1, wherein the obtaining the plurality of start times and the plurality of summary information of the plurality of video segments in the target video comprises:
displaying editing areas of the plurality of video segments in an uploading interface of the target video, wherein the editing areas are used for editing the start moments and the summary information of the video segments;
and acquiring the plurality of start moments and the plurality of pieces of summary information based on the editing areas.
9. The method of claim 1, wherein the obtaining the plurality of start times and the plurality of summary information of the plurality of video segments in the target video comprises:
calling a video segmentation model to divide the target video into the plurality of video segments, wherein the video segmentation model is used for dividing video segments based on video content;
determining the plurality of start moments based on the plurality of divided video segments;
and calling a summary generation model to extract the summary information of the video segments, wherein the summary generation model is used for extracting summary information based on video segments.
10. A display method of a play control, the method comprising:
sending a video playing request, wherein the video playing request is used for requesting to play a target video;
receiving the target video and display resources of a play control of the target video, wherein the play control comprises a plurality of sub-controls respectively corresponding to a plurality of video segments of the target video;
playing the target video in a video playing interface;
and displaying, based on the display resources of the playing control, the plurality of sub-controls included in the playing control in the video playing interface, wherein the plurality of sub-controls are used for displaying a plurality of pieces of summary information corresponding to the plurality of video segments.
11. The method of claim 10, wherein after displaying the plurality of sub-controls included in the playback control in the video playback interface, the method further comprises:
and in response to a trigger operation on any sub-control, playing the video segment corresponding to the sub-control.
12. The method of claim 10, wherein the play control further displays a play progress of the target video, and wherein the method further comprises:
in response to a drag operation on the playing progress, acquiring a stop position of the drag operation;
and starting to play the target video from the moment corresponding to the stop position.
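For illustration only (not part of the claims), the drag-to-seek mapping of claim 12 can be sketched under the assumption that the playing control maps positions to moments linearly; `stop_position`, `stripe_length`, and `video_duration` are illustrative names:

```python
def seek_time(stop_position, stripe_length, video_duration):
    """Map the stop position of a drag operation on the play control
    to the playback moment (seconds) to start playing from."""
    # Clamp the stop position onto the stripe before mapping.
    stop_position = min(max(stop_position, 0), stripe_length)
    return stop_position / stripe_length * video_duration
```

Clamping keeps a drag that ends beyond either end of the control within the valid playback range.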
13. A display apparatus for playing a control, the apparatus comprising:
the first acquisition module is configured to acquire a plurality of start moments and a plurality of pieces of summary information of a plurality of video segments in a target video, wherein each video segment corresponds to one start moment and one piece of summary information;
a second acquisition module, configured to acquire a playing control of the target video based on the plurality of start moments, wherein the playing control is used to control a playing progress of the target video, and the playing control includes a plurality of sub-controls respectively corresponding to the plurality of video segments;
and the display module is configured to display the plurality of pieces of summary information on the plurality of sub-controls respectively.
14. An electronic device comprising one or more processors and one or more memories, wherein at least one computer program is stored in the one or more memories, and loaded and executed by the one or more processors to implement the display method of the play control according to any one of claims 1 to 9 or 10 to 12.
15. A storage medium having at least one computer program stored therein, the at least one computer program being loaded and executed by a processor to implement the method for displaying a play control according to any one of claims 1 to 9 or 10 to 12.
CN202110137898.7A 2021-02-01 2021-02-01 Display method and device of play control, electronic equipment and storage medium Active CN114845152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110137898.7A CN114845152B (en) 2021-02-01 2021-02-01 Display method and device of play control, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114845152A true CN114845152A (en) 2022-08-02
CN114845152B CN114845152B (en) 2023-06-30

Family

ID=82561234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110137898.7A Active CN114845152B (en) 2021-02-01 2021-02-01 Display method and device of play control, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114845152B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111220A (en) * 2021-03-26 2021-07-13 北京达佳互联信息技术有限公司 Video processing method, device, equipment, server and storage medium
CN115567758A (en) * 2022-09-30 2023-01-03 联想(北京)有限公司 Processing method, processing device, electronic equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009143741A1 (en) * 2008-05-29 2009-12-03 腾讯科技(深圳)有限公司 Method, system and apparatus for playing media files on demand
CN102547458A (en) * 2012-03-07 2012-07-04 山东大学 Novel digital media playing system and method based on user behavior
CN104823450A (en) * 2013-12-01 2015-08-05 Lg电子株式会社 Method and device for transmitting and receiving broadcast signal for providing trick play service
CN103702220A (en) * 2013-12-13 2014-04-02 乐视网信息技术(北京)股份有限公司 Video playing method and device
CN106658213A (en) * 2016-10-19 2017-05-10 上海幻电信息科技有限公司 Play progress intercommunicated communication processing method
CN108391171A (en) * 2018-02-27 2018-08-10 京东方科技集团股份有限公司 Control method and device, the terminal of video playing
US20190267037A1 (en) * 2018-02-27 2019-08-29 Boe Technology Group Co., Ltd. Method, apparatus and terminal for controlling video playing
CN108259997A (en) * 2018-04-02 2018-07-06 腾讯科技(深圳)有限公司 Image correlation process method and device, intelligent terminal, server, storage medium
WO2020077856A1 (en) * 2018-10-19 2020-04-23 北京微播视界科技有限公司 Video photographing method and apparatus, electronic device and computer readable storage medium
CN110163237A (en) * 2018-11-08 2019-08-23 腾讯科技(深圳)有限公司 Model training and image processing method, device, medium, electronic equipment
WO2021003949A1 (en) * 2019-07-05 2021-01-14 广州酷狗计算机科技有限公司 Song playback method, device and system
CN112104648A (en) * 2020-09-14 2020-12-18 北京达佳互联信息技术有限公司 Data processing method, device, terminal, server and storage medium

Also Published As

Publication number Publication date
CN114845152B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN109982102B (en) Interface display method and system for live broadcast room, live broadcast server and anchor terminal
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN110708596A (en) Method and device for generating video, electronic equipment and readable storage medium
CN111901658B (en) Comment information display method and device, terminal and storage medium
CN110149557B (en) Video playing method, device, terminal and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN107896337B (en) Information popularization method and device and storage medium
CN109982129B (en) Short video playing control method and device and storage medium
CN111741366A (en) Audio playing method, device, terminal and storage medium
CN112256181B (en) Interaction processing method and device, computer equipment and storage medium
CN111935516B (en) Audio file playing method, device, terminal, server and storage medium
CN111836069A (en) Virtual gift presenting method, device, terminal, server and storage medium
CN114845152B (en) Display method and device of play control, electronic equipment and storage medium
CN111459363A (en) Information display method, device, equipment and storage medium
CN110337042B (en) Song on-demand method, on-demand order processing method, device, terminal and medium
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
CN112616082A (en) Video preview method, device, terminal and storage medium
CN112004134B (en) Multimedia data display method, device, equipment and storage medium
EP4125274A1 (en) Method and apparatus for playing videos
CN112230910A (en) Page generation method, device, equipment and storage medium of embedded program
CN110996115B (en) Live video playing method, device, equipment, storage medium and program product
CN113301422A (en) Method, terminal and storage medium for acquiring video cover
CN111381801B (en) Audio playing method based on double-screen terminal and communication terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant