CN113784216B - Video stutter identification method and device, terminal equipment and storage medium - Google Patents

Video stutter identification method and device, terminal equipment and storage medium

Info

Publication number
CN113784216B
CN113784216B (application CN202110977961.8A)
Authority
CN
China
Prior art keywords
frame
video
time
downloaded
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110977961.8A
Other languages
Chinese (zh)
Other versions
CN113784216A (en)
Inventor
Zhang Chi (张弛)
Liu Dong (刘东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd, China Mobile Communications Group Co Ltd, MIGU Music Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202110977961.8A priority Critical patent/CN113784216B/en
Publication of CN113784216A publication Critical patent/CN113784216A/en
Application granted granted Critical
Publication of CN113784216B publication Critical patent/CN113784216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video stutter identification method, which comprises the following steps: receiving video frame summary information of a target video; when receiving the target video, obtaining estimated download-frame time points of the un-downloaded video frames in the target video based on the video frame summary information and the receiving times of the downloaded selected video frames in the target video; determining the playing times of the selected video frames when the target video is played; obtaining estimated play-frame time points of the un-downloaded video frames based on the video frame summary information, the playing times and the receiving times; and obtaining a stutter identification result of the target video based on the estimated download-frame time points and the estimated play-frame time points. The invention also discloses a video stutter identification device, terminal equipment and a computer readable storage medium. With the method of the invention, it can be determined whether any of the un-downloaded video frames will stutter, and thus whether the un-downloaded part of the target video will stutter.

Description

Video stutter identification method and device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video stutter identification method, a device, a terminal device, and a computer readable storage medium.
Background
In the existing video ringback tone (video color ring) service, the video ringback tone played for a user is pushed to the user terminal by a ringback tone server. During pushing, a network abnormality between the server and the user terminal (a network interruption or slowdown) may stall the video stream push and thereby cause playback to stutter.
The related art discloses a method for identifying stutter when playing streaming media: while downloading the video data, divide it into multiple segments; obtain the size of the nth data packet and determine its arrival time, obtain the size of the (n+1)th data packet and determine its arrival time, and take the difference between the two times; from the N time differences thus collected, compute the theoretical play data volume; from the sizes of the N+1 data packets, compute the actual download data volume; and determine whether playback stutters by comparing the theoretical play data volume with the actual download data volume.
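The prior-art check described above can be sketched as follows. This is a minimal illustration, not the patent's code: the function name, parameters and the constant-bitrate assumption are all hypothetical.

```python
def prior_art_is_stalled(packet_sizes, arrival_times, bitrate_bytes_per_s):
    """packet_sizes[i]: size of packet i in bytes; arrival_times[i]: its
    arrival time in seconds; bitrate_bytes_per_s: assumed playback rate."""
    # N time differences between consecutive packets give the elapsed download time
    elapsed = sum(arrival_times[i + 1] - arrival_times[i]
                  for i in range(len(arrival_times) - 1))
    # Theoretical play data volume over that elapsed time
    theoretical_play_bytes = elapsed * bitrate_bytes_per_s
    # Actual download data volume: the sizes of the N+1 packets
    actual_download_bytes = sum(packet_sizes)
    # Playback is judged stalled when less data arrived than playback consumes
    return actual_download_bytes < theoretical_play_bytes
```

Note that the check only uses packets already downloaded, which is exactly the limitation the invention addresses.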
However, with the existing method it is difficult to predict in advance whether the video data that has not yet been downloaded will stutter.
Disclosure of Invention
The main purpose of the present invention is to provide a video stutter identification method, a device, a terminal device and a computer readable storage medium, aiming to solve the technical problem that the existing method can hardly predict in advance whether stutter will occur in video data that has not yet been downloaded.
In order to achieve the above object, the present invention provides a video stutter identification method, which includes the following steps:
receiving video frame summary information of a target video;
when receiving the target video, obtaining estimated download-frame time points of the un-downloaded video frames in the target video based on the video frame summary information and receiving times, wherein a receiving time is the time at which a selected video frame finished downloading, and the selected video frames are the already-downloaded frames of the target video;
determining the playing times of the selected video frames when the target video is played;
obtaining estimated play-frame time points of the un-downloaded video frames based on the video frame summary information, the playing times and the receiving times;
and obtaining a stutter identification result of the target video based on the estimated download-frame time points and the estimated play-frame time points.
Optionally, the video frame summary information includes the selected frame sizes of the selected video frames; and the obtaining, based on the video frame summary information and the receiving times, estimated download-frame time points of the un-downloaded video frames in the target video includes:
obtaining the selected frame intervals of the selected video frames based on the receiving times;
determining the download speeds of the selected video frames using the selected frame intervals and the selected frame sizes;
determining a download speed ratio of the selected video frames using the download speeds;
determining the arrival durations of the un-downloaded video frames using the download speed ratio;
and obtaining the estimated download-frame time points based on the receiving times and the arrival durations.
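The steps above can be sketched as a short routine. The patent does not fix the exact formulas, so this is one consistent reading under stated assumptions: the download speed ratio is taken between the last two selected frames, and that ratio is assumed to persist from frame to frame. All names are illustrative.

```python
def estimate_download_times(recv_times, sel_sizes, todo_sizes):
    """recv_times / sel_sizes: receiving times (s) and sizes (bytes) of the
    downloaded selected frames (at least three); todo_sizes: sizes of the
    not-yet-downloaded frames, in play order."""
    # Selected frame intervals from consecutive receiving times
    intervals = [recv_times[i + 1] - recv_times[i]
                 for i in range(len(recv_times) - 1)]
    # Per-frame download speeds: frame size divided by its interval
    speeds = [sel_sizes[i + 1] / intervals[i] for i in range(len(intervals))]
    # Download speed ratio between the last two selected frames
    ratio = speeds[-1] / speeds[-2]
    est_times, t, speed = [], recv_times[-1], speeds[-1]
    for size in todo_sizes:
        speed *= ratio        # assumption: the measured ratio persists
        t += size / speed     # arrival duration of this un-downloaded frame
        est_times.append(t)   # estimated download-frame time point
    return est_times
```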
Optionally, the video frame summary information further includes the target frame duration of the un-downloaded video frames; and the obtaining, based on the video frame summary information, the playing times and the receiving times, the estimated play-frame time points of the un-downloaded video frames includes:
obtaining a time difference based on the playing time and the receiving time;
and when the time difference is smaller than or equal to the target frame duration, obtaining the estimated play-frame time points based on the target frame duration and the playing time.
Optionally, the video frame summary information further includes the target frame sizes of the un-downloaded video frames; and after the time difference is obtained based on the playing time and the receiving time, the method further includes:
when the time difference is greater than the target frame duration, obtaining the frame processing speed of the selected video frame using the selected frame size and the time difference;
taking the frame processing speed as the expected frame processing speed of the un-downloaded video frames;
and obtaining the estimated play-frame time points using the playing time, the expected frame processing speed and the target frame sizes.
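Both branches above can be sketched together. This is a hypothetical reading: how the frame processing speed enters the per-frame play time is not spelled out in the text, so the formulas and names below are assumptions.

```python
def estimate_play_times(play_time, recv_time, frame_dur, sel_size, todo_sizes):
    """play_time / recv_time: playing and receiving time (s) of the last
    selected frame; frame_dur: the fixed target frame duration (s);
    sel_size: size of that selected frame (bytes); todo_sizes: sizes of
    the un-downloaded frames, in play order."""
    diff = play_time - recv_time
    est, t = [], play_time
    if diff <= frame_dur:
        # Processing keeps up: frames are expected to play at the nominal
        # frame duration after the last playing time.
        for _ in todo_sizes:
            t += frame_dur
            est.append(t)
    else:
        # Processing lags: derive a frame processing speed from the lag and
        # apply it as the expected speed for every un-downloaded frame.
        proc_speed = sel_size / diff      # bytes processed per second
        for size in todo_sizes:
            t += size / proc_speed        # expected per-frame processing time
            est.append(t)
    return est                            # estimated play-frame time points
```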
Optionally, the obtaining a stutter identification result of the target video based on the estimated download-frame time points and the estimated play-frame time points includes:
judging whether, among the un-downloaded video frames, there exists a stutter video frame whose estimated play-frame time point and estimated download-frame time point satisfy a preset condition;
and if such a stutter video frame exists, obtaining, based on it, a stutter identification result indicating that the target video will stutter.
Optionally, the determining the arrival durations of the un-downloaded video frames using the download speed ratio includes:
obtaining expected download speed ratios of the un-downloaded video frames based on the download speed ratio;
obtaining estimated download speeds of the un-downloaded video frames based on the expected download speed ratios;
and obtaining the arrival durations based on the estimated download speeds.
Optionally, the obtaining expected download speed ratios of the un-downloaded video frames based on the download speed ratio includes:
determining the expected download speed ratio of the nth un-downloaded video frame based on the download speed ratio and the temporal relation between the nth un-downloaded video frame and a preset reference video frame, wherein the preset reference video frame is the mth un-downloaded video frame, and n and m are integers greater than or equal to 1.
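The text only says that the expected ratio of the nth frame depends on the measured ratio and on that frame's distance from the reference frame m (Fig. 4 plots the resulting curve). One plausible, purely illustrative model is a ratio that drifts toward 1 as the frame distance grows, so the estimated speed stabilises far ahead; the formula below is an assumption, not the patent's.

```python
def expected_speed_ratios(ratio, num_frames, m=1):
    """ratio: download speed ratio measured on the selected frames;
    num_frames: how many un-downloaded frames to cover, starting at the
    reference frame m. Hypothetical decay-toward-1 model."""
    return [1 + (ratio - 1) / (n - m + 1)      # distance-weighted drift to 1
            for n in range(m, m + num_frames)]
```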
In addition, in order to achieve the above object, the present invention further provides a video stutter identification apparatus, which includes:
a receiving module, configured to receive video frame summary information of a target video;
a first obtaining module, configured to, when the target video is received, obtain estimated download-frame time points of the un-downloaded video frames in the target video based on the video frame summary information and receiving times, wherein a receiving time is the time at which a selected video frame finished downloading, and the selected video frames are the already-downloaded frames of the target video;
a determining module, configured to determine the playing times of the selected video frames when the target video is played;
a second obtaining module, configured to obtain estimated play-frame time points of the un-downloaded video frames based on the video frame summary information, the playing times and the receiving times;
and a third obtaining module, configured to obtain a stutter identification result of the target video based on the estimated download-frame time points and the estimated play-frame time points.
In addition, to achieve the above object, the present invention also proposes a terminal device, including: a memory, a processor, and a video stutter identification program stored in the memory and runnable on the processor, wherein the video stutter identification program, when executed by the processor, implements the steps of the video stutter identification method described above.
In addition, in order to achieve the above object, the present invention also proposes a computer readable storage medium on which a video stutter identification program is stored, the program, when executed by a processor, implementing the steps of the video stutter identification method according to any one of the above.
The technical scheme of the invention provides a video stutter identification method comprising the following steps: receiving video frame summary information of a target video; when receiving the target video, obtaining estimated download-frame time points of the un-downloaded video frames in the target video based on the video frame summary information and receiving times, wherein a receiving time is the time at which a selected video frame finished downloading, and the selected video frames are the already-downloaded frames of the target video; determining the playing times of the selected video frames when the target video is played; obtaining estimated play-frame time points of the un-downloaded video frames based on the video frame summary information, the playing times and the receiving times; and obtaining a stutter identification result of the target video based on the estimated download-frame time points and the estimated play-frame time points.
With the existing method, whether playback stutters can only be determined from the theoretical play data volume and the actual download data volume of the data already downloaded; the data packets not yet downloaded cannot be analyzed, so it is difficult to estimate in advance whether the un-downloaded video data will stutter. According to the invention, the estimated download-frame time points and estimated play-frame time points of the un-downloaded video frames are obtained based on the playing times, receiving times and video frame summary information of the downloaded selected video frames; from these two time points it can be determined whether any un-downloaded video frame will stutter, and hence whether the un-downloaded part of the target video will stutter.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a terminal device structure of a hardware running environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video stutter identification method according to a first embodiment of the present invention;
Fig. 3 is a schematic flowchart of the refinement of step S12 in a second embodiment of the video stutter identification method of the present invention;
FIG. 4 is a graph showing the relationship between the expected download speed ratio and the frame number of the un-downloaded video frames;
FIG. 5 is a schematic diagram of the estimated download-frame time points according to the present invention;
Fig. 6 is a schematic flowchart of the refinement of step S14 in a third embodiment of the video stutter identification method of the present invention;
FIG. 7 is a schematic diagram of the estimated play-frame time points according to the present invention;
FIG. 8 is a schematic diagram comparing the estimated play-frame time points with the estimated download-frame time points according to the present invention;
fig. 9 is a block diagram of a video stutter identification apparatus according to a first embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a terminal device structure of a hardware running environment according to an embodiment of the present invention.
The terminal device may be a Mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet personal computer (PAD), or other User Equipment (UE), a handheld device, a vehicle mounted device, a wearable device, a computing device, or other processing device connected to a wireless modem, a Mobile Station (MS), or the like. The terminal device may be referred to as a user terminal, a portable terminal, a desktop terminal, etc.
In general, a terminal device includes: at least one processor 301, a memory 302, and a video stutter identification program stored in the memory and executable on the processor, the program being configured to implement the steps of the video stutter identification method described above.
Processor 301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array). Processor 301 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 301 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be shown on the display screen. The processor 301 may also include an AI (Artificial Intelligence) processor for handling operations related to the video stutter identification method, so that the identification model can learn by self-training, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 stores at least one instruction for execution by processor 301 to implement the video stutter identification method provided by the method embodiments of the present application.
In some embodiments, the terminal may further optionally include: a communication interface 303, and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the communication interface 303 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power supply 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 304 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 304 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 305 is a touch screen, the display 305 also has the ability to collect touch signals at or above its surface. The touch signal may be input as a control signal to the processor 301 for processing. At this point, the display 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 305, set as the front panel of the electronic device; in other embodiments, there may be at least two displays 305, disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved or folded surface of the electronic device. The display screen 305 may even be arranged in an irregular, non-rectangular pattern, i.e. a shaped screen. The display 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The power supply 306 is used to power the various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology. It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the terminal device, and may include more or less components than illustrated, or may combine certain components, or may be arranged in different components.
In addition, an embodiment of the present application further provides a computer readable storage medium storing a video stutter identification program which, when executed by a processor, implements the steps of the video stutter identification method described above; a detailed description is therefore not repeated here, nor is the description of the corresponding beneficial effects. For technical details not disclosed in this computer readable storage medium embodiment, please refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one terminal device, on multiple terminal devices located at one site, or on multiple terminal devices distributed across multiple sites and interconnected by a communication network.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of computer programs, which may be stored on a computer-readable storage medium and which, when executed, may comprise the steps of the embodiments of the methods described above. The computer readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, embodiments of the video stutter identification method are provided.
Embodiment one:
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of a video stutter identification method according to the present invention; the method is performed by a terminal device and includes the following steps:
step S11: video frame summary information of a target video is received.
It should be noted that the execution subject of the present invention is a terminal device on which a video stutter identification program is installed; when the terminal device executes the program, the steps of the video stutter identification method of the present invention are realized.
In the present invention, the target video and its video frame summary information may both be sent by a server, which is a video server. When the terminal device (the calling party) calls the called party, the video the terminal device receives from the video server is the target video. The video frame summary information includes the frame duration of every video frame in the target video, the frame sequence numbers of all video frames (representing the download order and the play order, which are generally the same), and the frame sizes of all video frames. Typically, the server sends the video frame summary information first and then the target video.
In general, the server needs to generate the corresponding video frame summary information for the target video; for a given target video, the frame duration of every video frame is a fixed, uniform value.
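A per-frame entry of the summary information can be sketched as a small record. The field names are illustrative; the patent only lists frame duration, frame sequence number and frame size as the contents.

```python
from dataclasses import dataclass

@dataclass
class FrameSummary:
    """One entry of the video frame summary information (names hypothetical)."""
    seq: int            # frame sequence number (download order == play order)
    size: int           # frame size in bytes
    duration_s: float   # frame duration; fixed and identical for every frame
```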
Step S12: when the target video is received, obtain the estimated download-frame time points of the un-downloaded video frames in the target video based on the video frame summary information and the receiving times, wherein a receiving time is the time at which a selected video frame finished downloading, and the selected video frames are the already-downloaded frames of the target video.
It should be noted that the selected video frames are the first video frames downloaded when the terminal device starts receiving the target video (for example, the first three or four video frames). The download order of the video frames is their play order: all video frames of the video are sent frame by frame, i.e. the terminal device downloads them in their play order, and together they comprise the downloaded selected video frames and the un-downloaded video frames.
It will be appreciated that the downloaded selected video frames are typically the first few frames of the video, i.e. there are several selected video frames; in the present invention the first three video frames are preferably taken as the selected video frames, and step S12 is performed once the terminal device has received them. The receiving times of the selected video frames comprise the receiving time of each selected frame; in the above example there are therefore three receiving times, one for each of the first three video frames.
In the invention, the estimated download times of the un-downloaded video frames are obtained from the video frame summary information and the receiving times of the selected video frames; these estimated download times are the estimated download-frame time points. It will be appreciated that, in this embodiment, the estimated download time is a prediction obtained by the method of the present invention and is one of the key elements for obtaining the stutter identification result of the target video.
Step S13: when the target video is played, determine the playing times of the selected video frames.
The description of the selected video frames above applies here and is not repeated. After a selected video frame is downloaded, the terminal device performs the relevant image processing and then plays the frame, so there is usually a gap between a selected frame's playing time and its receiving time. The playing times of the selected video frames comprise the playing time of each selected frame; in the example of the first embodiment there are therefore three playing times corresponding to the first three video frames.
Step S14: obtain the estimated play-frame time points of the un-downloaded video frames based on the video frame summary information, the playing times and the receiving times.
In the invention, the estimated play times of the un-downloaded video frames are obtained from the video frame summary information together with the receiving and playing times of the selected video frames; these estimated play times are the estimated play-frame time points. It can be understood that, in this embodiment, the estimated play time is a prediction obtained by the method of the present invention and is the other key element for obtaining the stutter identification result of the target video, namely: the stutter identification result is obtained using the estimated play-frame time points together with the estimated download-frame time points.
Step S15: obtaining the stutter recognition result of the target video based on the estimated download frame time and the estimated playing frame time.
After the estimated download frame time and the estimated playing frame time have been obtained, they are used to judge whether the target video will stutter.
Specifically, step S15 includes: judging whether the un-downloaded video frames contain a stuttered video frame, i.e. a frame whose estimated playing frame time and estimated download frame time satisfy a preset condition; if such a stuttered video frame exists, obtaining, based on the stuttered video frame, a stutter recognition result indicating that the target video stutters; if no stuttered video frame exists, obtaining a stutter recognition result indicating that the target video does not stutter.
It should be noted that an un-downloaded video frame satisfies the preset condition when the time corresponding to its estimated playing frame time is less than the time corresponding to its estimated download frame time, i.e. the frame is due to be played before its download is expected to complete.
When the time corresponding to the estimated playing frame time of the p-th un-downloaded video frame is less than the time corresponding to its estimated download frame time, the p-th video frame is a stuttered video frame, and the stutter recognition result is that the p-th video frame stutters; when the estimated playing frame times of several consecutive video frames after the q-th video frame are all less than the corresponding estimated download frame times, the stutter recognition result is that continuous stuttering starts from the q-th video frame. The stutter recognition result may also take other forms of description, and the invention is not particularly limited in this respect.
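As an illustrative sketch (not the patent's reference implementation), the per-frame comparison described above can be expressed as follows; the function name and list representation are assumptions for the example:

```python
def find_stuttered_frames(est_play_times, est_download_times):
    """Return the 1-based indices (p) of un-downloaded frames whose
    estimated playing frame time is earlier than their estimated
    download frame time, i.e. frames that would stutter."""
    stuttered = []
    for p, (play_t, download_t) in enumerate(
            zip(est_play_times, est_download_times), start=1):
        if play_t < download_t:  # due to play before it finishes downloading
            stuttered.append(p)
    return stuttered

# The 2nd frame is due at t=4.0 s but only finishes downloading at t=5.0 s.
print(find_stuttered_frames([2.0, 4.0, 6.0], [1.5, 5.0, 5.5]))  # -> [2]
```

An empty result corresponds to the recognition result that the target video does not stutter.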
The technical solution of this embodiment provides a video stutter recognition method, which includes: receiving video frame summary information of a target video; while receiving the target video, obtaining estimated download frame times of the un-downloaded video frames in the target video based on the video frame summary information and a receiving time, where the receiving time is the time at which a selected video frame finishes downloading and the selected video frames are the downloaded video frames of the target video; determining the playing time of the selected video frames when the target video is played; obtaining estimated playing frame times of the un-downloaded video frames based on the video frame summary information, the playing time and the receiving time; and obtaining the stutter recognition result of the target video based on the estimated download frame times and the estimated playing frame times.
In existing methods, whether video playback stutters can only be determined from the theoretical playback data volume and the actually downloaded data volume of the data already downloaded; the packets not yet downloaded cannot be analysed, so it is difficult to estimate in advance whether the not-yet-downloaded video data will stutter. In the invention, the estimated download frame times and estimated playing frame times of the un-downloaded video frames are obtained from the playing times, receiving times and summary information of the downloaded selected video frames, and from these two estimates it can be determined whether any un-downloaded video frame will stutter, i.e. whether the not-yet-downloaded part of the target video will stutter.
Embodiment Two:
Based on the same inventive concept, referring to fig. 3, fig. 3 is a schematic flow chart of a second embodiment of the video stutter recognition method according to the present invention; the method is used in a terminal device and includes the following steps:
Step S21: obtaining a selected frame interval of the selected video frames based on the receiving time.
Step S22: determining the download speed of the selected video frames by using the selected frame interval and the selected frame size.
Step S23: determining the download speed ratio of the selected video frames by using the download speed.
It should be noted that, in the second embodiment, the video frame summary information includes the selected frame size of the selected video frames. As described in the first embodiment, the summary information contains the frame duration, frame sequence number and frame size of all video frames, and therefore also the frame sizes of the selected video frames; the frame size of a selected video frame is the selected frame size, and the frame interval of a selected video frame is the selected frame interval.
Referring to the description in embodiment one, take three selected video frames as an example: receiving time of the second frame − receiving time of the first frame = frame interval of the second frame, and similarly receiving time of the third frame − receiving time of the second frame = frame interval of the third frame; frame size of the second frame / frame interval of the second frame = download speed of the second frame, and frame size of the third frame / frame interval of the third frame = download speed of the third frame; download speed of the third frame / download speed of the second frame = download speed ratio. It will be appreciated that no frame interval or download speed can be calculated for the first of the selected video frames. In addition, when more than three video frames are selected, say Q frames (Q > 3), the download speed ratio is the download speed of the Q-th frame divided by that of the (Q−1)-th frame, i.e. it is taken from the last two of the selected video frames.
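The three-frame arithmetic above can be sketched as follows; the units (seconds, KB) and numeric values are assumptions for the example, not taken from the patent:

```python
def download_speed_ratio(receive_times, frame_sizes):
    """Frame interval = difference of consecutive receive times; download
    speed = frame size / frame interval; the ratio is taken from the last
    two selected frames. The first frame yields no interval or speed."""
    intervals = [t2 - t1 for t1, t2 in zip(receive_times, receive_times[1:])]
    speeds = [size / dt for size, dt in zip(frame_sizes[1:], intervals)]
    return speeds[-1] / speeds[-2]

# Three selected frames: receive times in seconds, frame sizes in KB.
# 2nd frame: 150/0.5 = 300 KB/s; 3rd frame: 75/0.25 = 300 KB/s.
print(download_speed_ratio([0.0, 0.5, 0.75], [100, 150, 75]))  # -> 1.0
```

For Q selected frames the same function applies unchanged, since it always takes the last two computed speeds.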
Step S24: determining the arrival duration of the un-downloaded video frames by using the download speed ratio.
Step S25: obtaining the estimated download frame time based on the receiving time and the arrival duration.
After the download speed ratio is obtained, it is used to estimate the arrival duration of each un-downloaded video frame, i.e. how long until that frame finishes downloading. Typically there are multiple un-downloaded video frames, so the arrival duration comprises an arrival duration for each of them.
It will be appreciated that, when several video frames have been selected, the receiving time of the selected video frames is typically the receiving time of the last of them. Referring again to the three-frame example of embodiment one, the receiving time is the receiving time of the third selected video frame. Once the arrival duration of an un-downloaded video frame is obtained, its estimated download frame time is the receiving time of the third selected video frame plus that arrival duration; the corresponding estimated download frame times of all un-downloaded video frames are obtained in the same way.
Specifically, the determining the arrival duration of the un-downloaded video frames by using the download speed ratio includes: obtaining the download speed expected ratio of the un-downloaded video frames based on the download speed ratio; obtaining the estimated download speed of the un-downloaded video frames based on the download speed expected ratio; and obtaining the arrival duration based on the estimated download speed.
Wherein the obtaining the download speed expected ratio of the un-downloaded video frames based on the download speed ratio comprises: determining the download speed expected ratio of the n-th un-downloaded video frame based on the download speed ratio and the temporal position relationship between the n-th un-downloaded video frame and a preset reference video frame, wherein the preset reference video frame is the m-th un-downloaded video frame, and n and m are integers greater than or equal to 1.
In some embodiments, a preset reference video frame may be set first; the un-downloaded video frames before it are taken as first predicted video frames and the remaining un-downloaded video frames as second predicted video frames. The first sub-download-speed expected ratios of the first predicted video frames are calculated from the download speed ratio using an exponential function, so they typically form an increasing or decreasing sequence (as determined by the exponential function). The extreme value of that sequence (the highest value when increasing, the lowest when decreasing) is taken as the reference download speed expected ratio, which is then assigned as the second sub-download-speed expected ratio of every second predicted video frame. The download speed expected ratio of the un-downloaded video frames is then composed of the first and second sub-download-speed expected ratios.
The above-described steps may be expressed by formula one, which is:

Kn = K^(n+1), when n ≤ m−1; Kn = K^m, when n ≥ m

Wherein K is the download speed ratio, Kn is the download speed expected ratio of the n-th video frame in the un-downloaded video frames, and the m-th video frame in the un-downloaded video frames is the preset reference video frame.
It can be understood that, among the un-downloaded video frames, the frames before the m-th frame (the 1st to the (m−1)-th) are all first predicted video frames and the remaining frames are second predicted video frames. The download speed expected ratios of the first predicted video frames are the first sub-download-speed expected ratios, those of the second predicted video frames are the second sub-download-speed expected ratios, and the expected ratio at the boundary, K(m−1), serves as the reference download speed expected ratio. The first and second sub-download-speed expected ratios together constitute the download speed expected ratio.
It should be noted that m may be set by the user as required; for any target video, the m-th un-downloaded video frame is the preset reference video frame. For example, with m = 4, the 4th un-downloaded video frame is the preset reference video frame.
Specifically, the obtaining, based on the download speed expected ratio, the estimated download speed of the un-downloaded video frames includes: obtaining the estimated download speed of the first un-downloaded video frame from the download speed and the expected ratio of the first un-downloaded video frame; taking the second un-downloaded video frame as the current video frame; obtaining the estimated download speed of the current video frame from its download speed expected ratio and a base estimated download speed, where the base estimated download speed is the estimated download speed of the previous video frame, i.e. the un-downloaded video frame immediately preceding the current one; taking the next un-downloaded video frame adjacent to the current video frame as the new current video frame and updating accordingly; and returning to the step of obtaining the estimated download speed of the current video frame until the estimated download speeds of all un-downloaded video frames are obtained.
The process described above can be expressed by formula four, which is as follows:
Dn+1=Dn*Kn
Wherein Dn is the estimated download speed of the n-th video frame in the un-downloaded video frames, D1 = V*K, and V is the download speed, i.e. the download speed of the last downloaded frame among the selected video frames; in the three-frame example above, V is the download speed of the third video frame.
It can be understood, referring to formulas one and four, that because the download speed expected ratio is an exponential function, the estimated download speed also grows exponentially; without the limit imposed by m, the power would keep increasing, and with a download speed ratio greater than 1 the estimated download speeds of the later un-downloaded video frames would exceed any reasonable value.
Still taking the three selected video frames and m = 4 as an example, the download speed expected ratios of the 1st to 3rd un-downloaded video frames are K1, K2 and K3 respectively, and the estimated download speeds of the 1st to 3rd un-downloaded video frames follow as D1 = V*K, D2 = V*K^3 and D3 = V*K^6. In this way, the estimated download speeds of all the un-downloaded video frames are obtained.
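Formulas one and four can be sketched together as follows. Since the body of formula one is not reproduced in the text, the piecewise form Kn = K^(n+1) capped once n reaches m is reconstructed from the worked example (D1 = V*K, D2 = V*K^3, D3 = V*K^6) and should be read as an assumption:

```python
def expected_ratio(n, K, m):
    """Reconstructed formula one: download speed expected ratio of the
    n-th un-downloaded frame, held constant at K^(m) once n >= m."""
    return K ** (min(n, m - 1) + 1)

def estimated_download_speeds(V, K, m, num_frames):
    """Formula four: D1 = V*K, then D(n+1) = Dn * Kn recursively."""
    speeds = [V * K]
    for n in range(1, num_frames):
        speeds.append(speeds[-1] * expected_ratio(n, K, m))
    return speeds

# With m = 4 this reproduces D1 = V*K, D2 = V*K**3, D3 = V*K**6,
# and from the 4th frame on the per-step ratio no longer grows.
print(estimated_download_speeds(100.0, 2.0, m=4, num_frames=5))
# -> [200.0, 800.0, 6400.0, 102400.0, 1638400.0]
```

The cap at m keeps the geometric growth from running away, matching the remark about unreasonable values above.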
Referring to fig. 4, fig. 4 is a graph of the download speed expected ratio against the frame number of the un-downloaded video frames according to the present invention; when the frame number is greater than or equal to m, the download speed expected ratio is fixed at the value K(m−1) and no longer changes.
A download speed ratio of 1 or more indicates that the download speed of a video frame divided by that of the preceding video frame is at least 1; a ratio below 1 indicates that this quotient is less than 1.
After the estimated download speeds of the un-downloaded video frames are obtained, their arrival durations can then be obtained. Specifically, the arrival duration is obtained from the estimated download speed by using formula five;
The fifth formula is:

Rn = S1/D1 + S2/D2 + … + Sn/Dn

Wherein Rn is the arrival duration of the n-th video frame in the un-downloaded video frames, Sn is the frame size of the n-th video frame, and Dn is its estimated download speed. The arrival durations of all the un-downloaded video frames continue to be calculated in this way.
Then the estimated download frame time is obtained from the receiving time and the arrival duration; the estimated download frame time of a video frame is its download completion time. Specifically, the estimated download frame time is obtained from the receiving time and the arrival duration by using formula six;
The formula six is:
An=Rn+a
Wherein a is the receiving time and An is the estimated download frame time of the n-th video frame. The receiving time is that of the last downloaded frame among the selected video frames; in the three-frame example above, the receiving time a is the receiving time of the third video frame.
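Formulas five and six can be sketched as below; the cumulative-sum form of Rn is an assumption consistent with An being the download completion time of the n-th frame, and the numeric values are illustrative:

```python
def estimated_download_frame_times(frame_sizes, est_speeds, receive_time_a):
    """Formula five: Rn = sum over i<=n of Si/Di, the time until the n-th
    un-downloaded frame finishes downloading; formula six: An = Rn + a."""
    frame_times, elapsed = [], 0.0
    for size, speed in zip(frame_sizes, est_speeds):
        elapsed += size / speed                       # Rn (cumulative)
        frame_times.append(elapsed + receive_time_a)  # An = Rn + a
    return frame_times

# Two 100 KB un-downloaded frames at 200 and 400 KB/s; a = 1.0 s is the
# receive time of the last selected frame.
print(estimated_download_frame_times([100, 100], [200.0, 400.0], 1.0))
# -> [1.5, 1.75]
```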
In some embodiments, the estimated download frame times may further be shown in a statistical graph based on the data obtained in the above steps.
Referring to fig. 5, fig. 5 is a graph of estimated download frame times according to the present invention. In fig. 5, the horizontal axis represents the estimated download frame time and the vertical axis represents the frame size; the vertical lines from left to right represent the first, second, …, Z-th un-downloaded video frames, and the point where the estimated download frame time is 0 may represent the download completion time of the first of the selected video frames.
In this embodiment, a specific and preferred method for calculating the estimated download frame time is provided: the estimated download frame times of the un-downloaded video frames are finally obtained from the selected frame intervals and the selected frame sizes of the selected video frames.
Embodiment Three:
Based on the same inventive concept, referring to fig. 6, fig. 6 is a schematic flow chart of a third embodiment of the video stutter recognition method according to the present invention; the method is used in a terminal device and includes the following steps:
Step S31: obtaining a time difference based on the playing time and the receiving time.
It should be noted that, in this embodiment, the receiving time and the playing time of a selected video frame are separated by a time difference, which is typically taken for the last of the selected video frames.
Step S32: when the time difference is less than or equal to the target frame duration, obtaining the estimated playing frame time based on the target frame duration and the playing time.
The frame duration of the un-downloaded video frames is the target frame duration. Specifically, the step of obtaining the estimated playing frame time based on the target frame duration and the playing time includes: obtaining the estimated playing frame time from the target frame duration and the playing time by using formula two;
The formula II is as follows:
Px=xT1+T0
Wherein Px is the estimated playing frame time of the x-th video frame in the un-downloaded video frames, T0 is the playing time, and T1 is the target frame duration.
It can be understood that, when the time difference is less than or equal to the target frame duration, the frame decoding/processing duration fits within one frame duration and does not affect playback of the frame, so the estimated playing frame time can be computed with formula two.
Step S33: when the time difference is greater than the target frame duration, obtaining the frame processing speed of the selected video frame by using the selected frame size and the time difference.
Step S34: determining the frame processing speed as the expected frame processing speed of the un-downloaded video frames.
Step S35: obtaining the estimated playing frame time by using the playing time, the expected frame processing speed and the target frame size.
Specifically, the difference between the receiving time and the playing time of a selected video frame gives the frame processing duration (i.e., the time difference), and the ratio of the selected frame size to this time difference is taken as the frame processing speed of the selected video frame. Typically this is the frame processing speed of the last selected video frame; in the three-frame example above, it may be the ratio of the third frame's size to the time difference, or the frame processing speed corresponding to any other selected video frame. As for T0, it is normally the playing time of the last selected video frame.
In some examples, if T0 is not the playing time of the last selected video frame but of another selected video frame, then formula two must be adjusted based on the relationship between T0 and the playing time of the last selected video frame.
In addition, the step of obtaining the estimated playing frame time by using the playing time, the expected frame processing speed and the target frame size includes: obtaining the expected processing duration of the un-downloaded video frames from the target frame size and the expected frame processing speed; and obtaining the estimated playing frame time from the playing time and the expected processing duration by using formula three;
The formula III is:
Px=xT2+T0
Wherein T2 is the expected processing duration: the expected processing duration of each un-downloaded video frame is the ratio of its target frame size to the expected frame processing speed.
According to the two cases above, the estimated playing frame times of all the un-downloaded video frames are obtained.
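The two cases (formulas two and three) can be sketched together as follows; the parameter names and numeric values are assumptions for the example:

```python
def estimated_play_frame_times(num_frames, T0, receive_time, T1,
                               selected_frame_size, target_frame_size):
    """Px = x*T + T0. When the receive->play time difference fits within
    the target frame duration T1, T = T1 (formula two); otherwise T is the
    expected processing duration T2 = target frame size / frame processing
    speed, where the processing speed is the selected frame size divided
    by the time difference (formula three)."""
    diff = T0 - receive_time                      # frame processing delay
    if diff <= T1:
        T = T1                                    # formula two
    else:
        processing_speed = selected_frame_size / diff
        T = target_frame_size / processing_speed  # formula three (T2)
    return [x * T + T0 for x in range(1, num_frames + 1)]

# A 0.125 s delay fits within the 0.25 s frame duration -> formula two.
print(estimated_play_frame_times(3, T0=1.0, receive_time=0.875, T1=0.25,
                                 selected_frame_size=120,
                                 target_frame_size=110))
# -> [1.25, 1.5, 1.75]
```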
In some embodiments, the estimated playing frame times may likewise be shown in a statistical graph based on the data obtained in the foregoing steps.
Referring to fig. 7, fig. 7 is a graph of estimated playing frame times according to the present invention. In fig. 7, the horizontal axis represents the estimated playing frame time and the vertical axis represents the frame size; the vertical lines from left to right represent the first, second, …, Z-th un-downloaded video frames, and the point where the estimated playing frame time is 0 may represent the download completion time of the first of the selected video frames.
In addition, for step S15, in order to clearly show whether the estimated playing frame time and estimated download frame time of any un-downloaded video frame satisfy the preset condition, a comparison chart may be drawn from the estimated playing frame time graph and the estimated download frame time graph.
Referring to fig. 8, fig. 8 is a comparison of the estimated playing frame time graph and the estimated download frame time graph according to the present invention. In fig. 8, the horizontal axis represents time and the vertical axis represents the frame size; the vertical lines from left to right represent the first, second, …, Z-th un-downloaded video frames, and the point where the estimated playing frame time is 0 may represent the download completion time of the first of the selected video frames. Comparing the two curves in fig. 8, there is a gap between the estimated download frame time and the estimated playing frame time of the first un-downloaded video frame (this gap is the time difference).
In addition, referring to fig. 8, if a video frame's estimated playing frame time falls before its estimated download frame time, that frame is a stuttered video frame. In fig. 8, no video frame has its estimated playing frame time before its estimated download frame time.
In this embodiment, a specific method for obtaining the estimated playing frame time and the estimated download frame time is provided; moreover, a comparison chart drawn from the two time graphs can visually and intuitively show whether any video frame will stutter.
Referring to fig. 9, fig. 9 is a block diagram of a first embodiment of a video stutter recognition apparatus according to the present invention; based on the same inventive concept as the previous embodiments, the apparatus is used in a terminal device and includes:
a receiving module 10, configured to receive video frame summary information of a target video;
a first obtaining module 20, configured to obtain, while the target video is being received, estimated download frame times of the un-downloaded video frames in the target video based on the video frame summary information and a receiving time, where the receiving time is the time at which a selected video frame finishes downloading and the selected video frames are downloaded video frames of the target video;
A determining module 30, configured to determine a playing time of the selected video frame when the target video is played;
a second obtaining module 40, configured to obtain estimated playing frame times of the un-downloaded video frames based on the video frame summary information, the playing time and the receiving time;
and a third obtaining module 50, configured to obtain the stutter recognition result of the target video based on the estimated download frame time and the estimated playing frame time.
It should be noted that, since the steps executed by the apparatus of this embodiment are the same as those of the foregoing method embodiments, for its specific implementations and the technical effects it can achieve, reference may be made to the foregoing embodiments, which are not repeated here.
The foregoing description is only of the optional embodiments of the present invention, and is not intended to limit the scope of the invention, and all the equivalent structural changes made by the description of the present invention and the accompanying drawings or the direct/indirect application in other related technical fields are included in the scope of the invention.

Claims (9)

1. A video stutter recognition method, the method comprising:
receiving video frame summary information of a target video; the video frame summary information comprises the frame duration, frame sequence number and frame size of all video frames in the target video;
when receiving the target video, obtaining estimated download frame times of the video frames not yet downloaded in the target video based on the video frame summary information and a receiving time, wherein the receiving time is the time when a selected video frame is downloaded, and the selected video frame is a downloaded video frame in the target video;
determining the playing time of the selected video frame when the target video is played;
obtaining estimated playing frame times of the un-downloaded video frames based on the video frame summary information, the playing time and the receiving time;
obtaining a stutter recognition result of the target video based on the estimated download frame time and the estimated playing frame time;
wherein the obtaining the stutter recognition result of the target video based on the estimated download frame time and the estimated playing frame time comprises:
judging whether there exists, among the un-downloaded video frames, a stuttered video frame whose estimated playing frame time and estimated download frame time satisfy a preset condition;
if the stuttered video frame exists, obtaining, based on the stuttered video frame, a stutter recognition result indicating that the target video stutters.
2. The method of claim 1, wherein the video frame summary information comprises a selected frame size of the selected video frame; and the obtaining, based on the video frame summary information and the receiving time, estimated download frame times of the un-downloaded video frames in the target video comprises:
Obtaining a selected frame interval of the selected video frame based on the receive time;
determining a download speed of the selected video frame using the selected frame interval and the selected frame size;
determining a download speed ratio of the selected video frame using the download speed;
determining the arrival duration of the un-downloaded video frame by using the download speed ratio;
and obtaining the estimated download frame time based on the receiving time and the arrival duration.
3. The method of claim 2, wherein the video frame summary information further comprises a target frame duration of the un-downloaded video frame; and the obtaining, based on the video frame summary information, the playing time and the receiving time, the estimated playing frame time of the un-downloaded video frame comprises:
Obtaining a time difference based on the playing time and the receiving time;
And when the time difference is smaller than or equal to the target frame duration, acquiring the estimated playing frame time based on the target frame duration and the playing time.
4. The method of claim 3, wherein the video frame summary information further comprises a target frame size of the selected video frame; after the time difference is obtained based on the playing time and the receiving time, the method further comprises:
When the time difference is greater than the target frame duration, obtaining a frame processing speed of the selected video frame by using the selected frame size and the time difference;
Determining the frame processing speed as an expected frame processing speed of the un-downloaded video frame;
And obtaining the estimated playing frame time by using the playing time, the expected frame processing speed and the target frame size.
5. The method of claim 2, wherein said determining the arrival duration of the un-downloaded video frame by using the download speed ratio comprises:
obtaining a download speed expected ratio of the un-downloaded video frames based on the download speed ratio;
based on the download speed expected ratio, obtaining an estimated download speed of the un-downloaded video frame;
and obtaining the arrival duration based on the estimated download speed.
6. The method of claim 5, wherein said obtaining the download speed expected ratio of the un-downloaded video frames based on the download speed ratio comprises:
determining the download speed expected ratio of the n-th frame in the un-downloaded video frames based on the download speed ratio and the temporal position relationship between the n-th frame in the un-downloaded video frames and a preset reference video frame, wherein the preset reference video frame is the m-th frame in the un-downloaded video frames, and n and m are integers greater than or equal to 1.
7. A video stutter recognition apparatus, the apparatus comprising:
a receiving module, configured to receive video frame summary information of a target video; the video frame summary information comprises the frame duration, frame sequence number and frame size of all video frames in the target video;
a first obtaining module, configured to obtain, when the target video is received, estimated download frame times of the video frames not yet downloaded in the target video based on the video frame summary information and a receiving time, wherein the receiving time is the time when a selected video frame is downloaded, and the selected video frame is a downloaded video frame in the target video;
a determining module, configured to determine the playing time of the selected video frame when the target video is played;
a second obtaining module, configured to obtain estimated playing frame times of the un-downloaded video frames based on the video frame summary information, the playing time and the receiving time;
a third obtaining module, configured to obtain a stutter recognition result of the target video based on the estimated download frame time and the estimated playing frame time;
The third obtaining module is further configured to determine whether a video frame exists in the un-downloaded video frame, where the video frame has a time bit of the predicted playing frame and a time bit of the predicted downloading frame satisfy a preset condition; if the stuck video frame exists, based on the stuck video frame, a stuck identification result of the target video with stuck is obtained.
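The stutter check performed by the third obtaining module can be sketched as below. The concrete "preset condition" is an assumption here: a frame is flagged as stuck when its estimated download-completion time falls after the time it is needed for playback. All names (`find_stuck_frames`, `recognize_stutter`) are illustrative, not the patent's own identifiers.

```python
def find_stuck_frames(est_play_times, est_download_times):
    """Return the sequence numbers of un-downloaded frames expected to stall.

    Both arguments map frame sequence numbers to estimated times in seconds:
    est_play_times    - when each frame is due to be played,
    est_download_times - when each frame is expected to finish downloading.
    """
    stuck = []
    for frame_no, play_t in est_play_times.items():
        dl_t = est_download_times.get(frame_no)
        # Assumed preset condition: the frame arrives after it is needed.
        if dl_t is not None and dl_t > play_t:
            stuck.append(frame_no)
    return stuck

def recognize_stutter(est_play_times, est_download_times):
    """Produce a stutter recognition result for the target video."""
    stuck = find_stuck_frames(est_play_times, est_download_times)
    return {"stutter": bool(stuck), "stuck_frames": stuck}
```

Under this reading, the apparatus predicts stutter before it happens: whenever any un-downloaded frame's estimated arrival lags behind its scheduled play time, playback at that frame is expected to stall.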
8. A terminal device, characterized in that the terminal device comprises: a memory, a processor, and a video stutter recognition program stored on the memory and runnable on the processor, wherein the video stutter recognition program, when executed by the processor, implements the steps of the video stutter recognition method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a video stutter recognition program which, when executed by a processor, implements the steps of the video stutter recognition method of any one of claims 1 to 6.
CN202110977961.8A 2021-08-24 2021-08-24 Video stutter recognition method and apparatus, terminal device and storage medium Active CN113784216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110977961.8A CN113784216B (en) 2021-08-24 2021-08-24 Video stutter recognition method and apparatus, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110977961.8A CN113784216B (en) 2021-08-24 2021-08-24 Video stutter recognition method and apparatus, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN113784216A CN113784216A (en) 2021-12-10
CN113784216B true CN113784216B (en) 2024-05-31

Family

ID=78839007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110977961.8A Active CN113784216B (en) 2021-08-24 2021-08-24 Video stutter recognition method and apparatus, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN113784216B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114401447B (en) * 2021-12-20 2024-08-23 Beijing ByteDance Network Technology Co., Ltd. Video stutter prediction method, apparatus, device and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100782875B1 (en) * 2007-06-04 2007-12-06 주식회사 셀런 Method for preventing stoppage of playing contents in video on demand service and set top box of the same
CN107018379A (en) * 2017-04-25 2017-08-04 北京东土科技股份有限公司 The transmission method and device of a kind of video flowing
CN108696771A (en) * 2017-04-11 2018-10-23 上海谦问万答吧云计算科技有限公司 A kind of video broadcasting method and device
CN109982159A (en) * 2017-12-27 2019-07-05 华为技术有限公司 The method and terminal of online playing stream media
CN110784760A (en) * 2019-09-16 2020-02-11 清华大学 Video playing method, video player and computer storage medium
CN111031347A (en) * 2019-11-29 2020-04-17 广州市百果园信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN112019916A (en) * 2020-08-26 2020-12-01 广州市百果园信息技术有限公司 Video downloading method, device, server and storage medium
CN112637631A (en) * 2020-12-17 2021-04-09 清华大学 Code rate determining method and device, electronic equipment and storage medium
CN112822521A (en) * 2020-12-30 2021-05-18 百果园技术(新加坡)有限公司 Code rate control method, device and equipment for audio and video transmission and storage medium
CN112995654A (en) * 2021-02-08 2021-06-18 咪咕音乐有限公司 Video playing pause detection method and device, server and readable storage medium
WO2021159609A1 (en) * 2020-02-11 2021-08-19 深圳壹账通智能科技有限公司 Video lag identification method and apparatus, and terminal device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8689267B2 (en) * 2010-12-06 2014-04-01 Netflix, Inc. Variable bit video streams for adaptive streaming
US20130070051A1 (en) * 2011-09-20 2013-03-21 Cheng-Tsai Ho Video encoding method and apparatus for encoding video data inputs including at least one three-dimensional anaglyph video, and related video decoding method and apparatus
TWI528798B (en) * 2012-10-11 2016-04-01 緯創資通股份有限公司 Streaming data downloading method and computer readable recording medium thereof
US10389785B2 (en) * 2016-07-17 2019-08-20 Wei-Chung Chang Method for adaptively streaming an audio/visual material


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XAI Models for Quality of Experience Prediction in Wireless Networks; Alessandro Renda et al.; 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE); 2021-08-05; full text *
An intelligent dynamic bandwidth adjustment technique for Internet TV; Jiang Zhongzheng et al.; Information & Communications; 2020-10-16 (No. 7); full text *
ISP-side video stutter detection based on user behavior; Yu Shushi; China Masters' Theses Full-text Database; 2020-09-15 (No. 9); full text *

Also Published As

Publication number Publication date
CN113784216A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
US10783364B2 (en) Method, apparatus and device for waking up voice interaction function based on gesture, and computer readable medium
CN108540965B (en) Internet of things communication method and device based on LoRa technology and storage medium
WO2015035870A1 (en) Multiple cpu scheduling method and device
CN112883036B (en) Index creation method, device, storage server and storage medium
CN112464095B (en) Message pushing method, device, terminal and storage medium
CN113784216B (en) Video clamping and recognizing method and device, terminal equipment and storage medium
WO2018161969A1 (en) Broadcast queue adjustment method and apparatus, and terminal device
KR20200106178A (en) Method and apparatus for determining listening information in search space
CN112235082A (en) Communication information transmission method, device, equipment and storage medium
CN112351097A (en) Device control method, device, sending end and storage medium
CN111010740B (en) System information sending and receiving method, mapping method, network equipment and terminal
CN112612526B (en) Application program control method, device, terminal equipment and storage medium
CN109753262B (en) Frame display processing method and device, terminal equipment and storage medium
CN111949187B (en) Electronic whiteboard content editing and sharing method, system, equipment and server
CN111628801B (en) Radio frequency front-end device control method and user equipment
CN110972320A (en) Receiving method, sending method, terminal and network side equipment
CN112738726A (en) Positioning method, positioning device, terminal and storage medium
CN114598876B (en) Motion compensation method and device for dynamic image, terminal equipment and storage medium
CN114265645B (en) Information display method, device, terminal and storage medium
CN114546171A (en) Data distribution method, data distribution device, storage medium and electronic equipment
CN112035036A (en) Electronic whiteboard sharing method, system, terminal equipment and storage medium
CN108693951B (en) Display content updating method and device
CN112468870A (en) Video playing method, device, equipment and storage medium
CN112423004B (en) Video data transmission method, device, transmitting end and storage medium
CN112437333B (en) Program playing method, device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant