CN117061827B - Image frame processing method, device, equipment and storage medium - Google Patents

Image frame processing method, device, equipment and storage medium

Info

Publication number
CN117061827B
Authority
CN
China
Prior art keywords
frame
image frame
jitter
parameter
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311039708.3A
Other languages
Chinese (zh)
Other versions
CN117061827A (en)
Inventor
黄浩成 (Huang Haocheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kaidelian Software Technology Co ltd
Original Assignee
Guangzhou Kaidelian Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kaidelian Software Technology Co ltd
Priority to CN202311039708.3A
Publication of CN117061827A
Application granted
Publication of CN117061827B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/426 Internal components of the client; Characteristics thereof
    • H04N 21/42607 Internal components of the client; Characteristics thereof for processing the incoming bitstream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an image frame processing method, an image frame processing device, image frame processing equipment and a storage medium. The method comprises the following steps: acquiring a first jitter parameter of the image frames output by a video decoding module in a current first time window and a second jitter parameter of the image frames output by the video decoding module in a current second time window, wherein the video decoding module is used for outputting image frames after decoding code stream data, and the time length of the first time window is longer than that of the second time window; and determining a frame sending interval of a target image frame currently to be sent in a jitter buffer according to the first jitter parameter and the second jitter parameter, wherein the jitter buffer is used for buffering the image frames output by the video decoding module. The processing method has the advantages of small performance loss, low processing complexity and high fault tolerance; it improves the smooth-sending performance, thereby reducing stuttering of the video picture and improving the user experience.

Description

Image frame processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing an image frame.
Background
In video services, image frames must be sent smoothly to ensure that the pictures displayed to the user play without stuttering. Smooth sending is the main function of anti-jitter: it smoothes image frames whose original inter-frame intervals are too large or too small into a uniform inter-frame interval.
At present, anti-jitter is mainly implemented by placing a jitter buffer in front of the video decoding (VDEC) module of a video playback device. The main process is as follows: a copy of the code stream data arriving from the network is placed into the jitter buffer; the VDEC module of the video playback device reads the code stream data from the jitter buffer, decodes it into image frames, and sends the image frames to the next module. The core idea of the jitter buffer is to trade increased delay for fluency of the video service. When the network is unstable, i.e., jitter occurs, the buffer length is increased and more data is cached to cope with jitter that may occur in the future; when the network is stable, the buffer length is reduced and less data is cached, which lowers the end-to-end delay of the video and improves real-time performance.
However, in the above process, on the one hand, a memory copy of the code stream data is required, which incurs a large performance loss. On the other hand, during the operation of the jitter buffer, the frame types corresponding to the code stream data must also be considered; different frame types require different buffering schemes, which makes the jitter buffer implementation more complex and reduces its fault tolerance.
Disclosure of Invention
The invention provides an image frame processing method, an image frame processing device, image frame processing equipment and a storage medium, which are used for solving the technical problems of high performance loss and low fault tolerance of the existing image frame processing method.
According to an aspect of the present invention, there is provided an image frame processing method including:
Acquiring a first jitter parameter of the image frames output by a video decoding module in a current first time window and a second jitter parameter of the image frames output by the video decoding module in a current second time window; the video decoding module is used for outputting image frames after decoding code stream data, and the time length of the first time window is longer than that of the second time window;
determining a frame sending interval of a target image frame to be sent currently in a jitter buffer according to the first jitter parameter and the second jitter parameter; the jitter buffer area is used for buffering the image frames output by the video decoding module.
According to another aspect of the present invention, there is provided an image frame processing apparatus including:
The first acquisition module is used for acquiring a first jitter parameter of the image frames output by the video decoding module in the current first time window and a second jitter parameter of the image frames output by the video decoding module in the current second time window; the video decoding module is used for outputting image frames after decoding code stream data, and the time length of the first time window is longer than that of the second time window;
the first determining module is used for determining a frame sending interval of a target image frame to be sent currently in a jitter buffer according to the first jitter parameter and the second jitter parameter; the jitter buffer area is used for buffering the image frames output by the video decoding module.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor;
A video decoding module in communication with the at least one processor;
A jitter buffer communicatively coupled to the at least one processor and the video decoding module; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image frame processing method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute an image frame processing method according to any one of the embodiments of the present invention.
According to the technical scheme provided by the embodiments of the invention, on the one hand, the jitter buffer is arranged after the video decoding module and caches the image frames output by the video decoding module; no memory copy is needed, only the addresses of the image frames are copied into the jitter buffer to buffer the image frames output by the video decoding module. Since the address of an image frame is far smaller than the data amount of the code stream data itself, the image frame processing in this embodiment reduces the performance loss. On the other hand, because buffering happens after VDEC decoding, no distinct buffering schemes are needed for different image frame types, so the processing complexity is low and the fault tolerance is improved. In yet another aspect, the first jitter parameter and the second jitter parameter of the image frames output by the video decoding module in time windows of different lengths are acquired, and the frame sending interval of the target image frame is determined based on both. The jitter condition of the image frames over time windows of different lengths can therefore be taken into account when determining the frame sending interval of the target image frame, so that step smoothing can be implemented; the image frame processing performance, i.e., the smooth-sending performance, is improved, stuttering of the video picture is reduced, and the user experience is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is a schematic diagram of an application scenario of an image frame processing method according to an embodiment of the present invention;
fig. 1B is a schematic diagram of another application scenario of an image frame processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of processing an image frame in the related art;
FIG. 3 is a flowchart illustrating an image frame processing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an image frame processing device in the image frame processing method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of a current first time window and a current second time window in an image frame processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating an image frame processing method according to another embodiment of the present invention;
fig. 7 is a flowchart of an image frame processing method according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image frame processing apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device implementing an image frame processing method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1A is a schematic diagram of an application scenario of an image frame processing method according to an embodiment of the present invention. As shown in fig. 1A, the image frame processing method provided in the present embodiment can be applied to the video playback device 21. The video playback device 21 acquires the code stream data from the video input device 22, decodes the code stream data to form image frames, and processes and displays the image frames.
Fig. 1B is a schematic diagram of another application scenario of an image frame processing method according to an embodiment of the present invention. As shown in fig. 1B, the image frame processing method provided in this embodiment may be applied to the recording and playback host 23. The recording and playback host 23 acquires code stream data from the video cameras 24, of which there may be several. The recording and playback host 23 selects one path of code stream data from among the code stream data captured by the plurality of cameras 24, decodes it to form image frames, and processes and displays the image frames. The recording and playback host 23 may be, for example, a recording host in the educational recording industry.
In the above application scenarios, a large amount of random jitter may be introduced by the network transmission between the video input device 22 and the video playback device 21, or between the video camera 24 and the recording and playback host 23, so that the displayed image frames may stutter.
Fig. 2 is a schematic diagram of processing an image frame in the related art. As shown in fig. 2, the current video playback device includes a jitter buffer, a VDEC module, and a video processing subsystem (Video Process Sub-System, abbreviated as VPSS) connected in sequence. The code stream data arriving from the network is copied into the jitter buffer. The VDEC module decodes the code stream data obtained from the jitter buffer into red-green-blue (RGB) or luminance-chrominance (YUV) image frames. The VPSS performs unified preprocessing on the decoded image frames and then performs video output (Video Out, abbreviated as VO). The core idea of the jitter buffer is to trade increased delay for fluency of the video service. When the network is unstable, i.e., jitter occurs, the buffer length is increased and more data is cached to cope with jitter that may occur in the future; when the network is stable, the buffer length is reduced and less data is cached, which lowers the end-to-end delay of the video and improves real-time performance.
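As a concrete illustration, a minimal sketch of this related-art adaptive policy follows (Python; the class name, step sizes, and threshold are illustrative assumptions, not taken from the patent):

    # Sketch of the related-art pre-decoder jitter buffer policy (assumed details).
    class PreDecodeJitterBuffer:
        def __init__(self, min_len=2, max_len=30):
            self.packets = []              # memory copies of code stream data
            self.target_len = min_len      # current buffer length target
            self.min_len, self.max_len = min_len, max_len

        def on_jitter_sample(self, jitter_ms, stable_below_ms=20):
            if jitter_ms > stable_below_ms:
                # Network unstable: grow the buffer to absorb future jitter.
                self.target_len = min(self.target_len + 1, self.max_len)
            else:
                # Network stable: shrink the buffer to cut end-to-end delay.
                self.target_len = max(self.target_len - 1, self.min_len)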
However, in this manner, on the one hand, a memory copy of the code stream data into the jitter buffer is required, and the performance loss is large. The memory copy here refers to copying the code stream data itself; because the data amount of the code stream data is large, the memory copy causes a large performance loss. On the other hand, during the operation of the jitter buffer, the frame types corresponding to the code stream data must also be considered, and different frame types require different buffering schemes. For example, sequence parameter set (SPS), picture parameter set (PPS), and instantaneous decoding refresh (IDR) frames do not require smooth sending; these three frames need to be combined into one frame and sent together. However, forward predictive coded frames (P frames) and bi-directionally predictive coded frames (B frames) do require smooth sending. Therefore, the buffering of the code stream data corresponding to SPS, PPS, and IDR frames differs from that corresponding to P frames and B frames. This makes the jitter buffer implementation more complex and reduces its fault tolerance.
The embodiment of the invention provides an image frame processing method, which is used for solving the problems in the related art, reducing the performance loss in the image frame processing process and reducing the complexity to improve the fault tolerance.
Fig. 3 is a flowchart illustrating an image frame processing method according to an embodiment of the invention. The present embodiment is applicable to a case where video playback is performed in a video playback apparatus, and the method may be performed by an image frame processing device. The image frame processing means may be implemented in hardware and/or software, and may be configured in an electronic device, for example, a video playback device. As shown in fig. 3, the method includes the following steps.
Step 301: the method comprises the steps of obtaining a first dithering parameter of an image frame output by a video decoding module in a current first time window and a second dithering parameter of the image frame output by the video decoding module in a current second time window.
The video decoding module is used for outputting image frames after decoding the code stream data, and the time length of the first time window is longer than that of the second time window.
Fig. 4 is a schematic diagram of an image frame processing device in the image frame processing method according to the embodiment of the present invention. As shown in fig. 4, in the image frame processing apparatus of the present embodiment, VDEC, jitter buffer, and VPSS are connected in order. VPSS after preprocessing the image frame, VO is performed.
The video decoding module in this embodiment is a decoding module that supports stream decoding. Stream decoding refers to a manner in which code stream data that does not constitute a complete frame can be decoded. A typical decoder requires that every incoming data block be a frame-complete network abstraction layer unit (Network Abstraction Layer Unit, abbreviated as NALU). However, if an image frame of a video is large, it is divided into multiple Real-time Transport Protocol (RTP) packets for transmission. The video decoding module in this embodiment can decode the code stream data in the input RTP packets.
The video decoding module decodes the code stream data and outputs image frames. The code stream data may be code stream data arriving from the network, and may be, for example, a compressed code stream such as Joint Photographic Experts Group (JPEG), H.264, or H.265. In this embodiment, the image frame output after the video decoding module decodes the code stream data may be a YUV frame or an RGB frame.
In this embodiment, the image frames output by the video decoding module are stored in the jitter buffer. Fig. 4 differs from fig. 2 in the position of the jitter buffer: here, the jitter buffer is placed after the video decoding module and caches the image frames output by the video decoding module. Therefore, in the image frame processing method provided in this embodiment, on the one hand, no memory copy is required; only the address of each image frame is copied into the jitter buffer to buffer the image frames output by the video decoding module. Since the address of an image frame is far smaller than the data amount of the code stream data itself, the image frame processing in this embodiment reduces the performance loss. On the other hand, because buffering happens after VDEC decoding, no distinct buffering schemes are needed for different image frame types, so the processing complexity is low and the fault tolerance is improved. Because the image frames output by the video decoding module are in a unified image format, there are no frame types such as I/P/B frames as in H.264/H.265 compressed streams, nor is there the multi-slice case in which one I frame is divided into a plurality of NALU packets. The process is therefore very simple.
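A minimal sketch of this post-decoder, address-only buffering (Python; the names are illustrative assumptions, and in practice the stored "address" would be a decoder frame handle):

    from collections import deque

    class PostDecodeJitterBuffer:
        """Buffers decoded image frames by reference; pixel data is never copied."""

        def __init__(self):
            self.frames = deque()          # (frame_handle, timestamp) pairs only

        def push(self, frame_handle, timestamp):
            # Only the address/handle of the decoded frame is copied in.
            self.frames.append((frame_handle, timestamp))

        def pop(self):
            return self.frames.popleft() if self.frames else None

        def __len__(self):
            return len(self.frames)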
In step 301, a first jitter parameter of an image frame output by the video decoding module in a current first time window and a second jitter parameter of an image frame output by the video decoding module in a current second time window are obtained.
Further, in order to reduce the delay of the image frames, in step 301, the video decoding module may decode the code stream data at its maximum decoding capability.
The video decoding module in this embodiment may be implemented in hardware.
The first time window and the second time window in this embodiment are windows having different time lengths. The time length of the first time window is greater than the time length of the second time window.
The current first time window in this embodiment refers to a first time window closest to the current time, and the current second time window refers to a second time window closest to the current time.
Fig. 5 is a schematic diagram of a current first time window and a current second time window in an image frame processing method according to an embodiment of the present invention. As shown in fig. 5, assuming that the current time is t, the time length of the current first time window is t1, and the time length of the current second time window is t2. t1 is greater than t2. It is understood that the end times of the current first time window and the current second time window are not necessarily the same. In fig. 5, the end time of the current first time window and the end time of the current second time window are both illustrated as time t.
Illustratively, t1 may be 15 seconds and t2 may be 1 second.
The first jitter parameter in this embodiment is used to indicate the jitter-related parameter of the image frame output by the video decoding module in the current first time window. The second jitter parameter is used for indicating the jitter related parameter of the image frame output by the video decoding module in the current second time window.
In one implementation, the jitter parameter in this embodiment may be the average frame rate over a time window. The average frame rate in this embodiment refers to the number of image frames output by the video decoding module per unit time.
In another implementation, the jitter parameter in this embodiment may be a jitter peak in a time window. The jitter peak is used to indicate the maximum time interval between two adjacent image frames within a time window.
In yet another implementation, the jitter parameter in this embodiment may include an average frame rate over a time window and a jitter peak over the time window.
The time windows refer to the current first time window and the current second time window.
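A sketch of how both jitter parameters could be computed from the decoder's output timestamps over a window (Python; the function and variable names are assumptions):

    def window_jitter_params(output_times, window_len, now):
        """Average frame rate and jitter peak over [now - window_len, now]."""
        in_window = sorted(t for t in output_times if now - window_len <= t <= now)
        avg_frame_rate = len(in_window) / window_len               # frames per second
        gaps = [b - a for a, b in zip(in_window, in_window[1:])]
        jitter_peak = max(gaps) if gaps else 0.0                   # max adjacent-frame interval
        return avg_frame_rate, jitter_peak

    # e.g. with t1 = 15 s and t2 = 1 s:
    # fps1, peak1 = window_jitter_params(times, 15.0, now)   # first jitter parameter
    # fps2, peak2 = window_jitter_params(times, 1.0, now)    # second jitter parameter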
Alternatively, in order to further improve the execution efficiency of step 301, step 301 may be implemented as follows: start a reading thread that acquires, from the video decoding module, the image frames formed by decoding the code stream data, stores the image frames in the jitter buffer, and acquires the first jitter parameter and the second jitter parameter.
Step 302: and determining the frame transmission interval of the target image frame to be transmitted currently in the jitter buffer according to the first jitter parameter and the second jitter parameter.
The jitter buffer area is used for buffering the image frames output by the video decoding module.
In the present embodiment, the frame sending interval of the target image frame currently to be sent refers to the time interval between sending the image frame preceding the target image frame and sending the target image frame.
In this embodiment, the first jitter parameter and the second jitter parameter of the image frames output by the video decoding module in different time windows are obtained, and the frame sending interval of the target image frame is determined based on both. The jitter condition of the image frames over time windows of different lengths can therefore be taken into account when determining the frame sending interval of the target image frame, so that step smoothing can be implemented; the image frame processing performance, i.e., the smooth-sending performance, is improved, stuttering of the video picture is reduced, and the user experience is improved.
Alternatively, to further improve the execution efficiency of step 302, step 302 may be implemented as follows: start a writing thread that determines, according to the first jitter parameter and the second jitter parameter, the frame sending interval of the target image frame currently to be sent in the jitter buffer.
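A sketch of this read-thread/write-thread split (Python; the decoder and VPSS calls are hypothetical placeholders, and the locking details are assumptions):

    import threading
    import time
    from collections import deque

    lock = threading.Lock()
    jitter_buffer = deque()    # decoded-frame handles + timestamps
    output_times = []          # decoder output timestamps for the two windows

    def read_thread(vdec):
        # Fills the jitter buffer and records the data behind both jitter parameters.
        while True:
            frame, ts = vdec.get_frame()            # hypothetical decoder API
            with lock:
                jitter_buffer.append((frame, ts))
                output_times.append(ts)

    def write_thread(vpss, compute_send_interval):
        # Paces frames out of the jitter buffer at the computed interval.
        while True:
            with lock:
                interval = compute_send_interval(output_times, len(jitter_buffer))
            time.sleep(interval)
            with lock:
                item = jitter_buffer.popleft() if jitter_buffer else None
            if item is not None:
                vpss.send(item[0])                  # hypothetical VPSS API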
In one implementation, the jitter parameter includes an average frame rate within a time window. The implementation process of step 302 is: determining whether the variation of the second average frame rate in the second jitter parameter relative to the first average frame rate in the first jitter parameter is greater than a preset variation threshold; when the variation of the second average frame rate relative to the first average frame rate is greater than the preset variation threshold, determining the frame sending interval of the target image frame in the jitter buffer according to the second jitter parameter; and when the variation of the second average frame rate relative to the first average frame rate is less than or equal to the preset variation threshold, determining the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter.
In this implementation manner, in one scenario, because the time length of the current second time window is shorter, a variation of the second average frame rate relative to the first average frame rate that exceeds the preset variation threshold indicates that jitter has suddenly increased. The frame sending interval of the target image frame in the jitter buffer is therefore determined according to the second jitter parameter, so as to adapt to larger jitter peaks more quickly and improve the smoothing effect.
In another scenario, when the variation of the second average frame rate relative to the first average frame rate is less than or equal to the preset variation threshold, the jitter is relatively stable. To improve the smoothing effect, the frame sending interval of the target image frame in the jitter buffer may then be determined according to the first jitter parameter of the longer time window. Because the first time window is longer, the first jitter parameter is more objective and accurate; hence, in this scenario, determining the frame sending interval according to the first jitter parameter yields better smoothness. In addition, in this scenario, determining the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter also keeps the delay of the image frames low.
The preset variation threshold may be, for example, 20%.
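This selection rule, as a sketch (Python; the parameter packaging is an assumption, and the 20% threshold follows the example above):

    def pick_window_params(fps1, peak1, fps2, peak2, change_threshold=0.20):
        """Choose which window's jitter parameters drive the frame sending interval."""
        change = abs(fps2 - fps1) / fps1 if fps1 > 0 else 1.0
        if change > change_threshold:
            return fps2, peak2   # jitter rose suddenly: follow the short second window
        return fps1, peak1       # jitter stable: follow the longer, smoother first window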
Optionally, the jitter parameter further comprises a jitter peak value within the time window, where the jitter peak value is used to indicate the maximum time interval between two adjacent image frames within the time window.
Optionally, determining the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter includes: multiplying the first average frame rate in the first jitter parameter by the first jitter peak value and then applying preset processing, to determine the number of image frames to be stored in the jitter buffer; if the number of image frames stored in the jitter buffer is greater than the number of image frames to be stored, determining the value obtained by decreasing the historical frame sending interval by a preset first value as the frame sending interval of the target image frame; and if the number of image frames stored in the jitter buffer is less than or equal to the number of image frames to be stored, determining the value obtained by increasing the historical frame sending interval by a preset second value as the frame sending interval of the target image frame.
Optionally, determining the frame sending interval of the target image frame in the jitter buffer according to the second jitter parameter includes: multiplying the second average frame rate in the second jitter parameter by the second jitter peak value and then applying preset processing, to determine the number of image frames to be stored in the jitter buffer; if the number of image frames stored in the jitter buffer is greater than the number of image frames to be stored, determining the value obtained by decreasing the historical frame sending interval by the preset first value as the frame sending interval of the target image frame; and if the number of image frames stored in the jitter buffer is less than or equal to the number of image frames to be stored, determining the value obtained by increasing the historical frame sending interval by the preset second value as the frame sending interval of the target image frame.
For example, the preset processing may be any one of the four arithmetic operations; for instance, the value obtained by adding 1 to the product of the second average frame rate and the second jitter peak value in the second jitter parameter may be determined as the number of image frames to be stored in the jitter buffer.
If the number of image frames stored in the jitter buffer is greater than the number of image frames to be stored, the frame sending interval can be reduced to lower the delay; therefore, the value obtained by decreasing the historical frame sending interval by the preset first value is determined as the frame sending interval of the target image frame. If the number of image frames stored in the jitter buffer is less than or equal to the number of image frames to be stored, too few image frames are buffered, so the frame sending interval needs to be increased to enlarge the buffer length of the jitter buffer.
Multiplying the first average frame rate by the first jitter peak value in the first jitter parameter may equivalently be expressed as dividing the first jitter peak value by the first inter-frame length, where the first inter-frame length is the reciprocal of the first average frame rate, i.e., first inter-frame length = 1 / first average frame rate. Correspondingly, multiplying the second average frame rate by the second jitter peak value in the second jitter parameter may be expressed as dividing the second jitter peak value by the second inter-frame length.
The historical frame sending interval may be the frame sending interval of the image frame immediately preceding the target image frame, or may be a statistical value, such as the average, maximum, or median, of the frame sending intervals of a plurality of image frames preceding the target image frame.
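A sketch of this interval adjustment (Python; treating the "preset processing" as adding 1 to the product, and the step sizes themselves, are assumptions; choosing step_up larger than step_down yields the fast-rise/slow-fall behaviour described below):

    def next_send_interval(avg_frame_rate, jitter_peak, buffered_count, prev_interval,
                           step_down=0.001, step_up=0.005):
        """Step the frame sending interval toward the buffer-occupancy target."""
        frames_to_store = int(avg_frame_rate * jitter_peak) + 1   # assumed preset processing
        if buffered_count > frames_to_store:
            # Enough frames buffered: lower the interval slowly to reduce delay.
            return max(0.0, prev_interval - step_down)
        # Too few frames buffered: raise the interval quickly to grow the buffer.
        return prev_interval + step_up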
This implementation can effectively meet the requirements of both the low-delay scenario and the sudden-jitter-increase scenario. It adopts a step-smoothing, fast-rise/slow-fall strategy: the delay is reduced slowly, while a rise in jitter rapidly enlarges the jitter buffer space, so that larger jitter peaks are accommodated faster and the smoothness performance is further improved.
In the image frame processing method provided in this embodiment, on the one hand, the jitter buffer is placed after the video decoding module and caches the image frames output by the video decoding module; no memory copy is required, only the address of each image frame is copied into the jitter buffer to buffer the image frames output by the video decoding module. Since the address of an image frame is far smaller than the data amount of the code stream data itself, the image frame processing in this embodiment reduces the performance loss. On the other hand, because buffering happens after VDEC decoding, no distinct buffering schemes are needed for different image frame types, so the processing complexity is low and the fault tolerance is improved. In yet another aspect, the first jitter parameter and the second jitter parameter of the image frames output by the video decoding module in time windows of different lengths are acquired, and the frame sending interval of the target image frame is determined based on both. The jitter condition of the image frames over time windows of different lengths can therefore be taken into account when determining the frame sending interval of the target image frame, so that step smoothing can be implemented; the image frame processing performance, i.e., the smooth-sending performance, is improved, stuttering of the video picture is reduced, and the user experience is improved.
Fig. 6 is a flowchart of an image frame processing method according to another embodiment of the present invention. This embodiment provides a detailed description of the steps following step 302 based on the embodiment shown in fig. 3 and various alternative implementations. As shown in fig. 6, the image frame processing method provided in the present embodiment includes the following steps.
Step 601: the method comprises the steps of obtaining a first dithering parameter of an image frame output by a video decoding module in a current first time window and a second dithering parameter of the image frame output by the video decoding module in a current second time window.
The video decoding module is used for outputting image frames after decoding the code stream data, and the time length of the first time window is longer than that of the second time window.
Step 602: and determining the frame transmission interval of the target image frame to be transmitted currently in the jitter buffer according to the first jitter parameter and the second jitter parameter.
The jitter buffer area is used for buffering the image frames output by the video decoding module.
The implementation process and technical principle of step 601 and step 301, and step 602 and step 302 are similar, and will not be described herein.
Step 603: after the frame interval of the target image frame, the target image frame is acquired from the shake buffer, and the target image frame is transmitted to VPSS.
The VPSS in this embodiment supports unified preprocessing of the input target image frame, such as denoising and de-interlacing, then performs scaling, sharpening, and other processing on each channel, and finally outputs multiple images with different resolutions.
In this embodiment, after the frame sending interval of the target image frame is determined and that interval has elapsed, the target image frame is acquired from the jitter buffer and sent to the VPSS. The implementation process is simple and efficient, which further reduces the performance loss.
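A sketch of this fig. 6 flow (Python; the VPSS call is a hypothetical placeholder):

    import time

    def send_after_interval(jitter_buffer, interval, vpss):
        time.sleep(interval)                  # wait out the frame sending interval first
        frame, _ = jitter_buffer.popleft()    # then acquire the target image frame
        vpss.send(frame)                      # hand it to the video processing subsystem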
Fig. 7 is a flowchart of an image frame processing method according to another embodiment of the present invention. This embodiment provides a detailed description of steps preceding and following step 302 based on the embodiment shown in fig. 3 and various alternative implementations. As shown in fig. 7, the image frame processing method provided in the present embodiment includes the following steps.
Step 701: the method comprises the steps of obtaining a first dithering parameter of an image frame output by a video decoding module in a current first time window and a second dithering parameter of the image frame output by the video decoding module in a current second time window.
The video decoding module is used for outputting image frames after decoding the code stream data, and the time length of the first time window is longer than that of the second time window.
Step 702: the target image frame is acquired from the jitter buffer.
In this embodiment, there is no timing relationship between the step 702 and the step 701.
Step 703: and determining the frame transmission interval of the target image frame to be transmitted currently in the jitter buffer according to the first jitter parameter and the second jitter parameter.
The jitter buffer area is used for buffering the image frames output by the video decoding module.
In this embodiment, the implementation process and technical principle of step 701 and step 301, and step 703 and step 302 are similar, and will not be described here again.
Step 704: and determining the waiting time of the target image frame according to the frame sending interval and the time stamp of the target image frame obtained from the jitter buffer.
Optionally, one possible implementation of step 704 is: determining the elapsed time of the target image frame according to the timestamp at which the target image frame was acquired from the jitter buffer and the moment at which the frame sending interval was determined; and determining the difference between the frame sending interval and this elapsed time as the frame sending waiting time of the target image frame.
Step 705: after the frame-feed waiting time of the target image frame, the target image frame is sent to VPSS.
In the present embodiment, the target image frame is acquired from the jitter buffer before the frame sending interval of the target image frame has been determined. After the frame sending interval is determined, the frame sending waiting time of the target image frame is determined according to the frame sending interval and the timestamp at which the target image frame was acquired from the jitter buffer. Then, after the frame sending waiting time has elapsed, the target image frame is sent to the VPSS. Compared with the embodiment shown in fig. 6, this avoids the delay of acquiring the target image frame only after the frame sending interval has already elapsed. Therefore, the image frame processing method provided by this embodiment can further reduce the delay of the image frames.
Fig. 8 is a schematic structural diagram of an image frame processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the image frame processing apparatus provided in this embodiment includes the following modules: the first acquisition module 81 and the first determination module 82.
The first obtaining module 81 is configured to obtain a first jitter parameter of an image frame output by the video decoding module in a current first time window and a second jitter parameter of an image frame output by the video decoding module in a current second time window.
The video decoding module is used for outputting image frames after decoding the code stream data, and the time length of the first time window is longer than that of the second time window.
The first determining module 82 is configured to determine a frame transmission interval of the target image frame currently to be transmitted in the jitter buffer according to the first jitter parameter and the second jitter parameter.
The jitter buffer area is used for buffering the image frames output by the video decoding module.
In one embodiment, the apparatus further includes a sending module configured to acquire the target image frame from the jitter buffer after the frame sending interval of the target image frame has elapsed, and to send the target image frame to the VPSS.
In an embodiment, the apparatus further includes a second acquisition module and a second sending module. The second acquisition module is configured to acquire the target image frame from the jitter buffer. The second sending module is configured to: determine the frame sending waiting time of the target image frame according to the frame sending interval and the timestamp at which the target image frame was acquired from the jitter buffer; and send the target image frame to the VPSS after the frame sending waiting time has elapsed.
Optionally, in determining the frame sending waiting time of the target image frame according to the frame sending interval and the timestamp at which the target image frame was acquired from the jitter buffer, the second sending module is configured to: determine the elapsed time of the target image frame according to the timestamp at which the target image frame was acquired from the jitter buffer and the moment at which the frame sending interval was determined; and determine the difference between the frame sending interval and this elapsed time as the frame sending waiting time of the target image frame.
In one embodiment, the jitter parameter comprises an average frame rate within a time window, and the first determining module 82 is configured to: determine whether the variation of the second average frame rate in the second jitter parameter relative to the first average frame rate in the first jitter parameter is greater than a preset variation threshold; when the variation of the second average frame rate relative to the first average frame rate is greater than the preset variation threshold, determine the frame sending interval of the target image frame in the jitter buffer according to the second jitter parameter; and when the variation of the second average frame rate relative to the first average frame rate is less than or equal to the preset variation threshold, determine the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter.
In one embodiment, the jitter parameter further comprises a jitter peak value within the time window, where the jitter peak value is used to indicate the maximum time interval between two adjacent image frames within the time window. In determining the frame sending interval of the target image frame in the jitter buffer based on the first jitter parameter, the first determining module 82 is configured to: multiply the first average frame rate in the first jitter parameter by the first jitter peak value and then apply preset processing, to determine the number of image frames to be stored in the jitter buffer; if the number of image frames stored in the jitter buffer is greater than the number of image frames to be stored, determine the value obtained by decreasing the historical frame sending interval by a preset first value as the frame sending interval of the target image frame; and if the number of image frames stored in the jitter buffer is less than or equal to the number of image frames to be stored, determine the value obtained by increasing the historical frame sending interval by a preset second value as the frame sending interval of the target image frame.
In one embodiment, the first obtaining module 81 is configured to: start a reading thread that acquires, from the video decoding module, the image frames formed by decoding the code stream data, stores the image frames in the jitter buffer, and acquires the first jitter parameter and the second jitter parameter.
In one embodiment, the first determining module 82 is configured to: start a writing thread that determines, according to the first jitter parameter and the second jitter parameter, the frame sending interval of the target image frame currently to be sent in the jitter buffer.
The image frame processing device provided by the embodiment of the invention can execute the image frame processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 9 is a schematic structural diagram of an electronic device implementing an image frame processing method according to an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 9, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14. The electronic device further includes a video decoding module communicatively connected to the at least one processor, and a jitter buffer communicatively connected to both the at least one processor and the video decoding module.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as an image frame processing method.
In some embodiments, the image frame processing method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the image frame processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image frame processing method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; the present invention is not limited in this respect.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. An image frame processing method, comprising:
Acquiring a first jitter parameter of the image frames output by a video decoding module within a current first time window and a second jitter parameter of the image frames output by the video decoding module within a current second time window, wherein the video decoding module is configured to output image frames obtained by decoding bitstream data, and the duration of the first time window is longer than the duration of the second time window;
Determining, according to the first jitter parameter and the second jitter parameter, a frame sending interval of a target image frame currently to be sent in a jitter buffer, wherein the jitter buffer is configured to buffer the image frames output by the video decoding module;
wherein the jitter parameters include an average frame rate within a time window, and determining, according to the first jitter parameter and the second jitter parameter, the frame sending interval of the target image frame currently to be sent in the jitter buffer comprises: determining whether the variation of a second average frame rate in the second jitter parameter relative to a first average frame rate in the first jitter parameter is greater than a preset variation threshold; when the variation of the second average frame rate relative to the first average frame rate is greater than the preset variation threshold, determining the frame sending interval of the target image frame in the jitter buffer according to the second jitter parameter; and when the variation of the second average frame rate relative to the first average frame rate is less than or equal to the preset variation threshold, determining the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter.
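For illustration only, a minimal Python sketch of the window-selection logic recited in claim 1 follows; the names JitterParams, choose_window, and variation_threshold are assumptions made for the sketch, not terms defined in the specification.

    from dataclasses import dataclass

    @dataclass
    class JitterParams:
        avg_frame_rate: float  # average frame rate within the time window (fps)
        jitter_peak: float     # maximum interval between adjacent frames (seconds)

    def choose_window(long_win: JitterParams, short_win: JitterParams,
                      variation_threshold: float) -> JitterParams:
        # If the recent (short) window's average frame rate has drifted from the
        # long window's by more than the threshold, the stream is treated as having
        # changed, and the short-window statistics drive the frame sending interval;
        # otherwise the smoother long-window statistics are used.
        variation = abs(short_win.avg_frame_rate - long_win.avg_frame_rate)
        return short_win if variation > variation_threshold else long_win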
2. The method according to claim 1, further comprising:
After the frame sending interval of the target image frame has elapsed, acquiring the target image frame from the jitter buffer and sending it to a video processing subsystem (VPSS).
3. The method of claim 1, wherein before determining the frame sending interval of the target image frame currently to be sent in the jitter buffer, the method further comprises:
Acquiring the target image frame from the jitter buffer;
and after determining the frame sending interval of the target image frame currently to be sent in the jitter buffer, the method further comprises:
Determining a frame sending wait time of the target image frame according to the frame sending interval and the timestamp at which the target image frame was obtained from the jitter buffer;
After the frame sending wait time of the target image frame has elapsed, sending the target image frame to the VPSS.
4. The method according to claim 3, wherein determining the frame sending wait time of the target image frame according to the frame sending interval and the timestamp at which the target image frame was obtained from the jitter buffer comprises:
Determining the time already waited by the target image frame according to the timestamp at which the target image frame was obtained from the jitter buffer and the moment at which the frame sending interval is determined;
and determining the difference between the frame sending interval and the time already waited as the frame sending wait time of the target image frame.
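The arithmetic of claims 3 and 4 can be sketched as follows; clamping a late frame's wait time to zero is an added assumption not stated in the claims, and frame_send_wait_time and dequeue_ts are illustrative names.

    import time

    def frame_send_wait_time(dequeue_ts: float, frame_send_interval: float) -> float:
        # dequeue_ts: monotonic timestamp recorded when the target image frame
        # was taken out of the jitter buffer.
        already_waited = time.monotonic() - dequeue_ts
        # Remaining hold time before the frame is handed to the VPSS; clamped to
        # zero (an assumption) so that a late frame is sent immediately.
        return max(0.0, frame_send_interval - already_waited)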
5. The method of claim 1, wherein the jitter parameters further include a jitter peak within a time window, the jitter peak indicating the maximum time interval between two adjacent image frames within the time window;
and wherein determining the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter comprises:
Multiplying the first average frame rate in the first jitter parameter by a first jitter peak value, applying preset processing to the product, and determining the result as the number of image frames to be stored in the jitter buffer;
If the number of image frames stored in the jitter buffer is greater than the number of image frames to be stored, determining the value obtained by decreasing the historical frame sending interval by a preset first value as the frame sending interval of the target image frame;
and if the number of image frames stored in the jitter buffer is less than or equal to the number of image frames to be stored, determining the value obtained by increasing the historical frame sending interval by a preset second value as the frame sending interval of the target image frame.
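A sketch of the buffer-depth feedback recited in claim 5 follows; rounding the product up with math.ceil is only one plausible reading of the claim's unspecified "preset processing", and step_down/step_up stand in for the preset first and second values.

    import math

    def update_frame_send_interval(avg_frame_rate: float, jitter_peak: float,
                                   buffered_frames: int, prev_interval: float,
                                   step_down: float, step_up: float) -> float:
        # Target buffer depth: the number of frames expected to arrive within
        # one jitter peak at the measured average frame rate.
        target_frames = math.ceil(avg_frame_rate * jitter_peak)
        if buffered_frames > target_frames:
            # Buffer fuller than target: shorten the interval to drain faster.
            return prev_interval - step_down
        # Buffer at or below target: lengthen the interval to let it refill.
        return prev_interval + step_up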
6. The method of claim 1, wherein acquiring the first jitter parameter of the image frames output by the video decoding module within the current first time window and the second jitter parameter of the image frames output by the video decoding module within the current second time window comprises:
Starting a read thread, which acquires from the video decoding module the image frames formed by decoding the bitstream data, stores the image frames into the jitter buffer, and acquires the first jitter parameter and the second jitter parameter;
and wherein determining, according to the first jitter parameter and the second jitter parameter, the frame sending interval of the target image frame currently to be sent in the jitter buffer comprises:
Starting a write thread, which determines, according to the first jitter parameter and the second jitter parameter, the frame sending interval of the target image frame currently to be sent in the jitter buffer.
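The two-thread structure of claim 6 might be organized as sketched below, with each loop running on its own thread and reusing the helpers sketched above; decoder, stats, and vpss are placeholder objects with assumed interfaces, not APIs from the specification.

    import queue
    import time

    def read_loop(decoder, stats, jitter_buffer: queue.Queue) -> None:
        # Read thread: pull decoded frames from the video decoding module,
        # feed the first/second time-window statistics, and buffer the frame.
        while True:
            frame = decoder.get_frame()    # placeholder: blocks until a frame is decoded
            stats.record(frame.timestamp)  # placeholder: updates both window statistics
            jitter_buffer.put(frame)

    def write_loop(stats, vpss, jitter_buffer: queue.Queue,
                   variation_threshold: float) -> None:
        # Write thread: pace frames out of the jitter buffer toward the VPSS.
        interval = 1.0 / 30.0              # assumed initial interval (30 fps)
        while True:
            frame = jitter_buffer.get()
            params = choose_window(stats.long_window(), stats.short_window(),
                                   variation_threshold)
            interval = update_frame_send_interval(
                params.avg_frame_rate, params.jitter_peak,
                jitter_buffer.qsize(), interval,
                step_down=0.001, step_up=0.001)
            time.sleep(interval)           # claims 3-4 instead wait only the remainder
            vpss.send(frame)               # placeholder: hand the frame to the VPSS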
7. An image frame processing apparatus, comprising:
The first acquisition module is configured to acquire a first jitter parameter of the image frames output by the video decoding module within a current first time window and a second jitter parameter of the image frames output by the video decoding module within a current second time window, wherein the video decoding module is configured to output image frames obtained by decoding bitstream data, and the duration of the first time window is longer than the duration of the second time window;
The first determining module is configured to determine, according to the first jitter parameter and the second jitter parameter, a frame sending interval of a target image frame currently to be sent in a jitter buffer, wherein the jitter buffer is configured to buffer the image frames output by the video decoding module;
wherein the jitter parameters include an average frame rate within a time window, and the first determining module is configured to: determine whether the variation of a second average frame rate in the second jitter parameter relative to a first average frame rate in the first jitter parameter is greater than a preset variation threshold; when the variation of the second average frame rate relative to the first average frame rate is greater than the preset variation threshold, determine the frame sending interval of the target image frame in the jitter buffer according to the second jitter parameter; and when the variation of the second average frame rate relative to the first average frame rate is less than or equal to the preset variation threshold, determine the frame sending interval of the target image frame in the jitter buffer according to the first jitter parameter.
8. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor;
A video decoding module in communication with the at least one processor;
A jitter buffer communicatively coupled to the at least one processor and the video decoding module; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image frame processing method of any one of claims 1-6.
9. A computer readable storage medium storing computer instructions for causing a processor to perform the image frame processing method of any one of claims 1-6.
CN202311039708.3A 2023-08-17 2023-08-17 Image frame processing method, device, equipment and storage medium Active CN117061827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311039708.3A CN117061827B (en) 2023-08-17 2023-08-17 Image frame processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117061827A CN117061827A (en) 2023-11-14
CN117061827B true CN117061827B (en) 2024-06-14

Family

ID=88665888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311039708.3A Active CN117061827B (en) 2023-08-17 2023-08-17 Image frame processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117061827B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE353503T1 * 2001-04-24 2007-02-15 Nokia Corp METHOD FOR CHANGING THE SIZE OF A JITTER BUFFER FOR TIME ALIGNMENT, COMMUNICATIONS SYSTEM, RECEIVER SIDE AND TRANSCODER
KR100754736B1 (en) * 2006-02-10 2007-09-03 삼성전자주식회사 Method and apparatus for reproducing image frames in video receiver system
WO2012154156A1 (en) * 2011-05-06 2012-11-15 Google Inc. Apparatus and method for rendering video using post-decoding buffer
US9812144B2 (en) * 2013-04-25 2017-11-07 Nokia Solutions And Networks Oy Speech transcoding in packet networks
CN105554019B (en) * 2016-01-08 2018-07-24 全时云商务服务股份有限公司 A kind of audio Key dithering system and method
WO2022019874A1 (en) * 2020-07-20 2022-01-27 Google Llc Adaptive resizing of audio jitter buffer based on current network conditions
CN115001632A (en) * 2022-06-09 2022-09-02 咪咕文化科技有限公司 Information transmission method and device, electronic equipment and readable storage medium


Similar Documents

Publication Publication Date Title
US10659847B2 (en) Frame dropping method for video frame and video sending apparatus
US10498786B2 (en) Method and apparatus for adaptively providing multiple bit rate streaming media in server
CN109104610B (en) Real-time screen sharing
WO2017219896A1 (en) Method and device for transmitting video stream
CN110784740A (en) Video processing method, device, server and readable storage medium
US10819994B2 (en) Image encoding and decoding methods and devices thereof
EP2123043A2 (en) Fast channel change on a bandwidth constrained network
CN113068001B (en) Data processing method, device, equipment and medium based on cascade camera
CN110582012B (en) Video switching method, video processing device and storage medium
US11356739B2 (en) Video playback method, terminal apparatus, and storage medium
WO2023226915A1 (en) Video transmission method and system, device, and storage medium
WO2021052500A1 (en) Video image transmission method, sending device, and video call method and device
CN112866746A (en) Multi-path streaming cloud game control method, device, equipment and storage medium
CN114422799B (en) Decoding method and device for video file, electronic equipment and program product
CN113490055A (en) Data processing method and device
CN103596037B (en) A kind of control method of live video stream buffering
CN117061827B (en) Image frame processing method, device, equipment and storage medium
CN110912922B (en) Image transmission method and device, electronic equipment and storage medium
CN115767149A (en) Video data transmission method and device
CN115834884A (en) Method and apparatus for controlling transmission of video stream
CN113824985B (en) Live streaming method, apparatus, device, storage medium and computer program product
CN112929667A (en) Encoding and decoding method, device and equipment and readable storage medium
JP2018514133A (en) Data processing method and apparatus
WO2024000463A1 (en) Video processing method and apparatus and computer readable storage medium
CN117097904A (en) Video coding method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant