CN111491201A - Method for adjusting video code stream and video frame loss processing method - Google Patents

Method for adjusting video code stream and video frame loss processing method

Info

Publication number
CN111491201A
Authority
CN
China
Prior art keywords
code stream
transmission rate
video
current
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010271209.7A
Other languages
Chinese (zh)
Other versions
CN111491201B (en)
Inventor
周强
刘德志
马强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hollyland Technology Co Ltd
Original Assignee
Shenzhen Hollyland Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hollyland Technology Co Ltd
Priority to CN202010271209.7A
Publication of CN111491201A
Application granted
Publication of CN111491201B
Status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637Control signals issued by the client directed to the server or network components
    • H04N21/6373Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64746Control signals issued by the network directed to the server or the client
    • H04N21/64761Control signals issued by the network directed to the server or the client directed to the server
    • H04N21/64769Control signals issued by the network directed to the server or the client directed to the server for rate control

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a method for adjusting a video code stream and a video frame loss processing method. The method for adjusting the video code stream is applied to a wireless image transmission system and comprises the following steps: acquiring a specified transmission index of the current network, wherein the specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate; and determining, according to the specified transmission index, whether to adjust the current code stream parameter of the video coding. The method enables the code stream parameter of the video coding to adapt to changes in the transmission rate of the wireless network: when the transmission rate of the wireless network falls, reducing the code stream parameter reduces the amount of video data that must be transmitted per second, thereby ensuring the real-time performance of video data transmission.

Description

Method for adjusting video code stream and video frame loss processing method
Technical Field
The present application relates to the field of video transmission technologies, and in particular, to a method for adjusting a video code stream and a video frame loss processing method.
Background
A wireless image transmission system generally includes a transmitter and a receiver, and a connection is established between the transmitter and the receiver through a wireless network. When the system works, the shooting equipment transmits shot video data to the transmitter, the transmitter transmits the video data to the receiver through a wireless network, and the receiver outputs the video data to the display equipment to realize remote wireless transmission of the video data.
However, the wireless network environment changes dynamically; when network conditions are poor, the transmission of video data is delayed and video playback can no longer be kept real-time.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a method for adjusting video code streams and a video frame loss processing method.
According to a first aspect of the embodiments of the present application, a method for adjusting a video bitstream is provided, which is applied to a wireless image transmission system, and includes:
acquiring a specified transmission index of the current network, wherein the specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate;
and determining whether to adjust the current code stream parameter of the video coding according to the specified transmission index.
As an optional implementation, the method further comprises:
and if the identification information represents that the transmission rate is in a rising trend and the current transmission rate is less than a first preset value, maintaining the current code stream parameters.
As an optional implementation, the method further comprises:
and if the identification information represents that the transmission rate is in a rising trend and the current transmission rate is greater than the first preset value, increasing the current code stream parameter according to a preset increase.
As an optional implementation manner, the increasing the current code stream parameter according to the preset increase includes:
if the rise amplitude of the transmission rate is greater than a rise amplitude threshold, increasing the current code stream parameter by a first preset increase; if the rise amplitude of the transmission rate is less than or equal to the rise amplitude threshold, increasing the current code stream parameter by a second preset increase; wherein the first preset increase is greater than the second preset increase.
As an optional implementation, the method further comprises:
if the current code stream parameter is determined to be increased, determining that the increased code stream parameter does not exceed the preset upper limit value of the code stream parameter;
if the identification information represents that the transmission rate is in a descending trend and the current transmission rate is greater than a second preset value, maintaining the current code stream parameters; the second preset value is the transmission rate of the code stream corresponding to the upper limit value of the code stream parameter.
As an optional implementation, the method further comprises:
and if the identification information represents that the transmission rate is in a descending trend and the current transmission rate is less than the second preset value, reducing the current code stream parameter according to a preset decrease.
As an optional implementation manner, the reducing the current code stream parameter according to the preset decrease includes:
if the drop amplitude of the transmission rate is greater than a drop amplitude threshold, reducing the current code stream parameter by a first preset decrease; if the drop amplitude of the transmission rate is less than or equal to the drop amplitude threshold, reducing the current code stream parameter by a second preset decrease; wherein the first preset decrease is greater than the second preset decrease.
According to a second aspect of the embodiments of the present application, there is provided a video frame loss processing method applied to a wireless image transmission system, including:
when receiving a video frame sent by a transmitter, determining the current total delay frame number; if the current total delay frame number is greater than a preset delay upper limit, determining to perform frame loss processing on the video data sent by the transmitter; wherein the code stream parameter corresponding to the video data sent by the transmitter is determined according to a specified transmission index of the current network, and the specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate.
As an optional implementation, the determining to perform frame loss processing on the video data sent by the transmitter includes:
determining a frame loss number; the frame loss number is the difference between the total delay frame number and the delay upper limit;
and discarding, according to the determined frame loss number, one or more video frames that are last in time sequence within a group of pictures (GOP).
As an optional implementation, the determining the current total delay frame number includes:
determining a first delay frame number according to the time interval between two successively received video frames;
determining a second delay frame number according to the current frame buffer number;
determining a total delay frame number; the total delay frame number is the sum of the first delay frame number and the second delay frame number.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
In the embodiments of the application, whether to adjust the current code stream parameter of the video coding is determined according to the acquired specified transmission index of the current network, so that the code stream parameter of the video coding can adapt to changes in the transmission rate of the wireless network. Therefore, when the wireless network signal is poor and the transmission rate falls, reducing the code stream parameter reduces the amount of video data that must be transmitted per second, ensuring the real-time performance of video data transmission.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of a wireless image transmission system according to an exemplary embodiment of the present application.
Fig. 2 is a flowchart illustrating a method for adjusting a video bitstream according to an exemplary embodiment of the present application.
Fig. 3 is a flow chart illustrating a video frame loss processing method according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
Next, examples of the present application will be described in detail.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a wireless image transmission system according to an exemplary embodiment of the present application.
The wireless image transmission system generally comprises a transmitter 1 and a receiver 2, and the transmitter 1 and the receiver 2 establish connection through a wireless network. Wherein, the transmitter 1 can be connected with a shooting device, and the receiver 2 can be connected with a display device.
When the system works, shooting equipment transmits shot video data to the transmitter 1, the transmitter 1 transmits the video data to the receiver 2 through a wireless network, and the receiver 2 outputs the video data to display equipment so as to realize remote wireless transmission of the video data.
The environment of the wireless network changes dynamically, so the wireless network signal is not necessarily stable. When the wireless network signal deteriorates, the transmission rate of the wireless network also falls; if the transmission rate can no longer sustain the current code stream parameter of the video coding, video playback stalls and the video picture is no longer real-time.
In order to solve the above problems, the present application provides a method for adjusting a video bitstream, which is applied to a wireless image transmission system and can be performed by a transmitter in the wireless image transmission system. Referring to fig. 2, fig. 2 is a flowchart illustrating a method for adjusting a video bitstream according to an exemplary embodiment of the present application. The method comprises the following steps:
step 201, obtaining the specified transmission index of the current network.
The specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate.
Step 202, determining whether to adjust the current code stream parameter of the video coding according to the specified transmission index.
Since the environment of the wireless network may change dynamically, the transmission rate of the wireless network is also in the process of changing dynamically. In order to determine the variation trend of the transmission rate, the real-time transmission rate of the wireless network can be obtained according to the set frequency. In one example, after determining the transmission rate at the current time, the transmission rate at the current time may be compared with the transmission rate determined at the previous time to determine whether the transmission rate is in an upward trend or a downward trend.
The change trend of the transmission rate can be characterized by the identification information. For example, in one implementation, identification information equal to 1 may indicate that the transmission rate is in an upward trend, and identification information equal to 0 may indicate that the transmission rate is in a downward trend. Of course, this is only an example, and those skilled in the art may choose other forms of identification information, which is not limited in this application.
By analyzing the identification information, the change trend of the transmission rate can be determined. If the transmission rate tends to increase, one may choose to increase the code stream parameter of the video coding. However, if the code stream parameter were adjusted every time the transmission rate increases, it would change too frequently. Because the code stream parameter directly affects the image quality of the video, frequent changes in the parameter mean frequent changes in image quality; to the user the picture appears unstable, which degrades the viewing experience.
Therefore, in order to solve the above problem, a transmission rate threshold, referred to here as the first preset value, may be set for the case in which the transmission rate is rising. If the transmission rate is in a rising trend but the current transmission rate is still less than the first preset value, the transmission rate is still low, and increasing the code stream parameter would not noticeably improve the video image quality; the current code stream parameter can therefore be kept unchanged to preserve the stability of the image quality.
Correspondingly, if the transmission rate is in a rising trend and the current transmission rate is greater than the first preset value, increasing the code stream parameter can noticeably improve the video image quality, so the current code stream parameter may be increased.
Obviously, however, the code stream parameter cannot be increased without limit as the transmission rate rises: different transmission rates support different maximum code stream parameters. An upper limit value of the code stream parameter can therefore be set, and when the current code stream parameter is increased it must be ensured that it does not exceed this upper limit.
In addition, once the video image quality reaches a certain definition it satisfies most users, and there is little benefit in continuing to raise the code stream parameter to improve it further. The upper limit value of the code stream parameter can therefore be set relatively conservatively, leaving a margin between the upper limit and the code stream that the highest transmission rate of the wireless network can support. For example, if the highest transmission rate of the wireless network can support an 18M code stream, the upper limit value of the code stream parameter may be set to 12M, because the video image quality corresponding to a 12M code stream is already sufficiently clear.
With this arrangement, the number of times the code stream parameter is increased when the transmission rate rises is limited, and the number of situations in which it must later be decreased is reduced, so that the overall number of adjustments falls and the video image quality is more stable. Specifically, because the upper limit value of the code stream parameter is kept a certain margin below the code stream that the highest transmission rate can support, a decrease in the transmission rate does not necessarily require the code stream parameter to be decreased.
For example, continuing with the above example, the maximum transmission rate of the wireless network may support 18M codestream parameters, and the upper limit of the codestream parameters is set to 12M. Then, when the transmission rate is increased to a supportable code stream larger than 12M, the code stream parameter will not be increased any more, and the code stream parameter will be maintained at a value of 12M or close to 12M. When the transmission rate is decreased, for example, from being capable of supporting 18M to being capable of supporting 15M code streams, or from being capable of supporting 16M to being capable of supporting 14M code streams, as long as the decreased transmission rate is still greater than the second preset value, the code stream parameters may not be decreased. The second preset value is a transmission rate of the code stream corresponding to the upper limit of the code stream parameter, that is, in the above example, the transmission rate of the 12M code stream can be supported.
Further, when the code stream parameter of the video coding is increased or decreased, if a single adjustment is too large, the code stream parameter changes abruptly, the video image quality suddenly becomes much blurrier or much clearer, and the stability of the image quality is poor.
Therefore, for increasing the code stream parameter, an increase can be preset. When the identification information represents that the transmission rate is in a rising trend and the current transmission rate is greater than the first preset value, the current code stream parameter can be increased by this preset increase. For example, in one specific example the code stream parameter may be increased by 2M per second, and in another by 0.5M per second.
For decreasing the code stream parameter, a decrease can likewise be preset. When the identification information represents that the transmission rate is in a descending trend and the current transmission rate is less than the second preset value, the current code stream parameter can be decreased by this preset decrease. For example, in one specific example the code stream parameter may be decreased by 2M per second, and in another by 0.5M per second.
Because the transmission rate of the wireless network may change by a large or a small amount, a fixed adjustment step for the code stream parameter can cause problems. For example, suppose the current code stream parameter is 2M and the preset increase is 0.5M per second. If at the next moment the transmission rate suddenly rises sharply, to a level that can support a 12M code stream, then increasing the code stream parameter by 0.5M per second would take a long time to reach 12M, and the transmission rate of the current network could not be exploited as quickly as possible.
Another possible situation is that, with a preset decrease of 0.5M per second, the transmission rate suddenly drops sharply, for example from a level that can support a 12M code stream to one that can only support 2M. Decreasing the code stream parameter by only 0.5M per second would then leave video playback stalled for a long time.
If the preset increase or decrease is too large, small changes in the transmission rate are also handled poorly. For example, with a preset decrease of 2M per second, when the transmission rate falls from supporting a 6M code stream to supporting 5.5M and the current code stream parameter is 6M, the parameter must be decreased; after the 2M decrease it becomes 4M, and the transmission rate is under-utilized. Likewise, if the current code stream parameter is 6M, the preset increase is 2M per second, and the transmission rate rises from supporting a 6M code stream to supporting 7.5M, an increase of 2M per second cannot be applied, and again the transmission rate cannot be fully utilized.
To this end, the present application provides an embodiment in which a plurality of different increases can be set for a rising transmission rate, to cope with different magnitudes of increase. In an alternative embodiment, a rise amplitude threshold may be preset. When the transmission rate increases, the rise amplitude can be calculated as the difference between the current transmission rate and the transmission rate determined at the previous time or moment. If the calculated rise amplitude is greater than the rise amplitude threshold, a larger increase may be selected for the current code stream parameter; this increase may be referred to as the first preset increase and may, for example, be 2M per second.
If the calculated rise amplitude is less than or equal to the rise amplitude threshold, a smaller increase may be selected for the current code stream parameter; this increase may be referred to as the second preset increase and may, for example, be 0.5M per second.
Of course, in order to respond more finely to different changes in the transmission rate, two different rise amplitude thresholds and three different preset increases may also be set, which is not specifically limited in the present application.
Accordingly, for a decrease in the transmission rate, a plurality of different decreases may be set to cope with different magnitudes of decrease. In an alternative embodiment, a drop amplitude threshold may be preset. When the transmission rate decreases, the drop amplitude can be calculated as the difference between the transmission rate determined at the previous time or moment and the current transmission rate. If the calculated drop amplitude is greater than the drop amplitude threshold, a larger decrease may be selected for the current code stream parameter; this decrease may be referred to as the first preset decrease and may, for example, be 2M per second.
If the calculated drop amplitude is less than or equal to the drop amplitude threshold, a smaller decrease may be selected for the current code stream parameter; this decrease may be referred to as the second preset decrease and may, for example, be 0.5M per second.
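The decision rules described above can be collected into a short illustrative sketch. The Python snippet below is a minimal sketch only, assuming illustrative names and values (AdjustConfig, first_preset_rate, a 2M large step, a 0.5M small step, and a floor of 0) that are not prescribed by the present application; it simply restates the hold/increase/decrease logic of the preceding paragraphs.

```python
from dataclasses import dataclass


@dataclass
class AdjustConfig:
    first_preset_rate: float     # first preset value: rate threshold used while rising (Mbps)
    second_preset_rate: float    # second preset value: rate that just supports the upper-limit code stream (Mbps)
    bitrate_upper_limit: float   # preset upper limit of the code stream parameter (Mbps)
    rise_amp_threshold: float    # rise amplitude threshold (Mbps)
    drop_amp_threshold: float    # drop amplitude threshold (Mbps)
    large_step: float = 2.0      # first preset increase / first preset decrease (Mbps per adjustment)
    small_step: float = 0.5      # second preset increase / second preset decrease (Mbps per adjustment)


def adjust_bitrate(current_bitrate: float, current_rate: float,
                   previous_rate: float, rising: bool, cfg: AdjustConfig) -> float:
    """Return the code stream parameter for the next adjustment period."""
    if rising:
        if current_rate < cfg.first_preset_rate:
            # rate still low: hold, since a larger code stream would not visibly help
            return current_bitrate
        amplitude = current_rate - previous_rate
        step = cfg.large_step if amplitude > cfg.rise_amp_threshold else cfg.small_step
        # never exceed the preset upper limit of the code stream parameter
        return min(current_bitrate + step, cfg.bitrate_upper_limit)
    else:
        if current_rate > cfg.second_preset_rate:
            # remaining rate still supports the upper-limit code stream: hold
            return current_bitrate
        amplitude = previous_rate - current_rate
        step = cfg.large_step if amplitude > cfg.drop_amp_threshold else cfg.small_step
        return max(current_bitrate - step, 0.0)


# Example: with an upper limit of 12M, a rise in available rate from 10M to 16M
# (amplitude 6M > threshold 3M) increases a 6M code stream by the larger step to 8M.
cfg = AdjustConfig(first_preset_rate=4.0, second_preset_rate=14.0,
                   bitrate_upper_limit=12.0, rise_amp_threshold=3.0, drop_amp_threshold=3.0)
assert adjust_bitrate(6.0, 16.0, 10.0, True, cfg) == 8.0
```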
It is to be understood that, in order to conveniently characterize the transmission rate of the wireless network, in an alternative embodiment a modulation and coding scheme (MCS) index may be used. Different MCS levels correspond to different transmission rates; for example, the MCS levels may range from MCS0 to MCS15, and a higher MCS level indicates a higher transmission rate of the current network. When the MCS level changes, the transmission rate of the current network also changes, so whether the transmission rate has increased or decreased can be determined from the change in MCS level.
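As an illustration of how the specified transmission index might be derived from successive MCS readings, the following sketch maps an MCS level to a transmission rate and produces the identification information (1 for an upward trend, 0 for a downward trend), as in the example encoding above. The rate table is a monotonically increasing placeholder chosen purely for illustration; real values depend on channel bandwidth, guard interval and antenna configuration, and the function name is an assumption.

```python
# Placeholder mapping from MCS level to transmission rate in Mbps; illustrative only.
MCS_RATE_MBPS = {level: 6.5 * (level + 1) for level in range(16)}  # MCS0 .. MCS15


def transmission_index(current_mcs: int, previous_mcs: int):
    """Return (transmission rate, identification information) from two MCS readings.

    An unchanged MCS level is treated as an upward trend here for simplicity.
    """
    rate = MCS_RATE_MBPS[current_mcs]
    rising = 1 if current_mcs >= previous_mcs else 0
    return rate, rising
```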
The above is a detailed description of the method for adjusting a video code stream provided in the present application. The method determines, according to the acquired specified transmission index of the current network, whether to adjust the current code stream parameter of the video coding, so that the code stream parameter can adapt to changes in the transmission rate of the wireless network. When the wireless network signal deteriorates and the transmission rate falls, reducing the code stream parameter reduces the amount of video data that must be transmitted per second, ensuring the real-time performance of video data transmission. When the transmission rate of the wireless network rises, the code stream parameter can be increased accordingly, making maximum use of the available bandwidth and providing clearer video image quality for the user.
Further, during the transmission of video data, video frames are transmitted by the transmitter to the receiver via the wireless network. However, the wireless network is not always stable, and the transmission of video frames is blocked when the wireless network signal degrades. The receiver then fails to receive a video frame at a moment when one should arrive, so the video frames can no longer be played in real time.
To facilitate understanding of the above problems, a specific example is given below.
For example, in a normal wireless network, a receiver receives a video frame at regular intervals. Each time point at which a video frame should be received may be referred to as a time instant, and at a first time instant, the receiver may receive a first frame, which will be played within a playing period of a next frame; at a second time instant, the receiver may receive a second frame, which will be played during the playing period of the next frame.
However, if the wireless network signal is degraded at the third time and the transmission of the video frame is blocked, the receiver cannot receive the third frame at the third time, and can only continue to play the previously received second frame in the playing period of the next frame. If such a situation persists for several moments, the playing of the video frames will be delayed by a non-negligible amount compared to the real scene, and the user cannot be presented with real-time video images.
Therefore, in order to solve the above problems, the present application may further improve on the basis of the foregoing method for adjusting a video stream, so as to better provide a real-time video image for a user.
Specifically, referring to fig. 3, fig. 3 is a flowchart illustrating a video frame loss processing method according to an exemplary embodiment of the present application. The method is equally applicable to a wireless image transmission system, wherein the acts involved may be performed by a receiver. The method comprises the following steps:
step 301, upon receiving a video frame sent by a transmitter, determines a current total delay frame number.
Step 302, determining whether the determined current total delay frame number is greater than a preset delay upper limit, if so, executing step 303, and if not, ending the process.
Step 303, determining to perform frame loss processing on the video data sent by the transmitter.
It should be noted that the code stream parameter corresponding to the video data transmitted by the transmitter may be determined according to the specified transmission index of the current network. The specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate. For a detailed implementation of this part, reference may be made to the description of the method for adjusting a video code stream above, and details are not repeated here.
The video frame loss processing method provided by the application restores real-time playing of the video by discarding some video frames in the video data when playing has fallen behind.
The purpose of frame loss processing is to restore real-time playing of the video frames, so it must first be defined what counts as playing in real time. Because the human eye can only distinguish a limited number of frames per second, when the video frames are played with a delay of only a few frames the user is unlikely to perceive any delay; that is, the user can still regard the video picture as real-time.
Therefore, a delay upper limit can be preset, and when the determined current total delay frame number is greater than the delay upper limit, it is determined that frame loss processing of the video data sent by the transmitter is needed.
There are several possible ways to determine the total delay frame number. In an alternative embodiment, after determining the next video frame to be played, it may be compared in time sequence with the video frame that should be played next, i.e. the frame that would be played next if the network had no delay, to obtain the total delay frame number. When comparing the time sequence of video frames, the presentation time stamp (PTS) of each frame may be used. Alternatively, a GOP sequence number may be assigned to each video frame within a group of pictures (GOP), and the GOP sequence number of a frame reflects its position in the time sequence. To make the PTS or GOP sequence number easy to obtain, the transmitter may write it at a specific position, such as the head of the video frame, when encoding the video data, so that the receiver can parse the data at that position directly to obtain the PTS or GOP sequence number of the received frame.
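A minimal sketch of this comparison is shown below. It assumes, purely for illustration, that the transmitter writes the GOP sequence number as a 32-bit big-endian integer at the head of each encoded frame and that the receiver knows which sequence number would be due if there were no delay; neither the field layout nor the names are specified by the present application.

```python
import struct


def parse_gop_seq(frame: bytes) -> int:
    """Read the GOP sequence number assumed to be stored at the head of the frame."""
    (seq,) = struct.unpack_from(">I", frame, 0)
    return seq


def total_delay_frames_by_sequence(next_to_play: bytes, expected_seq: int) -> int:
    """Total delay = how far the next frame to be played lags behind the frame
    that would be played now if the network had no delay."""
    return max(expected_seq - parse_gop_seq(next_to_play), 0)
```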
The application also provides another alternative implementation. When determining the current total delay frame number, a first delay frame number may be determined according to the time interval between two successively received video frames; a second delay frame number may be determined according to the current frame buffer number; and the current total delay frame number is obtained as the sum of the first delay frame number and the second delay frame number.
For ease of understanding, a specific example is given below. Referring to Table 1, a total of 10 moments are shown. At the 1st moment, the receiver receives the 1st frame; the frame buffer holds no video frame, and the next frame to be played is determined to be the 1st frame. At the 2nd moment, the receiver receives the 2nd frame; the frame buffer holds the 1st frame received at the previous moment, and the next frame to be played is determined to be the 2nd frame. At the 3rd moment, the wireless network is blocked and the receiver fails to receive the 3rd frame; the frame buffer holds the 2nd frame received at the previous moment, so it can only be determined that the 2nd frame is played next. At the 4th moment, the wireless network has not yet recovered, and the situation is the same as at the 3rd moment.
At the 5th moment, the wireless network returns to normal and the receiver receives the 3rd, 4th and 5th frames at once; the 2nd frame is still stored in the frame buffer and the 3rd frame has now been received, so the next frame to be played is determined to be the 3rd frame. At the 6th moment, the receiver receives the 6th frame; the frame buffer holds the 3rd, 4th and 5th frames, and the 4th frame is determined to be played next. At the 7th moment, the wireless network is blocked again and the receiver fails to receive the 7th frame; the frame buffer holds the 4th, 5th and 6th frames, so the next video frame in time sequence, namely the 5th frame, can still be played, and the 5th frame is determined to be played next. At the 8th moment, the wireless network has still not recovered and no video frame is received; the frame buffer holds the 5th and 6th frames, and the 6th frame is determined to be played next.
At the 9th moment, the wireless network recovers and the receiver receives the 7th, 8th and 9th frames at once; the 6th frame is stored in the frame buffer, and the 7th frame is determined to be played next.
TABLE 1
[Table 1 is provided as an image in the original publication; for each of the ten moments it lists the frames received, the frames held in the frame buffer, and the next frame to be played.]
When the total delay frame number is determined, for each moment at which a video frame is received, the first delay frame number may be determined according to the time interval between the two most recently received video frames. For example, when the wireless network signal is good, the interval between two successively received video frames should be 10 ms; at the 5th moment, however, the previous frame was received at the 2nd moment, and the interval from the 2nd moment to the 5th moment is 30 ms, so the first delay frame number is determined to be 2.
As for the frame buffer number, under normal network conditions the frame buffer should hold exactly one video frame at every moment except the 1st. If the frame buffer number is greater than 1, network congestion occurred at some previous moment. In the specific calculation, the second delay frame number is obtained by subtracting 1 from the frame buffer number. At the 5th moment, the frame buffer number is 1, so the second delay frame number is 0. The total delay frame number is the sum of the first delay frame number and the second delay frame number; thus, at the 5th moment, the total delay frame number is 2.
Similarly, at the 6th moment, the first delay frame number is 0, the second delay frame number is 2, and the total delay frame number is 2.
When the total delay frame number is greater than the delay upper limit and it is determined that frame loss processing of the video data sent by the transmitter is needed, the number of frames to drop must first be determined. The frame loss number may be determined in several ways. In one implementation, it is calculated from the total delay frame number and the delay upper limit; specifically, the frame loss number may be the difference between the total delay frame number and the delay upper limit. In that case, after the frames are dropped the video is still played with a small delay equal to the delay upper limit, which is still regarded as real-time. In another implementation, when frame loss processing is required, the total delay frame number may be used directly as the frame loss number, so that after the frames are dropped the video is played with zero delay.
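The interval-plus-buffer estimate and the frame loss count described above can be illustrated as follows. The 10 ms nominal interval and the assumption that the buffer normally holds exactly one frame are taken from the example; the function names are illustrative.

```python
def total_delay_frames(interval_ms: float, nominal_interval_ms: float,
                       frames_buffered: int) -> int:
    """First delay: extra receive intervals that elapsed; second delay: frames
    queued in the buffer beyond the one normally held."""
    first_delay = max(round(interval_ms / nominal_interval_ms) - 1, 0)
    second_delay = max(frames_buffered - 1, 0)
    return first_delay + second_delay


def frame_loss_number(total_delay: int, delay_upper_limit: int) -> int:
    """Frames to drop: only the delay in excess of the preset upper limit."""
    return max(total_delay - delay_upper_limit, 0)


# Checked against the example above: the 5th moment (30 ms interval, 1 frame buffered)
# and the 6th moment (10 ms interval, 3 frames buffered) both give a total delay of 2.
assert total_delay_frames(30, 10, 1) == 2
assert total_delay_frames(10, 10, 3) == 2
assert frame_loss_number(5, 1) == 4
```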
After the frame loss number is determined, it is necessary to decide which frames to drop. Different video frames are encoded differently: an I frame uses intra-frame coding and can be decoded into a complete image on its own, whereas a P frame or a B frame uses inter-frame coding and can only be decoded with reference to other frames. Therefore, when selecting frames to drop, a preferred embodiment is to select the temporally last video frame or frames in the GOP.
It should be noted that a group of pictures (GOP) is the set of frames between two I frames, i.e. the frames from one I frame up to the next constitute one GOP. Since video frames other than I frames generally use forward temporal prediction, meaning that decoding and playing a frame depends on the previous frame, non-I frames are normally played strictly in time sequence and cannot be skipped to. For example, in the foregoing example, although the 3rd, 4th and 5th frames are received at the 5th moment, the most recent 5th frame cannot be played directly, because playing the 5th frame depends on decoding the 4th frame, and playing the 4th frame depends on decoding the 3rd frame; only the next video frame in time sequence, namely the 3rd frame, can be played.
Therefore, when selecting video frames that can be discarded, the frames that come last in time sequence within the GOP should be chosen in order to keep the video picture stable. If the number of frames dropped in one GOP is still less than the determined frame loss number, dropping can continue in the next GOP.
The following is an example provided to aid understanding.
See Table 2. Suppose that, during the transmission of the video frames, the wireless network becomes blocked from the 2nd moment and remains blocked until the 7th moment, when it recovers. At the 7th moment, the receiver receives the 2nd, 3rd, 4th, 5th, 6th and 7th frames at once, while the 1st frame is stored in the frame buffer. According to the determination method described above, the total delay frame number is 5. If the delay upper limit is set to 1, the frame loss number is determined to be 4.
TABLE 2
[Table 2 is provided as an image in the original publication; it shows, for each moment in this example, the frames received, the frames held in the frame buffer, and the next frame to be played.]
Assume that one GOP contains ten video frames, i.e. the 1st to 10th frames form one GOP. If at most four video frames may be discarded in a GOP, then the 7th, 8th, 9th and 10th frames of a GOP are the ones considered for discarding. Thus, in the above example, the 7th frame received at the 7th moment may be discarded directly, while the 2nd, 3rd, 4th, 5th and 6th frames are stored in the frame buffer. At the 8th moment, if the wireless network remains normal, the successfully received 8th frame may also be discarded directly; likewise, the 9th and 10th frames received at the 9th and 10th moments may be discarded directly, as shown in Table 3 below.
It can be seen that at the 12th moment, in the second GOP, the 7th, 8th, 9th and 10th frames of the first GOP have been skipped and the 1st frame of the next GOP is taken directly as the next video frame to be played. The number of delayed frames is thereby reduced to the preset delay upper limit, and video playing is restored to real time.
TABLE 3
[Table 3 is provided as an image in the original publication; it shows the state of the example above after frame dropping is applied at each moment.]
However, if at most two video frames may be discarded in one GOP, then only the 9th and 10th frames of a GOP are considered discardable. In the above example, all the video frames received at the 7th moment would then be stored in the frame buffer; the first frame is not dropped until the 9th frame is received at the 9th moment, and the second is dropped when the 10th frame is received at the 10th moment. Only two frames have been discarded in the current GOP, while the determined frame loss number requires two more. The remaining two frames can be dropped in the next GOP, specifically when the 9th and 10th frames of that GOP are received, completing the required number of dropped frames.
The maximum number of frames that may be discarded in one GOP can thus be limited by a preset value. For example, a preset limit of 2 may be set; if the determined frame loss number exceeds this limit (in the above example, 4 > 2), the excess frame-dropping work (here, the 2 additional frames) is deferred to the next GOP.
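The dropping policy just described — dropping only the temporally last frames of a GOP, capping the number dropped per GOP, and deferring any excess to the next GOP — can be sketched as follows. The GOP length, the per-GOP cap and the notion of a frame's 1-based position within its GOP are assumptions made for illustration, not details prescribed by the present application.

```python
class GopFrameDropper:
    """Drop frames from the tail of each GOP, at most per_gop_cap per GOP,
    carrying any remaining drops over into the following GOP."""

    def __init__(self, gop_length: int, per_gop_cap: int):
        self.gop_length = gop_length
        self.per_gop_cap = per_gop_cap
        self.pending_drops = 0  # frames still to be dropped, in this GOP or later ones

    def request_drops(self, frame_loss_number: int) -> None:
        """Called when the total delay frame number exceeds the delay upper limit."""
        self.pending_drops += frame_loss_number

    def should_drop(self, seq_in_gop: int) -> bool:
        """Decide, for a received frame at 1-based position seq_in_gop in its GOP,
        whether to discard it instead of buffering it. Only the last frames of a
        GOP are candidates, since no frame still to be played depends on them."""
        if self.pending_drops == 0:
            return False
        drops_now = min(self.pending_drops, self.per_gop_cap)
        first_droppable = self.gop_length - drops_now + 1
        if seq_in_gop >= first_droppable:
            self.pending_drops -= 1
            return True
        return False


# With a 10-frame GOP and a cap of 2, a request to drop 4 frames discards the
# 9th and 10th frames of the current GOP and then the 9th and 10th of the next,
# matching the behaviour described in the example above.
dropper = GopFrameDropper(gop_length=10, per_gop_cap=2)
dropper.request_drops(4)
dropped = [seq for gop in range(2) for seq in range(1, 11) if dropper.should_drop(seq)]
assert dropped == [9, 10, 9, 10]
```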
In the video frame loss processing method provided by the application, when video playing is delayed, frame loss processing is performed on specific video frames in the video data so that the discarded frames are skipped rather than played. In this way, with as little impact on the stability of the video picture as possible, the playing delay is reduced and video playback is restored to real time.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A method for adjusting video code stream is applied to a wireless image transmission system, and comprises the following steps:
acquiring a specified transmission index of a current network, wherein the specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate;
and determining whether to adjust the current code stream parameter of the video coding according to the specified transmission index.
2. The method of adjusting video streams of claim 1, further comprising:
and if the identification information represents that the transmission rate is in a rising trend and the current transmission rate is less than a first preset value, maintaining the current code stream parameters.
3. The method of adjusting video streams of claim 2, further comprising:
and if the identification information represents that the transmission rate is in a rising trend and the current transmission rate is greater than the first preset value, increasing the current code stream parameter according to a preset increase.
4. The method of adjusting video streams of claim 3, wherein said increasing the current code stream parameter according to the preset increase comprises:
if the rise amplitude of the transmission rate is greater than a rise amplitude threshold, increasing the current code stream parameter by a first preset increase; if the rise amplitude of the transmission rate is less than or equal to the rise amplitude threshold, increasing the current code stream parameter by a second preset increase; wherein the first preset increase is greater than the second preset increase.
5. The method of adjusting video streams of claim 1, further comprising:
if the current code stream parameter is determined to be increased, determining that the increased code stream parameter does not exceed the preset upper limit value of the code stream parameter;
if the identification information represents that the transmission rate is in a descending trend and the current transmission rate is greater than a second preset value, maintaining the current code stream parameters; the second preset value is the transmission rate of the code stream corresponding to the upper limit value of the code stream parameter.
6. The method of adjusting video streams of claim 5, further comprising:
and if the identification information represents that the transmission rate is in a descending trend and the current transmission rate is less than the second preset value, reducing the current code stream parameter according to a preset decrease.
7. The method of adjusting video streams of claim 6, wherein said reducing the current code stream parameter according to the preset decrease comprises:
if the drop amplitude of the transmission rate is greater than a drop amplitude threshold, reducing the current code stream parameter by a first preset decrease; if the drop amplitude of the transmission rate is less than or equal to the drop amplitude threshold, reducing the current code stream parameter by a second preset decrease; wherein the first preset decrease is greater than the second preset decrease.
8. A video frame loss processing method is applied to a wireless image transmission system and comprises the following steps:
when receiving a video frame sent by a transmitter, determining the current total delay frame number; if the current total delay frame number is greater than a preset delay upper limit, determining to perform frame loss processing on the video data sent by the transmitter; wherein the code stream parameter corresponding to the video data sent by the transmitter is determined according to a specified transmission index of the current network, and the specified transmission index comprises a transmission rate and identification information representing the change trend of the transmission rate.
9. The method of claim 8, wherein said determining to perform frame loss processing on the video data sent by said transmitter comprises:
determining a frame loss number; the frame loss number is the difference between the total delay frame number and the delay upper limit;
and discarding, according to the determined frame loss number, one or more video frames that are last in time sequence within a group of pictures (GOP).
10. The method of claim 8, wherein said determining the current total number of delayed frames comprises:
determining a first delay frame number according to the time interval between two successively received video frames;
determining a second delay frame number according to the current frame buffer number;
determining a total delay frame number; the total delay frame number is the sum of the first delay frame number and the second delay frame number.
CN202010271209.7A 2020-04-08 2020-04-08 Method for adjusting video code stream and video frame loss processing method Active CN111491201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010271209.7A CN111491201B (en) 2020-04-08 2020-04-08 Method for adjusting video code stream and video frame loss processing method

Publications (2)

Publication Number Publication Date
CN111491201A (en) 2020-08-04
CN111491201B (en) 2023-04-25

Family

ID=71798394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010271209.7A Active CN111491201B (en) 2020-04-08 2020-04-08 Method for adjusting video code stream and video frame loss processing method

Country Status (1)

Country Link
CN (1) CN111491201B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1921615A (en) * 2005-04-29 2007-02-28 美国阿尔卡特资源有限合伙公司 System, method, and computer readable medium rapid channel change
US20070150929A1 (en) * 2005-12-26 2007-06-28 Kabushiki Kaisha Toshiba Electronic apparatus and method for controlling data transfer rate in electronic apparatus
CN101159862A (en) * 2007-11-29 2008-04-09 北京中星微电子有限公司 Frame rate control method and device
CN101360058A (en) * 2008-09-08 2009-02-04 华为技术有限公司 Method and apparatus for cache overflow control
CN102036024A (en) * 2009-10-07 2011-04-27 索尼公司 Transmission apparatus and transmission method
CN102325274A (en) * 2011-10-13 2012-01-18 浙江万里学院 A kind of video flowing transfer control method of network bandwidth adaptive
CN103873382A (en) * 2012-12-17 2014-06-18 马维尔国际有限公司 Data frame buffer method and equipment
CN103686449A (en) * 2013-12-31 2014-03-26 大连文森特软件科技有限公司 Caching method of improving video fluency and image quality
CN104702968A (en) * 2015-02-17 2015-06-10 华为技术有限公司 Frame loss method for video frame and video sending device
CN105611309A (en) * 2015-12-22 2016-05-25 北京奇虎科技有限公司 Video transmission method and device
CN106303559A (en) * 2016-08-18 2017-01-04 北京奇虎科技有限公司 A kind of method controlling live video stream and direct broadcast server
CN106954101A (en) * 2017-04-25 2017-07-14 华南理工大学 The frame losing control method that a kind of low latency real-time video Streaming Media is wirelessly transferred
CN107276910A (en) * 2017-06-07 2017-10-20 上海迪爱斯通信设备有限公司 The real-time adjusting apparatus of video code rate and system, video server
CN107438031A (en) * 2017-08-07 2017-12-05 成都三零凯天通信实业有限公司 The audio/video flow transfer control method and system of multichannel network bandwidth adaptive
CN109004972A (en) * 2018-07-13 2018-12-14 深圳市道通智能航空技术有限公司 Data transmission method, device, system and the surface map transmission module of UAV system
CN110933380A (en) * 2019-12-17 2020-03-27 深圳市道通智能航空技术有限公司 Image transmission control method and system and unmanned aerial vehicle

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112399268A (en) * 2020-11-12 2021-02-23 唐桥科技(杭州)有限公司 Real-time streaming media transmission method and device and electronic equipment
CN113315773A (en) * 2021-05-31 2021-08-27 浙江大华技术股份有限公司 Code rate adjusting method and device, electronic equipment and storage medium
CN114422866A (en) * 2022-01-17 2022-04-29 深圳Tcl新技术有限公司 Video processing method and device, electronic equipment and storage medium
CN114422866B (en) * 2022-01-17 2023-07-25 深圳Tcl新技术有限公司 Video processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111491201B (en) 2023-04-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant