CN114760472B - Frame equalization coding and decoding system and method based on strip I - Google Patents

Publication number: CN114760472B (granted publication of application CN202210274463.1A; earlier publication CN114760472A)
Country: China (CN); original language: Chinese (zh)
Inventors: 陈尚武, 倪仰, 杨欣
Applicant and assignee: Hangzhou Xujian Science And Technology Co ltd
Legal status: Active (granted)


Classifications

    • H04N19/124: Quantisation (adaptive coding of digital video signals)
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/172: Adaptive coding in which the coding unit is a picture, frame or field
    • H04N19/503: Predictive coding involving temporal prediction


Abstract

The invention discloses a stripe-I-frame-based frame equalization coding and decoding system and method. Time is divided into slices ("stripes") so that video sources transmit in different stripes and network peaks are avoided. When poor network quality is detected, the maximum image-coding quantization QP is raised, which lowers the network bandwidth used and reduces video stalling. When a frame group's actual image quality is poor because the video source carries much detail, and the maximum of the network quantization estimates shows that the network has headroom, the image quality of the video source is improved by lowering the maximum image-coding quantization QP. When the network quantization estimate at a display end shows that a copy stream is being received poorly, the copy stream's stripe slices are redistributed, so that I-frame transmissions of copy streams of the same video source are staggered, network peaks are avoided, and transmission quality is improved.

Description

Stripe-I-frame-based frame equalization coding and decoding system and method
Technical Field
The invention relates to the field of video encoding and decoding, and in particular to a stripe-I-frame-based equalization coding and decoding system and method.
Background
With the rapid development of mobile internet technology, video is ubiquitous, and video transmission and storage face great challenges. Video encoding and decoding are key technologies for guaranteeing a high-quality video experience: their goal is to achieve the highest possible reconstruction quality and compression ratio within the available computing resources, so as to meet bandwidth and storage constraints. In video coding, I frames are intra-coded frames, while P frames are forward-predicted frames that exploit the temporal redundancy with previously coded frames in the image sequence to greatly reduce the transmitted data volume; P frames also serve as reference frames for later prediction. An encoded I frame is far larger than a P frame, so when multiple video streams are relayed and their I frames happen to be transmitted at the same time, the network is easily overloaded. A method is therefore needed to stagger the I-frame transmission times and improve the robustness of the network.
Disclosure of Invention
The invention aims to provide a stripe-I-frame-based equalization coding and decoding system and method to solve the problems described in the background section.
In order to achieve the above purpose, the present invention provides the following technical solution: a stripe-I-frame-based equalization coding and decoding system, comprising: a video source frame generation module 1, a video source stripe coding and sending module 2, a stripe media receiving module 3, an image QP monitoring module 4, a stripe network quality check module 5, a stripe reception adjustment module 6, a stripe transfer adjustment module 7, a stripe media sending module 8, a display end network quality check module 9, a display end media receiving module 10, and a display end decoding and rendering module 11;
video source frame generation module 1: the video source frame generation module 1 receives the video frame generation time interval from the video source stripe coding and sending module 2 and generates YUV video frame data at that interval;
video source stripe coding and sending module 2: the video source stripe coding and sending module 2 receives YUV video frame data from the video source frame generation module 1 and video stripe adjustment information from the stripe reception adjustment module 6, and encodes and compresses the YUV video frame data. The module computes the video frame generation time interval as the inter-P-frame slice count multiplied by the slice duration, and sends this interval to the video source frame generation module 1. It computes the UTC timestamp at which an I frame is generated as the I-frame generation slice sequence number multiplied by the slice duration. The slice sequence number of the next P frame is the I-frame generation slice sequence number plus the inter-P-frame slice count, and by repeatedly adding the inter-P-frame slice count the module obtains the slice sequence numbers of all P frames in order. Similarly, the next I-frame slice sequence number is the I-frame generation slice sequence number plus the inter-I-frame slice count, and accumulating the inter-I-frame slice count yields the slice sequence numbers of all I frames; when a generated I-frame slice sequence number coincides with a P-frame slice sequence number, that time slice generates only the I frame. The module encodes the YUV video frame data whose UTC timestamps correspond to the I frames and P frames into video compression frames according to the maximum image-coding quantization QP, packetizes each compressed frame into fixed-size packets with marked sequence numbers, and sends the packets, packet sequence numbers and slice sequence numbers to the stripe media receiving module 3;
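As a minimal sketch of the slice arithmetic just described, assuming integer millisecond timestamps; all function and variable names are illustrative, not identifiers from the patent:

```python
# Sketch of the time-slice scheduling of I and P frames. Names are assumptions.

def utc_ms_to_slice(utc_ms: int, slice_duration_ms: int) -> int:
    """Each time-slice sequence number is the UTC millisecond value / slice duration."""
    return utc_ms // slice_duration_ms

def frame_slice_numbers(i_gen_slice: int, p_gap: int, i_gap: int, period_slices: int):
    """Enumerate {slice_number: frame_type} over one scheduling horizon.

    i_gen_slice : slice sequence number at which the I frame is generated
    p_gap       : inter-P-frame slice count
    i_gap       : inter-I-frame slice count (assumed to be a multiple of p_gap)
    """
    schedule = {}
    # I frames: accumulate the inter-I-frame slice count.
    s = i_gen_slice
    while s < i_gen_slice + period_slices:
        schedule[s] = "I"
        s += i_gap
    # P frames: accumulate the inter-P-frame slice count; a slice that already
    # holds an I frame generates only the I frame.
    s = i_gen_slice + p_gap
    while s < i_gen_slice + period_slices:
        schedule.setdefault(s, "P")
        s += p_gap
    return schedule
```

Staggering different video sources then amounts to giving each source a different `i_gen_slice` so their "I" entries never land in the same time slice.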
stripe media receiving module 3: the stripe media receiving module 3 receives the packets and packet sequence numbers of the video compression frames from the video source stripe coding and sending module 2. The module computes the frame-group packet loss rate, where a frame group is an I frame plus its reference P frames. For each frame of the frame group it computes the difference between the reception times of the last packet and the first packet to obtain the frame receiving time difference; the maximum of these differences is the frame-group maximum frame receiving duration. The module sends the frame-group packet loss rate and the frame-group maximum frame receiving duration to the stripe network quality check module 5, sends the frame group's video frame packet data to the image QP monitoring module 4, and sends the video frame packets, packet sequence numbers and slice sequence numbers to the stripe media sending module 8;
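The frame-group statistics can be sketched as follows, assuming each received packet is recorded as a (frame id, packet sequence number, arrival time in ms) tuple; this record layout is an assumption for illustration:

```python
# Sketch of the statistics computed by the stripe media receiving module.

def frame_group_stats(packets, expected_total):
    """packets: list of (frame_id, packet_seq, arrival_ms) actually received.
    expected_total: total packet count of the frame group (known from sequence numbers).
    Returns (packet_loss_rate, max_frame_receiving_duration_ms)."""
    lost = expected_total - len(packets)
    loss_rate = lost / expected_total
    # Per frame: reception-time difference between its last and first packet.
    first_last = {}
    for frame_id, _seq, t in packets:
        lo, hi = first_last.get(frame_id, (t, t))
        first_last[frame_id] = (min(lo, t), max(hi, t))
    max_duration = max(hi - lo for lo, hi in first_last.values())
    return loss_rate, max_duration
```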
image QP monitoring module 4: the image QP monitoring module 4 receives the frame group's video frame packet data from the stripe media receiving module 3 and combines the packets into video frames. The module obtains the QP quantization value of every macroblock of a video frame and averages them to obtain the video frame QP quantization value; the frame-group QP quantization value is obtained from the QP quantization values of all video frames of the frame group, and is sent to the stripe reception adjustment module 6;
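The two-level QP averaging might look like this; the per-frame macroblock QP lists are an assumed input layout:

```python
# Sketch of the frame-group QP computation: average macroblock QPs per frame,
# then average across the frames of the group (I frame + reference P frames).

def frame_qp(macroblock_qps):
    """Mean QP over all macroblocks of one decoded frame."""
    return sum(macroblock_qps) / len(macroblock_qps)

def frame_group_qp(frames):
    """frames: list of per-frame macroblock QP lists for one frame group."""
    return sum(frame_qp(f) for f in frames) / len(frames)
```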
stripe network quality check module 5: the stripe network quality check module 5 receives the frame-group packet loss rate and the frame-group maximum frame receiving duration, and computes a quantization reference value for the video source. When the frame-group maximum frame receiving duration is greater than a reference duration, the quantization reference value is the square root of the sum of the square of the frame-group packet loss rate and the square of (frame-group maximum frame receiving duration minus reference duration); when the frame-group maximum frame receiving duration is smaller than the reference duration, the quantization reference value is simply the frame-group packet loss rate. The module applies a Kalman filter to the quantization reference value to generate the network quantization estimate, preventing occasional network jitter from distorting the network quality judgment, and sends the network quantization estimate to the stripe reception adjustment module 6;
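A sketch of the quantization reference value and its smoothing follows; the patent names a Kalman filter without giving parameters, so a scalar constant-state filter with illustrative noise values is assumed here:

```python
import math

def quantized_reference(loss_rate, max_frame_ms, reference_ms):
    """sqrt(loss^2 + (duration - reference)^2) above the reference duration,
    plain packet loss rate below it, as described in the text."""
    if max_frame_ms > reference_ms:
        return math.sqrt(loss_rate ** 2 + (max_frame_ms - reference_ms) ** 2)
    return loss_rate

class ScalarKalman:
    """1-D Kalman filter: smooths the reference value into the network
    quantization estimate so isolated jitter does not flip the quality judgment.
    Noise parameters q and r are illustrative assumptions."""
    def __init__(self, q=1e-3, r=0.5):
        self.q, self.r = q, r           # process / measurement noise
        self.x, self.p = 0.0, 1.0       # state estimate and its variance
    def update(self, z):
        self.p += self.q                # predict
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct
        self.p *= (1 - k)
        return self.x
```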
stripe reception adjustment module 6: the stripe reception adjustment module 6 receives the network quantization estimate from the stripe network quality check module 5 and the frame-group QP quantization value from the image QP monitoring module 4. When the frame-group QP quantization value is greater than its upper threshold while the network quantization estimate from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is smaller than its lower threshold, the video source is processing much detail (so the actual image quality is poor) while the network still has headroom; the module therefore lowers the video source's maximum image-coding quantization QP by one adjustment value to improve the image quality. The module then redistributes the I-frame stripes of all adjusted video sources. Time slicing divides UTC timestamps by the slice duration, each time-slice sequence number being the UTC timestamp in milliseconds divided by the slice duration. Taking the current video frame adjustment period, the module processes all video sources in turn, determines for each video source the I-frame generation slice sequence number, the inter-I-frame slice count, the inter-P-frame slice count and the maximum image-coding quantization QP, and sends these to the video source stripe coding and sending module 2; it also sends the I-frame generation slice sequence numbers, inter-I-frame slice counts and inter-P-frame slice counts of all video sources to the stripe transfer adjustment module 7;
stripe transfer adjustment module 7: the stripe transfer adjustment module 7 receives, from the stripe reception adjustment module 6, the I-frame generation slice sequence number, inter-I-frame slice count, inter-P-frame slice count and maximum image-coding quantization QP of every video source, and generates the adjustment period and the predicted traffic of each slice in the period by the method of the stripe reception adjustment module 6. When one video source is viewed by several display ends, the predicted traffic of the slice whose sequence number is the I-frame generation slice sequence number is increased by the video source's predicted I-frame traffic; all P-frame slices are found by the method of the stripe reception adjustment module 6 and their predicted traffic is increased by the video source's predicted P-frame traffic. Initially, the I-frame transfer slice sequence number used toward a display end equals the I-frame generation slice sequence number: the module preferentially reuses the video source's own stripe slices for copy streams to keep transmission delay low. When the module receives the network quantization estimate from the display end network quality check module 9 and judges that it is greater than its threshold, reception of the copy stream is considered poor and the copy stream is redistributed: within the adjustment period, the video source's predicted I-frame traffic is subtracted from the predicted traffic of its I-frame slice, and its predicted P-frame traffic from the predicted traffic of its P-frame slices; the module then searches the adjustment period for the slice with the lowest predicted traffic for the video source's copy stream, takes its sequence number as the copy stream's I-frame transfer slice sequence number, adds the video source's predicted I-frame traffic to that slice, finds all the copy stream's P-frame slices by the method of the stripe reception adjustment module 6, and adds the video source's predicted P-frame traffic to each of them. The module sends, for each video source and display end, the I-frame generation slice sequence number, the I-frame transfer slice sequence number, the inter-I-frame slice count and the inter-P-frame slice count to the stripe media sending module 8;
stripe media sending module 8: the stripe media sending module 8 receives the video frame packets, packet sequence numbers and slice sequence numbers from the stripe media receiving module 3, and receives, from the stripe transfer adjustment module 7, the I-frame generation slice sequence number from the video source to the display end, the I-frame transfer slice sequence number, the inter-I-frame slice count and the inter-P-frame slice count. The module computes a transfer offset value and sends the video frame packets and packet sequence numbers to the display end media receiving module 10;
display end network quality check module 9: the display end network quality check module 9 receives the frame-group packet loss rate and the frame-group maximum frame receiving duration from the display end media receiving module 10, and computes a quantization reference value for the relay leg. When the frame-group maximum frame receiving duration is greater than the reference duration, the quantization reference value is the square root of the sum of the square of the frame-group packet loss rate and the square of (frame-group maximum frame receiving duration minus reference duration); when it is smaller than the reference duration, the quantization reference value is the frame-group packet loss rate. The module applies a Kalman filter to the quantization reference value to generate the network quantization estimate, preventing occasional network jitter from distorting the network quality judgment, and sends the network quantization estimate to the stripe transfer adjustment module 7;
display end media receiving module 10: the display end media receiving module 10 receives the video frame packets and packet sequence numbers from the stripe media sending module 8. It obtains the total packet count of the frame group by counting its received packets, determines the lost packet count from the missing packet sequence numbers of the frame group, and computes the frame-group packet loss rate as the lost packet count divided by the total packet count, the frame group being an I frame plus its reference P frames. For each frame of the frame group the module computes the difference between the reception times of its last and first packet to obtain the frame receiving time difference; the maximum of these differences is the frame-group maximum frame receiving duration. The module sends the frame-group packet loss rate and the frame-group maximum frame receiving duration to the display end network quality check module 9, and sends the video frame packets to the display end decoding and rendering module 11;
display end decoding and rendering module 11: the display end decoding and rendering module 11 receives the video frame packets from the display end media receiving module 10, combines them into video frames, and decodes and renders them for display.
Preferably, the video stripe adjustment information comprises time-slicing information: time slicing divides UTC timestamps by a slice duration, each time-slice sequence number being the UTC timestamp in milliseconds divided by the slice duration. The video stripe adjustment information comprises the slice duration, the I-frame generation slice sequence number, the inter-P-frame slice count, the inter-I-frame slice count and the maximum image-coding quantization QP.
Preferably, the stripe media receiving module 3 obtains the total packet count of a frame group by counting its received packets, determines the lost packet count from the missing packet sequence numbers of the frame group, and computes the frame-group packet loss rate as the lost packet count divided by the total packet count.
Preferably, the larger the frame-group packet loss rate, the worse the network; the loss may come from network interference or from network overload. A large frame-group maximum frame receiving duration indicates network overload, while a small one indicates that the network is not overloaded. The two indicators are combined: the larger the frame-group packet loss rate and the frame-group maximum frame receiving duration, the worse the network quality is judged to be.
Preferably, when the network quantization estimate is greater than the upper threshold, the stripe reception adjustment module 6 adds one adjustment value to the maximum image-coding quantization QP of the video source; the larger the maximum QP, the lower the image quality and the smaller the code stream. Thus, when the network from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is of poor quality, raising the maximum image-coding quantization QP reduces the network bandwidth used and reduces video stalling.
Preferably, the adjustment period start sequence number equals the time-slice sequence number at the current time divided by the inter-I-frame slice count, rounded down and then multiplied by the inter-I-frame slice count; when this start sequence number is smaller than the time-slice sequence number at the current time, the inter-I-frame slice count is added to it. The adjustment period end sequence number is the start sequence number plus the inter-I-frame slice count minus 1.
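The period-boundary arithmetic above can be sketched as:

```python
# Sketch of the adjustment-period boundary computation.

def adjustment_period(current_slice: int, i_gap: int):
    """Returns (start, end) slice sequence numbers of the adjustment period.
    start = floor(current / i_gap) * i_gap, pushed forward by one inter-I-frame
    slice count when it falls before the current slice."""
    start = (current_slice // i_gap) * i_gap
    if start < current_slice:
        start += i_gap
    end = start + i_gap - 1
    return start, end
```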
Preferably, the stripe reception adjustment module 6 searches the adjustment period for the slice with the lowest predicted traffic for each video source and assigns that slice's sequence number as the video source's I-frame generation slice sequence number; the slice's new predicted traffic equals its current predicted traffic plus the video source's predicted I-frame traffic, where the predicted I-frame traffic is taken as 10 × (50 − the video source's maximum QP) and the predicted P-frame traffic as 1 × (50 − the video source's maximum QP). Starting from the I-frame generation slice sequence number, the module computes all P-frame slices in the adjustment period by repeatedly adding the inter-P-frame slice count (the first P-frame slice sequence number being the I-frame generation slice sequence number plus the inter-P-frame slice count), and adds the video source's predicted P-frame traffic to the predicted traffic of each such slice, stopping when the next P-frame slice sequence number exceeds the end of the adjustment period.
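A sketch of this lowest-traffic allocation; the traffic weights (10 × (50 − max QP) for I frames, 1 × (50 − max QP) for P frames) follow one reading of the text above and are an assumption:

```python
# Sketch of the traffic-balancing allocation: pick the adjustment-period slice
# with the lowest predicted traffic for the I frame, then charge the P-frame
# slices that follow it at inter-P-frame intervals.

def allocate_source(traffic, start, end, p_gap, qp_max):
    """traffic: dict slice -> predicted traffic, mutated in place.
    Returns the chosen I-frame generation slice sequence number."""
    i_traffic = 10 * (50 - qp_max)   # predicted I-frame traffic (assumed weight)
    p_traffic = 1 * (50 - qp_max)    # predicted P-frame traffic (assumed weight)
    slices = range(start, end + 1)
    i_slice = min(slices, key=lambda s: traffic.get(s, 0))
    traffic[i_slice] = traffic.get(i_slice, 0) + i_traffic
    s = i_slice + p_gap
    while s <= end:                  # all P-frame slices in the period
        traffic[s] = traffic.get(s, 0) + p_traffic
        s += p_gap
    return i_slice
```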
Preferably, the transfer offset value is the I-frame transfer slice sequence number minus the I-frame generation slice sequence number; when the transfer offset value is smaller than zero, the inter-I-frame slice count is added to it. A network delay offset value is then added to obtain the final transfer offset value. Time slicing divides UTC timestamps by the slice duration, each time-slice sequence number being the UTC timestamp in milliseconds divided by the slice duration. For each time slice, the module judges whether it is a transmission slice of the copy stream of a video source and display end: a slice is a transmission slice when its sequence number equals the I-frame transfer slice sequence number plus N times the inter-P-frame slice count, N being an integer. During a transmission slice of the copy stream, a video frame packet is sent when its slice sequence number plus the transfer offset value is less than or equal to the current slice sequence number.
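The offset and transmission-slice checks can be sketched as follows; treating N as non-negative is an assumption for illustration:

```python
# Sketch of the transfer-offset logic: offset between generation and transfer
# slice, wrapped into the inter-I-frame range, plus a network delay allowance.

def transfer_offset(i_transfer_slice, i_gen_slice, i_gap, delay_offset=0):
    off = i_transfer_slice - i_gen_slice
    if off < 0:                      # wrap negative offsets by one I-frame period
        off += i_gap
    return off + delay_offset

def is_transmission_slice(slice_no, i_transfer_slice, p_gap):
    """A slice carries the copy stream when slice_no equals the I-frame transfer
    slice sequence number plus an integer multiple of the inter-P-frame count."""
    return slice_no >= i_transfer_slice and (slice_no - i_transfer_slice) % p_gap == 0

def should_send(packet_slice, offset, current_slice):
    """A packet is sent once its slice number plus the offset has been reached."""
    return packet_slice + offset <= current_slice
```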
The invention also provides a stripe-I-frame-based frame equalization coding and decoding method, which comprises the following steps:
S2, the video source stripe coding and sending module 2 receives the video stripe adjustment information from the stripe reception adjustment module 6 and adjusts the video coding accordingly:
s21, a video source stripe code sending module 2 receives YUV video frame data of a video source frame generating module 1;
s22, the video source stripe code sending module 2 receives video stripe adjustment information of the stripe receiving adjustment module 6, and carries out code compression on YUV video frame data;
S23, the video source stripe coding and sending module 2 computes the video frame generation time interval as the inter-P-frame slice count multiplied by the slice duration, and sends the video frame generation time interval to the video source frame generation module 1;
S24, the video source stripe coding and sending module 2 computes the UTC timestamp at which the I frame is generated as the I-frame generation slice sequence number multiplied by the slice duration; the slice sequence number of the next P frame is the I-frame generation slice sequence number plus the inter-P-frame slice count, and by accumulating the inter-P-frame slice count the module computes the slice sequence numbers of all P frames in order;
S25, the video source stripe coding and sending module 2 obtains the next I-frame slice sequence number as the I-frame generation slice sequence number plus the inter-I-frame slice count, and obtains the slice sequence numbers of all I frames by accumulating the inter-I-frame slice count; when a generated I-frame slice sequence number is the same as a P-frame slice sequence number, that time slice generates only the I frame;
S26, the video source stripe coding and sending module 2 encodes the YUV video frame data whose UTC timestamps correspond to I frames and P frames into video compression frames according to the maximum image-coding quantization QP, packetizes the compressed frames into fixed-size packets with marked sequence numbers, and sends the packets, packet sequence numbers and slice sequence numbers to the stripe media receiving module 3;
S3, the stripe media receiving module 3 receives video frame processing,
s31, the stripe media receiving module 3 receives the packetization and the packetization sequence number of the video compression frame of the video source stripe code transmitting module 2;
S32, the stripe media receiving module 3 obtains the total packet count of the frame group by counting its received packets and determines the lost packet count from the missing packet sequence numbers of the frame group; the frame-group packet loss rate is the lost packet count divided by the total packet count, the frame group being an I frame plus its reference P frames;
S33, the stripe media receiving module 3 computes, for each frame of the frame group, the difference between the reception times of its last and first packet to obtain the frame receiving time difference; the maximum of these differences is the frame-group maximum frame receiving duration;
s34, the stripe media receiving module 3 sends the frame group packet loss rate and the frame group maximum frame receiving duration to the stripe network quality checking module 5;
s35, the stripe media receiving module 3 sends the video frame sub-packets of the frame group to the image QP monitoring module 4, and the image QP monitoring module 4 combines the video frame sub-packets of the frame group into video frames;
s36, the stripe media receiving module 3 sends the video frame sub-package, the sub-package sequence number and the slice sequence number to the stripe media sending module 8;
s4, the image QP monitoring module 4 generates a frame group QP value,
S41, the image QP monitoring module 4 receives video frame sub-package data of the frame group of the stripe media receiving module 3, and the image QP monitoring module 4 combines the video frame sub-package of the frame group into a video frame;
s42, the image QP monitoring module 4 acquires the QP quantized values of all macroblocks of the video frame and averages them to obtain the QP quantized value of the video frame;
s43, the image QP monitoring module 4 obtains QP quantized values of the frame group from QP quantized values of all video frames of the frame group;
s44, the image QP monitoring module 4 sends the QP quantized value of the frame group to the stripe receiving and adjusting module 6;
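Steps S42–S43 amount to two averaging passes. A minimal sketch (the aggregation of per-frame QP values into a frame group QP is not fixed by the patent; a plain average is assumed here):

```python
def frame_qp(macroblock_qps):
    """S42: average the QP quantized values of all macroblocks of a video frame."""
    return sum(macroblock_qps) / len(macroblock_qps)

def frame_group_qp(frame_qps):
    """S43: derive the frame group QP quantized value from the QP quantized
    values of all video frames of the frame group (average assumed)."""
    return sum(frame_qps) / len(frame_qps)
```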
s5, the stripe network quality inspection module 5 generates a network quantization estimated value,
s51, the stripe network quality inspection module 5 receives the frame group packet loss rate and the frame group maximum frame receiving duration;
s52, the stripe network quality inspection module 5 calculates the quantization reference value of the video source;
s53, the stripe network quality inspection module 5 applies a Kalman filter to the quantization reference value to generate the network quantization estimated value, so that a small amount of network jitter does not distort the judgment of network quality;
s54, the stripe network quality inspection module 5 sends the network quantization estimated value to the stripe receiving adjustment module 6;
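Steps S52–S53 can be illustrated as follows. The quantization reference formula is the one spelled out later in the specification (square root of the squared packet loss rate plus the squared excess of the receiving duration over the reference value); the Kalman filter noise constants q and r are assumed tuning values, not given in the patent:

```python
import math

def quant_reference(loss_rate, max_recv_ms, ref_ms):
    """S52: combine the frame group packet loss rate with the excess of the
    frame group maximum frame receiving duration over a reference value."""
    if max_recv_ms > ref_ms:
        return math.sqrt(loss_rate ** 2 + (max_recv_ms - ref_ms) ** 2)
    return loss_rate

class Kalman1D:
    """S53: minimal scalar Kalman filter smoothing the quantization reference
    value into a network quantization estimated value."""
    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r        # process / measurement noise (assumed)
        self.x, self.p = 0.0, 1.0    # state estimate and its variance
    def update(self, z):
        self.p += self.q                   # predict
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= (1.0 - k)
        return self.x                      # network quantization estimated value
```

The filtered estimate reacts slowly to isolated jitter spikes but tracks a sustained change in network quality.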
s6, the stripe receiving adjustment module 6 combines the network quantization estimated value and the QP quantized value of the frame group to adjust the coding of the video source,
S7, the stripe transfer adjustment module 7 adjusts the transfer sending strategy,
s8, the stripe media sending module 8 sends video frames according to the strategy of the stripe transfer adjusting module 7,
s81, the stripe media sending module 8 receives the video frame sub-package, the sub-package sequence number and the fragment sequence number of the stripe media receiving module 3;
s82, the stripe media sending module 8 receives, from the stripe transfer adjustment module 7, the I-frame generated slice sequence number, the I-frame transfer slice sequence number, the I inter-frame slice number and the P inter-frame slice number of the stream from the video source to the display end;
s83, the stripe media sending module 8 calculates a transfer deviation value: the transfer deviation value is the I-frame transfer slice sequence number minus the I-frame generated slice sequence number; when the transfer deviation value is smaller than zero, the I inter-frame slice number is added to it; the network delay deviation value is then added to obtain the final transfer deviation value;
s84, the UTC timestamp is divided into time slices using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; for each time slice it is judged whether it is a sending slice of the transfer stream from the video source to the display end, i.e. whether its sequence number equals the I-frame transfer slice sequence number + N × the P inter-frame slice number, where N is an integer;
s85, within a sending slice of the transfer stream, the video frame sub-packets of the transfer stream whose slice sequence number plus the transfer deviation value is less than or equal to the current slice sequence number are sent;
S86, the stripe media sending module 8 sends the video frame sub-packets and the sub-packet sequence numbers to the display end media receiving module 10;
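The slice arithmetic of steps S83–S85 can be sketched as follows (illustrative Python; the function names and the integer slice arithmetic are assumptions based on the step descriptions):

```python
def transfer_deviation(i_transfer_seq, i_gen_seq, i_inter_slices, net_delay_dev):
    """S83: transfer slice sequence number minus generated slice sequence number,
    wrapped by the I inter-frame slice number when negative, plus the network
    delay deviation value."""
    dev = i_transfer_seq - i_gen_seq
    if dev < 0:
        dev += i_inter_slices
    return dev + net_delay_dev

def is_sending_slice(slice_seq, i_transfer_seq, p_inter_slices):
    """S84: a time slice is a sending slice of the transfer stream when its
    sequence number equals the I-frame transfer slice sequence number plus an
    integer multiple of the P inter-frame slice number."""
    return (slice_seq - i_transfer_seq) % p_inter_slices == 0

def should_send(packet_slice_seq, dev, current_slice_seq):
    """S85: within a sending slice, send sub-packets whose slice sequence
    number plus the transfer deviation value is <= the current slice number."""
    return packet_slice_seq + dev <= current_slice_seq
```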
s9, the display end media receiving module 10 processes the video frame sub-packets and sub-packet sequence numbers received from the stripe media sending module 8,
s10, a network quality inspection module 9 at a display end generates a network quantitative estimated value;
s11, the display end decoding and rendering module 11 receives the video frame subpackets of the display end media receiving module 10, combines the video frames, and decodes and renders the display.
Preferably, the step S6 includes the steps of:
s61, the stripe receiving adjustment module 6 receives the network quantization estimated value of the stripe network quality inspection module 5;
s62, the stripe receiving adjustment module 6 receives the QP quantized value of the frame group from the image QP monitoring module 4;
s63, when the network quantization estimated value is larger than the upper limit threshold, the stripe receiving adjustment module 6 adds one adjustment value to the maximum value of the image coding quantization QP of the video source; the larger the QP maximum, the lower the image quality, so when the network quality from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is poor, raising the QP maximum reduces the network bandwidth and reduces video stalling;
S64, when the stripe receiving adjustment module 6 judges that the QP quantized value of the frame group is greater than the threshold and that the network quantization estimated value from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is smaller than the lower limit threshold, it reduces the maximum value of the image coding quantization QP of the video source by one adjustment value; when the video source contains much detail, the QP quantized value of the frame group becomes too high and the actual image quality is poor, so if the network has headroom the image quality of the video source is improved by reducing the QP maximum;
s65, the stripe receiving adjustment module 6 redistributes the I-frame stripes of all adjusted video sources; the module divides the UTC timestamp into time slices using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration;
s66, the stripe receiving adjustment module 6 takes the current video frame adjustment period: the start sequence number of the adjustment period equals the time slice sequence number of the current moment divided by the I inter-frame slice number, rounded down and multiplied by the I inter-frame slice number; when this start sequence number is smaller than the time slice sequence number of the current moment, the I inter-frame slice number is added; the end sequence number of the adjustment period is the start sequence number plus the I inter-frame slice number;
S67, the stripe receiving adjustment module 6 processes all video sources in turn: for each video source it searches the adjustment period for the slice with the lowest traffic prediction value and assigns that slice sequence number to the video source as the I-frame generated slice sequence number;
s68, the stripe receiving adjustment module 6 sends, for each video source in turn, the I-frame generated slice sequence number, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value to the video source stripe coding and sending module 2;
s69, the stripe receiving adjustment module 6 sends the I-frame generated slice sequence numbers, the I inter-frame slice numbers, the P inter-frame slice numbers and the image coding quantization QP maximum values of all video sources to the stripe transfer adjustment module 7.
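The adjustment period computation of step S66 reduces to rounding the current time slice sequence number up to a multiple of the I inter-frame slice number (a sketch; the function name is an assumption):

```python
def adjustment_period(current_slice_seq, i_inter_slices):
    """S66: the start sequence number is the current time slice sequence number
    rounded down to a multiple of the I inter-frame slice number, advanced by
    one period when it falls before the current moment; the end sequence number
    is the start plus the I inter-frame slice number."""
    start = (current_slice_seq // i_inter_slices) * i_inter_slices
    if start < current_slice_seq:
        start += i_inter_slices
    end = start + i_inter_slices
    return start, end
```

The resulting period always begins at or after the current moment, so newly assigned I-frame slices take effect from the next I-frame cycle.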
Preferably, the step S7 includes the steps of:
s71, the stripe transfer adjustment module 7 receives the I-frame generated slice sequence numbers, the I inter-frame slice numbers, the P inter-frame slice numbers and the image coding quantization QP maximum values of all video sources from the stripe receiving adjustment module 6, and generates the adjustment period and the traffic prediction value of each slice of the adjustment period by the same method as the stripe receiving adjustment module 6;
s72, when a video source is viewed by several display ends, the traffic prediction value of the slice bearing the I-frame generated slice sequence number is increased by the I-frame traffic prediction value of the video source; all P-frame slices are found by the method of the stripe receiving adjustment module 6, and the traffic prediction value of each P-frame slice is increased by the P-frame traffic prediction value of the video source; the I-frame transfer slice sequence number of the transfer stream of the video source to a display end is the I-frame generated slice sequence number;
S73, the stripe transfer adjustment module 7 preferentially reuses the stripe slice of the video source for a copy stream, so as to keep the video transmission delay low; when the stripe transfer adjustment module 7 receives the network quantization estimated value of the display end network quality inspection module 9 and judges that it is larger than the threshold, the copy stream is considered to be received poorly and is readjusted;
s74, the stripe receiving adjustment module 6 subtracts the I-frame traffic prediction value of the video source from the traffic prediction value of the I-frame slice of the video source, and subtracts the P-frame traffic prediction value of the video source from the traffic prediction values of the P-frame slices of the video source;
s75, the stripe transfer adjustment module 7 searches the adjustment period for the slice with the lowest traffic prediction value for the copy stream of the video source and takes that slice sequence number as the I-frame transfer slice sequence number of the copy stream; the traffic prediction value of that slice is increased by the I-frame traffic prediction value of the video source;
s76, the stripe transfer adjustment module 7 finds all P-frame slices by the method of the stripe receiving adjustment module 6, and the traffic prediction value of each P-frame slice of the copy stream is increased by the P-frame traffic prediction value of the video source;
s77, the stripe transfer adjustment module 7 sends the I-frame generated slice sequence number, the I-frame transfer slice sequence number, the I inter-frame slice number and the P inter-frame slice number of the stream from the video source to the display end to the stripe media sending module 8.
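The lowest-traffic slice search of step S75 can be sketched as follows (illustrative; modeling the traffic table as a dictionary from slice sequence number to predicted traffic is an assumption):

```python
def assign_copy_stream(traffic, start, end, i_traffic):
    """S75: within the adjustment period [start, end), pick the slice with the
    lowest traffic prediction value as the I-frame transfer slice of the copy
    stream, and add the I-frame traffic prediction value of the video source
    to that slice.
    traffic: dict slice_seq -> predicted traffic (0 when absent)."""
    best = min(range(start, end), key=lambda s: traffic.get(s, 0))
    traffic[best] = traffic.get(best, 0) + i_traffic
    return best  # I-frame transfer slice sequence number of the copy stream
```

Because each assignment immediately raises the chosen slice's predicted traffic, successive copy streams of the same video source land on different slices, which is what staggers their I-frame transmissions.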
Preferably, the step S9 includes the steps of:
s91, the display end media receiving module 10 obtains the total number of frame group sub-packets by counting the sub-packets of the frame group, and determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers of the frame group; the frame group packet loss rate is the number of lost sub-packets divided by the total number of frame group sub-packets, where a frame group consists of an I frame and its reference P frames;
s92, the display end media receiving module 10 calculates, for each frame of the frame group, the receiving time difference between the last sub-packet and the first sub-packet to obtain the frame receiving time difference; the maximum of these differences is the frame group maximum frame receiving duration;
s93, the display end media receiving module 10 sends the frame group packet loss rate and the frame group maximum frame receiving duration to the display end network quality checking module 9;
s94, the display end media receiving module 10 sends the video frame packets to the display end decoding rendering module 11.
Preferably, the step S10 includes the steps of:
s101, the display end network quality inspection module 9 receives the frame group packet loss rate and the frame group maximum frame receiving duration from the display end media receiving module 10 and calculates the quantization reference value of the transfer stream: when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (the frame group maximum frame receiving duration minus the reference value); when the frame group maximum frame receiving duration is smaller than the reference value, the quantization reference value is the frame group packet loss rate;
S102, the display end network quality inspection module 9 applies a Kalman filter to the quantization reference value to generate the network quantization estimated value, so that a small amount of network jitter does not distort the judgment of network quality;
s103, the display end network quality inspection module 9 sends the network quantization estimated value to the stripe transfer adjustment module 7.
Preferably, the video slice adjustment information in S1 adjusts slice information by time slices; the time slices divide the UTC timestamp using the slice duration, and each time slice sequence number is the millisecond value of the UTC timestamp divided by the slice duration. The video slice adjustment information includes the slice duration, the I-frame generated slice sequence number, the P inter-frame slice number, the I inter-frame slice number and the image coding quantization QP maximum value.
Preferably, in S52, when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (the frame group maximum frame receiving duration minus the reference value); when the frame group maximum frame receiving duration is smaller than the reference value, the quantization reference value is the frame group packet loss rate.
Preferably, in S67, the initial traffic prediction value of each slice of the adjustment period is 0; when the I frame of a video source is assigned a generated slice sequence number, the new traffic prediction value of the corresponding slice equals its current traffic prediction value plus the I-frame traffic prediction value of the video source. The I-frame traffic prediction value of the video source is 10 × (50 − the QP maximum of the video source), and the P-frame traffic prediction value of the video source is 50 − the QP maximum of the video source, the assumed ratio of I-frame to P-frame code stream being 10:1. All P-frame slices within the adjustment period are calculated from the I-frame generated slice sequence number and the P inter-frame slice number of the video source: the I-frame generated slice sequence number plus the P inter-frame slice number gives the first P-frame slice sequence number, and each P-frame slice sequence number plus the P inter-frame slice number gives the next; when the next P-frame slice sequence number is greater than the end sequence number of the adjustment period, the I inter-frame slice number is subtracted from it; the search stops when the slice sequence number returns to the I-frame generated slice sequence number. The traffic prediction value of each P-frame slice found equals its current traffic prediction value plus the P-frame traffic prediction value of the video source.
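The traffic prediction values and the P-frame slice walk of S67 can be sketched as follows (illustrative Python; the 10:1 I/P code-stream ratio and the stopping rule follow the specification, while the function names and return conventions are assumptions):

```python
def traffic_predictions(qp_max):
    """S67: I-frame traffic prediction = 10 * (50 - QP max),
    P-frame traffic prediction = 50 - QP max (assumed 10:1 I/P ratio)."""
    return 10 * (50 - qp_max), 50 - qp_max

def p_frame_slices(i_gen_seq, p_inter, i_inter, period_end):
    """Enumerate all P-frame slices of the adjustment period: step forward by
    the P inter-frame slice number from the I-frame generated slice sequence
    number, wrap back by the I inter-frame slice number past the period end,
    and stop when the walk returns to the I-frame slice."""
    slices = []
    seq = i_gen_seq + p_inter
    while seq != i_gen_seq:
        if seq > period_end:
            seq -= i_inter   # wrap back into the adjustment period
            continue
        slices.append(seq)
        seq += p_inter
    return slices
```

For example, with an I frame at slice 32, a P inter-frame slice number of 2, an I inter-frame slice number of 10 and a period ending at 40, the P-frame slices are 34, 36, 38 and 40.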
Compared with the prior art, the invention has the beneficial effects that:
By adopting the technical scheme of the invention, time is divided into slices and the video sources transmit in different slices, so that network peaks are avoided; at the same time, when poor network quality is detected, the network bandwidth is reduced by raising the maximum value of the image coding quantization QP, which reduces video stalling.
When the actual image quality of a frame group is poor because the video source contains much detail, the network quantization estimated value is used to judge that the network has headroom, and the image quality of the video source is improved by reducing the maximum value of the image coding quantization QP.
When the display end judges that a video copy stream is received poorly, the stripe slices of the copy stream are redistributed, so that the copy streams of the same video source send their I frames at staggered times, network peaks are avoided, and the transmission quality is improved.
Drawings
FIG. 1 is a functional framework schematic of the present invention;
the numbers in the figures are marked: the video source frame generation module 1, the video source stripe code sending module 2, the stripe media receiving module 3, the image QP monitoring module 4, the stripe network quality checking module 5, the stripe receiving adjustment module 6, the stripe transfer adjustment module 7, the stripe media sending module 8, the display end network quality checking module 9, the display end media receiving module 10 and the display end decoding rendering module 11.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides a technical solution: a stripe I-frame based equalization codec system comprising:
the video source frame generation module 1, the video source stripe code transmission module 2, the stripe media receiving module 3, the image QP monitoring module 4, the stripe network quality check module 5, the stripe receiving adjustment module 6, the stripe transfer adjustment module 7, the stripe media transmission module 8, the display end network quality check module 9, the display end media receiving module 10 and the display end decoding rendering module 11;
video source frame generation module 1: the video source frame generation module 1 receives a video frame generation time interval of the video source stripe code transmission module 2, and generates YUV video frame data according to the time interval;
Video source stripe coding and sending module 2: the module receives the YUV video frame data of the video source frame generation module 1 and the video slice adjustment information of the stripe receiving adjustment module 6, and codes and compresses the YUV video frame data. The module calculates the P inter-frame slice number multiplied by the slice duration to obtain the video frame generation time interval, which it sends to the video source frame generation module 1. The module calculates the I-frame generation UTC timestamp value as the I-frame generated slice sequence number multiplied by the slice duration; the timestamp value of the next P frame is the I-frame timestamp value plus the P inter-frame slice number multiplied by the slice duration, and by accumulating this interval the module sequentially calculates the timestamp values of all P frames; the timestamp value of the next I frame is the previous I-frame timestamp value plus the I inter-frame slice number multiplied by the slice duration, and by accumulating this interval the module obtains the timestamp values of all I frames; when a generated I-frame timestamp coincides with a P-frame timestamp, only the I frame is generated. The module compresses each video frame according to the image coding quantization QP maximum value, packetizes the compressed frame, and sends the video frame sub-packets, sub-packet sequence numbers and slice sequence numbers to the stripe media receiving module 3;
Stripe media reception module 3: the stripe media receiving module 3 receives the packet and the packet sequence number of the video compression frame of the video source stripe code sending module 2, the stripe media receiving module 3 calculates the packet loss rate of the frame group, the frame group is an I frame and a reference P frame, the stripe media receiving module 3 calculates the receiving time difference of the last packet and the first packet of each frame of the frame group, the frame receiving time difference is obtained, the maximum value is the maximum frame receiving time length of the frame group, the stripe media receiving module 3 sends the packet loss rate of the frame group and the maximum frame receiving time length of the frame group to the stripe network quality checking module 5, the stripe media receiving module 3 sends the video frame packet data of the frame group to the image QP monitoring module 4, and the stripe media receiving module 3 sends the packet, the packet sequence number and the slice sequence number of the video frame to the stripe media sending module 8;
image QP monitoring module 4: the image QP monitoring module 4 receives the video frame sub-package data of the frame group of the stripe media receiving module 3, and the image QP monitoring module 4 combines the video frame sub-package of the frame group into a video frame; the image QP monitoring module 4 obtains QP quantized values for all macro blocks of the video frame, averages the QP quantized values to obtain video frame QP quantized values, the image QP monitoring module 4 obtains QP quantized values of the frame group from QP quantized values of all video frames of the frame group, and the image QP monitoring module 4 sends the QP quantized values of the frame group to the stripe receiving and adjusting module 6;
Stripe network quality inspection module 5: the module receives the frame group packet loss rate and the frame group maximum frame receiving duration and calculates the quantization reference value of the video source: when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (the frame group maximum frame receiving duration minus the reference value); when the frame group maximum frame receiving duration is smaller than the reference value, the quantization reference value is the frame group packet loss rate. The module applies a Kalman filter to the quantization reference value to generate the network quantization estimated value, so that a small amount of network jitter does not distort the judgment of network quality, and sends the network quantization estimated value to the stripe receiving adjustment module 6;
Stripe receiving adjustment module 6: the module receives the network quantization estimated value of the stripe network quality inspection module 5 and the QP quantized value of the frame group from the image QP monitoring module 4. When the module judges that the QP quantized value of the frame group is greater than the threshold and the network quantization estimated value from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is smaller than the lower limit threshold, it reduces the image coding quantization QP maximum value of the video source by one adjustment value: when the video source contains much detail, the QP quantized value of the frame group becomes too high and the actual image quality is poor, and if the network has headroom the image quality of the video source is improved by reducing the QP maximum. The module performs I-frame stripe allocation for all adjusted video sources: it divides the UTC timestamp into time slices using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; it takes the current video frame adjustment period and processes all video sources in turn; it sequentially sends the I-frame generated slice sequence number, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value of each video source to the video source stripe coding and sending module 2, and sends the I-frame generated slice sequence numbers, the I inter-frame slice numbers and the P inter-frame slice numbers of all video sources to the stripe transfer adjustment module 7;
Stripe transfer adjustment module 7: the module receives the I-frame generated slice sequence numbers, the I inter-frame slice numbers, the P inter-frame slice numbers and the image coding quantization QP maximum values of all video sources from the stripe receiving adjustment module 6, and generates the adjustment period and the traffic prediction value of each slice by the method of the stripe receiving adjustment module 6. When a video source is viewed by several display ends, the traffic prediction value of the slice bearing the I-frame generated slice sequence number is increased by the I-frame traffic prediction value of the video source; all P-frame slices are found by the method of the stripe receiving adjustment module 6, and the traffic prediction value of each P-frame slice is increased by the P-frame traffic prediction value of the video source; the I-frame transfer slice sequence number of the transfer stream of the video source to a display end is the I-frame generated slice sequence number. The module preferentially reuses the stripe slice of the video source for a copy stream, so as to keep the video transmission delay low; when the module receives the network quantization estimated value of the display end network quality inspection module 9 and judges that it is greater than the threshold, the copy stream is considered to be received poorly and is readjusted: the stripe receiving adjustment module 6 readjusts the slices of the adjustment period by subtracting the I-frame traffic prediction value of the video source from the traffic prediction value of its I-frame slice and subtracting the P-frame traffic prediction value from the traffic prediction values of its P-frame slices; the stripe transfer adjustment module 7 then searches the adjustment period for the slice with the lowest traffic prediction value for the copy stream of the video source, takes that slice sequence number as the I-frame transfer slice sequence number of the copy stream, and increases the traffic prediction value of that slice by the I-frame traffic prediction value of the video source; the module finds all P-frame slices by the method of the stripe receiving adjustment module 6 and increases the traffic prediction value of each P-frame slice of the copy stream by the P-frame traffic prediction value of the video source; the module sends the I-frame generated slice sequence number, the I-frame transfer slice sequence number, the I inter-frame slice number and the P inter-frame slice number of the stream from the video source to the display end to the stripe media sending module 8;
Stripe media sending module 8: the module receives the video frame sub-packets, sub-packet sequence numbers and slice sequence numbers of the stripe media receiving module 3, and receives from the stripe transfer adjustment module 7 the I-frame generated slice sequence number, the I-frame transfer slice sequence number, the I inter-frame slice number and the P inter-frame slice number of the stream from the video source to the display end; the module calculates the transfer deviation value and sends the video frame sub-packets and sub-packet sequence numbers to the display end media receiving module 10;
Display end network quality inspection module 9: the module receives the frame group packet loss rate and the frame group maximum frame receiving duration of the display end media receiving module 10 and calculates the quantization reference value of the transfer stream: when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (the frame group maximum frame receiving duration minus the reference value); when the frame group maximum frame receiving duration is smaller than the reference value, the quantization reference value is the frame group packet loss rate. The module applies a Kalman filter to the quantization reference value to generate the network quantization estimated value, so that a small amount of network jitter does not distort the judgment of network quality, and sends the network quantization estimated value to the stripe transfer adjustment module 7;
Display end media receiving module 10: the module receives the video frame sub-packets and sub-packet sequence numbers of the stripe media sending module 8; it obtains the total number of frame group sub-packets by counting the sub-packets of the frame group and determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers; the frame group packet loss rate is the number of lost sub-packets divided by the total number of frame group sub-packets, where a frame group consists of an I frame and its reference P frames; the module calculates, for each frame of the frame group, the receiving time difference between the last sub-packet and the first sub-packet to obtain the frame receiving time difference, the maximum of which is the frame group maximum frame receiving duration; the module sends the frame group packet loss rate and the frame group maximum frame receiving duration to the display end network quality inspection module 9, and sends the video frame sub-packets to the display end decoding and rendering module 11;
the display end decoding rendering module 11: the display end decoding rendering module 11 receives the video frame sub-packets from the display end media receiving module 10, merges them into video frames, and decodes and renders them for display.
The video stripe adjustment information comprises stripe adjustment information organized by time slices: UTC timestamps are divided into time slices of a fixed slice duration, and each time slice sequence number is the millisecond value of the UTC timestamp divided by the slice duration. The video stripe adjustment information includes the slice duration, the I frame generation slice sequence number, the P inter-frame slice count, the I inter-frame slice count, and the image coding quantization QP maximum value.
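The time-slice numbering above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent; the function names and the millisecond units are assumptions consistent with the text.

```python
def time_slice_seq(utc_ms: int, slice_duration_ms: int) -> int:
    """Time slice sequence number: UTC timestamp in ms divided by the slice duration."""
    return utc_ms // slice_duration_ms

def slice_start_utc_ms(seq: int, slice_duration_ms: int) -> int:
    """Inverse mapping: the UTC timestamp (ms) at which a given slice begins,
    as used when a frame's generation timestamp is derived from its slice number."""
    return seq * slice_duration_ms
```

With a 40 ms slice duration, a timestamp of 1,000,000 ms falls in slice 25000, and slice 25000 begins at exactly 1,000,000 ms.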
The stripe media receiving module 3 obtains the total number of sub-packets of a frame group by counting them, determines the number of lost sub-packets by counting the missing sub-packet sequence numbers of the frame group, and computes the frame group packet loss rate as the number of lost sub-packets / the total number of sub-packets.
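A minimal sketch of these per-frame-group statistics, under the assumption that for each frame we know the expected sub-packet count (from the sequence numbers) and the arrival timestamps of the sub-packets actually received; the data layout is illustrative, not specified by the text.

```python
def frame_group_stats(frames):
    """frames: list of (expected_packets, received_times_ms) per frame of the group.
    Returns (packet loss rate, maximum frame receiving duration in ms)."""
    total = sum(expected for expected, _ in frames)
    lost = sum(expected - len(times) for expected, times in frames)
    loss_rate = lost / total
    # Per frame: receiving time difference between last and first sub-packet;
    # the group's value is the maximum over its frames.
    max_recv_ms = max(max(times) - min(times) for _, times in frames)
    return loss_rate, max_recv_ms
```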
A larger frame group packet loss rate indicates a worse network, caused either by interference-induced packet loss or by network overload. A larger frame group maximum frame receiving duration indicates network overload, while a smaller one indicates that the network is underloaded. Judging network quality therefore requires combining the frame group packet loss rate with the frame group maximum frame receiving duration.
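The combination rule stated in the text (root of the sum of squares when the receiving duration exceeds the reference, loss rate alone otherwise) can be written directly. A sketch with illustrative names; the units of the duration terms are assumed to be milliseconds.

```python
import math

def quantization_reference(loss_rate: float, max_frame_recv_ms: float,
                           reference_ms: float) -> float:
    """Quantization reference value combining the frame group packet loss rate
    with the frame group maximum frame receiving duration."""
    if max_frame_recv_ms > reference_ms:
        # Overloaded network: both terms contribute, combined as a Euclidean norm.
        return math.sqrt(loss_rate ** 2 + (max_frame_recv_ms - reference_ms) ** 2)
    # Underloaded network: the packet loss rate alone drives the reference value.
    return loss_rate
```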
When the network quantization estimate is greater than the upper threshold, the stripe reception adjustment module 6 adds an adjustment value to the image coding quantization QP maximum of the video source. The larger the QP maximum, the lower the image quality; so when the network quality from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is poor, raising the QP maximum reduces the network bandwidth used and thereby reduces video stuttering.
The adjustment period start slice sequence number is the time slice sequence number at the current moment divided by the I inter-frame slice count, rounded down and multiplied by the I inter-frame slice count; when this start slice sequence number is smaller than the current time slice sequence number, the I inter-frame slice count is added. The adjustment period end slice sequence number is the adjustment period start slice sequence number + the I inter-frame slice count - 1.
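A sketch of this period computation, with illustrative names:

```python
def adjustment_period(now_slice_seq: int, i_interval_slices: int):
    """Return (start, end) slice sequence numbers of the next adjustment period.
    start is rounded down to a multiple of the I inter-frame slice count, then
    advanced by one period if it falls before the current slice."""
    start = (now_slice_seq // i_interval_slices) * i_interval_slices
    if start < now_slice_seq:
        start += i_interval_slices
    end = start + i_interval_slices - 1
    return start, end
```

For example, with 16 slices between I frames, slice 250 maps to the period 256–271, while slice 256 (already on a period boundary) maps to 256–271 as well.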
The stripe reception adjustment module 6 searches the adjustment period for the slice with the lowest traffic prediction value for the video source and assigns that slice's sequence number to the video source as its I frame generation slice sequence number. The initial traffic prediction value of every slice in the adjustment period is 0. For the slice corresponding to the video source's I frame generation slice sequence number, the slice's new traffic prediction value equals its current traffic prediction value plus the video source's I frame traffic prediction value; the video source's I frame traffic prediction value is 10 × (50 - QP maximum of the video source) and its P frame traffic prediction value is 50 - QP maximum of the video source, the I to P frame stream ratio being 10:1. All P frame slices in the adjustment period are derived from the video source's I frame generation slice sequence number and P inter-frame slice count: the I frame generation slice sequence number + the P inter-frame slice count gives the first P frame slice sequence number, and each P frame slice sequence number + the P inter-frame slice count gives the next; when the next P frame slice sequence number exceeds the adjustment period end sequence number, the I inter-frame slice count is subtracted from it, and the walk continues until it returns to the I frame generation slice sequence number. Each P frame slice found this way has its traffic prediction value increased by the video source's P frame traffic prediction value.
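The allocation walk above can be sketched as follows. This is an illustrative reconstruction under the traffic model stated in the text (I frame = 10 × (50 - QP max), P frame = 50 - QP max); the function name, the dictionary representation of per-slice predictions, and the tie-breaking of the minimum are assumptions.

```python
def allocate_i_frame_slice(pred, start, end, p_interval, i_interval, qp_max):
    """Assign a video source's I frame to the slice with the lowest predicted
    traffic in [start, end], then add I and P frame traffic predictions.
    pred maps slice sequence number -> predicted traffic (missing key -> 0)."""
    i_pred = 10 * (50 - qp_max)   # I frame traffic prediction
    p_pred = 50 - qp_max          # P frame traffic prediction (1/10 of I)
    # The least-loaded slice in the period becomes the I frame generation slice.
    i_slice = min(range(start, end + 1), key=lambda s: pred.get(s, 0))
    pred[i_slice] = pred.get(i_slice, 0) + i_pred
    # Walk P frame slices at p_interval steps, wrapping back into the period
    # (its length equals i_interval) until the walk returns to the I slice.
    s = i_slice + p_interval
    while True:
        if s > end:
            s -= i_interval
        if s == i_slice:
            break
        pred[s] = pred.get(s, 0) + p_pred
        s += p_interval
    return i_slice, pred
```

With an empty prediction table, a period of 0–15, 4 slices between P frames and QP max 40, the I frame lands on slice 0 (prediction 100) and P frames on slices 4, 8 and 12 (prediction 10 each).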
The transfer deviation value is the I frame transfer slice sequence number minus the I frame generation slice sequence number; when the transfer deviation value is less than zero, the I inter-frame slice count is added to it, and the network delay deviation value is then added to obtain the final transfer deviation value. UTC timestamps are divided into time slices of the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration. For each time slice it is judged whether it is a sending slice of the transfer stream from the video source to the display end: a slice qualifies if its sequence number equals the I frame transfer slice sequence number + N × the P inter-frame slice count, N being an integer. When the transfer stream is in a sending slice, video frame sub-packets whose slice sequence number plus the transfer deviation value is less than or equal to the current time slice sequence number are sent.
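A sketch of the deviation and gating logic just described; function names, the packet representation, and the separate `net_delay_slices` parameter are illustrative assumptions.

```python
def transfer_deviation(i_transfer_slice: int, i_gen_slice: int,
                       i_interval: int, net_delay_slices: int = 0) -> int:
    """Final transfer deviation: transfer slice minus generation slice,
    wrapped to be non-negative, plus a network delay deviation."""
    d = i_transfer_slice - i_gen_slice
    if d < 0:
        d += i_interval
    return d + net_delay_slices

def is_sending_slice(slice_seq: int, i_transfer_slice: int, p_interval: int) -> bool:
    """A time slice carries the transfer stream iff it equals the I frame
    transfer slice number plus an integer multiple of the P inter-frame count."""
    return (slice_seq - i_transfer_slice) % p_interval == 0

def due_packets(packets, slice_seq: int, deviation: int):
    """Within a sending slice, packets whose generation slice number plus the
    transfer deviation has come due (<= current slice) are sent.
    packets is a list of (slice_number, payload) pairs."""
    return [p for p in packets if p[0] + deviation <= slice_seq]
```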
The invention also provides a frame equalization coding and decoding method based on stripe I frames, comprising the following steps:
S2, the video source stripe coding and sending module 2 encodes the video under the adjustment of the stripe reception adjustment module 6,
S21, the video source stripe coding and sending module 2 receives YUV video frame data from the video source frame generation module 1;
S22, the video source stripe coding and sending module 2 receives the video stripe adjustment information from the stripe reception adjustment module 6 and encodes and compresses the YUV video frame data;
S23, the video source stripe coding and sending module 2 multiplies the P inter-frame slice count by the slice duration to obtain the video frame generation time interval, and sends it to the video source frame generation module 1;
S24, the video source stripe coding and sending module 2 calculates the I frame generation UTC timestamp value as the I frame generation slice sequence number × the slice duration; the sequence number of the next P frame is the I frame generation slice sequence number + the P inter-frame slice count, so the module can compute the slice sequence numbers of all P frames by repeatedly adding the P inter-frame slice count;
S25, the video source stripe coding and sending module 2 obtains the next I frame slice sequence number as the I frame generation slice sequence number + the I inter-frame slice count, so it can obtain the slice sequence numbers of all I frames by repeatedly adding the I inter-frame slice count; when a generated I frame sequence number coincides with a P frame sequence number, that time slice generates only the I frame;
S26, the video source stripe coding and sending module 2 generates YUV video frame data at the I frame and P frame UTC timestamp values, encodes it into video compression frames according to the image coding quantization QP maximum, packetizes the video compression frames at a fixed size, marks sub-packet sequence numbers, and sends the sub-packets, sub-packet sequence numbers, and slice sequence numbers to the stripe media receiving module 3;
S3, the stripe media receiving module 3 receives and processes the video frames,
S31, the stripe media receiving module 3 receives the sub-packets and sub-packet sequence numbers of the video compression frames from the video source stripe coding and sending module 2;
S32, the stripe media receiving module 3 obtains the total number of sub-packets of a frame group by counting them and determines the number of lost sub-packets by counting the missing sub-packet sequence numbers; the frame group packet loss rate is the number of lost sub-packets / the total number of sub-packets, a frame group being an I frame together with the P frames that reference it;
S33, the stripe media receiving module 3 calculates, for each frame of the frame group, the receiving time difference between its last and first sub-packet; the maximum of these differences is the frame group maximum frame receiving duration;
S34, the stripe media receiving module 3 sends the frame group packet loss rate and the frame group maximum frame receiving duration to the stripe network quality inspection module 5;
S35, the stripe media receiving module 3 sends the video frame sub-packets of the frame group to the image QP monitoring module 4, which merges them into video frames;
S36, the stripe media receiving module 3 sends the video frame sub-packets, sub-packet sequence numbers, and slice sequence numbers to the stripe media sending module 8;
s4, the image QP monitoring module 4 generates a frame group QP value,
s41, the image QP monitoring module 4 receives video frame sub-package data of the frame group of the stripe media receiving module 3, and the image QP monitoring module 4 combines the video frame sub-package of the frame group into a video frame;
S42, the image QP monitoring module 4 obtains the QP quantization values of all macroblocks of the video frame and averages them to obtain the QP quantization value of the video frame;
s43, the image QP monitoring module 4 obtains QP quantized values of the frame group from QP quantized values of all video frames of the frame group;
s44, the image QP monitoring module 4 sends the QP quantized value of the frame group to the stripe receiving and adjusting module 6;
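Steps S42–S43 reduce to two averages. A minimal sketch; the text specifies a per-frame average over macroblocks, while averaging the per-frame values to obtain the frame group value is an assumption (the text only says the group value is derived from all frames' QP values).

```python
def frame_qp(macroblock_qps):
    """QP quantization value of one video frame: mean over its macroblocks."""
    return sum(macroblock_qps) / len(macroblock_qps)

def frame_group_qp(frame_qps):
    """QP quantization value of a frame group (I frame plus referencing P
    frames); averaging the per-frame values is an assumed aggregation."""
    return sum(frame_qps) / len(frame_qps)
```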
S5, the stripe network quality inspection module 5 generates the network quantization estimate,
S51, the stripe network quality inspection module 5 receives the frame group packet loss rate and the frame group maximum frame receiving duration;
S52, the stripe network quality inspection module 5 calculates the quantization reference value of the video source;
S53, the stripe network quality inspection module 5 applies a Kalman filter to the quantization reference value to generate the network quantization estimate, so that small network jitter errors do not distort the network quality judgment;
S54, the stripe network quality inspection module 5 sends the network quantization estimate to the stripe reception adjustment module 6;
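The text only says a Kalman filter smooths the noisy quantization reference into the network quantization estimate. A minimal one-dimensional sketch; the process and measurement variances `q` and `r`, and the initial state, are illustrative assumptions not given in the text.

```python
class ScalarKalman:
    """1-D Kalman filter: smooths the quantization reference value so that
    brief network jitter does not flip the quality decision."""
    def __init__(self, q: float = 1e-3, r: float = 1e-1):
        self.q, self.r = q, r        # process / measurement noise variances
        self.x, self.p = 0.0, 1.0    # state estimate and its variance

    def update(self, z: float) -> float:
        self.p += self.q                  # predict: uncertainty grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward measurement z
        self.p *= (1.0 - k)               # shrink uncertainty
        return self.x
```

Fed a steady measurement, the estimate converges to it; a single outlier moves the estimate only by the gain fraction, which is the smoothing behavior the modules rely on.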
S6, the stripe reception adjustment module 6 adjusts the coding of the video source by combining the network quantization estimate with the QP quantization value of the frame group,
S7, the stripe transfer adjustment module 7 adjusts the transfer sending strategy,
S8, the stripe media sending module 8 sends video frames according to the strategy of the stripe transfer adjustment module 7,
S81, the stripe media sending module 8 receives the video frame sub-packets, sub-packet sequence numbers, and slice sequence numbers from the stripe media receiving module 3;
S82, the stripe media sending module 8 receives, from the stripe transfer adjustment module 7, the I frame generation slice sequence number, the I frame transfer slice sequence number, the I inter-frame slice count, and the P inter-frame slice count of the stream from the video source to the display end;
S83, the stripe media sending module 8 calculates the transfer deviation value as the I frame transfer slice sequence number minus the I frame generation slice sequence number; when the transfer deviation value is less than zero, the I inter-frame slice count is added to it, and the network delay deviation value is then added to obtain the final transfer deviation value;
S84, UTC timestamps are divided into time slices of the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; for each time slice it is judged whether it is a sending slice of the transfer stream from the video source to the display end, a slice qualifying if its sequence number equals the I frame transfer slice sequence number + N × the P inter-frame slice count, N being an integer;
S85, when the transfer stream is in a sending slice, video frame sub-packets of the transfer stream whose slice sequence number plus the transfer deviation value is less than or equal to the current time slice sequence number are sent;
S86, the stripe media sending module 8 sends the video frame sub-packets and sub-packet sequence numbers to the display end media receiving module 10;
S9, the display end media receiving module 10 receives and processes the video frame sub-packets and sub-packet sequence numbers from the stripe media sending module 8,
S10, the display end network quality inspection module 9 generates the network quantization estimate;
S11, the display end decoding rendering module 11 receives the video frame sub-packets from the display end media receiving module 10, merges them into video frames, and decodes and renders them for display.
The step S6 comprises the following steps:
S61, the stripe reception adjustment module 6 receives the network quantization estimate from the stripe network quality inspection module 5;
S62, the stripe reception adjustment module 6 receives the QP quantization value of the frame group from the image QP monitoring module 4;
S63, when the network quantization estimate is greater than the upper threshold, the stripe reception adjustment module 6 adds an adjustment value to the image coding quantization QP maximum of the video source; the larger the QP maximum, the lower the image quality, so when the network quality from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is poor, raising the QP maximum reduces the network bandwidth used and reduces video stuttering;
S64, when the stripe reception adjustment module 6 judges that the QP quantization value of the frame group is greater than a threshold and that the network quantization estimate from the video source stripe coding and sending module 2 to the stripe media receiving module 3 is less than the lower threshold, it reduces the image coding quantization QP maximum of the video source by an adjustment value; when the video source carries much detail, the frame group QP quantization value becomes too high and the actual image quality poor, so lowering the QP maximum improves the image quality of the video source;
S65, the stripe reception adjustment module 6 distributes the I frame stripes of all adjusted video sources; it divides UTC timestamps into time slices of the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration;
S66, the stripe reception adjustment module 6 takes the current video frame adjustment period: the adjustment period start slice sequence number equals the current time slice sequence number divided by the I inter-frame slice count, rounded down and multiplied by the I inter-frame slice count; when the start slice sequence number is smaller than the current time slice sequence number, the I inter-frame slice count is added; the adjustment period end slice sequence number is the start slice sequence number + the I inter-frame slice count - 1;
S67, the stripe reception adjustment module 6 processes all video sources in turn: for each video source it searches the adjustment period for the slice with the lowest traffic prediction value and assigns that slice's sequence number to the video source as its I frame generation slice sequence number;
S68, the stripe reception adjustment module 6 sends each video source's I frame generation slice sequence number, I inter-frame slice count, P inter-frame slice count, and image coding quantization QP maximum to that video source's stripe coding and sending module 2;
S69, the stripe reception adjustment module 6 sends the I frame generation slice sequence numbers, I inter-frame slice counts, P inter-frame slice counts, and image coding quantization QP maximums of all video sources to the stripe transfer adjustment module 7.
The step S7 comprises the following steps:
S71, the stripe transfer adjustment module 7 receives the I frame generation slice sequence numbers, I inter-frame slice counts, P inter-frame slice counts, and image coding quantization QP maximums of all video sources from the stripe reception adjustment module 6, and builds the adjustment period and each slice's traffic prediction value by the same method the stripe reception adjustment module 6 applies to the I frame generation slice sequence number and I inter-frame slice count;
S72, when a video source is viewed by one or more display ends, the traffic prediction value of the slice at its I frame generation slice sequence number is increased by the video source's I frame traffic prediction value; all P frame slices are found by the method of the stripe reception adjustment module 6, and their traffic prediction values are increased by the video source's P frame traffic prediction value; the I frame transfer slice sequence number of the transfer stream to the display end of this video source is initially the I frame generation slice sequence number;
S73, the stripe transfer adjustment module 7 preferentially reuses the video source's own stripe slices for a transfer stream, to keep video transmission delay low; when the stripe transfer adjustment module 7 receives from the display end network quality inspection module 9 a network quantization estimate greater than the threshold, the transfer stream is considered poor and is readjusted;
S74, the stripe transfer adjustment module 7 subtracts the video source's I frame traffic prediction value from the traffic prediction value of the video source's I frame slice, and subtracts the video source's P frame traffic prediction value from the traffic prediction values of the video source's P frame slices;
S75, the stripe transfer adjustment module 7 searches the adjustment period for the slice with the lowest traffic prediction value for the video source's transfer stream, takes that slice's sequence number as the I frame transfer slice sequence number of the transfer stream, and increases that slice's traffic prediction value by the video source's I frame traffic prediction value;
S76, the stripe transfer adjustment module 7 finds all P frame slices by the method of the stripe reception adjustment module 6 and increases the traffic prediction values of the transfer stream's P frame slices by the video source's P frame traffic prediction value;
S77, the stripe transfer adjustment module 7 sends the I frame generation slice sequence number, the I frame transfer slice sequence number, the I inter-frame slice count, and the P inter-frame slice count of the stream from the video source to the display end to the stripe media sending module 8.
The step S9 comprises the following steps:
S91, the display end media receiving module 10 obtains the total number of sub-packets of a frame group by counting them and determines the number of lost sub-packets by counting the missing sub-packet sequence numbers; the frame group packet loss rate is the number of lost sub-packets / the total number of sub-packets, a frame group being an I frame together with the P frames that reference it;
S92, the display end media receiving module 10 calculates, for each frame of the frame group, the receiving time difference between its last and first sub-packet; the maximum of these differences is the frame group maximum frame receiving duration;
s93, the display end media receiving module 10 sends the frame group packet loss rate and the frame group maximum frame receiving duration to the display end network quality checking module 9;
s94, the display end media receiving module 10 sends the video frame packets to the display end decoding rendering module 11.
The step S10 comprises the following steps:
S101, the display end network quality inspection module 9 receives the frame group packet loss rate and the frame group maximum frame receiving duration from the display end media receiving module 10 and calculates the quantization reference value of the transfer stream: when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when the frame group maximum frame receiving duration is less than the reference value, the quantization reference value is the frame group packet loss rate;
S102, the display end network quality inspection module 9 applies a Kalman filter to the quantization reference value to generate the network quantization estimate, so that small network jitter errors do not distort the network quality judgment;
S103, the display end network quality inspection module 9 sends the network quantization estimate to the stripe transfer adjustment module 7.
The video stripe adjustment information in S1 comprises stripe adjustment information organized by time slices: UTC timestamps are divided into time slices of the slice duration, and each time slice sequence number is the millisecond value of the UTC timestamp divided by the slice duration. The video stripe adjustment information includes the slice duration, the I frame generation slice sequence number, the P inter-frame slice count, the I inter-frame slice count, and the image coding quantization QP maximum value.
In S52, when the frame group maximum frame receiving duration is greater than the reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when the frame group maximum frame receiving duration is less than the reference value, the quantization reference value is the frame group packet loss rate.
In S67, the initial traffic prediction value of every slice in the adjustment period is 0. For the slice corresponding to the video source's I frame generation slice sequence number, the slice's new traffic prediction value equals its current traffic prediction value plus the video source's I frame traffic prediction value; the video source's I frame traffic prediction value is 10 × (50 - QP maximum of the video source), the I to P frame stream ratio being 10:1, and the video source's P frame traffic prediction value is 50 - QP maximum of the video source. All P frame slices in the adjustment period are calculated from the video source's I frame generation slice sequence number and P inter-frame slice count: the I frame generation slice sequence number + the P inter-frame slice count gives the first P frame slice sequence number, and each P frame slice sequence number + the P inter-frame slice count gives the next; when the next P frame slice sequence number exceeds the adjustment period end sequence number, the I inter-frame slice count is subtracted from it, and this continues until the walk returns to the I frame generation slice sequence number. The traffic prediction value of each P frame slice found this way equals its current traffic prediction value plus the video source's P frame traffic prediction value.
The above-described embodiments are intended to illustrate the present invention, not to limit it, and any modifications and variations made thereto are within the spirit of the invention and the scope of the appended claims.
While the fundamental and principal features of the invention and advantages of the invention have been shown and described, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments, but may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted only for clarity, and the specification should be taken as a whole, the technical solutions of the various embodiments being combinable as appropriate to form other implementations understandable to those skilled in the art.

Claims (9)

1. A frame equalization coding and decoding system based on stripe I frames, comprising
The video source frame generation module (1), the video source stripe code sending module (2), the stripe media receiving module (3), the image QP monitoring module (4), the stripe network quality checking module (5), the stripe receiving adjustment module (6), the stripe transfer adjustment module (7), the stripe media sending module (8), the display end network quality checking module (9), the display end media receiving module (10) and the display end decoding rendering module (11);
video source frame generation module (1): the video source frame generation module (1) receives the video frame generation time interval of the video source stripe code transmission module (2), and generates YUV video frame data according to the time interval;
video source stripe coding and sending module (2): the video source stripe coding and sending module (2) receives YUV video frame data from the video source frame generation module (1), receives video stripe adjustment information from the stripe reception adjustment module (6), and encodes and compresses the YUV video frame data; the video stripe adjustment information comprises stripe adjustment information organized by time slices, UTC timestamps being divided into time slices of the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; the video stripe adjustment information includes the slice duration, the I frame generation slice sequence number, the P inter-frame slice count, the I inter-frame slice count, and the image coding quantization QP maximum value;
The video source stripe coding and sending module (2) multiplies the P inter-frame slice count by the slice duration to obtain the video frame generation time interval and sends it to the video source frame generation module (1); the video source stripe coding and sending module (2) calculates the I frame generation UTC timestamp value as the I frame generation slice sequence number × the slice duration; it obtains the sequence number of the next P frame as the I frame generation slice sequence number + the P inter-frame slice count, and can compute the slice sequence numbers of all P frames by repeatedly adding the P inter-frame slice count; it obtains the next I frame sequence number as the I frame generation slice sequence number + the I inter-frame slice count, and can obtain the sequence numbers of all I frames by repeatedly adding the I inter-frame slice count; when a generated I frame sequence number coincides with a P frame sequence number, that time slice generates only the I frame; the video source stripe coding and sending module (2) generates YUV video frame data at the I frame and P frame UTC timestamp values, encodes it into video compression frames according to the image coding quantization QP maximum, packetizes the video compression frames at a fixed size, marks sub-packet sequence numbers, and sends the sub-packets, sub-packet sequence numbers, and slice sequence numbers to the stripe media receiving module (3);
Stripe media receiving module (3): the method comprises the steps that a stripe media receiving module (3) receives the packetization and packetization sequence number of a video compression frame of a video source stripe code sending module (2), the stripe media receiving module (3) calculates a frame group packet loss rate, the frame group is an I frame and a reference P frame, the stripe media receiving module (3) calculates the receiving time difference between the last packetization and the first packetization of each frame of the frame group to obtain a frame receiving time difference, the maximum value is the maximum frame receiving time length of the frame group, the stripe media receiving module (3) sends the frame group packet loss rate and the maximum frame receiving time length of the frame group to a stripe network quality checking module (5), the stripe media receiving module (3) sends video frame packetization data of the frame group to an image QP monitoring module (4), and the stripe media receiving module (3) sends the video frame packetization, the packetization sequence number and the slicing sequence number to a stripe media sending module (8);
image QP monitoring module (4): the image QP monitoring module (4) receives video frame sub-package data of the frame group of the stripe media receiving module (3), and the image QP monitoring module (4) combines the video frame sub-packages of the frame group into video frames; the image QP monitoring module (4) obtains QP quantized values of all macro blocks of the video frames, averages the QP quantized values of the video frames to obtain QP quantized values of the video frames, the image QP monitoring module (4) obtains QP quantized values of the frame groups from QP quantized values of all video frames of the frame groups, and the image QP monitoring module (4) sends the QP quantized values of the frame groups to the stripe receiving and adjusting module (6);
Stripe network quality inspection module (5): the stripe network quality inspection module (5) receives the frame group packet loss rate and the frame group maximum frame receiving duration; the stripe network quality inspection module (5) calculates a quantization reference value for the video source: when the frame group maximum frame receiving duration is greater than a reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when the frame group maximum frame receiving duration is less than the reference value, the quantization reference value is the frame group packet loss rate; the stripe network quality inspection module (5) applies a Kalman filter to the quantization reference value to generate a network quantization estimated value, so that brief network jitter does not distort the network quality judgment, and sends the network quantization estimated value to the stripe receiving adjustment module (6);
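The quantization reference value and its Kalman smoothing can be sketched as follows; the claim does not give filter parameters, so the process and measurement noise values here are illustrative assumptions, and the caller is assumed to bring the loss rate and durations into comparable scales:

```python
import math

def quant_reference(loss_rate, max_frame_ms, reference_ms):
    """Quantization reference value: fuse the frame group packet loss rate
    with the excess of the maximum frame receiving duration over a reference."""
    if max_frame_ms > reference_ms:
        return math.sqrt(loss_rate ** 2 + (max_frame_ms - reference_ms) ** 2)
    return loss_rate

class ScalarKalman:
    """Minimal 1-D Kalman filter smoothing the quantization reference value,
    so a single jittery sample does not flip the network quality judgment."""
    def __init__(self, q=1e-3, r=0.1):
        self.q, self.r = q, r      # process / measurement noise (assumed)
        self.x, self.p = 0.0, 1.0  # state estimate and its variance
    def update(self, z):
        self.p += self.q                 # predict: variance grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward measurement z
        self.p *= (1.0 - k)
        return self.x
```

Feeding the filter a run of measurements with one outlier leaves the estimate close to the steady value, which is the behavior the claim relies on.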
stripe receiving adjustment module (6): the stripe receiving adjustment module (6) receives the network quantization estimated value from the stripe network quality inspection module (5) and the frame group QP quantized value from the image QP monitoring module (4); when the stripe receiving adjustment module (6) judges that the frame group QP quantized value is greater than a threshold value and that the network quantization estimated value from the video source stripe coding and sending module (2) to the stripe media receiving module (3) is less than a lower limit threshold, the stripe receiving adjustment module (6) reduces the image coding quantization QP maximum value of the video source by one adjustment value; when a video source contains much detail, an excessive frame group QP quantized value degrades the actual image quality, and reducing the image coding quantization QP maximum value improves the image quality of the video source; the stripe receiving adjustment module (6) allocates I frame stripes to all adjusted video sources; the stripe receiving adjustment module (6) slices the UTC timestamp using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; the stripe receiving adjustment module (6) takes the current video frame adjustment period and allocates all video sources in sequence; the stripe receiving adjustment module (6) generates in turn the I frame generation slice sequence number, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value of each video source and sends them to the video source stripe coding and sending module (2); the stripe receiving adjustment module (6) sends the I frame generation slice sequence numbers of all video sources, the I inter-frame slice number and the P inter-frame slice number, and the image coding quantization QP maximum value to the stripe transfer adjustment module (7);
Stripe transfer adjustment module (7): the stripe transfer adjustment module (7) receives the I frame generation slice sequence numbers of all video sources, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value from the stripe receiving adjustment module (6), and generates the adjustment period and the traffic prediction value of each slice according to the method of the stripe receiving adjustment module (6); when one video source is viewed by several display ends, the traffic prediction value of the I frame generation slice is incremented by the I frame traffic prediction value of the video source, all P frame slices are found according to the method of the stripe receiving adjustment module (6), and the traffic prediction value of every P frame slice is incremented by the P frame traffic prediction value of the video source; the I frame relay slice sequence number of the relay stream from the video source to the display end is the I frame generation slice sequence number; the stripe transfer adjustment module (7) preferentially reuses the slices of the video source for the relay stream, ensuring low video transmission delay; when the stripe transfer adjustment module (7) receives the network quantization estimated value from the display end network quality inspection module (9) and judges that it is greater than a threshold value, the relay stream is considered poor and is readjusted: the stripe transfer adjustment module (7) subtracts the I frame traffic prediction value of the video source from the traffic prediction value of its I frame slice and subtracts the P frame traffic prediction value of the video source from the traffic prediction values of its P frame slices; the stripe transfer adjustment module (7) then searches the adjustment period for the slice with the lowest traffic prediction value for the relay stream of the video source and takes it as the I frame relay slice sequence number of the relay stream, the traffic prediction value of that slice being incremented by the I frame traffic prediction value of the video source; the stripe transfer adjustment module (7) finds all P frame slices according to the method of the stripe receiving adjustment module (6), and the traffic prediction value of every P frame slice of the relay stream is incremented by the P frame traffic prediction value of the video source; the stripe transfer adjustment module (7) sends the I frame generation slice sequence number of the stream from the video source to the display end, the I frame relay slice sequence number, the I inter-frame slice number and the P inter-frame slice number to the stripe media sending module (8);
Stripe media sending module (8): the stripe media sending module (8) receives the video frame sub-packets, packet sequence numbers and slice sequence numbers from the stripe media receiving module (3); the stripe media sending module (8) receives, from the stripe transfer adjustment module (7), the I frame generation slice sequence number of the stream from the video source to the display end, the I frame relay slice sequence number, the I inter-frame slice number and the P inter-frame slice number; the stripe media sending module (8) calculates a relay deviation value and sends the video frame sub-packets and packet sequence numbers to the display end media receiving module (10);
display end network quality inspection module (9): the display end network quality inspection module (9) receives the frame group packet loss rate and the frame group maximum frame receiving duration from the display end media receiving module (10) and calculates a quantization reference value for the relay stream: when the frame group maximum frame receiving duration is greater than a reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when the frame group maximum frame receiving duration is less than the reference value, the quantization reference value is the frame group packet loss rate; the display end network quality inspection module (9) applies a Kalman filter to the quantization reference value to generate a network quantization estimated value, so that brief network jitter does not distort the network quality judgment, and sends the network quantization estimated value to the stripe transfer adjustment module (7);
Display end media receiving module (10): the display end media receiving module (10) receives the video frame sub-packets and packet sequence numbers from the stripe media sending module (8); the display end media receiving module (10) obtains the total number of frame group sub-packets by counting the sub-packets of the frame group and determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers of the frame group, the frame group packet loss rate being the number of lost sub-packets divided by the total number of sub-packets, a frame group being an I frame and the P frames that reference it; the display end media receiving module (10) calculates, for each frame of the frame group, the receiving time difference between the last sub-packet and the first sub-packet to obtain the frame receiving time difference, the maximum of which is the frame group maximum frame receiving duration; the display end media receiving module (10) sends the frame group packet loss rate and the frame group maximum frame receiving duration to the display end network quality inspection module (9), and sends the video frame sub-packets to the display end decoding rendering module (11);
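The frame group statistics computed by both receiving modules can be sketched as below; the per-frame packet representation and the use of sequence-number gaps to count the expected packets are assumptions of this sketch:

```python
def frame_group_stats(frames):
    """frames: one list per frame of the frame group, each entry a
    (packet sequence number, receive time in ms) pair.  Returns the frame
    group packet loss rate and the frame group maximum frame receiving
    duration as described for modules (3) and (10)."""
    total = lost = 0
    max_recv_ms = 0
    for packets in frames:
        seqs = [s for s, _ in packets]
        expected = max(seqs) - min(seqs) + 1   # packets the frame should have
        total += expected
        lost += expected - len(set(seqs))      # missing sequence numbers
        times = [t for _, t in packets]
        # receiving time difference between last and first sub-packet
        max_recv_ms = max(max_recv_ms, max(times) - min(times))
    return lost / total, max_recv_ms
```

For example, a frame missing packet 3 out of packets 1-4 and a complete second frame give a 1/6 loss rate and a 12 ms maximum frame receiving duration.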
display end decoding rendering module (11): the display end decoding rendering module (11) receives the video frame subpackets of the display end media receiving module (10), combines the video frames, and decodes, renders and displays.
2. The frame equalization codec system based on stripe I as claimed in claim 1, wherein the stripe media receiving module (3) obtains the total number of frame group sub-packets by counting the sub-packets of the frame group, determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers of the frame group, and the frame group packet loss rate is the number of lost sub-packets divided by the total number of sub-packets.
3. The system of claim 1, wherein the larger the frame group packet loss rate, the worse the network: packet loss may be caused by network interference or by network overload; the larger the frame group maximum frame receiving duration, the more likely the network is overloaded, and the smaller it is, the more idle the network; network quality is therefore judged by fusing the frame group packet loss rate and the frame group maximum frame receiving duration.
4. The system according to claim 1, wherein when the network quantization estimated value is greater than an upper limit threshold, the stripe receiving adjustment module (6) adds an adjustment value to the image coding quantization QP maximum value of the video source; the larger the image coding quantization QP maximum value, the lower the image quality, so that when the network quality from the video source stripe coding and sending module (2) to the stripe media receiving module (3) is poor, increasing the image coding quantization QP maximum value reduces the network bandwidth and reduces video stuttering.
5. The system of claim 1, wherein the adjustment period start sequence number is equal to the integer part of the current time slice sequence number divided by the I inter-frame slice number, multiplied by the I inter-frame slice number; when the adjustment period start sequence number is smaller than the current time slice sequence number, the I inter-frame slice number is added; the adjustment period end sequence number is equal to the adjustment period start sequence number plus the I inter-frame slice number minus 1.
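The adjustment period computation of claim 5 is compact enough to sketch directly; the parameter values in the example are assumptions:

```python
def adjustment_period(now_slice, i_gap):
    """Return the (start, end) slice sequence numbers of the current
    adjustment period: round the current time slice sequence number down to
    a multiple of the I inter-frame slice number, bump by one period while
    still below the current slice, and span one full I inter-frame gap."""
    start = (now_slice // i_gap) * i_gap
    if start < now_slice:
        start += i_gap
    end = start + i_gap - 1
    return start, end
```

With an I inter-frame slice number of 50, time slice 103 yields the period [150, 199], while a current slice already on a period boundary (100) keeps [100, 149].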
6. The system of claim 1, wherein the stripe receiving adjustment module (6) searches the adjustment period for the slice with the lowest traffic prediction value for the video source and allocates the slice sequence number of that slice to the video source as its I frame generation slice sequence number; the initial traffic prediction value of each slice of the adjustment period is 0; after the I frame generation slice sequence number of the video source is assigned to the corresponding slice, the new traffic prediction value of the slice equals the current traffic prediction value of the slice plus the I frame traffic prediction value of the video source; the I frame traffic prediction value of the video source is 10 x (50 - QP maximum value of the video source), the QP maximum value of the video source being at most 50 and the ratio of the I to P frame streams being taken as 10:1; all P frame slices of the adjustment period are calculated from the I frame generation slice sequence number and the P inter-frame slice number of the video source: the I frame generation slice sequence number plus the P inter-frame slice number gives the slice sequence number of the first P frame, and each P frame slice sequence number plus the P inter-frame slice number gives the next P frame slice sequence number; when the next P frame slice sequence number is greater than the adjustment period end sequence number, the I inter-frame slice number is subtracted from it; this continues until the P frame slice sequence number equals the I frame generation slice sequence number; the traffic prediction value of each P frame slice equals the current traffic prediction value of the slice plus the P frame traffic prediction value of the video source, which is (50 - QP maximum value of the video source).
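The greedy allocation of claim 6 can be sketched as follows, under the assumptions that the adjustment period spans exactly one I inter-frame gap and that the I inter-frame slice number is a multiple of the P inter-frame slice number (otherwise the wrap-around enumeration would not terminate); the source/QP values in the example are made up:

```python
def p_frame_slices(gen, p_gap, i_gap, period_end):
    """Enumerate the P frame slices of one source inside the adjustment
    period, wrapping past the period end by subtracting the I inter-frame
    slice number until the walk returns to the generation slice."""
    slices, s = [], gen + p_gap
    while s != gen:
        if s > period_end:
            s -= i_gap
            continue
        slices.append(s)
        s += p_gap
    return slices

def allocate_i_slices(sources, period_start, period_end, p_gap, i_gap):
    """Greedy stripe allocation sketch: each source's I frame lands on the
    slice with the lowest traffic prediction value, then the I and P frame
    traffic predictions are charged to the affected slices.
    `sources` maps a source id to its image coding QP maximum value (<= 50)."""
    load = {s: 0.0 for s in range(period_start, period_end + 1)}
    assignment = {}
    for src, qp_max in sources.items():
        gen = min(load, key=load.get)        # lowest prediction value wins
        assignment[src] = gen
        load[gen] += 10 * (50 - qp_max)      # I frame prediction, I:P = 10:1
        for s in p_frame_slices(gen, p_gap, i_gap, period_end):
            load[s] += 50 - qp_max           # P frame prediction
    return assignment, load
```

With a ten-slice period, a P gap of 2 and one source with QP maximum 30, the I frame is charged 200 units to slice 0 and each P frame slice 20 units.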
7. The system of claim 1, wherein the relay deviation value is the I frame relay slice sequence number minus the I frame generation slice sequence number; when the relay deviation value is less than or equal to zero, the I inter-frame slice number is added to it; the network delay deviation value is then added to obtain the final relay deviation value; time slicing slices the UTC timestamp using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; for each time slice, it is judged whether the slice is a sending slice of the relay stream between the video source and the display end, i.e. whether its sequence number conforms to the I frame relay slice sequence number plus N times the P inter-frame slice number, N being an integer; when the relay stream is in a sending slice, the video frame sub-packets whose slice sequence number plus the relay deviation value is less than or equal to the current time slice sequence number are sent.
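The relay deviation and sending-slice test of claim 7 can be sketched as below; the dictionary shape of the pending packet buffer is an illustration, not part of the claim:

```python
def final_relay_deviation(i_relay_slice, i_gen_slice, i_gap, net_delay_dev=0):
    """Relay slice minus generation slice, wrapped into a positive range by
    one I inter-frame gap, plus a network delay deviation value."""
    dev = i_relay_slice - i_gen_slice
    if dev <= 0:
        dev += i_gap
    return dev + net_delay_dev

def is_send_slice(slice_no, i_relay_slice, p_gap):
    """A time slice carries the relay stream when its sequence number equals
    the I frame relay slice sequence number + N * P inter-frame slice number."""
    return (slice_no - i_relay_slice) % p_gap == 0

def packets_to_send(pending, relay_dev, now_slice):
    """Send the buffered sub-packets whose slice sequence number plus the
    relay deviation value has reached the current time slice sequence number.
    `pending` maps a packet id to its slice sequence number (assumed shape)."""
    return [p for p, s in pending.items() if s + relay_dev <= now_slice]
```

For example, a relay slice 7 against a generation slice 3 gives a deviation of 4, and only the packets whose shifted slice has come due are released in a sending slice.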
8. A codec method based on the stripe I frame-based equalization codec system of claim 1, comprising the steps of:
s2, the video source stripe coding and sending module (2) receives the stripe adjustment from the stripe receiving adjustment module (6) and adjusts the video coding,
s21, a video source stripe code sending module (2) receives YUV video frame data of a video source frame generating module (1);
S22, the video source stripe coding and sending module (2) receives the video stripe adjustment information of the stripe receiving adjustment module (6) and compression-codes the YUV video frame data; the video stripe adjustment information adjusts the stripes according to time slices, the time slicing slices the UTC timestamp using the slice duration, and each time slice sequence number is the millisecond value of the UTC timestamp divided by the slice duration; the video stripe adjustment information comprises the slice duration, the I frame generation slice sequence number, the P inter-frame slice number, the I inter-frame slice number and the image coding quantization QP maximum value;
s23, the video source stripe coding and sending module (2) multiplies the P inter-frame slice number by the slice duration to obtain the video frame generation time interval and sends it to the video source frame generation module (1);
s24, the video source stripe coding and sending module (2) calculates the UTC timestamp value of the I frame generation as the I frame generation slice sequence number multiplied by the slice duration; the video source stripe coding and sending module (2) obtains the slice sequence number of the next P frame as the I frame generation slice sequence number plus the P inter-frame slice number, and can calculate the slice sequence numbers of all P frames in turn by accumulating the P inter-frame slice number;
s25, the video source stripe coding and sending module (2) obtains the slice sequence number of the next I frame as the I frame generation slice sequence number plus the I inter-frame slice number, and can obtain the slice sequence numbers of all I frames by accumulating the I inter-frame slice number; when the slice sequence number of a generated I frame is the same as that of a P frame, the time slice generates only the I frame;
S26, a video source stripe coding and transmitting module (2) generates YUV video frame data with UTC timestamp values according to I frames and P frames, codes the YUV video frame data into video compression frames according to the maximum value of an image coding quantization QP, and the video source stripe coding and transmitting module (2) packetizes the video compression frames according to a fixed size and marks serial numbers, and the packetizing, packetizing serial numbers and the slicing serial numbers are transmitted to a stripe media receiving module (3);
s3, the stripe media receiving module (3) receives video frame processing,
s31, the stripe media receiving module (3) receives the packetization and the packetization sequence number of the video compression frame of the video source stripe code transmitting module (2);
s32, the stripe media receiving module (3) obtains the total number of frame group sub-packets by counting the sub-packets of the frame group, determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers of the frame group, and the frame group packet loss rate is the number of lost sub-packets divided by the total number of sub-packets, a frame group being an I frame and the P frames that reference it;
s33, the stripe media receiving module (3) calculates the receiving time difference between the last sub-packet and the first sub-packet of each frame of the frame group, obtains the frame receiving time difference, and takes the maximum value as the maximum frame receiving time length of the frame group;
s34, the stripe media receiving module (3) sends the frame group packet loss rate and the frame group maximum frame receiving duration to the stripe network quality checking module (5);
S35, the stripe media receiving module (3) sends the video frame sub-packets of the frame group to the image QP monitoring module (4), and the image QP monitoring module (4) combines the video frame sub-packets of the frame group into video frames;
s36, the stripe media receiving module (3) sends the video frame sub-package, the sub-package serial number and the slice serial number to the stripe media sending module (8);
s4, the image QP monitoring module (4) generates a frame group QP value,
s41, an image QP monitoring module (4) receives video frame sub-package data of a frame group of the stripe media receiving module (3), and the image QP monitoring module (4) combines the video frame sub-package of the frame group into a video frame;
s42, the image QP monitoring module (4) acquires the QP quantized values of all macro blocks of the video frame and averages them to obtain the QP quantized value of the video frame;
s43, an image QP monitoring module (4) obtains QP quantized values of the frame group from QP quantized values of all video frames of the frame group;
s44, the image QP monitoring module (4) sends the QP quantized value of the frame group to the stripe receiving and adjusting module (6);
s5, a strip network quality inspection module (5) generates a network quantization estimated value,
s51, a stripe network quality inspection module (5) receives a frame group packet loss rate and a frame group maximum frame receiving duration;
s52, calculating a quantization reference value of the video source by a strip network quality inspection module (5);
S53, the stripe network quality inspection module (5) applies a Kalman filter to the quantization reference value to generate a network quantization estimated value, so that brief network jitter does not distort the network quality judgment;
s54, the strip network quality inspection module (5) sends the network quantitative estimation value to the strip receiving adjustment module (6);
s6, the stripe receiving adjustment module (6) adjusts the coding of the video source by combining the network quantization estimated value and the frame group QP quantized value,
s7, the stripe transfer adjustment module (7) adjusts the relay sending strategy,
s8, the stripe media sending module (8) sends video frames according to the strategy of the stripe transfer adjusting module (7),
s81, a stripe media sending module (8) receives video frame sub-packets, sub-packet serial numbers and fragment serial numbers of a stripe media receiving module (3);
s82, the stripe media sending module (8) receives, from the stripe transfer adjustment module (7), the I frame generation slice sequence number of the stream from the video source to the display end, the I frame relay slice sequence number, the I inter-frame slice number and the P inter-frame slice number;
s83, the stripe media sending module (8) calculates the relay deviation value as the I frame relay slice sequence number minus the I frame generation slice sequence number; when the relay deviation value is less than zero, the I inter-frame slice number is added to it; the network delay deviation value is then added to obtain the final relay deviation value;
S84, time slicing slices the UTC timestamp using the slice duration, each time slice sequence number being the millisecond value of the UTC timestamp divided by the slice duration; for each time slice, it is judged whether the slice is a sending slice of the relay stream between the video source and the display end, i.e. whether its sequence number conforms to the I frame relay slice sequence number plus N times the P inter-frame slice number, N being an integer;
s85, when the relay stream is in a sending slice, the video frame sub-packets of the relay stream whose slice sequence number plus the relay deviation value is less than or equal to the current time slice sequence number are sent;
s86, the stripe media sending module (8) sends the video frame sub-packets and the sub-packet serial numbers to the display end media receiving module (10);
s9, the display end media receiving module (10) receives the video frame sub-package and sub-package sequence number processing of the stripe media sending module (8),
s10, a network quality inspection module (9) at a display end generates a network quantitative estimated value;
s11, a display end decoding rendering module (11) receives video frame subpackets of a display end media receiving module (10), combines video frames, and decodes, renders and displays.
9. The method for equalizing codec based on the stripe I frame as recited in claim 8, wherein S6 comprises the steps of:
s61, a strip receiving adjustment module (6) receives a network quantization estimated value of the strip network quality inspection module (5);
S62, a strip receiving and adjusting module (6) receives QP quantized values of the frame groups of the image QP monitoring module (4);
s63, when the network quantization estimated value is greater than an upper limit threshold, the stripe receiving adjustment module (6) adds an adjustment value to the image coding quantization QP maximum value of the video source; the larger the image coding quantization QP maximum value, the lower the image quality; when the network quality from the video source stripe coding and sending module (2) to the stripe media receiving module (3) is poor, increasing the image coding quantization QP maximum value reduces the network bandwidth and reduces video stuttering;
s64, when the stripe receiving adjustment module (6) judges that the frame group QP quantized value is greater than a threshold value and that the network quantization estimated value from the video source stripe coding and sending module (2) to the stripe media receiving module (3) is less than a lower limit threshold, the stripe receiving adjustment module (6) reduces the image coding quantization QP maximum value of the video source by one adjustment value; when a video source contains much detail, an excessive frame group QP quantized value degrades the actual image quality, and reducing the image coding quantization QP maximum value improves the image quality of the video source;
s65, the strip receiving and adjusting module (6) distributes the strips of the I frames to all the adjusted video sources, the strip receiving and adjusting module (6) segments the UTC time stamp by using the segment duration, and each time segment sequence number is the millisecond value/segment duration of the UTC time stamp;
S66, the stripe receiving adjustment module (6) takes the current video frame adjustment period: the adjustment period start sequence number is equal to the integer part of the current time slice sequence number divided by the I inter-frame slice number, multiplied by the I inter-frame slice number; when the adjustment period start sequence number is smaller than the current time slice sequence number, the I inter-frame slice number is added; the adjustment period end sequence number is the adjustment period start sequence number plus the I inter-frame slice number minus 1;
s67, a strip receiving and adjusting module (6) sequentially distributes all video sources, the strip receiving and adjusting module (6) searches fragments with the lowest flow predicted value for the video sources in an adjusting period, and distributes the fragment serial numbers of the fragments to the video sources, and I frames of the video sources generate fragment serial numbers;
s68, the stripe receiving adjustment module (6) generates in turn the I frame generation slice sequence number, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value of each video source and sends them to the video source stripe coding and sending module (2);
s69, the stripe receiving adjustment module (6) sends the I frame generation slice sequence numbers of all video sources, the I inter-frame slice number and the P inter-frame slice number, and the image coding quantization QP maximum value to the stripe transfer adjustment module (7);
The step S7 comprises the following steps:
s71, the stripe transfer adjustment module (7) receives the I frame generation slice sequence numbers of all video sources, the I inter-frame slice number, the P inter-frame slice number and the image coding quantization QP maximum value from the stripe receiving adjustment module (6), and generates the adjustment period and the traffic prediction value of each slice according to the method of the stripe receiving adjustment module (6);
s72, when one video source is viewed by several display ends, the traffic prediction value of the I frame generation slice is incremented by the I frame traffic prediction value of the video source; all P frame slices are found according to the method of the stripe receiving adjustment module (6), and the traffic prediction value of every P frame slice is incremented by the P frame traffic prediction value of the video source; the I frame relay slice sequence number of the relay stream to the display end of the video source is the I frame generation slice sequence number;
s73, the stripe transfer adjustment module (7) preferentially reuses the slices of the video source for the relay stream, ensuring low video transmission delay; when the stripe transfer adjustment module (7) receives the network quantization estimated value from the display end network quality inspection module (9) and judges that it is greater than a threshold value, the relay stream is considered poor and is readjusted;
S74, the stripe transfer adjustment module (7) subtracts the I frame traffic prediction value of the video source from the traffic prediction value of its I frame slice, and subtracts the P frame traffic prediction value of the video source from the traffic prediction values of its P frame slices;
s75, the stripe transfer adjustment module (7) searches the adjustment period for the slice with the lowest traffic prediction value for the relay stream of the video source and takes it as the I frame relay slice sequence number of the relay stream; the traffic prediction value of that slice is incremented by the I frame traffic prediction value of the video source;
s76, the stripe transfer adjustment module (7) finds all P frame slices according to the method of the stripe receiving adjustment module (6), and the traffic prediction value of every P frame slice of the relay stream is incremented by the P frame traffic prediction value of the video source;
s77, the stripe transfer adjustment module (7) sends the I frame generation slice sequence number of the stream from the video source to the display end, the I frame relay slice sequence number, the I inter-frame slice number and the P inter-frame slice number to the stripe media sending module (8);
the step S9 comprises the following steps:
s91, the display end media receiving module (10) obtains the total number of frame group sub-packets by counting the sub-packets of the frame group, determines the number of lost frame group sub-packets by counting the missing sub-packet sequence numbers of the frame group, and the frame group packet loss rate is the number of lost sub-packets divided by the total number of sub-packets, a frame group being an I frame and the P frames that reference it;
S92, a display end media receiving module (10) calculates the receiving time difference between the last sub-packet and the first sub-packet of each frame of the frame group to obtain a frame receiving time difference, and takes the maximum value as the maximum frame receiving time length of the frame group;
s93, the display end media receiving module (10) sends the frame group packet loss rate and the frame group maximum frame receiving duration to the display end network quality checking module (9);
s94, the display end media receiving module (10) sends the video frame sub-packets to the display end decoding rendering module (11);
the step S10 comprises the following steps:
s101, the display end network quality inspection module (9) receives the frame group packet loss rate and the frame group maximum frame receiving duration from the display end media receiving module (10) and calculates the quantization reference value of the relay stream: when the frame group maximum frame receiving duration is greater than a reference value, the quantization reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when the frame group maximum frame receiving duration is less than the reference value, the quantization reference value is the frame group packet loss rate;
S102, the display end network quality inspection module (9) applies a Kalman filter to the quantized reference value to generate a network quantization estimate, so that transient network jitter does not distort the network quality judgement;
S103, the display end network quality inspection module (9) forwards the network quantization estimate to the stripe transfer adjustment module (7);
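The quantized reference value of S101 and the smoothing of S102 can be sketched as follows. The Kalman filter shown is a generic one-dimensional filter; its process and measurement noise constants are illustrative assumptions, since the patent does not specify them.

```python
import math

def quantization_reference(loss_rate, max_frame_recv_ms, reference_ms):
    """Quantized reference value per S101: above the reference duration,
    the loss rate and the excess duration are combined as the root of
    the sum of their squares; otherwise the loss rate alone is used."""
    if max_frame_recv_ms > reference_ms:
        excess = max_frame_recv_ms - reference_ms
        return math.sqrt(loss_rate ** 2 + excess ** 2)
    return loss_rate

class ScalarKalman:
    """Minimal 1-D Kalman filter to smooth the quantized reference
    value into a network quantization estimate (S102), so that brief
    jitter spikes do not dominate the quality judgement."""

    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r      # process / measurement noise (assumed)
        self.x, self.p = 0.0, 1.0  # state estimate and its variance

    def update(self, z):
        self.p += self.q                  # predict step
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x
```

Successive calls to `update` pull the estimate toward the measured reference value while damping single-sample spikes, which is the behaviour S102 relies on.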
The video stripe adjustment information in S1 includes stripe adjustment information based on time slicing: time slicing divides the UTC timestamp into slices of the slice duration, so each time-slice sequence number is the millisecond value of the UTC timestamp divided by the slice duration. The video stripe adjustment information comprises the slice duration, the I-frame generation slice sequence number, the number of slices between P frames, the number of slices between I frames, and the image coding quantization QP maximum value;
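The time-slice sequence number described above reduces to one integer division; a minimal sketch (the function name is an assumption for illustration):

```python
def time_slice_seq(utc_ms, slice_ms):
    """Time-slice sequence number per S1: the UTC timestamp in
    milliseconds, integer-divided by the slice duration, so that all
    endpoints sharing a clock derive the same slice numbering."""
    return utc_ms // slice_ms
```

For example, with a 1000 ms slice duration, all timestamps from 10000 ms to 10999 ms map to slice 10.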
In S52, when the frame group maximum frame receiving duration is greater than the reference value, the quantized reference value is the square root of the sum of the square of the frame group packet loss rate and the square of (frame group maximum frame receiving duration - reference value); when it is not greater than the reference value, the quantized reference value is the frame group packet loss rate;
S67, the traffic predicted value of each slice in the adjustment period is initialized to 0. For the slice corresponding to the adjusted I-frame generation slice sequence number of a video source, the new slice traffic predicted value equals the current slice traffic predicted value plus the predicted I-frame traffic of that video source; the predicted I-frame traffic of a video source is 10 × (50 - the QP maximum value of the video source), where 50 is the upper bound of the QP maximum value and the I-to-P frame code-stream ratio is 10:1. All P-frame slices in the adjustment period are derived from the I-frame generation slice sequence number and the number of slices between P frames: the I-frame generation slice sequence number plus the P-frame slice count gives the first P-frame slice sequence number, and each P-frame slice sequence number plus the P-frame slice count gives the next one; when the next P-frame slice sequence number exceeds the end sequence number of the adjustment period, it wraps around to the start of the period, and iteration stops when the P-frame slice sequence number returns to the I-frame generation slice sequence number. For each P-frame slice, the new slice traffic predicted value equals the current slice traffic predicted value plus the predicted P-frame traffic of the video source; the predicted P-frame traffic of a video source is (50 - the QP maximum value of the video source), consistent with the 10:1 I-to-P code-stream ratio.
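The per-slice traffic prediction of S67 can be sketched as follows. This is a simplified reading of the step, assuming a single video source, P-frame slice steps shorter than the adjustment period, and the stated traffic constants; the function name and signature are illustrative.

```python
def predict_slice_traffic(period_start, period_end, i_slice_seq,
                          p_slice_count, source_qp_max):
    """Sketch of S67: predicted traffic per slice of one adjustment
    period. The I frame lands on i_slice_seq; P frames land every
    p_slice_count slices after it, wrapping around the period until
    the walk returns to the I-frame slice. Traffic constants follow
    the stated model: I-frame traffic 10*(50 - QPmax), P-frame
    traffic (50 - QPmax), i.e. a 10:1 I-to-P ratio with QP <= 50.
    """
    traffic = {seq: 0 for seq in range(period_start, period_end + 1)}
    traffic[i_slice_seq] += 10 * (50 - source_qp_max)   # I-frame slice

    seq = i_slice_seq + p_slice_count
    while True:
        if seq > period_end:                  # wrap around the period
            seq = period_start + (seq - period_end - 1)
        if seq == i_slice_seq:                # back at the I-frame slice
            break
        traffic[seq] += 50 - source_qp_max    # P-frame slice
        seq += p_slice_count
    return traffic
```

With several video sources, the same accumulation would simply be repeated per source onto the shared `traffic` map, matching the "current predicted value plus source predicted value" wording of the step.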
CN202210274463.1A 2022-03-21 2022-03-21 Frame equalization coding and decoding system and method based on strip I Active CN114760472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210274463.1A CN114760472B (en) 2022-03-21 2022-03-21 Frame equalization coding and decoding system and method based on strip I

Publications (2)

Publication Number Publication Date
CN114760472A CN114760472A (en) 2022-07-15
CN114760472B true CN114760472B (en) 2023-05-12

Family

ID=82327574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210274463.1A Active CN114760472B (en) 2022-03-21 2022-03-21 Frame equalization coding and decoding system and method based on strip I

Country Status (1)

Country Link
CN (1) CN114760472B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881947A (en) * 2017-05-15 2018-11-23 Alibaba Group Holding Ltd. Infringement detection method and device for a live stream
WO2019080022A1 (en) * 2017-10-26 2019-05-02 天彩电子(深圳)有限公司 Method and device for network video stream transmission congestion control
CN113259661A (en) * 2020-02-12 2021-08-13 Tencent America LLC Method and device for video decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5072893B2 (en) * 2009-03-25 2012-11-14 Toshiba Corp. Image encoding method and image decoding method
US20160205398A1 (en) * 2015-01-08 2016-07-14 Magnum Semiconductor, Inc. Apparatuses and methods for efficient random noise encoding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant