CN111064962B - Video transmission system and method


Info

Publication number
CN111064962B
CN111064962B (granted publication); application CN201911408868.4A
Authority
CN
China
Prior art keywords
frame
sub
image
video stream
video
Prior art date
Legal status
Active
Application number
CN201911408868.4A
Other languages
Chinese (zh)
Other versions
CN111064962A (en)
Inventor
关本立
欧俊文
Current Assignee
Ava Electronic Technology Co Ltd
Original Assignee
Ava Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ava Electronic Technology Co Ltd filed Critical Ava Electronic Technology Co Ltd
Priority to CN201911408868.4A priority Critical patent/CN111064962B/en
Publication of CN111064962A publication Critical patent/CN111064962A/en
Application granted granted Critical
Publication of CN111064962B publication Critical patent/CN111064962B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/187: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a scalable video layer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video transmission system and method. The video transmission system comprises a sending end, a server and a receiving end. The sending end splits each image frame of an original video stream to be transmitted into n interlaced sub-image frames, inserts the n sub-image frames split from the same image frame into the stream in a preset order to form a second video stream whose frame rate is n times that of the original video stream, encodes it, and uploads the code stream to the server. The server unpacks and repackages the code stream according to the bandwidth between the server and the receiving end, retains k of the n sub-image frames and transmits them to the receiving end. The receiving end decodes the code stream and restores the sub-image frames to the image frames of the original video stream according to their original split positions, or performs spatial interpolation on the successfully decoded sub-image frames to restore the image frames, thereby reconstructing the original video stream. The invention uses standard codecs, selects a suitable code stream according to the bandwidth, uses the bandwidth effectively and has good compatibility.

Description

Video transmission system and method
Technical Field
The invention belongs to the technical field of video communication, and particularly relates to a video transmission system and a video transmission method.
Background
Currently, a relatively mature solution for adaptive transmission of video streams is Scalable Video Coding (SVC). This scheme encodes the original video into different frame rates (temporal scalability), resolutions (spatial scalability) or video qualities (quality scalability), and partitions the video stream into a base layer, which provides the user with the most basic video quality, frame rate and resolution, and enhancement layers, which supplement the video quality, so that different networks and terminals can adaptively select the video layers to decode. The more SVC layers an end user receives, the higher the quality of the obtained video. During transmission, the video code stream can be adjusted through temporal, spatial and quality scalability according to the importance of the data, which determines how the network equipment handles the data, i.e. whether it is discarded, delayed or forwarded, thereby providing good adaptability to networks and terminals.
The main problem with this scheme is that SVC is not compatible with conventional H.264 and MPEG-4 codecs, and a dedicated SVC player is required to play video data saved in this format. Therefore, to meet the above needs, the video communication system and the network equipment in use must be upgraded, so SVC scalable video coding has poor practical compatibility and cannot be used on most equipment.
Disclosure of Invention
The invention provides a video transmission system and method to address the defects of the prior art, namely that perceived video fluency degrades when data packets are lost during video communication, and that compatibility is poor when SVC scalable video coding is used.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides a video transmission system, comprising: a sending terminal, a server and a receiving terminal;
the transmitting end comprises: a partitioning module and an encoding module, the partitioning module comprising: a splitting unit;
the splitting unit is used for splitting an image frame of the original video stream into n interlaced sub-image frames, where n ≥ 2; taking one frame of the original video stream as a unit, the n sub-image frames split from the same image frame are inserted, in a preset order, into the time interval of that frame of the original video stream to form a group of sub-frame sequences; the groups of sub-frame sequences are arranged according to the time order of the original video stream and combined to form a second video stream, whose frame rate is n times the frame rate of the original video stream;
the encoding module is used for encoding the second video stream to form a second video code stream;
the server is used for unpacking and repackaging the second video code stream according to the bandwidth between the server and the receiving end to form a third video code stream in which k of the n sub-image frames of each sub-frame sequence are retained, where k ≤ n, and for transmitting the third video code stream to the receiving end;
the receiving end includes: a synthesis module and a decoding module;
the decoding module is used for decoding the third video code stream to form a third video stream;
and the synthesis module is used for restoring the sub-image frames belonging to the same sub-frame sequence in the third video stream into one image frame according to their original split positions and forming the original video stream, wherein if a restored image frame is incomplete, spatial interpolation is performed on the incomplete image frame.
Further, the encoding module is further configured to perform inter-frame prediction; when inter-frame prediction is performed, in the second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences.
Further, the encoding module is further configured to perform inter-frame prediction; when inter-frame prediction is performed, in the second video code stream, within the period of one key frame and excluding the sub-frame sequence whose 1st frame is an I frame, if a group of sub-frame sequences is located at an odd position, its 1st frame refers to the 1st frame of the previous group of sub-frame sequences located at an odd position, and if a group of sub-frame sequences is located at an even position, its 1st frame refers to the 1st frame of the previous group of sub-frame sequences.
Furthermore, for any mth frame in the group of sub-frame sequences, where 2 ≤ m ≤ n, the mth frame refers to one of the 1st to (m-1)th frames in the group of sub-frame sequences.
Further, for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the previous frame.
Further, for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the 1st frame of that group.
Further, the sending end further includes: the transmission module is used for uploading the second video code stream to the server; the receiving end further includes: and the receiving module is used for receiving the third video code stream transmitted by the server.
Further, the segmentation module further comprises: a labeling unit; the synthesis module further comprises: a judgment unit;
the labeling unit is used for adding labeling information to the second video code stream, and the labeling information is used for indicating the coding mode of the second video code stream;
the judging unit is used for receiving the labeling information;
and the decoding module determines a decoding mode according to the labeling information received by the judging unit.
Further, the splitting unit splits the image frame of the original video stream into 2 interleaved sub-image frames, and the encoding module is an SVC temporal scalable encoder.
Further, the splitting unit splits the image frame of the original video stream into 2 interleaved sub-image frames by taking alternate rows or alternate columns; or splits the image frame of the original video stream into 4 interleaved sub-image frames by taking alternate rows and alternate columns; or splits the image frame of the original video stream into 2 interleaved sub-image frames in a criss-cross (checkerboard) manner.
Further, the splitting unit divides the image frame of the original video stream into a plurality of image blocks with a resolution of 3 × 2, marks the pixels in the second column of each image block as first pixels, marks the first pixel of the first row and the third pixel of the second row of each image block as second pixels, and marks the third pixel of the first row and the first pixel of the second row of each image block as third pixels; the set of all first pixels forms the first sub-image frame, the set of all second pixels forms the second sub-image frame, and the set of all third pixels forms the third sub-image frame.
In a second aspect, the present invention provides a video transmission method, including the following steps:
S1: splitting an image frame of an original video stream into n interlaced sub-image frames, where n ≥ 2;
S2: taking one frame of the original video stream as a unit, inserting the n sub-image frames split from the same image frame, in a preset order, into the time interval of that frame of the original video stream to form a group of sub-frame sequences;
S3: arranging the groups of sub-frame sequences according to the time order of the original video stream and combining them to form a second video stream, wherein the frame rate of the second video stream is n times that of the original video stream;
S4: encoding the second video stream to form a second video code stream;
S5: uploading the second video code stream to a server;
S6: the server unpacks and repackages the second video code stream according to the bandwidth to form a third video code stream in which k of the n sub-image frames of each sub-frame sequence are retained, where k ≤ n, and transmits the third video code stream;
S7: receiving and decoding the third video code stream to form a third video stream;
S8: restoring the sub-image frames belonging to the same sub-frame sequence in the third video stream into one image frame according to their original split positions and forming the original video stream, wherein if a restored image frame is incomplete, spatial interpolation is performed on the incomplete image frame.
Further, the encoding process of step S4 further includes the following steps:
S41: performing inter-frame prediction, wherein, in the second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences.
Further, the encoding process of step S4 further includes the following steps:
S42: performing inter-frame prediction, wherein, in the second video code stream, within the period of one key frame and excluding the sub-frame sequence whose 1st frame is an I frame, if a group of sub-frame sequences is located at an odd position, its 1st frame refers to the 1st frame of the previous group of sub-frame sequences located at an odd position, and if a group of sub-frame sequences is located at an even position, its 1st frame refers to the 1st frame of the previous group of sub-frame sequences.
Further, the encoding process of step S4 further includes the following steps:
S43: for any mth frame in a group of sub-frame sequences, where 2 ≤ m ≤ n, the mth frame refers to one of the 1st to (m-1)th frames.
Further, step S43 further includes the following steps:
S431: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the previous frame.
Further, step S43 further includes the following steps:
S432: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the 1st frame of that group.
Further, the encoding module is capable of performing intra prediction.
Further, step S4 further includes a labeling step, and step S7 further includes a judging step before decoding;
the labeling step is to add labeling information to the second video code stream, the labeling information indicating the coding mode of the second video code stream;
the judging step is to determine a decoding mode according to the labeling information.
Further, in step S1, the image frame of the original video stream is split into 2 interleaved sub-image frames by taking alternate rows or alternate columns; or into 4 interleaved sub-image frames by taking alternate rows and alternate columns; or into 2 interleaved sub-image frames in a criss-cross (checkerboard) manner.
Further, in step S1, the image frame of the original video stream is divided into a plurality of image blocks with a resolution of 3 × 2, the pixels in the second column of each image block are marked as first pixels, the first pixel of the first row and the third pixel of the second row of each image block are marked as second pixels, and the third pixel of the first row and the first pixel of the second row of each image block are marked as third pixels; the set of all first pixels is the first sub-image frame, the set of all second pixels is the second sub-image frame, and the set of all third pixels is the third sub-image frame.
The invention has the following beneficial effects:
1. A code stream is selected according to the available bandwidth, and spatial interpolation is performed at the receiving end to recover the original image, so network packet loss is avoided and the fluency of video communication is maintained;
2. several different P-frame reference modes are used to prevent excessive color deviation in the recovered video stream;
3. standard codecs are used, and the network does not need to be modified;
4. a code stream suited to the current bandwidth is selected for transmission, making maximum use of the bandwidth.
Drawings
FIG. 1 is a general schematic diagram of a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an interlaced splitting scheme of the present invention;
FIG. 3 is a schematic diagram of the 2 × 2 splitting mode of the present invention;
FIG. 4 is a schematic diagram of the 3 × 2 splitting mode of the present invention;
FIG. 5 is a schematic diagram of the criss-cross splitting of the present invention;
FIG. 6 is a schematic diagram of the frame interpolation of the 2 × 2 split mode according to the present invention;
FIG. 7 is a schematic diagram of the transmission of the 2 × 2 split mode of the present invention;
FIG. 8 is a schematic diagram of the 2 × 2 splitting scheme of the present invention;
FIG. 9 is a diagram of an inter-frame prediction reference according to the present invention;
FIG. 10 is a diagram of another inter prediction reference of the present invention;
FIG. 11 is a schematic flow chart of the second embodiment of the present invention;
fig. 12 is a flowchart illustrating step S4 according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
As shown in fig. 1, the present invention discloses a video transmission system, which includes: a sending end 10, a server 20 and one or more receiving ends 30.
Wherein, the transmitting end 10 includes: a segmentation module 11, an encoding module 12 and a transmission module 13.
The segmentation module 11 includes: a splitting unit 111.
The splitting unit 111 receives the original video stream from the video source 40. The splitting unit 111 is configured to split each image frame of the original video stream into n interlaced sub-image frames, where n ≥ 2; taking one frame of the original video stream as a unit, the n sub-image frames split from the same image frame are inserted, in a preset order, into the time interval of that frame of the original video stream to form a group of sub-frame sequences; the groups of sub-frame sequences are arranged according to the time order of the original video stream and combined to form a second video stream, whose frame rate is n times the frame rate of the original video stream.
The encoding module 12 is configured to encode the second video stream to form a second video code stream.
The transmission module 13 is configured to upload the second video code stream to the server 20.
The server 20 is configured to unpack and repackage the second video code stream according to the bandwidth between the server 20 and the receiving end 30, to form a third video code stream in which k of the n sub-image frames of each sub-frame sequence are retained, where k ≤ n, and to transmit the third video code stream to the receiving end.
The receiving end 30 includes: a receiving module 31, a synthesizing module 32 and a decoding module 33.
The receiving module 31 is configured to receive the third video code stream transmitted by the server 20.
The decoding module 33 is configured to decode the third video code stream to form a third video stream.
The synthesizing module 32 is configured to restore the sub-image frames belonging to the same sub-frame sequence in the third video stream into one image frame according to their original split positions and to form the original video stream; if a restored image frame is incomplete, spatial interpolation is performed on the incomplete image frame.
As shown in fig. 2, the splitting unit 111 splits the image frame of the original video stream into 2 interleaved sub-image frames by taking alternate rows, namely sub-image frame 1 containing the odd rows and sub-image frame 2 containing the even rows; assuming that the resolution of the original video stream is 1920 × 1080, the resolution of both sub-image frame 1 and sub-image frame 2 is 1920 × 540. Of course, the splitting unit 111 may also split by taking alternate columns to form sub-image frames of the odd columns and the even columns, in which case the resolution of the two sub-image frames is 960 × 1080.
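The row-interlaced split described above can be pictured with a short sketch. This is an illustrative example only, not part of the patent; the NumPy array layout (height × width × channels) and the function name are assumptions made for the illustration.

```python
# A minimal sketch of the alternate-row split: one 1080p frame becomes two
# 1920 x 540 sub-image frames (odd rows and even rows, counted from 1).
import numpy as np

def split_interlaced_rows(frame: np.ndarray):
    """Split an H x W x C frame into (odd-row sub-frame, even-row sub-frame)."""
    sub1 = frame[0::2]   # rows 1, 3, 5, ... of the original frame
    sub2 = frame[1::2]   # rows 2, 4, 6, ...
    return sub1, sub2

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for one image frame
odd_rows, even_rows = split_interlaced_rows(frame)
assert odd_rows.shape == (540, 1920, 3) and even_rows.shape == (540, 1920, 3)
```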
Fig. 3 illustrates another splitting manner of the splitting unit 111: the splitting unit 111 splits the image frame of the original video stream into 4 interleaved sub-image frames by taking alternate rows and alternate columns, forming sub-image frame 1 from the upper-left pixels, sub-image frame 2 from the upper-right pixels, sub-image frame 3 from the lower-left pixels and sub-image frame 4 from the lower-right pixels of each 2 × 2 block; assuming that the resolution of the original video stream is 1920 × 1080, the resolution of sub-image frames 1-4 is 960 × 540.
As shown in fig. 4, another splitting manner of the splitting unit 111 divides the image frame of the original video stream into a plurality of image blocks with a resolution of 3 × 2, marks the pixels in the second column of each image block as first pixels, marks the first pixel of the first row and the third pixel of the second row of each image block as second pixels, and marks the third pixel of the first row and the first pixel of the second row of each image block as third pixels; the set of all first pixels forms the first sub-image frame, the set of all second pixels forms the second sub-image frame, and the set of all third pixels forms the third sub-image frame. Of course, following the same idea, the image frame of the original video stream may also be divided into a plurality of image blocks with a resolution of 2 × 3.
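A possible reading of the 3 × 2 block split is sketched below. It is an assumption-laden illustration rather than the patent's implementation, and it presumes the frame width is a multiple of 3 and the height a multiple of 2.

```python
# Sketch of the 3 x 2 block split: each block of 3 columns x 2 rows contributes
# one pixel per row to each of the three sub-image frames, giving three
# 640 x 1080 sub-frames from a 1920 x 1080 source.
import numpy as np

def split_3x2(frame: np.ndarray):
    h, w, c = frame.shape
    subs = [np.empty((h, w // 3, c), frame.dtype) for _ in range(3)]
    offsets = [(1, 1),   # sub-frame 1: second column of each block, both rows
               (0, 2),   # sub-frame 2: first column in row 1, third column in row 2
               (2, 0)]   # sub-frame 3: third column in row 1, first column in row 2
    for k, (row1_off, row2_off) in enumerate(offsets):
        subs[k][0::2] = frame[0::2, row1_off::3]   # rows 1, 3, 5, ... of the frame
        subs[k][1::2] = frame[1::2, row2_off::3]   # rows 2, 4, 6, ...
    return subs
```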
Figs. 2-4 only show the alternate-row splitting manner and the 2 × 2 splitting manner as examples; a person skilled in the art may adopt, according to actual needs, a 4 × 2 splitting manner (2 rows and 4 columns, 8 sub-image frames), a 3 × 3 splitting manner (3 rows and 3 columns, 9 sub-image frames), and so on. As shown in fig. 5, the image frame of the original video stream may also be split into 2 interleaved sub-image frames in a criss-cross manner, like the squares of a checkerboard.
As shown in fig. 6, the image frames of the original video stream are split using the aforementioned 2 × 2 splitting manner. Taking one frame of the original video stream as a unit, the 4 sub-image frames A1-A4 split from the same image frame A are inserted, in a preset order, into the time interval of that frame of the original video stream, i.e. the time interval between the A frame and the B frame, forming the sub-frame sequence A1A2A3A4 for the A frame. The preset insertion order of sub-image frames A1-A4 in fig. 6 is 1-2-3-4, but a person skilled in the art may use other orders such as 1-3-2-4 as needed. The B and C frames are split and inserted in the same way, forming sub-frame sequences for the B and C frames, until all the image frames of the original video stream have been split.
After all image frames of the original video stream have been split and inserted, a plurality of groups of sub-frame sequences are formed; the groups of sub-frame sequences are then arranged according to the time order of the original video stream to form the second video stream, i.e., as shown in fig. 6, a second video stream with the sequence A1A2A3A4B1B2B3B4C1C2C3C4 … . Assuming that the resolution of the original video stream is 1920 × 1080 and the frame rate is 30 fps, each original image frame becomes 4 frames after processing by the splitting unit 111 and the frame rate becomes 4 times the original, so the second video stream has a resolution of 960 × 540 and a frame rate of 120 fps. If the split is performed by alternate rows, the resolution is 1920 × 540 and the frame rate is 60 fps; if by alternate columns, the resolution is 960 × 1080 and the frame rate is also 60 fps; if the splitting scheme shown in fig. 4 is adopted, the resolution is 640 × 1080 and the frame rate is 90 fps.
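The formation of the second video stream can be summarized with a small sketch. It is illustrative only: frames are represented as plain Python objects, and the function names and the list representation of the stream are assumptions.

```python
# Sketch of the steps that build the second video stream: split every frame into
# n sub-frames and insert them, in the preset order, into that frame's time slot.
def build_second_stream(original_frames, split_fn, order=None):
    second_stream = []
    for frame in original_frames:
        subs = split_fn(frame)                        # one group: the sub-frame sequence of this frame
        idx = order if order is not None else range(len(subs))
        second_stream.extend(subs[i] for i in idx)    # preset order, e.g. 1-2-3-4
    return second_stream                              # n times as many frames, so n times the frame rate
```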
After the second video stream is formed, the encoding module encodes it to form the second video code stream, and the transmission module uploads the second video code stream to the server. The server unpacks and repackages the second video code stream according to the bandwidth between the server and each receiving end to form a third video code stream, and then transmits the third video code stream to that receiving end. Depending on this bandwidth, the server keeps the second video code stream unchanged or reduces its size. For example, suppose a second video stream with a resolution of 960 × 540 and a frame rate of 120 fps is formed and encoded into a code stream with the same resolution and frame rate. As shown in fig. 7, if the bandwidth between the second receiving end and the server is not enough to transmit the 120 fps code stream, then, to avoid packet loss, a code stream half the size of the second video code stream can be transmitted instead: based on the timestamps, the server keeps only the sub-image frames with sequence numbers 1 and 3 in each group of sub-frame sequences, i.e. k = 2, to form the third video code stream. The sequence of the third video stream is then A1A3B1B3C1C3 … , with a resolution of 960 × 540 and a frame rate reduced to half of the original, i.e. 60 fps. Similarly, if the third receiving end can only receive a code stream one quarter the size of the second video code stream, the server unpacks and repackages the second video code stream into a third video code stream with a resolution of 960 × 540 and a frame rate of 30 fps. In addition, if the bandwidth between the server and a receiving end is sufficient to transmit the second video code stream, as shown in fig. 7, the second video code stream is transmitted directly as the third video code stream.
It should be noted that, in addition to retaining the sub-image frames with sequence numbers 1 and 3 of each sub-frame sequence by means of timestamps as described above, a person skilled in the art may adopt other unpacking and repackaging methods to retain sub-image frames. In addition, retaining the sub-image frames with sequence numbers 1 and 3 is only an illustrative example, and other combinations, such as 2 and 4, may also be retained. Moreover, to keep the algorithm of the subsequent spatial interpolation step simple, three levels are usually set for the 2 × 2 splitting manner shown in fig. 7, namely 100%, 50% and 25%, i.e. 4, 2 or 1 sub-image frames are retained respectively; if a better output picture is desired, a 75% level can be added, with a corresponding frame rate of 90 fps.
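The server-side thinning can be sketched as selecting positions inside each group of sub-frame sequences. This is a simplified illustration: it works on a plain list of already-separated sub-frames, whereas the patent describes unpacking and repackaging the coded stream by timestamp.

```python
# Sketch of keeping k of the n sub-frames of every group (e.g. positions 1 and 3
# of a 2 x 2 split for the 50% level). Positions are 0-based here.
def thin_stream(subframes, n, keep_positions):
    return [sf for idx, sf in enumerate(subframes) if idx % n in keep_positions]

# 50% level of the 2 x 2 split: keep sub-frames 1 and 3 of each group
# third_stream = thin_stream(second_stream, n=4, keep_positions={0, 2})
```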
As shown in fig. 1, after receiving the third video code stream transmitted by the server 20, the receiving module 31 of the receiving end passes it to the decoding module 33 for decoding, forming the third video stream. The synthesizing module 32 restores the sub-image frames belonging to the same sub-frame sequence in the third video stream into one image frame according to their original split positions and forms the original video stream; if a restored image frame is incomplete, spatial interpolation is performed on it. Continuing the example of fig. 7, if the current receiving end is the first receiving end, its decoding module decodes the third video code stream to obtain, as shown in fig. 8, a third video stream (100%) with a resolution of 960 × 540 and a frame rate of 120 fps; no sub-image frames have been dropped relative to the second video stream, so the synthesizing module 32 directly restores the sub-image frames of the third video stream into the image frames of the original video stream according to their original split positions, forming the original video stream. If the current receiving end is the second receiving end, its decoding module decodes the third video code stream to obtain, as shown in fig. 8, a third video stream (50%) with a resolution of 960 × 540 and a frame rate of 60 fps, whose sequence is A1A3B1B3C1C3 … . When the synthesizing module 32 reassembles these sub-image frames according to their original split positions, only the odd columns of each image frame of the original video stream are obtained, i.e. a resolution of 960 × 1080; each restored image frame is therefore incomplete, and spatial interpolation is performed on the even columns to recover the image frames of the original video stream, i.e. the incomplete image frames are restored by spatial interpolation into the A, B, C and subsequent frames of the original video stream, which then form the original video stream. It should be noted that the restored original video stream has the same resolution and frame rate as the original video stream from the video source, i.e. the video stream finally output by the receiving end is still 1920 × 1080 at 30 fps.
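For the second receiving end above, a simple recombination plus column interpolation could look like the sketch below. It is an assumption-based illustration: a plain average of the neighbouring columns stands in for whatever spatial interpolation an implementation actually uses.

```python
# Sketch: rebuild a 1920 x 1080 frame from the two kept sub-frames of a 2 x 2
# split (sub-frames 1 and 3 together hold the odd columns, counted from 1) and
# fill the missing columns by averaging their left and right neighbours.
import numpy as np

def restore_half_columns(sub1, sub3):
    h, w, c = sub1.shape[0] * 2, sub1.shape[1] * 2, sub1.shape[2]
    full = np.zeros((h, w, c), dtype=np.float32)
    full[0::2, 0::2] = sub1                     # decoded pixels: odd rows of the odd columns
    full[1::2, 0::2] = sub3                     # decoded pixels: even rows of the odd columns
    full[:, 1:-1:2] = (full[:, 0:-2:2] + full[:, 2::2]) / 2   # interpolate missing columns
    full[:, -1] = full[:, -2]                   # last column copies its neighbour
    return full.astype(np.uint8)
```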
In one embodiment, the encoding module is capable of performing inter-frame prediction; when inter-frame prediction is performed, in the second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences.
Inter-frame prediction can effectively compress a video stream, but if the second video stream shown in fig. 6 is inter-frame prediction encoded with an ordinary encoding module, each frame can only refer to the previous frame: the A2 frame refers to the A1 frame, the A3 frame to the A2 frame, the A4 frame to the A3 frame, the B1 frame to the A4 frame, and so on. In fact, the B1 frame and the A1 frame occupy the same positions in the original image frames, but B1 would be predicted from the A2-A4 frames, which occupy different positions, so the color deviation can be large. In addition, since the server may retain only some of the sub-image frames according to the bandwidth, if inter-frame prediction is performed in the ordinary way, the image frames referenced by the retained sub-image frames may not themselves be retained, leaving the retained sub-image frames with nothing to reference. Therefore, as shown in fig. 9, in the second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences. In fig. 9, the A1 frame is an I frame and does not need to reference any frame; B1 and C1 are the 1st frames of the B group and C group of sub-frame sequences respectively, and both are P frames, so the B1 frame refers to the 1st frame of the previous group, i.e. the A1 frame, and the C1 frame refers to the 1st frame of the previous group, i.e. the B1 frame. In this way the 1st frame of each group of sub-frame sequences is corrected, so color deviation can be effectively avoided.
Fig. 10 shows another method of referencing image frames between groups of sub-frame sequences. In the second video code stream, within the period of one key frame and excluding the sub-frame sequence whose 1st frame is an I frame, the 1st frame of a group of sub-frame sequences at an odd position refers to the 1st frame of the previous group of sub-frame sequences at an odd position, and the 1st frame of a group of sub-frame sequences at an even position refers to the 1st frame of the previous group of sub-frame sequences at an odd position. In fig. 10, the groups of sub-frame sequences are arranged in the order A A' B B' C C' and all lie within one GOP (Group of Pictures) of a key frame; usually the period of each key frame contains at least one I frame as its first frame. A, B and C are groups of sub-frame sequences at odd positions, and A', B' and C' are groups of sub-frame sequences at even positions. The A1 frame of group A is an I frame and needs no reference; the B1 and C1 frames of the odd-position groups B and C are P frames, so the B1 frame is made to reference the 1st frame of the previous odd-position group, i.e. the A1 frame, and the C1 frame is made to reference the 1st frame of the previous odd-position group, i.e. the B1 frame. The 1st frames A'1, B'1 and C'1 of the even-position groups A', B' and C' are all P frames, so each of them references the 1st frame of the previous odd-position group, i.e. A'1 references A1, B'1 references B1, and C'1 references C1. It must be noted that the odd and even positions referred to here are the positions of a group of sub-frame sequences within the period of one key frame, not its position in the whole second video stream.
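The group-level reference rule of fig. 10 can be written down compactly. The sketch below is illustrative only; it merely computes which group's 1st frame is referenced, using 1-based group indices within one key-frame period, and assumes the first group starts with the I frame.

```python
# Sketch of the fig. 10 rule for the 1st frame of each group of sub-frame
# sequences inside one GOP (groups indexed 1, 2, 3, ... within the GOP).
def first_frame_reference(group_index: int):
    if group_index == 1:
        return None                    # group A starts with the I frame: no reference
    if group_index % 2 == 1:
        return group_index - 2         # odd-position group -> previous odd-position group
    return group_index - 1             # even-position group -> the preceding (odd) group

# For groups A A' B B' C C' (indices 1..6) this yields [None, 1, 1, 3, 3, 5],
# i.e. A'1 -> A1, B1 -> A1, B'1 -> B1, C1 -> B1, C'1 -> C1.
```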
With the reference scheme shown in fig. 10, within the period of one key frame, the 1st frame of each odd-position group of sub-frame sequences serves as the reference for the 1st frames of the following two groups, which effectively prevents serious color deviation in those groups. Moreover, with the method shown in fig. 9, if the 1st frame of a group of sub-frame sequences at an even position is corrupted, all following image frames are affected by the error. With the scheme of fig. 10, since the 1st frame of an odd-position group references the 1st frame of the previous odd-position group, an error in the 1st frame of an even-position group does not affect the subsequent groups of sub-frame sequences, which improves the stability of the video.
For each group of sub-frame sequences, calibrating the 1st frame in this way makes its color more accurate, so it can in turn be used as the reference frame for the rest of that group of sub-frame sequences to further reduce color deviation.
As shown in fig. 9 or 10, there are two ways of referencing within a group of sub-frame sequences. In the first, shown by the solid arrows in fig. 9 or 10, every data frame other than the 1st frame references the data frame of the previous frame; in the second, shown by the dashed arrows, every data frame other than the 1st frame references the 1st frame of the group. The solid and dashed arrows in fig. 9 or 10 are only examples, and the two ways can of course be mixed, for example frame 2 references frame 1, frame 3 references frame 1, and frame 4 references frame 2. In any case, the 1st frame serves as the basis of reference: for any mth frame in the group of sub-frame sequences, where 2 ≤ m ≤ n, the mth frame references one of the 1st to (m-1)th frames.
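The two in-group policies amount to a one-line choice per frame, as in the following illustrative helper (1-based indices; the names are assumptions).

```python
# Sketch of the two in-group reference policies for the mth frame of a group
# of sub-frame sequences (m >= 2): reference the previous frame, or frame 1.
def in_group_reference(m: int, policy: str = "previous") -> int:
    if policy == "previous":
        return m - 1        # solid arrows in fig. 9/10: refer to the previous frame
    if policy == "first":
        return 1            # dashed arrows: refer to the 1st frame of the group
    raise ValueError("unknown policy")
```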
In one embodiment, the encoding module is capable of performing intra prediction.
In one embodiment, the splitting unit splits the image frames of the original video stream into 2 interleaved sub-image frames, and the encoding module is an SVC temporal scalable encoder. An SVC temporal scalable encoder is characterized by odd frames referencing the previous odd frame and even frames referencing the previous odd frame. When there are only two sub-image frames, the 1st frame of every sub-frame sequence is an odd frame in the second video stream and the 2nd frame is an even frame. In that case, the 1st frame of a sub-frame sequence, being an odd frame, references the previous odd frame, which is exactly the 1st frame of the previous sub-frame sequence; and the 2nd frame of each sub-frame sequence references the previous frame, i.e. the even frame references the previous odd frame. Thus, when splitting into 2 interleaved sub-image frames, the functionality described above can also be achieved with an SVC temporal scalable encoder. Of course, there are various ways of splitting into 2 interleaved sub-image frames, such as the alternate-row or alternate-column manner described above, or the criss-cross manner.
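The equivalence claimed here for two sub-image frames can be checked with a tiny sketch. It assumes the frames of the second video stream are numbered from 1 and is only meant to show that the temporal-scalability reference rule matches the scheme described above.

```python
# Sketch: the two-layer temporal-scalability rule described in the text.
def svc_reference(frame_number: int):
    if frame_number == 1:
        return None                  # first frame of the GOP, e.g. the I frame
    if frame_number % 2 == 1:
        return frame_number - 2      # odd frame -> previous odd frame
    return frame_number - 1          # even frame -> previous (odd) frame

# Frame 3, the 1st frame of the 2nd sub-frame sequence, references frame 1,
# the 1st frame of the previous sub-frame sequence; frame 4, the 2nd frame of
# its sequence, references frame 3, the 1st frame of the same sequence.
```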
In one embodiment, the segmentation module 11 further includes a labeling unit 112, and the synthesizing module 32 further includes a judging unit 322. The labeling unit 112 is configured to add labeling information to the second video code stream, the labeling information indicating the coding mode of the second video code stream; the judging unit 322 is configured to receive the labeling information; and the decoding module 33 determines the decoding mode according to the labeling information. This improves the adaptability of the video transmission system of the invention, which can then also be used as an ordinary transmission system that does not split video frames: video encoded in the ordinary way without splitting and video split and encoded with the inter-frame reference methods described above are labeled with different information, and the decoder uses different decoding methods according to that information, so the invention can serve as a general transmission apparatus.
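A minimal sketch of such labeling is given below. The field names and the idea of carrying the label as side information are assumptions introduced for illustration; the patent only requires that the label describes the coding mode and that the decoder selects its decoding mode accordingly.

```python
# Sketch: a label describing how the stream was produced, and a receiver-side
# switch that picks the decoding/recombination mode from it.
from dataclasses import dataclass

@dataclass
class StreamLabel:
    split: bool             # False: ordinary encoding without sub-frame splitting
    n: int = 1              # number of sub-image frames per original frame
    pattern: str = "none"   # e.g. "rows", "columns", "2x2", "3x2", "checkerboard"

def choose_decoding_mode(label: StreamLabel) -> str:
    if not label.split:
        return "standard"                      # decode and display directly
    return f"recombine-{label.pattern}"        # decode, then restore the sub-frames
```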
The invention can select video code streams of different sizes according to the different bandwidths between the server and the receiving ends, and performs spatial interpolation at the receiving end to recover the original image, thereby avoiding network packet loss, maintaining the fluency of video communication and making maximum use of the bandwidth. In addition, the invention uses several different P-frame reference modes, which prevents large color deviation in the recovered video stream. Moreover, the invention uses standard codecs, so the network does not need to be modified.
Example two
Corresponding to the embodiment of the video transmission system in the first embodiment, the present invention further provides a video transmission method, as shown in fig. 11 and 12, the video transmission method includes the following steps:
S1: splitting an image frame of the original video stream to be transmitted into n interlaced sub-image frames, where n ≥ 2;
S2: taking one frame of the original video stream as a unit, inserting the n sub-image frames split from the same image frame, in a preset order, into the time interval of that frame of the original video stream to form a group of sub-frame sequences;
S3: arranging the groups of sub-frame sequences according to the time order of the original video stream and combining them to form a second video stream, wherein the frame rate of the second video stream is n times that of the original video stream;
S4: encoding the second video stream to form a second video code stream;
S5: uploading the second video code stream to a server;
S6: the server unpacks and repackages the second video code stream according to the bandwidth to form a third video code stream in which k of the n sub-image frames of each sub-frame sequence are retained, where k ≤ n, and sends the third video code stream;
S7: receiving and decoding the third video code stream to form a third video stream;
S8: restoring the sub-image frames belonging to the same sub-frame sequence in the third video stream into the image frames of the original video stream according to their original split positions, and forming the original video stream, wherein if a restored image frame is incomplete, spatial interpolation is performed on the incomplete image frame.
Through the above steps, the video transmission method can implement the functions of the system in the first embodiment.
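An end-to-end sketch of steps S1-S8 is shown below. It is purely illustrative: the coded streams are represented as plain lists with one packet per sub-frame, and encode/decode are stubs standing in for a standard codec such as H.264.

```python
# Sketch of the whole method: split, build the second stream, encode, thin on
# the server according to bandwidth, decode, and recombine with interpolation.
def transmit(original_frames, n, split_fn, merge_fn, keep_positions, encode, decode):
    # S1-S3: split each frame into n sub-frames and form the second video stream
    second_stream = [s for f in original_frames for s in split_fn(f)]
    # S4-S5: encode the second video stream and upload it
    second_bitstream = encode(second_stream)
    # S6: the server keeps k of the n sub-frames per group, according to bandwidth
    third_bitstream = [p for i, p in enumerate(second_bitstream) if i % n in keep_positions]
    # S7: receive and decode the third video code stream
    third_stream = decode(third_bitstream)
    # S8: regroup, restore the split positions and interpolate missing pixels
    k = len(keep_positions)
    groups = [third_stream[i:i + k] for i in range(0, len(third_stream), k)]
    return [merge_fn(group) for group in groups]
```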
In one embodiment, the encoding process of step S4 further includes the steps of:
S41: performing inter-frame prediction, wherein, in the second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences.
In one embodiment, the encoding process of step S4 further includes the steps of:
S42: performing inter-frame prediction, wherein, in the second video code stream, within the period of one key frame and excluding the sub-frame sequence whose 1st frame is an I frame, the 1st frame of a group of sub-frame sequences at an odd position refers to the 1st frame of the previous group of sub-frame sequences at an odd position, and the 1st frame of a group of sub-frame sequences at an even position refers to the 1st frame of the previous group of sub-frame sequences at an odd position.
As described in the first embodiment, the two reference schemes of step S4 can effectively prevent color deviation in the restored original video stream.
In one embodiment, the encoding process of step S4 further includes the steps of:
S43: for any mth frame in a group of sub-frame sequences, where 2 ≤ m ≤ n, the mth frame refers to one of the 1st to (m-1)th frames.
In one embodiment, the encoding process of step S43 further includes the steps of:
S431: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the previous frame.
In one embodiment, the encoding process of step S43 further includes the steps of:
S432: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame of the 1st frame of that group.
As described in the first embodiment, the in-group reference step of step S4 can further prevent color deviation in the restored original video stream.
In one embodiment, the encoding module is capable of performing intra prediction.
In one embodiment, step S4 further includes a labeling step, and step S7 further includes a judging step before decoding;
in the labeling step, labeling information is added to the second video code stream, the labeling information indicating the coding mode of the second video code stream;
in the judging step, the decoding mode is determined according to the labeling information.
Adding the labeling step and the judging step improves the adaptability of the method, which can then also be used as an ordinary transmission method that does not split video frames.
In one embodiment, in step S1, the image frame of the original video stream is split into 2 interleaved sub-image frames by taking alternate rows or alternate columns; or into 4 interleaved sub-image frames by taking alternate rows and alternate columns; or into 2 interleaved sub-image frames in a criss-cross (checkerboard) manner. The more sub-image frames the split produces, the more transmission levels are available, which makes it easier to select a suitable code stream size for different bandwidths.
In one embodiment, in step S1, the image frame of the original video stream is first divided into a plurality of image blocks with a resolution of 3 × 2, the pixels in the second column of each image block are marked as first pixels, the first pixel of the first row and the third pixel of the second row of each image block are marked as second pixels, and the third pixel of the first row and the first pixel of the second row of each image block are marked as third pixels; the set of all first pixels is the first sub-image frame, the set of all second pixels is the second sub-image frame, and the set of all third pixels is the third sub-image frame.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any combination that contains no contradiction should be considered within the scope of this specification.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (21)

1. A video transmission system, comprising: a sending terminal, a server and a receiving terminal;
the transmitting end comprises: a partitioning module and an encoding module, the partitioning module comprising: a splitting unit;
the splitting unit is used for splitting an image frame of an original video stream into n interlaced sub-image frames, wherein n is larger than or equal to 2, n sub-image frames split from the same image frame are inserted into a time interval of one frame of the original video stream according to a preset sequence by taking one frame of the original video stream as a unit to form a group of sub-frame sequences, the sequence among the sub-frame sequences is arranged according to the time sequence of the original video stream and combined to form a second video stream, and the frame rate of the second video stream is n times of the frame rate of the original video stream;
the encoding module is used for encoding the second video stream to form a second video code stream;
the server is used for unpacking and packaging the second video code stream according to the bandwidth between the server and the receiving end, forming a third video code stream which contains k sub-image frames in a reserved sub-frame sequence, wherein k is less than or equal to n, and transmitting the third video code stream to the receiving end;
the receiving end includes: a synthesis module and a decoding module;
the decoding module is used for decoding the third video code stream to form a third video stream;
and the synthesis module is used for restoring the sub-image frames belonging to the same sub-frame sequence in the third video stream into a frame image frame according to the original split position relationship and forming an original video stream, wherein if the restored image frame is an incomplete image frame, the incomplete image frame is subjected to spatial interpolation calculation.
2. The video transmission system according to claim 1, wherein said encoding module is further configured to perform inter-frame prediction, and when performing inter-frame prediction, in said second video code stream, if the 1st frame of a group of sub-frame sequences is a P frame, that frame refers to the 1st frame of the previous group of sub-frame sequences.
3. The video transmission system according to claim 1, wherein the encoding module is further configured to perform inter-frame prediction, and when performing inter-frame prediction, in the second video stream, in a period of a key frame, except for a subframe sequence in which the 1 st frame is an I frame, if a group of subframe sequences is located at an odd-numbered position, the 1 st frame of the group of subframe sequences located at the odd-numbered position refers to the 1 st frame of a previous group of subframe sequences located at the odd-numbered position, and if a group of subframe sequences is located at an even-numbered position, the 1 st frame of the group of subframe sequences located at an even-numbered position refers to the 1 st frame of the previous group of subframe sequences.
4. A video transmission system according to claim 2 or 3, wherein for any mth frame in the sequence of subframes of the set, 2 ≦ m ≦ n, the mth frame referring to one of the 1 st to m-1 st frames in the sequence of subframes of the set.
5. The video transmission system according to claim 4, wherein for a set of sub-frame sequences, data frames other than the 1 st frame are referenced to a data frame of a previous frame.
6. The video transmission system of claim 4, wherein for a set of sub-frame sequences, data frames other than frame 1 refer to data frames of frame 1 of the set of sub-frame sequences.
7. The video transmission system according to claim 1, 2, 3, 5 or 6, wherein said transmitting end further comprises: the transmission module is used for uploading the second video code stream to the server; the receiving end further includes: and the receiving module is used for receiving the third video code stream transmitted by the server.
8. The video transmission system of claim 7, wherein the partitioning module further comprises: labeling units; the synthesis module further comprises: a judgment unit;
the marking unit is used for adding marking information to the second video code stream, and the marking information is used for explaining the coding mode of the second video code stream;
the judging unit is used for receiving the marking information;
and the decoding module determines a decoding mode according to the marking information received by the judging unit.
9. The video transmission system of claim 1, wherein the splitting unit splits the image frames of the original video stream into 2 interleaved sub-image frames, and wherein the encoding module is an SVC temporal scalable encoder.
10. The video transmission system according to claim 1, 2, 3, 5 or 6, wherein the splitting unit splits image frames of an original video stream into 2 interleaved sub-image frames in an interlaced or interlaced manner; or, splitting the image frame of the original video stream into 4 interlaced sub-image frames in an interlaced alternate mode; alternatively, the image frames of the original video stream are split into 2 interleaved sub-image frames in a criss-cross manner.
11. The video transmission system according to claim 1, 2, 3, 5 or 6, wherein the splitting unit divides the image frames of the original video stream into a plurality of image blocks with a resolution of 3 × 2, marks the pixels in the second column of each image block as pixels No. one, marks the first pixel in the first row and the third pixel in the second row of each image block as pixels No. two, marks the third pixel in the first row and the first pixel in the second row of each image block as pixels No. three, and combines the set of all pixels No. one into the first sub-image frame, the set of all pixels No. two into the second sub-image frame, and the set of all pixels No. three into the third sub-image frame.
12. A video transmission method, comprising the steps of:
s1: splitting an image frame of an original video stream into n interlaced sub-image frames, wherein n is more than or equal to 2;
s2: inserting n sub-image frames split from the same frame image frame into a time interval of one frame of the original video stream according to a preset sequence by taking one frame of the original video stream as a unit to form a group of sub-frame sequences;
s3: arranging the sequence among the sub-frame sequences according to the time sequence of the original video stream and combining to form a second video stream, wherein the frame rate of the second video stream is n times of that of the original video stream;
s4: coding the second video stream to form a second video code stream;
s5: uploading the second video code stream to a server;
s6: the server unpacks and packages the second video code stream according to the bandwidth to form a third video code stream which contains k sub-image frames in a reserved sub-frame sequence, wherein k is less than or equal to n, and the third video code stream is transmitted;
s7: receiving and decoding the third video code stream to form a third video stream;
s8: and restoring the sub-image frames belonging to the same sub-frame sequence in the third video stream into a frame of image frame according to the original split position relationship, and forming the original video stream, wherein if the restored image frame is an incomplete image frame, spatial interpolation calculation is carried out on the incomplete image frame.
13. The video transmission method according to claim 12, wherein the encoding process of step S4 further comprises the steps of:
s41: and performing inter-frame prediction, wherein when the inter-frame prediction is performed, in the second video code stream, if the 1 st frame of one group of subframe sequences is a P frame, the 1 st frame of the P frame refers to the 1 st frame of the previous group of subframe sequences.
14. The video transmission method according to claim 12, wherein the encoding process of step S4 further comprises the steps of:
s42: and performing inter-frame prediction, wherein in the second video code stream, in a period of a key frame, except that the 1 st frame is a subframe sequence of an I frame, if a group of subframe sequences are located at odd positions, the 1 st frame of the group of subframe sequences located at the odd positions refers to the 1 st frame of a previous group of subframe sequences located at the odd positions, and if a group of subframe sequences are located at even positions, the 1 st frame of the group of subframe sequences located at the even positions refers to the 1 st frame of the previous group of subframe sequences.
15. The video transmission method according to claim 13 or 14, wherein the encoding process of step S4 further comprises the steps of:
S43: for any m-th frame in a group of sub-frame sequences, wherein m is greater than or equal to 2 and less than or equal to n, the m-th frame refers to one of the 1st to (m-1)-th frames.
16. The video transmission method according to claim 15, wherein the step S43 further comprises the steps of:
S431: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the data frame immediately preceding it.
17. The video transmission method according to claim 15, wherein the step S43 further comprises the steps of:
S432: for a group of sub-frame sequences, each data frame other than the 1st frame refers to the 1st frame of that group of sub-frame sequences.
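Claims 13, 16 and 17 fix which earlier frame each frame of the second video stream references during inter-frame prediction. The sketch below (hypothetical Python, indices and names invented) computes a reference index under one reading of those rules; it does not cover the odd/even variant of claim 14:

    from typing import Optional

    def reference_index(frame_idx: int, n: int, within_group: str = "previous") -> Optional[int]:
        """Return the index of the frame referenced by frame `frame_idx` of the
        second video stream, where every group of sub-frame sequences holds n
        frames. One possible reading of the rules:
          * frame 0 is assumed to be an I frame and references nothing;
          * the 1st frame of any later group references the 1st frame of the
            previous group (S41);
          * every other frame references either the immediately preceding frame
            (S431, within_group="previous") or the 1st frame of its own group
            (S432, within_group="first")."""
        group, pos = divmod(frame_idx, n)
        if pos == 0:
            return None if group == 0 else (group - 1) * n
        return frame_idx - 1 if within_group == "previous" else group * n

    # Example with n = 2 and the S432 rule:
    # [reference_index(i, 2, "first") for i in range(6)]  ->  [None, 0, 0, 2, 2, 4]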
18. The video transmission method according to claim 12, 13, 14, 16 or 17, wherein the encoding process of step S4 is capable of performing intra-frame prediction.
19. The video transmission method according to claim 12, 13, 14, 16 or 17, wherein the step S4 further includes a labeling step, and the step S7 further includes a judging step before decoding;
the labeling step adds labeling information to the second video code stream, wherein the labeling information describes the coding mode of the second video code stream;
the judging step determines the decoding mode according to the labeling information.
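Claim 19 attaches labeling information that tells the receiver how the second video code stream was encoded, so the receiver can pick a matching decoding mode. A minimal hypothetical sketch (Python; the header layout and field names are invented for illustration):

    import json

    def label_stream(bitstream: bytes, n: int, split_mode: str, ref_rule: str) -> bytes:
        """S4 labeling step: prepend side information describing the coding mode
        (here as a length-prefixed JSON header)."""
        label = json.dumps({"n": n, "split": split_mode, "ref": ref_rule}).encode("utf-8")
        return len(label).to_bytes(4, "big") + label + bitstream

    def read_label(stream: bytes):
        """S7 judging step: recover the labeling information, from which the
        receiver chooses the matching decoding mode, and return the payload."""
        size = int.from_bytes(stream[:4], "big")
        label = json.loads(stream[4:4 + size].decode("utf-8"))
        return label, stream[4 + size:]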
20. The video transmission method according to claim 12, 13, 14, 16 or 17, wherein in step S1, the image frames of the original video stream are split into 2 interleaved sub-image frames in a row-interleaved or column-interleaved manner; or the image frames of the original video stream are split into 4 interleaved sub-image frames by alternating both rows and columns; or the image frames of the original video stream are split into 2 interleaved sub-image frames in a criss-cross (checkerboard) manner.
21. The video transmission method according to claim 12, 13, 14, 16 or 17, wherein in step S1, the image frames of the original video stream are divided into a plurality of image blocks with a resolution of 3 × 2, the pixels in the second column of each image block are marked as first pixels, the first pixel of the first row and the third pixel of the second row of each image block are marked as second pixels, the third pixel of the first row and the first pixel of the second row of each image block are marked as third pixels, the set of all first pixels forms the first sub-image frame, the set of all second pixels forms the second sub-image frame, and the set of all third pixels forms the third sub-image frame.
CN201911408868.4A 2019-12-31 2019-12-31 Video transmission system and method Active CN111064962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911408868.4A CN111064962B (en) 2019-12-31 2019-12-31 Video transmission system and method

Publications (2)

Publication Number Publication Date
CN111064962A CN111064962A (en) 2020-04-24
CN111064962B (en) 2022-02-15

Family

ID=70305403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911408868.4A Active CN111064962B (en) 2019-12-31 2019-12-31 Video transmission system and method

Country Status (1)

Country Link
CN (1) CN111064962B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770347A (en) * 2020-07-17 2020-10-13 广州市奥威亚电子科技有限公司 Video transmission method and system
CN111770333B (en) * 2020-07-17 2022-05-24 广州市奥威亚电子科技有限公司 Image merging method and system
CN115550688A (en) * 2021-06-30 2022-12-30 华为技术有限公司 Video code stream processing method, medium, program product and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101047860A (en) * 2006-03-27 2007-10-03 华为技术有限公司 Vedio layering coding method at interleaving mode
CN101127918A (en) * 2007-09-25 2008-02-20 腾讯科技(深圳)有限公司 A video error tolerance control system and method
CN101252686A (en) * 2008-03-20 2008-08-27 上海交通大学 Undamaged encoding and decoding method and system based on interweave forecast
CN102724560A (en) * 2012-06-28 2012-10-10 广东威创视讯科技股份有限公司 Method and device for audio data display
CN103636228A (en) * 2011-04-28 2014-03-12 三星电子株式会社 Method and apparatus for adjusting data transmission rate in wireless communication system
CN103916675A (en) * 2014-03-25 2014-07-09 北京工商大学 Low-latency intraframe coding method based on strip division

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2497751B (en) * 2011-12-19 2015-03-04 Canon Kk Method of transmitting video information over a wireless multi-path communication link and corresponding wireless station
US11153578B2 (en) * 2018-04-27 2021-10-19 Ati Technologies Ulc Gradient texturing compression codec

Also Published As

Publication number Publication date
CN111064962A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN111064962B (en) Video transmission system and method
US7693220B2 (en) Transmission of video information
US6081551A (en) Image coding and decoding apparatus and methods thereof
US8179420B2 (en) Minimal decoding method for spatially multiplexing digital video pictures
CN105765980B (en) Transmission device, transmission method, reception device, and reception method
CN1801944B (en) Method and device for coding and decoding video
TWI279742B (en) Method for coding sequences of pictures
EP1811787A2 (en) Picture encoding method and apparatus and picture decoding method and apparatus
CN107995493B (en) Multi-description video coding method of panoramic video
CN103814572B (en) Frame-compatible full resolution stereoscopic 3D compression and decompression
US20100034293A1 (en) Method and apparatus of multi-view coding and decoding
KR101632076B1 (en) Apparatus and method for transmitting stereoscopic image data according to priority
KR20080095833A (en) Method and system for partitioning and encoding of uncompressed video for transmission over wireless medium
CN1250193A (en) Motion image specialist group data stream switching method
CN1951119A (en) Method and apparatus enabling fast channel change for DSL system
EP2932711B1 (en) Apparatus and method for generating and rebuilding a video stream
JP2012010066A (en) Transmitter, receiver and communication system
CN102804791A (en) Reception device, transmission device, communication system, method for controlling reception device, and program
CN113630597B (en) Method and system for preventing video from losing packets irrelevant to encoding and decoding
CN111770347A (en) Video transmission method and system
US10477246B2 (en) Method for encoding streams of video data based on groups of pictures (GOP)
KR20080022071A (en) Data transmission method for improving video packet loss resilience and system using the data transmission method
US9001892B2 (en) Moving image encoder and moving image decoder
EP1719343A1 (en) Transmission of video information
Zare et al. Self-contained slices in H.264 for partial video decoding targeting 3D light-field displays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant