Embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings.
Embodiment 1
In the block diagram of the video decoder that uses this video encoding/decoding method, a variable-length decoder 52 applies variable-length decoding to the coded signal Str and outputs frequency-coded components DCoef. An inverse zigzag scanning unit 54 rearranges the frequency components of the frequency-coded components DCoef into two-dimensional blocks and outputs frequency components FCoef in block units. An inverse cosine transform unit 56 applies dequantization and an inverse DCT to the frequency components, and outputs a difference image DifCoef.
A motion compensator 60 reads the pixels at the position indicated by an externally input motion vector MV from the reference image Ref accumulated in a memory 64, outputs them as a motion-compensated image MCpel, and also outputs a motion compensation block size MCsize denoting the size of the motion compensation block. An adder 58 adds the difference image DifCoef and the motion-compensated image MCpel, and outputs a reconstructed image Coef.
A de-blocking filter 62 receives the reconstructed image Coef, the motion compensation block size MCsize, and the difference image DifCoef, applies coding distortion removal, and then outputs the decoded picture signal Vout. The reconstructed image Coef is stored in the memory 64, and is used as the reference image Ref for decoding the next picture.
Fig. 2 is a block diagram of the de-blocking filter 62 (also referred to as a coding distortion removal unit) using the coding distortion removal method according to the present invention. This de-blocking filter 62 was created by the present inventors with reference to the de-blocking filter described in ITU-T recommendation H.26L TML8.
A filtered pixel count controller 4 determines the pixel positions containing coding distortion in each reconstructed image Coef, and then outputs a filtered pixel count FtrPel. The filtered pixel count FtrPel denotes the pixel positions that need filtering.
A filter coefficient controller 6 uses the filtered pixel count FtrPel and the reconstructed image Coef to determine the filter coefficients (including the number of filter taps) suited to removing coding distortion from the specified pixels, and then outputs filter coefficients FtrTap.
A filter processor 8 applies filtering using the filter coefficients indicated by FtrTap to remove coding distortion from the reconstructed image Coef, and outputs the decoded picture signal Vout.
The difference image DifCoef and the motion compensation block size MCsize are input to a motion compensation block boundary detection unit 2. The motion compensation block boundary detection unit 2 determines whether the difference image DifCoef of the processed block is less than or equal to a specified value (for example, whether it is 0), detects the boundaries of the motion compensation block, and then outputs a motion compensation block boundary flag IsEdge.
Fig. 3 shows examples of the motion compensation block sizes used in ITU-T recommendation H.26L TML8. As shown in these examples, the maximum motion compensation block size is 16 x 16 pixels, the same size as what is called a macroblock. The motion compensation block sizes shown in Fig. 3 (a) to (g) are 4 x 4, 4 x 8, 8 x 4, 8 x 8, 8 x 16, 16 x 8, and 16 x 16 pixels. In ITU-T recommendation H.26L TML8, the size suited to each macroblock unit is selected from these seven motion compensation block sizes and then used for encoding and decoding. It should be noted that encoding and decoding can also be applied in units of two vertically adjacent macroblocks; such a unit of macroblocks is called a "macroblock pair".
The unit used for frequency transformation and coding in ITU-T recommendation H.26L TML8 is 4 x 4 pixels. This 4 x 4 pixel unit is called a "coding unit". As shown in Fig. 3 (a), each of the 16 blocks A to P is a block of 4 x 4 pixels. The 4 x 4 pixel coding unit matches the size of the motion compensation block only in the case shown in Fig. 3 (a). Because coding distortion appears as visually distinctive block distortion at the minimum coding unit size, the conventional coding distortion removal method always processes 4 x 4 pixel units.
If the correlation between pictures is particularly strong, the motion-compensated coding error between pictures is 0. In this case, because the difference image DifCoef encoded and decoded in 4 x 4 pixel units is also 0, pixel value discontinuities caused by coding distortion cannot occur in the encoding and decoding process anywhere except at motion compensation block boundaries. Therefore, if the motion compensation block shown in Fig. 3 (b) is selected, the 4 x 4 pixel unit boundaries indicated by the broken lines between blocks AC, BD, EG, FH, IK, JL, MO, and NP in Fig. 3 (a) do not require coding distortion removal processing. The 4 x 4 pixel unit boundaries indicated by the broken lines between blocks AB, CD, EF, GH, IJ, KL, MN, and OP in Fig. 3 (a) likewise do not require de-blocking. If the difference image DifCoef encoded and decoded in 4 x 4 pixel units is also 0, de-blocking should be applied only at the motion compensation block boundaries, and not at the boundaries of the 4 x 4 pixel units inside the motion compensation block. Compared with de-blocking all block boundaries, this reduces the number of operations in the coding distortion removal process.
If the difference image DifCoef of the processed block is 0 and the boundary is not a motion compensation block boundary, the motion compensation block boundary detection unit 2 sets both selectors 10a and 10b to off (shown by the solid lines), and selector 10b outputs the reconstructed image Coef as the decoded picture signal Vout. Selectors 10a and 10b are switched by setting the motion compensation block boundary flag IsEdge. Turning selectors 10a and 10b off omits the processing of the filtered pixel count controller 4, the filter coefficient controller 6, and the filter processor 8. In all other cases, selectors 10a and 10b operate (shown by the broken lines), and the output of the filter processor 8 is output from selector 10b as the decoded picture signal Vout. This selector state is also set by applying the motion compensation block boundary flag IsEdge.
By applying an appropriately set motion compensation block boundary flag IsEdge, the present invention thus introduces the ability to omit the operation of the filtered pixel count controller 4, the filter coefficient controller 6, and the filter processor 8; skipping this processing accelerates these units and reduces power consumption.
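The skip decision above can be sketched as follows. This is a minimal illustration, not the patented implementation: `dif_coef_is_zero` stands in for the per-block test on DifCoef, and `filter_block` is a hypothetical stand-in for the chain of units 4, 6, and 8.

```python
def deblock_block(coef, dif_coef_is_zero, on_mc_boundary, filter_block):
    """Selector logic of Fig. 2: when the block's difference image is 0
    and the boundary is not a motion compensation block boundary
    (IsEdge off), skip units 4, 6, and 8 and pass Coef through."""
    is_edge = on_mc_boundary or not dif_coef_is_zero
    if not is_edge:            # selectors 10a/10b off: Coef becomes Vout
        return coef
    return filter_block(coef)  # selectors on: output of filter processor 8
```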
It should be noted that although this embodiment has been described as simply not applying any coding distortion removal processing, a simple coding distortion removal process can be used instead of skipping, and it is also possible to switch between a complex coding distortion removal process and coding distortion removal processing in 4 x 4 pixel units.
Embodiment 2
The specific process of coding distortion removal according to the coding distortion removal method of the present invention can be easily implemented in this embodiment, described below with reference to the flow chart shown in Fig. 4.
First, step S18 determines whether the target block is a coding distortion removal block. If it is, control proceeds to step S19. If not, control proceeds to step S24.
In step S19 a suitable coding distortion removal filter is selected, in step S20 the coding distortion removal process is applied using the selected filter, and in step S21 the target pixel is then changed to the next unprocessed pixel in the block. If there are no unprocessed pixels left in the block (step S22 returns no), control proceeds to step S24; if there are unprocessed pixels (step S22 returns yes), control loops back to step S19 and the process repeats.
Step S24 detects whether there is another unprocessed block in the picture. If there is, control proceeds to step S23. If all blocks have been processed (step S24 returns no), the coding distortion removal process for the picture ends.
If there is an unprocessed block, the target block is changed to the next unprocessed block in step S23, and control loops back to step S18 to repeat the process.
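The control flow of Fig. 4 can be sketched as below. The helper names `is_removal_block`, `select_filter`, and `remove` are hypothetical stand-ins for steps S18 to S20, which the flow chart leaves as abstract operations.

```python
def deblock_picture(blocks, is_removal_block, select_filter, remove):
    """Fig. 4 control flow: for each block in the picture (steps S23/S24),
    if it is a coding distortion removal block (step S18), select a
    filter and apply it to every pixel in the block (steps S19-S22)."""
    for block in blocks:                  # steps S18, S23, S24
        if not is_removal_block(block):
            continue
        for pixel in block:               # steps S21, S22
            f = select_filter(pixel)      # step S19
            remove(pixel, f)              # step S20
```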
Fig. 6 is a flow chart showing how the number of pixels to filter (below, the "filtered pixel count") is determined in the coding distortion removal method of the present invention. This flow chart describes an example of the operation of the filtered pixel count controller 4 shown in Fig. 2. Fig. 6 shows the case in which the motion compensation block is as shown in Fig. 8 (a). As shown in Fig. 8 (b), the target pixel values for coding distortion removal are p3, p2, p1, p0, q0, q1, q2, q3, and the pixel values after coding distortion removal are P3, P2, P1, P0, Q0, Q1, Q2, Q3. These pixel values are arranged in the same order as the pixel positions; p0 to p3 and P0 to P3 denote corresponding pixels in the same block, as do q0 to q3 and Q0 to Q3.
As the quantization parameter QP increases, the quantization step becomes larger (coarser) and the coding distortion also increases. It is therefore effective to change the filter according to the size of the quantization parameter QP. Fig. 5 is a table showing the correlation between the quantization parameter QP and the coding distortion removal parameters. Table 1 below shows the correlation between the de-blocking parameters П and Ω and the parameter n used to determine the filtered pixel count. It should be noted that if the pixel difference is large, the filter should not be applied because the difference represents an edge; П is therefore preferably set so that the filter is not applied to pixels whose difference is not less than П. In addition, as the pixel difference decreases, the probability that the pixel lies on an edge decreases; Ω is therefore preferably set so that a stronger filter (that is, a higher n) is applied when the pixel difference is extremely small (less than Ω) than when it is only somewhat small (less than 2 x Ω).
Table 1
Condition A        Condition B            n
dif1a > П          dif2a < Ω              0
dif1a > П          Ω ≤ dif2a ≤ 2 x Ω      0
dif1a > П          dif2a ≥ 2 x Ω          0
dif1a ≤ П          dif2a < Ω              2
dif1a ≤ П          Ω ≤ dif2a ≤ 2 x Ω      1
dif1a ≤ П          dif2a ≥ 2 x Ω          0
where dif1 = p0 - q0
      dif2 = p1 - q1
      dif1a = |dif1|
      dif2a = |dif2|
In other words, Table 1 summarizes the flow of determining the filtered pixel count in the coding distortion removal method of the present invention.
Step S27 calculates the pixel difference DifPel, a parameter that would otherwise be calculated repeatedly in the coding distortion removal process. Note that the pixel difference DifPel refers to dif1a and dif2a calculated in step S27.
Next, dif1a and П are compared in step S28. If dif1a is greater than П, n = 0 is set in step S29, the coding distortion removal process is not run, and the process ends. If dif1a is less than or equal to П, control proceeds to step S30.
dif2a and Ω are compared in step S30. If dif2a is less than Ω, n = 2 is set in step S31 (in other words, coding distortion removal is applied up to the second pixel from the boundary of each adjacent block), and the process then ends. If dif2a is greater than or equal to Ω, control proceeds to step S32.
In step S32, dif2a and 2 x Ω are compared. If dif2a is less than 2 x Ω, n = 1 is set in step S33 (in other words, coding distortion removal is applied up to the first pixel from the boundary of each adjacent block), and the process then ends. dif2a is the absolute value of the difference between pixel values near the boundary, and because the amount of high-frequency components near the boundary decreases as this difference decreases, coding distortion can be removed effectively from the boundary region by increasing the number of de-blocked pixels as dif2a becomes smaller.
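The decision of steps S28 to S33 can be written out directly. In this sketch, PI and OMEGA stand in for the parameters written as П and Ω in the text; their per-QP values come from the table of Fig. 5, which is not reproduced here.

```python
def filtered_pixel_count(p0, p1, q0, q1, PI, OMEGA):
    """Return n, the number of pixels on each side of the block
    boundary to filter (0 means the boundary is left unfiltered)."""
    dif1a = abs(p0 - q0)       # step S27: difference across the boundary
    dif2a = abs(p1 - q1)
    if dif1a > PI:             # step S28: a large step is a real edge
        return 0               # step S29: do not filter
    if dif2a < OMEGA:          # step S30: very flat near the boundary
        return 2               # step S31: filter up to the 2nd pixel
    if dif2a < 2 * OMEGA:      # step S32: moderately flat
        return 1               # step S33: filter the 1st pixel only
    return 0
```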
Fig. 7 is a flow chart of the process for determining the filter coefficients in the coding distortion removal method of the present invention, and is an example of the operation of the filter coefficient controller 6 in Fig. 2.
Three conditions are evaluated using n, dif1, dif2, and φ in step S37. If all three conditions are true, three-tap filtering is set in step S39. That is, φ is a threshold used to determine the number of filter taps: when the high-frequency components are low (n = 2) and the pixel values barely change across the boundary (|dif2a - dif1a| < φ), a three-tap filter is adopted. A three-tap filter generally provides stronger suppression of high-frequency components than a one-tap filter. Because the filtering can be changed using the value of n, the parameter n can be used to change the filter type instead of only the number of pixels to which the filter is applied. The parameter n obtained in this way can thus be used to change both the number of pixels filtered and the type of filter adopted.
If the three conditions are not all true in step S37, the value of n is checked in step S38. If n ≥ 1, one-tap filtering is set in step S40. If n = 0, the filter is turned off in step S42.
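The tap selection of Fig. 7 can be sketched as below. The text says step S37 tests three conditions on n, dif1, dif2, and φ but states only two of them explicitly, so this sketch tests those two; `phi` stands in for the threshold written as φ.

```python
def select_filter_taps(n, dif1a, dif2a, phi):
    """Return the number of filter taps to use: 3, 1, or 0 (off).
    Strong smoothing is chosen only where high-frequency content is
    low (n == 2) and values barely change across the boundary."""
    if n == 2 and abs(dif2a - dif1a) < phi:
        return 3               # step S39: three-tap filter
    if n >= 1:
        return 1               # step S40: one-tap filter
    return 0                   # step S42: filter off
```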
It should be noted that the quantization parameter QP can change for each block. However, the coding distortion removal process becomes more complicated at boundaries between blocks with different quantization parameters QP. When the quantization parameter QP changes at a block boundary, the present invention uses
* the average quantization parameter QP of the adjacent blocks (fractions may be rounded up),
* the maximum quantization parameter QP of the adjacent blocks,
* the minimum quantization parameter QP of the adjacent blocks, or
* the quantization parameter QP of the left or above adjacent block
as the quantization parameter QP for filtering the blocks on both sides of the boundary, thereby preventing this complication. It should be noted that because the difference between these four quantization parameters QP is very small, one of them can be chosen in advance.
Coding distortion can easily be removed by the method described above.
Fig. 8 (a) is a block diagram of another embodiment of the de-blocking filter 62 shown in Fig. 1, differing from the embodiment of Fig. 2 in the part enclosed by the dotted line. It should be noted that parts identical to those in the block diagram of the coding distortion removal unit of the conventional coding distortion removal method shown in Fig. 34 are denoted by the same reference numbers, and further description of them is omitted here.
A pixel difference calculator 20 calculates the pixel differences at block boundaries from the reconstructed image Coef, and outputs a pixel difference DifPel. This pixel difference DifPel contains signals equivalent to dif1a and dif2a. The pixel difference DifPel is obtained by comparing pixels at horizontally or vertically symmetric positions across the boundary between coding unit blocks, and using the differences d1, d2, d3, d4 (chrominance and luminance differences) between them. If the average of these differences (for example, (d1 + d2 + d3 + d4)/4) is less than or equal to a specified value, no image boundary line can appear within the width range used to determine d1 to d4, and de-blocking filtering is therefore applied. On the other hand, if the average is greater than or equal to the specified value, an image boundary is present and de-blocking filtering is not applied. It should be noted that this comparison can use any one, any two, or any three of d1, d2, d3, and d4. Instead of the average, the maximum difference can also be compared with the specified value.
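The decision described above reduces to a single comparison, sketched below under the assumption that the differences d1 to d4 are already available as a list.

```python
def should_deblock(d, threshold):
    """Decide whether to de-block a coding-unit boundary from the
    symmetric pixel differences d1..d4 (list d). A small average means
    no true image edge crosses the boundary, so the discontinuity is
    coding distortion and should be filtered."""
    return sum(d) / len(d) <= threshold
```

As the text notes, `max(d)` could replace the average, and a subset of the differences could be used instead of all four.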
The flow chart for determining the filtered pixel count can serve as an example of the operation of the filtered pixel count controller 4. The flow chart for determining the filter coefficients shown in Fig. 7 shows an example of the operation of the filter coefficient controller 6 in this embodiment. By referring to the pixel difference DifPel shown in Fig. 8 (b), the amount of pixel difference calculation in the filtered pixel count controller 4 and the filter coefficient controller 6 can be reduced. The filtered pixel count controller 4 and the filter coefficient controller 6 can therefore set the filtered pixel count and the filter coefficients without referring to the reconstructed image Coef.
Clearly, the amount of calculation can be reduced by reusing the values calculated as the pixel difference DifPel.
Embodiment 3
This embodiment of the present invention describes an encoding apparatus and a decoding apparatus implementing the coding distortion removal method described in another embodiment of the present invention.
Fig. 9 is a block diagram of the encoding apparatus.
A motion detection unit 30 compares the reference images Ref1 and Ref2 output from a first memory 38 and a second memory 40, respectively, with the picture signal Vin, and detects a motion vector MV, that is, the amount of motion in the picture signal Vin relative to the reference picture. It should be noted that information indicating whether referring to reference image Ref1 or reference image Ref2 yields the smaller prediction error is also contained in the motion vector MV and reported to a motion compensation unit 32. The motion compensation unit 32 extracts the image at the position indicated by the motion vector MV from reference image Ref1 or Ref2, and outputs it as the motion-compensated image MCpel.
A subtractor 42 obtains the difference between the picture signal Vin and the motion-compensated image MCpel, and outputs it to a cosine transform unit (DCT) 46. The cosine transform unit 46 computes the DCT of this input difference and quantizes it, then outputs frequency components FCoef. A zigzag scanner 48 outputs frequency-coded components DCoef in which the order of the frequency components FCoef has been rearranged, and a variable-length coding unit 50 applies variable-length coding to the frequency-coded components to output the coded signal Str.
The output of the DCT unit (cosine transform unit) 46 is also input to an inverse DCT unit (inverse cosine transform unit) 44. A synthesizer 34 merges the difference image restored from the frequency components FCoef with the motion-compensated image MCpel output from the motion compensation unit 32, and outputs the merged image. The merged image Coef is stored as-is in the first memory 38, and is also processed by a de-blocking filter 36; the decoded picture signal Vout from which coding distortion has been removed is stored in the second memory 40.
Fig. 10 is a block diagram of the decoding apparatus. This decoding apparatus correctly decodes the coded signal Str encoded by the encoding apparatus shown in the block diagram of Fig. 9. Parts of Fig. 10 identical to their counterparts in Fig. 32 or Fig. 33 are identified by the same reference numbers, and description of them is omitted here. An inverse DCT unit (inverse cosine transform unit) 56 dequantizes the frequency components and computes the inverse DCT to output the difference image DifCoef. An adder 58 adds the difference image DifCoef and the motion-compensated image MCpel to obtain the reconstructed image Coef. The reconstructed image Coef is stored in a first memory 64, and the decoded picture signal Vout obtained by removing coding distortion from the reconstructed image Coef with the de-blocking filter 62 is stored in a second memory 66.
The result of this operation is that images from which coding distortion has not been removed are stored in the first memory 38 and the first memory 64, and images from which coding distortion has been removed are stored in the second memory 40 and the second memory 66. The coding distortion removal process does not always remove only coding distortion; part of the true picture signal may also be lost. The encoding apparatus shown in Fig. 9 is therefore configured so that the motion detection unit 30 can always select the better output of the first memory 38 and the second memory 40.
With the configuration of this embodiment, if part of the original picture signal is lost by removing coding distortion, a suitable reference picture can still be selected by referring to the first memory 38. A suitable reference picture can likewise be selected by the decoding apparatus shown in Fig. 10.
It should be noted that although this embodiment uses the DCT as the orthogonal transform in the present invention, a Hadamard transform or a wavelet transform can also be used.
Embodiment 4
Fig. 11 is a block diagram of a coding distortion removal unit according to a preferred embodiment of the present invention, corresponding, for example, to the de-blocking filter 62 shown in Fig. 1. This coding distortion removal unit is distinguished by determining the thresholds used to set the filter. It should be noted that parts performing the same operation as the coding distortion removal unit shown in Fig. 34 are identified by the same reference numbers, and description of them is omitted here.
A filter setting parameter decoder 22 decodes a filter setting parameter signal FtrStr and then outputs filter parameters FtrPrm. This filter setting parameter signal FtrStr is not a threshold itself, but a parameter used to set the thresholds. The filter parameters FtrPrm are equivalent to П, Ω, and φ in Fig. 5. By decoding the filter setting parameter signal FtrStr to obtain data optimizing these parameters П, Ω, and φ for each picture, suitable coding distortion removal can be applied to the image.
Fig. 12 shows the structure of the coded signal Str in the coding distortion removal method of the present invention. Fig. 12 (a) is the coded signal for one picture, and contains picture data PicData holding the data for one picture and a picture header PicHdr common to all the data in the picture. This picture header PicHdr contains the filter setting parameter signal FtrStr.
Fig. 12 (b) shows the structure of the picture data PicData. This picture data PicData contains slice signals SliceStr, each the coded signal of a slice containing a group of plural block units.
Fig. 12 (c) shows the structure of the slice signal SliceStr, which contains slice data SliceData holding the data for one slice and a slice header SliceHdr common to all the data in the slice. By writing the filter setting parameter signal FtrStr to the slice header SliceHdr, a coded signal received in slice data SliceData units can be correctly decoded.
If the picture data PicData contains plural slice signals SliceStr, the filter setting parameter signal FtrStr can be written to only some of the slice headers SliceHdr instead of writing the filter setting parameter signal FtrStr to all of them. If the filter setting parameter signal FtrStr is common to each slice, and the filter setting parameter signal FtrStr is not written to a slice header SliceHdr as shown in Fig. 12 (c), it can be substituted with the filter setting parameter signal FtrStr from another slice header SliceHdr; this suppresses the increase in the number of bits otherwise caused by repeating the filter setting parameter signal FtrStr.
If the coded signal Str is sent in small data units such as packets rather than as a single continuous bit stream, the header and the non-header parts can be sent separately. In that case, the header and the data parts are not in a single bit stream as shown in Fig. 12. However, even if the header and data parts are not sent in consecutive order, the header for a particular data packet can simply be sent in another packet, so even though the transmission is not a single bit stream, the concept is the same as that of the bit stream shown in Fig. 12.
Fig. 13 is a block diagram of the encoding apparatus. It should be noted that parts of Fig. 13 identical to those in Fig. 9 are denoted by the same reference numbers, and further description of them is omitted here.
A memory 217 stores the picture signal Vin, the input picture signal to be encoded. A picture quality comparison unit 216 compares the encoding target picture signal read from the memory 217 with the decoded picture signal Vout. The size of the error obtained from the comparison by the picture quality comparison unit 216 is stored in a comparison memory 218 together with the de-blocking filter threshold used for the decoded picture. A selection unit 219 selects the de-blocking filter threshold corresponding to the smallest error stored in the comparison memory 218 as the optimized threshold. The selected optimized threshold is multiplexed into the bit stream of the corresponding picture as a related additional bit stream. Based on the optimized threshold output by the selection unit 219, a threshold control unit 215 generates candidate thresholds for de-blocking filtering the next picture, notifies the de-blocking filter 36 to change the threshold of the coding distortion removal process, and sends the currently used threshold to the comparison memory 218.
Fig. 14 is a conceptual representation of the complete encoding apparatus shown in the block diagram of Fig. 13. In Fig. 14, an optimized threshold selection unit 226 performs the operations of the parts of Fig. 13 other than the zigzag scanner 48, the variable-length coding unit 50, and the threshold addition unit 220, and is equivalent to the operations of the memory 217, the picture quality comparison unit 216, the comparison memory 218, the selection unit 219, and the threshold control unit 215. A video encoder 227 corresponds to the parts of Fig. 13 other than the memory 217, the picture quality comparison unit 216, the comparison memory 218, the selection unit 219, and the threshold control unit 215. A threshold memory 228 stores the optimized threshold described above.
The optimized threshold selection unit 226 selects the optimized threshold. This optimized threshold is equivalent to the set of П, Ω, and φ values determined for each quantization parameter QP in Fig. 5. The selected optimized threshold is stored in the threshold memory 228, and is applied to the video encoder 227 as the filter setting parameter signal FtrStr. The filter setting parameter decoder 22 shown in Fig. 11, for example, processes the coded filter setting parameter signal FtrStr in the decoder.
It should be noted that the optimized threshold can also be stored in a memory inside the threshold control unit 215 shown in Fig. 13, with the threshold data being sent to the threshold addition unit 220 by the threshold control unit 215.
The operation of determining the filter setting parameter signal FtrStr when removing coding distortion is described below. Fig. 15, Fig. 16, and Fig. 17 show flow charts of the operation of the encoding apparatus described in Fig. 13 and Fig. 14.
Fig. 15 is a flow chart of the operation of measuring picture quality.
First, the target frame target_frame is set and the first picture is output (step 229). The target frame target_frame is the picture used to derive the threshold.
Next, the threshold control unit 215 sets the threshold range (step 230), and the value at one end of this range is then output from the threshold control unit 215 as the initial threshold (step 231).
Using this initial threshold, the de-blocking filter 36 removes coding distortion and encoding of the pictures up to the target frame target_frame begins (step 232); the picture quality comparison unit 216 then compares the picture quality of the first encoded picture with the picture signal Vin (step 233).
This comparison result is stored in the comparison memory 218 (step 234), and the current frame number current_frame is then incremented (step 235). In other words, processing advances from the first picture to the next picture, and the next picture is output to, for example, the optimized threshold selection unit 226 and the video encoder 227 shown in Fig. 14, or the memory 217, the motion detection unit 30, and the subtractor 42 shown in Fig. 13.
Step 236 then determines whether the current frame number current_frame has reached the target frame target_frame. If it has not, steps 233 to 235 are repeated: the picture quality comparison unit 216 measures the picture quality of the input picture and stores the result in the comparison memory 218. If the current frame number current_frame equals the target frame target_frame, control proceeds to step 237 and the current frame number current_frame is reset to the first picture.
Next, the threshold control unit 215 increments the threshold (step 238A); that is, the threshold is set to the next value. The "next value" is the value obtained by increasing the first value by a specified increment.
It is then determined whether all thresholds up to the value at the other end of the set range have been tested (step 238B). If all thresholds have been tested, the process of determining the optimized threshold ends. If not all thresholds have been tested yet, control loops back to step 232 and the pictures up to the target frame target_frame are encoded again.
In this way the picture quality can be measured using one threshold by measuring the picture quality of all pictures up to the target frame target_frame, then incrementing the threshold by the specified amount and measuring the picture quality of all pictures up to the target frame target_frame again.
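The sweep of Fig. 15 can be sketched as below, under the assumption that `encode_and_measure(frame, threshold)` is a hypothetical helper returning the picture quality error for one frame de-blocked with the given threshold (smaller is better).

```python
def sweep_thresholds(frames, thresholds, encode_and_measure):
    """Fig. 15 structure: outer loop over candidate thresholds
    (steps 231, 238A, 238B), inner loop over all frames up to
    target_frame (steps 232-236). Returns the comparison memory
    as {threshold: [per-frame errors]} for later selection."""
    comparison_memory = {}
    for th in thresholds:
        results = []
        for frame in frames:
            results.append(encode_and_measure(frame, th))  # steps 233-234
        comparison_memory[th] = results
    return comparison_memory
```

Swapping the two loops (all thresholds per picture, then the next picture) gives the Fig. 16 variant described below.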
Referring next to the flow chart of Fig. 16, a method is described of measuring picture quality in one picture with all thresholds set in the threshold range, then proceeding to the next picture and again measuring picture quality with all thresholds in the threshold range.
First, the target frame target_frame is set and the first picture is output (step 239). The current frame number current_frame is then initialized to 0 (step 240).
Next, the threshold control unit 215 sets the threshold range (step 241), and the threshold is set in the de-blocking filter 36 (step 242).
The first picture is then encoded (with the coding distortion removal process) using the initialized threshold (step 243), and the picture quality of the encoded picture is measured by the picture quality comparison unit 216 (step 244).
The result output by the picture quality comparison unit 216 is stored in the comparison memory 218 (step 245), and the threshold control unit 215 increments the threshold to the next value (step 246A).
Whether all thresholds have been tested is then determined (step 246B). If not all thresholds have been tested, control loops back to step 242 and the picture quality of the same picture is measured with a different threshold. If all thresholds have been tested, control proceeds to step 247.
The current frame number current_frame is then incremented in step 247. That is, the processed picture advances from the first picture (first frame) to the second picture (second frame), and the next picture is output to, for example, the optimized threshold selection unit 226 and the video encoder 227 shown in Fig. 14, or the memory 217, the motion detection unit 30, and the subtractor 42 shown in Fig. 13.
Next, step 248 determines whether the current frame number current_frame has reached the target frame target_frame. If it has not, steps 241 to 247 are repeated. If current_frame equals target_frame, the picture quality measurement process ends.
Fig. 17 is a flow chart of a method for selecting the optimized threshold based on the thresholds set as described in Fig. 15 or Fig. 16 and the picture quality results measured with those thresholds.
The selection unit 219 acquires the picture quality measurement results and the corresponding threshold data in step 249.
The measurement results are then sorted in a specified order (step 250).
The picture with the best picture quality is then selected according to specified conditions (step 251), and the threshold used for that picture is selected as the optimum threshold. These specified conditions can be any one or a combination of the following: the S/N ratio, the smallest difference between the reconstructed picture (the picture deblocked at that threshold) and the original picture (the input image signal Vin), and the smallest average difference.
The selected optimum threshold is then output as the filter parameter signal FtrStr to, for example, the video encoder 227 in Fig. 14 (step 252).
The optimum threshold can thus be selected using the method described with reference to Fig. 17.
As described above, this preferred embodiment measures the picture quality for every threshold in a specified range, collects the measurement results, and selects the optimum threshold from those results. Rather than continuing to measure picture quality for every threshold in the range, picture quality measurement can also be ended once the point of best picture quality is detected, and the threshold producing that picture quality result selected as the optimum threshold. This method reduces the number of picture quality measurements performed.
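The selection of steps 249 to 252 reduces to sorting the collected (threshold, quality) pairs and taking the threshold of the best entry; a sketch under the assumption that quality is reported as an S/N ratio where larger is better:

```python
def select_optimum_threshold(measurements):
    """measurements: list of (threshold, quality) pairs acquired from
    the comparison memory (step 249). Sort by quality in the specified
    order (step 250) and return the threshold of the best picture
    (steps 251 and 252)."""
    ranked = sorted(measurements, key=lambda m: m[1], reverse=True)
    return ranked[0][0]

# Hypothetical measurements: threshold 2 gives the best S/N ratio.
ftr_str = select_optimum_threshold([(1, 30.1), (2, 33.5), (3, 31.2)])
```

The returned value corresponds to the filter parameter signal FtrStr output in step 252.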
A given coding distortion removal process compares pixel values in a block with pixel values in an adjacent block. In this case, the adjacent block is a block for which the coding distortion removal process has finished and pixel value correction has ended.
When removing coding distortion from block G in Fig. 18, for example, coding distortion can be removed by comparison with any one of the four adjacent blocks E, D, H, and M. However, coding distortion can be removed more accurately by using a block for which the coding distortion removal process has already finished.
Coding distortion is preferably removed in linear sequence following the scanning order, that is, in horizontal scan line order along the direction of the horizontal scan lines of the picture.
In other words, referring to Fig. 18, the coding distortion removal process is applied first to blocks A, B, E, F on the first scan line, then to blocks C, D, G, H on the next line, and so forth. Each block has four boundaries, but the coding distortion removal process is preferably applied using the adjacent blocks touching the top boundary and left boundary.
In this case, the coding distortion removal process is not applied to block A because no adjacent block touches its top or left boundary.
Likewise, there is no adjacent block touching the top boundary of block B, so block B is deblocked using block A, which touches the left boundary of block B.
Blocks E and D are adjacent to the top boundary and left boundary of block G, respectively, so coding distortion is removed from block G using blocks E and D, without using blocks H and M.
By removing coding distortion between a new block and the adjacent blocks from which coding distortion has already been removed, and not referencing adjacent blocks not yet processed for coding distortion, coding distortion can be removed more accurately.
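The scan-order rule described above, in which each block is filtered only against its already-deblocked top and left neighbours, can be sketched as follows; filter_edge is a hypothetical placeholder for the actual deblocking of one boundary:

```python
def deblock_in_scan_order(width, height, filter_edge):
    """Visit blocks in horizontal scan-line order. For each block,
    call filter_edge(neighbour, block) only for the top and left
    neighbours, which have already been deblocked; right and bottom
    neighbours (e.g. H and M for block G in Fig. 18) are not used."""
    for y in range(height):          # scan lines, top to bottom
        for x in range(width):       # blocks, left to right
            if y > 0:
                filter_edge((x, y - 1), (x, y))  # top boundary
            if x > 0:
                filter_edge((x - 1, y), (x, y))  # left boundary

edges = []
deblock_in_scan_order(2, 2, lambda a, b: edges.append((a, b)))
```

Note that the top-left block (block A in the text) is never the target of filtering, exactly as described.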
Embodiment 5
This embodiment first describes dividing the pixels into groups of plural pixels each, then pairing the groups, such as groups of 4 pixels in a column, and removing coding distortion group by group. The coding distortion removal process used in this embodiment refers to determining whether to deblock the areas on both sides of a block boundary, and to the deblocking operation itself, that is, both processes or either one of them. A block can be a 4x4 block of 16 pixels, that is, the minimum coding unit, or any of the motion compensation blocks described with reference to Fig. 3.
As shown in Fig. 19, the 4-pixel groups are groups of 4 pixels arranged in columns along the block boundary. Four such groups, r1, r2, r3, and r4, are shown in Fig. 19. The data from these four groups r1, r2, r3, r4 can be stored in four registers (SIMD registers, for example). Groups r1, r2 and groups r3, r4 are located symmetrically on the left and right sides of the block boundary. The pixel values in group r1 are compared with the pixel values in group r2, and coding distortion removal is applied using the resulting differences.
More precisely, difference 1 is obtained between the top pixel in group r1 and the top pixel in group r2, difference 2 between the second pixel from the top in group r1 and the second pixel from the top in group r2, difference 3 between the second pixel from the bottom in group r1 and the second pixel from the bottom in group r2, and difference 4 between the bottom pixel in group r1 and the bottom pixel in group r2. The average of differences 1, 2, 3, and 4, or the sum of the absolute values of differences 1, 2, 3, and 4, is used as a representative difference, and this representative difference is compared with a specific threshold. Other methods can also be used. Because these operations are performed in units of the 4 pixels in one group, parallel processing can be used to significantly improve overall speed compared with processing one pixel at a time.
Although only the comparison of group r1 with group r2 is described above, greater accuracy can be achieved by also comparing the pixel values in group r3 with the pixel values in group r4, then adding the representative difference from the comparison of groups r3 and r4 to the representative difference from the comparison of groups r1 and r2, or averaging the two representative differences, and removing coding distortion accordingly.
The above operation is applied to a vertical block boundary, but substantially the same operation can be applied along a horizontal boundary simply by assembling horizontal groups of 4 pixels each.
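The representative difference described for groups r1 and r2 can be computed as in this minimal sketch; the threshold value and pixel values are hypothetical examples:

```python
def representative_difference(r1, r2, use_abs_sum=False):
    """r1, r2: the 4 pixel values in the column groups on either side
    of the block boundary (Fig. 19). Returns the average of
    differences 1 to 4, or the sum of their absolute values."""
    diffs = [a - b for a, b in zip(r1, r2)]  # differences 1 to 4
    if use_abs_sum:
        return sum(abs(d) for d in diffs)
    return sum(diffs) / len(diffs)

THRESHOLD = 8  # hypothetical deblocking threshold
d = representative_difference([100, 102, 98, 101], [96, 97, 99, 95])
needs_deblock = abs(d) < THRESHOLD  # filter only mild discontinuities
```

In an actual implementation the four subtractions would run in parallel in a SIMD register, which is the source of the speed gain noted above.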
Fig. 20 (a) and (b) show cases in which the lines of the screen are interlaced. An interlaced picture is a picture in which one frame consists of two fields presented at different times. Encoding and decoding an interlaced picture can be accomplished by processing one frame as a frame or as two fields, and by frame structure or field structure within the frame. In Fig. 20, the small gray squares denote odd-line pixels, and the small white squares denote even-line pixels. The gray pixels on the odd lines form one field of the frame, and the white pixels on the even lines form the other field of the same frame.
In an interlaced picture signal, one frame consists of two fields (an even field and an odd field) at different instants in time. In a still picture, the pixel values do not change over time, and the correlation between vertically adjacent lines in the frame is stronger than the correlation between vertically adjacent lines within one field. In a moving picture, however, the picture changes greatly over time, the pixel values in the two fields can differ greatly, and the correlation between vertically adjacent lines within one field is stronger than the correlation between vertically adjacent lines in the frame. It can therefore be more efficient to process still pictures by frame and moving pictures by field.
In an interlaced picture, (1) all blocks can be frame structure blocks (frame structure blocks are further described below), (2) all blocks can be field structure blocks (field structure blocks are further described below), or (3) the picture can contain both frame structure and field structure blocks.
If the picture contains all frame structure blocks (case (1)), deblocking is applied entirely by frame structure units. If the picture contains all field structure blocks (case (2)), deblocking is applied entirely by field structure units. If the picture contains both frame structure and field structure blocks (case (3)), deblocking is applied with adaptive conversion from field structure to frame structure or from frame structure to field structure. These operations are described in further detail below.
As shown in Fig. 20 (a) (referred to below as the "frame structure"), a still image or an interlaced picture containing little motion is processed in frame units composed of an odd field and an even field. In the frame structure, as shown on the right side of Fig. 20 (a), a block of 16 pixels contains both odd-line pixels and even-line pixels. The coding distortion removal process is applied between blocks using the frame structure. That is, the coding distortion removal process is applied to the block boundaries as described with reference to Fig. 8 (b).
As shown in Fig. 20 (b) (referred to below as the "field structure"), an interlaced picture with vigorous motion is processed in field units separated into an odd field and an even field. As shown on the right side of Fig. 20 (b), the picture is separated into an odd field of the odd lines and an even field of the even lines; the odd field contains blocks of odd lines, and the even field contains blocks of even lines. The coding distortion removal process is applied only between field structure blocks of only odd lines, or only between field structure blocks of only even lines.
Fig. 21 (a) shows a case in which part of the interlaced picture consists of frame structure blocks and another part consists of field structure blocks. Preferably, the moving picture part of the image contains the field structure blocks and the still picture part contains the frame structure blocks. The smallest unit formed with a field structure or frame structure is the macroblock, that is, the largest unit to which DCT or another orthogonal transform or motion compensation is applied (or a super-macroblock of plural macroblocks). It is assumed below that the rectangle containing the car in Fig. 21 (a) contains field structure blocks, and the rest of the picture contains frame structure blocks.
How the coding distortion removal process is applied at the boundary between the field structure part and the frame structure part is described next.
Referring to Fig. 21 (b), the blocks in columns C1, C2, C3, and C4 belong to the image area containing the car, and because there is motion in this image area they have a field structure. The blocks in columns C5, C6, C7, and C8 belong to the image area without the car, that is, the still picture area, and therefore have a frame structure, which is efficient for still areas. Note that in this example a macroblock is 16 pixels per side and a block is 4 pixels per side. Columns C4 and C5 are shown separated in Fig. 21 (b), but are actually adjacent in the picture. The coding distortion removal shown in Fig. 8 (b) is applied to the block boundary between columns C3 and C4 and to the block boundary between columns C5 and C6.
To process the block boundary between columns C4 and C5, the frame structure blocks in column C5 are first converted to field structure blocks as shown in Fig. 21 (c). This is done by converting the odd-line pixels in column C5 of Fig. 21 (b) into the gray pixel blocks of column C5 in Fig. 21 (c), and converting the even-line pixels in column C5 of Fig. 21 (b) into the white pixel blocks of column C5 in Fig. 21 (c). The coding distortion at the block boundary between columns C4 and C5 is then removed as shown in Fig. 8 (b).
When there is motion in the picture, the frame structure blocks are converted to field structure blocks because, if the field structure blocks were converted to frame structure blocks, the vertical correlation between pixels would be lost, and unnatural degradation could occur when the coding distortion removal process is applied between vertically adjacent blocks. In a still picture, on the other hand, the field structure blocks could be converted to frame structure blocks, because coding error in the vertical high-frequency components can be suppressed, the vertical correlation between pixels is not lost, and unnatural image quality degradation is unlikely to occur.
In the above example, the frame structure blocks are converted to field structure blocks in order to reduce the amount of processing (only frame-to-field conversion is performed). However, if the number of operations is of no concern, another method can be used in which frames are converted to fields and fields are converted to frames; because the extra process of converting fields to frames is needed, the number of operations increases compared with the previous example. More precisely, whether the target pixel for coding distortion removal (that is, the pixel whose value is changed by deblocking) is in a frame structure block or a field structure block is first determined. If the target pixel is in a field structure block, the frame structure block is converted to a field structure block (that is, to the block type of the target pixel), and if the target pixel is in a frame structure block, the field structure block is converted to a frame structure block (that is, to the block type of the target pixel).
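The frame-to-field conversion applied to column C5 (Fig. 21 (b) to Fig. 21 (c)) is a separation of a block's lines into the two fields; a minimal sketch on a block represented as a list of pixel rows:

```python
def frame_to_field(block_rows):
    """Split a frame structure block (a list of pixel rows, top line
    first) into its two field blocks, as when column C5 in Fig. 21 (b)
    is converted to the gray-line and white-line blocks of column C5
    in Fig. 21 (c)."""
    first_field = block_rows[0::2]   # lines 0, 2, ... (one field)
    second_field = block_rows[1::2]  # lines 1, 3, ... (other field)
    return first_field, second_field

gray, white = frame_to_field([[10, 11], [20, 21], [12, 13], [22, 23]])
```

The reverse conversion (field to frame) would interleave the two lists back together, which is the extra step the text notes when both conversion directions are used.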
Operation when frame structure and field structure blocks are mixed is described next with reference to the flow chart in Fig. 22.
A frame in an interlaced image signal stream consists of two fields scanned at different times. A frame can therefore be frame-encoded (frame structure coding) by combining the two fields into a single coding unit, or field-encoded (field structure coding) by separately encoding and handling the two fields. These coding methods can also be grouped into the following two classes, fixed coding and adaptive coding. With fixed coding, the entire picture is switched between frame coding and field coding. With adaptive coding, the picture is divided into a number of blocks, and each block is either frame-encoded or field-encoded.
Fixed coding further includes fixed frame coding applied to frame structure blocks and fixed field coding applied to field structure blocks. With fixed coding, the interlaced video sequence is always encoded with frame coding or field coding regardless of the content.
With adaptive coding, however, frame coding or field coding can be adaptively selected according to the content, the picture, or the coding block unit within the picture. These coding blocks within a picture can be as small as a macroblock. With adaptive coding, an individual macroblock can therefore be encoded using either frame coding or field coding. The macroblock is used as the coding unit below.
Frame-encoded blocks, that is, blocks with a frame structure, can be processed for coding distortion removal using the same technique applied to non-interlaced video.
For field-encoded blocks, that is, blocks with a field structure, the frame is separated into the even field and the odd field, each of which is handled as a separate picture, and deblocking is applied to each.
Referring to the flow chart in Fig. 22, whether the target block is field-encoded or frame-encoded is first determined (step 63). If the block is field-encoded, steps 64 to 69 run. If the block is frame-encoded, steps 70 to 72 run.
Steps 64 to 66 process the even-field block structure, and steps 67 to 69 process the odd-field block structure. Steps 64 to 66 remove the coding distortion between the white pixels at the boundary between columns C3 and C4 in Fig. 21 (b), and steps 67 to 69 remove the coding distortion between the gray pixels at the boundary between columns C3 and C4 in Fig. 21 (b).
More precisely, pixel luminance is compared in step 64 to determine whether coding distortion removal is needed. The number of pixels to filter is then determined in step 65. Coding distortion is then removed in field mode in step 66.
Steps 67, 68, and 69 perform the same operations as steps 64, 65, and 66, respectively.
Steps 70 to 72 process the frame block structure to remove the coding distortion at the boundary between columns C5 and C6 in Fig. 21 (b). More precisely, pixel luminance is compared in step 70 to determine whether coding distortion removal is needed. The number of pixels to filter is then determined in step 71. Coding distortion is then removed in frame mode in step 72.
Whether all blocks have been processed is determined in step 73, and operation ends when they have.
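The branch structure of Fig. 22 can be summarized as follows; need_filter, pixel_count, and remove are hypothetical stand-ins for steps 64/67/70, 65/68/71, and 66/69/72 respectively:

```python
def deblock_block(block, need_filter, pixel_count, remove):
    """Dispatch per Fig. 22: step 63 tests the coding mode, then each
    field (or the frame) runs the decide / count / filter steps."""
    if block["mode"] == "field":               # step 63
        for field in ("even", "odd"):          # steps 64-66, then 67-69
            if need_filter(block, field):
                remove(block, field, pixel_count(block, field))
    else:                                      # steps 70 to 72
        if need_filter(block, "frame"):
            remove(block, "frame", pixel_count(block, "frame"))

calls = []
deblock_block({"mode": "field"},
              need_filter=lambda b, f: True,
              pixel_count=lambda b, f: 2,
              remove=lambda b, f, n: calls.append((f, n)))
```

The simplifications of Figs. 23 and 24 correspond to hoisting the need_filter test, and then also the pixel_count computation, out of the even/odd loop so each is evaluated once for both fields.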
Fig. 23 shows another method in which steps 64 and 67 of Fig. 22 are combined into a single step. More precisely, whether to remove coding distortion from the even-field and odd-field blocks is determined together, and deblocking is applied to both the even-field and odd-field blocks accordingly. This simplifies the coding distortion removal process.
Fig. 24 shows another method in which steps 65 and 68 of Fig. 23 are combined into a single operation that determines the number of pixels to deblock in both the even-field and odd-field blocks. Coding distortion removal is then applied to the even-field and odd-field blocks according to this result. This method further simplifies the coding distortion removal process.
Fig. 25 is a flow chart of the process used when frame-encoded blocks and field-encoded blocks are mixed in a single picture and the block boundary lies between a frame structure block and a field structure block.
First, step 95 determines whether the boundary between the blocks to be processed for coding distortion removal is the specified type of boundary, that is, whether a frame structure block is on one side of the line and a field structure block is on the other side. This is similar to determining whether the line lies between columns C4 and C5 in Fig. 21 (b). If it is (step 95 returns yes), control advances to step 96.
The frame structure block on the one side of the boundary is then converted to a field structure block (step 96). This conversion is similar to converting the blocks in column C5 of Fig. 21 (b) into the blocks in column C5 of Fig. 21 (c). The converted block is referred to below as the "conversion block."
Whether coding distortion removal is needed between the conversion block and the field structure block on the other side of the boundary is then determined (step 97). This step is similar to determining whether deblocking is needed at the boundary between columns C4 and C5 in Fig. 21 (c). If it is needed, control advances to step 98.
The number of pixels to filter is then determined (step 98), and coding distortion is then removed in field mode (step 99).
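Steps 95 to 99 can be sketched together: detect a mixed frame/field boundary, convert the frame-side block to field structure, then apply the field-mode decision and filter. All helper functions here are hypothetical stand-ins for the steps named above:

```python
def deblock_mixed_boundary(left, right, to_field, need_filter,
                           pixel_count, remove_field_mode):
    """left, right: blocks on either side of the boundary, each with a
    'structure' of 'frame' or 'field' (step 95). The frame-side block
    becomes the field structure 'conversion block' (step 96) before
    the field-mode decision and filter (steps 97 to 99)."""
    if {left["structure"], right["structure"]} != {"frame", "field"}:
        return False                                   # step 95: no
    if left["structure"] == "frame":
        left = to_field(left)                          # step 96
    else:
        right = to_field(right)
    if need_filter(left, right):                       # step 97
        remove_field_mode(left, right,
                          pixel_count(left, right))    # steps 98, 99
    return True

log = []
handled = deblock_mixed_boundary(
    {"structure": "field"}, {"structure": "frame"},
    to_field=lambda b: {"structure": "field"},
    need_filter=lambda a, b: True,
    pixel_count=lambda a, b: 1,
    remove_field_mode=lambda a, b, n: log.append(n))
```

Boundaries where both sides share the same structure fall through to the ordinary processing of Fig. 22 instead.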
Fig. 25 shows a method in which, when a frame structure block and a field structure block of an adaptively coded picture are adjacent, the frame structure block is converted to a field structure block and coding distortion is removed on a field basis, but it is also possible to do the reverse, that is, to convert the field structure block to a frame structure block and then remove coding distortion on a frame basis.
The advantage of removing coding distortion on a field basis as shown in Fig. 25 is that unnatural image quality degradation can be prevented even in a rapidly moving picture signal, because only pixels from the same instant in time are used to remove coding distortion. On the other hand, because the correlation between vertically adjacent pixels is stronger in a frame than in a field for a picture signal with little motion, the degradation of high-frequency components caused by frame-based deblocking is smaller than that caused by field-based deblocking. Both methods have advantages, and the device manufacturer can select the preferable method, or provide both so that the user can select the desired method.
Coding distortion removal can also be applied by picture unit (frame or field) rather than by the block unit of the adaptive coding. The deblocking filter is simplified by providing a single field mode or frame mode deblocking filter that processes picture units. The filter can be fixed to field mode or frame mode, or can switch on a picture basis. If the filter switches on a picture basis, the coding apparatus can determine the suitable mode, and an identification signal denoting whether the deblocking filter of the decoding apparatus should operate in field mode or frame mode can be added to the code stream header and sent to the decoder.
Furthermore, field or frame mode operation can be switched on a block unit basis, and by setting a parameter that prohibits field-based deblocking and switching (for example, prohibiting switching within a picture), coding distortion can be removed by frame unit.
It should be noted that the deblocking filters of the first to fifth embodiments above can be used as the post filter shown in Fig. 32 or the loop filter shown in Fig. 33.
When used as a loop filter, the image stored in memory 64 before the deblocking operation, that is, an image from which the block distortion has not yet been removed, is used as the prediction picture; compared with using the deblocked picture as the prediction picture, coded picture quality drops slightly more.
On the other hand, when used as a post filter, the result after coding distortion removal is not used as a reference picture, so decoded image quality does not drop greatly regardless of the type of deblocking filter used. For example, a simple filter performing minimal operations can be used as the deblocking filter 62 in a cell phone, a device for which low power consumption takes priority, and a high-precision, high-image-quality filter can be used as the deblocking filter 62 in a stationary entertainment system for which picture quality takes priority.
Embodiment 6
The processes described in the above embodiments can easily be performed on an independent computer system by recording a program implementing the steps of the coding distortion removal method, coding method, and decoding method described in the foregoing embodiments on a floppy disk or other computer-readable data recording medium.
Fig. 26 shows a computer system as a further embodiment of the invention, achieved using a data recording medium (a floppy disk in this example) storing the coding distortion removal method, coding method, and decoding method described in the first to fifth embodiments above.
Fig. 26 (b) shows a floppy disk as seen from the front, in cross section, and the actual disk medium, and Fig. 26 (a) shows the physical format of a typical floppy disk recording medium. The floppy disk FD is housed inside a case F. A plurality of concentric tracks Tr are formed from the outside circumference to the inside circumference of the disk surface, and the tracks are divided into 16 sectors Se in the angular direction. A floppy disk FD storing the above program according to the present invention thus has the coding distortion removal method, coding method, and decoding method of the invention recorded as computer-executable programs in areas allocated for those programs on the floppy disk FD.
Fig. 26 (c) shows an apparatus for recording and reading these programs using the floppy disk FD. To record the programs on the floppy disk FD, the computer system Cs writes the coding distortion removal method, coding method, and decoding method as programs by means of a floppy disk drive FDD. To run the coding distortion removal method, coding method, and decoding method on the computer system from the programs stored on the floppy disk FD, the programs are read from the floppy disk FD by the floppy disk drive and then transferred to the computer system.
It should be noted that although a floppy disk is described above as the data recording medium, an optical disc or other type of computer-readable medium can be used, including a CD-ROM, memory card, ROM cassette, or any other medium capable of similarly recording the programs.
A system applying the video coding method and video decoding method of the above embodiments is described next.
Fig. 27 is a schematic diagram showing the overall configuration of a content supply system ex100 for providing a content distribution service. The service area of this communication system is divided into cells of a desired size, and a base station ex107 to ex110 (a fixed wireless station) is installed in each cell.
This content supply system ex100 has numerous individual devices, such as a computer ex111, a PDA (personal digital assistant) ex112, a video camera ex113, a cell phone ex114, and a camera-equipped cell phone ex115, connected to the Internet ex101 by way of, for example, an Internet service provider ex102, a telephone network ex104, and the base stations ex107 to ex110.
This content supply system ex100 shall not be limited to the configuration shown in Fig. 27, however, and the desired devices can be selectively connected. The individual devices can also be connected directly to the telephone network ex104 without passing through the fixed base stations ex107 to ex110.
The video camera ex113 is a digital video camera or other device capable of capturing video images. The cell phone can use any of various protocols, including PDC (Personal Digital Communications), CDMA (code division multiple access), W-CDMA (wideband code division multiple access), GSM (Global System for Mobile Communications), and PHS (Personal Handyphone System).
The video camera ex113 can be connected via a base station ex109 and the telephone network ex104 to a streaming server ex103, enabling live broadcasting of encoded content sent by the user using the video camera ex113 as a stream. The content received from the video camera ex113 can be encoded by the video camera ex113 or by the server. Video data captured with a camera ex116 can also be sent via the computer ex111 to the streaming server ex103. This camera ex116 is a digital camera or other device capable of capturing still pictures and video. The video data received from the camera ex116 can be encoded by the camera ex116 or by the computer ex111. In either case, the video data is processed by an LSI device ex117 in the computer ex111 or camera ex116. Software for video coding and decoding can be stored on any computer-readable data recording medium (such as a CD-ROM, floppy disk, or hard disk) that the computer ex111 can access.
Video data can also be sent by the camera-equipped cell phone ex115. In this case, the video data is encoded by the LSI device in the camera-equipped cell phone ex115.
With this content supply system ex100, content recorded by the user using the video camera ex113, camera ex116, or other device (such as content recorded live) can be encoded as described in the above embodiments of the invention and sent to the streaming server ex103. The streaming server ex103 then streams the content data out to clients requesting the data. Clients can be any device capable of decoding the encoded content, including the computer ex111, PDA ex112, video camera ex113, and cell phone ex114. The content supply system ex100 thus enables clients to receive and reproduce encoded content data, and by enabling clients to receive, decode, and play back content in real time, also makes personal broadcasting possible.
The video coding apparatus and video decoding apparatus of the present invention described in the above embodiments can be used for the coding and decoding performed by the individual devices in this content supply system ex100.
A cell phone used in this content supply system ex100 is described next by way of example.
Fig. 28 shows a cell phone ex115 using the video coding method and video decoding method of the present invention described above. As shown in Fig. 28, this camera-equipped cell phone ex115 has an antenna ex201 for exchanging RF signals with a base station ex110; a camera ex203, such as a CCD camera, for capturing video and still pictures; a display unit ex202, such as an LCD, for displaying images captured by the camera ex203 or images received by the antenna ex201 and then decoded; an operating panel with a keypad ex204 and other controls; an audio output unit, such as a speaker ex208, for outputting audio; an audio input unit, such as a microphone ex205; a recording medium ex207 for storing encoded or decoded data, such as video or still pictures captured by the camera ex203, received e-mail, and other video or still picture data; and a slot ex206 for loading the recording medium ex207 into the cell phone ex115. The recording medium ex207 can be an SD card or other type of flash memory device, for example, an EEPROM (electrically erasable programmable read-only memory) inside a plastic case.
The cell phone ex115 is further described with reference to Fig. 29. Connected by a synchronization bus ex313 to a main controller ex311, which systematically controls each part of the cell phone ex115 including the display unit ex202 and keypad ex204, are a power supply circuit ex310, an operating input controller ex304, an image encoding unit ex312, a camera interface ex303, an LCD controller ex302, an image decoding unit ex309, a multiplexer/demultiplexer ex308, a reading/writing unit ex307, a modulator/demodulator unit ex306, and an audio processing unit ex305.
When the end-call and power key is turned on by the user, the power supply circuit ex310 supplies power from a battery pack to each part of the cell phone ex115, setting the camera-equipped digital cell phone to the operating mode.
Controlled by the main controller ex311, which typically includes a CPU, ROM, and RAM, the cell phone ex115 converts the audio signals captured by the microphone ex205 in the talk mode into digital audio data by means of the audio processing unit ex305. The modulator/demodulator unit ex306 then applies spread spectrum processing to the output of the audio processing unit ex305, and the communication circuit ex301 applies D/A conversion and frequency conversion processing, after which the result is output via the antenna ex201. In the talk mode, the cell phone ex115 amplifies the signal received via the antenna ex201 and applies frequency conversion and A/D processing, the modulator/demodulator unit ex306 despreads the signal, the audio processing unit ex305 converts the despread signal into an analog audio signal, and the analog audio signal is output from the speaker ex208.
When e-mail is sent in the data communication mode, the text data of the e-mail message is input using the keypad ex204 and sent through the operating input controller ex304 to the main controller ex311. The main controller ex311 then applies spread spectrum processing to the text data using the modulator/demodulator unit ex306, applies D/A conversion and frequency conversion processing using the communication circuit ex301, and then transmits the result from the antenna ex201 to the base station ex110.
To send image data in the data communication mode, image data captured with the camera ex203 is supplied through the camera interface ex303 to the image encoding unit ex312. If the image data is not to be sent, the image data captured by the camera ex203 can be displayed directly on the display unit ex202 through the camera interface ex303 and LCD controller ex302.
The image encoding unit ex312 has the configuration of the picture coding apparatus of the present invention. It converts the image data supplied from the camera ex203 into coded picture data by compression coding using the coding method of the picture coding apparatus described in the foregoing embodiments, and outputs the coded picture data to the multiplexer/demultiplexer ex308. While the camera ex203 is recording, audio captured by the microphone ex205 of the cell phone ex115 is also sent as digital audio data through the audio processing unit ex305 to the multiplexer/demultiplexer ex308.
The multiplexer/demultiplexer ex308 multiplexes the coded picture data supplied from the image encoding unit ex312 with the audio data supplied from the audio processing unit ex305. The multiplexed data is then spread-spectrum processed by the modulator/demodulator unit ex306, subjected to D/A conversion and frequency conversion by the communication circuit ex301, and transmitted from the antenna ex201.
When data of a video file accessed from a web site on the Internet is received in data communication mode, the signal received from the base station ex110 via the antenna ex201 is despread by the modulator/demodulator unit ex306, and the resulting multiplexed data is sent to the multiplexer/demultiplexer ex308.
To decode the multiplexed data received via the antenna ex201, the multiplexer/demultiplexer ex308 demultiplexes the data into a coded video data bitstream and a coded audio data bitstream. It then supplies the coded video data bitstream to the picture decoding unit ex309 and the coded audio data bitstream to the audio processing unit ex305 via the synchronous bus ex313.
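The interleaving and separation performed by the multiplexer/demultiplexer ex308 can be sketched with a minimal length-prefixed packet scheme. The packet layout and all names here are hypothetical illustrations; the actual format used by ex308 is not given in this specification.

```python
import struct

# Each hypothetical packet: 1 tag byte (b'V' video / b'A' audio),
# a 2-byte big-endian payload length, then the payload itself.

def multiplex(video_chunks, audio_chunks):
    """Interleave coded video and audio chunks into one byte stream."""
    out = bytearray()
    for tag, chunks in ((b'V', video_chunks), (b'A', audio_chunks)):
        for chunk in chunks:
            out += tag + struct.pack('>H', len(chunk)) + chunk
    return bytes(out)

def demultiplex(data):
    """Split the multiplexed stream back into video and audio bitstreams."""
    video, audio = bytearray(), bytearray()
    i = 0
    while i < len(data):
        tag = data[i:i + 1]
        (length,) = struct.unpack('>H', data[i + 1:i + 3])
        (video if tag == b'V' else audio).extend(data[i + 3:i + 3 + length])
        i += 3 + length
    return bytes(video), bytes(audio)

v, a = demultiplex(multiplex([b'frame1', b'frame2'], [b'pcm0']))
assert v == b'frame1frame2' and a == b'pcm0'  # round trip separates cleanly
```

The tag byte plays the role of the stream identifier that lets the receiver route video packets to the picture decoding unit and audio packets to the audio processing unit.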
The picture decoding unit ex309 has the same configuration as the picture decoding apparatus described in the embodiments above. It produces reconstructed video data by decoding the coded video data bitstream with the decoding method corresponding to the coding method described above, and supplies the decoded video data to the display unit ex202 through the LCD controller ex302. Video data from a video file accessed on a web page on the Internet can thus be displayed. At the same time, the audio processing unit ex305 converts the audio data into an analog audio signal and supplies the result to the loudspeaker ex208, so that audio data contained in a video file accessed on an Internet web page can likewise be reproduced from the loudspeaker.
The communication system of the present invention is not limited to the configuration described above. For example, the picture coding apparatus and/or picture decoding apparatus of the present invention can be employed in the digital broadcasting system shown in Fig. 30, which transmits digital broadcasts via satellite or a terrestrial network.
More specifically, the broadcasting station ex409 transmits a coded video data bitstream by radio wave to a communication or broadcasting satellite ex410. On receiving this transmission, the broadcasting satellite ex410 emits a broadcast signal, which is received by, for example, an antenna ex406 of a household equipped with satellite broadcast reception facilities. The coded bitstream is then decoded and reconstructed by a television receiver ex401, a set-top box (STB) ex407, or another device.
The video decoding apparatus of the present invention can also be implemented in a playback device ex403 that reads and decodes a coded bitstream recorded on a recording medium ex402 such as a CD, DVD, or other storage medium. In this case, the reconstructed video signal is displayed on a monitor ex404, for example.
The picture decoding apparatus of the present invention can also be built into a set-top box ex407 connected to a satellite or terrestrial broadcasting antenna ex406, or to a cable antenna ex405 for cable television access. The output of this set-top box ex407 can likewise be displayed on a display ex408 of a television set.
The picture decoding apparatus can also be built into the television set itself rather than into the set-top box.
Signals can also be received from the satellite ex410 or the base station ex107 by an automobile ex412 equipped with a suitable antenna ex411, and the decoded video can be displayed on the display of a car navigation system ex413 in the automobile ex412.
Video signals can also be coded by a video coding apparatus according to an embodiment of the present invention and recorded on a data storage medium. More specifically, a DVD recorder can record the picture signal onto a DVD disc ex421, or a hard-disk recorder ex420 can record the picture signal. The video signal can also be recorded to an SD card ex422. If the recorder ex420 incorporates a video decoding apparatus according to the present invention, it can also play back video signals recorded on the DVD disc ex421, the SD card ex422, or another storage medium, and display them on the monitor ex408.
It should be noted that the car navigation system ex413 can be configured without, for example, the camera ex203, the camera interface ex303, and the image encoding unit ex312 shown in Fig. 29. The same applies to, for example, the computer ex111 and the television (receiver) ex401.
A cell phone ex114 or other terminal can be a transceiver terminal having both the encoder and the decoder described above, a transmitting terminal having only the encoder, or a receiving terminal having only the decoder.
It is obvious that the coding apparatus and decoding apparatus of the present invention are not limited to the configurations described in the first to sixth embodiments above, and numerous variations are possible.
The video coding method and video decoding method described in the embodiments above can be used in any of the devices and systems described above, achieving the effects of those embodiments.
The coding distortion removal method of the present invention thus provides a coding distortion removal method that is simple to implement, a coding distortion removal method with little risk of degrading the picture quality of the picture signal, and a coding method and a decoding method that reduce the risk of the picture quality of the picture signal being degraded as a result of coding distortion removal. The present invention therefore has great practical value.
Although the present invention has been described in conjunction with the preferred embodiments and with reference to the accompanying drawings, it should be noted that various changes and modifications will be apparent to those skilled in the art. Such changes and modifications are to be understood as falling within the scope of the present invention as defined by the appended claims, unless they depart therefrom.