WO2019150411A1 - Video encoding device, video encoding method, video decoding device, video decoding method, and video encoding system


Info

Publication number
WO2019150411A1
WO2019150411A1 (PCT/JP2018/002811)
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
component
candidates
target block
differential motion
Prior art date
Application number
PCT/JP2018/002811
Other languages
English (en)
Japanese (ja)
Inventor
幸二 山田
中川 章
Original Assignee
富士通株式会社 (Fujitsu Limited)
Priority date
Filing date
Publication date
Application filed by 富士通株式会社 (Fujitsu Limited)
Priority to PCT/JP2018/002811
Publication of WO2019150411A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/513: Processing of motion vectors
    • H04N 19/517: Processing of motion vectors by encoding
    • H04N 19/52: Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to a video encoding device, a video encoding method, a video decoding device, a video decoding method, and a video encoding system.
  • HEVC (High Efficiency Video Coding)
  • CABAC (context-adaptive binary arithmetic coding)
  • inter prediction is a prediction method that uses the pixel values of a block (reference block) that is temporally close to the encoding target block, while intra prediction is a prediction method that uses the pixel values of blocks spatially close to the encoding target block.
  • a motion vector indicating a reference block is generated.
  • the motion vector includes a horizontal component (x component) and a vertical component (y component) in the image at each time included in the video.
  • the motion vector for the encoding target block often has a high correlation with the motion vectors of the blocks around it. Therefore, in HEVC, a predicted motion vector, which is a predicted value of the motion vector of the encoding target block, is obtained from the motion vectors of the surrounding blocks, and a differential motion vector representing the difference between the actual motion vector of the encoding target block and the predicted motion vector is generated. By encoding this differential motion vector, the code amount of the motion vector can be compressed.
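The relationship above can be sketched in a few lines; the tuple representation of motion vectors here is a hypothetical choice for illustration, not the patent's syntax:

```python
def differential_mv(mv, predicted_mv):
    """Differential motion vector: actual MV minus predicted MV, per component."""
    return (mv[0] - predicted_mv[0], mv[1] - predicted_mv[1])

# When the predictor derived from the surrounding blocks is accurate, the
# differential components are small and therefore cheap to encode.
dmv = differential_mv((5, -3), (4, -2))   # -> (1, -1)
```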
  • FVC (Future Video Coding)
  • the syntax of a differential motion vector in inter prediction is encoded by CABAC.
  • the code amount of the flag indicating the sign (positive or negative) of each component of the differential motion vector is not sufficiently compressed.
  • an object of the present invention is to reduce a code amount accompanying a motion vector in video encoding.
  • the video encoding device includes a first encoding unit, a determination unit, a generation unit, and a second encoding unit.
  • the first encoding unit encodes the encoding target block in the image included in the video.
  • the determination unit generates a first differential motion vector from the motion vector for the encoding target block and the predicted motion vector for the encoding target block.
  • the determination unit generates a plurality of differential motion vector candidates including the first differential motion vector by changing the signs (positive or negative) of the components of the first differential motion vector.
  • then, the determination unit determines a second differential motion vector from among the differential motion vector candidates.
  • at this time, the determination unit determines the second differential motion vector using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of the plurality of reference block candidates indicated by the differential motion vector candidates.
  • the generating unit generates match information indicating whether or not the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector.
  • the second encoding unit encodes the absolute values of the components of the first differential motion vector together with the match information.
  • an occurrence probability model used for arithmetic coding is determined in accordance with other syntax elements or the values of syntax elements adjacent to the region to be encoded. In this case, the occurrence probabilities of the values logic "0" and logic "1" are variable. On the other hand, for bins whose occurrence probabilities are difficult to estimate, a bypass mode is selected in which the occurrence probabilities of logic "0" and logic "1" are fixed at 0.5.
  • in arithmetic coding, based on the occurrence probabilities of the symbols, the real-number interval from 0 (inclusive) to 1 (exclusive) is successively subdivided into subintervals, and a codeword in binary notation is finally generated from a real number within the resulting subinterval.
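The interval-subdivision idea, and why skewed probabilities compress better than the fixed 0.5 of bypass mode, can be sketched with a toy binary coder (the function name and interface are illustrative assumptions, not CABAC's actual integer-arithmetic implementation):

```python
def subdivide(bins, p_one):
    """Toy arithmetic coder core: narrow [low, high) once per bin.

    p_one is the probability of bin value 1. Bypass mode fixes p_one = 0.5;
    context mode adapts it, so predictable bins leave a wider interval.
    """
    low, high = 0.0, 1.0
    for b in bins:
        split = low + (high - low) * (1.0 - p_one)
        if b == 0:
            high = split   # value 0 takes the lower sub-interval
        else:
            low = split    # value 1 takes the upper sub-interval
    return high - low      # final width; ideal code length is -log2(width)

# Four bins that are almost always 0: the skewed model leaves a much wider
# interval (fewer bits) than the fixed 0.5 model.
width_context = subdivide([0, 0, 0, 0], 0.1)   # 0.9**4 = 0.6561
width_bypass = subdivide([0, 0, 0, 0], 0.5)    # 0.5**4 = 0.0625
```

A wider final interval needs fewer binary digits to identify, which is exactly why flags with a skewed 0/1 distribution benefit from context modeling.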
  • the syntax of the differential motion vector in HEVC includes a flag indicating the sign (positive or negative) of the x component and the y component of the differential motion vector.
  • the flag indicating the sign of the x component is mvd_sign_flag[0]
  • the flag indicating the sign of the y component is mvd_sign_flag[1].
  • the values of these flags are specified as follows.
  • since it is difficult to predict whether the sign of the x component or the y component of the differential motion vector will be positive or negative, in CABAC the flags indicating these signs are encoded in the bypass mode. For this reason, the code amount of these flags is not compressed.
  • FIG. 1 shows an example of the functional configuration of the video encoding apparatus according to the embodiment.
  • the video encoding device 101 of FIG. 1 includes a first encoding unit 111, a determination unit 112, a generation unit 113, and a second encoding unit 114.
  • FIG. 2 is a flowchart showing an example of video encoding processing performed by the video encoding device 101 of FIG.
  • the first encoding unit 111 encodes an encoding target block in an image included in the video (step 201).
  • the determination unit 112 generates a first differential motion vector from the motion vector for the encoding target block and the predicted motion vector for the encoding target block (step 202).
  • the determination unit 112 generates a plurality of differential motion vector candidates including the first differential motion vector by changing the signs (positive or negative) of the components of the first differential motion vector (step 203). Then, the determination unit 112 determines a second differential motion vector from among these differential motion vector candidates (step 204). At this time, the determination unit 112 determines the second differential motion vector using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of the plurality of reference block candidates indicated by the differential motion vector candidates.
  • the generation unit 113 generates match information indicating whether or not the sign of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector (step 205). Then, the second encoding unit 114 encodes the absolute values of the components of the first differential motion vector and the match information (step 206).
  • FIG. 3 shows a functional configuration example of the video decoding apparatus according to the embodiment. The video decoding device 301 of FIG. 3 includes a first decoding unit 311, a determination unit 312, a generation unit 313, and a second decoding unit 314.
  • FIG. 4 is a flowchart showing an example of video decoding processing performed by the video decoding device 301 in FIG.
  • the first decoding unit 311 decodes the encoded video, and restores the absolute value of the first differential motion vector component for the decoding target block in the image included in the encoded video (step 401).
  • the first decoding unit 311 also restores, together with the absolute values of the components of the first differential motion vector, match information indicating whether or not the sign (positive or negative) of each component of the first differential motion vector matches the sign of the corresponding component of the second differential motion vector.
  • the determination unit 312 generates a plurality of differential motion vector candidates by attaching signs to the absolute values of the components of the first differential motion vector (step 402). Then, the determination unit 312 determines a second differential motion vector from among these differential motion vector candidates (step 403). At this time, the determination unit 312 determines the second differential motion vector using the decoded pixel values of pixels adjacent to the decoding target block and the decoded pixel values of pixels included in each of the plurality of reference block candidates indicated by the differential motion vector candidates.
  • the generation unit 313 generates a first differential motion vector from the second differential motion vector based on the match information (step 404). Then, the generation unit 313 generates a motion vector for the decoding target block from the first differential motion vector and the predicted motion vector for the decoding target block (step 405).
  • the second decoding unit 314 decodes the coefficient information of the decoding target block using the motion vector for the decoding target block (step 406).
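Steps 402 to 405 on the decoder side can be sketched as follows; the tuple representation and the 0/1 match-flag convention (0 = sign matches the estimate, 1 = it differs) are assumptions for illustration:

```python
def reconstruct_mv(abs_dmv, estimated_dmv, match_flags, predicted_mv):
    """Recover the motion vector from the decoded |DMV| components, the
    estimated (second) DMV chosen from the sign candidates, the per-component
    match flags, and the predicted motion vector."""
    mv = []
    for mag, est, flag, pred in zip(abs_dmv, estimated_dmv, match_flags, predicted_mv):
        sign = 1 if est >= 0 else -1
        if flag == 1:
            sign = -sign                 # mismatch: flip the estimated sign
        mv.append(sign * mag + pred)     # first DMV component plus predictor
    return tuple(mv)

# |DMV| = (2, 3); the estimate picked (+2, -3); the y flag says "differs",
# so the actual DMV is (+2, +3); adding the predictor (4, 1) gives (6, 4).
mv = reconstruct_mv((2, 3), (2, -3), (0, 1), (4, 1))
```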
  • the video decoding apparatus 301 in FIG. 3 can reduce the amount of code associated with a motion vector in video encoding.
  • FIG. 5 shows an example of the difference motion vector.
  • a reference image 501 in FIG. 5 is a locally decoded image of an image encoded before the encoding target image.
  • the reference image 501 includes a block 511 that exists at the same position as the encoding target block in the encoding target image, and a reference block 512 for the encoding target block.
  • the motion vector 521 for the encoding target block is a vector from the block 511 to the reference block 512, and is obtained by motion search processing.
  • the prediction vector 522 is obtained from the motion vectors for the blocks around the encoding target block, and the difference motion vector 523 represents the difference between the motion vector 521 and the prediction vector 522.
  • FIG. 6 shows examples of differential motion vector candidates and reference block candidates.
  • the direction from left to right is the positive direction of the x coordinate, and the direction from top to bottom is the positive direction of the y coordinate.
  • by changing the signs of the x component and the y component of the differential motion vector, four differential motion vector candidates 611 to 614 are generated.
  • the difference motion vector candidate 611 is a vector from the end point of the prediction vector 522 toward the reference block candidate 601, and the x component and the y component of the difference motion vector candidate 611 are positive.
  • the difference motion vector candidate 612 is a vector from the end point of the prediction vector 522 toward the reference block candidate 602, and the x component of the difference motion vector candidate 612 is positive and the y component is negative.
  • the difference motion vector candidate 613 is a vector from the end point of the prediction vector 522 toward the reference block candidate 603, and the x component and the y component of the difference motion vector candidate 613 are negative.
  • the difference motion vector candidate 614 is a vector from the end point of the prediction vector 522 toward the reference block candidate 604, and the x component of the difference motion vector candidate 614 is negative and the y component is positive.
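The four candidates 611 to 614 are the four sign combinations of the component magnitudes; a minimal sketch (tuple representation assumed):

```python
def sign_candidates(dmv):
    """All four sign combinations of (|x|, |y|); the list always contains
    the actual differential motion vector (candidates coincide when a
    component is zero)."""
    ax, ay = abs(dmv[0]), abs(dmv[1])
    return [(sx * ax, sy * ay) for sx in (1, -1) for sy in (1, -1)]

cands = sign_candidates((3, -2))   # (3, 2), (3, -2), (-3, 2), (-3, -2)
```

Each candidate, added to the prediction vector, points at one reference block candidate, which is what the selection methods below compare.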
  • FIG. 7 shows a specific example of the video encoding device 101 of FIG. 1. The video encoding device 701 of FIG. 7 includes a block division unit 711, a prediction error generation unit 712, an orthogonal transformation unit 713, a quantization unit 714, an arithmetic coding unit 715, and an encoding control unit 716. Furthermore, the video encoding device 701 includes an intra-frame prediction unit 717, an inter-frame prediction unit 718, a selection unit 719, an inverse quantization unit 720, an inverse orthogonal transform unit 721, a reconstruction unit 722, an in-loop filter 723, and a memory 724.
  • the in-loop filter 723 corresponds to the first encoding unit 111 in FIG. 1.
  • the video encoding device 701 can be implemented as a hardware circuit, for example.
  • each component of the video encoding device 701 may be implemented as an individual circuit or as a single integrated circuit.
  • the video encoding device 701 encodes the input video to be encoded, and outputs the encoded video as an encoded stream.
  • the video encoding device 701 can transmit the encoded stream to the video decoding device 301 in FIG. 3 via a communication network.
  • the encoding target video includes a plurality of images corresponding to a plurality of times.
  • the image at each time corresponds to the image to be encoded and may be called a picture or a frame.
  • Each image may be a color image or a monochrome image.
  • the pixel value may be in RGB format or YUV format.
  • the block division unit 711 divides the encoding target image into a plurality of blocks, and outputs the original image of the encoding target block to the prediction error generation unit 712, the intra-frame prediction unit 717, and the inter-frame prediction unit 718.
  • the intra-frame prediction unit 717 performs intra prediction on the encoding target block, and outputs a prediction image of intra prediction to the selection unit 719.
  • the inter-frame prediction unit 718 performs inter prediction on the encoding target block, and outputs a predicted image of inter prediction to the selection unit 719. At this time, the inter-frame prediction unit 718 obtains a motion vector for the encoding target block by the motion search process, and outputs the obtained motion vector to the arithmetic coding unit 715.
  • the selection unit 719 selects a prediction image output by either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and outputs the prediction image to the prediction error generation unit 712 and the reconstruction unit 722.
  • the prediction error generation unit 712 outputs the difference between the prediction image output from the selection unit 719 and the original image of the encoding target block to the orthogonal transformation unit 713 as a prediction error.
  • the orthogonal transform unit 713 performs orthogonal transform on the prediction error output from the prediction error generation unit 712, and outputs a transform coefficient to the quantization unit 714.
  • the quantization unit 714 quantizes the transform coefficient and outputs the quantization coefficient to the arithmetic coding unit 715 and the inverse quantization unit 720.
  • the arithmetic encoding unit 715 encodes the quantized coefficient output from the quantizing unit 714 and the motion vector output from the inter-frame prediction unit 718 using CABAC, and outputs an encoded stream. Then, the arithmetic encoding unit 715 outputs the amount of information generated by CABAC to the encoding control unit 716.
  • the inverse quantization unit 720 performs inverse quantization on the quantization coefficient output from the quantization unit 714, generates an inverse quantization coefficient, and outputs the generated inverse quantization coefficient to the inverse orthogonal transform unit 721.
  • the inverse orthogonal transform unit 721 performs inverse orthogonal transform on the inverse quantization coefficient, generates a prediction error, and outputs the generated prediction error to the reconstruction unit 722.
  • the reconstruction unit 722 adds the prediction image output from the selection unit 719 and the prediction error output from the inverse orthogonal transform unit 721 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 723 and the memory 724.
  • the in-loop filter 723 performs a filtering process such as a deblocking filter on the reconstructed image output from the reconstructing unit 722 to generate a local decoded image, and outputs the generated local decoded image to the memory 724.
  • the memory 724 stores the reconstructed image output from the reconstructing unit 722 as a locally decoded image and also stores the locally decoded image output from the in-loop filter 723.
  • the locally decoded image stored in the memory 724 is output to the intra-frame prediction unit 717, the inter-frame prediction unit 718, and the arithmetic coding unit 715.
  • the intra-frame prediction unit 717 uses the local decoded pixel value included in the local decoded image as a reference pixel value for the subsequent block
  • the inter-frame prediction unit 718 uses the local decoded image as a reference image for the subsequent image.
  • the encoding control unit 716 determines a quantization parameter (QP) so that the information amount output from the arithmetic encoding unit 715 becomes the target information amount, and outputs the determined QP to the quantization unit 714.
  • QP (quantization parameter)
  • FIG. 8 shows a first functional configuration example of the arithmetic encoding unit 715 of FIG. 7. The arithmetic encoding unit 715 of FIG. 8 includes a determination unit 801, a generation unit 802, and an encoding unit 803.
  • the determination unit 801 includes a difference motion vector calculation unit 811, a difference motion vector candidate calculation unit 812, and an estimated difference motion vector calculation unit 813.
  • the determination unit 801, the generation unit 802, and the encoding unit 803 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG.
  • the difference motion vector calculation unit 811 calculates a difference motion vector representing a difference between the motion vector output from the inter-frame prediction unit 718 and the prediction motion vector for the encoding target block.
  • the predicted motion vector is obtained from the motion vectors for the blocks around the encoding target block by the inter-frame prediction unit 718 or the differential motion vector calculation unit 811.
  • the difference motion vector candidate calculation unit 812 calculates four differential motion vector candidates based on the four combinations of the positive and negative signs of the x component and the y component of the differential motion vector.
  • the estimated difference motion vector calculation unit 813 obtains the four reference block candidates corresponding to the four differential motion vector candidates. Then, the estimated difference motion vector calculation unit 813 calculates an estimated differential motion vector using the locally decoded pixel values of encoded pixels adjacent to the encoding target block and the locally decoded pixel values of encoded pixels included in each of the four reference block candidates.
  • since the estimated differential motion vector is calculated by a method different from the motion search process in the inter-frame prediction unit 718, it does not always match the differential motion vector calculated by the difference motion vector calculation unit 811.
  • the generation unit 802 generates a sign flag indicating whether or not the sign of each component of the differential motion vector matches the sign of the corresponding component of the estimated differential motion vector.
  • the encoding unit 803 encodes the absolute value and sign flag of each component of the differential motion vector, together with the quantization coefficients output by the quantization unit 714, using CABAC context modeling with variable occurrence probabilities. The sign flag of each component corresponds to the match information.
  • the sign flag does not directly indicate the positive or negative sign of each component of the differential motion vector; instead, it indicates whether the sign of each component of the differential motion vector differs from the sign of the corresponding component of the estimated differential motion vector.
  • the occurrence probability of the value "0", indicating that the two signs are the same, can therefore be made higher than the occurrence probability of the value "1", indicating that the two signs differ. As a result, the sign flag can be encoded by arithmetic coding using context modeling, and the code amount of the sign flag is reduced.
  • FIG. 9 shows an example of a first calculation method of the estimated difference motion vector.
  • An encoding target block 903 is included in the area 902 of the encoding target image 901.
  • the estimated difference motion vector calculation unit 813 acquires the local decoded pixel value of the encoded pixel 911 adjacent to the encoding target block 903 in the region 902.
  • the estimated difference motion vector calculation unit 813 arranges each reference block candidate 904 so as to overlap the position of the encoding target block 903 in the region 902, and locally decodes the encoded pixels included in the reference block candidate 904. Get the value.
  • the estimated difference motion vector calculation unit 813 can acquire the locally decoded pixel value of the encoded pixel 912 in the reference block candidate 904 adjacent to the encoded pixel 911.
  • then, the statistical value A0 of the locally decoded pixel values of the encoded pixels 911 and the statistical value Ai (i = 1 to 4) of the locally decoded pixel values of the encoded pixels in each reference block candidate are calculated.
  • as the statistical value A0 and the statistical value Ai, the average, median, mode, or the like of a plurality of locally decoded pixel values can be used.
  • the estimated difference motion vector calculation unit 813 determines the estimated differential motion vector by comparing the statistical value A0 with each statistical value Ai. For example, it obtains the reference block candidate whose statistical value is closest to A0 among the statistical values A1 to A4, and can determine the differential motion vector candidate indicating that reference block candidate as the estimated differential motion vector.
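A sketch of this first method, assuming the adjacent pixels and each candidate's pixels arrive as plain lists of locally decoded values and using the mean as the statistic (the median or mode would work the same way):

```python
def estimate_by_statistics(adjacent_pixels, candidate_pixels):
    """Method 1: return the index of the reference block candidate whose
    pixel-value statistic Ai is closest to the statistic A0 of the encoded
    pixels adjacent to the target block."""
    a0 = sum(adjacent_pixels) / len(adjacent_pixels)
    a = [sum(p) / len(p) for p in candidate_pixels]
    return min(range(len(a)), key=lambda i: abs(a[i] - a0))

# A0 = 100; candidate means are 100, 81, 119, 60.5 -> candidate 0 is chosen.
best = estimate_by_statistics([100, 102, 98],
                              [[99, 101], [80, 82], [120, 118], [60, 61]])
```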
  • FIG. 10 shows an example of a second calculation method of the estimated difference motion vector.
  • the estimated difference motion vector calculation unit 813 calculates the absolute difference value of the local decoded pixel values of the two pixels for the pair 1001 of the encoded pixel 911 and the encoded pixel 912. Then, the estimated difference motion vector calculation unit 813 calculates the difference absolute value sum by accumulating the difference absolute values for a plurality of pairs on the upper boundary and the left boundary of the encoding target block 903.
  • the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
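The second method can be sketched as follows, assuming the boundary pixel pairs (outer image pixel, inner candidate pixel) are already collected per candidate:

```python
def estimate_by_sad(pairs_per_candidate):
    """Method 2: for each candidate, accumulate |outer - inner| over the
    pixel pairs straddling the target block's upper and left boundaries;
    the candidate with the smallest sum of absolute differences wins."""
    sads = [sum(abs(outer - inner) for outer, inner in pairs)
            for pairs in pairs_per_candidate]
    return min(range(len(sads)), key=lambda i: sads[i])

# Candidate 0's pairs differ by 1 each (SAD 2); candidate 1's differ by
# 10 and 7 (SAD 17), so candidate 0 is chosen.
best = estimate_by_sad([[(100, 99), (102, 101)],
                        [(100, 90), (102, 95)]])
```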
  • FIG. 11 shows an example of the third calculation method of the estimated difference motion vector.
  • the estimated difference motion vector calculation unit 813 selects a combination 1101 of four encoded pixels in the vicinity of the boundary of the encoding target block 903.
  • the combination 1101 includes an encoded pixel 1111 to an encoded pixel 1114.
  • the encoded pixel 1111 and the encoded pixel 1112 are pixels included in the region 902 of the encoding target image 901.
  • the encoded pixel 1112 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1111 is adjacent to the encoded pixel 1112.
  • the encoded pixel 1113 and the encoded pixel 1114 are pixels included in the reference block candidate 904; the encoded pixel 1113 is adjacent to the boundary of the encoding target block 903, and the encoded pixel 1114 is adjacent to the encoded pixel 1113.
  • the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1111 and the encoded pixel 1112. Further, the estimated difference motion vector calculation unit 813 calculates a predicted pixel value on the boundary of the encoding target block 903 from the locally decoded pixel values of the encoded pixel 1113 and the encoded pixel 1114.
  • FIG. 12 shows an example of a method for calculating a predicted pixel value on the boundary of the encoding target block 903.
  • the horizontal axis in FIG. 12 represents the y axis of the encoding target image 901, and the vertical axis represents the pixel value.
  • the estimated difference motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p1 corresponding to the y coordinate y1 of the boundary of the encoding target block 903 on the straight line 1201 that passes through the locally decoded pixel values of the encoded pixel 1111 and the encoded pixel 1112.
  • similarly, the estimated difference motion vector calculation unit 813 obtains, as a predicted pixel value, the pixel value p2 corresponding to y1 on the straight line 1202 that passes through the locally decoded pixel values of the encoded pixel 1113 and the encoded pixel 1114.
  • the estimated difference motion vector calculation unit 813 calculates a difference absolute value 1203 between the predicted pixel value p1 and the predicted pixel value p2. Then, the estimated difference motion vector calculation unit 813 accumulates the difference absolute values 1203 for a plurality of combinations on the upper boundary and the left boundary of the encoding target block 903, and calculates a difference absolute value sum.
  • the estimated difference motion vector calculation unit 813 determines the estimated difference motion vector by comparing the four difference absolute value sums with respect to each of the four reference block candidates. For example, the estimated difference motion vector calculation unit 813 can obtain a reference block candidate having the smallest sum of absolute differences, and can determine a difference motion vector candidate indicating the obtained reference block candidate as an estimated difference motion vector.
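The third method can be sketched as below; the four-pixel combinations (two pixels outside the block, two inside the candidate, ordered far-to-near toward the boundary) are assumed to be pre-collected, and the extrapolation follows the straight-line construction of FIG. 12:

```python
def extrapolate(p_near, p_far):
    """Value one step beyond p_near on the line through p_far and p_near,
    i.e. on the block boundary: p_near + (p_near - p_far)."""
    return 2 * p_near - p_far

def estimate_by_extrapolation(combos_per_candidate):
    """Method 3: for each four-pixel combination, extrapolate both lines to
    the boundary, accumulate |p1 - p2|, and pick the candidate with the
    smallest sum of absolute differences."""
    sums = []
    for combos in combos_per_candidate:
        total = 0
        for out_far, out_near, in_near, in_far in combos:
            p1 = extrapolate(out_near, out_far)   # from the image side
            p2 = extrapolate(in_near, in_far)     # from the candidate side
            total += abs(p1 - p2)
        sums.append(total)
    return min(range(len(sums)), key=lambda i: sums[i])

# Outside pixels 98, 100 extrapolate to 102 at the boundary; candidate 0's
# inside pixels 103, 104 also extrapolate to 102, so candidate 0 is chosen.
best = estimate_by_extrapolation([[(98, 100, 103, 104)],
                                  [(98, 100, 90, 85)]])
```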
  • with the first to third calculation methods, an estimated differential motion vector that is highly likely to match the differential motion vector can be obtained with a smaller amount of calculation than the motion search process in the inter-frame prediction unit 718. As a result, the occurrence probability of the value "0" of the sign flag can be made higher than the occurrence probability of the value "1".
  • FIG. 13 is a flowchart showing a specific example of the video encoding process performed by the video encoding device 701 in FIG.
  • the intra-frame prediction unit 717 performs intra prediction on the encoding target block (step 1301)
  • the inter-frame prediction unit 718 performs inter prediction on the encoding target block (step 1302).
  • the prediction error generation unit 712, the orthogonal transformation unit 713, and the quantization unit 714 encode the encoding target block using the prediction image output from either the intra-frame prediction unit 717 or the inter-frame prediction unit 718, and generate quantized coefficients (step 1303). Then, the determination unit 801 and the generation unit 802 of the arithmetic encoding unit 715 generate sign flags for the differential motion vector of the encoding target block (step 1304).
  • the video encoding device 701 determines whether or not the encoding of the encoding target image has been completed (step 1305). When an unprocessed block remains (step 1305, NO), the video encoding device 701 repeats the processing after step 1301 for the next block.
  • when encoding of the encoding target image is completed (step 1305, YES), the encoding unit 803 of the arithmetic encoding unit 715 performs variable-length encoding on the quantization coefficients and the prediction mode information (step 1306).
  • the prediction mode information includes the absolute value and sign flag of each component of the differential motion vector.
  • the video encoding device 701 determines whether or not the encoding of the encoding target video has been completed (step 1307). When an unprocessed image remains (step 1307, NO), the video encoding device 701 repeats the processing after step 1301 for the next image. Then, when encoding of the encoding target video is completed (step 1307, YES), the video encoding device 701 ends the process.
  • FIG. 14 is a flowchart showing an example of the first sign flag generation process in step 1304 of FIG. 13.
  • the estimated difference motion vector is calculated by the first calculation method shown in FIG.
  • the difference motion vector calculation unit 811 of the determination unit 801 calculates a difference motion vector from the motion vector and the predicted motion vector (step 1401).
  • the difference motion vector candidate calculation unit 812 then calculates four differential motion vector candidates based on the four combinations of the signs of the x component and the y component of the differential motion vector (step 1402).
  • the estimated difference motion vector calculation unit 813 obtains four reference block candidates indicated by each of the four difference motion vector candidates (step 1403).
  • the estimated difference motion vector calculation unit 813 calculates a statistical value A0 of the locally decoded pixel value of the encoded pixel adjacent to the encoding target block in the encoding target image (step 1404).
  • the estimated difference motion vector calculation unit 813 calculates statistical values A1 to A4 of the locally decoded pixel values of the encoded pixels included in the respective reference block candidates (step 1405). The estimated difference motion vector calculation unit 813 then obtains the reference block candidate whose statistical value among A1 to A4 is closest to the statistical value A0, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 1406).
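As a sketch of the first calculation method, assuming the statistical value is the mean of the pixel values (the text does not fix a particular statistic) and using hypothetical names:

```python
def estimate_dmv_by_statistics(adjacent_pixels, candidates):
    # adjacent_pixels: locally decoded pixels adjacent to the target block (A0 source)
    # candidates: list of (dmv, block_pixels) for the four reference block candidates
    mean = lambda px: sum(px) / len(px)
    a0 = mean(adjacent_pixels)                    # statistical value A0 (step 1404)
    # A1..A4: one statistical value per candidate (step 1405); pick the closest
    best_dmv, _ = min(((dmv, abs(mean(px) - a0)) for dmv, px in candidates),
                      key=lambda t: t[1])
    return best_dmv                               # estimated DMV (step 1406)
```

The same selection logic applies on the decoder side with decoded pixel values and the statistical values B0 to B4.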
  • the generation unit 802 compares the sign of each component of the differential motion vector with the sign of the corresponding component of the estimated differential motion vector, and determines the value of the sign flag of each component (step 1407). If the sign of a component of the differential motion vector is the same as the sign of the corresponding component of the estimated differential motion vector, the value of that component's sign flag is set to “0”; if the two signs differ, it is set to “1”.
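The per-component sign-flag rule of step 1407 can be sketched as below; treating a zero component as positive is an assumption, since the text does not state how a zero value is signed:

```python
def sign_flag(component, estimated_component):
    # 0 when the two signs match, 1 when they differ (step 1407).
    # A zero component is treated as positive, which is an assumption.
    same = (component >= 0) == (estimated_component >= 0)
    return 0 if same else 1
```

When the estimation is accurate, the flag is biased toward 0, which is what lets CABAC context modeling compress it below one bit per flag.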
  • FIG. 15 is a flowchart showing an example of the second code flag generation process in step 1304 of FIG.
  • the estimated difference motion vector is calculated by the second calculation method shown in FIG.
  • the processing in steps 1501 to 1503 and step 1506 in FIG. 15 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG.
  • In step 1504, when each reference block candidate is arranged at the position of the encoding target block, the estimated difference motion vector calculation unit 813 identifies the encoded pixels in the candidate that are adjacent to the surrounding encoded pixels in the encoding target image. Then, for each reference block candidate, the estimated difference motion vector calculation unit 813 calculates the sum of absolute differences between the locally decoded pixel values of the surrounding encoded pixels in the encoding target image and the locally decoded pixel values of the encoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 813 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 1505).
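A minimal sketch of the selection in steps 1504 and 1505, with hypothetical helper names; the pixel neighborhoods are assumed to be flattened into equal-length lists:

```python
def select_by_sad(surrounding_pixels, candidates):
    # surrounding_pixels: locally decoded pixels around the target block position
    # candidates: list of (dmv, border_pixels), border_pixels being the pixels of
    # the reference block candidate adjacent to those surrounding pixels
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    # The candidate with the smallest SAD wins (steps 1504 and 1505).
    return min(candidates, key=lambda c: sad(surrounding_pixels, c[1]))[0]
```

The decoder-side variant (steps 2103 and 2104) is identical except that decoded pixel values replace locally decoded pixel values.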
  • FIG. 16 is a flowchart showing an example of the third code flag generation process in step 1304 of FIG.
  • the estimated difference motion vector is calculated by the third calculation method shown in FIG.
  • the processing in steps 1601 to 1603 and step 1606 in FIG. 16 is the same as the processing in steps 1401 to 1403 and step 1407 in FIG.
  • In step 1604, the estimated difference motion vector calculation unit 813 identifies two columns of encoded pixels in the encoding target image that are adjacent to the outside of the boundary of the encoding target block. Then, the estimated difference motion vector calculation unit 813 calculates predicted pixel values on the boundary of the encoding target block from the locally decoded pixel values of those two columns of encoded pixels.
  • Next, when each reference block candidate is arranged so as to overlap the position of the encoding target block, the estimated difference motion vector calculation unit 813 identifies two columns of encoded pixels in the reference block candidate that are adjacent to the inside of the boundary of the encoding target block. The estimated difference motion vector calculation unit 813 then calculates predicted pixel values on the boundary of the encoding target block from the locally decoded pixel values of those two columns of encoded pixels.
  • For each reference block candidate, the estimated difference motion vector calculation unit 813 calculates the sum of absolute differences between the predicted pixel values calculated from the encoded pixels in the encoding target image and the predicted pixel values calculated from the encoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 813 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 1605).
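The third calculation method leaves the exact boundary prediction unspecified; the sketch below assumes a simple two-tap linear extrapolation (2·near − far) from the two pixel columns, which is an illustrative choice rather than the patented formula:

```python
def extrapolate_to_boundary(near_col, far_col):
    # Two-tap linear extrapolation onto the block boundary: 2*near - far.
    # The text only states that a predicted value on the boundary is derived
    # from two pixel columns; this particular formula is an assumption.
    return [2 * n - f for n, f in zip(near_col, far_col)]

def third_method_cost(outside_cols, inside_cols):
    # Cost of one reference block candidate (step 1604): SAD between the
    # boundary predictions made from the outside columns (target image) and
    # the inside columns (reference block candidate), each given as
    # (near_column, far_column) relative to the boundary.
    outer = extrapolate_to_boundary(*outside_cols)
    inner = extrapolate_to_boundary(*inside_cols)
    return sum(abs(a - b) for a, b in zip(outer, inner))
```

The candidate with the smallest cost then supplies the estimated differential motion vector, as in step 1605.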
  • FIG. 17 shows a specific example of the video decoding device 301 in FIG. 3. The video decoding device 1701 in FIG. 17 includes an arithmetic decoding unit 1711, an inverse quantization unit 1712, an inverse orthogonal transform unit 1713, a reconstruction unit 1714, an in-loop filter 1715, an intra-frame prediction unit 1716, a motion compensation unit 1717, a selection unit 1718, and a memory 1719.
  • The inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, the in-loop filter 1715, the intra-frame prediction unit 1716, the motion compensation unit 1717, and the selection unit 1718 correspond to the second decoding unit 314 in FIG. 3.
  • the video decoding device 1701 can be implemented as a hardware circuit, for example.
  • each component of the video decoding device 1701 may be mounted as an individual circuit, or may be mounted as one integrated circuit.
  • the video decoding device 1701 decodes the input encoded stream and outputs the decoded video.
  • the video decoding device 1701 can receive the encoded stream from the video encoding device 701 in FIG. 7 via the communication network.
  • the arithmetic decoding unit 1711 decodes the encoded stream by the CABAC decoding method, outputs the quantization coefficient of the decoding target block in the decoding target image to the inverse quantization unit 1712, and outputs the motion vector for the decoding target block to the motion compensation unit 1717.
  • the inverse quantization unit 1712 performs inverse quantization on the quantization coefficient output from the arithmetic decoding unit 1711 to generate an inverse quantization coefficient, and outputs the generated inverse quantization coefficient to the inverse orthogonal transform unit 1713.
  • the inverse orthogonal transform unit 1713 performs inverse orthogonal transform on the inverse quantization coefficient to generate a prediction error, and outputs the generated prediction error to the reconstruction unit 1714.
  • the motion compensation unit 1717 performs motion compensation processing on the decoding target block using the motion vector output from the arithmetic decoding unit 1711 and the reference image output from the memory 1719, generates an inter-prediction image, and outputs it to the selection unit 1718.
  • the selection unit 1718 selects a prediction image output by either the intra-frame prediction unit 1716 or the motion compensation unit 1717 and outputs the prediction image to the reconstruction unit 1714.
  • the reconstruction unit 1714 adds the prediction image output from the selection unit 1718 and the prediction error output from the inverse orthogonal transform unit 1713 to generate a reconstructed image, and outputs the generated reconstructed image to the in-loop filter 1715 and the intra-frame prediction unit 1716.
  • the intra-frame prediction unit 1716 performs intra prediction on the decoding target block using the reconstructed image of the decoded block output from the reconstruction unit 1714, and outputs a prediction image of intra prediction to the selection unit 1718.
  • the in-loop filter 1715 performs a filtering process such as a deblocking filter and a sample adaptive offset filter on the reconstructed image output from the reconstructing unit 1714 to generate a decoded image. Then, the in-loop filter 1715 outputs the decoded image for one frame as the decoded video and also outputs it to the memory 1719.
  • the memory 1719 stores the decoded image output from the in-loop filter 1715.
  • the decoded image stored in the memory 1719 is output to the motion compensation unit 1717 as a reference image for the subsequent image.
  • FIG. 18 shows a first functional configuration example of the arithmetic decoding unit 1711 in FIG.
  • the arithmetic decoding unit 1711 in FIG. 18 includes a decoding unit 1801, a determination unit 1802, and a generation unit 1803.
  • the determination unit 1802 includes a difference motion vector candidate calculation unit 1811 and an estimated difference motion vector calculation unit 1812.
  • the decoding unit 1801, the determination unit 1802, and the generation unit 1803 correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG. 3, respectively.
  • the decoding unit 1801 decodes the encoded stream using a variable occurrence probability by CABAC context modeling, and restores the quantization coefficient of the decoding target block. Further, the decoding unit 1801 restores the absolute values of the x component and the y component of the differential motion vector and the code flags of the x component and the y component of the differential motion vector. Decoding section 1801 then outputs the absolute value of each component of the difference motion vector to determination section 1802 and outputs the sign flag of each component of the difference motion vector to generation section 1803.
  • the difference motion vector candidate calculation unit 1811 calculates four differential motion vector candidates from the absolute values of the x component and the y component of the differential motion vector, based on the four combinations of the positive and negative signs of the x component and the positive and negative signs of the y component.
  • the estimated difference motion vector calculation unit 1812 obtains four reference block candidates corresponding to each of the difference motion vector candidates by using the prediction motion vector and the four difference motion vector candidates for the decoding target block.
  • the predicted motion vector is obtained from the motion vectors already calculated for the blocks around the decoding target block.
  • the estimated difference motion vector calculation unit 1812 calculates the estimated differential motion vector using the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates.
  • the generation unit 1803 determines the sign of each component of the differential motion vector from the sign of the corresponding component of the estimated differential motion vector based on the sign flag of each component, and generates the differential motion vector.
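The per-component sign determination by the generation unit 1803 can be sketched as follows; flag semantics (0 = same sign as the estimated component, 1 = opposite) mirror the encoder-side rule of step 1407, and treating zero as positive is an assumption:

```python
def recover_component(abs_value, estimated_component, flag):
    # Flag 0: take the sign of the estimated component; flag 1: flip it.
    # A zero estimated component is treated as positive (assumption).
    est_sign = 1 if estimated_component >= 0 else -1
    sign = est_sign if flag == 0 else -est_sign
    return sign * abs_value
```

Because the decoder derives the same estimated differential motion vector from the same decoded pixels, this reproduces the encoder's differential motion vector exactly.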
  • the generation unit 1803 calculates a motion vector for the decoding target block by adding the predicted motion vector for the decoding target block and the generated differential motion vector.
  • FIG. 19 is a flowchart showing a specific example of video decoding processing performed by the video decoding device 1701 in FIG.
  • the arithmetic decoding unit 1711 performs variable length decoding on the encoded stream, and generates the quantization coefficient and prediction mode information of the decoding target block (step 1901). Then, the arithmetic decoding unit 1711 checks whether the prediction mode information of the decoding target block indicates inter prediction or intra prediction (step 1902).
  • When the prediction mode information indicates inter prediction, the arithmetic decoding unit 1711 generates a motion vector using the absolute value and the sign flag of each component of the differential motion vector included in the prediction mode information (step 1903). Then, the motion compensation unit 1717 performs motion compensation processing on the decoding target block using the generated motion vector (step 1904).
  • When the prediction mode information indicates intra prediction, the intra-frame prediction unit 1716 performs intra prediction on the decoding target block (step 1907).
  • the inverse quantization unit 1712 and the inverse orthogonal transform unit 1713 decode the quantization coefficient of the decoding target block and generate a prediction error (step 1905). Then, the selection unit 1718, the reconstruction unit 1714, and the in-loop filter 1715 generate a decoded image from the prediction error using the prediction image output by either the motion compensation unit 1717 or the intra-frame prediction unit 1716.
  • the video decoding device 1701 determines whether or not the decoding of the encoded stream has been completed (step 1906). If an unprocessed code string remains (step 1906, NO), the video decoding device 1701 repeats the processing from step 1901 on for the next code string. Then, when decoding of the encoded stream is completed (step 1906, YES), the video decoding device 1701 ends the process.
  • FIG. 20 is a flowchart showing an example of the first motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the first calculation method shown in FIG.
  • the difference motion vector candidate calculation unit 1811 of the determination unit 1802 calculates four differential motion vector candidates based on the four combinations of the signs of the x component and the y component of the differential motion vector (step 2001).
  • the estimated difference motion vector calculation unit 1812 obtains four reference block candidates indicated by each of the four difference motion vector candidates (step 2002).
  • the estimated difference motion vector calculation unit 1812 calculates the statistical value B0 of the decoded pixel value of the decoded pixel adjacent to the decoding target block in the decoding target image (step 2003).
  • the estimated difference motion vector calculation unit 1812 calculates statistical values B1 to B4 of the decoded pixel values of the decoded pixels included in the respective reference block candidates (step 2004). The estimated difference motion vector calculation unit 1812 then obtains the reference block candidate whose statistical value among B1 to B4 is closest to the statistical value B0, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 2005).
  • the generation unit 1803 generates the differential motion vector using the sign flag of each component and the estimated differential motion vector, and calculates the motion vector by adding the predicted motion vector and the differential motion vector (step 2006).
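Putting step 2006 together, a hedged end-to-end sketch with hypothetical names (2-D vectors as tuples, flag 0 meaning the signs match, zero treated as positive):

```python
def decode_motion_vector(pred_mv, abs_dmv, est_dmv, flags):
    # Rebuild each differential component from its absolute value, the sign of
    # the corresponding estimated DMV component, and its sign flag, then add
    # the predicted motion vector (step 2006).
    dmv = []
    for a, e, f in zip(abs_dmv, est_dmv, flags):
        sign = 1 if e >= 0 else -1        # zero treated as positive (assumption)
        if f == 1:                        # flag 1: the signs differ
            sign = -sign
        dmv.append(sign * a)
    return tuple(p + d for p, d in zip(pred_mv, dmv))
```

For example, with predicted motion vector (5, −3), absolute DMV values (2, 4), estimated DMV (−1, 6), and flags (0, 1), the reconstructed DMV is (−2, −4) and the motion vector is (3, −7).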
  • FIG. 21 is a flowchart showing an example of the second motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 2101, step 2102, and step 2105 in FIG. 21 is the same as the processing in step 2001, step 2002, and step 2006 in FIG. 20.
  • In step 2103, when each reference block candidate is arranged so as to overlap the position of the decoding target block, the estimated difference motion vector calculation unit 1812 identifies the decoded pixels in the reference block candidate that are adjacent to the surrounding decoded pixels in the decoding target image. Then, for each reference block candidate, the estimated difference motion vector calculation unit 1812 calculates the sum of absolute differences between the decoded pixel values of the surrounding decoded pixels in the decoding target image and the decoded pixel values of the decoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 1812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 2104).
  • FIG. 22 is a flowchart showing an example of the third motion vector generation process in step 1903 of FIG.
  • the estimated difference motion vector is calculated by the third calculation method shown in FIG.
  • the processing in step 2201, step 2202, and step 2205 in FIG. 22 is the same as the processing in step 2001, step 2002, and step 2006 in FIG.
  • In step 2203, the estimated difference motion vector calculation unit 1812 identifies two columns of decoded pixels in the decoding target image that are adjacent to the outside of the boundary of the decoding target block. Then, the estimated difference motion vector calculation unit 1812 calculates predicted pixel values on the boundary of the decoding target block from the decoded pixel values of those two columns of decoded pixels.
  • Next, when each reference block candidate is arranged at the position of the decoding target block, the estimated difference motion vector calculation unit 1812 identifies two columns of decoded pixels in the reference block candidate that are adjacent to the inside of the boundary of the decoding target block. Then, the estimated difference motion vector calculation unit 1812 calculates predicted pixel values on the boundary of the decoding target block from the decoded pixel values of those two columns of decoded pixels.
  • For each reference block candidate, the estimated difference motion vector calculation unit 1812 calculates the sum of absolute differences between the predicted pixel values calculated from the decoded pixels in the decoding target image and the predicted pixel values calculated from the decoded pixels in the reference block candidate.
  • the estimated difference motion vector calculation unit 1812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the differential motion vector candidate indicating the obtained reference block candidate as the estimated differential motion vector (step 2204).
  • It is also possible for the video encoding device 101 in FIG. 1 and the video decoding device 301 in FIG. 3 to generate a plurality of motion vector candidates by changing the sign of a motion vector component instead of the sign of a differential motion vector component.
  • the first encoding unit 111 in FIG. 1 encodes the encoding target block in the image included in the video.
  • the determining unit 112 generates a plurality of motion vector candidates including the first motion vector by changing the sign indicating whether each component of the first motion vector for the encoding target block is positive or negative. Then, the determination unit 112 determines the second motion vector from these motion vector candidates. At this time, the determination unit 112 determines the second motion vector using the locally decoded pixel values of the encoded pixels included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates and the locally decoded pixel values of the encoded pixels adjacent to the encoding target block.
  • the generation unit 113 generates coincidence information indicating whether or not the sign of each component of the first motion vector matches the sign of the corresponding component of the second motion vector, and the second encoding unit 114 encodes the absolute value of each component of the first motion vector together with the coincidence information.
  • the first decoding unit 311 in FIG. 3 decodes the encoded video, and restores the absolute value of the first motion vector component for the decoding target block in the image included in the encoded video.
  • the first decoding unit 311 restores, together with the absolute value of each component of the first motion vector, coincidence information indicating whether or not the sign indicating whether that component is positive or negative matches the sign of the corresponding component of the second motion vector.
  • the determining unit 312 generates a plurality of motion vector candidates by attaching signs to the absolute values of the components of the first motion vector, and determines the second motion vector from the motion vector candidates. At this time, the determination unit 312 determines the second motion vector using the decoded pixel values of the pixels adjacent to the decoding target block and the decoded pixel values of the pixels included in each of the plurality of reference block candidates indicated by the plurality of motion vector candidates.
  • the generating unit 313 generates the first motion vector from the second motion vector based on the coincidence information, and the second decoding unit 314 decodes the coefficient information of the decoding target block using the first motion vector.
  • the arithmetic encoding unit 715 uses the absolute value and the code flag of each component of the motion vector instead of the absolute value and the code flag of each component of the differential motion vector as the prediction mode information of the inter prediction.
  • the arithmetic encoding unit 715 generates four motion vector candidates based on the four combinations of the signs of the respective components of the motion vector output from the inter-frame prediction unit 718.
  • FIG. 23 shows examples of motion vector candidates and reference block candidates.
  • the reference image 2301 in FIG. 23 includes a block 2311 that exists at the same position as the encoding target block in the encoding target image.
  • the direction from left to right is the positive direction of the x coordinate
  • the direction from top to bottom is the positive direction of the y coordinate.
  • motion vector candidates 2331 to 2334 are generated based on four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
  • the motion vector candidate 2331 is a vector from the block 2311 to the reference block candidate 2321, and the x component and the y component of the motion vector candidate 2331 are positive.
  • the motion vector candidate 2332 is a vector from the block 2311 to the reference block candidate 2322, and the x component of the motion vector candidate 2332 is positive and the y component is negative.
  • the motion vector candidate 2333 is a vector from the block 2311 to the reference block candidate 2323, and the x component and the y component of the motion vector candidate 2333 are negative.
  • the motion vector candidate 2334 is a vector from the block 2311 to the reference block candidate 2324, and the x component of the motion vector candidate 2334 is negative and the y component is positive.
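The geometry of FIG. 23 can be illustrated as below; `reference_block_positions` is a hypothetical helper that maps each sign combination to the position of the corresponding reference block candidate, using the stated convention that x grows rightward and y grows downward:

```python
def reference_block_positions(block_pos, abs_mv):
    # block_pos: position of the co-located block 2311 in the reference image;
    # abs_mv: absolute values of the motion vector components.
    # Each candidate MV picks one sign per component, and the candidate block
    # sits at the co-located position displaced by that MV.
    bx, by = block_pos
    ax, ay = abs_mv
    return {(sx * ax, sy * ay): (bx + sx * ax, by + sy * ay)
            for sx in (1, -1) for sy in (1, -1)}
```

With the co-located block at (100, 200) and absolute values (3, 2), candidate (3, 2) points to (103, 202), matching the positive-x, positive-y case of motion vector candidate 2331.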
  • FIG. 24 shows a second functional configuration example of the arithmetic encoding unit 715 in FIG. 7. The arithmetic encoding unit 715 in FIG. 24 includes a determination unit 2401, a generation unit 2402, and an encoding unit 2403.
  • the determination unit 2401 includes a motion vector candidate calculation unit 2411 and an estimated motion vector calculation unit 2412.
  • the determination unit 2401, the generation unit 2402, and the encoding unit 2403 correspond to the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG.
  • the motion vector candidate calculation unit 2411 calculates four motion vector candidates based on the four combinations of the positive and negative signs of the x component of the motion vector and the positive and negative signs of the y component.
  • the estimated motion vector calculation unit 2412 obtains four reference block candidates corresponding to the four motion vector candidates. Then, the estimated motion vector calculation unit 2412 uses the locally decoded pixel value of the encoded pixel adjacent to the encoding target block and the locally decoded pixel value of the encoded pixel included in each of the four reference block candidates. To calculate an estimated motion vector.
  • the generating unit 2402 generates a sign flag indicating whether or not the sign of each component of the motion vector matches the sign of the corresponding component of the estimated motion vector.
  • the encoding unit 2403 encodes the absolute value and the sign flag of each component of the motion vector and the quantization coefficient output from the quantization unit 714, using CABAC context modeling with a variable occurrence probability.
  • the code amount of the code flag is reduced by arithmetic coding using context modeling, similarly to the code flag of the differential motion vector.
  • FIG. 25 is a flowchart illustrating an example of a fourth code flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in Step 1304 of FIG.
  • the estimated motion vector is calculated by the first calculation method shown in FIG.
  • the processing in step 2503 and step 2504 in FIG. 25 is the same as the processing in step 1404 and step 1405 in FIG.
  • the motion vector candidate calculation unit 2411 of the determination unit 2401 calculates four motion vector candidates based on the four combinations of the signs of the x component and the y component of the motion vector (step 2501).
  • the estimated motion vector calculation unit 2412 obtains four reference block candidates indicated by each of the four motion vector candidates (step 2502).
  • In step 2505, the estimated motion vector calculation unit 2412 obtains the reference block candidate having the statistical value closest to the statistical value A0 among the statistical values A1 to A4, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • the generation unit 2402 compares the sign of each component of the motion vector with the sign of the corresponding component of the estimated motion vector, and determines the value of the sign flag of each component (step 2506). If the sign of a component of the motion vector is the same as the sign of the corresponding component of the estimated motion vector, the value of that component's sign flag is set to “0”; if the two signs differ, it is set to “1”.
  • FIG. 26 is a flowchart illustrating an example of a fifth code flag generation process performed by the arithmetic encoding unit 715 in FIG. 24 in step 1304 in FIG.
  • the estimated motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 2601, step 2602, and step 2605 in FIG. 26 is the same as the processing in step 2501, step 2502, and step 2506 in FIG. 25, and the processing in step 2603 is the same as the processing in step 1504 in FIG. 15.
  • In step 2604, the estimated motion vector calculation unit 2412 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • FIG. 27 is a flowchart illustrating an example of a sixth code flag generation process performed by the arithmetic encoding unit 715 of FIG. 24 in step 1304 of FIG.
  • the estimated motion vector is calculated by the third calculation method shown in FIG.
  • the processing of step 2701, step 2702, and step 2705 of FIG. 27 is the same as the processing of step 2501, step 2502, and step 2506 of FIG. 25, and the processing of step 2703 is the same as the processing of step 1604 of FIG. 16.
  • In step 2704, the estimated motion vector calculation unit 2412 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • FIG. 28 shows a second functional configuration example of the arithmetic decoding unit 1711 in FIG.
  • the arithmetic decoding unit 1711 in FIG. 28 includes a decoding unit 2801, a determination unit 2802, and a generation unit 2803.
  • the determination unit 2802 includes a motion vector candidate calculation unit 2811 and an estimated motion vector calculation unit 2812.
  • the decoding unit 2801, the determination unit 2802, and the generation unit 2803 respectively correspond to the first decoding unit 311, the determination unit 312, and the generation unit 313 in FIG.
  • the decoding unit 2801 decodes the encoded stream using a variable occurrence probability by CABAC context modeling, and restores the quantization coefficient of the decoding target block. Furthermore, the decoding unit 2801 restores the absolute values of the x and y components of the motion vector and the code flags of the x and y components of the motion vector. Then, decoding section 2801 outputs the absolute value of each component of the motion vector to determination section 2802, and outputs the sign flag of each component of the motion vector to generation section 2803.
  • the motion vector candidate calculation unit 2811 calculates four motion vector candidates from the absolute values of the x component and the y component of the motion vector, based on the four combinations of the positive and negative signs of the x component and the positive and negative signs of the y component.
  • the estimated motion vector calculation unit 2812 obtains four reference block candidates corresponding to each of the four motion vector candidates. Then, the estimated motion vector calculation unit 2812 calculates the estimated motion vector using the decoded pixel values of the decoded pixels adjacent to the decoding target block and the decoded pixel values of the decoded pixels included in each of the four reference block candidates.
  • the generation unit 2803 determines the sign of each component of the motion vector from the sign of the corresponding component of the estimated motion vector based on the sign flag of each component, and generates the motion vector for the decoding target block.
  • FIG. 29 is a flowchart illustrating an example of a fourth motion vector generation process performed by the arithmetic decoding unit 1711 in FIG. 28 in step 1903 in FIG.
  • the estimated motion vector is calculated by the first calculation method shown in FIG.
  • the processing in step 2903 and step 2904 in FIG. 29 is the same as the processing in step 2003 and step 2004 in FIG.
  • the motion vector candidate calculation unit 2811 of the determination unit 2802 calculates four motion vector candidates based on the four combinations of the signs of the x component and the y component of the motion vector (step 2901).
  • the estimated motion vector calculation unit 2812 obtains four reference block candidates indicated by each of the four motion vector candidates (step 2902).
  • In step 2905, the estimated motion vector calculation unit 2812 obtains the reference block candidate having the statistical value closest to the statistical value B0 among the statistical values B1 to B4, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • the generation unit 2803 calculates a motion vector using the sign flag of each component of the motion vector and the estimated motion vector (step 2906).
  • FIG. 30 is a flowchart illustrating an example of a fifth motion vector generation process performed by the arithmetic decoding unit 1711 in FIG. 28 in Step 1903 in FIG.
  • the estimated motion vector is calculated by the second calculation method shown in FIG.
  • the processing in step 3001, step 3002, and step 3005 in FIG. 30 is the same as the processing in step 2901, step 2902, and step 2906 in FIG. 29, and the processing in step 3003 is the same as the processing in step 2103 in FIG. 21.
  • In step 3004, the estimated motion vector calculation unit 2812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • FIG. 31 is a flowchart showing an example of a sixth motion vector generation process performed by the arithmetic decoding unit 1711 in FIG. 28 in step 1903 in FIG. 19.
  • the estimated motion vector is calculated by the third calculation method shown in FIG.
  • the processing in step 3101, step 3102, and step 3105 in FIG. 31 is the same as the processing in step 2901, step 2902, and step 2906 in FIG. 29, and the processing in step 3103 is the same as the processing in step 2203 in FIG. 22.
  • in step 3104, the estimated motion vector calculation unit 2812 obtains the reference block candidate having the smallest sum of absolute differences among the four reference block candidates, and determines the motion vector candidate indicating the obtained reference block candidate as the estimated motion vector.
  • FIG. 32 shows a functional configuration example of the video encoding system.
  • the video encoding system 3201 in FIG. 32 includes the video encoding device 701 in FIG. 7 and the video decoding device 1701 in FIG. 17, and is used for various purposes.
  • the video encoding system 3201 may be a video camera, a video transmission device, a video reception device, a videophone system, a computer, or a mobile phone.
  • the configuration of the video encoding device 701 in FIG. 7 is merely an example, and some components may be omitted or changed depending on the use or conditions of the video encoding device.
  • the configuration of the arithmetic encoding unit 715 in FIGS. 8 and 24 is merely an example, and some components may be omitted or changed according to the use or conditions of the video encoding device.
  • the video encoding apparatus may adopt an encoding method other than HEVC, or may adopt a variable length encoding method other than CABAC.
  • the video decoding apparatus may employ a decoding scheme other than HEVC or a variable length decoding scheme other than CABAC.
  • the configuration of the video encoding system 3201 in FIG. 32 is merely an example, and some components may be omitted or changed according to the use or conditions of the video encoding system 3201.
  • the flowcharts in FIGS. 2, 4, 13 to 16, 19 to 22, 25 to 27, and 29 to 31 are merely examples, and some processes may be omitted or changed according to the configuration or conditions of the video encoding device or the video decoding device.
  • in step 2603 of FIG. 26, step 2703 of FIG. 27, step 3003 of FIG. 30, and step 3103 of FIG. 31, another index indicating the degree of difference or similarity may be used instead of the sum of absolute differences.
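One common alternative to the sum of absolute differences is the sum of squared differences (SSD); the choice of SSD here is illustrative, not taken from the application. A minimal sketch, assuming pixel blocks flattened to lists:

```python
def sad(a, b):
    """Sum of absolute differences: robust, cheap to compute."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of squared differences: weights large pixel errors more heavily."""
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

Either function can serve as the ranking key when comparing reference block candidates; what matters is that encoder and decoder agree on the index.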
  • the motion vector, the predicted motion vector, the difference motion vector, the difference motion vector candidates, and the motion vector candidates shown in FIGS. 5, 6, and 23 are merely examples, and these vectors change according to the video to be encoded.
  • the calculation methods shown in FIGS. 9 to 12 are merely examples, and the estimated difference motion vector or the estimated motion vector may be calculated by another calculation method using the local decoded pixel value or the decoded pixel value.
  • the video encoding device 101 in FIG. 1, the video decoding device 301 in FIG. 3, the video encoding device 701 in FIG. 7, and the video decoding device 1701 in FIG. 17 can be implemented as hardware circuits, and can also be implemented using an information processing apparatus (computer) such as the one shown in FIG. 33.
  • the information processing apparatus in FIG. 33 includes a central processing unit (CPU) 3301, a memory 3302, an input device 3303, an output device 3304, an auxiliary storage device 3305, a medium driving device 3306, and a network connection device 3307. These components are connected to each other by a bus 3308.
  • the memory 3302 is a semiconductor memory such as a Read Only Memory (ROM), a Random Access Memory (RAM), or a flash memory, and stores programs and data used for processing.
  • the memory 3302 can be used as the memory 724 in FIG. 7 and the memory 1719 in FIG.
  • the CPU 3301 (processor) operates as, for example, the first encoding unit 111, the determination unit 112, the generation unit 113, and the second encoding unit 114 in FIG. 1 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the first decoding unit 311, the determination unit 312, the generation unit 313, and the second decoding unit 314 in FIG. 3 by executing the program using the memory 3302.
  • the CPU 3301 also operates as the block division unit 711, prediction error generation unit 712, orthogonal transform unit 713, quantization unit 714, arithmetic coding unit 715, and coding control unit 716 in FIG. 7 by executing a program using the memory 3302.
  • the CPU 3301 uses the memory 3302 to execute a program, so that an intra-frame prediction unit 717, an inter-frame prediction unit 718, a selection unit 719, an inverse quantization unit 720, an inverse orthogonal transform unit 721, a reconstruction unit 722, and It also operates as an in-loop filter 723.
  • the CPU 3301 also operates as the determination unit 801, the generation unit 802, and the encoding unit 803 in FIG. 8 by executing the program using the memory 3302.
  • the CPU 3301 also operates as a difference motion vector calculation unit 811, a difference motion vector candidate calculation unit 812, and an estimated difference motion vector calculation unit 813 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the arithmetic decoding unit 1711, the inverse quantization unit 1712, the inverse orthogonal transform unit 1713, the reconstruction unit 1714, and the in-loop filter 1715 in FIG. 17 by executing the program using the memory 3302.
  • the CPU 3301 also operates as an intra-frame prediction unit 1716, a motion compensation unit 1717, and a selection unit 1718 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the decoding unit 1801, the determination unit 1802, the generation unit 1803, the difference motion vector candidate calculation unit 1811, and the estimated difference motion vector calculation unit 1812 in FIG. 18 by executing the program using the memory 3302.
  • the CPU 3301 also operates as the determination unit 2401, the generation unit 2402, the encoding unit 2403, the motion vector candidate calculation unit 2411, and the estimated motion vector calculation unit 2412 in FIG. 24 by executing a program using the memory 3302.
  • the CPU 3301 also operates as the decoding unit 2801, the determination unit 2802, the generation unit 2803, the motion vector candidate calculation unit 2811, and the estimated motion vector calculation unit 2812 in FIG. 28 by executing the program using the memory 3302.
  • the input device 3303 is, for example, a keyboard, a pointing device, or the like, and is used for inputting an instruction or information from a user or an operator.
  • the output device 3304 is, for example, a display device, a printer, a speaker, or the like, and is used to output an inquiry to a user or an operator or a processing result.
  • the processing result may be a decoded video.
  • the auxiliary storage device 3305 is, for example, a magnetic disk device, an optical disk device, a magneto-optical disk device, a tape device, or the like.
  • the auxiliary storage device 3305 may be a hard disk drive.
  • the information processing apparatus can store programs and data in the auxiliary storage device 3305 and load them into the memory 3302 for use.
  • the medium driving device 3306 drives the portable recording medium 3309 and accesses the recorded contents.
  • the portable recording medium 3309 is a memory device, a flexible disk, an optical disk, a magneto-optical disk, or the like.
  • the portable recording medium 3309 may be a Compact Disk Read Only Memory (CD-ROM), Digital Versatile Disk (DVD), or Universal Serial Bus (USB) memory.
  • the computer-readable recording media that store the programs and data used for processing include physical (non-transitory) recording media such as the memory 3302, the auxiliary storage device 3305, and the portable recording medium 3309.
  • the network connection device 3307 is a communication interface circuit that is connected to a communication network such as a Local Area Network (LAN) or the Internet and performs data conversion accompanying communication.
  • the network connection device 3307 can transmit the encoded stream to the video decoding device and receive the encoded stream from the video encoding device.
  • the information processing apparatus can receive a program and data from an external apparatus via the network connection apparatus 3307 and can use them by loading them into the memory 3302.
  • the information processing apparatus does not have to include all the components shown in FIG. 33, and some of the components can be omitted depending on the application or conditions. For example, when an interface with a user or an operator is unnecessary, the input device 3303 and the output device 3304 may be omitted. When the information processing apparatus does not access the portable recording medium 3309, the medium driving device 3306 may be omitted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to the present invention, a first encoding unit encodes a target block in an image included in a video. A determination unit generates a first difference motion vector from a motion vector and a predicted motion vector for the target block. The determination unit then changes the sign indicating whether a component of the first difference motion vector is positive or negative so as to generate a plurality of difference motion vector candidates, and determines a second difference motion vector from among the difference motion vector candidates. In doing so, the determination unit determines the second difference motion vector using the local decoded pixel values of encoded pixels adjacent to the target block and the local decoded pixel values of encoded pixels included in each of a plurality of reference block candidates indicated by the plurality of difference motion vector candidates. A generation unit generates match information indicating whether or not the sign of the component of the first difference motion vector matches the sign of the component of the second difference motion vector. A second encoding unit encodes the absolute value of the component of the first difference motion vector and the match information.
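The encoder-side output described in the abstract — absolute component values plus per-component match flags, rather than explicit signs — can be sketched as follows. This is a hedged illustration: the function names and the treatment of zero components as non-negative are assumptions, not taken from the application.

```python
def match_information(first_dmv, second_dmv):
    """Per-component flags: 1 if the sign of the first difference motion
    vector component agrees with the second (estimated) one, else 0.
    Zero components are treated as non-negative here (an assumption)."""
    return tuple(int((a >= 0) == (b >= 0)) for a, b in zip(first_dmv, second_dmv))

def encode_difference_mv(first_dmv, second_dmv):
    """What would be entropy-coded: the absolute component values of the
    first difference motion vector plus the match information."""
    return tuple(abs(c) for c in first_dmv), match_information(first_dmv, second_dmv)
```

When the estimate is usually correct, the match flags are heavily biased toward 1, which is what makes them cheaper to arithmetic-code than raw sign bits.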
PCT/JP2018/002811 2018-01-30 2018-01-30 Dispositif d'encodage de vidéo, procédé d'encodage de vidéo, dispositif de décodage de vidéo et procédé de décodage de vidéo, et système d'encodage de vidéo WO2019150411A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/002811 WO2019150411A1 (fr) 2018-01-30 2018-01-30 Dispositif d'encodage de vidéo, procédé d'encodage de vidéo, dispositif de décodage de vidéo et procédé de décodage de vidéo, et système d'encodage de vidéo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/002811 WO2019150411A1 (fr) 2018-01-30 2018-01-30 Dispositif d'encodage de vidéo, procédé d'encodage de vidéo, dispositif de décodage de vidéo et procédé de décodage de vidéo, et système d'encodage de vidéo

Publications (1)

Publication Number Publication Date
WO2019150411A1 true WO2019150411A1 (fr) 2019-08-08

Family

ID=67477955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/002811 WO2019150411A1 (fr) 2018-01-30 2018-01-30 Dispositif d'encodage de vidéo, procédé d'encodage de vidéo, dispositif de décodage de vidéo et procédé de décodage de vidéo, et système d'encodage de vidéo

Country Status (1)

Country Link
WO (1) WO2019150411A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019140630A (ja) * 2018-02-15 2019-08-22 日本放送協会 映像符号化装置、映像復号装置、及びこれらのプログラム
WO2023128548A1 (fr) * 2021-12-28 2023-07-06 주식회사 케이티 Procédé de codage/décodage de signal vidéo et support d'enregistrement sur lequel est stocké un flux binaire

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011034148A1 (fr) * 2009-09-18 2011-03-24 シャープ株式会社 Appareils codeur et décodeur, appareils codeur et décodeur d'image animée et codage de données
WO2012042646A1 (fr) * 2010-09-30 2012-04-05 富士通株式会社 Appareil de codage de vidéo de mouvement, procédé de codage de vidéo de mouvement, programme informatique de codage de vidéo de mouvement, appareil de décodage de vidéo de mouvement, procédé de décodage de vidéo de mouvement, programme informatique de décodage de vidéo de mouvement
JP2012235278A (ja) * 2011-04-28 2012-11-29 Jvc Kenwood Corp 動画像符号化装置、動画像符号化方法及び動画像符号化プログラム
WO2012176450A1 (fr) * 2011-06-24 2012-12-27 パナソニック株式会社 Procédé de codage d'images, procédé de décodage d'images, dispositif de codage d'images, dispositif de décodage d'images et dispositif de codage/décodage d'images
WO2013006483A1 (fr) * 2011-07-01 2013-01-10 Qualcomm Incorporated Codage vidéo au moyen de résolution adaptative de vecteurs de mouvement

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019140630A (ja) * 2018-02-15 2019-08-22 日本放送協会 映像符号化装置、映像復号装置、及びこれらのプログラム
JP7076221B2 (ja) 2018-02-15 2022-05-27 日本放送協会 映像符号化装置、映像復号装置、及びこれらのプログラム
JP2022103284A (ja) * 2018-02-15 2022-07-07 日本放送協会 映像符号化装置、映像復号装置、及びこれらのプログラム
JP7361838B2 (ja) 2018-02-15 2023-10-16 日本放送協会 映像符号化装置、映像復号装置、及びこれらのプログラム
WO2023128548A1 (fr) * 2021-12-28 2023-07-06 주식회사 케이티 Procédé de codage/décodage de signal vidéo et support d'enregistrement sur lequel est stocké un flux binaire

Similar Documents

Publication Publication Date Title
US11178421B2 (en) Method and apparatus for encoding/decoding images using adaptive motion vector resolution
JP5277257B2 (ja) 動画像復号化方法および動画像符号化方法
US11641481B2 (en) Method and apparatus for encoding/decoding images using adaptive motion vector resolution
US11284087B2 (en) Image encoding device, image decoding device, and image processing method
KR102070719B1 (ko) 인터 예측 방법 및 그 장치
CN111133759B (zh) 编码或解码视频数据的方法和装置
JP2018511997A (ja) 画像予測方法および関連装置
CN113383550A (zh) 光流修正的提前终止
CN111418214A (zh) 使用重建像素点的语法预测
JP7494403B2 (ja) 復号方法、符号化方法、装置、デバイスおよび記憶媒体
CN117280691A (zh) 增强的运动向量预测
JP6662123B2 (ja) 画像符号化装置、画像符号化方法、及び画像符号化プログラム
WO2019150411A1 (fr) Dispositif d'encodage de vidéo, procédé d'encodage de vidéo, dispositif de décodage de vidéo et procédé de décodage de vidéo, et système d'encodage de vidéo
KR20110048004A (ko) 움직임 벡터 해상도 제한을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치
JP6019797B2 (ja) 動画像符号化装置、動画像符号化方法、及びプログラム
US20150237345A1 (en) Video coding device, video coding method, and video coding program
JP2019140630A (ja) 映像符号化装置、映像復号装置、及びこれらのプログラム
WO2019150435A1 (fr) Dispositif de codage vidéo, procédé de codage vidéo, dispositif de décodage vidéo, procédé de décodage vidéo et système de codage vidéo
KR20200126954A (ko) 인터 예측 방법 및 그 장치
KR102173576B1 (ko) 인터 예측 방법 및 그 장치
WO2021111595A1 (fr) Procédé de génération de filtre, dispositif de génération de filtre et programme
JP6853697B2 (ja) 時間予測動きベクトル候補生成装置、符号化装置、復号装置、及びプログラム
WO2011142221A1 (fr) Dispositif de codage et dispositif de décodage
CN111247804A (zh) 图像处理的方法与装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18904400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18904400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP