GB2590723A - Data encoding and decoding

Data encoding and decoding

Info

Publication number: GB2590723A
Application number: GB1919470.3A
Authority: GB (United Kingdom)
Other versions: GB201919470D0 (en)
Inventors: Karl James Sharman; Stephen Mark Keating; Adrian Richard Browne
Assignee (current and original): Sony Corp
Legal status: Withdrawn


Classifications

All classifications fall under H04N19/00 (methods or arrangements for coding, decoding, compressing or decompressing digital video signals), within H04N (pictorial communication, e.g. television), H04 (electric communication technique), H (electricity):

    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/117 Filters, e.g. for pre-processing or post-processing (adaptive coding)
    • H04N19/136 Incoming video signal characteristics or properties (adaptive coding)
    • H04N19/70 Syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Abstract

A video data decoding method comprises: receiving an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; generating filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; and decoding the encoded video data to generate output video samples, the decoding step comprising applying a filtering operation defined by the filter parameter data. The applying step may comprise decoding the encoded video data to generate intermediate video samples, and filtering the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate the output video samples. A predicted version of video samples may be generated by inter-image prediction with respect to the output video samples. The video sample filter may comprise a sample adaptive offset (SAO) filter; filter offset amounts may have a maximum offset magnitude depending on at least the given bit depth, e.g. greater for a greater given bit depth value.

Description

DATA ENCODING AND DECODING
BACKGROUND
Field
This disclosure relates to data encoding and decoding.
Description of Related Art
The "background" description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, is neither expressly or impliedly admitted as
prior art against the present disclosure.
There are several systems, such as video or image data encoding and decoding systems, which involve transforming video data into a frequency domain representation, quantising the frequency domain coefficients and then applying some form of entropy encoding to the quantised coefficients. This can achieve compression of the video data. A corresponding decoding or decompression technique is applied to recover a reconstructed version of the original video data.
Some example arrangements use a so-called sample adaptive offset (SAO) filter in an encoding-decoding loop. In general terms, in a sample adaptive offset filter, filter parameter data (derived at the encoder and communicated to the decoder) defines one or more offset amounts to be selectively combined with a given intermediate video sample (a sample of the signal 460) by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample. The offset values are communicated from the encoding side to the decoding side.
SUMMARY
The present disclosure addresses or mitigates problems arising from this processing. Respective aspects and features of the present disclosure are defined in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive, of the present technology.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figure 1 schematically illustrates an audio/video (A/V) data transmission and reception system using video data compression and decompression;
Figure 2 schematically illustrates a video display system using video data decompression;
Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression;
Figure 4 schematically illustrates a video camera using video data compression;
Figures 5 and 6 schematically illustrate storage media;
Figure 7 provides a schematic overview of a video data compression and decompression apparatus;
Figure 8 schematically illustrates a predictor;
Figure 9 schematically illustrates an encoding apparatus;
Figure 10 schematically illustrates a decoding apparatus; and
Figures 11 to 13 are schematic flowcharts illustrating respective methods.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring now to the drawings, Figures 1-4 are provided to give schematic illustrations of apparatus or systems making use of the compression and/or decompression apparatus to be described below in connection with embodiments of the present technology.
All of the data compression and/or decompression apparatus to be described below may be implemented in hardware, in software running on a general-purpose data processing apparatus such as a general-purpose computer, as programmable hardware such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or as combinations of these. In cases where the embodiments are implemented by software and/or firmware, it will be appreciated that such software and/or firmware, and non-transitory data storage media by which such software and/or firmware are stored or otherwise provided, are considered as embodiments of the present technology.
Figure 1 schematically illustrates an audio/video data transmission and reception system using video data compression and decompression. In this example, the data values to be encoded or decoded represent image data.
An input audio/video signal 10 is supplied to a video data compression apparatus 20 which compresses at least the video component of the audio/video signal 10 for transmission along a transmission route 30 such as a cable, an optical fibre, a wireless link or the like. The compressed signal is processed by a decompression apparatus 40 to provide an output audio/video signal 50. For the return path, a compression apparatus 60 compresses an audio/video signal for transmission along the transmission route 30 to a decompression apparatus 70.
The compression apparatus 20 and decompression apparatus 70 can therefore form one node of a transmission link. The decompression apparatus 40 and compression apparatus 60 can form another node of the transmission link. Of course, in instances where the transmission link is uni-directional, only one of the nodes would require a compression apparatus and the other node would only require a decompression apparatus.
Figure 2 schematically illustrates a video display system using video data decompression. In particular, a compressed audio/video signal 100 is processed by a decompression apparatus 110 to provide a decompressed signal which can be displayed on a display 120. The decompression apparatus 110 could be implemented as an integral part of the display 120, for example being provided within the same casing as the display device.
Alternatively, the decompression apparatus 110 may be provided as (for example) a so-called set top box (STB), noting that the expression "set-top" does not imply a requirement for the box to be sited in any particular orientation or position with respect to the display 120; it is simply a term used in the art to indicate a device which is connectable to a display as a peripheral device.
Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression. An input audio/video signal 130 is supplied to a compression apparatus 140 which generates a compressed signal for storing by a store device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid state storage device such as a semiconductor memory or other storage device. For replay, compressed data is read from the storage device 150 and passed to a decompression apparatus 160 for decompression to provide an output audio/video signal 170.
It will be appreciated that the compressed or encoded signal, and a storage medium such as a machine-readable non-transitory storage medium, storing that signal, are considered as embodiments of the present technology.
Figure 4 schematically illustrates a video camera using video data compression. In Figure 4, an image capture device 180, such as a charge coupled device (CCD) image sensor and associated control and read-out electronics, generates a video signal which is passed to a compression apparatus 190. A microphone (or plural microphones) 200 generates an audio signal to be passed to the compression apparatus 190. The compression apparatus 190 generates a compressed audio/video signal 210 to be stored and/or transmitted (shown generically as a schematic stage 220).
The techniques to be described below relate primarily to video data compression and decompression. It will be appreciated that many existing techniques may be used for audio data compression in conjunction with the video data compression techniques which will be described, to generate a compressed audio/video signal. Accordingly, a separate discussion of audio data compression will not be provided. It will also be appreciated that the data rate associated with video data, in particular broadcast quality video data, is generally very much higher than the data rate associated with audio data (whether compressed or uncompressed). It will therefore be appreciated that uncompressed audio data could accompany compressed video data to form a compressed audio/video signal. It will further be appreciated that although the present examples (shown in Figures 1-4) relate to audio/video data, the techniques to be described below can find use in a system which simply deals with (that is to say, compresses, decompresses, stores, displays and/or transmits) video data. That is to say, the embodiments can apply to video data compression without necessarily having any associated audio data handling at all.
Figure 4 therefore provides an example of a video capture apparatus comprising an image sensor and an encoding apparatus of the type to be discussed below. Figure 2 therefore provides an example of a decoding apparatus of the type to be discussed below and a display to which the decoded images are output.
A combination of Figures 2 and 4 may provide a video capture apparatus comprising an image sensor 180 and encoding apparatus 190, decoding apparatus 110 and a display 120 to which the decoded images are output.
Figures 5 and 6 schematically illustrate storage media, which store (for example) the compressed data generated by the apparatus 20, 60, the compressed data input to the apparatus 110 or the storage media or stages 150, 220. Figure 5 schematically illustrates a disc storage medium such as a magnetic or optical disc, and Figure 6 schematically illustrates a solid state storage medium such as a flash memory. Note that Figures 5 and 6 can also provide examples of non-transitory machine-readable storage media which store computer software which, when executed by a computer, causes the computer to carry out one or more of the methods to be discussed below.
Therefore, the above arrangements provide examples of video storage, capture, transmission or reception apparatuses embodying any of the present techniques.
Figure 7 provides a schematic overview of a video or image data compression and decompression apparatus, for encoding and/or decoding image data representing one or more images.
A controller 343 controls the overall operation of the apparatus and, in particular when referring to a compression mode, controls trial encoding processes by acting as a selector to select various modes of operation such as block sizes and shapes, and whether the video data is to be encoded losslessly or otherwise. The controller is considered to form part of the image encoder or image decoder (as the case may be). Successive images of an input video signal 300 are supplied to an adder 310 and to an image predictor 320. The image predictor 320 will be described below in more detail with reference to Figure 8. The image encoder or decoder (as the case may be) plus the intra-image predictor of Figure 8 may use features from the apparatus of Figure 7. This does not mean that the image encoder or decoder necessarily requires every feature of Figure 7, however.
The adder 310 in fact performs a subtraction (negative addition) operation, in that it receives the input video signal 300 on a "+" input and the output of the image predictor 320 on a "-" input, so that the predicted image is subtracted from the input image. The result is to generate a so-called residual image signal 330 representing the difference between the actual and predicted images.
One reason why a residual image signal is generated is as follows. The data coding techniques to be described, that is to say the techniques which will be applied to the residual image signal, tend to work more efficiently when there is less "energy" in the image to be encoded. Here, the term "efficiently" refers to the generation of a small amount of encoded data; for a particular image quality level, it is desirable (and considered "efficient") to generate as little data as is practicably possible. The reference to "energy" in the residual image relates to the amount of information contained in the residual image. If the predicted image were to be identical to the real image, the difference between the two (that is to say, the residual image) would contain zero information (zero energy) and would be very easy to encode into a small amount of encoded data. In general, if the prediction process can be made to work reasonably well such that the predicted image content is similar to the image content to be encoded, the expectation is that the residual image data will contain less information (less energy) than the input image and so will be easier to encode into a small amount of encoded data.
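To illustrate this point concretely, the following minimal C++ sketch (an illustration only; the function names and container types are assumptions, not taken from this disclosure) forms a residual block by per-sample subtraction, as the adder 310 does, and measures its energy as a sum of squared values:

#include <cstddef>
#include <cstdint>
#include <vector>

// Residual = input - predicted, per sample (the adder 310 acting as a subtractor).
std::vector<int32_t> makeResidual(const std::vector<int32_t>& input,
                                  const std::vector<int32_t>& predicted) {
    std::vector<int32_t> residual(input.size());
    for (std::size_t i = 0; i < input.size(); ++i)
        residual[i] = input[i] - predicted[i];
    return residual;
}

// "Energy" of the residual: zero for a perfect prediction, and generally
// lower (and so cheaper to encode) the better the prediction.
int64_t residualEnergy(const std::vector<int32_t>& residual) {
    int64_t sum = 0;
    for (const int32_t r : residual)
        sum += static_cast<int64_t>(r) * r;
    return sum;
}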
Therefore, encoding (using the adder 310) involves predicting an image region for an image to be encoded; and generating a residual image region dependent upon the difference between the predicted image region and a corresponding region of the image to be encoded. In connection with the techniques to be discussed below, the ordered array of data values comprises data values of a representation of the residual image region. Decoding involves predicting an image region for an image to be decoded; generating a residual image region indicative of differences between the predicted image region and a corresponding region of the image to be decoded; in which the ordered array of data values comprises data values of a representation of the residual image region; and combining the predicted image region and the residual image region.
The remainder of the apparatus acting as an encoder (to encode the residual or difference image) will now be described.
The residual image data 330 is supplied to a transform unit or circuitry 340 which generates a discrete cosine transform (DCT) representation of blocks or regions of the residual image data. The DCT technique itself is well known and will not be described in detail here.
Note also that the use of DCT is only illustrative of one example arrangement. Other transforms which might be used include, for example, the discrete sine transform (DST). A transform could also comprise a sequence or cascade of individual transforms, such as an arrangement in which one transform is followed (whether directly or not) by another transform. The choice of transform may be determined explicitly and/or be dependent upon side information used to configure the encoder and decoder. In other examples a so-called "transform skip" mode can selectively be used in which no transform is applied.
Therefore, in examples, an encoding and/or decoding method comprises predicting an image region for an image to be encoded; and generating a residual image region dependent upon the difference between the predicted image region and a corresponding region of the image to be encoded; in which the ordered array of data values (to be discussed below) comprises data values of a representation of the residual image region.
The output of the transform unit 340, which is to say (in an example) a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350. Various quantisation techniques are known in the field of video data compression, ranging from a simple multiplication by a quantisation scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter. The general aim is twofold. Firstly, the quantisation process reduces the number of possible values of the transformed data. Secondly, the quantisation process can increase the likelihood that values of the transformed data are zero. Both of these can make the entropy encoding process, to be described below, work more efficiently in generating small amounts of compressed video data.
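As a minimal sketch of the simplest variant mentioned here (quantisation by a scaling factor; the step size and rounding rule are illustrative assumptions, whereas practical codecs use integer approximations and lookup tables driven by a quantisation parameter):

#include <cmath>
#include <cstdint>

// Quantisation: larger step sizes leave fewer possible values and more zeros,
// both of which help the entropy encoding stage.
int32_t quantise(int32_t coefficient, double stepSize) {
    return static_cast<int32_t>(std::lround(coefficient / stepSize));
}

// Approximate inverse, as used on the decoding side and in the return path.
int32_t dequantise(int32_t level, double stepSize) {
    return static_cast<int32_t>(std::lround(level * stepSize));
}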
A data scanning process is applied by a scan unit 360. The purpose of the scanning process is to reorder the quantised transformed data so as to gather as many as possible of the non-zero quantised transformed coefficients together, and of course therefore to gather as many as possible of the zero-valued coefficients together. These features can allow so-called run-length coding or similar techniques to be applied efficiently. So, the scanning process involves selecting coefficients from the quantised transformed data, and in particular from a block of coefficients corresponding to a block of image data which has been transformed and quantised, according to a "scanning order" so that (a) all of the coefficients are selected once as part of the scan, and (b) the scan tends to provide the desired reordering. One example scanning order which can tend to give useful results is a so-called up-right diagonal scanning order.
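One plausible form of such a scan is sketched below (an illustration, not necessarily the exact traversal of any particular standard): each anti-diagonal of an n x n block is visited in turn, traversed from bottom-left to top-right:

#include <algorithm>
#include <utility>
#include <vector>

// Generate an up-right diagonal scan order for an n x n coefficient block.
// Each diagonal satisfies x + y == d and is traversed with y decreasing.
std::vector<std::pair<int, int>> upRightDiagonalScan(int n) {
    std::vector<std::pair<int, int>> order;
    for (int d = 0; d <= 2 * (n - 1); ++d)
        for (int y = std::min(d, n - 1); y >= std::max(0, d - n + 1); --y)
            order.emplace_back(d - y, y);  // (x, y): every coefficient selected once
    return order;
}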
The scanning order can be different, as between transform-skip blocks and transform blocks (blocks which have undergone at least one spatial frequency transformation).
The scanned coefficients are then passed to an entropy encoder (EE) 370. Again, various types of entropy encoding may be used. Two examples are variants of the so-called CABAC (Context Adaptive Binary Arithmetic Coding) system and variants of the so-called CAVLC (Context Adaptive Variable-Length Coding) system. In general terms, CABAC is considered to provide a better efficiency, and in some studies has been shown to provide a 10-20% reduction in the quantity of encoded output data for a comparable image quality compared to CAVLC. However, CAVLC is considered to represent a much lower level of complexity (in terms of its implementation) than CABAC. Note that the scanning process and the entropy encoding process are shown as separate processes, but in fact can be combined or treated together. That is to say, the reading of data into the entropy encoder can take place in the scan order. Corresponding considerations apply to the respective inverse processes to be described below.
The output of the entropy encoder 370, along with additional data (mentioned above and/or discussed below), for example defining the manner in which the predictor 320 generated the predicted image, whether the compressed data was transformed or transform-skipped or the like, provides a compressed output video signal 380.
However, a return path 390 is also provided because the operation of the predictor 320 itself depends upon a decompressed version of the compressed output data.
The reason for this feature is as follows. At the appropriate stage in the decompression process (to be described below) a decompressed version of the residual data is generated. This decompressed residual data has to be added to a predicted image to generate an output image (because the original residual data was the difference between the input image and a predicted image). In order that this process is comparable, as between the compression side and the decompression side, the predicted images generated by the predictor 320 should be the same during the compression process and during the decompression process. Of course, at decompression, the apparatus does not have access to the original input images, but only to the decompressed images. Therefore, at compression, the predictor 320 bases its prediction (at least, for inter-image encoding) on decompressed versions of the compressed images.
The entropy encoding process carried out by the entropy encoder 370 is considered (in at least some examples) to be "lossless", which is to say that it can be reversed to arrive at exactly the same data which was first supplied to the entropy encoder 370. So, in such examples the return path can be implemented before the entropy encoding stage. Indeed, the scanning process carried out by the scan unit 360 is also considered lossless, so in the present embodiment the return path 390 is from the output of the quantiser 350 to the input of a complementary inverse quantiser 420. In instances where loss or potential loss is introduced by a stage, that stage (and its inverse) may be included in the feedback loop formed by the return path. For example, the entropy encoding stage can at least in principle be made lossy, for example by techniques in which bits are encoded within parity information. In such an instance, the entropy encoding and decoding should form part of the feedback loop.
In general terms, an entropy decoder 410, the reverse scan unit 400, an inverse quantiser 420 and an inverse transform unit or circuitry 430 provide the respective inverse functions of the entropy encoder 370, the scan unit 360, the quantiser 350 and the transform unit 340. For now, the discussion will continue through the compression process; the process to decompress an input compressed video signal will be discussed separately below.
In the compression process, the quantised coefficients are passed by the return path 390 from the quantiser 350 to the inverse quantiser 420, which carries out the inverse operation of the quantiser 350. An inverse quantisation and inverse transformation process are carried out by the units 420, 430 to generate a compressed-decompressed residual image signal 440. The image signal 440 is added, at an adder 450, to the output of the predictor 320 to generate a reconstructed output image 460 (although this may be subject to so-called loop filtering and/or other filtering before being output - see below). This forms one input to the image predictor 320, as will be described below.
Turning now to the decoding process applied to decompress a received compressed video signal 470, the signal is supplied to the entropy decoder 410 and from there to the chain of the reverse scan unit 400, the inverse quantiser 420 and the inverse transform unit 430 before being added to the output of the image predictor 320 by the adder 450. So, at the decoder side, the decoder reconstructs a version of the residual image and then applies this (by the adder 450) to the predicted version of the image (on a block by block basis) so as to decode each block. In straightforward terms, the output 460 of the adder 450 forms the output decompressed video signal 480 (subject to the filtering processes discussed below). In practice, further filtering may optionally be applied (for example, by a loop filter 565 shown in Figure 8 but omitted from Figure 7 for clarity of that higher level diagram) before the signal is output.
The apparatus of Figures 7 and 8 can act as a compression (encoding) apparatus or a decompression (decoding) apparatus. The functions of the two types of apparatus substantially overlap. The scan unit 360 and entropy encoder 370 are not used in a decompression mode, and the operation of the predictor 320 (which will be described in detail below) and other units follow mode and parameter information contained in the received compressed bit-stream rather than generating such information themselves.
Figure 8 schematically illustrates the generation of predicted images, and in particular the operation of the image predictor 320.
There are two basic modes of prediction carried out by the image predictor 320: so-called intra-image prediction and so-called inter-image, or motion-compensated (MC), prediction. At the encoder side, each involves detecting a prediction direction in respect of a current block to be predicted, and generating a predicted block of samples according to other samples (in the same (intra) or another (inter) image). By virtue of the units 310 or 450, the difference between the predicted block and the actual block is encoded or applied so as to encode or decode the block respectively.
(At the decoder, or at the reverse decoding side of the encoder, the detection of a prediction direction may be in response to data associated with the encoded data by the encoder, indicating which direction was used at the encoder. Or the detection may be in response to the same factors as those on which the decision was made at the encoder).
Intra-image prediction bases a prediction of the content of a block or region of the image on data from within the same image. This corresponds to so-called I-frame encoding in other video compression techniques. In contrast to I-frame encoding, however, which involves encoding the whole image by intra-encoding, in the present embodiments the choice between intra- and inter-encoding can be made on a block-by-block basis, though in other embodiments the choice is still made on an image-by-image basis.
Motion-compensated prediction is an example of inter-image prediction and makes use of motion information which attempts to define the source, in another adjacent or nearby image, of image detail to be encoded in the current image. Accordingly, in an ideal example, the contents of a block of image data in the predicted image can be encoded very simply as a reference (a motion vector) pointing to a corresponding block at the same or a slightly different position in an adjacent image.
A technique known as "block copy" prediction is in some respects a hybrid of the two, as it uses a vector to indicate a block of samples at a position displaced from the currently predicted block within the same image, which should be copied to form the currently predicted block.
Returning to Figure 8, two image prediction arrangements (corresponding to intra- and inter-image prediction) are shown, the results of which are selected by a multiplexer 500 under the control of a mode signal 510 (for example, from the controller 343) so as to provide blocks of the predicted image for supply to the adders 310 and 450. The choice is made in dependence upon which selection gives the lowest "energy" (which, as discussed above, may be considered as information content requiring encoding), and the choice is signalled to the decoder within the encoded output data-stream. Image energy, in this context, can be detected, for example, by carrying out a trial subtraction of an area of the two versions of the predicted image from the input image, squaring each pixel value of the difference image, summing the squared values, and identifying which of the two versions gives rise to the lower mean squared value of the difference image relating to that image area. In other examples, a trial encoding can be carried out for each selection or potential selection, with a choice then being made according to the cost of each potential selection in terms of one or both of the number of bits required for encoding and distortion to the picture.
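The first of these selection strategies might be sketched as follows (the interface and names are assumptions; as the text notes, a real encoder may instead, or additionally, weigh bit cost and distortion from trial encodings):

#include <cstddef>
#include <cstdint>
#include <vector>

enum class PredictionMode { Intra, Inter };

// Sum of squared differences between a source block and a trial prediction.
int64_t sse(const std::vector<int32_t>& source,
            const std::vector<int32_t>& prediction) {
    int64_t total = 0;
    for (std::size_t i = 0; i < source.size(); ++i) {
        const int64_t d = source[i] - prediction[i];
        total += d * d;
    }
    return total;
}

// Pick whichever mode leaves less residual "energy" to encode.
PredictionMode selectMode(const std::vector<int32_t>& source,
                          const std::vector<int32_t>& intraTrial,
                          const std::vector<int32_t>& interTrial) {
    return sse(source, intraTrial) <= sse(source, interTrial)
               ? PredictionMode::Intra
               : PredictionMode::Inter;
}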
The actual prediction, in the intra-encoding system, is made on the basis of image blocks received as part of the signal 460 (as filtered by loop filtering; see below), which is to say, the prediction is based upon encoded-decoded image blocks in order that exactly the same prediction can be made at a decompression apparatus. However, data can be derived from the input video signal 300 by an intra-mode selector 520 to control the operation of the intra-image predictor 530.
For inter-image prediction, a motion compensated (MC) predictor 540 uses motion information such as motion vectors derived by a motion estimator 550 from the input video signal 300. Those motion vectors are applied to a processed version of the reconstructed image 460 by the motion compensated predictor 540 to generate blocks of the inter-image prediction. Accordingly, the units 530 and 540 (operating with the estimator 550) each act as detectors to detect a prediction direction in respect of a current block to be predicted, and as a generator to generate a predicted block of samples (forming part of the prediction passed to the units 310 and 450) according to other samples defined by the prediction direction.
The processing applied to the signal 460 will now be described.
Firstly, the signal may be filtered by a so-called loop filter 565. Various types of loop filters may be used. One technique involves applying a "deblocking" filter to remove or at least tend to reduce the effects of the block-based processing carried out by the transform unit 340 and subsequent operations. A further technique involving applying a so-called sample adaptive offset (SAO) filter may also be used. In general terms, in a sample adaptive offset filter, filter parameter data (derived at the encoder and communicated to the decoder) defines one or more offset amounts to be selectively combined with a given intermediate video sample (a sample of the signal 460) by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
Also, an adaptive loop filter is optionally applied using coefficients derived by processing the reconstructed signal 460 and the input video signal 300. The adaptive loop filter is a type of filter which, using known techniques, applies adaptive filter coefficients to the data to be filtered.
That is to say, the filter coefficients can vary in dependence upon various factors. Data defining which filter coefficients to use is included as part of the encoded output data-stream. Techniques to be discussed below relate to the handling of parameter data relating to the operation of filters. The actual filtering operations (such as SAO filtering) may use otherwise known techniques.
The filtered output from the loop filter unit 565 in fact forms the output video signal 480 when the apparatus is operating as a decompression apparatus. It is also buffered in one or more image or frame stores 570; the storage of successive images is a requirement of motion compensated prediction processing, and in particular the generation of motion vectors. To save on storage requirements, the stored images in the image stores 570 may be held in a compressed form and then decompressed for use in generating motion vectors. For this particular purpose, any known compression / decompression system may be used. The stored images may be passed to an interpolation filter 580 which generates a higher resolution version of the stored images; in this example, intermediate samples (sub-samples) are generated such that the resolution of the interpolated image output by the interpolation filter 580 is 4 times (in each dimension) that of the images stored in the image stores 570 for the luminance channel of 4:2:0 and 8 times (in each dimension) that of the images stored in the image stores 570 for the chrominance channels of 4:2:0. The interpolated images are passed as an input to the motion estimator 550 and also to the motion compensated predictor 540.
The way in which an image is partitioned for compression processing will now be described. At a basic level, an image to be compressed is considered as an array of blocks or regions of samples. The splitting of an image into such blocks or regions can be carried out by a decision tree, such as that described in SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video: High Efficiency Video Coding, Recommendation ITU-T H.265 (12/2016); and also in High Efficiency Video Coding (HEVC): Algorithms and Architectures, chapter 3, Editors: Madhukar Budagavi, Gary J. Sullivan, Vivienne Sze; ISBN 978-3-319-06894-7; 2014, which are incorporated herein in their respective entireties by reference.
In some examples, the resulting blocks or regions have sizes and, in some cases, shapes which, by virtue of the decision tree, can generally follow the disposition of image features within the image. This in itself can allow for an improved encoding efficiency because samples representing or following similar image features would tend to be grouped together by such an arrangement. In some examples, square blocks or regions of different sizes (such as 4x4 samples up to, say, 64x64 or larger blocks) are available for selection. In other example arrangements, blocks or regions of different shapes such as rectangular blocks (for example, vertically or horizontally oriented) can be used. Other non-square and non-rectangular blocks are envisaged. The result of the division of the image into such blocks or regions is (in at least the present examples) that each sample of an image is allocated to one, and only one, such block or region.
SAO filter
Figures 9 and 10 respectively refer to aspects on the encoder side and the decoder side (as discussed with respect to Figure 7 for example) relating to the operation of the loop filter 565, which in this example may be (or at least include) an SAO filter which receives reconstructed data 460 and outputs filtered output video 480.
The present description does not relate to specific aspects of the internal operation of the SAO filter itself; but rather to techniques for providing parameters to the SAO filter.
Therefore, the SAO filter itself may operate using otherwise known techniques.
As mentioned above, in an SAO filter, filter parameter data defines one or more offset amounts to be selectively combined with a given sample of the signal 460 according to a value of that sample or of one or more surrounding samples having a predetermined spatial relationship to that sample. Therefore, the data provided to the SAO filter defines at least the offset values. These may be signed offset values or may be provided as magnitude values with signs associated with them by the normal operation of the SAO filter 565.
Generally, SAO filters use a signed offset whose magnitude is clipped to:

( 1 << ( Min( bitDepth, 10 ) - 5 ) ) - 1

Here, "Min" is a minimum function outputting the minimum of its operands, and bitDepth is the current or prevailing bit depth of the data being encoded.
The magnitude of this value is in some examples unary coded in equiprobable (EP) CABAC bins, which implies that it could require up to 31 bits. Its sign may be encoded separately, for example by another bit.
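The clipping expression can be transcribed directly into code; the small printing harness around it is an assumption added for illustration:

#include <algorithm>
#include <cstdio>

// Maximum SAO offset magnitude: ( 1 << ( Min( bitDepth, 10 ) - 5 ) ) - 1
int maxSaoOffsetMagnitude(int bitDepth) {
    return (1 << (std::min(bitDepth, 10) - 5)) - 1;
}

int main() {
    // The cap stops growing beyond 10-bit data: 8-bit gives 7, while
    // 10-, 12- and 16-bit all give 31 (hence up to 31 unary-coded bins).
    for (const int bd : {8, 10, 12, 16})
        std::printf("bitDepth %2d -> max offset magnitude %d\n",
                    bd, maxSaoOffsetMagnitude(bd));
    return 0;
}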
In empirical tests of previously proposed arrangements, the SAO filter is less commonly selected for use (by the controller 343 in response to algorithms to improve the efficiency of encoding) for bit depths of 12 or more bits than for smaller bit depths, as the offset is relatively small compared to the bit depth of the data in question. For a bit depth of 16 bits or above the SAO filter is rarely selected. In other words, the offsets introduced by the SAO process make a relatively minor difference at higher bit depths. Techniques to address this and to render the SAO filter arrangement more useful, or at least potentially more useful, for bit depths higher than 10 bits will be discussed below. Note however that the same techniques can be useful at other bit depths and are not restricted to the range of greater than 10 bits.
Turning to Figure 9, a part of the operation of an encoder or encoding apparatus is provided, comprising a generator 900 to generate filter data defining a filtering operation such as an SAO filtering operation (but the technique may be applicable to other filter operations).
For example, the filter data may define at least an aspect of one or more offset values. This filter data 910 is provided to modification circuitry 920 and also to an encoder 930 which encodes a representation of the filter data to the encoded data stream output by the encoding apparatus. The modification circuitry 920 and the encoder 930 are both also responsive to a prevailing quantisation parameter 940, and the modification circuitry 920 is further responsive to a parameter 950 indicative of the bit depth of the current data being encoded along with a flag 960 indicating re-use of SAO filter parameters.
The generator 900 generates filter data such as offset data according to otherwise known techniques. The selection of whether or not to use the SAO filter is handled by the controller 343. The selection of whether or not to re-use coefficients is handled by the controller 343. In both of these operations by the controller 343, any full or partial trial encoding on which such a decision is made is based upon the modified filter parameter data (as output by the modification circuitry 920) rather than the raw filter data as generated by the generator 900.
Regarding the re-use flag, it is possible for the controller 343 to determine that the most beneficial step in terms of encoding efficiency is for the SAO filter to re-use filter parameters associated with a previously encoded or decoded coding tree unit (CTU) or the like. For this reason, a store 970 is provided so that when the re-use flag is set, SAO filter parameters such as offset values are simply read from the store 970 for use by the SAO filter 565. Similarly, at each CTU for which new parameters are generated, these are stored by the modification circuitry 920 in the store 970 in case they are required by a subsequently handled CTU.
The modification circuitry takes the filter data 910 generated by the generator 900 and produces filter parameter data 980 for actual use by the SAO filter 565. It achieves this by applying a predetermined operation to the filter data 910, the predetermined operation being dependent upon at least the bit depth 950 associated with the video samples to be encoded. In an example, the predetermined operation comprises a scaling operation in which the filter data 910 is scaled by a scaling factor. In some examples, the modification circuitry 920 is configured to derive the scaling factor in dependence upon a difference between the prevailing bit depth and a reference bit depth, for example as a scaling factor equal to the difference between the prevailing bit depth and the reference bit depth.
In some examples, the reference bit depth is 10 bits and a left shift of (prevailing bit depth minus 10) is applied to the filter data 910 to generate the filter parameter data 980.
A previously proposed derivation of SAO filter parameters is taken from the most recent VVC Draft (at the priority date of the present application) (JVET-P2001-vE), at equation 155 in section 7.4.10.3 (at least this section being incorporated in the present application by reference):

SaoOffsetVal[ cIdx ][ rx ][ ry ][ i + 1 ] = ( 1 - 2 * sao_offset_sign[ cIdx ][ rx ][ ry ][ i ] ) * sao_offset_abs[ cIdx ][ rx ][ ry ][ i ]

In examples of the present disclosure, to provide the modification discussed above and also so as to exclude prevailing bit depths of less than 10 causing instead a right shift, the expression used is as follows:

SaoOffsetVal[ cIdx ][ rx ][ ry ][ i + 1 ] = ( 1 - 2 * sao_offset_sign[ cIdx ][ rx ][ ry ][ i ] ) * ( sao_offset_abs[ cIdx ][ rx ][ ry ][ i ] << ( bitDepth - Min( 10, bitDepth ) ) )

Therefore, in these examples, a left shift (of bitDepth - 10), where 10 is (in this example) a reference bit depth, is initiated for any bit depth greater than 10.
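The modified expression may be transcribed as a scalar C++ function (the signature is an assumption; the array indices cIdx, rx, ry and i are dropped for clarity):

#include <algorithm>

// Modified SAO offset derivation: the decoded magnitude is left-shifted by
// (bitDepth - 10) for bit depths above the reference depth of 10, while
// Min(10, bitDepth) prevents a negative (i.e. right) shift below 10 bits.
int saoOffsetVal(int sao_offset_abs, int sao_offset_sign, int bitDepth) {
    const int shift = bitDepth - std::min(10, bitDepth);
    return (1 - 2 * sao_offset_sign) * (sao_offset_abs << shift);
}

// For example, sao_offset_abs = 31 at bitDepth = 16 yields 31 << 6 = 1984,
// an offset commensurate with the 16-bit dynamic range, at no extra cost
// in the encoded data stream.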
Accordingly, a first version of the filter data is generated and encoded to the data stream independently of the current or prevailing bit depth, but then at the encoder and (as will be described below) at the decoder, that value is modified in dependence upon at least the bit depth to generate the filter parameter data 980 which is actually used by the SAO filter 565.
This can provide for more relevant filter parameter data, for example offsets which are closer in magnitude to the dynamic range provided by the bit depth in use, without necessarily incurring any greater cost in the data stream for the transmission of those larger offsets.
This provides an example of an arrangement in which the one or more offset amounts of an SAO filter have a maximum offset magnitude which depends upon at least the given or prevailing bit depth. For example, the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
The amount by which the filter data is varied or scaled can depend as discussed above upon a difference between the given bit depth and a reference bit depth. For example a scaling factor representing shifting by a number of bit positions can be equal (or at least proportional) to the difference between the given bit depth and the reference bit depth.
At the decoder (Figure 10), data received from parameter set header data 1005, such as picture parameter sets, sequence parameter sets and the like, provides the bit depth 1000. Also from the encoded data stream, the filter data 1010 (corresponding to the encoded filter data 910), the quantisation factor in use 1020 and the re-use flag 1030 are obtained. These are decoded by decoder and modification circuitry 1040 which also applies a corresponding predetermined function of the bit depth at least to the filter data 1010 to generate filter parameter data 1020 for use by the SAO filter 565. Again, a store 1050 is provided to store the data 1020 so that if the re-use flag indicates re-use, the most recently generated (or other stored) filter parameter data 1020 can simply be retrieved from the store 1050.
Therefore, in Figure 10, the decoding process comprises at least decoding the encoded video data to generate intermediate video samples 460; and filtering the intermediate video samples, using a video sample filter 565 defined by the filter parameter data, to generate the output video samples 480. Note that as discussed above, these processes are also applicable to the encoding process of Figure 9.
The quantisation factor 940, 1020 can be relevant to the process as well.
In a first example, the use of the predetermined function to modify the filter data can be enabled or disabled in response to the size of the quantisation factor and/or other control data such as parameter or header data or other metadata so that the modification operation is performed or not performed in response to whether it is enabled or not. For example, for particularly harsh quantisation (which may for example be indicated by a quantisation parameter or a divisor of at least a threshold magnitude, depending on the representation in use), it may be appropriate to use the left shift, but for more gentle or mild (less harsh) quantisation the unshifted values may be appropriate.
Alternatively, the shift amount (or more generally a scaling factor by which the filter data is scaled) can be varied in response to respective quantisation values or ranges of quantisation values. Note that in general terms it is not a requirement that the shift be identical to or proportional to the difference between the prevailing bit depth and a reference bit depth; that is just an example. The modification could be by a factor which is not a power of two (that is to say, not a simple bit shift) though in other examples bit shifts are convenient to implement. The shift or other modification could be by a predetermined amount or factor which is enabled or disabled in dependence on prevailing bit depth. Other alternative arrangements are also possible.
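One possible reading of these alternatives in code (an assumption for illustration, not a definitive rule from this disclosure; the threshold and the proportional shift are hypothetical choices):

#include <algorithm>

// Hypothetical scale selection: disable the shift for gentler quantisation,
// otherwise scale toward the prevailing bit depth relative to a reference
// depth of 10. qpThreshold is an illustrative tuning parameter.
int filterDataShift(int bitDepth, int quantisationParameter, int qpThreshold) {
    if (quantisationParameter < qpThreshold)
        return 0;  // mild quantisation: use the unshifted filter data
    return std::max(0, bitDepth - 10);  // harsh quantisation: apply the left shift
}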
In other examples, the manner of coding the filter data 910 can be varied according to the quantisation factor, or indeed according to the prevailing bit depth. For example, for some combinations of bit depth and/or quantisation, so-called Golomb-Rice coding with, say, 3 unary bits and 4 suffix bits can be used, whereas for lower bit depth and/or gentler quantisation, 5 unary and 2 suffix bits could be used. This provides an example in which an encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
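A minimal sketch of Golomb-Rice coding with a configurable suffix length k follows (the string-of-bits output is purely illustrative; in practice the bits would feed a CABAC or similar bitstream writer, and the unary/suffix split would be chosen per bit depth and/or quantisation as described):

#include <cstdint>
#include <string>

// Golomb-Rice code of magnitude v with k suffix bits: the quotient v >> k is
// sent as a unary prefix of '1's terminated by '0', followed by the k-bit
// remainder. A larger k suits larger typical magnitudes (e.g. higher bit depth).
std::string golombRice(std::uint32_t v, unsigned k) {
    std::string bits(v >> k, '1');  // unary prefix
    bits.push_back('0');            // prefix terminator
    for (int b = static_cast<int>(k) - 1; b >= 0; --b)
        bits.push_back(((v >> b) & 1u) ? '1' : '0');  // k-bit suffix
    return bits;
}

// Examples with k = 4: golombRice(11, 4) == "01011";
//                      golombRice(37, 4) == "1100101".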
In other example embodiments, the allowable magnitude of the filter data 910 as generated by the generator 900 can be varied so as to increase the maximum magnitude for increasing bit depth.
Figure 11 is an example flowchart representing a process performed at decoding in which, at a step 1100, header, parameter set and/or other stream data is retrieved to obtain the required parameters amongst the bit depth, the re-use flag, the filter data and the quantisation parameter.
At a step 1110, if the re-use flag is set then the filter parameter data is simply retrieved from the store at a step 1120 and control returns to the step 1100 for the next CTU.
If not, then at a step 1130 the filter data is decoded, for example making use of the quantisation parameter and/or the bit depth as discussed above, and at a step 1140 the decoded filter data is modified according to the modification function (for example a predetermined function, albeit optionally according to one or more parameters which may vary with prevailing bit depth). Then, at a step 1150 the filter parameter data obtained in this way is applied for a filtering operation by the SAO filter and also, at a step 1160, stored in case of re-use.
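The per-CTU flow of Figure 11 might be sketched as follows (the structure and names are assumptions; the modification step here uses the bit-depth-dependent left shift discussed above, applied to the offset magnitude):

#include <vector>

struct SaoParams {
    std::vector<int> offsets;  // signed offset values for the SAO filter
};

struct SaoParameterHandler {
    SaoParams store;  // holds the most recently generated parameters (step 1160)

    SaoParams paramsForCtu(bool reuseFlag, SaoParams decoded, int bitDepth) {
        if (reuseFlag)
            return store;  // step 1120: retrieve stored parameters
        // Step 1140: modify the decoded filter data (step 1130) by the
        // predetermined operation, shifting the magnitude of each offset.
        const int shift = (bitDepth > 10) ? (bitDepth - 10) : 0;
        for (int& offset : decoded.offsets) {
            const int magnitude = (offset < 0 ? -offset : offset) << shift;
            offset = (offset < 0) ? -magnitude : magnitude;
        }
        store = decoded;  // step 1160: keep in case of later re-use
        return decoded;   // step 1150: applied in the SAO filtering operation
    }
};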
Figure 12 is a schematic flowchart illustrating a video data decoding method comprising: receiving (at a step 1200) an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; generating (at a step 1210) filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; and decoding (at a step 1220) the encoded video data to generate output video samples, the decoding step comprising applying a filtering operation defined by the filter parameter data.
Figure 13 is a schematic flowchart illustrating a video data encoding method to encode video samples having a given bit depth, the method comprising: generating (at a step 1300) filter data defining a filtering operation; generating (at a step 1310) filter parameter data by applying a predetermined operation to the filter data, the predetermined operation being dependent upon at least the given bit depth; encoding (at a step 1320) video data representing video samples to generate an encoded data stream, in which the encoding step comprises applying a filtering operation defined by the filter parameter data; and associating (at a step 1330) the filter data with the encoded data stream.
The above encoding method may be implemented by the apparatus of Figures 7 and/or 8 and/or 9.
The above decoding method may be implemented by the apparatus of Figures 7 and/or 8 and/or 10.
In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Similarly, a data signal comprising coded data generated according to the methods discussed above (whether or not embodied on a non-transitory machine-readable medium) is also considered to represent an embodiment of the present disclosure.
It will be apparent that numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended clauses, the technology may be practised otherwise than as specifically described herein.
It will be appreciated that the above description for clarity has described embodiments with reference to different functional units, circuitry and/or processors. However, it will be apparent that any suitable distribution of functionality between different functional units, circuitry and/or processors may be used without detracting from the embodiments.
Described embodiments may be implemented in any suitable form including hardware, software, firmware or any combination of these. Described embodiments may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors. The elements and components of any embodiment may be physically, functionally and logically implemented in any suitable way. Indeed the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the disclosed embodiments may be implemented in a single unit or may be physically and functionally distributed between different units, circuitry and/or processors.
Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in any manner suitable to implement the technique.
Respective aspects and features are defined by the following numbered clauses: 1. A video data decoding method comprising: receiving an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; generating filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; and decoding the encoded video data to generate output video samples, the decoding step comprising applying a filtering operation defined by the filter parameter data.
2. The method of clause 1, in which the applying step comprises: decoding the encoded video data to generate intermediate video samples; and filtering the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate the output video samples.
3. The method of clause 2, comprising generating a predicted version of one or more video samples.
4. The method of clause 3, in which the step of generating a predicted version comprises generating, by inter-image prediction with respect to the output video samples, the predicted version of one or more video samples.
5. The method of clause 3 or clause 4, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
6. The method of clause 5, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
7. The method of clause 6, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
8. The method of any one of the preceding clauses, in which the predetermined operation comprises a scaling operation in which the input filter data is scaled by a scaling factor.
9. The method of clause 8, in which the step of generating filter parameter data comprises deriving the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
10. The method of clause 9, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
11. The method of any one of the preceding clauses, in which: the decoding step comprises applying an inverse quantisation dependent upon a quantisation factor applicable to the decoding of a given group of output video samples; and the step of generating filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the quantisation factor applicable to the given group of output video samples.
12. The method of any one of the preceding clauses, in which: the receiving step comprises receiving control data defining whether or not to apply the predetermined operation in respect of decoding of a given group of output video samples; and the step of generating filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the control data applicable to the given group of output video samples.
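Clauses 11 and 12 make the operation conditional per group of samples: implicitly on the quantisation factor, or explicitly on signalled control data. In the sketch below, the QP threshold of 18 and the reference bit depth of 10 are invented example values, not values taken from the clauses.

    # Selective application per group (clauses 11-12); threshold assumed.
    QP_THRESHOLD = 18

    def group_filter_params(input_filter_data, bit_depth, qp, control_flag=None):
        if control_flag is not None:
            apply_op = control_flag          # clause 12: explicit control data
        else:
            apply_op = qp >= QP_THRESHOLD    # clause 11: inferred from QP
        shift = max(0, bit_depth - 10)       # assumed reference bit depth
        if apply_op:
            return [v << shift for v in input_filter_data]
        return list(input_filter_data)

    print(group_filter_params([3, -2], 12, qp=30))                      # [12, -8]
    print(group_filter_params([3, -2], 12, qp=30, control_flag=False))  # [3, -2]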
13. The method of any one of the preceding clauses, in which the receiving step comprises: receiving an encoded representation of the input filter data according to an encoding format; and decoding the input filter data from the encoded representation of the input filter data; in which the encoding format depends upon at least the given bit depth.
14. The method of clause 13, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
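Clauses 13 and 14 concern how the filter data itself is coded: part of each value as directly coded bits and part through the entropy coder, with the split depending on the bit depth. The sketch below assumes that the extra precision above a 10-bit reference is carried in the directly coded low bits; the actual split rule is an assumption for illustration only.

    # Bit-depth-dependent split into entropy-coded and directly coded
    # bits (clauses 13-14); the split rule is an assumption.
    def split_for_coding(value, bit_depth):
        direct_bits = max(0, bit_depth - 10)
        sign = -1 if value < 0 else 1
        mag = abs(value)
        entropy_part = mag >> direct_bits             # goes to the entropy coder
        direct_part = mag & ((1 << direct_bits) - 1)  # sent as raw bits
        return sign, entropy_part, direct_part

    def reassemble(sign, entropy_part, direct_part, bit_depth):
        direct_bits = max(0, bit_depth - 10)
        return sign * ((entropy_part << direct_bits) | direct_part)

    s, e, d = split_for_coding(-13, 12)
    print(s, e, d, reassemble(s, e, d, 12))  # -> -1 3 1 -13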
15. Computer software which, when executed by a computer, causes the computer to carry out the method of any one of the preceding clauses.
16. A non-transitory machine readable storage medium which stores computer software according to clause 15.
17. A video data encoding method to encode video samples having a given bit depth, the method comprising: generating filter data defining a filtering operation; generating filter parameter data by applying a predetermined operation to the filter data, the predetermined operation being dependent upon at least the given bit depth; encoding video data representing the video samples to generate an encoded data stream, in which the encoding step comprises applying a filtering operation defined by the filter parameter data; and associating the filter data with the encoded data stream.
18. The method of clause 17, in which the encoding step comprises: decoding the encoded video data to generate intermediate video samples; and filtering the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate reconstructed video samples.
19. The method of clause 18, comprising generating a predicted version of one or more video samples.
20. The method of clause 19, in which the step of generating a predicted version comprises generating, by inter-image prediction with respect to the reconstructed samples, the predicted version of one or more video samples.
21. The method of clause 19 or clause 20, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
22. The method of clause 21, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
23. The method of clause 22, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
24. The method of any one of clauses 17 to 23, in which the predetermined operation comprises a scaling operation in which the filter data is scaled by a scaling factor.
25. The method of clause 24, in which the step of generating the filter parameter data comprises deriving the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
26. The method of clause 25, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
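At the encoder (clauses 24 to 26), the shift runs the other way: offsets derived at the working bit depth are scaled down to the reference precision before being associated with the stream. A sketch, again assuming a 10-bit reference; the round-to-nearest rule is an assumption.

    # Encoder-side counterpart of clauses 24-26 (hypothetical details).
    REF_BIT_DEPTH = 10

    def offsets_to_signal(derived_offsets, bit_depth):
        shift = max(0, bit_depth - REF_BIT_DEPTH)
        half = (1 << (shift - 1)) if shift else 0
        signalled = []
        for v in derived_offsets:
            sign = -1 if v < 0 else 1
            signalled.append(sign * ((abs(v) + half) >> shift))  # round to nearest
        return signalled

    # A 12-bit offset of 13 is signalled as 3; the decoder rebuilds 12.
    print(offsets_to_signal([13, -6], 12))  # -> [3, -2]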
27. The method of any one of clauses 17 to 26, in which: the encoding step comprises applying a quantisation dependent upon a quantisation factor applicable to the encoding of a given group of video samples; and the step of generating the filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the quantisation factor applicable to the given group of video samples.
28. The method of any one of clauses 17 to 27, in which: the method comprises generating control data defining whether or not to apply the predetermined operation in respect of encoding of a given group of video samples; and the step of generating the filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the control data applicable to the given group of video samples.
29. The method of any one of clauses 17 to 28, in which the associating step comprises generating an encoded representation of the filter data according to an encoding format, in which the encoding format depends upon at least the given bit depth.
30. The method of clause 29, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
31. Computer software which, when executed by a computer, causes the computer to carry out the method of any one of clauses 17 to 30.
32. A non-transitory machine readable storage medium which stores computer software according to clause 31.
33. A video data decoding apparatus comprising: a decoder configured to receive an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; and a generator configured to generate filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; the decoder being configured to decode the encoded video data to generate output video samples by applying a filtering operation defined by the filter parameter data.
34. The apparatus of clause 33, in which: the decoder is configured to decode the encoded video data to generate intermediate video samples; and the apparatus comprises a filter configured to filter the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate the output video samples.
35. The apparatus of clause 34, in which the decoder comprises a predictor configured to generate a predicted version of one or more video samples.
36. The apparatus of clause 35, in which the predictor is configured to generate, by inter-image prediction with respect to the output video samples, the predicted version of one or more video samples.
37. The apparatus of clause 35 or clause 36, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
38. The apparatus of clause 37, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
39. The apparatus of clause 38, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
40. The apparatus of any one of clauses 33 to 39, in which the predetermined operation comprises a scaling operation in which the input filter data is scaled by a scaling factor.
41. The apparatus of clause 40, in which the generator is configured to derive the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
42. The apparatus of clause 41, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
43. The apparatus of any one of clauses 33 to 42, in which: the decoder comprises an inverse quantiser configured to apply an inverse quantisation dependent upon a quantisation factor applicable to the decoding of a given group of output video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the quantisation factor applicable to the given group of output video samples.
44. The apparatus of any one of clauses 33 to 43, in which: the decoder is configured to receive control data defining whether or not to apply the predetermined operation in respect of decoding of a given group of output video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the control data applicable to the given group of output video samples.
45. The apparatus of any one of clauses 33 to 44, in which the decoder is configured to: receive an encoded representation of the input filter data according to an encoding format; and decode the input filter data from the encoded representation of the input filter data; in which the encoding format depends upon at least the given bit depth.
46. The apparatus of clause 45, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
47. Video data capture, transmission, display and/or storage apparatus comprising the apparatus of any one of clauses 33 to 46.
48. A video data encoding apparatus to encode video samples having a given bit depth, the apparatus comprising: a generator configured to generate filter data defining a filtering operation and to generate filter parameter data by applying a predetermined operation to the filter data, the predetermined operation being dependent upon at least the given bit depth; and an encoder configured to encode video data representing the video samples to generate an encoded data stream, in which the encoding step comprises applying a filtering operation defined by the filter parameter data; the encoder being configured to associate the filter data with the encoded data stream.
49. The apparatus of clause 48, in which the encoder comprises: a decoder configured to decode the encoded video data to generate intermediate video samples; and a filter configured to filter the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate reconstructed video samples.
50. The apparatus of clause 49, comprising a predictor configured to generate a predicted version of one or more video samples.
51. The apparatus of clause 50, in which the predictor is configured to generate, by inter-image prediction with respect to the reconstructed samples, the predicted version of one or more video samples.
52. The apparatus of clause 50 or clause 51, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
53. The apparatus of clause 52, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
54. The apparatus of clause 53, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
55. The apparatus of any one of clauses 48 to 54, in which the predetermined operation comprises a scaling operation in which the filter data is scaled by a scaling factor.
56. The apparatus of clause 55, in which the generator is configured to derive the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
57. The apparatus of clause 56, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
58. The apparatus of any one of clauses 48 to 57, in which: the encoder comprises a quantiser configured to apply a quantisation dependent upon a quantisation factor applicable to the encoding of a given group of video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the quantisation factor applicable to the given group of video samples.
59. The apparatus of any one of clauses 48 to 58, in which: the generator is configured to generate control data defining whether or not to apply the predetermined operation in respect of encoding of a given group of video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the control data applicable to the given group of video samples.
60. The apparatus of any one of clauses 48 to 59, in which the encoder is configured to generate an encoded representation of the filter data according to an encoding format, in which the encoding format depends upon at least the given bit depth.
61. The apparatus of clause 60, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
62. Video data capture, transmission, display and/or storage apparatus comprising the apparatus of any one of clauses 48 to 61.

Claims (62)

CLAIMS
1. A video data decoding method comprising: receiving an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; generating filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; and decoding the encoded video data to generate output video samples, the decoding step comprising applying a filtering operation defined by the filter parameter data.
2. The method of claim 1, in which the applying step comprises: decoding the encoded video data to generate intermediate video samples; and filtering the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate the output video samples.
3. The method of claim 2, comprising generating a predicted version of one or more video samples.
4. The method of claim 3, in which the step of generating a predicted version comprises generating, by inter-image prediction with respect to the output video samples, the predicted version of one or more video samples.
5. The method of claim 3, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
6. The method of claim 5, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
7. The method of claim 6, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
8. The method of claim 1, in which the predetermined operation comprises a scaling operation in which the input filter data is scaled by a scaling factor.
9. The method of claim 8, in which the step of generating filter parameter data comprises deriving the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
10. The method of claim 9, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
11. The method of claim 1, in which: the decoding step comprises applying an inverse quantisation dependent upon a quantisation factor applicable to the decoding of a given group of output video samples; and the step of generating filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the quantisation factor applicable to the given group of output video samples.
12. The method of claim 1, in which: the receiving step comprises receiving control data defining whether or not to apply the predetermined operation in respect of decoding of a given group of output video samples; and the step of generating filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the control data applicable to the given group of output video samples.
13. The method of claim 1, in which the receiving step comprises: receiving an encoded representation of the input filter data according to an encoding format; and decoding the input filter data from the encoded representation of the input filter data; in which the encoding format depends upon at least the given bit depth.
14. The method of claim 13, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
15. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 1.
16. A non-transitory machine readable storage medium which stores computer software according to claim 15.
17. A video data encoding method to encode video samples having a given bit depth, the method comprising: generating filter data defining a filtering operation; generating filter parameter data by applying a predetermined operation to the filter data, the predetermined operation being dependent upon at least the given bit depth; encoding video data representing the video samples to generate an encoded data stream, in which the encoding step comprises applying a filtering operation defined by the filter parameter data; and associating the filter data with the encoded data stream.
18. The method of claim 17, in which the encoding step comprises: decoding the encoded video data to generate intermediate video samples; and filtering the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate reconstructed video samples.
19. The method of claim 18, comprising generating a predicted version of one or more video samples.
20. The method of claim 19, in which the generating step comprises generating, by inter-image prediction with respect to the reconstructed samples, the predicted version of one or more video samples.
21. The method of claim 19, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
22. The method of claim 21, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
23. The method of claim 22, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
24. The method of claim 17, in which the predetermined operation comprises a scaling operation in which the filter data is scaled by a scaling factor.
25. The method of claim 24, in which the step of generating the filter parameter data comprises deriving the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
26. The method of claim 25, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
27. The method of claim 17, in which: the encoding step comprises applying a quantisation dependent upon a quantisation factor applicable to the encoding of a given group of video samples; and the step of generating the filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the quantisation factor applicable to the given group of video samples.
28. The method of claim 17, in which: the method comprises generating control data defining whether or not to apply the predetermined operation in respect of encoding of a given group of video samples; and the step of generating the filter parameter data comprises selectively applying the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the control data applicable to the given group of video samples.
29. The method of claim 17, in which the associating step comprises generating an encoded representation of the filter data according to an encoding format, in which the encoding format depends upon at least the given bit depth.
30. The method of claim 29, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
31. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 17.
32. A non-transitory machine readable storage medium which stores computer software according to claim 31.
33. A video data decoding apparatus comprising: a decoder configured to receive an encoded data stream representing encoded video data representing video samples having a given bit depth, the encoded data stream having associated input filter data; and a generator configured to generate filter parameter data by applying a predetermined operation to the input filter data, the predetermined operation being dependent upon at least the given bit depth; the decoder being configured to decode the encoded video data to generate output video samples by applying a filtering operation defined by the filter parameter data.
34. The apparatus of claim 33, in which: the decoder is configured to decode the encoded video data to generate intermediate video samples; and the apparatus comprises a filter configured to filter the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate the output video samples.
35. The apparatus of claim 34, in which the decoder comprises a predictor configured to generate a predicted version of one or more video samples.
36. The apparatus of claim 35, in which the predictor is configured to generate, by inter-image prediction with respect to the output video samples, the predicted version of one or more video samples.
37. The apparatus of claim 35, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
38. The apparatus of claim 37, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
39. The apparatus of claim 38, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
40. The apparatus of claim 33, in which the predetermined operation comprises a scaling operation in which the input filter data is scaled by a scaling factor.
41. The apparatus of claim 40, in which the generator is configured to derive the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
42. The apparatus of claim 41, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
43. The apparatus of claim 33, in which: the decoder comprises an inverse quantiser configured to apply an inverse quantisation dependent upon a quantisation factor applicable to the decoding of a given group of output video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the quantisation factor applicable to the given group of output video samples.
44. The apparatus of claim 33, in which: the decoder is configured to receive control data defining whether or not to apply the predetermined operation in respect of decoding of a given group of output video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of decoding of the given group of output video samples in dependence upon the control data applicable to the given group of output video samples.
45. The apparatus of claim 33, in which the decoder is configured to: receive an encoded representation of the input filter data according to an encoding format; and decode the input filter data from the encoded representation of the input filter data; in which the encoding format depends upon at least the given bit depth.
46. The apparatus of claim 45, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
47. Video data capture, transmission, display and/or storage apparatus comprising the apparatus of claim 33.
48. A video data encoding apparatus to encode video samples having a given bit depth, the apparatus comprising: a generator configured to generate filter data defining a filtering operation and to generate filter parameter data by applying a predetermined operation to the filter data, the predetermined operation being dependent upon at least the given bit depth; and an encoder configured to encode video data representing the video samples to generate an encoded data stream, in which the encoding step comprises applying a filtering operation defined by the filter parameter data; the encoder being configured to associate the filter data with the encoded data stream.
49. The apparatus of claim 48, in which the encoder comprises: a decoder configured to decode the encoded video data to generate intermediate video samples; and a filter configured to filter the intermediate video samples, using a video sample filter defined by the filter parameter data, to generate reconstructed video samples.
50. The apparatus of claim 49, comprising a predictor configured to generate a predicted version of one or more video samples.
51. The apparatus of claim 50, in which the predictor is configured to generate, by inter-image prediction with respect to the reconstructed samples, the predicted version of one or more video samples.
52. The apparatus of claim 50, in which the video sample filter comprises a sample adaptive offset filter, the filter parameter data defining one or more offset amounts to be selectively combined with a given intermediate video sample by the sample adaptive offset filter in dependence upon a value of: (i) the given intermediate video sample; or (ii) one or more intermediate video samples having a predetermined spatial relationship to the given intermediate video sample.
53. The apparatus of claim 52, in which the one or more offset amounts have a maximum offset magnitude which depends upon at least the given bit depth.
54. The apparatus of claim 53, in which the one or more offset amounts have a maximum offset magnitude which is greater for a greater value of the given bit depth.
55. The apparatus of claim 48, in which the predetermined operation comprises a scaling operation in which the filter data is scaled by a scaling factor.
56. The apparatus of claim 55, in which the generator is configured to derive the scaling factor in dependence upon a difference between the given bit depth and a reference bit depth.
57. The apparatus of claim 56, in which the scaling factor represents shifting by a number of bit positions equal to the difference between the given bit depth and the reference bit depth.
58. The apparatus of claim 48, in which: the encoder comprises a quantiser configured to apply a quantisation dependent upon a quantisation factor applicable to the encoding of a given group of video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the quantisation factor applicable to the given group of video samples.
59. The apparatus of claim 48, in which: the generator is configured to generate control data defining whether or not to apply the predetermined operation in respect of encoding of a given group of video samples; and the generator is configured to selectively apply the predetermined operation to provide filter parameter data for use in respect of encoding of the given group of video samples in dependence upon the control data applicable to the given group of video samples.
60. The apparatus of claim 48, in which the encoder is configured to generate an encoded representation of the filter data according to an encoding format, in which the encoding format depends upon at least the given bit depth.
61. The apparatus of claim 60, in which the encoding format provides a first number of directly encoded bits and a second number of entropy encoded bits, at least one of the first number and the second number depending upon at least the given bit depth.
62. Video data capture, transmission, display and/or storage apparatus comprising the apparatus of claim 48.
GB1919470.3A 2019-12-31 2019-12-31 Data encoding and decoding Withdrawn GB2590723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1919470.3A GB2590723A (en) 2019-12-31 2019-12-31 Data encoding and decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1919470.3A GB2590723A (en) 2019-12-31 2019-12-31 Data encoding and decoding

Publications (2)

Publication Number Publication Date
GB201919470D0 GB201919470D0 (en) 2020-02-12
GB2590723A (en) 2021-07-07

Family

ID=69416509

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1919470.3A Withdrawn GB2590723A (en) 2019-12-31 2019-12-31 Data encoding and decoding

Country Status (1)

Country Link
GB (1) GB2590723A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015097425A1 (en) * 2013-12-23 2015-07-02 Sony Corporation Data encoding and decoding

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS", 2014, article "Infrastructure of audiovisual services - Coding of moving video High efficiency video coding Recommendation ITU-T H.265 12/2016. Also: High Efficiency Video Coding (HEVC) Algorithms and Architectures"
ALSHINA E ET AL: "AhG18: On SAO quant-bits coding", 17. JCT-VC MEETING; 27-3-2014 - 4-4-2014; VALENCIA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-Q0044, 15 March 2014 (2014-03-15), XP030115932 *
BROSS B ET AL: "High Efficiency Video Coding (HEVC) text specification draft 10 (for FDIS & Consent)", 12. JCT-VC MEETING; 103. MPEG MEETING; 14-1-2013 - 23-1-2013; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-L1003, 17 January 2013 (2013-01-17), XP030113948 *
BROSS B ET AL: "Versatile Video Coding (Draft 7)", no. JVET-P2001, 14 November 2019 (2019-11-14), XP030224330, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/16_Geneva/wg11/JVET-P2001-v14.zip JVET-P2001-vE.docx> [retrieved on 20191114] *
W-S KIM ET AL: "AhG5: Offset Scaling in SAO for High Bit-depth Video Coding", 13. JCT-VC MEETING; 104. MPEG MEETING; 18-4-2013 - 26-4-2013; INCHEON; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-M0335, 9 April 2013 (2013-04-09), XP030114292 *

Also Published As

Publication number Publication date
GB201919470D0 (en) 2020-02-12

Similar Documents

Publication Publication Date Title
US9973777B2 (en) Data encoding and decoding apparatus, method and storage medium
US20230087135A1 (en) Controlling video data encoding and decoding levels
US20240064324A1 (en) Video data encoding and decoding circuity applying constraint data
US20210092415A1 (en) Image data encoding and decoding
WO2019150076A1 (en) Data encoding and decoding
GB2577338A (en) Data encoding and decoding
GB2596100A (en) Data encoding and decoding
GB2590723A (en) Data encoding and decoding
US20230028325A1 (en) Video data encoding and decoding
US20230179783A1 (en) Video data encoding and decoding using a coded picture buffer
GB2614271A (en) Picture data encoding and decoding
GB2593775A (en) Video data encoding and decoding
GB2590722A (en) Data encoding and decoding
US20220078430A1 (en) Image data encoding and decoding
GB2599433A (en) Data encoding and decoding
WO2022073789A1 (en) Data encoding and decoding
GB2593777A (en) Video data encoding and decoding
CA3143886A1 (en) Image data encoding and decoding

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)