CN101325707A - System for encoding and decoding texture self-adaption video - Google Patents

System for encoding and decoding texture self-adaption video

Info

Publication number
CN101325707A
Authority
CN
China
Prior art keywords
texture, decoding, video, adaption, self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200710069093
Other languages
Chinese (zh)
Other versions
CN101325707B (en)
Inventor
虞露 (Yu Lu)
武晓阳 (Wu Xiaoyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN 200710069093 (CN101325707B)
Publication of CN101325707A
Application granted
Publication of CN101325707B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a texture-adaptive video encoding system, a texture-adaptive video decoding system, and a texture-adaptive video encoding and decoding system. The texture-adaptive video encoding system comprises a video encoder and an encoder-side texture analyzer; the texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer; the texture-adaptive video encoding and decoding system comprises the encoding system and the decoding system. By bringing the texture feature information of the video image into the video encoding and decoding process, the system improves the compression efficiency and perceptual quality of video coding.

Description

System for encoding and decoding texture self-adaption video
Technical field
The present invention relates to the fields of signal processing and communications, and in particular to a texture-adaptive video encoding system, a texture-adaptive video decoding system, and a texture-adaptive video encoding and decoding system.
Background art
Current video coding standards, such as H.261, H.263, and H.26L formulated by the ITU, MPEG-1, MPEG-2, and MPEG-4 formulated by ISO's MPEG group, H.264/MPEG-AVC (H.264 for short) formulated by the JVT, and the second part of the AVS video coding standard held as Chinese independent intellectual property, are all based on the conventional hybrid video coding and decoding framework.
A primary goal of video coding is to compress the video signal, reducing its data volume and thereby saving storage space and transmission bandwidth. On one hand, the raw video signal carries a very large amount of data, which is what makes compression necessary; on the other hand, the raw video signal contains a great deal of redundant information, which is what makes compression possible. This redundancy can be divided into spatial redundancy, temporal redundancy, data redundancy, and visual redundancy. The first three consider only statistical redundancy between pixels and are collectively called statistical redundancy; visual redundancy instead reflects the characteristics of the human visual system. To reduce the data volume of a video signal, coding must reduce the various kinds of redundancy present in it. The conventional hybrid video coding framework combines predictive coding, transform coding, and entropy coding, and concentrates on reducing the statistical redundancy of the video signal. Its main features are the following:
(1) predictive coding is used to reduce temporal redundancy and spatial redundancy;
(2) transform coding is used to further reduce spatial redundancy;
(3) entropy coding is used to reduce data redundancy.
Predictive coding comprises intra-frame predictive coding and inter-frame predictive coding. A video frame compressed with intra-frame prediction is called an intra-coded frame (I frame). It is encoded as follows: the frame is first partitioned into coding blocks (one form of coding unit); each block is intra predicted to obtain an intra-prediction residual; the residual is then transform coded in two dimensions; the transform coefficients are quantized in the transform domain; the quantized two-dimensional signal is converted into a one-dimensional signal by scanning; and finally entropy coding is applied. A frame compressed with inter-frame prediction is called an inter-coded frame (P frame or B frame). It is encoded as follows: the frame is partitioned into coding blocks; motion estimation yields a motion vector and a reference block (one form of reference unit) for each block; motion compensation then produces the inter-prediction residual; the residual is transform coded in two dimensions; the coefficients are quantized, scanned into a one-dimensional signal, and finally entropy coded. The residual signal carries less spatial and temporal redundancy than the raw video signal; expressed mathematically as correlation, the spatial and temporal correlation of the residual is smaller than that of the original video. Two-dimensional transform coding of the residual further reduces spatial correlation, and quantization of the coefficients together with entropy coding then reduces data redundancy. Hence, to keep improving compression efficiency, more accurate predictive coding is needed to further reduce the spatial and temporal correlation of the residual; more effective transform coding is needed to further reduce spatial correlation; and the scanning, quantization, and entropy coding that follow prediction and transform must be designed to match.
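For orientation, the sequence of steps just described can be written out as a short Python sketch. It is a toy rendering of the conventional fixed pipeline (a fixed DCT, a fixed quantization step, a fixed zigzag scan) for a single inter-coded block, not code from the patent:

```python
import numpy as np
from scipy.fft import dctn

def zigzag_order(n):
    """Flat indices of an n x n block in conventional zigzag scan order."""
    cells = sorted(((j, i) for j in range(n) for i in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [j * n + i for j, i in cells]

def encode_inter_block(block, ref_block, q=16):
    """Hybrid-coding steps for one inter-coded block, in the order above."""
    residual = block.astype(np.int32) - ref_block    # motion-compensated residual
    coeffs = dctn(residual, norm='ortho')            # 2-D transform coding
    levels = np.round(coeffs / q).astype(np.int32)   # scalar quantization, fixed step
    scan = zigzag_order(block.shape[0])              # fixed scan order
    return levels.flatten()[scan]                    # 1-D sequence for entropy coding

rng = np.random.default_rng(0)
print(encode_inter_block(rng.integers(0, 256, (8, 8)),
                         rng.integers(0, 256, (8, 8)))[:10])
```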
Although the video coding standards based on the conventional hybrid framework listed above have been very successful, the framework itself is a bottleneck to further improving compression efficiency. Research shows that a video signal is not a stationary source; that is to say, the characteristics of individual coding units differ. Yet the functional modules of the conventional hybrid framework are designed on the assumption of a stationary video signal: the predictive coding module, transform coding module, quantization module, scan module, and so on all use fixed working modes when encoding a coding unit:
(1) When predictive compensation in inter-frame prediction is accurate to sub-pel positions, interpolation is needed to construct the sub-pel points of the reference picture. Existing video standards based on the conventional hybrid framework all construct sub-pels with separable one-dimensional horizontal and vertical interpolation filters whose tap counts and coefficients are fixed; the interpolation filter used is therefore unrelated to the content of the interpolated image.
(2) The transform coding module widely adopts the discrete cosine transform (DCT) and its integer approximation, the integer cosine transform (ICT). Transform coding is intended to reduce spatial correlation and concentrate the coding unit's energy into a few transform coefficients; in the DCT and ICT the energy is concentrated toward the low frequencies. The transform matrices are fixed, so the transform used is unrelated to the content of the transformed image.
(3) The quantization module is a lossy, irreversible coding module acting on the transform coefficients. The quantization techniques adopted in current video standards either apply scalar quantization with the same step size to every coefficient or use a quantization matrix to weight high-frequency coefficients for coarser quantization (exploiting the human eye's insensitivity to high-frequency signals). The quantization process is thus unrelated to the quantized image content.
(4) The scan module converts a two-dimensional signal into a one-dimensional one; concretely, it converts the quantized two-dimensional transform coefficients into (run, level) signals to facilitate their entropy coding. Current video coding standards adopt a zigzag scan for frame coding and a vertically prioritized alternate scan for field coding, and the orders of both scans are fixed, proceeding over the coding block's coefficients roughly from top-left to bottom-right. The scan order is thus unrelated to the scanned image content.
In recent years, new coding techniques have emerged to further improve coding efficiency. Their common trait is "adaptation": a different coding mode (meaning an adapted working mode for certain functional modules) can be selected for each frame or each coding block. Some of these techniques adapt by rate-distortion optimization (RDO), selecting the method that is optimal in the RD sense from several candidates through that high-complexity search; others take a statistical "two-pass" approach, using the statistics gathered in a first pass to derive an adapted working mode and then encoding a second time with it.
There is also an adaptive transform technique based on neural networks: an initial transform pattern is set, and as encoding proceeds the neural network progressively trains new transform patterns, which are applied to the transform coding of subsequent coding blocks.
" self adaptation " viewpoint of these methods is actually the different thought of local feature of utilizing coding unit, but their local feature is fuzzy general.
Building on the foregoing analysis and research, and in order to break through the bottleneck of the conventional hybrid video coding and decoding framework, the present invention proposes a texture-adaptive video encoding and decoding system, which comprises a texture-adaptive video encoding system and a texture-adaptive video decoding system. The system brings the texture feature information of the video image (one kind of local image feature) into the video coding and decoding process to improve the compression efficiency and subjective quality of video coding.
Summary of the invention
The object of the present invention is to address the bottleneck of the conventional hybrid video coding and decoding framework by proposing a texture-adaptive video encoding system, a texture-adaptive video decoding system, and a texture-adaptive video encoding and decoding system. The encoding and decoding system comprises the texture-adaptive video encoding system and the texture-adaptive video decoding system, and it brings the texture features of the video image into the video coding and decoding process to improve the compression efficiency and subjective quality of video coding.
The texture-adaptive video encoding system comprises a video encoder and an encoder-side texture analyzer. The video encoder comprises at least one encoding function module, to perform encoding compression; the encoder-side texture analyzer performs texture analysis, to extract coding-unit texture feature information. At least one encoding function module in the video encoder has its working mode controlled by the coding-unit texture feature information extracted by the encoder-side texture analyzer. The input signal of the encoder-side texture analyzer comprises one or more of the following: raw image data, reference image data, and encoding-function-module output data. The coding-unit texture feature information extracted by the encoder-side texture analyzer comprises one or more of the following: texture direction information, texture strength information, texture direction-strength information, and so on. That an encoding function module's working mode is controlled by the extracted coding-unit texture feature information means that the module determines, according to that information, a working mode adapted to it; different encoding function modules may be controlled by the same kind or by different kinds of coding-unit texture feature information.
The texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer. The video decoder comprises at least one decoding function module, to perform decoding and reconstruction; the decoder-side texture analyzer performs texture analysis, to extract decoding-unit texture feature information. At least one decoding function module in the video decoder has its working mode controlled by the decoding-unit texture feature information extracted by the decoder-side texture analyzer. The input signal of the decoder-side texture analyzer comprises one or more of the following: reference image data and decoding-function-module output data. The decoding-unit texture feature information extracted by the decoder-side texture analyzer comprises one or more of the following: texture direction information, texture strength information, texture direction-strength information, and so on. That a decoding function module's working mode is controlled by the extracted decoding-unit texture feature information means that the module determines, according to that information, a working mode adapted to it; different decoding function modules may be controlled by the same kind or by different kinds of decoding-unit texture feature information.
The texture-adaptive video encoding and decoding system comprises the texture-adaptive video encoding system and the texture-adaptive video decoding system.
Description of drawings
Fig. 1 is a schematic diagram of the texture-adaptive video encoding system;
Fig. 2 is a schematic diagram of the texture-adaptive video decoding system;
Fig. 3 is a schematic diagram of the texture-adaptive video encoding and decoding system;
Fig. 4 is a schematic diagram of the raw data of an n × m coding block;
Fig. 5 is a schematic diagram of the data of an n × m reference image block;
Fig. 6 is a schematic diagram of the intra prediction mode serving as input to the encoder-side texture analyzer;
Fig. 7 is a schematic diagram of the Sobel operator;
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module;
Fig. 9 is a schematic diagram of whole-pel and sub-pel positions;
Fig. 10 is a schematic diagram of the texture-adaptive scan module;
Fig. 11 is a schematic diagram of vertical-priority scan orders;
Fig. 12 is a schematic diagram of horizontal-priority scan orders;
Fig. 13 is a schematic diagram of embodiment 1: a texture-adaptive video encoding system;
Fig. 14 is a schematic diagram of embodiment 2: a texture-adaptive video decoding system;
Fig. 15 is a schematic diagram of embodiment 4: a texture-adaptive video encoding system;
Fig. 16 is a schematic diagram of embodiment 5: a texture-adaptive video decoding system;
Fig. 17 is a schematic diagram of embodiment 7: a texture-adaptive video encoding system;
Fig. 18 is a schematic diagram of embodiment 8: a texture-adaptive video decoding system;
Fig. 19 is a schematic diagram of embodiment 10: a texture-adaptive video encoding system;
Fig. 20 is a schematic diagram of embodiment 11: a texture-adaptive video decoding system;
Embodiments
The present invention relates to a texture-adaptive video encoding system (shown in Fig. 1), a texture-adaptive video decoding system (shown in Fig. 2), and a texture-adaptive video encoding and decoding system (shown in Fig. 3); note that the "transmission channel" shown in Fig. 3 is not part of the encoding and decoding system.
The texture-adaptive video encoding and decoding system has a very wide coverage; the terms involved in the present invention are explained first below.
A. Examples of the coding unit
A coding unit is the unit of texture adaptation: a set of video image pixels. Coding units take many forms. In early differential pulse code modulation systems, the coding unit was an individual pixel; in many current video coding standards it is a rectangular block of pixels, squares included; the latest literature also mentions coding units of other shapes, such as triangles and trapezoids; a coding unit can also take the form of a slice, a frame, or a field; moreover, a coding unit may consist of non-adjacent pixels.
A coding block is one example of a coding unit: a rectangular block of pixels of size n × m, meaning the block is n pixels high and m pixels wide, for example a 16 × 16, 16 × 8, 8 × 16, 8 × 8, 8 × 4, 4 × 8, or 4 × 4 coding block. The embodiments below take the coding block as their example, and where not otherwise specified, "coding block" stands in for "coding unit"; the methods illustrated in the embodiments apply equally to coding units of other forms.
B. Examples of the decoding unit
A decoding unit and a coding unit are the same thing named for different positions in the system: the coding unit is the notion in the texture-adaptive video encoding system, and its counterpart in the texture-adaptive video decoding system is called the decoding unit. The examples and explanations of coding units given in A therefore apply equally to decoding units.
C. Examples of video encoding function modules
The encoding function modules in a video encoder comprise one or more of the following: prediction module, interpolation module, transform module, inverse transform module, quantization module, inverse quantization module, scan module, inverse scan module, deblocking filter module, entropy coding module, and so on. These encoding function modules can be subdivided into several modules or merged into one: the interpolation module, for example, can be split into a half-pel interpolation module and a quarter-pel interpolation module, and the transform and quantization modules can be merged into a transform-quantization module. A video encoder may also partition its functions differently, forming a new set of encoding function modules.
The encoding function modules in the video encoder are linked together in some fashion to accomplish encoding compression.
A given encoding function module can have many working modes; for the interpolation module, for example, different filter tap counts and filter coefficients constitute different working modes.
D. Examples of video decoding function modules
The decoding function modules in a video decoder comprise one or more of the following: prediction module, interpolation module, inverse transform module, inverse quantization module, inverse scan module, deblocking filter module, entropy decoding module, and so on. These decoding function modules can be subdivided into several modules or merged into one: the interpolation module can be split into a half-pel interpolation module and a quarter-pel interpolation module, and the inverse transform and inverse quantization modules can be merged into an inverse transform-quantization module. A video decoder may also partition its functions differently, forming a new set of decoding function modules.
The decoding function modules in the video decoder are linked together in some fashion to accomplish decoding and reconstruction.
A given decoding function module can have many working modes; for the interpolation module, for example, different filter tap counts and filter coefficients constitute different working modes.
E. Examples of texture feature information
Texture feature information can be expressed as texture direction information, texture strength information, or texture direction-strength information; other expressions, such as texture structure, are also possible.
E-1. Texture direction information
Texture direction information appears subjectively as the orientation of the texture in the image and is generally represented by the texture inclination angle. The inclination angle is a continuous quantity; in use it can be quantized into a discrete one. Different precisions can be chosen for the quantization, dividing textures into different numbers of direction classes, with all inclination angles falling in the same quantization region assigned to the same texture direction class. For example, with a quantization precision of four classes, texture direction information divides into horizontal texture, vertical texture, left-diagonal texture, and right-diagonal texture. Of course, some coding units have no obvious texture direction, which is to say the texture strengths corresponding to all direction classes are comparable; such a unit is called a flat region, and the flat region is a special kind of texture direction information.
The direction of an edge in the image is one example of texture direction information.
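As one concrete reading of this quantization, the sketch below maps an inclination angle to one of the four classes; the 22.5-degree class boundaries and the left/right-diagonal naming are illustrative assumptions, not values from the patent:

```python
def quantize_direction(angle_deg):
    """Quantize a texture inclination angle (in degrees) into four classes.
    The 22.5-degree boundaries are an assumed, illustrative choice."""
    a = ((angle_deg + 90.0) % 180.0) - 90.0    # fold into [-90, 90)
    if -22.5 <= a < 22.5:
        return 'horizontal'
    if a >= 67.5 or a < -67.5:
        return 'vertical'
    return 'right-diagonal' if a > 0 else 'left-diagonal'

print(quantize_direction(5), quantize_direction(88), quantize_direction(-45))
```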
E-2. Texture strength information
Texture strength information appears subjectively as how pronounced the texture in the image is; it can be represented by gradient strength, by energy, or in other ways.
E-3. Texture direction-strength information
Texture direction-strength information divides texture directions into classes as in E-1, with each direction class carrying a corresponding strength value; it is the texture strength information associated with each texture direction.
F. Examples of the input signal of the encoder-side texture analyzer
F-1. Raw image data
Raw image data are data composed of, or constructed from, the original pixel values of the original image. Construction can take many forms, for example interpolation, filtering, or pixel repetition.
F-2. Reference image data
Reference image data are data composed of, or constructed from, the pixel values of the decoded and reconstructed image. Construction can take many forms, for example interpolation, filtering, or pixel repetition.
F-3. Encoding-function-module output data
● Data output by an encoding function module that correspond to the current coding unit.
For example, when the functional module is the intra prediction module, the output data are the intra prediction mode of the current coding unit; Fig. 6 illustrates this example.
● Data output by an encoding function module that correspond to one or more already-encoded coding units.
For example, when the functional module is the intra prediction module, the output data are the intra prediction modes of the coding units above and to the left of the current coding unit. For the encoder-side texture analyzer to analyze the current coding unit's texture features, this information should be buffered for some time before being input to the encoder-side texture analyzer.
Encoding-function-module output data are not limited to the intra prediction mode output by the intra prediction module; they can also be the transform coefficients output by the transform module, the scan order output by the scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that encoding-function-module output data means some or all of a module's output data; the intra prediction mode here, for example, is part of the intra prediction module's output data.
F-4. Input signal comprising several kinds of data
The input signal comprises several of the following kinds of data: raw image data, reference image data, and encoding-function-module output data.
For example, the coding unit's raw image data and the coding unit's inter-frame matching unit are both fed into the encoder-side texture analyzer as the input signal.
Fig. 4 shows an example of a coding unit's raw image data: the coding unit is a coding block, an n × m block P, where P_ji denotes the original pixel value at position (j, i).
A matching unit is the reference image data most similar to the coding unit. When the pixels forming the matching unit and the current coding unit are not in the same frame, it is called an inter-frame matching unit; when they are in the same frame, an intra-frame matching unit. Fig. 5 shows an example of an inter-frame matching unit: R is a block of size n × m, and R_ji denotes the pixel value at position (j, i).
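The text says only that the matching unit is the reference data "most similar" to the coding unit; one common concrete choice, shown here purely as an assumption, is a full search minimizing the sum of absolute differences (SAD):

```python
import numpy as np

def find_inter_matching_unit(P, y0, x0, ref_frame, search=8):
    """Full search around (y0, x0) in a reference frame for the n x m block R
    most similar to coding block P; SAD is the assumed similarity measure."""
    n, m = P.shape
    best_sad, best_R, best_mv = None, None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] \
                    or x + m > ref_frame.shape[1]:
                continue                      # candidate lies outside the frame
            R = ref_frame[y:y + n, x:x + m]
            sad = int(np.abs(P.astype(np.int32) - R).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_R, best_mv = sad, R, (dy, dx)
    return best_R, best_mv                    # matching unit and motion vector
```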
G. Examples of the input signal of the decoder-side texture analyzer
G-1. Reference image data
When the input signal of the decoder-side texture analyzer is reference image data, it is as described in F-2.
G-2. Decoding-function-module output data
● Data output by a decoding function module that correspond to the current decoding unit.
For example: first, after the entropy decoding module parses the bitstream, it obtains the current decoding unit's texture feature information and outputs it to the decoder-side texture analyzer; second, when the functional module is the intra prediction module, the output data are the intra prediction mode of the current decoding unit.
● Data output by a decoding function module that correspond to one or more already-decoded decoding units.
For example, when the functional module is the intra prediction module, the output data are the intra prediction modes of the decoding units above and to the left of the current decoding unit. For the decoder-side texture analyzer to analyze the current decoding unit's texture features, this information should be buffered for some time before being input to the decoder-side texture analyzer.
Decoding-function-module output data are not limited to these examples; they can also be the transform coefficients output by the transform module, the scan order output by the scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that decoding-function-module output data means some or all of a module's output data; the intra prediction mode here, for example, is part of the entropy decoding module's output data.
G-3. Input signal comprising several kinds of data
The input signal comprises several of the following kinds of data: reference image data and decoding-function-module output data.
For example, the decoding unit's inter-frame matching unit and the inter prediction modes of the decoding units above and to the left of the decoding unit are all fed into the decoder-side texture analyzer as the input signal.
H. Examples of the encoder-side texture analyzer extracting coding-unit texture feature information
H-1. Input signal is raw image data
The input signal of the encoder-side texture analyzer is as described in F-1.
Take the coding unit's raw image data as the input signal, with the coding unit being a coding block: Fig. 4 shows an n × m coding block P, where P_ji denotes the raw value of the pixel at position (j, i). The texture analyzer extracts the texture feature information of block P, namely its texture direction information and texture strength information, with the Sobel operator. Fig. 7 shows how the Sobel operator obtains the x- and y-direction gradients; from it the x- and y-direction gradients of P_ji can be computed.
The gradient in the x direction is:
hx(P_ji) = P_(j-1,i-1) + 2·P_(j-1,i) + P_(j-1,i+1) - P_(j+1,i-1) - 2·P_(j+1,i) - P_(j+1,i+1)
The gradient in the y direction is:
hy(P_ji) = P_(j-1,i+1) + 2·P_(j,i+1) + P_(j+1,i+1) - P_(j-1,i-1) - 2·P_(j,i-1) - P_(j+1,i-1)
The gradient direction of P_ji is then:
Dir(P_ji) = arctan(hy(P_ji) / hx(P_ji)), where arctan is the arctangent function;
and the gradient strength of P_ji is:
Mag(P_ji) = sqrt(hx(P_ji)^2 + hy(P_ji)^2), where sqrt is the square-root function and ^2 denotes squaring.
Dir(P_ji) is quantized into classes of the chosen precision, each class corresponding to one kind of texture direction information, for example four quantization classes: horizontal, vertical, left-diagonal, and right-diagonal.
To determine the texture direction information and texture strength information of P, compute the Dir and Mag values of the (n-2) × (m-2) interior pixels P_11 through P_(n-2,m-2) in turn, group these pixels by their quantized Dir class, and sum the Mag values of the pixels in each class; each sum is a texture direction-strength value of P. The dominant direction-strength value is the texture strength information of P, and the texture direction it corresponds to is taken as the texture direction information of P; if no class dominates, P can be considered a flat region.
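Assembled in code, the H-1 procedure might look like the following sketch. It reuses the hypothetical quantize_direction helper from E-1, uses atan2 instead of arctan so that hx = 0 needs no special case, and assumes that "dominant" means holding more than half of the summed Mag values, a threshold the text leaves open:

```python
import math
import numpy as np

def block_texture(P, majority=0.5):
    """Extract (texture direction, texture strength) of block P as in H-1:
    Sobel gradients over the (n-2) x (m-2) interior pixels, with Mag values
    summed per quantized Dir class; ('flat', 0.0) if no class dominates."""
    P = P.astype(np.int64)            # avoid unsigned wraparound in the sums
    n, m = P.shape
    strength = {}
    for j in range(1, n - 1):
        for i in range(1, m - 1):
            hx = (P[j-1, i-1] + 2*P[j-1, i] + P[j-1, i+1]
                  - P[j+1, i-1] - 2*P[j+1, i] - P[j+1, i+1])
            hy = (P[j-1, i+1] + 2*P[j, i+1] + P[j+1, i+1]
                  - P[j-1, i-1] - 2*P[j, i-1] - P[j+1, i-1])
            mag = math.hypot(hx, hy)                      # Mag(P_ji)
            cls = quantize_direction(
                math.degrees(math.atan2(hy, hx)))         # quantized Dir(P_ji)
            strength[cls] = strength.get(cls, 0.0) + mag  # per-class Mag sum
    total = sum(strength.values())
    if total == 0:
        return 'flat', 0.0
    best = max(strength, key=strength.get)
    if strength[best] <= majority * total:
        return 'flat', 0.0            # no dominant class: flat region
    return best, strength[best]
```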
The texture analyzer can also apply other operators or methods to the raw image data to extract texture feature information expressed in the same way as in this example, or differently.
H-2. Input signal is reference image data
The input signal of the encoder-side texture analyzer is as described in F-2.
Take the inter-frame matching unit as the reference image data, with the coding unit being a coding block and the matching unit being an inter-frame matching block, as in Fig. 5. The texture feature information can be extracted by the method exemplified in H-1.
H-3. Input signal is encoding-function-module output data
The input signal of the encoder-side texture analyzer is as described in F-3.
Example one: the input signal is the intra prediction mode of the current coding unit, as in F-3. Intra prediction modes are directional, for example the horizontal prediction mode, the vertical prediction mode, the left-diagonal prediction mode, the right-diagonal prediction mode, and the DC prediction mode. The encoder-side texture analyzer determines the coding block's texture feature information, here its texture direction information, from the prediction mode. If the prediction mode is horizontal, the texture direction is horizontal; if vertical, vertical; if left-diagonal, left-diagonal; if right-diagonal, right-diagonal; if DC, the texture feature information is a flat region with no obvious texture.
Example two: the input signal is the inter prediction mode information, output by the inter prediction module, of the coding units above and to the left of the current coding unit; the inter prediction mode here means the block size used for inter prediction. For instance, when the inter prediction mode of the coding blocks above and to the left is 16 × 8, the current coding block is determined to have horizontal texture; when it is 8 × 16, vertical texture; in all other cases the block is determined to be a flat region.
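Both examples reduce to table lookups; a sketch follows, with the mode labels as illustrative stand-ins rather than identifiers from any standard:

```python
# Example one: intra prediction mode -> texture direction of the coding block.
INTRA_MODE_TO_TEXTURE = {
    'horizontal':     'horizontal',      # horizontal prediction -> horizontal texture
    'vertical':       'vertical',
    'left-diagonal':  'left-diagonal',
    'right-diagonal': 'right-diagonal',
    'DC':             'flat',            # DC prediction -> flat region
}

# Example two: inter prediction modes (block sizes) of the above/left blocks.
def texture_from_inter_modes(above_mode, left_mode):
    if above_mode == left_mode == '16x8':
        return 'horizontal'
    if above_mode == left_mode == '8x16':
        return 'vertical'
    return 'flat'                        # all other cases

print(INTRA_MODE_TO_TEXTURE['DC'], texture_from_inter_modes('16x8', '16x8'))
```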
H-4. Input signal is combined data
The input signal of the encoder-side texture analyzer is as described in F-4.
Here, take the coding unit's raw data as the raw image data and the inter-frame matching unit as the reference image data, both serving as the input signal of the encoder-side texture analyzer, with the coding unit being a coding block. The analyzer first computes the difference signal between them, that is, the difference between the coding block and the inter-frame matching block, and then processes the difference signal by the method exemplified in H-1 to obtain the coding unit's texture feature information.
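In code, H-4 is simply H-1 applied to a difference signal; a minimal sketch, assuming the block_texture helper sketched under H-1:

```python
import numpy as np

def block_texture_from_difference(P, R):
    """H-4: analyze the difference between coding block P and its inter-frame
    matching block R with the same Sobel procedure as in H-1."""
    D = P.astype(np.int32) - R.astype(np.int32)   # differential signal
    return block_texture(D)
```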
I. Examples of the decoder-side texture analyzer extracting texture feature information
I-1. Input signal is reference image data
When the input signal of the decoder-side texture analyzer is reference image data, as described in G-1, the decoding unit's texture feature information can be extracted by the method exemplified in H-2.
I-2. Input signal is decoding-function-module output data
Example one: the bitstream contains the decoding unit's texture feature information. The input signal of the decoder-side texture analyzer is the decoding block's texture feature information, or an encoded form of it, obtained by the entropy decoding module through bitstream parsing; this information is input to the decoder-side texture analyzer, which either uses it directly or decodes it to obtain the decoding block's texture feature information.
Example two: the input signal of the decoder-side texture analyzer is the intra prediction mode of the current decoding block output by the intra prediction module. The decoder-side texture analyzer determines the decoding block's texture feature information by the method exemplified in H-3.
Example three: the input signal is the inter prediction mode information, output by the inter prediction module, of the decoding units above and to the left of the current decoding unit. The decoder-side texture analyzer determines the decoding block's texture feature information by the method exemplified in H-3.
J. Examples of encoding and decoding function modules whose working modes adapt to texture features
J-1. Texture-adaptive interpolation module
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module. The encoder-side texture analyzer extracts the coding block's texture feature information, which controls the texture-adaptive interpolation module so that it selects an adapted interpolation method, that is, a working mode, with which to construct the sub-pel points. In Fig. 8 the module has N texture-direction interpolation classes, each corresponding to one working mode. The texture feature information extracted by the encoder-side texture analyzer is texture direction information; after obtaining the coding block's texture direction information, the module selects a different working mode for each different direction and performs the sub-pel interpolation. Fig. 9 shows the sub-pel points to be interpolated: the capital letters A-P mark whole-pel positions, and the lowercase letters a-o mark the sub-pel positions associated with whole pel A. Sub-pel points divide into half-pel and quarter-pel points: b, h, and j are half-pel points, and the other lowercase letters are quarter-pel points. Listed below are the interpolation working modes corresponding to the coding block's different texture directions. Of course, interpolation filter design is not confined to this method; other designs are possible.
Cross (horizontal) texture:
  1/2-pel b: b' = -I + 5A + 5B - J; b = Clip((b' + 4) >> 3)
  1/2-pel h: h' = A + C; h = Clip((h' + 1) >> 1)
  1/2-pel j: j' = -bb + 5h + 5cc - dd; j = Clip((j' + 4) >> 3)
  1/4-pel e: e' = d + f; e = Clip((e' + 1) >> 1)
  1/4-pel g: g' = f + p; g = Clip((g' + 1) >> 1)
  1/4-pel m: m' = l + n; m = Clip((m' + 1) >> 1)
  1/4-pel o: o' = n + q; o = Clip((o' + 1) >> 1)
Vertical texture:
  1/2-pel b: b' = A + B; b = (b' + 1) >> 1
  1/2-pel h: h' = -F + 5A + 5C - N; h = Clip((h' + 4) >> 3)
  1/2-pel j: j' = -aa + 5b + 5cc - ff; j = Clip((j' + 4) >> 3)
  1/4-pel e: e' = a + i; e = Clip((e' + 1) >> 1)
  1/4-pel g: g' = c + k; g = Clip((g' + 1) >> 1)
  1/4-pel m: m' = i + r; m = Clip((m' + 1) >> 1)
  1/4-pel o: o' = k + s; o = Clip((o' + 1) >> 1)
Left-diagonal texture:
  1/2-pel b: b' = 7A + 7B + C + G; b = Clip((b' + 8) >> 4)
  1/2-pel h: h' = 7A + 7C + B + K; h = Clip((h' + 8) >> 4)
  1/2-pel j: j' = A + 7B + 7C + D; j = Clip((j' + 8) >> 4)
  1/4-pel e: e' = b + h; e = Clip((e' + 1) >> 1)
  1/4-pel g: g' = B + j; g = Clip((g' + 1) >> 1)
  1/4-pel m: m' = j + C; m = Clip((m' + 1) >> 1)
  1/4-pel o: o' = cc + ee; o = Clip((o' + 1) >> 1)
Right-diagonal texture:
  1/2-pel b: b' = 7A + 7B + F + D; b = Clip((b' + 8) >> 4)
  1/2-pel h: h' = 7A + 7C + I + D; h = Clip((h' + 8) >> 4)
  1/2-pel j: j' = 7A + B + C + 7D; j = Clip((j' + 8) >> 4)
  1/4-pel e: e' = A + j; e = Clip((e' + 1) >> 1)
  1/4-pel g: g' = b + cc; g = Clip((g' + 1) >> 1)
  1/4-pel m: m' = h + ee; m = Clip((m' + 1) >> 1)
  1/4-pel o: o' = j + D; o = Clip((o' + 1) >> 1)
In all four directions the remaining quarter-pel points share the same formulas:
  1/4-pel a: a' = A + b; a = Clip((a' + 1) >> 1)
  1/4-pel c: c' = B + b; c = Clip((c' + 1) >> 1)
  1/4-pel d: d' = A + h; d = Clip((d' + 1) >> 1)
  1/4-pel f: f' = b + j; f = Clip((f' + 1) >> 1)
  1/4-pel i: i' = h + j; i = Clip((i' + 1) >> 1)
  1/4-pel k: k' = j + cc; k = Clip((k' + 1) >> 1)
  1/4-pel l: l' = C + h; l = Clip((l' + 1) >> 1)
  1/4-pel n: n' = j + ee; n = Clip((n' + 1) >> 1)
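As a concrete rendering of the table's half-pel rows, the sketch below computes b and h for the cross- and vertical-texture modes. The whole-pel neighbors follow the naming of Fig. 9 and the table (I and J flanking A and B on the same row, F and N above A and below C in the same column), and Clip is assumed to saturate to the 8-bit sample range:

```python
def clip8(x):
    """Saturate a sample value to the assumed 8-bit range [0, 255]."""
    return max(0, min(255, x))

def half_pel_b(I, A, B, J, texture):
    """Half-pel b, midway between whole pels A and B on one row."""
    if texture == 'horizontal':                # cross texture: 4-tap row filter
        return clip8((-I + 5*A + 5*B - J + 4) >> 3)
    if texture == 'vertical':                  # vertical texture: bilinear
        return (A + B + 1) >> 1
    raise NotImplementedError('diagonal modes use the other rows of the table')

def half_pel_h(F, A, C, N, texture):
    """Half-pel h, midway between whole pels A and C in one column."""
    if texture == 'horizontal':                # cross texture: bilinear
        return clip8((A + C + 1) >> 1)
    if texture == 'vertical':                  # vertical texture: 4-tap column filter
        return clip8((-F + 5*A + 5*C - N + 4) >> 3)
    raise NotImplementedError('diagonal modes use the other rows of the table')

print(half_pel_b(100, 120, 130, 110, 'horizontal'))  # -> 130
```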
When the texture-adaptive interpolation module sits in the texture-adaptive video encoding system it is called the texture-adaptive interpolation encoding function module; when it sits in the texture-adaptive video decoding system, the texture-adaptive interpolation decoding function module.
J-2. Texture-adaptive scan module example
Fig. 10 is a schematic diagram of the texture-adaptive scan method. The encoder-side texture analyzer extracts the coding block's texture feature information, and the texture-adaptive scan module selects an adapted scan order according to it, each scan order being one working mode. Scan orders include vertical-priority orders, horizontal-priority orders, and orders with no directional priority. Fig. 11 shows two 8 × 8 vertical-priority scan orders, the one on the right more strongly vertical than the one on the left; Fig. 12 shows two 8 × 8 horizontal-priority scan orders, the one on the right more strongly horizontal than the one on the left; the zigzag order is an example of a scan order with no directional priority. Suppose the texture analyzer outputs texture direction information divided into horizontal texture, vertical texture, and other textures, and texture strength information divided into strong and weak. The following table illustrates controlling the scan module's working mode with both texture direction and texture strength information: for strong horizontal texture, for example, the adaptive scan module adopts the order shown on the right of Fig. 11 as its working mode; for weak textures of other directions, it adopts the zigzag order; and so on.
[Table in the original (image): scan-order choices indexed by texture direction (horizontal, vertical, other) and texture strength (strong, weak).]
The following table illustrates controlling the scan module's working mode with texture direction information alone: for horizontal texture, the adaptive scan module adopts the order shown on the right of Fig. 11 as its working mode; for vertical texture, the order shown on the right of Fig. 12; for other textures, the zigzag order.
[Table in the original (image): scan-order choices indexed by texture direction alone.]
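A sketch of the selection logic the two tables encode; since the table images are not reproduced in this text, the exact cell assignments are inferred from the examples in the surrounding paragraphs:

```python
def select_scan_order(direction, strength=None):
    """Choose a scan order from texture info, as in J-2. When strength is
    given, only strong directional textures get a prioritized order and weak
    ones fall back to zigzag; when it is None, direction alone decides."""
    strong = strength in (None, 'strong')
    if direction == 'horizontal' and strong:
        return 'vertical-priority scan (Fig. 11, right)'
    if direction == 'vertical' and strong:
        return 'horizontal-priority scan (Fig. 12, right)'
    return 'zigzag scan'

print(select_scan_order('horizontal', 'strong'))
print(select_scan_order('vertical'))
print(select_scan_order('other', 'weak'))
```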
When the texture-adaptive scan module sits in the texture-adaptive video encoding system it is called the texture-adaptive scan encoding function module; when it sits in the texture-adaptive video decoding system, the texture-adaptive scan decoding function module.
Of course, there are other examples of encoding function modules and decoding function modules adapting to texture features, such as the transform module and the quantization module; they are not enumerated one by one.
Embodiment 1
Figure 13 is the schematic diagram of embodiment 1, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the coding unit's reference image data, as described in F-2, and the output information is the coding unit's texture direction information, extracted by the method exemplified in H-2. The output texture direction information controls the working mode of the texture-adaptive interpolation module in the video encoder, making it choose, from the texture direction, an adapted interpolation working mode by the method exemplified in J-1. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 2
Figure 14 is the schematic diagram of embodiment 2, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the decoding unit's reference image data, as described in G-1, and the output information is the decoding unit's texture direction information, extracted by the method exemplified in I-1. The output texture direction information controls the working mode of the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation working mode by the method exemplified in J-1. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 3
The texture-adaptive video encoding and decoding system of embodiment 3 comprises the texture-adaptive video encoding system of embodiment 1 and the texture-adaptive video decoding system of embodiment 2.
Embodiment 4
Figure 15 is the schematic diagram of embodiment 4, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the coding unit's reference image data, as described in F-2, and the output information is the coding unit's texture direction information and texture strength information, extracted by the method exemplified in H-2. The texture direction information controls the texture-adaptive interpolation module in the video encoder, making it choose an adapted interpolation working mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module, making it choose an adapted scan working mode by the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 5
Figure 16 is the schematic diagram of embodiment 5, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the reference image data of the decoding unit to be decoded, as described in G-1, and the output information is that unit's texture direction information and texture strength information, extracted by the method exemplified in I-1. The texture direction information controls the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation working mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module, making it choose an adapted inverse-scan working mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 6
The texture-adaptive video encoding and decoding system of embodiment 6 comprises the texture-adaptive video encoding system of embodiment 4 and the texture-adaptive video decoding system of embodiment 5.
Embodiment 7
Figure 17 is the schematic diagram of embodiment 7, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the raw data of the coding unit to be encoded, as described in F-1, and the output information is that unit's texture direction information and texture strength information, extracted by the method exemplified in H-1. The texture direction information controls the texture-adaptive interpolation module in the video encoder, making it choose an adapted interpolation working mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module, making it choose an adapted scan working mode by the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 8
Figure 18 is the schematic diagram of embodiment 8, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the output signal of the entropy decoding module, as described in G-2, and the output information is the texture direction information and texture strength information of the decoding unit to be decoded, extracted by the method exemplified in I-2. The texture direction information controls the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation working mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module, making it choose an adapted inverse-scan working mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 9
The texture-adaptive video encoding and decoding system of embodiment 9 comprises the texture-adaptive video encoding system of embodiment 7 and the texture-adaptive video decoding system of embodiment 8.
Embodiment 10
Figure 19 is the schematic diagram of embodiment 10, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in F-3, and the output information is the coding unit's texture direction information, extracted by the method exemplified in H-3. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video encoder: the interpolation module chooses an adapted interpolation working mode by the method exemplified in J-1, and the scan module chooses an adapted scan working mode by the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 11
Figure 20 is the schematic diagram of embodiment 11, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in G-2, and the output information is the texture direction information of the decoding unit to be decoded, extracted by the method exemplified in I-2. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video decoder: the interpolation module chooses an adapted interpolation working mode by the method exemplified in J-1, and the scan module chooses an adapted inverse-scan working mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 12
The texture-adaptive video encoding and decoding system of embodiment 12 comprises the texture-adaptive video encoding system of embodiment 10 and the texture-adaptive video decoding system of embodiment 11.
The foregoing embodiments are intended to explain the present invention, not to limit it; any modification or variation of the present invention within its spirit and the scope of the claims falls within the protection scope of the present invention.

Claims (9)

1. texture self-adaption video coded system, it is characterized in that: it comprises video encoder and coding side texture analyzer; Video encoder comprises an encoding function module at least, to finish encoding compression;
The coding side texture analyzer carries out texture analysis, to extract coding unit textural characteristics information; At least there is an encoding function module in the video encoder, the coding unit textural characteristics information Control that its mode of operation is extracted by the coding side texture analyzer.
2. texture self-adaption video coded system according to claim 1 is characterized in that: described textural characteristics information comprise following one or more: grain direction information, texture strength information, grain direction strength information etc.
3. texture self-adaption video coded system according to claim 1 is characterized in that: the input signal of described coding side texture analyzer comprise following one or more: raw image data, reference image data, encoding function module dateout.
4. texture self-adaption video coded system according to claim 1, it is characterized in that: the mode of operation of described encoding function module is meant the encoding function module according to coding unit textural characteristics information by the coding unit textural characteristics information Control that the coding side texture analyzer extracts, and determines to adopt the mode of operation that adapts with coding unit textural characteristics information; Different encoding function modules can use coding unit textural characteristics information of the same race or not of the same race to control.
5. texture self-adaption video decode system is characterized in that: it comprises Video Decoder and decoding end grain reason analyzer; Video Decoder comprises a decoding function module at least, to finish decoding and rebuilding; The decoding end texture analyzer carries out texture analysis, to extract decoding unit textural characteristics information; At least there is a decoding function module in the Video Decoder, the decoding unit textural characteristics information Control that its mode of operation is extracted by the decoding end texture analyzer.
6. texture self-adaption video decode system according to claim 5 is characterized in that: described textural characteristics information comprise following one or more: grain direction information, texture strength information, grain direction strength information etc.
7. texture self-adaption video decode system according to claim 5 is characterized in that: the input signal of described decoding end texture analyzer comprise following one or more: reference image data, decoding function module dateout.
8. texture self-adaption video decode system according to claim 5, it is characterized in that: the mode of operation of described decoding function module is meant the decoding function module according to decoding unit textural characteristics information by the decoding unit textural characteristics information Control that the decoding end texture analyzer extracts, and determines to adopt the mode of operation that adapts with decoding unit textural characteristics information; Different decoding function modules can use decoding unit textural characteristics information of the same race or not of the same race to control.
9. A texture self-adaption video encoding and decoding system, characterized in that it comprises the texture self-adaption video coding system according to claim 1 and the texture self-adaption video decoding system according to claim 5.
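The nine claims above define an architecture rather than a specific algorithm: they leave open how the texture analyzer extracts direction and strength, and how a functional module maps those features to an operation mode. Purely as an illustration, and not as the patented implementation, the sketch below assumes a Sobel-gradient analyzer operating on 8x8 coding units and a hypothetical scan-order module whose mode is selected by the extracted features; the function names, block size, and the 4.0 threshold are all assumptions introduced here, not taken from the patent.

```python
# Illustrative sketch only: the claims do not prescribe a texture analysis
# algorithm. Sobel gradients, the 8x8 unit, the threshold, and the mode
# names are assumptions for demonstration.
import numpy as np

def convolve2d_valid(a: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Tiny 'valid'-mode 2-D convolution so the sketch needs only NumPy."""
    kh, kw = k.shape
    kf = k[::-1, ::-1]  # convolution flips the kernel
    out = np.empty((a.shape[0] - kh + 1, a.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * kf)
    return out

def texture_features(unit: np.ndarray) -> tuple[float, float]:
    """Return (strength, direction in radians) for one coding unit,
    playing the role of the encoder-side texture analyzer (claim 1)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    gx = convolve2d_valid(unit, kx)       # horizontal intensity change
    gy = convolve2d_valid(unit, kx.T)     # vertical intensity change
    strength = float(np.mean(np.hypot(gx, gy)))
    direction = float(np.arctan2(gy.sum(), gx.sum()))
    return strength, direction

def select_scan_order(strength: float, direction: float) -> str:
    """A hypothetical encoding function module (claims 1 and 4): its
    operation mode is chosen to match the texture feature information."""
    if strength < 4.0:                    # weak texture: keep default scan
        return "zigzag"
    # strong texture: scan across the dominant orientation
    horiz_grad = abs(np.cos(direction))   # gradient mostly horizontal?
    return "vertical" if horiz_grad > abs(np.sin(direction)) else "horizontal"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unit = np.tile(np.arange(8.0) * 16.0, (8, 1))  # ramp: vertical edges
    unit += rng.normal(0.0, 1.0, unit.shape)
    s, d = texture_features(unit)
    print(f"strength={s:.1f} direction={d:.2f} rad -> scan={select_scan_order(s, d)}")
```

Claims 3 and 7 differ in one telling respect: the decoder-side analyzer's inputs are limited to reference image data and decoding-module output data, i.e. to data the decoder already possesses. Under this reading, if both ends run the same analysis on the same reconstructed data (as the sketch would at either end), the encoder and decoder derive identical texture feature information, so the features themselves need not be carried in the bitstream.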
CN 200710069093 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video Expired - Fee Related CN101325707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710069093 CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200710069093 CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201110388181.6A Division CN102413330B (en) 2007-06-12 2007-06-12 Texture-adaptive video coding/decoding system

Publications (2)

Publication Number Publication Date
CN101325707A (en) 2008-12-17
CN101325707B CN101325707B (en) 2012-04-18

Family

ID=40188991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710069093 Expired - Fee Related CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video

Country Status (1)

Country Link
CN (1) CN101325707B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102640498B (en) * 2009-12-04 2015-04-29 汤姆森特许公司 Method and device for image encoding and decoding by texture-pattern-adaptive partitioned block transform
CN102640498A (en) * 2009-12-04 2012-08-15 汤姆森特许公司 Texture-pattern-adaptive partitioned block transform
WO2011066672A1 (en) * 2009-12-04 2011-06-09 Thomson Licensing Texture-pattern-adaptive partitioned block transform
US8666177B2 (en) 2009-12-04 2014-03-04 Thomson Licensing Texture-pattern-adaptive partitioned block transform
CN102447896B (en) * 2010-09-30 2013-10-09 华为技术有限公司 Method, device and system for processing image residual block
CN102447896A (en) * 2010-09-30 2012-05-09 华为技术有限公司 Method, device and system for processing image residual block
CN102595113A (en) * 2011-01-13 2012-07-18 华为技术有限公司 Method, device and system for scanning conversion coefficient block
WO2012094909A1 (en) * 2011-01-13 2012-07-19 华为技术有限公司 Scanning method, device and system for transformation coefficient block
CN102595113B (en) * 2011-01-13 2014-06-04 华为技术有限公司 Method, device and system for scanning conversion coefficient block
CN102651816A (en) * 2011-02-23 2012-08-29 华为技术有限公司 Method and device for scanning transformation coefficient block
CN102651816B (en) * 2011-02-23 2014-09-17 华为技术有限公司 Method and device for scanning transformation coefficient block
CN102186070A (en) * 2011-04-20 2011-09-14 北京工业大学 Method for realizing rapid video coding by adopting hierarchical structure prediction
CN102857749B (en) * 2011-07-01 2016-04-13 华为技术有限公司 Pixel classification method and apparatus for video images
WO2013004161A1 (en) * 2011-07-01 2013-01-10 华为技术有限公司 Method and device for classifying pixels of a video image
CN102857751A (en) * 2011-07-01 2013-01-02 华为技术有限公司 Video encoding and decoding methods and device
CN102857749A (en) * 2011-07-01 2013-01-02 华为技术有限公司 Pixel classification method and device for video image
US9432700B2 (en) 2011-09-27 2016-08-30 Broadcom Corporation Adaptive loop filtering in accordance with video coding
CN104349171B (en) * 2013-07-31 2018-03-13 上海通途半导体科技有限公司 Visually lossless image compression encoding/decoding device and encoding/decoding method
CN104349171A (en) * 2013-07-31 2015-02-11 上海通途半导体科技有限公司 Image compression encoding and decoding devices without visual loss, and encoding and decoding methods
CN103517069A (en) * 2013-09-25 2014-01-15 北京航空航天大学 HEVC intra-frame prediction quick mode selection method based on texture analysis
CN103517069B (en) * 2013-09-25 2016-10-26 北京航空航天大学 HEVC intra-frame prediction fast mode selection method based on texture analysis
CN104933736A (en) * 2014-03-20 2015-09-23 华为技术有限公司 Visual entropy acquisition method and device
CN104933736B (en) * 2014-03-20 2018-01-19 华为技术有限公司 Visual entropy acquisition method and device
CN110637460A (en) * 2017-07-11 2019-12-31 索尼公司 Visual quality preserving quantitative parameter prediction using deep neural networks
CN110637460B (en) * 2017-07-11 2021-09-28 索尼公司 Visual quality preserving quantitative parameter prediction using deep neural networks
CN116708789A (en) * 2023-08-04 2023-09-05 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence
CN116708789B (en) * 2023-08-04 2023-10-13 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence

Also Published As

Publication number Publication date
CN101325707B (en) 2012-04-18

Similar Documents

Publication Publication Date Title
CN101325707B (en) System for encoding and decoding texture self-adaption video
US11949881B2 (en) Apparatus for encoding and decoding image using adaptive DCT coefficient scanning based on pixel similarity and method therefor
CN100586187C (en) Method and apparatus for image intra-prediction encoding/decoding
US9621895B2 (en) Encoding/decoding method and device for high-resolution moving images
CN104602011B (en) Picture decoding apparatus
US8224100B2 (en) Method and device for intra prediction coding and decoding of image
EP3107292B1 (en) Video encoding method and apparatus, and video decoding apparatus
JPWO2012042720A1 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
WO2008020672A1 (en) Apparatus for encoding and decoding image using adaptive dct coefficient scanning based on pixel similarity and method therefor
KR101433169B1 (en) Mode prediction based on the direction of intra predictive mode, and method and apparatus for applying quantization matrix and scanning using the mode prediction
CN102413330B (en) Texture-adaptive video coding/decoding system
WO2011033853A1 (en) Moving image decoding method and moving image encoding method
CN106534851B (en) Priority-based spatial domain scanning method and device in video compression

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120418